public abstract class RenameTracker extends AbstractStoreOperation
Subclasses must provide an implementation and return it in MetadataStore.initiateRenameOperation().

The operationState field/constructor argument is an opaque state to be passed down to the metastore in its move operations; this allows the stores to manage ongoing state while still being able to share rename tracker implementations. This avoids performance problems wherein the progressive rename tracker causes the store to repeatedly create and write duplicate ancestor entries for every file added.
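For orientation, the sketch below shows roughly what a concrete tracker looks like: subclass RenameTracker, forward the documented constructor arguments, and implement the abstract fileCopied(...) callback. Only the constructor and fileCopied signatures are taken from this page; the class name, its logger, its no-op body, and the package/import layout are assumptions for illustration.

```java
// Sketch only. Assumes the class sits alongside RenameTracker in the
// org.apache.hadoop.fs.s3a.s3guard package; adjust the package and
// imports to match your Hadoop source tree.
package org.apache.hadoop.fs.s3a.s3guard;

import java.io.IOException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.S3ObjectAttributes;
import org.apache.hadoop.fs.s3a.impl.StoreContext;

/**
 * Illustrative tracker which only logs copied files; a real tracker would
 * update the metadata store in fileCopied() and would be handed out by the
 * store's MetadataStore.initiateRenameOperation().
 */
public class LoggingRenameTracker extends RenameTracker {

  private static final Logger LOG =
      LoggerFactory.getLogger(LoggingRenameTracker.class);

  protected LoggingRenameTracker(
      final StoreContext storeContext,
      final MetadataStore metadataStore,
      final Path sourceRoot,
      final Path dest,
      final BulkOperationState operationState) {
    // Pass the tracker name, store context, metastore, source/dest paths
    // and the opaque move state straight to the documented constructor.
    super("LoggingRenameTracker", storeContext, metadataStore,
        sourceRoot, dest, operationState);
  }

  @Override
  public void fileCopied(
      final Path childSource,
      final S3ObjectAttributes sourceAttributes,
      final S3ObjectAttributes destAttributes,
      final Path destPath,
      final long blockSize,
      final boolean addAncestors) throws IOException {
    // A real implementation would record the destination entry (and, when
    // addAncestors is true, its ancestor directories) in the metastore.
    LOG.debug("Copied {} to {}", childSource, destPath);
  }
}
```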
Modifier and Type | Field and Description
---|---
static org.slf4j.Logger | LOG
Modifier | Constructor and Description
---|---
protected | RenameTracker(String name, StoreContext storeContext, MetadataStore metadataStore, org.apache.hadoop.fs.Path sourceRoot, org.apache.hadoop.fs.Path dest, BulkOperationState operationState) Constructor.
Modifier and Type | Method and Description
---|---
void | completeRename() Complete the operation.
protected IOException | convertToIOException(Exception ex) Convert a passed in exception (expected to be an IOE or AWS exception) into an IOException.
IOException | deleteFailed(Exception e, List<org.apache.hadoop.fs.Path> pathsToDelete, List<org.apache.hadoop.fs.Path> undeletedObjects) The delete failed.
void | directoryMarkerCopied(org.apache.hadoop.fs.Path sourcePath, org.apache.hadoop.fs.Path destPath, boolean addAncestors) A directory marker has been copied.
abstract void | fileCopied(org.apache.hadoop.fs.Path childSource, S3ObjectAttributes sourceAttributes, S3ObjectAttributes destAttributes, org.apache.hadoop.fs.Path destPath, long blockSize, boolean addAncestors) A file has been copied.
org.apache.hadoop.fs.Path | getDest()
protected MetadataStore | getMetadataStore() Get the metadata store.
BulkOperationState | getOperationState()
String | getOwner()
org.apache.hadoop.fs.Path | getSourceRoot()
void | moveSourceDirectory() Top level directory move.
protected void | noteRenameFinished() Note that the rename has finished by closing the duration info; this will log the duration of the operation at debug.
IOException | renameFailed(Exception ex) Rename has failed.
void | sourceObjectsDeleted(Collection<org.apache.hadoop.fs.Path> paths) Note that source objects have been deleted.
String | toString()
Methods inherited from class AbstractStoreOperation: getStoreContext
protected RenameTracker(String name, StoreContext storeContext, MetadataStore metadataStore, org.apache.hadoop.fs.Path sourceRoot, org.apache.hadoop.fs.Path dest, BulkOperationState operationState)

Parameters:
name - tracker name for logs.
storeContext - store context.
metadataStore - the store.
sourceRoot - source path.
dest - destination path.
operationState - ongoing move state.

public org.apache.hadoop.fs.Path getSourceRoot()
public org.apache.hadoop.fs.Path getDest()
public String getOwner()
public BulkOperationState getOperationState()
protected MetadataStore getMetadataStore()
public abstract void fileCopied(org.apache.hadoop.fs.Path childSource, S3ObjectAttributes sourceAttributes, S3ObjectAttributes destAttributes, org.apache.hadoop.fs.Path destPath, long blockSize, boolean addAncestors) throws IOException
Parameters:
childSource - source of the file. This may actually be different from the path of the sourceAttributes. (HOW?)
sourceAttributes - status of source.
destAttributes - destination attributes.
destPath - destination path.
blockSize - block size.
addAncestors - should ancestors be added?
Throws:
IOException - failure.

public void directoryMarkerCopied(org.apache.hadoop.fs.Path sourcePath, org.apache.hadoop.fs.Path destPath, boolean addAncestors) throws IOException
Parameters:
sourcePath - source path.
destPath - destination path.
addAncestors - should ancestors be added?
Throws:
IOException - failure.

public IOException deleteFailed(Exception e, List<org.apache.hadoop.fs.Path> pathsToDelete, List<org.apache.hadoop.fs.Path> undeletedObjects)
By the time this is called, the metastore will already have been updated with the results of any partial delete failure, such that all files known to have been deleted will have been removed.
Parameters:
e - exception
pathsToDelete - paths which were to be deleted.
undeletedObjects - list of objects which were not deleted.
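As a hedged illustration of that contract, an override might simply note what was left behind and hand back a converted exception; the logging and the enclosing subclass are assumptions, while convertToIOException(...) and getSourceRoot() are the helpers documented on this page.

```java
// Sketch only: fragment of a RenameTracker subclass such as the one above;
// assumes java.util.List and org.apache.hadoop.fs.Path are imported.
@Override
public IOException deleteFailed(
    final Exception e,
    final List<Path> pathsToDelete,
    final List<Path> undeletedObjects) {
  // The metastore has already been updated for the paths known to be
  // deleted, so only the surviving source objects need attention here.
  LOG.debug("{} of {} source objects under {} were not deleted",
      undeletedObjects.size(), pathsToDelete.size(), getSourceRoot());
  // Give the caller an IOException to rethrow, translating AWS exceptions.
  return convertToIOException(e);
}
```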
public void moveSourceDirectory() throws IOException

Throws:
IOException - on failure

public void sourceObjectsDeleted(Collection<org.apache.hadoop.fs.Path> paths) throws IOException
Parameters:
paths - paths of objects deleted.
Throws:
IOException

public void completeRename() throws IOException

Throws:
IOException - failure.

protected void noteRenameFinished()
public IOException renameFailed(Exception ex)

Rename has failed. The metastore now needs to be updated with its current state even though the operation is incomplete. Implementations MUST NOT throw exceptions here, as this is going to be invoked in an exception handler: catch and log, or catch and return/wrap. A sketch of this pattern follows below.

The base implementation returns the IOE passed in and translates any AWS exception into an IOE.

Parameters:
ex - the exception which caused the failure. This is either an IOException or an AWS exception.
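A hedged sketch of the catch-and-log / return-and-wrap pattern described above; cleanupPendingEntries() is a hypothetical helper standing in for whatever per-tracker state needs tidying, and is not part of the RenameTracker API.

```java
// Sketch only: fragment of a RenameTracker subclass.
@Override
public IOException renameFailed(final Exception ex) {
  try {
    // Hypothetical cleanup of per-tracker state.
    cleanupPendingEntries();
  } catch (Exception cleanupFailure) {
    // Catch and log: this runs inside an exception handler, so nothing
    // may be thrown from here.
    LOG.debug("Cleanup after failed rename also failed", cleanupFailure);
  }
  // Catch and return/wrap: the base implementation returns IOEs unchanged
  // and translates AWS exceptions into IOExceptions.
  return super.renameFailed(ex);
}
```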
protected IOException convertToIOException(Exception ex)

Convert a passed in exception (expected to be an IOE or AWS exception) into an IOException.

Parameters:
ex - exception caught

Copyright © 2008–2021 Apache Software Foundation. All rights reserved.