public class RenameOperation extends ExecutingStoreOperation<Long>
The parallel execution is in groups of size InternalConstants.RENAME_PARALLEL_LIMIT; it is only after one group completes that the next group is initiated. Once enough files have been copied that they meet the InternalConstants.MAX_ENTRIES_TO_DELETE threshold, a delete is initiated.
If it succeeds, the rename continues with the next group of files.
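The grouped copy-then-delete flow above can be sketched with a small, self-contained model (not the actual Hadoop code; the class name, the constant values, and the simulated copy are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified model of the rename loop: copies run in parallel groups, and a
// bulk delete of the copied source keys fires once enough have accumulated.
public class GroupedRenameModel {
    static final int RENAME_PARALLEL_LIMIT = 10;   // illustrative group size
    static final int MAX_ENTRIES_TO_DELETE = 250;  // illustrative delete threshold

    /** Returns the number of bulk deletes issued for the given source keys. */
    public static int rename(List<String> sourceKeys) {
        ExecutorService pool = Executors.newFixedThreadPool(RENAME_PARALLEL_LIMIT);
        List<CompletableFuture<String>> group = new ArrayList<>();
        List<String> pendingDeletes = new ArrayList<>();
        int bulkDeletes = 0;
        try {
            for (String key : sourceKeys) {
                group.add(CompletableFuture.supplyAsync(() -> key, pool)); // "copy"
                if (group.size() == RENAME_PARALLEL_LIMIT) {
                    // only after the whole group completes is the next initiated
                    for (CompletableFuture<String> f : group) {
                        pendingDeletes.add(f.join());
                    }
                    group.clear();
                    if (pendingDeletes.size() >= MAX_ENTRIES_TO_DELETE) {
                        pendingDeletes.clear();   // issue one bulk delete
                        bulkDeletes++;
                    }
                }
            }
            for (CompletableFuture<String> f : group) {  // drain the last group
                pendingDeletes.add(f.join());
            }
            if (!pendingDeletes.isEmpty()) {
                pendingDeletes.clear();           // final bulk delete
                bulkDeletes++;
            }
        } finally {
            pool.shutdown();
        }
        return bulkDeletes;
    }
}
```

With 500 keys, groups of 10 accumulate pending deletes until the 250-entry threshold is crossed twice, so two bulk deletes are issued; any remainder is flushed at the end.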
Directory markers which have child entries are never copied; only those which represent empty directories are copied in the rename. The DirMarkerTracker tracks which markers must be copied, and which can simply be deleted from the source.
As a result, rename always purges all non-leaf directory markers from the copied tree. This ensures that even if a directory tree is copied from an authoritative path to a non-authoritative one, there is never any contamination of the non-auth path with markers.
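The marker selection rule can be illustrated with a self-contained sketch (this models the behaviour described above, not the DirMarkerTracker API; the class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// A directory marker (key ending in "/") is copied only when nothing lies
// beneath it; markers with child entries are simply deleted at the source.
public class MarkerSelection {
    public static List<String> markersToCopy(List<String> allKeys) {
        List<String> toCopy = new ArrayList<>();
        for (String key : allKeys) {
            if (!key.endsWith("/")) {
                continue;                       // a file, not a marker
            }
            boolean hasChild = false;
            for (String other : allKeys) {
                if (!other.equals(key) && other.startsWith(key)) {
                    hasChild = true;            // non-leaf marker: delete only
                    break;
                }
            }
            if (!hasChild) {
                toCopy.add(key);                // empty directory: copy it
            }
        }
        return toCopy;
    }
}
```

Here `a/` and `a/b/` have children, so only the empty marker `x/` would be copied from the listing `a/`, `a/b/`, `a/b/file1`, `x/`.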
The rename operation implements the classic HDFS rename policy: rename(file, dir) renames the file under the directory.

There is no validation of input and output paths. Callers are required to verify themselves that the destination is not under the source, above the source, the source itself, and so on.
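Since the operation itself performs no path validation, a caller-side guard might look like the following (a hypothetical helper, string-based for brevity; real callers work with Path objects):

```java
// Hypothetical pre-flight check: reject a rename whose destination is the
// source itself, lies under the source, or lies above it.
public class RenameGuard {
    public static boolean isValidRename(String sourcePath, String destPath) {
        // normalise with a trailing slash so prefix tests are component-aligned
        String src = sourcePath.endsWith("/") ? sourcePath : sourcePath + "/";
        String dst = destPath.endsWith("/") ? destPath : destPath + "/";
        if (src.equals(dst)) {
            return false;        // destination is the source itself
        }
        if (dst.startsWith(src)) {
            return false;        // destination under the source
        }
        if (src.startsWith(dst)) {
            return false;        // destination above the source
        }
        return true;
    }
}
```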
Constructor and Description |
---|
RenameOperation(StoreContext storeContext, org.apache.hadoop.fs.Path sourcePath, String sourceKey, S3AFileStatus sourceStatus, org.apache.hadoop.fs.Path destPath, String destKey, S3AFileStatus destStatus, OperationCallbacks callbacks, int pageSize, boolean dirOperationsPurgeUploads) Initiate the rename. |
Modifier and Type | Method and Description |
---|---|
protected IOException | convertToIOException(Exception ex) Convert a passed-in exception (expected to be an IOE or AWS exception) into an IOException. |
Long | execute() Execute the operation. |
Optional<Long> | getUploadsAborted() Get the count of uploads aborted. |
protected CompletableFuture<org.apache.hadoop.fs.Path> | initiateCopy(S3ALocatedFileStatus source, String key, String newDestKey, org.apache.hadoop.fs.Path childDestPath) Initiate a copy operation in the executor. |
protected void | recursiveDirectoryRename() Execute a full recursive rename. |
protected org.apache.hadoop.fs.Path | renameFileToDest() The source is a file: rename it to the destination, which will be under the current destination path if that is a directory. |
Methods inherited from class ExecutingStoreOperation: apply, executeOnlyOnce
Methods inherited from the superclass: activateAuditSpan, getAuditSpan, getStoreContext
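The executeOnlyOnce() re-entrancy guard inherited above can be modeled with an atomic flag (a sketch of the contract, not the Hadoop implementation; the class name and return value are illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Models the once-only contract: a second call to execute() fails the
// atomic re-entrancy check instead of running the operation again.
public class OnceOperation {
    private final AtomicBoolean executed = new AtomicBoolean(false);

    public long execute() {
        // compareAndSet flips false -> true exactly once, atomically
        if (!executed.compareAndSet(false, true)) {
            throw new IllegalStateException("operation already executed");
        }
        return 1L;  // stand-in for the operation's real result
    }
}
```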
public RenameOperation(StoreContext storeContext, org.apache.hadoop.fs.Path sourcePath, String sourceKey, S3AFileStatus sourceStatus, org.apache.hadoop.fs.Path destPath, String destKey, S3AFileStatus destStatus, OperationCallbacks callbacks, int pageSize, boolean dirOperationsPurgeUploads)

Initiate the rename.

Parameters:
storeContext - store context
sourcePath - source path
sourceKey - key of source
sourceStatus - pre-fetched source status
destPath - destination path
destKey - destination key
destStatus - destination status
callbacks - callback provider
pageSize - size of delete requests
dirOperationsPurgeUploads - do directory operations purge pending uploads?

public Optional<Long> getUploadsAborted()

Get the count of uploads aborted.
@Retries.RetryMixed
public Long execute() throws IOException

Execute the operation. Subclasses MUST call ExecutingStoreOperation.executeOnlyOnce() so as to force the (atomic) re-entrancy check.

Specified by:
execute in class ExecutingStoreOperation<Long>

Throws:
IOException - IO problem

protected org.apache.hadoop.fs.Path renameFileToDest() throws IOException

The source is a file: rename it to the destination, which will be under the current destination path if that is a directory.
Throws:
IOException - failure

protected void recursiveDirectoryRename() throws IOException

Execute a full recursive rename.
Throws:
IOException - failure

protected CompletableFuture<org.apache.hadoop.fs.Path> initiateCopy(S3ALocatedFileStatus source, String key, String newDestKey, org.apache.hadoop.fs.Path childDestPath)

Initiate a copy operation in the executor.
Parameters:
source - status of the source object
key - source key
newDestKey - destination key
childDestPath - destination path

protected IOException convertToIOException(Exception ex)

Convert a passed-in exception (expected to be an IOE or AWS exception) into an IOException.

Parameters:
ex - exception caught

Copyright © 2008–2024 Apache Software Foundation. All rights reserved.