public class DataLakeFileClient extends DataLakePathClient
This client is instantiated through DataLakePathClientBuilder or retrieved via getFileClient.
Please refer to the Azure Docs for more information.
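A minimal construction sketch (the endpoint, credential, file system, and path values below are illustrative placeholders, not values defined on this page):

// Hypothetical account details for illustration only.
DataLakeFileClient fileClient = new DataLakePathClientBuilder()
    .endpoint("https://<account>.dfs.core.windows.net")
    .credential(new StorageSharedKeyCredential("<account>", "<account-key>"))
    .fileSystemName("myfilesystem")
    .pathName("mydir/hello.txt")
    .buildFileClient();

// Or, assuming an existing DataLakeFileSystemClient named fileSystemClient:
DataLakeFileClient fromFileSystem = fileSystemClient.getFileClient("mydir/hello.txt");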
Modifier and Type | Method and Description |
---|---|
void | append(InputStream data, long fileOffset, long length): Appends data to the specified resource to later be flushed (written) by a call to flush. |
com.azure.core.http.rest.Response<Void> | appendWithResponse(InputStream data, long fileOffset, long length, byte[] contentMd5, String leaseId, Duration timeout, com.azure.core.util.Context context): Appends data to the specified resource to later be flushed (written) by a call to flush. |
void | delete(): Deletes a file. |
com.azure.core.http.rest.Response<Void> | deleteWithResponse(DataLakeRequestConditions requestConditions, Duration timeout, com.azure.core.util.Context context): Deletes a file. |
PathInfo | flush(long position): Flushes (writes) data previously appended to the file through a call to append. |
PathInfo | flush(long position, boolean overwrite): Flushes (writes) data previously appended to the file through a call to append. |
com.azure.core.http.rest.Response<PathInfo> | flushWithResponse(long position, boolean retainUncommittedData, boolean close, PathHttpHeaders httpHeaders, DataLakeRequestConditions requestConditions, Duration timeout, com.azure.core.util.Context context): Flushes (writes) data previously appended to the file through a call to append. |
String | getFileName(): Gets the name of this file, not including its full path. |
String | getFilePath(): Gets the path of this file, not including the name of the resource itself. |
String | getFileUrl(): Gets the URL of the file represented by this client on the Data Lake service. |
InputStream | openQueryInputStream(String expression): Opens an input stream to query the file. |
com.azure.core.http.rest.Response<InputStream> | openQueryInputStreamWithResponse(FileQueryOptions queryOptions): Opens an input stream to query the file. |
void | query(OutputStream stream, String expression): Queries an entire file into an output stream. |
FileQueryResponse | queryWithResponse(FileQueryOptions queryOptions, Duration timeout, com.azure.core.util.Context context): Queries an entire file into an output stream. |
void | read(OutputStream stream): Reads the entire file into an output stream. |
PathProperties | readToFile(String filePath): Reads the entire file into a file specified by the path. |
PathProperties | readToFile(String filePath, boolean overwrite): Reads the entire file into a file specified by the path. |
com.azure.core.http.rest.Response<PathProperties> | readToFileWithResponse(String filePath, FileRange range, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, DownloadRetryOptions downloadRetryOptions, DataLakeRequestConditions requestConditions, boolean rangeGetContentMd5, Set<OpenOption> openOptions, Duration timeout, com.azure.core.util.Context context): Reads the entire file into a file specified by the path. |
FileReadResponse | readWithResponse(OutputStream stream, FileRange range, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean getRangeContentMd5, Duration timeout, com.azure.core.util.Context context): Reads a range of bytes from a file into an output stream. |
DataLakeFileClient | rename(String destinationFileSystem, String destinationPath): Moves the file to another location within the file system. |
com.azure.core.http.rest.Response<DataLakeFileClient> | renameWithResponse(String destinationFileSystem, String destinationPath, DataLakeRequestConditions sourceRequestConditions, DataLakeRequestConditions destinationRequestConditions, Duration timeout, com.azure.core.util.Context context): Moves the file to another location within the file system. |
void | scheduleDeletion(FileScheduleDeletionOptions options): Schedules the file for deletion. |
com.azure.core.http.rest.Response<Void> | scheduleDeletionWithResponse(FileScheduleDeletionOptions options, Duration timeout, com.azure.core.util.Context context): Schedules the file for deletion. |
PathInfo | upload(InputStream data, long length): Creates a new file. |
PathInfo | upload(InputStream data, long length, boolean overwrite): Creates a new file, or updates the content of an existing file. |
void | uploadFromFile(String filePath): Creates a file, with the content of the specified file. |
void | uploadFromFile(String filePath, boolean overwrite): Creates a file, with the content of the specified file. |
void | uploadFromFile(String filePath, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String,String> metadata, DataLakeRequestConditions requestConditions, Duration timeout): Creates a file, with the content of the specified file. |
com.azure.core.http.rest.Response<PathInfo> | uploadWithResponse(FileParallelUploadOptions options, Duration timeout, com.azure.core.util.Context context): Creates a new file. |
Methods inherited from class com.azure.storage.file.datalake.DataLakePathClient: create, create, createWithResponse, exists, existsWithResponse, generateSas, generateSas, generateUserDelegationSas, generateUserDelegationSas, getAccessControl, getAccessControlWithResponse, getAccountName, getFileSystemName, getHttpPipeline, getProperties, getPropertiesWithResponse, getServiceVersion, removeAccessControlRecursive, removeAccessControlRecursiveWithResponse, setAccessControlList, setAccessControlListWithResponse, setAccessControlRecursive, setAccessControlRecursiveWithResponse, setHttpHeaders, setHttpHeadersWithResponse, setMetadata, setMetadataWithResponse, setPermissions, setPermissionsWithResponse, updateAccessControlRecursive, updateAccessControlRecursiveWithResponse
public String getFileUrl()
public String getFilePath()
public String getFileName()
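These accessors are useful for logging which resource a client targets; a small sketch assuming client is an initialized DataLakeFileClient:

System.out.printf("URL: %s, path: %s, name: %s%n",
    client.getFileUrl(), client.getFilePath(), client.getFileName());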
public void delete()
Code Samples
client.delete();
System.out.println("Delete request completed");
For more information see the Azure Docs
public com.azure.core.http.rest.Response<Void> deleteWithResponse(DataLakeRequestConditions requestConditions, Duration timeout, com.azure.core.util.Context context)
Code Samples
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId);

client.deleteWithResponse(requestConditions, timeout, new Context(key1, value1));
System.out.println("Delete request completed");
For more information see the Azure Docs
requestConditions - DataLakeRequestConditions
timeout - An optional timeout value beyond which a RuntimeException will be raised.
context - Additional context that is passed through the Http pipeline during the service call.
public PathInfo upload(InputStream data, long length)
Code Samples
try {
    client.upload(data, length);
    System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
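Because upload may retry, the stream must support mark/reset; the sketch below wraps a plain FileInputStream in a BufferedInputStream for that purpose (the local path is a placeholder):

File localFile = new File("local/path/data.bin"); // hypothetical local file
try (InputStream markableData = new BufferedInputStream(new FileInputStream(localFile))) {
    // BufferedInputStream adds mark/reset support so the client can rewind on retries.
    client.upload(markableData, localFile.length());
    System.out.println("Upload succeeded");
} catch (IOException | UncheckedIOException ex) {
    System.err.printf("Upload failed: %s%n", ex.getMessage());
}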
data - The data to write to the file. The data must be markable in order to support retries. If the data is not markable, consider wrapping your data source in a BufferedInputStream to add mark support.
length - The exact length of the data. It is important that this value match precisely the length of the data provided in the InputStream.
public PathInfo upload(InputStream data, long length, boolean overwrite)
Code Samples
try {
    boolean overwrite = false;
    client.upload(data, length, overwrite);
    System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
data - The data to write to the file. The data must be markable in order to support retries. If the data is not markable, consider wrapping your data source in a BufferedInputStream to add mark support.
length - The exact length of the data. It is important that this value match precisely the length of the data provided in the InputStream.
overwrite - Whether or not to overwrite, should data exist on the file.
public com.azure.core.http.rest.Response<PathInfo> uploadWithResponse(FileParallelUploadOptions options, Duration timeout, com.azure.core.util.Context context)
To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
    .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
    .setContentLanguage("en-US")
    .setContentType("binary");

Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId)
    .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);

try {
    client.uploadWithResponse(new FileParallelUploadOptions(data, length)
        .setParallelTransferOptions(parallelTransferOptions).setHeaders(headers)
        .setMetadata(metadata).setRequestConditions(requestConditions)
        .setPermissions("permissions").setUmask("umask"), timeout, new Context("key", "value"));
    System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
options - FileParallelUploadOptions
timeout - An optional timeout value beyond which a RuntimeException will be raised.
context - Additional context that is passed through the Http pipeline during the service call.
public void uploadFromFile(String filePath)
Code Samples
try {
    client.uploadFromFile(filePath);
    System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
filePath - Path of the file to upload
UncheckedIOException - If an I/O error occurs
public void uploadFromFile(String filePath, boolean overwrite)
Code Samples
try {
    boolean overwrite = false;
    client.uploadFromFile(filePath, overwrite);
    System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
filePath - Path of the file to upload
overwrite - Whether or not to overwrite, should the file already exist
UncheckedIOException - If an I/O error occurs
public void uploadFromFile(String filePath, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String,String> metadata, DataLakeRequestConditions requestConditions, Duration timeout)
To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
    .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
    .setContentLanguage("en-US")
    .setContentType("binary");

Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId)
    .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);

try {
    client.uploadFromFile(filePath, parallelTransferOptions, headers, metadata, requestConditions, timeout);
    System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
filePath - Path of the file to upload
parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
headers - PathHttpHeaders
metadata - Metadata to associate with the resource. If there is leading or trailing whitespace in any metadata key or value, it must be removed or encoded.
requestConditions - DataLakeRequestConditions
timeout - An optional timeout value beyond which a RuntimeException will be raised.
UncheckedIOException - If an I/O error occurs
public void append(InputStream data, long fileOffset, long length)
Code Samples
client.append(data, offset, length);
System.out.println("Append data completed");
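Appended data is not persisted until it is flushed; a minimal end-to-end sketch (client is an existing file client, and the payload below is illustrative):

byte[] contents = "Hello, Data Lake".getBytes(StandardCharsets.UTF_8); // example payload
// Stage the bytes at offset 0, then commit them with a single flush at the final length.
client.append(new ByteArrayInputStream(contents), 0, contents.length);
client.flush(contents.length, true /* overwrite */);
System.out.println("Append and flush completed");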
For more information, see the Azure Docs
data - The data to write to the file.
fileOffset - The position where the data is to be appended.
length - The exact length of the data.
public com.azure.core.http.rest.Response<Void> appendWithResponse(InputStream data, long fileOffset, long length, byte[] contentMd5, String leaseId, Duration timeout, com.azure.core.util.Context context)
Code Samples
byte[] contentMd5 = new byte[0]; // Replace with valid md5

Response<Void> response = client.appendWithResponse(data, offset, length, contentMd5, leaseId,
    timeout, new Context(key1, value1));
System.out.printf("Append data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs
data - The data to write to the file.
fileOffset - The position where the data is to be appended.
length - The exact length of the data.
contentMd5 - An MD5 hash of the content of the data. If specified, the service will calculate the MD5 of the received data and fail the request if it does not match the provided MD5.
leaseId - By setting lease id, requests will fail if the provided lease does not match the active lease on the file.
timeout - An optional timeout value beyond which a RuntimeException will be raised.
context - Additional context that is passed through the Http pipeline during the service call.
public PathInfo flush(long position)
By default this method will not overwrite existing data.
Code Samples
client.flush(position);
System.out.println("Flush data completed");
For more information, see the Azure Docs
position - The length of the file after all data has been written.
public PathInfo flush(long position, boolean overwrite)
Code Samples
boolean overwrite = true;
client.flush(position, overwrite);
System.out.println("Flush data completed");
For more information, see the Azure Docs
position - The length of the file after all data has been written.
overwrite - Whether or not to overwrite, should data exist on the file.
public com.azure.core.http.rest.Response<PathInfo> flushWithResponse(long position, boolean retainUncommittedData, boolean close, PathHttpHeaders httpHeaders, DataLakeRequestConditions requestConditions, Duration timeout, com.azure.core.util.Context context)
Code Samples
boolean retainUncommittedData = false;
boolean close = false;
PathHttpHeaders httpHeaders = new PathHttpHeaders()
    .setContentLanguage("en-US")
    .setContentType("binary");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId);

Response<PathInfo> response = client.flushWithResponse(position, retainUncommittedData, close, httpHeaders,
    requestConditions, timeout, new Context(key1, value1));
System.out.printf("Flush data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs
position - The length of the file after all data has been written.
retainUncommittedData - Whether or not uncommitted data is to be retained after the operation.
close - Whether or not a file changed event raised indicates completion (true) or modification (false).
httpHeaders - PathHttpHeaders
requestConditions - DataLakeRequestConditions
timeout - An optional timeout value beyond which a RuntimeException will be raised.
context - Additional context that is passed through the Http pipeline during the service call.
public void read(OutputStream stream)
Code Samples
client.read(new ByteArrayOutputStream());
System.out.println("Download completed.");
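Any OutputStream works as the destination; a sketch that streams the file to a local path (the destination path is a placeholder):

try (OutputStream localStream = new FileOutputStream("local/path/output.bin")) {
    // Stream the file's contents directly to a local destination.
    client.read(localStream);
    System.out.println("Download completed.");
} catch (IOException ex) {
    System.err.printf("Download failed: %s%n", ex.getMessage());
}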
For more information, see the Azure Docs
stream - A non-null OutputStream instance where the downloaded data will be written.
UncheckedIOException - If an I/O error occurs.
NullPointerException - if stream is null
public FileReadResponse readWithResponse(OutputStream stream, FileRange range, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean getRangeContentMd5, Duration timeout, com.azure.core.util.Context context)
Code Samples
FileRange range = new FileRange(1024, 2048L);
DownloadRetryOptions options = new DownloadRetryOptions().setMaxRetryRequests(5);

System.out.printf("Download completed with status %d%n",
    client.readWithResponse(new ByteArrayOutputStream(), range, options, null, false,
        timeout, new Context(key2, value2)).getStatusCode());
For more information, see the Azure Docs
stream - A non-null OutputStream instance where the downloaded data will be written.
range - FileRange
options - DownloadRetryOptions
requestConditions - DataLakeRequestConditions
getRangeContentMd5 - Whether the contentMD5 for the specified file range should be returned.
timeout - An optional timeout value beyond which a RuntimeException will be raised.
context - Additional context that is passed through the Http pipeline during the service call.
UncheckedIOException - If an I/O error occurs.
NullPointerException - if stream is null
public PathProperties readToFile(String filePath)
The file will be created and must not exist; if the file already exists, a FileAlreadyExistsException will be thrown.
Code Samples
client.readToFile(file);
System.out.println("Completed download to file");
For more information, see the Azure Docs
filePath - A String representing the filePath where the downloaded data will be written.
UncheckedIOException - If an I/O error occurs
public PathProperties readToFile(String filePath, boolean overwrite)
If overwrite is set to false, the file will be created and must not exist; if the file already exists, a FileAlreadyExistsException will be thrown.
Code Samples
boolean overwrite = false; // Default value
client.readToFile(file, overwrite);
System.out.println("Completed download to file");
For more information, see the Azure Docs
filePath - A String representing the filePath where the downloaded data will be written.
overwrite - Whether or not to overwrite the file, should the file exist.
UncheckedIOException - If an I/O error occurs
public com.azure.core.http.rest.Response<PathProperties> readToFileWithResponse(String filePath, FileRange range, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, DownloadRetryOptions downloadRetryOptions, DataLakeRequestConditions requestConditions, boolean rangeGetContentMd5, Set<OpenOption> openOptions, Duration timeout, com.azure.core.util.Context context)
By default the file will be created and must not exist; if the file already exists, a FileAlreadyExistsException will be thrown. To override this behavior, provide appropriate OpenOptions.
Code Samples
FileRange fileRange = new FileRange(1024, 2048L);
DownloadRetryOptions downloadRetryOptions = new DownloadRetryOptions().setMaxRetryRequests(5);
Set<OpenOption> openOptions = new HashSet<>(Arrays.asList(StandardOpenOption.CREATE_NEW,
    StandardOpenOption.WRITE, StandardOpenOption.READ)); // Default options

client.readToFileWithResponse(file, fileRange, new ParallelTransferOptions().setBlockSizeLong(4L * Constants.MB),
    downloadRetryOptions, null, false, openOptions, timeout, new Context(key2, value2));
System.out.println("Completed download to file");
For more information, see the Azure Docs
filePath - A String representing the filePath where the downloaded data will be written.
range - FileRange
parallelTransferOptions - ParallelTransferOptions to use to download to file. Number of parallel transfers parameter is ignored.
downloadRetryOptions - DownloadRetryOptions
requestConditions - DataLakeRequestConditions
rangeGetContentMd5 - Whether the contentMD5 for the specified file range should be returned.
openOptions - OpenOptions to use to configure how to open or create the file.
timeout - An optional timeout value beyond which a RuntimeException will be raised.
context - Additional context that is passed through the Http pipeline during the service call.
UncheckedIOException - If an I/O error occurs.
public DataLakeFileClient rename(String destinationFileSystem, String destinationPath)
Code Samples
DataLakeFileClient renamedClient = client.rename(fileSystemName, destinationPath);
System.out.println("File client has been renamed");
destinationFileSystem - The file system of the destination within the account. null for the current file system.
destinationPath - Relative path from the file system to rename the file to, excludes the file system name. For example, to move a file with fileSystem = "myfilesystem" and path = "mydir/hello.txt" to another path in myfilesystem (for example, newdir/hi.txt), set destinationPath = "newdir/hi.txt".
Returns: A DataLakeFileClient used to interact with the new file created.
public com.azure.core.http.rest.Response<DataLakeFileClient> renameWithResponse(String destinationFileSystem, String destinationPath, DataLakeRequestConditions sourceRequestConditions, DataLakeRequestConditions destinationRequestConditions, Duration timeout, com.azure.core.util.Context context)
Code Samples
DataLakeRequestConditions sourceRequestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId);
DataLakeRequestConditions destinationRequestConditions = new DataLakeRequestConditions();

DataLakeFileClient newRenamedClient = client.renameWithResponse(fileSystemName, destinationPath,
    sourceRequestConditions, destinationRequestConditions, timeout, new Context(key1, value1)).getValue();
System.out.println("File client has been renamed");
destinationFileSystem - The file system of the destination within the account. null for the current file system.
destinationPath - Relative path from the file system to rename the file to, excludes the file system name. For example, to move a file with fileSystem = "myfilesystem" and path = "mydir/hello.txt" to another path in myfilesystem (for example, newdir/hi.txt), set destinationPath = "newdir/hi.txt".
sourceRequestConditions - DataLakeRequestConditions against the source.
destinationRequestConditions - DataLakeRequestConditions against the destination.
timeout - An optional timeout value beyond which a RuntimeException will be raised.
context - Additional context that is passed through the Http pipeline during the service call.
Returns: A Response whose value contains a DataLakeFileClient used to interact with the file created.
public InputStream openQueryInputStream(String expression)
For more information, see the Azure Docs
Code Samples
String expression = "SELECT * from BlobStorage";
InputStream inputStream = client.openQueryInputStream(expression);
// Now you can read from the input stream like you would normally.
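One way to consume the returned stream, sketched with a BufferedReader (this assumes the chosen output serialization is line-oriented text, such as CSV or JSON lines):

try (BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream, StandardCharsets.UTF_8))) {
    String line;
    while ((line = reader.readLine()) != null) {
        System.out.println(line); // each line is one record of the query result
    }
} catch (IOException ex) {
    System.err.printf("Failed to read query results: %s%n", ex.getMessage());
}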
expression - The query expression.
Returns: An InputStream object that represents the stream to use for reading the query response.
public com.azure.core.http.rest.Response<InputStream> openQueryInputStreamWithResponse(FileQueryOptions queryOptions)
For more information, see the Azure Docs
Code Samples
String expression = "SELECT * from BlobStorage";
FileQuerySerialization input = new FileQueryDelimitedSerialization()
    .setColumnSeparator(',')
    .setEscapeChar('\n')
    .setRecordSeparator('\n')
    .setHeadersPresent(true)
    .setFieldQuote('"');
FileQuerySerialization output = new FileQueryJsonSerialization()
    .setRecordSeparator('\n');
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId("leaseId");
Consumer<FileQueryError> errorConsumer = System.out::println;
Consumer<FileQueryProgress> progressConsumer = progress -> System.out.println("total file bytes read: "
    + progress.getBytesScanned());
FileQueryOptions queryOptions = new FileQueryOptions(expression)
    .setInputSerialization(input)
    .setOutputSerialization(output)
    .setRequestConditions(requestConditions)
    .setErrorConsumer(errorConsumer)
    .setProgressConsumer(progressConsumer);

InputStream inputStream = client.openQueryInputStreamWithResponse(queryOptions).getValue();
// Now you can read from the input stream like you would normally.
queryOptions - The query options.
Returns: An InputStream object that represents the stream to use for reading the query response.
public void query(OutputStream stream, String expression)
For more information, see the Azure Docs
Code Samples
ByteArrayOutputStream queryData = new ByteArrayOutputStream();
String expression = "SELECT * from BlobStorage";
client.query(queryData, expression);
System.out.println("Query completed.");
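Because the sample collects results into a ByteArrayOutputStream, the query output can be recovered as text afterwards (a small sketch continuing the snippet above):

// Convert the buffered query output into a String for further processing.
String results = new String(queryData.toByteArray(), StandardCharsets.UTF_8);
System.out.println(results);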
stream - A non-null OutputStream instance where the downloaded data will be written.
expression - The query expression.
UncheckedIOException - If an I/O error occurs.
NullPointerException - if stream is null.
public FileQueryResponse queryWithResponse(FileQueryOptions queryOptions, Duration timeout, com.azure.core.util.Context context)
For more information, see the Azure Docs
Code Samples
ByteArrayOutputStream queryData = new ByteArrayOutputStream();
String expression = "SELECT * from BlobStorage";
FileQueryJsonSerialization input = new FileQueryJsonSerialization()
    .setRecordSeparator('\n');
FileQueryDelimitedSerialization output = new FileQueryDelimitedSerialization()
    .setEscapeChar('\0')
    .setColumnSeparator(',')
    .setRecordSeparator('\n')
    .setFieldQuote('\'')
    .setHeadersPresent(true);
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions().setLeaseId(leaseId);
Consumer<FileQueryError> errorConsumer = System.out::println;
Consumer<FileQueryProgress> progressConsumer = progress -> System.out.println("total file bytes read: "
    + progress.getBytesScanned());
FileQueryOptions queryOptions = new FileQueryOptions(expression, queryData)
    .setInputSerialization(input)
    .setOutputSerialization(output)
    .setRequestConditions(requestConditions)
    .setErrorConsumer(errorConsumer)
    .setProgressConsumer(progressConsumer);

System.out.printf("Query completed with status %d%n",
    client.queryWithResponse(queryOptions, timeout, new Context(key1, value1))
        .getStatusCode());
queryOptions - The query options.
timeout - An optional timeout value beyond which a RuntimeException will be raised.
context - Additional context that is passed through the Http pipeline during the service call.
UncheckedIOException - If an I/O error occurs.
NullPointerException - if stream is null.
public void scheduleDeletion(FileScheduleDeletionOptions options)
Code Samples
FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
client.scheduleDeletion(options);
System.out.println("File deletion has been scheduled");
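The options can also express a relative expiry rather than an absolute time; a sketch assuming the FileScheduleDeletionOptions(Duration, FileExpirationOffset) constructor available in recent SDK versions:

// Schedule deletion seven days after the file's creation time (relative expiry).
FileScheduleDeletionOptions relativeOptions = new FileScheduleDeletionOptions(
    Duration.ofDays(7), FileExpirationOffset.CREATION_TIME);
client.scheduleDeletion(relativeOptions);
System.out.println("File deletion has been scheduled");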
options - Schedule deletion parameters.
public com.azure.core.http.rest.Response<Void> scheduleDeletionWithResponse(FileScheduleDeletionOptions options, Duration timeout, com.azure.core.util.Context context)
Code Samples
FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
Context context = new Context("key", "value");

client.scheduleDeletionWithResponse(options, timeout, context);
System.out.println("File deletion has been scheduled");
options - Schedule deletion parameters.
timeout - An optional timeout value beyond which a RuntimeException will be raised.
context - Additional context that is passed through the Http pipeline during the service call.