@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class DataRepositoryConfiguration extends Object implements Serializable, Cloneable, StructuredPojo
The data repository configuration object for Lustre file systems returned in the response of the
CreateFileSystem operation.
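Like other generated SDK model classes, each property here exposes a getter, a setter, and a fluent `withX` variant that returns the object itself so calls can be chained. A minimal stand-in class (illustrative only, not the SDK implementation) sketching that pattern:

```java
// Stand-in illustrating the getter/setter/wither pattern used by generated
// AWS SDK model classes such as DataRepositoryConfiguration.
// This is a simplified sketch, not the real class.
class DataRepoConfigSketch {
    private String importPath;
    private String exportPath;
    private Integer importedFileChunkSize;

    public String getImportPath() { return importPath; }
    public void setImportPath(String importPath) { this.importPath = importPath; }
    // The "wither" sets the field and returns this, so calls can be chained.
    public DataRepoConfigSketch withImportPath(String importPath) {
        setImportPath(importPath);
        return this;
    }

    public String getExportPath() { return exportPath; }
    public void setExportPath(String exportPath) { this.exportPath = exportPath; }
    public DataRepoConfigSketch withExportPath(String exportPath) {
        setExportPath(exportPath);
        return this;
    }

    public Integer getImportedFileChunkSize() { return importedFileChunkSize; }
    public void setImportedFileChunkSize(Integer size) { this.importedFileChunkSize = size; }
    public DataRepoConfigSketch withImportedFileChunkSize(Integer size) {
        setImportedFileChunkSize(size);
        return this;
    }

    public static void main(String[] args) {
        DataRepoConfigSketch cfg = new DataRepoConfigSketch()
                .withImportPath("s3://import-bucket/optional-prefix")
                .withImportedFileChunkSize(1024);
        System.out.println(cfg.getImportPath() + " " + cfg.getImportedFileChunkSize());
    }
}
```

In application code, the real class's `withImportPath(...)`, `withExportPath(...)`, and `withImportedFileChunkSize(...)` chain the same way.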
| Constructor and Description |
|---|
| `DataRepositoryConfiguration()` |

| Modifier and Type | Method and Description |
|---|---|
| `DataRepositoryConfiguration` | `clone()` |
| `boolean` | `equals(Object obj)` |
| `String` | `getExportPath()` The Amazon S3 commit path to use for storing new and changed Lustre file system files as part of the archive operation from the file system to Amazon S3. |
| `Integer` | `getImportedFileChunkSize()` For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. |
| `String` | `getImportPath()` The import path to the Amazon S3 bucket (and optional prefix) that you're using as the data repository for your FSx for Lustre file system, for example `s3://import-bucket/optional-prefix`. |
| `int` | `hashCode()` |
| `void` | `marshall(ProtocolMarshaller protocolMarshaller)` Marshalls this structured data using the given `ProtocolMarshaller`. |
| `void` | `setExportPath(String exportPath)` The Amazon S3 commit path to use for storing new and changed Lustre file system files as part of the archive operation from the file system to Amazon S3. |
| `void` | `setImportedFileChunkSize(Integer importedFileChunkSize)` For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. |
| `void` | `setImportPath(String importPath)` The import path to the Amazon S3 bucket (and optional prefix) that you're using as the data repository for your FSx for Lustre file system, for example `s3://import-bucket/optional-prefix`. |
| `String` | `toString()` Returns a string representation of this object. |
| `DataRepositoryConfiguration` | `withExportPath(String exportPath)` The Amazon S3 commit path to use for storing new and changed Lustre file system files as part of the archive operation from the file system to Amazon S3. |
| `DataRepositoryConfiguration` | `withImportedFileChunkSize(Integer importedFileChunkSize)` For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. |
| `DataRepositoryConfiguration` | `withImportPath(String importPath)` The import path to the Amazon S3 bucket (and optional prefix) that you're using as the data repository for your FSx for Lustre file system, for example `s3://import-bucket/optional-prefix`. |
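Both path properties follow fixed shapes: the import path is an `s3://bucket/optional-prefix` URI, and the export prefix embeds a UTC creation timestamp such as `FSxLustre20181105T222312Z`. A hedged sketch of parsing these strings client-side (the helper names are illustrative, not part of the SDK):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Illustrative helpers for the path formats described in this class's
// documentation; not part of the AWS SDK. Assumes well-formed input.
class RepoPathParsing {
    /** Splits an s3://bucket/optional-prefix URI into {bucket, prefix}. */
    static String[] splitS3Uri(String uri) {
        String rest = uri.substring("s3://".length());
        int slash = rest.indexOf('/');
        return slash < 0
                ? new String[] { rest, "" }
                : new String[] { rest.substring(0, slash), rest.substring(slash + 1) };
    }

    /**
     * Extracts the creation timestamp from an export prefix like
     * FSxLustre20181105T222312Z (the trailing Z marks UTC and is dropped here).
     */
    static LocalDateTime exportTimestamp(String exportPrefix) {
        String ts = exportPrefix.substring("FSxLustre".length(), "FSxLustre".length() + 15);
        return LocalDateTime.parse(ts, DateTimeFormatter.ofPattern("yyyyMMdd'T'HHmmss"));
    }

    public static void main(String[] args) {
        String[] parts = splitS3Uri("s3://import-bucket/FSxLustre20181105T222312Z");
        System.out.println(parts[0] + " | " + parts[1]);
        System.out.println(exportTimestamp(parts[1]));
    }
}
```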
public void setImportPath(String importPath)

The import path to the Amazon S3 bucket (and optional prefix) that you're using as the data repository for your FSx for Lustre file system, for example `s3://import-bucket/optional-prefix`. If a prefix is specified after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.

Parameters:
importPath - The import path to the Amazon S3 bucket (and optional prefix) that you're using as the data repository for your FSx for Lustre file system, for example `s3://import-bucket/optional-prefix`. If a prefix is specified after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.

public String getImportPath()

The import path to the Amazon S3 bucket (and optional prefix) that you're using as the data repository for your FSx for Lustre file system, for example `s3://import-bucket/optional-prefix`. If a prefix is specified after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.

Returns:
The import path to the Amazon S3 bucket (and optional prefix) that you're using as the data repository for your FSx for Lustre file system, for example `s3://import-bucket/optional-prefix`. If a prefix is specified after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.

public DataRepositoryConfiguration withImportPath(String importPath)

The import path to the Amazon S3 bucket (and optional prefix) that you're using as the data repository for your FSx for Lustre file system, for example `s3://import-bucket/optional-prefix`. If a prefix is specified after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.

Parameters:
importPath - The import path to the Amazon S3 bucket (and optional prefix) that you're using as the data repository for your FSx for Lustre file system, for example `s3://import-bucket/optional-prefix`. If a prefix is specified after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.
Returns:
Returns a reference to this object so that method calls can be chained together.

public void setExportPath(String exportPath)
The Amazon S3 commit path to use for storing new and changed Lustre file system files as part of the archive operation from the file system to Amazon S3. The value is `s3://import-bucket/FSxLustre[creationtimestamp]`. The timestamp is presented in UTC format, for example `s3://import-bucket/FSxLustre20181105T222312Z`. Files are archived to a different prefix in the Amazon S3 bucket, preventing input data from being overwritten.

Parameters:
exportPath - The Amazon S3 commit path to use for storing new and changed Lustre file system files as part of the archive operation from the file system to Amazon S3. The value is `s3://import-bucket/FSxLustre[creationtimestamp]`. The timestamp is presented in UTC format, for example `s3://import-bucket/FSxLustre20181105T222312Z`. Files are archived to a different prefix in the Amazon S3 bucket, preventing input data from being overwritten.

public String getExportPath()

The Amazon S3 commit path to use for storing new and changed Lustre file system files as part of the archive operation from the file system to Amazon S3. The value is `s3://import-bucket/FSxLustre[creationtimestamp]`. The timestamp is presented in UTC format, for example `s3://import-bucket/FSxLustre20181105T222312Z`. Files are archived to a different prefix in the Amazon S3 bucket, preventing input data from being overwritten.

Returns:
The Amazon S3 commit path to use for storing new and changed Lustre file system files as part of the archive operation from the file system to Amazon S3. The value is `s3://import-bucket/FSxLustre[creationtimestamp]`. The timestamp is presented in UTC format, for example `s3://import-bucket/FSxLustre20181105T222312Z`. Files are archived to a different prefix in the Amazon S3 bucket, preventing input data from being overwritten.

public DataRepositoryConfiguration withExportPath(String exportPath)

The Amazon S3 commit path to use for storing new and changed Lustre file system files as part of the archive operation from the file system to Amazon S3. The value is `s3://import-bucket/FSxLustre[creationtimestamp]`. The timestamp is presented in UTC format, for example `s3://import-bucket/FSxLustre20181105T222312Z`. Files are archived to a different prefix in the Amazon S3 bucket, preventing input data from being overwritten.

Parameters:
exportPath - The Amazon S3 commit path to use for storing new and changed Lustre file system files as part of the archive operation from the file system to Amazon S3. The value is `s3://import-bucket/FSxLustre[creationtimestamp]`. The timestamp is presented in UTC format, for example `s3://import-bucket/FSxLustre20181105T222312Z`. Files are archived to a different prefix in the Amazon S3 bucket, preventing input data from being overwritten.
Returns:
Returns a reference to this object so that method calls can be chained together.

public void setImportedFileChunkSize(Integer importedFileChunkSize)
For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system.

The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.

Parameters:
importedFileChunkSize - For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system. The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.

public Integer getImportedFileChunkSize()

For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system.

The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.

Returns:
For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system. The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.

public DataRepositoryConfiguration withImportedFileChunkSize(Integer importedFileChunkSize)

For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system.

The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.

Parameters:
importedFileChunkSize - For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system. The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
Returns:
Returns a reference to this object so that method calls can be chained together.
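One way to read the chunk-size description above: an imported file occupies roughly one stripe per `importedFileChunkSize` MiB, capped by the disk count of the file system. An illustrative back-of-the-envelope calculation (an assumption about the layout, not SDK behavior):

```java
// Illustrative arithmetic only: approximates the stripe-count behavior
// described for importedFileChunkSize. Not AWS SDK code.
class StripeMath {
    /**
     * Approximate number of disks an imported file spreads across:
     * one stripe per chunk of importedFileChunkSize MiB, capped by the
     * total number of disks that make up the file system.
     */
    static long approxStripeCount(long fileSizeMiB, long chunkSizeMiB, long totalDisks) {
        long chunks = (fileSizeMiB + chunkSizeMiB - 1) / chunkSizeMiB; // ceiling division
        return Math.min(Math.max(chunks, 1), totalDisks);
    }

    public static void main(String[] args) {
        // With the default 1,024 MiB chunk size, a 10 GiB file spans ~10 disks.
        System.out.println(approxStripeCount(10_240, 1_024, 100));
    }
}
```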
public String toString()

Returns a string representation of this object.

Overrides:
toString in class Object
See Also:
Object.toString()

public DataRepositoryConfiguration clone()

public void marshall(ProtocolMarshaller protocolMarshaller)

Marshalls this structured data using the given ProtocolMarshaller.

Specified by:
marshall in interface StructuredPojo
Parameters:
protocolMarshaller - Implementation of ProtocolMarshaller used to marshall this object's data.

Copyright © 2013 Amazon Web Services, Inc. All Rights Reserved.