@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class CreateFileSystemLustreConfiguration extends Object implements Serializable, Cloneable, StructuredPojo
The Lustre configuration for the file system being created.
| Constructor and Description |
| --- |
| CreateFileSystemLustreConfiguration() |
| Modifier and Type | Method and Description |
| --- | --- |
| CreateFileSystemLustreConfiguration | clone() |
| boolean | equals(Object obj) |
| String | getDeploymentType() (Optional) Choose SCRATCH_1 and SCRATCH_2 deployment types when you need temporary storage and shorter-term processing of data. |
| String | getExportPath() (Optional) The path in Amazon S3 where the root of your Amazon FSx file system is exported. |
| Integer | getImportedFileChunkSize() (Optional) For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. |
| String | getImportPath() (Optional) The path to the Amazon S3 bucket (including the optional prefix) that you're using as the data repository for your Amazon FSx for Lustre file system. |
| Integer | getPerUnitStorageThroughput() Required for the PERSISTENT_1 deployment type, describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB. |
| String | getWeeklyMaintenanceStartTime() The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday. |
| int | hashCode() |
| void | marshall(ProtocolMarshaller protocolMarshaller) Marshalls this structured data using the given ProtocolMarshaller. |
| void | setDeploymentType(String deploymentType) (Optional) Choose SCRATCH_1 and SCRATCH_2 deployment types when you need temporary storage and shorter-term processing of data. |
| void | setExportPath(String exportPath) (Optional) The path in Amazon S3 where the root of your Amazon FSx file system is exported. |
| void | setImportedFileChunkSize(Integer importedFileChunkSize) (Optional) For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. |
| void | setImportPath(String importPath) (Optional) The path to the Amazon S3 bucket (including the optional prefix) that you're using as the data repository for your Amazon FSx for Lustre file system. |
| void | setPerUnitStorageThroughput(Integer perUnitStorageThroughput) Required for the PERSISTENT_1 deployment type, describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB. |
| void | setWeeklyMaintenanceStartTime(String weeklyMaintenanceStartTime) The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday. |
| String | toString() Returns a string representation of this object. |
| CreateFileSystemLustreConfiguration | withDeploymentType(LustreDeploymentType deploymentType) (Optional) Choose SCRATCH_1 and SCRATCH_2 deployment types when you need temporary storage and shorter-term processing of data. |
| CreateFileSystemLustreConfiguration | withDeploymentType(String deploymentType) (Optional) Choose SCRATCH_1 and SCRATCH_2 deployment types when you need temporary storage and shorter-term processing of data. |
| CreateFileSystemLustreConfiguration | withExportPath(String exportPath) (Optional) The path in Amazon S3 where the root of your Amazon FSx file system is exported. |
| CreateFileSystemLustreConfiguration | withImportedFileChunkSize(Integer importedFileChunkSize) (Optional) For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. |
| CreateFileSystemLustreConfiguration | withImportPath(String importPath) (Optional) The path to the Amazon S3 bucket (including the optional prefix) that you're using as the data repository for your Amazon FSx for Lustre file system. |
| CreateFileSystemLustreConfiguration | withPerUnitStorageThroughput(Integer perUnitStorageThroughput) Required for the PERSISTENT_1 deployment type, describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB. |
| CreateFileSystemLustreConfiguration | withWeeklyMaintenanceStartTime(String weeklyMaintenanceStartTime) The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday. |
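All of the with* methods return this configuration object, so the options summarized above can be chained fluently. The sketch below is illustrative only: the CreateFileSystemRequest fields (file system type, storage capacity, subnet ID) are assumptions drawn from the wider Amazon FSx API rather than from this page, and the bucket, prefix, and subnet values are placeholders.

```java
import com.amazonaws.services.fsx.model.CreateFileSystemLustreConfiguration;
import com.amazonaws.services.fsx.model.CreateFileSystemRequest;

public class LustreConfigurationExample {
    public static void main(String[] args) {
        // Assemble the Lustre configuration using the fluent with* methods documented here.
        CreateFileSystemLustreConfiguration lustreConfig = new CreateFileSystemLustreConfiguration()
                .withDeploymentType("SCRATCH_2")
                .withImportPath("s3://import-bucket/optional-prefix")
                .withImportedFileChunkSize(1024)            // 1,024 MiB (the default)
                .withWeeklyMaintenanceStartTime("1:05:00"); // Monday, 05:00 UTC

        // Assumed usage: attach the configuration to a CreateFileSystemRequest.
        // These request fields come from the broader FSx API, not from this class.
        CreateFileSystemRequest request = new CreateFileSystemRequest()
                .withFileSystemType("LUSTRE")
                .withStorageCapacity(1200)                  // capacity in GiB (placeholder value)
                .withSubnetIds("subnet-0123456789abcdef0")  // placeholder subnet ID
                .withLustreConfiguration(lustreConfig);

        System.out.println(request);
    }
}
```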
public void setWeeklyMaintenanceStartTime(String weeklyMaintenanceStartTime)

The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.

Parameters:
weeklyMaintenanceStartTime - The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.
public String getWeeklyMaintenanceStartTime()

The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.
public CreateFileSystemLustreConfiguration withWeeklyMaintenanceStartTime(String weeklyMaintenanceStartTime)

The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.

Parameters:
weeklyMaintenanceStartTime - The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.
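As a quick illustration of the d:HH:MM format, the sketch below pins the maintenance window to Sunday at 09:30 UTC. The wrapper class and method name are hypothetical; only withWeeklyMaintenanceStartTime comes from this page.

```java
import com.amazonaws.services.fsx.model.CreateFileSystemLustreConfiguration;

class MaintenanceWindowExample {
    // d:HH:MM, where d runs from 1 (Monday) through 7 (Sunday) and HH:MM is in UTC.
    static CreateFileSystemLustreConfiguration sundayMorningWindow() {
        return new CreateFileSystemLustreConfiguration()
                .withWeeklyMaintenanceStartTime("7:09:30"); // Sunday, 09:30 UTC
    }
}
```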
public void setImportPath(String importPath)

(Optional) The path to the Amazon S3 bucket (including the optional prefix) that you're using as the data repository for your Amazon FSx for Lustre file system. The root of your FSx for Lustre file system will be mapped to the root of the Amazon S3 bucket you select. An example is s3://import-bucket/optional-prefix. If you specify a prefix after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.

Parameters:
importPath - (Optional) The path to the Amazon S3 bucket (including the optional prefix) that you're using as the data repository for your Amazon FSx for Lustre file system. The root of your FSx for Lustre file system will be mapped to the root of the Amazon S3 bucket you select. An example is s3://import-bucket/optional-prefix. If you specify a prefix after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.
public String getImportPath()

(Optional) The path to the Amazon S3 bucket (including the optional prefix) that you're using as the data repository for your Amazon FSx for Lustre file system. The root of your FSx for Lustre file system will be mapped to the root of the Amazon S3 bucket you select. An example is s3://import-bucket/optional-prefix. If you specify a prefix after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.

Returns:
(Optional) The path to the Amazon S3 bucket (including the optional prefix) that you're using as the data repository for your Amazon FSx for Lustre file system. The root of your FSx for Lustre file system will be mapped to the root of the Amazon S3 bucket you select. An example is s3://import-bucket/optional-prefix. If you specify a prefix after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.
public CreateFileSystemLustreConfiguration withImportPath(String importPath)

(Optional) The path to the Amazon S3 bucket (including the optional prefix) that you're using as the data repository for your Amazon FSx for Lustre file system. The root of your FSx for Lustre file system will be mapped to the root of the Amazon S3 bucket you select. An example is s3://import-bucket/optional-prefix. If you specify a prefix after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.

Parameters:
importPath - (Optional) The path to the Amazon S3 bucket (including the optional prefix) that you're using as the data repository for your Amazon FSx for Lustre file system. The root of your FSx for Lustre file system will be mapped to the root of the Amazon S3 bucket you select. An example is s3://import-bucket/optional-prefix. If you specify a prefix after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.
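For example, a minimal sketch that points the data repository at a prefixed location in the bucket named above; the wrapper class is hypothetical and the bucket and prefix are placeholders.

```java
import com.amazonaws.services.fsx.model.CreateFileSystemLustreConfiguration;

class ImportPathExample {
    // Only object keys under optional-prefix/ are loaded into the file system.
    static CreateFileSystemLustreConfiguration prefixedImport() {
        return new CreateFileSystemLustreConfiguration()
                .withImportPath("s3://import-bucket/optional-prefix");
    }
}
```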
public void setExportPath(String exportPath)

(Optional) The path in Amazon S3 where the root of your Amazon FSx file system is exported. The path must use the same Amazon S3 bucket as specified in ImportPath. You can provide an optional prefix to which new and changed data is to be exported from your Amazon FSx for Lustre file system. If an ExportPath value is not provided, Amazon FSx sets a default export path, s3://import-bucket/FSxLustre[creation-timestamp]. The timestamp is in UTC format, for example s3://import-bucket/FSxLustre20181105T222312Z.

The Amazon S3 export bucket must be the same as the import bucket specified by ImportPath. If you only specify a bucket name, such as s3://import-bucket, you get a 1:1 mapping of file system objects to S3 bucket objects. This mapping means that the input data in S3 is overwritten on export. If you provide a custom prefix in the export path, such as s3://import-bucket/[custom-optional-prefix], Amazon FSx exports the contents of your file system to that export prefix in the Amazon S3 bucket.

Parameters:
exportPath - (Optional) The path in Amazon S3 where the root of your Amazon FSx file system is exported. The path must use the same Amazon S3 bucket as specified in ImportPath. You can provide an optional prefix to which new and changed data is to be exported from your Amazon FSx for Lustre file system. If an ExportPath value is not provided, Amazon FSx sets a default export path, s3://import-bucket/FSxLustre[creation-timestamp]. The timestamp is in UTC format, for example s3://import-bucket/FSxLustre20181105T222312Z. The Amazon S3 export bucket must be the same as the import bucket specified by ImportPath. If you only specify a bucket name, such as s3://import-bucket, you get a 1:1 mapping of file system objects to S3 bucket objects. This mapping means that the input data in S3 is overwritten on export. If you provide a custom prefix in the export path, such as s3://import-bucket/[custom-optional-prefix], Amazon FSx exports the contents of your file system to that export prefix in the Amazon S3 bucket.
public String getExportPath()

(Optional) The path in Amazon S3 where the root of your Amazon FSx file system is exported. The path must use the same Amazon S3 bucket as specified in ImportPath. You can provide an optional prefix to which new and changed data is to be exported from your Amazon FSx for Lustre file system. If an ExportPath value is not provided, Amazon FSx sets a default export path, s3://import-bucket/FSxLustre[creation-timestamp]. The timestamp is in UTC format, for example s3://import-bucket/FSxLustre20181105T222312Z.

The Amazon S3 export bucket must be the same as the import bucket specified by ImportPath. If you only specify a bucket name, such as s3://import-bucket, you get a 1:1 mapping of file system objects to S3 bucket objects. This mapping means that the input data in S3 is overwritten on export. If you provide a custom prefix in the export path, such as s3://import-bucket/[custom-optional-prefix], Amazon FSx exports the contents of your file system to that export prefix in the Amazon S3 bucket.

Returns:
(Optional) The path in Amazon S3 where the root of your Amazon FSx file system is exported. The path must use the same Amazon S3 bucket as specified in ImportPath. You can provide an optional prefix to which new and changed data is to be exported from your Amazon FSx for Lustre file system. If an ExportPath value is not provided, Amazon FSx sets a default export path, s3://import-bucket/FSxLustre[creation-timestamp]. The timestamp is in UTC format, for example s3://import-bucket/FSxLustre20181105T222312Z. The Amazon S3 export bucket must be the same as the import bucket specified by ImportPath. If you only specify a bucket name, such as s3://import-bucket, you get a 1:1 mapping of file system objects to S3 bucket objects. This mapping means that the input data in S3 is overwritten on export. If you provide a custom prefix in the export path, such as s3://import-bucket/[custom-optional-prefix], Amazon FSx exports the contents of your file system to that export prefix in the Amazon S3 bucket.
public CreateFileSystemLustreConfiguration withExportPath(String exportPath)

(Optional) The path in Amazon S3 where the root of your Amazon FSx file system is exported. The path must use the same Amazon S3 bucket as specified in ImportPath. You can provide an optional prefix to which new and changed data is to be exported from your Amazon FSx for Lustre file system. If an ExportPath value is not provided, Amazon FSx sets a default export path, s3://import-bucket/FSxLustre[creation-timestamp]. The timestamp is in UTC format, for example s3://import-bucket/FSxLustre20181105T222312Z.

The Amazon S3 export bucket must be the same as the import bucket specified by ImportPath. If you only specify a bucket name, such as s3://import-bucket, you get a 1:1 mapping of file system objects to S3 bucket objects. This mapping means that the input data in S3 is overwritten on export. If you provide a custom prefix in the export path, such as s3://import-bucket/[custom-optional-prefix], Amazon FSx exports the contents of your file system to that export prefix in the Amazon S3 bucket.

Parameters:
exportPath - (Optional) The path in Amazon S3 where the root of your Amazon FSx file system is exported. The path must use the same Amazon S3 bucket as specified in ImportPath. You can provide an optional prefix to which new and changed data is to be exported from your Amazon FSx for Lustre file system. If an ExportPath value is not provided, Amazon FSx sets a default export path, s3://import-bucket/FSxLustre[creation-timestamp]. The timestamp is in UTC format, for example s3://import-bucket/FSxLustre20181105T222312Z. The Amazon S3 export bucket must be the same as the import bucket specified by ImportPath. If you only specify a bucket name, such as s3://import-bucket, you get a 1:1 mapping of file system objects to S3 bucket objects. This mapping means that the input data in S3 is overwritten on export. If you provide a custom prefix in the export path, such as s3://import-bucket/[custom-optional-prefix], Amazon FSx exports the contents of your file system to that export prefix in the Amazon S3 bucket.
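Putting the two paths together: a sketch that imports from the bucket root and exports new and changed data under a custom prefix so the imported objects are not overwritten on export. The wrapper class and the fsx-export prefix are made up for illustration.

```java
import com.amazonaws.services.fsx.model.CreateFileSystemLustreConfiguration;

class DataRepositoryPathsExample {
    static CreateFileSystemLustreConfiguration importAndExportPaths() {
        return new CreateFileSystemLustreConfiguration()
                .withImportPath("s3://import-bucket")
                // Must use the same bucket as ImportPath; a custom prefix avoids the
                // 1:1 mapping that would overwrite the input data on export.
                .withExportPath("s3://import-bucket/fsx-export");
    }
}
```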
public void setImportedFileChunkSize(Integer importedFileChunkSize)

(Optional) For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system.

The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.

Parameters:
importedFileChunkSize - (Optional) For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system. The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
public Integer getImportedFileChunkSize()

(Optional) For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system.

The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.

Returns:
(Optional) For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system. The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
public CreateFileSystemLustreConfiguration withImportedFileChunkSize(Integer importedFileChunkSize)

(Optional) For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system.

The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.

Parameters:
importedFileChunkSize - (Optional) For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system. The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
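For instance, a sketch that raises the chunk size above the 1,024 MiB default so large imported files are striped in bigger pieces; the 4,096 MiB value is an arbitrary illustration within the documented 512,000 MiB limit, and the wrapper class is hypothetical.

```java
import com.amazonaws.services.fsx.model.CreateFileSystemLustreConfiguration;

class ChunkSizeExample {
    static CreateFileSystemLustreConfiguration largerStripes() {
        return new CreateFileSystemLustreConfiguration()
                // Default is 1,024 MiB; the documented maximum is 512,000 MiB.
                .withImportedFileChunkSize(4096);
    }
}
```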
public void setDeploymentType(String deploymentType)

(Optional) Choose SCRATCH_1 and SCRATCH_2 deployment types when you need temporary storage and shorter-term processing of data. The SCRATCH_2 deployment type provides in-transit encryption of data and higher burst throughput capacity than SCRATCH_1.

Choose PERSISTENT_1 deployment type for longer-term storage and workloads and encryption of data in transit. To learn more about deployment types, see FSx for Lustre Deployment Options.

Encryption of data in-transit is automatically enabled when you access a SCRATCH_2 or PERSISTENT_1 file system from Amazon EC2 instances that support this feature. (Default = SCRATCH_1)

Encryption of data in-transit for SCRATCH_2 and PERSISTENT_1 deployment types is supported when accessed from supported instance types in supported AWS Regions. To learn more, see Encrypting Data in Transit.

Parameters:
deploymentType - (Optional) Choose SCRATCH_1 and SCRATCH_2 deployment types when you need temporary storage and shorter-term processing of data. The SCRATCH_2 deployment type provides in-transit encryption of data and higher burst throughput capacity than SCRATCH_1. Choose PERSISTENT_1 deployment type for longer-term storage and workloads and encryption of data in transit. To learn more about deployment types, see FSx for Lustre Deployment Options. Encryption of data in-transit is automatically enabled when you access a SCRATCH_2 or PERSISTENT_1 file system from Amazon EC2 instances that support this feature. (Default = SCRATCH_1) Encryption of data in-transit for SCRATCH_2 and PERSISTENT_1 deployment types is supported when accessed from supported instance types in supported AWS Regions. To learn more, see Encrypting Data in Transit.
See Also:
LustreDeploymentType
public String getDeploymentType()

(Optional) Choose SCRATCH_1 and SCRATCH_2 deployment types when you need temporary storage and shorter-term processing of data. The SCRATCH_2 deployment type provides in-transit encryption of data and higher burst throughput capacity than SCRATCH_1.

Choose PERSISTENT_1 deployment type for longer-term storage and workloads and encryption of data in transit. To learn more about deployment types, see FSx for Lustre Deployment Options.

Encryption of data in-transit is automatically enabled when you access a SCRATCH_2 or PERSISTENT_1 file system from Amazon EC2 instances that support this feature. (Default = SCRATCH_1)

Encryption of data in-transit for SCRATCH_2 and PERSISTENT_1 deployment types is supported when accessed from supported instance types in supported AWS Regions. To learn more, see Encrypting Data in Transit.

Returns:
(Optional) Choose SCRATCH_1 and SCRATCH_2 deployment types when you need temporary storage and shorter-term processing of data. The SCRATCH_2 deployment type provides in-transit encryption of data and higher burst throughput capacity than SCRATCH_1. Choose PERSISTENT_1 deployment type for longer-term storage and workloads and encryption of data in transit. To learn more about deployment types, see FSx for Lustre Deployment Options. Encryption of data in-transit is automatically enabled when you access a SCRATCH_2 or PERSISTENT_1 file system from Amazon EC2 instances that support this feature. (Default = SCRATCH_1) Encryption of data in-transit for SCRATCH_2 and PERSISTENT_1 deployment types is supported when accessed from supported instance types in supported AWS Regions. To learn more, see Encrypting Data in Transit.
See Also:
LustreDeploymentType
public CreateFileSystemLustreConfiguration withDeploymentType(String deploymentType)

(Optional) Choose SCRATCH_1 and SCRATCH_2 deployment types when you need temporary storage and shorter-term processing of data. The SCRATCH_2 deployment type provides in-transit encryption of data and higher burst throughput capacity than SCRATCH_1.

Choose PERSISTENT_1 deployment type for longer-term storage and workloads and encryption of data in transit. To learn more about deployment types, see FSx for Lustre Deployment Options.

Encryption of data in-transit is automatically enabled when you access a SCRATCH_2 or PERSISTENT_1 file system from Amazon EC2 instances that support this feature. (Default = SCRATCH_1)

Encryption of data in-transit for SCRATCH_2 and PERSISTENT_1 deployment types is supported when accessed from supported instance types in supported AWS Regions. To learn more, see Encrypting Data in Transit.

Parameters:
deploymentType - (Optional) Choose SCRATCH_1 and SCRATCH_2 deployment types when you need temporary storage and shorter-term processing of data. The SCRATCH_2 deployment type provides in-transit encryption of data and higher burst throughput capacity than SCRATCH_1. Choose PERSISTENT_1 deployment type for longer-term storage and workloads and encryption of data in transit. To learn more about deployment types, see FSx for Lustre Deployment Options. Encryption of data in-transit is automatically enabled when you access a SCRATCH_2 or PERSISTENT_1 file system from Amazon EC2 instances that support this feature. (Default = SCRATCH_1) Encryption of data in-transit for SCRATCH_2 and PERSISTENT_1 deployment types is supported when accessed from supported instance types in supported AWS Regions. To learn more, see Encrypting Data in Transit.
See Also:
LustreDeploymentType
public CreateFileSystemLustreConfiguration withDeploymentType(LustreDeploymentType deploymentType)

(Optional) Choose SCRATCH_1 and SCRATCH_2 deployment types when you need temporary storage and shorter-term processing of data. The SCRATCH_2 deployment type provides in-transit encryption of data and higher burst throughput capacity than SCRATCH_1.

Choose PERSISTENT_1 deployment type for longer-term storage and workloads and encryption of data in transit. To learn more about deployment types, see FSx for Lustre Deployment Options.

Encryption of data in-transit is automatically enabled when you access a SCRATCH_2 or PERSISTENT_1 file system from Amazon EC2 instances that support this feature. (Default = SCRATCH_1)

Encryption of data in-transit for SCRATCH_2 and PERSISTENT_1 deployment types is supported when accessed from supported instance types in supported AWS Regions. To learn more, see Encrypting Data in Transit.

Parameters:
deploymentType - (Optional) Choose SCRATCH_1 and SCRATCH_2 deployment types when you need temporary storage and shorter-term processing of data. The SCRATCH_2 deployment type provides in-transit encryption of data and higher burst throughput capacity than SCRATCH_1. Choose PERSISTENT_1 deployment type for longer-term storage and workloads and encryption of data in transit. To learn more about deployment types, see FSx for Lustre Deployment Options. Encryption of data in-transit is automatically enabled when you access a SCRATCH_2 or PERSISTENT_1 file system from Amazon EC2 instances that support this feature. (Default = SCRATCH_1) Encryption of data in-transit for SCRATCH_2 and PERSISTENT_1 deployment types is supported when accessed from supported instance types in supported AWS Regions. To learn more, see Encrypting Data in Transit.
See Also:
LustreDeploymentType
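The overload above takes the LustreDeploymentType enum rather than a raw string. A minimal sketch, assuming the enum constants mirror the values named in the description; the wrapper class is hypothetical.

```java
import com.amazonaws.services.fsx.model.CreateFileSystemLustreConfiguration;
import com.amazonaws.services.fsx.model.LustreDeploymentType;

class DeploymentTypeExample {
    static CreateFileSystemLustreConfiguration scratch2Config() {
        // SCRATCH_2 adds in-transit encryption and higher burst throughput than SCRATCH_1.
        return new CreateFileSystemLustreConfiguration()
                .withDeploymentType(LustreDeploymentType.SCRATCH_2);
    }
}
```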
public void setPerUnitStorageThroughput(Integer perUnitStorageThroughput)

Required for the PERSISTENT_1 deployment type, describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB. File system throughput capacity is calculated by multiplying file system storage capacity (TiB) by the PerUnitStorageThroughput (MB/s/TiB). For a 2.4 TiB file system, provisioning 50 MB/s/TiB of PerUnitStorageThroughput yields 117 MB/s of file system throughput. You pay for the amount of throughput that you provision.

Valid values are 50, 100, 200.

Parameters:
perUnitStorageThroughput - Required for the PERSISTENT_1 deployment type, describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB. File system throughput capacity is calculated by multiplying file system storage capacity (TiB) by the PerUnitStorageThroughput (MB/s/TiB). For a 2.4 TiB file system, provisioning 50 MB/s/TiB of PerUnitStorageThroughput yields 117 MB/s of file system throughput. You pay for the amount of throughput that you provision. Valid values are 50, 100, 200.
public Integer getPerUnitStorageThroughput()

Required for the PERSISTENT_1 deployment type, describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB. File system throughput capacity is calculated by multiplying file system storage capacity (TiB) by the PerUnitStorageThroughput (MB/s/TiB). For a 2.4 TiB file system, provisioning 50 MB/s/TiB of PerUnitStorageThroughput yields 117 MB/s of file system throughput. You pay for the amount of throughput that you provision.

Valid values are 50, 100, 200.

Returns:
Required for the PERSISTENT_1 deployment type, describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB. File system throughput capacity is calculated by multiplying file system storage capacity (TiB) by the PerUnitStorageThroughput (MB/s/TiB). For a 2.4 TiB file system, provisioning 50 MB/s/TiB of PerUnitStorageThroughput yields 117 MB/s of file system throughput. You pay for the amount of throughput that you provision. Valid values are 50, 100, 200.
public CreateFileSystemLustreConfiguration withPerUnitStorageThroughput(Integer perUnitStorageThroughput)

Required for the PERSISTENT_1 deployment type, describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB. File system throughput capacity is calculated by multiplying file system storage capacity (TiB) by the PerUnitStorageThroughput (MB/s/TiB). For a 2.4 TiB file system, provisioning 50 MB/s/TiB of PerUnitStorageThroughput yields 117 MB/s of file system throughput. You pay for the amount of throughput that you provision.

Valid values are 50, 100, 200.

Parameters:
perUnitStorageThroughput - Required for the PERSISTENT_1 deployment type, describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB. File system throughput capacity is calculated by multiplying file system storage capacity (TiB) by the PerUnitStorageThroughput (MB/s/TiB). For a 2.4 TiB file system, provisioning 50 MB/s/TiB of PerUnitStorageThroughput yields 117 MB/s of file system throughput. You pay for the amount of throughput that you provision. Valid values are 50, 100, 200.
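A sketch of a PERSISTENT_1 configuration that provisions 50 MB/s/TiB, one of the documented valid values; the deployment type is shown alongside it for context, and the wrapper class is hypothetical.

```java
import com.amazonaws.services.fsx.model.CreateFileSystemLustreConfiguration;
import com.amazonaws.services.fsx.model.LustreDeploymentType;

class PersistentThroughputExample {
    static CreateFileSystemLustreConfiguration persistentConfig() {
        return new CreateFileSystemLustreConfiguration()
                .withDeploymentType(LustreDeploymentType.PERSISTENT_1)
                // Documented valid values: 50, 100, 200 MB/s per TiB of storage.
                .withPerUnitStorageThroughput(50);
    }
}
```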
public String toString()

Returns a string representation of this object.

Overrides:
toString in class Object
See Also:
Object.toString()
public CreateFileSystemLustreConfiguration clone()
public void marshall(ProtocolMarshaller protocolMarshaller)

Description copied from interface: StructuredPojo
Marshalls this structured data using the given ProtocolMarshaller.

Specified by:
marshall in interface StructuredPojo
Parameters:
protocolMarshaller - Implementation of ProtocolMarshaller used to marshall this object's data.