@Generated(value="software.amazon.awssdk:codegen") @ThreadSafe public interface DynamoDbAsyncClient extends AwsClient
Service client for accessing DynamoDB asynchronously. This can be created using the static builder() method.
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the Amazon Web Services Management Console to monitor resource utilization and performance metrics.
DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an Amazon Web Services Region, providing built-in high availability and data durability.
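As a minimal sketch of constructing the client described above: the Region and the "my-profile" credentials profile are illustrative assumptions, and DynamoDbAsyncClient.create() can be used instead to resolve both from the default provider chains.
```java
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;

public class DynamoDbClientFactory {
    public static DynamoDbAsyncClient newClient() {
        // Explicit Region and credentials; DynamoDbAsyncClient.create() would instead
        // resolve both from the default provider chains.
        return DynamoDbAsyncClient.builder()
                .region(Region.US_EAST_1)
                .credentialsProvider(ProfileCredentialsProvider.create("my-profile"))
                .build();
    }
}
```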
Modifier and Type | Field and Description |
---|---|
static String |
SERVICE_METADATA_ID
Value for looking up the service's metadata from the ServiceMetadataProvider. |
static String |
SERVICE_NAME |
Modifier and Type | Method and Description |
---|---|
default CompletableFuture<BatchExecuteStatementResponse> |
batchExecuteStatement(BatchExecuteStatementRequest batchExecuteStatementRequest)
This operation allows you to perform batch reads or writes on data stored in DynamoDB, using PartiQL.
|
default CompletableFuture<BatchExecuteStatementResponse> |
batchExecuteStatement(Consumer<BatchExecuteStatementRequest.Builder> batchExecuteStatementRequest)
This operation allows you to perform batch reads or writes on data stored in DynamoDB, using PartiQL.
|
default CompletableFuture<BatchGetItemResponse> |
batchGetItem(BatchGetItemRequest batchGetItemRequest)
The BatchGetItem operation returns the attributes of one or more items from one or more tables. |
default CompletableFuture<BatchGetItemResponse> |
batchGetItem(Consumer<BatchGetItemRequest.Builder> batchGetItemRequest)
The BatchGetItem operation returns the attributes of one or more items from one or more tables. |
default BatchGetItemPublisher |
batchGetItemPaginator(BatchGetItemRequest batchGetItemRequest)
The BatchGetItem operation returns the attributes of one or more items from one or more tables. |
default BatchGetItemPublisher |
batchGetItemPaginator(Consumer<BatchGetItemRequest.Builder> batchGetItemRequest)
The BatchGetItem operation returns the attributes of one or more items from one or more tables. |
default CompletableFuture<BatchWriteItemResponse> |
batchWriteItem(BatchWriteItemRequest batchWriteItemRequest)
The BatchWriteItem operation puts or deletes multiple items in one or more tables. |
default CompletableFuture<BatchWriteItemResponse> |
batchWriteItem(Consumer<BatchWriteItemRequest.Builder> batchWriteItemRequest)
The BatchWriteItem operation puts or deletes multiple items in one or more tables. |
static DynamoDbAsyncClientBuilder |
builder()
Create a builder that can be used to configure and create a DynamoDbAsyncClient. |
static DynamoDbAsyncClient |
create()
Create a DynamoDbAsyncClient with the region loaded from the DefaultAwsRegionProviderChain and credentials loaded from the DefaultCredentialsProvider. |
default CompletableFuture<CreateBackupResponse> |
createBackup(Consumer<CreateBackupRequest.Builder> createBackupRequest)
Creates a backup for an existing table.
|
default CompletableFuture<CreateBackupResponse> |
createBackup(CreateBackupRequest createBackupRequest)
Creates a backup for an existing table.
|
default CompletableFuture<CreateGlobalTableResponse> |
createGlobalTable(Consumer<CreateGlobalTableRequest.Builder> createGlobalTableRequest)
Creates a global table from an existing table.
|
default CompletableFuture<CreateGlobalTableResponse> |
createGlobalTable(CreateGlobalTableRequest createGlobalTableRequest)
Creates a global table from an existing table.
|
default CompletableFuture<CreateTableResponse> |
createTable(Consumer<CreateTableRequest.Builder> createTableRequest)
The CreateTable operation adds a new table to your account. |
default CompletableFuture<CreateTableResponse> |
createTable(CreateTableRequest createTableRequest)
The CreateTable operation adds a new table to your account. |
default CompletableFuture<DeleteBackupResponse> |
deleteBackup(Consumer<DeleteBackupRequest.Builder> deleteBackupRequest)
Deletes an existing backup of a table.
|
default CompletableFuture<DeleteBackupResponse> |
deleteBackup(DeleteBackupRequest deleteBackupRequest)
Deletes an existing backup of a table.
|
default CompletableFuture<DeleteItemResponse> |
deleteItem(Consumer<DeleteItemRequest.Builder> deleteItemRequest)
Deletes a single item in a table by primary key.
|
default CompletableFuture<DeleteItemResponse> |
deleteItem(DeleteItemRequest deleteItemRequest)
Deletes a single item in a table by primary key.
|
default CompletableFuture<DeleteTableResponse> |
deleteTable(Consumer<DeleteTableRequest.Builder> deleteTableRequest)
The DeleteTable operation deletes a table and all of its items. |
default CompletableFuture<DeleteTableResponse> |
deleteTable(DeleteTableRequest deleteTableRequest)
The DeleteTable operation deletes a table and all of its items. |
default CompletableFuture<DescribeBackupResponse> |
describeBackup(Consumer<DescribeBackupRequest.Builder> describeBackupRequest)
Describes an existing backup of a table.
|
default CompletableFuture<DescribeBackupResponse> |
describeBackup(DescribeBackupRequest describeBackupRequest)
Describes an existing backup of a table.
|
default CompletableFuture<DescribeContinuousBackupsResponse> |
describeContinuousBackups(Consumer<DescribeContinuousBackupsRequest.Builder> describeContinuousBackupsRequest)
Checks the status of continuous backups and point in time recovery on the specified table.
|
default CompletableFuture<DescribeContinuousBackupsResponse> |
describeContinuousBackups(DescribeContinuousBackupsRequest describeContinuousBackupsRequest)
Checks the status of continuous backups and point in time recovery on the specified table.
|
default CompletableFuture<DescribeContributorInsightsResponse> |
describeContributorInsights(Consumer<DescribeContributorInsightsRequest.Builder> describeContributorInsightsRequest)
Returns information about contributor insights for a given table or global secondary index.
|
default CompletableFuture<DescribeContributorInsightsResponse> |
describeContributorInsights(DescribeContributorInsightsRequest describeContributorInsightsRequest)
Returns information about contributor insights for a given table or global secondary index.
|
default CompletableFuture<DescribeEndpointsResponse> |
describeEndpoints()
Returns the regional endpoint information.
|
default CompletableFuture<DescribeEndpointsResponse> |
describeEndpoints(Consumer<DescribeEndpointsRequest.Builder> describeEndpointsRequest)
Returns the regional endpoint information.
|
default CompletableFuture<DescribeEndpointsResponse> |
describeEndpoints(DescribeEndpointsRequest describeEndpointsRequest)
Returns the regional endpoint information.
|
default CompletableFuture<DescribeExportResponse> |
describeExport(Consumer<DescribeExportRequest.Builder> describeExportRequest)
Describes an existing table export.
|
default CompletableFuture<DescribeExportResponse> |
describeExport(DescribeExportRequest describeExportRequest)
Describes an existing table export.
|
default CompletableFuture<DescribeGlobalTableResponse> |
describeGlobalTable(Consumer<DescribeGlobalTableRequest.Builder> describeGlobalTableRequest)
Returns information about the specified global table.
|
default CompletableFuture<DescribeGlobalTableResponse> |
describeGlobalTable(DescribeGlobalTableRequest describeGlobalTableRequest)
Returns information about the specified global table.
|
default CompletableFuture<DescribeGlobalTableSettingsResponse> |
describeGlobalTableSettings(Consumer<DescribeGlobalTableSettingsRequest.Builder> describeGlobalTableSettingsRequest)
Describes Region-specific settings for a global table.
|
default CompletableFuture<DescribeGlobalTableSettingsResponse> |
describeGlobalTableSettings(DescribeGlobalTableSettingsRequest describeGlobalTableSettingsRequest)
Describes Region-specific settings for a global table.
|
default CompletableFuture<DescribeImportResponse> |
describeImport(Consumer<DescribeImportRequest.Builder> describeImportRequest)
Represents the properties of the import.
|
default CompletableFuture<DescribeImportResponse> |
describeImport(DescribeImportRequest describeImportRequest)
Represents the properties of the import.
|
default CompletableFuture<DescribeKinesisStreamingDestinationResponse> |
describeKinesisStreamingDestination(Consumer<DescribeKinesisStreamingDestinationRequest.Builder> describeKinesisStreamingDestinationRequest)
Returns information about the status of Kinesis streaming.
|
default CompletableFuture<DescribeKinesisStreamingDestinationResponse> |
describeKinesisStreamingDestination(DescribeKinesisStreamingDestinationRequest describeKinesisStreamingDestinationRequest)
Returns information about the status of Kinesis streaming.
|
default CompletableFuture<DescribeLimitsResponse> |
describeLimits()
Returns the current provisioned-capacity quotas for your Amazon Web Services account in a Region, both for the
Region as a whole and for any one DynamoDB table that you create there.
|
default CompletableFuture<DescribeLimitsResponse> |
describeLimits(Consumer<DescribeLimitsRequest.Builder> describeLimitsRequest)
Returns the current provisioned-capacity quotas for your Amazon Web Services account in a Region, both for the
Region as a whole and for any one DynamoDB table that you create there.
|
default CompletableFuture<DescribeLimitsResponse> |
describeLimits(DescribeLimitsRequest describeLimitsRequest)
Returns the current provisioned-capacity quotas for your Amazon Web Services account in a Region, both for the
Region as a whole and for any one DynamoDB table that you create there.
|
default CompletableFuture<DescribeTableResponse> |
describeTable(Consumer<DescribeTableRequest.Builder> describeTableRequest)
Returns information about the table, including the current status of the table, when it was created, the primary
key schema, and any indexes on the table.
|
default CompletableFuture<DescribeTableResponse> |
describeTable(DescribeTableRequest describeTableRequest)
Returns information about the table, including the current status of the table, when it was created, the primary
key schema, and any indexes on the table.
|
default CompletableFuture<DescribeTableReplicaAutoScalingResponse> |
describeTableReplicaAutoScaling(Consumer<DescribeTableReplicaAutoScalingRequest.Builder> describeTableReplicaAutoScalingRequest)
Describes auto scaling settings across replicas of the global table at once.
|
default CompletableFuture<DescribeTableReplicaAutoScalingResponse> |
describeTableReplicaAutoScaling(DescribeTableReplicaAutoScalingRequest describeTableReplicaAutoScalingRequest)
Describes auto scaling settings across replicas of the global table at once.
|
default CompletableFuture<DescribeTimeToLiveResponse> |
describeTimeToLive(Consumer<DescribeTimeToLiveRequest.Builder> describeTimeToLiveRequest)
Gives a description of the Time to Live (TTL) status on the specified table.
|
default CompletableFuture<DescribeTimeToLiveResponse> |
describeTimeToLive(DescribeTimeToLiveRequest describeTimeToLiveRequest)
Gives a description of the Time to Live (TTL) status on the specified table.
|
default CompletableFuture<DisableKinesisStreamingDestinationResponse> |
disableKinesisStreamingDestination(Consumer<DisableKinesisStreamingDestinationRequest.Builder> disableKinesisStreamingDestinationRequest)
Stops replication from the DynamoDB table to the Kinesis data stream.
|
default CompletableFuture<DisableKinesisStreamingDestinationResponse> |
disableKinesisStreamingDestination(DisableKinesisStreamingDestinationRequest disableKinesisStreamingDestinationRequest)
Stops replication from the DynamoDB table to the Kinesis data stream.
|
default CompletableFuture<EnableKinesisStreamingDestinationResponse> |
enableKinesisStreamingDestination(Consumer<EnableKinesisStreamingDestinationRequest.Builder> enableKinesisStreamingDestinationRequest)
Starts table data replication to the specified Kinesis data stream at a timestamp chosen during the enable
workflow.
|
default CompletableFuture<EnableKinesisStreamingDestinationResponse> |
enableKinesisStreamingDestination(EnableKinesisStreamingDestinationRequest enableKinesisStreamingDestinationRequest)
Starts table data replication to the specified Kinesis data stream at a timestamp chosen during the enable
workflow.
|
default CompletableFuture<ExecuteStatementResponse> |
executeStatement(Consumer<ExecuteStatementRequest.Builder> executeStatementRequest)
This operation allows you to perform reads and singleton writes on data stored in DynamoDB, using PartiQL.
|
default CompletableFuture<ExecuteStatementResponse> |
executeStatement(ExecuteStatementRequest executeStatementRequest)
This operation allows you to perform reads and singleton writes on data stored in DynamoDB, using PartiQL.
|
default CompletableFuture<ExecuteTransactionResponse> |
executeTransaction(Consumer<ExecuteTransactionRequest.Builder> executeTransactionRequest)
This operation allows you to perform transactional reads or writes on data stored in DynamoDB, using PartiQL.
|
default CompletableFuture<ExecuteTransactionResponse> |
executeTransaction(ExecuteTransactionRequest executeTransactionRequest)
This operation allows you to perform transactional reads or writes on data stored in DynamoDB, using PartiQL.
|
default CompletableFuture<ExportTableToPointInTimeResponse> |
exportTableToPointInTime(Consumer<ExportTableToPointInTimeRequest.Builder> exportTableToPointInTimeRequest)
Exports table data to an S3 bucket.
|
default CompletableFuture<ExportTableToPointInTimeResponse> |
exportTableToPointInTime(ExportTableToPointInTimeRequest exportTableToPointInTimeRequest)
Exports table data to an S3 bucket.
|
default CompletableFuture<GetItemResponse> |
getItem(Consumer<GetItemRequest.Builder> getItemRequest)
The GetItem operation returns a set of attributes for the item with the given primary key. |
default CompletableFuture<GetItemResponse> |
getItem(GetItemRequest getItemRequest)
The GetItem operation returns a set of attributes for the item with the given primary key. |
default CompletableFuture<ImportTableResponse> |
importTable(Consumer<ImportTableRequest.Builder> importTableRequest)
Imports table data from an S3 bucket.
|
default CompletableFuture<ImportTableResponse> |
importTable(ImportTableRequest importTableRequest)
Imports table data from an S3 bucket.
|
default CompletableFuture<ListBackupsResponse> |
listBackups()
List backups associated with an Amazon Web Services account.
|
default CompletableFuture<ListBackupsResponse> |
listBackups(Consumer<ListBackupsRequest.Builder> listBackupsRequest)
List backups associated with an Amazon Web Services account.
|
default CompletableFuture<ListBackupsResponse> |
listBackups(ListBackupsRequest listBackupsRequest)
List backups associated with an Amazon Web Services account.
|
default CompletableFuture<ListContributorInsightsResponse> |
listContributorInsights(Consumer<ListContributorInsightsRequest.Builder> listContributorInsightsRequest)
Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes.
|
default CompletableFuture<ListContributorInsightsResponse> |
listContributorInsights(ListContributorInsightsRequest listContributorInsightsRequest)
Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes.
|
default ListContributorInsightsPublisher |
listContributorInsightsPaginator(Consumer<ListContributorInsightsRequest.Builder> listContributorInsightsRequest)
Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes.
|
default ListContributorInsightsPublisher |
listContributorInsightsPaginator(ListContributorInsightsRequest listContributorInsightsRequest)
Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes.
|
default CompletableFuture<ListExportsResponse> |
listExports(Consumer<ListExportsRequest.Builder> listExportsRequest)
Lists completed exports within the past 90 days.
|
default CompletableFuture<ListExportsResponse> |
listExports(ListExportsRequest listExportsRequest)
Lists completed exports within the past 90 days.
|
default ListExportsPublisher |
listExportsPaginator(Consumer<ListExportsRequest.Builder> listExportsRequest)
Lists completed exports within the past 90 days.
|
default ListExportsPublisher |
listExportsPaginator(ListExportsRequest listExportsRequest)
Lists completed exports within the past 90 days.
|
default CompletableFuture<ListGlobalTablesResponse> |
listGlobalTables()
Lists all global tables that have a replica in the specified Region.
|
default CompletableFuture<ListGlobalTablesResponse> |
listGlobalTables(Consumer<ListGlobalTablesRequest.Builder> listGlobalTablesRequest)
Lists all global tables that have a replica in the specified Region.
|
default CompletableFuture<ListGlobalTablesResponse> |
listGlobalTables(ListGlobalTablesRequest listGlobalTablesRequest)
Lists all global tables that have a replica in the specified Region.
|
default CompletableFuture<ListImportsResponse> |
listImports(Consumer<ListImportsRequest.Builder> listImportsRequest)
Lists completed imports within the past 90 days.
|
default CompletableFuture<ListImportsResponse> |
listImports(ListImportsRequest listImportsRequest)
Lists completed imports within the past 90 days.
|
default ListImportsPublisher |
listImportsPaginator(Consumer<ListImportsRequest.Builder> listImportsRequest)
Lists completed imports within the past 90 days.
|
default ListImportsPublisher |
listImportsPaginator(ListImportsRequest listImportsRequest)
Lists completed imports within the past 90 days.
|
default CompletableFuture<ListTablesResponse> |
listTables()
Returns an array of table names associated with the current account and endpoint.
|
default CompletableFuture<ListTablesResponse> |
listTables(Consumer<ListTablesRequest.Builder> listTablesRequest)
Returns an array of table names associated with the current account and endpoint.
|
default CompletableFuture<ListTablesResponse> |
listTables(ListTablesRequest listTablesRequest)
Returns an array of table names associated with the current account and endpoint.
|
default ListTablesPublisher |
listTablesPaginator()
Returns an array of table names associated with the current account and endpoint.
|
default ListTablesPublisher |
listTablesPaginator(Consumer<ListTablesRequest.Builder> listTablesRequest)
Returns an array of table names associated with the current account and endpoint.
|
default ListTablesPublisher |
listTablesPaginator(ListTablesRequest listTablesRequest)
Returns an array of table names associated with the current account and endpoint.
|
default CompletableFuture<ListTagsOfResourceResponse> |
listTagsOfResource(Consumer<ListTagsOfResourceRequest.Builder> listTagsOfResourceRequest)
List all tags on an Amazon DynamoDB resource.
|
default CompletableFuture<ListTagsOfResourceResponse> |
listTagsOfResource(ListTagsOfResourceRequest listTagsOfResourceRequest)
List all tags on an Amazon DynamoDB resource.
|
default CompletableFuture<PutItemResponse> |
putItem(Consumer<PutItemRequest.Builder> putItemRequest)
Creates a new item, or replaces an old item with a new item.
|
default CompletableFuture<PutItemResponse> |
putItem(PutItemRequest putItemRequest)
Creates a new item, or replaces an old item with a new item.
|
default CompletableFuture<QueryResponse> |
query(Consumer<QueryRequest.Builder> queryRequest)
You must provide the name of the partition key attribute and a single value for that attribute.
|
default CompletableFuture<QueryResponse> |
query(QueryRequest queryRequest)
You must provide the name of the partition key attribute and a single value for that attribute.
|
default QueryPublisher |
queryPaginator(Consumer<QueryRequest.Builder> queryRequest)
You must provide the name of the partition key attribute and a single value for that attribute.
|
default QueryPublisher |
queryPaginator(QueryRequest queryRequest)
You must provide the name of the partition key attribute and a single value for that attribute.
|
default CompletableFuture<RestoreTableFromBackupResponse> |
restoreTableFromBackup(Consumer<RestoreTableFromBackupRequest.Builder> restoreTableFromBackupRequest)
Creates a new table from an existing backup.
|
default CompletableFuture<RestoreTableFromBackupResponse> |
restoreTableFromBackup(RestoreTableFromBackupRequest restoreTableFromBackupRequest)
Creates a new table from an existing backup.
|
default CompletableFuture<RestoreTableToPointInTimeResponse> |
restoreTableToPointInTime(Consumer<RestoreTableToPointInTimeRequest.Builder> restoreTableToPointInTimeRequest)
Restores the specified table to the specified point in time within EarliestRestorableDateTime and LatestRestorableDateTime. |
default CompletableFuture<RestoreTableToPointInTimeResponse> |
restoreTableToPointInTime(RestoreTableToPointInTimeRequest restoreTableToPointInTimeRequest)
Restores the specified table to the specified point in time within EarliestRestorableDateTime and LatestRestorableDateTime. |
default CompletableFuture<ScanResponse> |
scan(Consumer<ScanRequest.Builder> scanRequest)
The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index. |
default CompletableFuture<ScanResponse> |
scan(ScanRequest scanRequest)
The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index. |
default ScanPublisher |
scanPaginator(Consumer<ScanRequest.Builder> scanRequest)
The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index. |
default ScanPublisher |
scanPaginator(ScanRequest scanRequest)
The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index. |
default DynamoDbServiceClientConfiguration |
serviceClientConfiguration() |
default CompletableFuture<TagResourceResponse> |
tagResource(Consumer<TagResourceRequest.Builder> tagResourceRequest)
Associate a set of tags with an Amazon DynamoDB resource.
|
default CompletableFuture<TagResourceResponse> |
tagResource(TagResourceRequest tagResourceRequest)
Associate a set of tags with an Amazon DynamoDB resource.
|
default CompletableFuture<TransactGetItemsResponse> |
transactGetItems(Consumer<TransactGetItemsRequest.Builder> transactGetItemsRequest)
TransactGetItems is a synchronous operation that atomically retrieves multiple items from one or
more tables (but not from indexes) in a single account and Region. |
default CompletableFuture<TransactGetItemsResponse> |
transactGetItems(TransactGetItemsRequest transactGetItemsRequest)
TransactGetItems is a synchronous operation that atomically retrieves multiple items from one or
more tables (but not from indexes) in a single account and Region. |
default CompletableFuture<TransactWriteItemsResponse> |
transactWriteItems(Consumer<TransactWriteItemsRequest.Builder> transactWriteItemsRequest)
TransactWriteItems is a synchronous write operation that groups up to 100 action requests. |
default CompletableFuture<TransactWriteItemsResponse> |
transactWriteItems(TransactWriteItemsRequest transactWriteItemsRequest)
TransactWriteItems is a synchronous write operation that groups up to 100 action requests. |
default CompletableFuture<UntagResourceResponse> |
untagResource(Consumer<UntagResourceRequest.Builder> untagResourceRequest)
Removes the association of tags from an Amazon DynamoDB resource.
|
default CompletableFuture<UntagResourceResponse> |
untagResource(UntagResourceRequest untagResourceRequest)
Removes the association of tags from an Amazon DynamoDB resource.
|
default CompletableFuture<UpdateContinuousBackupsResponse> |
updateContinuousBackups(Consumer<UpdateContinuousBackupsRequest.Builder> updateContinuousBackupsRequest)
UpdateContinuousBackups enables or disables point in time recovery for the specified table. |
default CompletableFuture<UpdateContinuousBackupsResponse> |
updateContinuousBackups(UpdateContinuousBackupsRequest updateContinuousBackupsRequest)
UpdateContinuousBackups enables or disables point in time recovery for the specified table. |
default CompletableFuture<UpdateContributorInsightsResponse> |
updateContributorInsights(Consumer<UpdateContributorInsightsRequest.Builder> updateContributorInsightsRequest)
Updates the status for contributor insights for a specific table or index.
|
default CompletableFuture<UpdateContributorInsightsResponse> |
updateContributorInsights(UpdateContributorInsightsRequest updateContributorInsightsRequest)
Updates the status for contributor insights for a specific table or index.
|
default CompletableFuture<UpdateGlobalTableResponse> |
updateGlobalTable(Consumer<UpdateGlobalTableRequest.Builder> updateGlobalTableRequest)
Adds or removes replicas in the specified global table.
|
default CompletableFuture<UpdateGlobalTableResponse> |
updateGlobalTable(UpdateGlobalTableRequest updateGlobalTableRequest)
Adds or removes replicas in the specified global table.
|
default CompletableFuture<UpdateGlobalTableSettingsResponse> |
updateGlobalTableSettings(Consumer<UpdateGlobalTableSettingsRequest.Builder> updateGlobalTableSettingsRequest)
Updates settings for a global table.
|
default CompletableFuture<UpdateGlobalTableSettingsResponse> |
updateGlobalTableSettings(UpdateGlobalTableSettingsRequest updateGlobalTableSettingsRequest)
Updates settings for a global table.
|
default CompletableFuture<UpdateItemResponse> |
updateItem(Consumer<UpdateItemRequest.Builder> updateItemRequest)
Edits an existing item's attributes, or adds a new item to the table if it does not already exist.
|
default CompletableFuture<UpdateItemResponse> |
updateItem(UpdateItemRequest updateItemRequest)
Edits an existing item's attributes, or adds a new item to the table if it does not already exist.
|
default CompletableFuture<UpdateTableResponse> |
updateTable(Consumer<UpdateTableRequest.Builder> updateTableRequest)
Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a given
table.
|
default CompletableFuture<UpdateTableResponse> |
updateTable(UpdateTableRequest updateTableRequest)
Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a given
table.
|
default CompletableFuture<UpdateTableReplicaAutoScalingResponse> |
updateTableReplicaAutoScaling(Consumer<UpdateTableReplicaAutoScalingRequest.Builder> updateTableReplicaAutoScalingRequest)
Updates auto scaling settings on your global tables at once.
|
default CompletableFuture<UpdateTableReplicaAutoScalingResponse> |
updateTableReplicaAutoScaling(UpdateTableReplicaAutoScalingRequest updateTableReplicaAutoScalingRequest)
Updates auto scaling settings on your global tables at once.
|
default CompletableFuture<UpdateTimeToLiveResponse> |
updateTimeToLive(Consumer<UpdateTimeToLiveRequest.Builder> updateTimeToLiveRequest)
The UpdateTimeToLive method enables or disables Time to Live (TTL) for the specified table. |
default CompletableFuture<UpdateTimeToLiveResponse> |
updateTimeToLive(UpdateTimeToLiveRequest updateTimeToLiveRequest)
The UpdateTimeToLive method enables or disables Time to Live (TTL) for the specified table. |
default DynamoDbAsyncWaiter |
waiter()
Create an instance of DynamoDbAsyncWaiter using this client. |
Methods inherited from parent interfaces: serviceName, close

Field details:

static final String SERVICE_NAME

static final String SERVICE_METADATA_ID
Value for looking up the service's metadata from the ServiceMetadataProvider.

Method details:

default CompletableFuture<BatchExecuteStatementResponse> batchExecuteStatement(BatchExecuteStatementRequest batchExecuteStatementRequest)
This operation allows you to perform batch reads or writes on data stored in DynamoDB, using PartiQL. Each read statement in a BatchExecuteStatement must specify an equality condition on all key attributes. This enforces that each SELECT statement in a batch returns at most a single item.
The entire batch must consist of either read statements or write statements; you cannot mix both in one batch.
An HTTP 200 response does not mean that all statements in the BatchExecuteStatement succeeded. Error details for individual statements can be found under the Error field of the BatchStatementResponse for each statement.
Parameters: batchExecuteStatementRequest -

default CompletableFuture<BatchExecuteStatementResponse> batchExecuteStatement(Consumer<BatchExecuteStatementRequest.Builder> batchExecuteStatementRequest)
This operation allows you to perform batch reads or writes on data stored in DynamoDB, using PartiQL. Each read statement in a BatchExecuteStatement must specify an equality condition on all key attributes. This enforces that each SELECT statement in a batch returns at most a single item.
The entire batch must consist of either read statements or write statements; you cannot mix both in one batch.
An HTTP 200 response does not mean that all statements in the BatchExecuteStatement succeeded. Error details for individual statements can be found under the Error field of the BatchStatementResponse for each statement.
This is a convenience which creates an instance of the BatchExecuteStatementRequest.Builder avoiding the need to create one manually via BatchExecuteStatementRequest.builder().
Parameters: batchExecuteStatementRequest - A Consumer that will call methods on BatchExecuteStatementInput.Builder to create a request.
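As an illustrative sketch of the batch PartiQL call described above: the Music table, its Artist/SongTitle key, and the values are assumptions, not part of the API contract. The per-statement Error check matters because an HTTP 200 alone does not mean every statement succeeded.
```java
import java.util.concurrent.CompletableFuture;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.BatchExecuteStatementResponse;
import software.amazon.awssdk.services.dynamodb.model.BatchStatementRequest;

public class PartiQlBatchReadExample {
    public static CompletableFuture<BatchExecuteStatementResponse> readTwoSongs(DynamoDbAsyncClient client) {
        // Each SELECT supplies an equality condition on the full primary key (Artist, SongTitle).
        BatchStatementRequest first = BatchStatementRequest.builder()
                .statement("SELECT * FROM Music WHERE Artist = ? AND SongTitle = ?")
                .parameters(AttributeValue.fromS("Acme Band"), AttributeValue.fromS("Happy Day"))
                .build();
        BatchStatementRequest second = BatchStatementRequest.builder()
                .statement("SELECT * FROM Music WHERE Artist = ? AND SongTitle = ?")
                .parameters(AttributeValue.fromS("Acme Band"), AttributeValue.fromS("PartiQL Rocks"))
                .build();

        return client.batchExecuteStatement(b -> b.statements(first, second))
                .whenComplete((response, error) -> {
                    if (response != null) {
                        // An HTTP 200 does not guarantee success; inspect each statement's Error field.
                        response.responses().stream()
                                .filter(r -> r.error() != null)
                                .forEach(r -> System.err.println("Statement failed: " + r.error().message()));
                    }
                });
    }
}
```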
default CompletableFuture<BatchGetItemResponse> batchGetItem(BatchGetItemRequest batchGetItemRequest)
The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.
A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem returns a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, more than 1MB per partition is requested, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.
If you request more than 100 items, BatchGetItem returns a ValidationException with the message "Too many items requested for the BatchGetItem call."
For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate UnprocessedKeys value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one dataset.
If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchGetItem returns a ProvisionedThroughputExceededException. If at least one of the items is successfully processed, then BatchGetItem completes successfully, while returning the keys of the unread items in UnprocessedKeys.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.
In order to minimize response latency, BatchGetItem may retrieve items in parallel.
When designing your application, keep in mind that DynamoDB does not return items in any particular order. To help parse the response by item, include the primary key values for the items in your request in the ProjectionExpression parameter.
If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Working with Tables in the Amazon DynamoDB Developer Guide.
Parameters: batchGetItemRequest - Represents the input of a BatchGetItem operation.
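As a hedged sketch of the retry pattern described above: the Music table and its Artist/SongTitle key are assumptions, join() blocks for brevity, and the fixed 200 ms pause stands in for the exponential backoff recommended earlier.
```java
import java.util.List;
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.BatchGetItemResponse;
import software.amazon.awssdk.services.dynamodb.model.KeysAndAttributes;

public class BatchGetWithRetryExample {
    public static void readWithRetry(DynamoDbAsyncClient client) throws InterruptedException {
        Map<String, AttributeValue> key = Map.of(
                "Artist", AttributeValue.fromS("Acme Band"),
                "SongTitle", AttributeValue.fromS("Happy Day"));

        Map<String, KeysAndAttributes> pending = Map.of(
                "Music", KeysAndAttributes.builder().keys(key).consistentRead(true).build());

        while (!pending.isEmpty()) {
            final Map<String, KeysAndAttributes> batch = pending;
            BatchGetItemResponse response = client.batchGetItem(b -> b.requestItems(batch)).join();

            response.responses().getOrDefault("Music", List.of())
                    .forEach(item -> System.out.println(item));

            // Resubmit whatever DynamoDB could not process; real code should back off exponentially.
            pending = response.unprocessedKeys();
            if (!pending.isEmpty()) {
                Thread.sleep(200);
            }
        }
    }
}
```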
default CompletableFuture<BatchGetItemResponse> batchGetItem(Consumer<BatchGetItemRequest.Builder> batchGetItemRequest)
The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.
A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem returns a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, more than 1MB per partition is requested, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.
If you request more than 100 items, BatchGetItem returns a ValidationException with the message "Too many items requested for the BatchGetItem call."
For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate UnprocessedKeys value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one dataset.
If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchGetItem returns a ProvisionedThroughputExceededException. If at least one of the items is successfully processed, then BatchGetItem completes successfully, while returning the keys of the unread items in UnprocessedKeys.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.
In order to minimize response latency, BatchGetItem may retrieve items in parallel.
When designing your application, keep in mind that DynamoDB does not return items in any particular order. To help parse the response by item, include the primary key values for the items in your request in the ProjectionExpression parameter.
If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Working with Tables in the Amazon DynamoDB Developer Guide.
This is a convenience which creates an instance of the BatchGetItemRequest.Builder avoiding the need to create one manually via BatchGetItemRequest.builder().
Parameters: batchGetItemRequest - A Consumer that will call methods on BatchGetItemInput.Builder to create a request. Represents the input of a BatchGetItem operation.
default BatchGetItemPublisher batchGetItemPaginator(BatchGetItemRequest batchGetItemRequest)
The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.
A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem returns a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, more than 1MB per partition is requested, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.
If you request more than 100 items, BatchGetItem returns a ValidationException with the message "Too many items requested for the BatchGetItem call."
For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate UnprocessedKeys value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one dataset.
If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchGetItem returns a ProvisionedThroughputExceededException. If at least one of the items is successfully processed, then BatchGetItem completes successfully, while returning the keys of the unread items in UnprocessedKeys.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.
In order to minimize response latency, BatchGetItem may retrieve items in parallel.
When designing your application, keep in mind that DynamoDB does not return items in any particular order. To help parse the response by item, include the primary key values for the items in your request in the ProjectionExpression parameter.
If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Working with Tables in the Amazon DynamoDB Developer Guide.
This is a variant of the batchGetItem(software.amazon.awssdk.services.dynamodb.model.BatchGetItemRequest) operation. The return type is a custom publisher that can be subscribed to request a stream of response pages. The SDK will internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet and so there is no guarantee that the request is valid. If there are errors in your request, you will see the failures only after you start streaming the data. The subscribe method should be called as a request to start streaming data. For more info, see Publisher.subscribe(org.reactivestreams.Subscriber). Each call to the subscribe method will result in a new Subscription, i.e., a new contract to stream data from the starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
    software.amazon.awssdk.services.dynamodb.paginators.BatchGetItemPublisher publisher = client.batchGetItemPaginator(request);
    CompletableFuture<Void> future = publisher.subscribe(res -> { /* Do something with the response */ });
    future.get();
2) Using a custom subscriber
    software.amazon.awssdk.services.dynamodb.paginators.BatchGetItemPublisher publisher = client.batchGetItemPaginator(request);
    publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.BatchGetItemResponse>() {
        public void onSubscribe(org.reactivestreams.Subscription subscription) { subscription.request(Long.MAX_VALUE); }
        public void onNext(software.amazon.awssdk.services.dynamodb.model.BatchGetItemResponse response) { /* process the page */ }
        public void onError(Throwable t) { /* handle the failure */ }
        public void onComplete() { /* done streaming */ }
    });
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
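For example, with RxJava 2 on the classpath (an assumption; any Reactive Streams implementation can wrap the publisher the same way):
```java
io.reactivex.Flowable.fromPublisher(client.batchGetItemPaginator(request))
        .flatMapIterable(response -> response.responses().entrySet())
        .subscribe(entry -> System.out.println(entry.getKey() + ": " + entry.getValue().size() + " items"));
```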
Please notice that the configuration of null won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the batchGetItem(software.amazon.awssdk.services.dynamodb.model.BatchGetItemRequest) operation.
Parameters: batchGetItemRequest - Represents the input of a BatchGetItem operation.
default BatchGetItemPublisher batchGetItemPaginator(Consumer<BatchGetItemRequest.Builder> batchGetItemRequest)
The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.
A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem returns a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, more than 1MB per partition is requested, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.
If you request more than 100 items, BatchGetItem returns a ValidationException with the message "Too many items requested for the BatchGetItem call."
For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate UnprocessedKeys value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one dataset.
If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchGetItem returns a ProvisionedThroughputExceededException. If at least one of the items is successfully processed, then BatchGetItem completes successfully, while returning the keys of the unread items in UnprocessedKeys.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.
In order to minimize response latency, BatchGetItem may retrieve items in parallel.
When designing your application, keep in mind that DynamoDB does not return items in any particular order. To help parse the response by item, include the primary key values for the items in your request in the ProjectionExpression parameter.
If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Working with Tables in the Amazon DynamoDB Developer Guide.
This is a variant of the batchGetItem(software.amazon.awssdk.services.dynamodb.model.BatchGetItemRequest) operation. The return type is a custom publisher that can be subscribed to request a stream of response pages. The SDK will internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet and so there is no guarantee that the request is valid. If there are errors in your request, you will see the failures only after you start streaming the data. The subscribe method should be called as a request to start streaming data. For more info, see Publisher.subscribe(org.reactivestreams.Subscriber). Each call to the subscribe method will result in a new Subscription, i.e., a new contract to stream data from the starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
    software.amazon.awssdk.services.dynamodb.paginators.BatchGetItemPublisher publisher = client.batchGetItemPaginator(request);
    CompletableFuture<Void> future = publisher.subscribe(res -> { /* Do something with the response */ });
    future.get();
2) Using a custom subscriber
    software.amazon.awssdk.services.dynamodb.paginators.BatchGetItemPublisher publisher = client.batchGetItemPaginator(request);
    publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.BatchGetItemResponse>() {
        public void onSubscribe(org.reactivestreams.Subscription subscription) { subscription.request(Long.MAX_VALUE); }
        public void onNext(software.amazon.awssdk.services.dynamodb.model.BatchGetItemResponse response) { /* process the page */ }
        public void onError(Throwable t) { /* handle the failure */ }
        public void onComplete() { /* done streaming */ }
    });
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
Please notice that the configuration of null won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the batchGetItem(software.amazon.awssdk.services.dynamodb.model.BatchGetItemRequest) operation.
This is a convenience which creates an instance of the BatchGetItemRequest.Builder avoiding the need to create one manually via BatchGetItemRequest.builder().
Parameters: batchGetItemRequest - A Consumer that will call methods on BatchGetItemInput.Builder to create a request. Represents the input of a BatchGetItem operation.
default CompletableFuture<BatchWriteItemResponse> batchWriteItem(BatchWriteItemRequest batchWriteItemRequest)
The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can transmit up to 16 MB of data over the network, consisting of up to 25 item put or delete operations. While individual items can be up to 400 KB once stored, it's important to note that an item's representation might be greater than 400 KB while being sent in DynamoDB's JSON format for the API call. For more details on this distinction, see Naming Rules and Data Types.
BatchWriteItem cannot update items. If you perform a BatchWriteItem operation on an existing item, that item's values will be overwritten by the operation and it will appear like it was updated. To update items, we recommend you use the UpdateItem action.
The individual PutItem and DeleteItem operations specified in BatchWriteItem are atomic; however BatchWriteItem as a whole is not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the failed operations are returned in the UnprocessedItems response parameter. You can investigate and optionally resend the requests. Typically, you would call BatchWriteItem in a loop. Each iteration would check for unprocessed items and submit a new BatchWriteItem request with those unprocessed items until all items have been processed.
If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchWriteItem returns a ProvisionedThroughputExceededException.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
With BatchWriteItem, you can efficiently write or delete large amounts of data, such as from Amazon EMR, or copy data from another database into DynamoDB. In order to improve performance with these large-scale operations, BatchWriteItem does not behave in the same way as individual PutItem and DeleteItem calls would. For example, you cannot specify conditions on individual put and delete requests, and BatchWriteItem does not return deleted items in the response.
If you use a programming language that supports concurrency, you can use threads to write items in parallel. Your application must include the necessary logic to manage the threads. With languages that don't support threading, you must update or delete the specified items one at a time. In both situations, BatchWriteItem performs the specified put and delete operations in parallel, giving you the power of the thread pool approach without having to introduce complexity into your application.
Parallel processing reduces latency, but each specified put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.
If one or more of the following is true, DynamoDB rejects the entire batch write operation:
One or more tables specified in the BatchWriteItem request does not exist.
Primary key attributes specified on an item in the request do not match those in the corresponding table's primary key schema.
You try to perform multiple operations on the same item in the same BatchWriteItem request. For example, you cannot put and delete the same item in the same BatchWriteItem request.
Your request contains at least two items with identical hash and range keys (which essentially is two put operations).
There are more than 25 requests in the batch.
Any individual item in a batch exceeds 400 KB.
The total request size exceeds 16 MB.
Parameters: batchWriteItemRequest - Represents the input of a BatchWriteItem operation.
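As an illustrative sketch of the retry loop described above: the Music table, its key attributes, and the item values are assumptions, and the fixed 200 ms pause stands in for the exponential backoff recommended earlier.
```java
import java.util.List;
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.BatchWriteItemResponse;
import software.amazon.awssdk.services.dynamodb.model.PutRequest;
import software.amazon.awssdk.services.dynamodb.model.WriteRequest;

public class BatchWriteWithRetryExample {
    public static void writeWithRetry(DynamoDbAsyncClient client) throws InterruptedException {
        Map<String, List<WriteRequest>> pending = Map.of("Music", List.of(
                putSong("Acme Band", "Happy Day"),
                putSong("Acme Band", "PartiQL Rocks")));

        while (!pending.isEmpty()) {
            final Map<String, List<WriteRequest>> batch = pending;
            BatchWriteItemResponse response = client.batchWriteItem(b -> b.requestItems(batch)).join();

            // Resubmit anything reported back as unprocessed; real code should back off exponentially.
            pending = response.unprocessedItems();
            if (!pending.isEmpty()) {
                Thread.sleep(200);
            }
        }
    }

    private static WriteRequest putSong(String artist, String title) {
        return WriteRequest.builder()
                .putRequest(PutRequest.builder()
                        .item(Map.of(
                                "Artist", AttributeValue.fromS(artist),
                                "SongTitle", AttributeValue.fromS(title)))
                        .build())
                .build();
    }
}
```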
default CompletableFuture<BatchWriteItemResponse> batchWriteItem(Consumer<BatchWriteItemRequest.Builder> batchWriteItemRequest)
The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can transmit up to 16 MB of data over the network, consisting of up to 25 item put or delete operations. While individual items can be up to 400 KB once stored, it's important to note that an item's representation might be greater than 400 KB while being sent in DynamoDB's JSON format for the API call. For more details on this distinction, see Naming Rules and Data Types.
BatchWriteItem cannot update items. If you perform a BatchWriteItem operation on an existing item, that item's values will be overwritten by the operation and it will appear like it was updated. To update items, we recommend you use the UpdateItem action.
The individual PutItem and DeleteItem operations specified in BatchWriteItem are atomic; however BatchWriteItem as a whole is not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the failed operations are returned in the UnprocessedItems response parameter. You can investigate and optionally resend the requests. Typically, you would call BatchWriteItem in a loop. Each iteration would check for unprocessed items and submit a new BatchWriteItem request with those unprocessed items until all items have been processed.
If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchWriteItem returns a ProvisionedThroughputExceededException.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
With BatchWriteItem, you can efficiently write or delete large amounts of data, such as from Amazon EMR, or copy data from another database into DynamoDB. In order to improve performance with these large-scale operations, BatchWriteItem does not behave in the same way as individual PutItem and DeleteItem calls would. For example, you cannot specify conditions on individual put and delete requests, and BatchWriteItem does not return deleted items in the response.
If you use a programming language that supports concurrency, you can use threads to write items in parallel. Your application must include the necessary logic to manage the threads. With languages that don't support threading, you must update or delete the specified items one at a time. In both situations, BatchWriteItem performs the specified put and delete operations in parallel, giving you the power of the thread pool approach without having to introduce complexity into your application.
Parallel processing reduces latency, but each specified put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.
If one or more of the following is true, DynamoDB rejects the entire batch write operation:
One or more tables specified in the BatchWriteItem request does not exist.
Primary key attributes specified on an item in the request do not match those in the corresponding table's primary key schema.
You try to perform multiple operations on the same item in the same BatchWriteItem request. For example, you cannot put and delete the same item in the same BatchWriteItem request.
Your request contains at least two items with identical hash and range keys (which essentially is two put operations).
There are more than 25 requests in the batch.
Any individual item in a batch exceeds 400 KB.
The total request size exceeds 16 MB.
This is a convenience which creates an instance of the BatchWriteItemRequest.Builder avoiding the need to create one manually via BatchWriteItemRequest.builder().
Parameters: batchWriteItemRequest - A Consumer that will call methods on BatchWriteItemInput.Builder to create a request. Represents the input of a BatchWriteItem operation.
default CompletableFuture<CreateBackupResponse> createBackup(CreateBackupRequest createBackupRequest)
Creates a backup for an existing table.
Each time you create an on-demand backup, the entire table data is backed up. There is no limit to the number of on-demand backups that can be taken.
When you create an on-demand backup, a time marker of the request is cataloged, and the backup is created asynchronously, by applying all changes until the time of the request to the last full table snapshot. Backup requests are processed instantaneously and become available for restore within minutes.
You can call CreateBackup at a maximum rate of 50 times per second.
All backups in DynamoDB work without consuming any provisioned throughput on the table.
If you submit a backup request on 2018-12-14 at 14:25:00, the backup is guaranteed to contain all data committed to the table up to 14:24:00, and data committed after 14:26:00 will not be. The backup might contain data modifications made between 14:24:00 and 14:26:00. On-demand backup does not support causal consistency.
Along with data, the following are also included on the backups:
Global secondary indexes (GSIs)
Local secondary indexes (LSIs)
Streams
Provisioned read and write capacity
Parameters: createBackupRequest -
TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<CreateBackupResponse> createBackup(Consumer<CreateBackupRequest.Builder> createBackupRequest)
Creates a backup for an existing table.
Each time you create an on-demand backup, the entire table data is backed up. There is no limit to the number of on-demand backups that can be taken.
When you create an on-demand backup, a time marker of the request is cataloged, and the backup is created asynchronously, by applying all changes until the time of the request to the last full table snapshot. Backup requests are processed instantaneously and become available for restore within minutes.
You can call CreateBackup
at a maximum rate of 50 times per second.
All backups in DynamoDB work without consuming any provisioned throughput on the table.
If you submit a backup request on 2018-12-14 at 14:25:00, the backup is guaranteed to contain all data committed to the table up to 14:24:00, and data committed after 14:26:00 will not be. The backup might contain data modifications made between 14:24:00 and 14:26:00. On-demand backup does not support causal consistency.
Along with data, the following are also included on the backups:
Global secondary indexes (GSIs)
Local secondary indexes (LSIs)
Streams
Provisioned read and write capacity
This is a convenience which creates an instance of the CreateBackupRequest.Builder
avoiding the need to
create one manually via CreateBackupRequest.builder()
createBackupRequest
- A Consumer
that will call methods on CreateBackupInput.Builder
to create a request.
A source table with the name TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<CreateGlobalTableResponse> createGlobalTable(CreateGlobalTableRequest createGlobalTableRequest)
Creates a global table from an existing table. A global table creates a replication relationship between two or more DynamoDB tables with the same table name in the provided Regions.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. We recommend using Version 2019.11.21 (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Updating global tables.
If you want to add a new replica table to a global table, each of the following conditions must be true:
The table must have the same primary key as all of the other replicas.
The table must have the same name as all of the other replicas.
The table must have DynamoDB Streams enabled, with the stream containing both the new and the old images of the item.
None of the replica tables in the global table can contain any data.
If global secondary indexes are specified, then the following conditions must also be met:
The global secondary indexes must have the same name.
The global secondary indexes must have the same hash key and sort key (if present).
If local secondary indexes are specified, then the following conditions must also be met:
The local secondary indexes must have the same name.
The local secondary indexes must have the same hash key and sort key (if present).
Write capacity settings should be set consistently across your replica tables and secondary indexes. DynamoDB strongly recommends enabling auto scaling to manage the write capacity settings for all of your global tables replicas and indexes.
If you prefer to manage write capacity settings manually, you should provision equal replicated write capacity units to your replica tables. You should also provision equal replicated write capacity units to matching secondary indexes across your global table.
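A hedged sketch of the legacy (2017.11.29) call, reusing the dynamoDb client from the earlier sketch. It presumes an empty Music table with the required stream settings already exists in each listed Region; the table name and Region names are placeholders.

// Replication group for the hypothetical Music table across two Regions.
dynamoDb.createGlobalTable(r -> r
        .globalTableName("Music")
        .replicationGroup(
                Replica.builder().regionName("us-east-1").build(),
                Replica.builder().regionName("us-west-2").build()))
    .join();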
createGlobalTableRequest
-
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
A source table with the name TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
default CompletableFuture<CreateGlobalTableResponse> createGlobalTable(Consumer<CreateGlobalTableRequest.Builder> createGlobalTableRequest)
Creates a global table from an existing table. A global table creates a replication relationship between two or more DynamoDB tables with the same table name in the provided Regions.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. We recommend using Version 2019.11.21 (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Updating global tables.
If you want to add a new replica table to a global table, each of the following conditions must be true:
The table must have the same primary key as all of the other replicas.
The table must have the same name as all of the other replicas.
The table must have DynamoDB Streams enabled, with the stream containing both the new and the old images of the item.
None of the replica tables in the global table can contain any data.
If global secondary indexes are specified, then the following conditions must also be met:
The global secondary indexes must have the same name.
The global secondary indexes must have the same hash key and sort key (if present).
If local secondary indexes are specified, then the following conditions must also be met:
The local secondary indexes must have the same name.
The local secondary indexes must have the same hash key and sort key (if present).
Write capacity settings should be set consistently across your replica tables and secondary indexes. DynamoDB strongly recommends enabling auto scaling to manage the write capacity settings for all of your global tables replicas and indexes.
If you prefer to manage write capacity settings manually, you should provision equal replicated write capacity units to your replica tables. You should also provision equal replicated write capacity units to matching secondary indexes across your global table.
This is a convenience which creates an instance of the CreateGlobalTableRequest.Builder
avoiding the need
to create one manually via CreateGlobalTableRequest.builder()
createGlobalTableRequest
- A Consumer
that will call methods on CreateGlobalTableInput.Builder
to create a request.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
A source table with the name TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
default CompletableFuture<CreateTableResponse> createTable(CreateTableRequest createTableRequest)
The CreateTable
operation adds a new table to your account. In an Amazon Web Services account, table
names must be unique within each Region. That is, you can have two tables with the same name if you create the tables
in different Regions.
CreateTable
is an asynchronous operation. Upon receiving a CreateTable
request,
DynamoDB immediately returns a response with a TableStatus
of CREATING
. After the table
is created, DynamoDB sets the TableStatus
to ACTIVE
. You can perform read and write
operations only on an ACTIVE
table.
You can optionally define secondary indexes on the new table, as part of the CreateTable
operation.
If you want to create multiple tables with secondary indexes on them, you must create the tables sequentially.
Only one table with secondary indexes can be in the CREATING
state at any given time.
You can use the DescribeTable
action to check the table status.
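As an illustrative sketch (reusing the dynamoDb client from the earlier example), a table with a composite primary key and on-demand billing could be created like this; all names are placeholders:

CompletableFuture<CreateTableResponse> created = dynamoDb.createTable(r -> r
        .tableName("Music")
        .attributeDefinitions(
                AttributeDefinition.builder().attributeName("Artist")
                        .attributeType(ScalarAttributeType.S).build(),
                AttributeDefinition.builder().attributeName("SongTitle")
                        .attributeType(ScalarAttributeType.S).build())
        .keySchema(
                KeySchemaElement.builder().attributeName("Artist").keyType(KeyType.HASH).build(),
                KeySchemaElement.builder().attributeName("SongTitle").keyType(KeyType.RANGE).build())
        .billingMode(BillingMode.PAY_PER_REQUEST));

// The new table starts in the CREATING state; check DescribeTable until the
// status is ACTIVE before reading or writing.
created.thenAccept(resp -> System.out.println(resp.tableDescription().tableStatus()));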
createTableRequest - Represents the input of a CreateTable operation.
The operation conflicts with the resource's availability. For example, you attempted to recreate an existing table, or tried to delete a table currently in the CREATING state.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<CreateTableResponse> createTable(Consumer<CreateTableRequest.Builder> createTableRequest)
The CreateTable
operation adds a new table to your account. In an Amazon Web Services account, table
names must be unique within each Region. That is, you can have two tables with the same name if you create the tables
in different Regions.
CreateTable
is an asynchronous operation. Upon receiving a CreateTable
request,
DynamoDB immediately returns a response with a TableStatus
of CREATING
. After the table
is created, DynamoDB sets the TableStatus
to ACTIVE
. You can perform read and write
operations only on an ACTIVE
table.
You can optionally define secondary indexes on the new table, as part of the CreateTable
operation.
If you want to create multiple tables with secondary indexes on them, you must create the tables sequentially.
Only one table with secondary indexes can be in the CREATING
state at any given time.
You can use the DescribeTable
action to check the table status.
This is a convenience which creates an instance of the CreateTableRequest.Builder
avoiding the need to
create one manually via CreateTableRequest.builder()
createTableRequest
- A Consumer
that will call methods on CreateTableInput.Builder
to create a request. Represents the input of a CreateTable operation.
The operation conflicts with the resource's availability. For example, you attempted to recreate an existing table, or tried to delete a table currently in the CREATING state.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<DeleteBackupResponse> deleteBackup(DeleteBackupRequest deleteBackupRequest)
Deletes an existing backup of a table.
You can call DeleteBackup
at a maximum rate of 10 times per second.
deleteBackupRequest
-
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<DeleteBackupResponse> deleteBackup(Consumer<DeleteBackupRequest.Builder> deleteBackupRequest)
Deletes an existing backup of a table.
You can call DeleteBackup
at a maximum rate of 10 times per second.
This is a convenience which creates an instance of the DeleteBackupRequest.Builder
avoiding the need to
create one manually via DeleteBackupRequest.builder()
deleteBackupRequest
- A Consumer
that will call methods on DeleteBackupInput.Builder
to create a request.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<DeleteItemResponse> deleteItem(DeleteItemRequest deleteItemRequest)
Deletes a single item in a table by primary key. You can perform a conditional delete operation that deletes the item if it exists, or if it has an expected attribute value.
In addition to deleting an item, you can also return the item's attribute values in the same operation, using the
ReturnValues
parameter.
Unless you specify conditions, the DeleteItem
is an idempotent operation; running it multiple times
on the same item or attribute does not result in an error response.
Conditional deletes are useful for deleting items only if specific conditions are met. If those conditions are met, DynamoDB performs the delete. Otherwise, the item is not deleted.
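A conditional delete sketch (reusing the dynamoDb client from the earlier example); the table, key, and Rating attribute are placeholders:

CompletableFuture<DeleteItemResponse> deleted = dynamoDb.deleteItem(r -> r
        .tableName("Music")
        .key(Map.of(
                "Artist", AttributeValue.builder().s("Acme Band").build(),
                "SongTitle", AttributeValue.builder().s("Happy Day").build()))
        // Delete only if the stored Rating is below the supplied threshold.
        .conditionExpression("Rating < :minRating")
        .expressionAttributeValues(Map.of(":minRating", AttributeValue.builder().n("5").build()))
        .returnValues(ReturnValue.ALL_OLD));

deleted.thenAccept(resp -> System.out.println("Deleted item: " + resp.attributes()));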
deleteItemRequest - Represents the input of a DeleteItem operation.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<DeleteItemResponse> deleteItem(Consumer<DeleteItemRequest.Builder> deleteItemRequest)
Deletes a single item in a table by primary key. You can perform a conditional delete operation that deletes the item if it exists, or if it has an expected attribute value.
In addition to deleting an item, you can also return the item's attribute values in the same operation, using the
ReturnValues
parameter.
Unless you specify conditions, the DeleteItem
is an idempotent operation; running it multiple times
on the same item or attribute does not result in an error response.
Conditional deletes are useful for deleting items only if specific conditions are met. If those conditions are met, DynamoDB performs the delete. Otherwise, the item is not deleted.
This is a convenience which creates an instance of the DeleteItemRequest.Builder
avoiding the need to
create one manually via DeleteItemRequest.builder()
deleteItemRequest
- A Consumer
that will call methods on DeleteItemInput.Builder
to create a request. Represents the input of a DeleteItem operation.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<DeleteTableResponse> deleteTable(DeleteTableRequest deleteTableRequest)
The DeleteTable
operation deletes a table and all of its items. After a DeleteTable
request, the specified table is in the DELETING
state until DynamoDB completes the deletion. If the
table is in the ACTIVE
state, you can delete it. If a table is in CREATING
or
UPDATING
states, then DynamoDB returns a ResourceInUseException
. If the specified table
does not exist, DynamoDB returns a ResourceNotFoundException
. If the table is already in the
DELETING
state, no error is returned.
This operation only applies to Version 2019.11.21 (Current) of global tables.
DynamoDB might continue to accept data read and write operations, such as GetItem
and
PutItem
, on a table in the DELETING
state until the table deletion is complete.
When you delete a table, any indexes on that table are also deleted.
If you have DynamoDB Streams enabled on the table, then the corresponding stream on that table goes into the
DISABLED
state, and the stream is automatically deleted after 24 hours.
Use the DescribeTable
action to check the status of the table.
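A minimal sketch (again reusing the dynamoDb client); the table name is a placeholder:

// The call completes once DynamoDB accepts the request; the table then moves
// through the DELETING state until the deletion finishes.
dynamoDb.deleteTable(r -> r.tableName("Music"))
        .thenAccept(resp -> System.out.println(resp.tableDescription().tableStatus()))
        .join();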
deleteTableRequest - Represents the input of a DeleteTable operation.
The operation conflicts with the resource's availability. For example, you attempted to recreate an existing table, or tried to delete a table currently in the CREATING state.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<DeleteTableResponse> deleteTable(Consumer<DeleteTableRequest.Builder> deleteTableRequest)
The DeleteTable
operation deletes a table and all of its items. After a DeleteTable
request, the specified table is in the DELETING
state until DynamoDB completes the deletion. If the
table is in the ACTIVE
state, you can delete it. If a table is in CREATING
or
UPDATING
states, then DynamoDB returns a ResourceInUseException
. If the specified table
does not exist, DynamoDB returns a ResourceNotFoundException
. If the table is already in the
DELETING
state, no error is returned.
This operation only applies to Version 2019.11.21 (Current) of global tables.
DynamoDB might continue to accept data read and write operations, such as GetItem
and
PutItem
, on a table in the DELETING
state until the table deletion is complete.
When you delete a table, any indexes on that table are also deleted.
If you have DynamoDB Streams enabled on the table, then the corresponding stream on that table goes into the
DISABLED
state, and the stream is automatically deleted after 24 hours.
Use the DescribeTable
action to check the status of the table.
This is a convenience which creates an instance of the DeleteTableRequest.Builder
avoiding the need to
create one manually via DeleteTableRequest.builder()
deleteTableRequest
- A Consumer
that will call methods on DeleteTableInput.Builder
to create a request. Represents the input of a DeleteTable operation.
The operation conflicts with the resource's availability. For example, you attempted to recreate an existing table, or tried to delete a table currently in the CREATING state.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<DescribeBackupResponse> describeBackup(DescribeBackupRequest describeBackupRequest)
Describes an existing backup of a table.
You can call DescribeBackup
at a maximum rate of 10 times per second.
describeBackupRequest -
default CompletableFuture<DescribeBackupResponse> describeBackup(Consumer<DescribeBackupRequest.Builder> describeBackupRequest)
Describes an existing backup of a table.
You can call DescribeBackup
at a maximum rate of 10 times per second.
This is a convenience which creates an instance of the DescribeBackupRequest.Builder
avoiding the need to
create one manually via DescribeBackupRequest.builder()
describeBackupRequest
- A Consumer
that will call methods on DescribeBackupInput.Builder
to create a request.
default CompletableFuture<DescribeContinuousBackupsResponse> describeContinuousBackups(DescribeContinuousBackupsRequest describeContinuousBackupsRequest)
Checks the status of continuous backups and point in time recovery on the specified table. Continuous backups are
ENABLED
on all tables at table creation. If point in time recovery is enabled,
PointInTimeRecoveryStatus
will be set to ENABLED.
After continuous backups and point in time recovery are enabled, you can restore to any point in time within
EarliestRestorableDateTime
and LatestRestorableDateTime
.
LatestRestorableDateTime
is typically 5 minutes before the current time. You can restore your table
to any point in time during the last 35 days.
You can call DescribeContinuousBackups
at a maximum rate of 10 times per second.
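A short sketch that checks the point in time recovery status for a placeholder table, reusing the dynamoDb client from the earlier example:

dynamoDb.describeContinuousBackups(r -> r.tableName("Music"))
        .thenAccept(resp -> System.out.println(
                resp.continuousBackupsDescription()
                    .pointInTimeRecoveryDescription()
                    .pointInTimeRecoveryStatus()))
        .join();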
describeContinuousBackupsRequest -
A source table with the name TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
default CompletableFuture<DescribeContinuousBackupsResponse> describeContinuousBackups(Consumer<DescribeContinuousBackupsRequest.Builder> describeContinuousBackupsRequest)
Checks the status of continuous backups and point in time recovery on the specified table. Continuous backups are
ENABLED
on all tables at table creation. If point in time recovery is enabled,
PointInTimeRecoveryStatus
will be set to ENABLED.
After continuous backups and point in time recovery are enabled, you can restore to any point in time within
EarliestRestorableDateTime
and LatestRestorableDateTime
.
LatestRestorableDateTime
is typically 5 minutes before the current time. You can restore your table
to any point in time during the last 35 days.
You can call DescribeContinuousBackups
at a maximum rate of 10 times per second.
This is a convenience which creates an instance of the DescribeContinuousBackupsRequest.Builder
avoiding
the need to create one manually via DescribeContinuousBackupsRequest.builder()
describeContinuousBackupsRequest
- A Consumer
that will call methods on DescribeContinuousBackupsInput.Builder
to create a
request.
A source table with the name TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
default CompletableFuture<DescribeContributorInsightsResponse> describeContributorInsights(DescribeContributorInsightsRequest describeContributorInsightsRequest)
Returns information about contributor insights for a given table or global secondary index.
describeContributorInsightsRequest -
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<DescribeContributorInsightsResponse> describeContributorInsights(Consumer<DescribeContributorInsightsRequest.Builder> describeContributorInsightsRequest)
Returns information about contributor insights for a given table or global secondary index.
This is a convenience which creates an instance of the DescribeContributorInsightsRequest.Builder
avoiding the need to create one manually via DescribeContributorInsightsRequest.builder()
describeContributorInsightsRequest
- A Consumer
that will call methods on DescribeContributorInsightsInput.Builder
to create a
request.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<DescribeEndpointsResponse> describeEndpoints(DescribeEndpointsRequest describeEndpointsRequest)
Returns the regional endpoint information. This action must be included in your VPC endpoint policies, or access to the DescribeEndpoints API will be denied. For more information on policy permissions, please see Internetwork traffic privacy.
describeEndpointsRequest -
default CompletableFuture<DescribeEndpointsResponse> describeEndpoints(Consumer<DescribeEndpointsRequest.Builder> describeEndpointsRequest)
Returns the regional endpoint information. This action must be included in your VPC endpoint policies, or access to the DescribeEndpoints API will be denied. For more information on policy permissions, please see Internetwork traffic privacy.
This is a convenience which creates an instance of the DescribeEndpointsRequest.Builder
avoiding the need
to create one manually via DescribeEndpointsRequest.builder()
describeEndpointsRequest
- A Consumer
that will call methods on DescribeEndpointsRequest.Builder
to create a request.
default CompletableFuture<DescribeEndpointsResponse> describeEndpoints()
Returns the regional endpoint information. This action must be included in your VPC endpoint policies, or access to the DescribeEndpoints API will be denied. For more information on policy permissions, please see Internetwork traffic privacy.
default CompletableFuture<DescribeExportResponse> describeExport(DescribeExportRequest describeExportRequest)
Describes an existing table export.
describeExportRequest
-
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<DescribeExportResponse> describeExport(Consumer<DescribeExportRequest.Builder> describeExportRequest)
Describes an existing table export.
This is a convenience which creates an instance of the DescribeExportRequest.Builder
avoiding the need to
create one manually via DescribeExportRequest.builder()
describeExportRequest
- A Consumer
that will call methods on DescribeExportInput.Builder
to create a request.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<DescribeGlobalTableResponse> describeGlobalTable(DescribeGlobalTableRequest describeGlobalTableRequest)
Returns information about the specified global table.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. We recommend using Version 2019.11.21 (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Updating global tables.
describeGlobalTableRequest -
default CompletableFuture<DescribeGlobalTableResponse> describeGlobalTable(Consumer<DescribeGlobalTableRequest.Builder> describeGlobalTableRequest)
Returns information about the specified global table.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. We recommend using Version 2019.11.21 (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Updating global tables.
This is a convenience which creates an instance of the DescribeGlobalTableRequest.Builder
avoiding the
need to create one manually via DescribeGlobalTableRequest.builder()
describeGlobalTableRequest
- A Consumer
that will call methods on DescribeGlobalTableInput.Builder
to create a request.
default CompletableFuture<DescribeGlobalTableSettingsResponse> describeGlobalTableSettings(DescribeGlobalTableSettingsRequest describeGlobalTableSettingsRequest)
Describes Region-specific settings for a global table.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. We recommend using Version 2019.11.21 (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Updating global tables.
describeGlobalTableSettingsRequest -
default CompletableFuture<DescribeGlobalTableSettingsResponse> describeGlobalTableSettings(Consumer<DescribeGlobalTableSettingsRequest.Builder> describeGlobalTableSettingsRequest)
Describes Region-specific settings for a global table.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. We recommend using Version 2019.11.21 (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Updating global tables.
This is a convenience which creates an instance of the DescribeGlobalTableSettingsRequest.Builder
avoiding the need to create one manually via DescribeGlobalTableSettingsRequest.builder()
describeGlobalTableSettingsRequest
- A Consumer
that will call methods on DescribeGlobalTableSettingsInput.Builder
to create a
request.
default CompletableFuture<DescribeImportResponse> describeImport(DescribeImportRequest describeImportRequest)
Represents the properties of the import.
describeImportRequest -
default CompletableFuture<DescribeImportResponse> describeImport(Consumer<DescribeImportRequest.Builder> describeImportRequest)
Represents the properties of the import.
This is a convenience which creates an instance of the DescribeImportRequest.Builder
avoiding the need to
create one manually via DescribeImportRequest.builder()
describeImportRequest
- A Consumer
that will call methods on DescribeImportInput.Builder
to create a request.
default CompletableFuture<DescribeKinesisStreamingDestinationResponse> describeKinesisStreamingDestination(DescribeKinesisStreamingDestinationRequest describeKinesisStreamingDestinationRequest)
Returns information about the status of Kinesis streaming.
describeKinesisStreamingDestinationRequest -
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<DescribeKinesisStreamingDestinationResponse> describeKinesisStreamingDestination(Consumer<DescribeKinesisStreamingDestinationRequest.Builder> describeKinesisStreamingDestinationRequest)
Returns information about the status of Kinesis streaming.
This is a convenience which creates an instance of the DescribeKinesisStreamingDestinationRequest.Builder
avoiding the need to create one manually via DescribeKinesisStreamingDestinationRequest.builder()
describeKinesisStreamingDestinationRequest
- A Consumer
that will call methods on DescribeKinesisStreamingDestinationInput.Builder
to
create a request.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<DescribeLimitsResponse> describeLimits(DescribeLimitsRequest describeLimitsRequest)
Returns the current provisioned-capacity quotas for your Amazon Web Services account in a Region, both for the Region as a whole and for any one DynamoDB table that you create there.
When you establish an Amazon Web Services account, the account has initial quotas on the maximum read capacity units and write capacity units that you can provision across all of your DynamoDB tables in a given Region. Also, there are per-table quotas that apply when you create a table there. For more information, see Service, Account, and Table Quotas page in the Amazon DynamoDB Developer Guide.
Although you can increase these quotas by filing a case at Amazon Web Services Support Center, obtaining the
increase is not instantaneous. The DescribeLimits
action lets you write code to compare the capacity
you are currently using to those quotas imposed by your account so that you have enough time to apply for an
increase before you hit a quota.
For example, you could use one of the Amazon Web Services SDKs to do the following:
Call DescribeLimits
for a particular Region to obtain your current account quotas on provisioned
capacity there.
Create a variable to hold the aggregate read capacity units provisioned for all your tables in that Region, and one to hold the aggregate write capacity units. Zero them both.
Call ListTables
to obtain a list of all your DynamoDB tables.
For each table name listed by ListTables
, do the following:
Call DescribeTable
with the table name.
Use the data returned by DescribeTable
to add the read capacity units and write capacity units
provisioned for the table itself to your variables.
If the table has one or more global secondary indexes (GSIs), loop over these GSIs and add their provisioned capacity values to your variables as well.
Report the account quotas for that Region returned by DescribeLimits
, along with the total current
provisioned capacity levels you have calculated.
This will let you see whether you are getting close to your account-level quotas.
The per-table quotas apply only when you are creating a new table. They restrict the sum of the provisioned capacity of the new table itself and all its global secondary indexes.
For existing tables and their GSIs, DynamoDB doesn't let you increase provisioned capacity extremely rapidly, but the only quota that applies is that the aggregate provisioned capacity over all your tables and GSIs cannot exceed either of the per-account quotas.
DescribeLimits
should only be called periodically. You can expect throttling errors if you call it
more than once in a minute.
The DescribeLimits
Request element has no content.
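The steps above can be sketched roughly as follows, reusing the dynamoDb client from the earlier example. For brevity this reads only a single ListTables page and skips GSIs; a fuller version would follow LastEvaluatedTableName and add index capacity as described.

DescribeLimitsResponse limits = dynamoDb.describeLimits().join();

long readUnits = 0;
long writeUnits = 0;
for (String tableName : dynamoDb.listTables().join().tableNames()) {
    TableDescription table = dynamoDb.describeTable(r -> r.tableName(tableName)).join().table();
    ProvisionedThroughputDescription throughput = table.provisionedThroughput();
    if (throughput != null && throughput.readCapacityUnits() != null) {
        // On-demand tables report no (or zero) provisioned capacity.
        readUnits += throughput.readCapacityUnits();
        writeUnits += throughput.writeCapacityUnits();
    }
}

System.out.printf("Provisioned RCU %d of %d, WCU %d of %d%n",
        readUnits, limits.accountMaxReadCapacityUnits(),
        writeUnits, limits.accountMaxWriteCapacityUnits());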
describeLimitsRequest - Represents the input of a DescribeLimits operation. Has no content.
default CompletableFuture<DescribeLimitsResponse> describeLimits(Consumer<DescribeLimitsRequest.Builder> describeLimitsRequest)
Returns the current provisioned-capacity quotas for your Amazon Web Services account in a Region, both for the Region as a whole and for any one DynamoDB table that you create there.
When you establish an Amazon Web Services account, the account has initial quotas on the maximum read capacity units and write capacity units that you can provision across all of your DynamoDB tables in a given Region. Also, there are per-table quotas that apply when you create a table there. For more information, see Service, Account, and Table Quotas page in the Amazon DynamoDB Developer Guide.
Although you can increase these quotas by filing a case at Amazon Web Services Support Center, obtaining the
increase is not instantaneous. The DescribeLimits
action lets you write code to compare the capacity
you are currently using to those quotas imposed by your account so that you have enough time to apply for an
increase before you hit a quota.
For example, you could use one of the Amazon Web Services SDKs to do the following:
Call DescribeLimits
for a particular Region to obtain your current account quotas on provisioned
capacity there.
Create a variable to hold the aggregate read capacity units provisioned for all your tables in that Region, and one to hold the aggregate write capacity units. Zero them both.
Call ListTables
to obtain a list of all your DynamoDB tables.
For each table name listed by ListTables
, do the following:
Call DescribeTable
with the table name.
Use the data returned by DescribeTable
to add the read capacity units and write capacity units
provisioned for the table itself to your variables.
If the table has one or more global secondary indexes (GSIs), loop over these GSIs and add their provisioned capacity values to your variables as well.
Report the account quotas for that Region returned by DescribeLimits
, along with the total current
provisioned capacity levels you have calculated.
This will let you see whether you are getting close to your account-level quotas.
The per-table quotas apply only when you are creating a new table. They restrict the sum of the provisioned capacity of the new table itself and all its global secondary indexes.
For existing tables and their GSIs, DynamoDB doesn't let you increase provisioned capacity extremely rapidly, but the only quota that applies is that the aggregate provisioned capacity over all your tables and GSIs cannot exceed either of the per-account quotas.
DescribeLimits
should only be called periodically. You can expect throttling errors if you call it
more than once in a minute.
The DescribeLimits
Request element has no content.
This is a convenience which creates an instance of the DescribeLimitsRequest.Builder
avoiding the need to
create one manually via DescribeLimitsRequest.builder()
describeLimitsRequest
- A Consumer
that will call methods on DescribeLimitsInput.Builder
to create a request.
Represents the input of a DescribeLimits operation. Has no content.
default CompletableFuture<DescribeLimitsResponse> describeLimits()
Returns the current provisioned-capacity quotas for your Amazon Web Services account in a Region, both for the Region as a whole and for any one DynamoDB table that you create there.
When you establish an Amazon Web Services account, the account has initial quotas on the maximum read capacity units and write capacity units that you can provision across all of your DynamoDB tables in a given Region. Also, there are per-table quotas that apply when you create a table there. For more information, see Service, Account, and Table Quotas page in the Amazon DynamoDB Developer Guide.
Although you can increase these quotas by filing a case at Amazon Web Services Support Center, obtaining the
increase is not instantaneous. The DescribeLimits
action lets you write code to compare the capacity
you are currently using to those quotas imposed by your account so that you have enough time to apply for an
increase before you hit a quota.
For example, you could use one of the Amazon Web Services SDKs to do the following:
Call DescribeLimits
for a particular Region to obtain your current account quotas on provisioned
capacity there.
Create a variable to hold the aggregate read capacity units provisioned for all your tables in that Region, and one to hold the aggregate write capacity units. Zero them both.
Call ListTables
to obtain a list of all your DynamoDB tables.
For each table name listed by ListTables
, do the following:
Call DescribeTable
with the table name.
Use the data returned by DescribeTable
to add the read capacity units and write capacity units
provisioned for the table itself to your variables.
If the table has one or more global secondary indexes (GSIs), loop over these GSIs and add their provisioned capacity values to your variables as well.
Report the account quotas for that Region returned by DescribeLimits
, along with the total current
provisioned capacity levels you have calculated.
This will let you see whether you are getting close to your account-level quotas.
The per-table quotas apply only when you are creating a new table. They restrict the sum of the provisioned capacity of the new table itself and all its global secondary indexes.
For existing tables and their GSIs, DynamoDB doesn't let you increase provisioned capacity extremely rapidly, but the only quota that applies is that the aggregate provisioned capacity over all your tables and GSIs cannot exceed either of the per-account quotas.
DescribeLimits
should only be called periodically. You can expect throttling errors if you call it
more than once in a minute.
The DescribeLimits
Request element has no content.
default CompletableFuture<DescribeTableResponse> describeTable(DescribeTableRequest describeTableRequest)
Returns information about the table, including the current status of the table, when it was created, the primary key schema, and any indexes on the table.
This operation only applies to Version 2019.11.21 (Current) of global tables.
If you issue a DescribeTable
request immediately after a CreateTable
request, DynamoDB
might return a ResourceNotFoundException
. This is because DescribeTable
uses an
eventually consistent query, and the metadata for your table might not be available at that moment. Wait for a
few seconds, and then try the DescribeTable
request again.
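A one-call sketch (reusing the dynamoDb client); the table name is a placeholder:

// Fetch the current status of the hypothetical Music table. Expect ACTIVE for a
// usable table; retry after a short delay if this fails with
// ResourceNotFoundException immediately after CreateTable, as noted above.
TableStatus status = dynamoDb.describeTable(r -> r.tableName("Music"))
        .join()
        .table()
        .tableStatus();
System.out.println("Music table status: " + status);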
describeTableRequest - Represents the input of a DescribeTable operation.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<DescribeTableResponse> describeTable(Consumer<DescribeTableRequest.Builder> describeTableRequest)
Returns information about the table, including the current status of the table, when it was created, the primary key schema, and any indexes on the table.
This operation only applies to Version 2019.11.21 (Current) of global tables.
If you issue a DescribeTable
request immediately after a CreateTable
request, DynamoDB
might return a ResourceNotFoundException
. This is because DescribeTable
uses an
eventually consistent query, and the metadata for your table might not be available at that moment. Wait for a
few seconds, and then try the DescribeTable
request again.
This is a convenience which creates an instance of the DescribeTableRequest.Builder
avoiding the need to
create one manually via DescribeTableRequest.builder()
describeTableRequest
- A Consumer
that will call methods on DescribeTableInput.Builder
to create a request.
Represents the input of a DescribeTable operation.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<DescribeTableReplicaAutoScalingResponse> describeTableReplicaAutoScaling(DescribeTableReplicaAutoScalingRequest describeTableReplicaAutoScalingRequest)
Describes auto scaling settings across replicas of the global table at once.
This operation only applies to Version 2019.11.21 (Current) of global tables.
describeTableReplicaAutoScalingRequest -
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<DescribeTableReplicaAutoScalingResponse> describeTableReplicaAutoScaling(Consumer<DescribeTableReplicaAutoScalingRequest.Builder> describeTableReplicaAutoScalingRequest)
Describes auto scaling settings across replicas of the global table at once.
This operation only applies to Version 2019.11.21 (Current) of global tables.
This is a convenience which creates an instance of the DescribeTableReplicaAutoScalingRequest.Builder
avoiding the need to create one manually via DescribeTableReplicaAutoScalingRequest.builder()
describeTableReplicaAutoScalingRequest
- A Consumer
that will call methods on DescribeTableReplicaAutoScalingInput.Builder
to
create a request.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<DescribeTimeToLiveResponse> describeTimeToLive(DescribeTimeToLiveRequest describeTimeToLiveRequest)
Gives a description of the Time to Live (TTL) status on the specified table.
describeTimeToLiveRequest -
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<DescribeTimeToLiveResponse> describeTimeToLive(Consumer<DescribeTimeToLiveRequest.Builder> describeTimeToLiveRequest)
Gives a description of the Time to Live (TTL) status on the specified table.
This is a convenience which creates an instance of the DescribeTimeToLiveRequest.Builder
avoiding the
need to create one manually via DescribeTimeToLiveRequest.builder()
describeTimeToLiveRequest
- A Consumer
that will call methods on DescribeTimeToLiveInput.Builder
to create a request.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<DisableKinesisStreamingDestinationResponse> disableKinesisStreamingDestination(DisableKinesisStreamingDestinationRequest disableKinesisStreamingDestinationRequest)
Stops replication from the DynamoDB table to the Kinesis data stream. This is done without deleting either of the resources.
disableKinesisStreamingDestinationRequest
-
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
The operation conflicts with the resource's availability. For example, you attempted to recreate an existing table, or tried to delete a table currently in the CREATING state.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<DisableKinesisStreamingDestinationResponse> disableKinesisStreamingDestination(Consumer<DisableKinesisStreamingDestinationRequest.Builder> disableKinesisStreamingDestinationRequest)
Stops replication from the DynamoDB table to the Kinesis data stream. This is done without deleting either of the resources.
This is a convenience which creates an instance of the DisableKinesisStreamingDestinationRequest.Builder
avoiding the need to create one manually via DisableKinesisStreamingDestinationRequest.builder()
disableKinesisStreamingDestinationRequest
- A Consumer
that will call methods on KinesisStreamingDestinationInput.Builder
to create a
request.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
The operation conflicts with the resource's availability. For example, you attempted to recreate an existing table, or tried to delete a table currently in the CREATING state.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<EnableKinesisStreamingDestinationResponse> enableKinesisStreamingDestination(EnableKinesisStreamingDestinationRequest enableKinesisStreamingDestinationRequest)
Starts table data replication to the specified Kinesis data stream at a timestamp chosen during the enable workflow. If this operation doesn't return results immediately, use DescribeKinesisStreamingDestination to check if streaming to the Kinesis data stream is ACTIVE.
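A hedged sketch (reusing the dynamoDb client); the table name and Kinesis stream ARN are placeholders:

// Start streaming item-level changes from the hypothetical Music table to an
// existing Kinesis data stream, then print the reported destination status.
dynamoDb.enableKinesisStreamingDestination(r -> r
        .tableName("Music")
        .streamArn("arn:aws:kinesis:us-east-1:123456789012:stream/music-changes"))
    .thenAccept(resp -> System.out.println(resp.destinationStatus()))
    .join();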
enableKinesisStreamingDestinationRequest
-
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
The operation conflicts with the resource's availability. For example, you attempted to recreate an existing table, or tried to delete a table currently in the CREATING state.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<EnableKinesisStreamingDestinationResponse> enableKinesisStreamingDestination(Consumer<EnableKinesisStreamingDestinationRequest.Builder> enableKinesisStreamingDestinationRequest)
Starts table data replication to the specified Kinesis data stream at a timestamp chosen during the enable workflow. If this operation doesn't return results immediately, use DescribeKinesisStreamingDestination to check if streaming to the Kinesis data stream is ACTIVE.
This is a convenience which creates an instance of the EnableKinesisStreamingDestinationRequest.Builder
avoiding the need to create one manually via EnableKinesisStreamingDestinationRequest.builder()
enableKinesisStreamingDestinationRequest
- A Consumer
that will call methods on KinesisStreamingDestinationInput.Builder
to create a
request.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
The operation conflicts with the resource's availability. For example, you attempted to recreate an existing table, or tried to delete a table currently in the CREATING state.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<ExecuteStatementResponse> executeStatement(ExecuteStatementRequest executeStatementRequest)
This operation allows you to perform reads and singleton writes on data stored in DynamoDB, using PartiQL.
For PartiQL reads (SELECT
statement), if the total number of processed items exceeds the maximum
dataset size limit of 1 MB, the read stops and results are returned to the user as a
LastEvaluatedKey
value to continue the read in a subsequent operation. If the filter criteria in
WHERE
clause does not match any data, the read will return an empty result set.
A single SELECT
statement response can return up to the maximum number of items (if using the Limit
parameter) or a maximum of 1 MB of data (and then apply any filtering to the results using WHERE
clause). If LastEvaluatedKey
is present in the response, you need to paginate the result set. If
NextToken
is present, you need to paginate the result set and include NextToken
.
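An illustrative parameterized PartiQL read (reusing the dynamoDb client); the statement, table, and parameter values are placeholders:

ExecuteStatementResponse page = dynamoDb.executeStatement(r -> r
        .statement("SELECT * FROM Music WHERE Artist = ?")
        .parameters(AttributeValue.builder().s("Acme Band").build()))
    .join();

page.items().forEach(item -> System.out.println(item.get("SongTitle")));
if (page.nextToken() != null) {
    // More data remains: issue the same statement again with
    // .nextToken(page.nextToken()) to fetch the next page.
    System.out.println("Result set is paginated; nextToken = " + page.nextToken());
}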
executeStatementRequest -
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<ExecuteStatementResponse> executeStatement(Consumer<ExecuteStatementRequest.Builder> executeStatementRequest)
This operation allows you to perform reads and singleton writes on data stored in DynamoDB, using PartiQL.
For PartiQL reads (SELECT
statement), if the total number of processed items exceeds the maximum
dataset size limit of 1 MB, the read stops and results are returned to the user as a
LastEvaluatedKey
value to continue the read in a subsequent operation. If the filter criteria in
WHERE
clause does not match any data, the read will return an empty result set.
A single SELECT
statement response can return up to the maximum number of items (if using the Limit
parameter) or a maximum of 1 MB of data (and then apply any filtering to the results using WHERE
clause). If LastEvaluatedKey
is present in the response, you need to paginate the result set. If
NextToken
is present, you need to paginate the result set and include NextToken
.
This is a convenience which creates an instance of the ExecuteStatementRequest.Builder
avoiding the need
to create one manually via ExecuteStatementRequest.builder()
executeStatementRequest
- A Consumer
that will call methods on ExecuteStatementInput.Builder
to create a request.
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
default CompletableFuture<ExecuteTransactionResponse> executeTransaction(ExecuteTransactionRequest executeTransactionRequest)
This operation allows you to perform transactional reads or writes on data stored in DynamoDB, using PartiQL.
The entire transaction must consist of either read statements or write statements; you cannot mix both in one
transaction. The EXISTS function is an exception and can be used to check the condition of specific attributes of
the item in a similar manner to ConditionCheck in the TransactWriteItems API.
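An illustrative all-write PartiQL transaction (reusing the dynamoDb client); the tables, statements, and values are placeholders:

// Both statements are writes, so they may be combined in one transaction.
dynamoDb.executeTransaction(r -> r.transactStatements(
        ParameterizedStatement.builder()
                .statement("UPDATE Music SET Rating = ? WHERE Artist = ? AND SongTitle = ?")
                .parameters(AttributeValue.builder().n("5").build(),
                            AttributeValue.builder().s("Acme Band").build(),
                            AttributeValue.builder().s("Happy Day").build())
                .build(),
        ParameterizedStatement.builder()
                .statement("INSERT INTO RatingLog VALUE {'Artist' : ?, 'SongTitle' : ?}")
                .parameters(AttributeValue.builder().s("Acme Band").build(),
                            AttributeValue.builder().s("Happy Day").build())
                .build()))
    .join();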
executeTransactionRequest -
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
DynamoDB cancels a TransactWriteItems request under the following circumstances:
A condition in one of the condition expressions is not met.
A table in the TransactWriteItems request is in a different account or region.
More than one action in the TransactWriteItems operation targets the same item.
There is insufficient provisioned capacity for the transaction to be completed.
An item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
There is a user error, such as an invalid data format.
DynamoDB cancels a TransactGetItems request under the following circumstances:
There is an ongoing TransactGetItems operation that conflicts with a concurrent PutItem, UpdateItem, DeleteItem or TransactWriteItems request. In this case the TransactGetItems operation fails with a TransactionCanceledException.
A table in the TransactGetItems request is in a different account or region.
There is insufficient provisioned capacity for the transaction to be completed.
There is a user error, such as an invalid data format.
If using Java, DynamoDB lists the cancellation reasons on the CancellationReasons property. This property is not set for other languages. Transaction cancellation reasons are ordered in the order of requested items; if an item has no error, it will have a None code and a Null message.
Cancellation reason codes and possible error messages:
No Errors:
Code: None
Message: null
Conditional Check Failed:
Code: ConditionalCheckFailed
Message: The conditional request failed.
Item Collection Size Limit Exceeded:
Code: ItemCollectionSizeLimitExceeded
Message: Collection size exceeded.
Transaction Conflict:
Code: TransactionConflict
Message: Transaction is ongoing for the item.
Provisioned Throughput Exceeded:
Code: ProvisionedThroughputExceeded
Messages:
The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API.
This message is returned when provisioned throughput is exceeded on a provisioned DynamoDB table.
The level of configured provisioned throughput for one or more global secondary indexes of the table was exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes with the UpdateTable API.
This message is returned when provisioned throughput is exceeded on a provisioned GSI.
Throttling Error:
Code: ThrottlingError
Messages:
Throughput exceeds the current capacity of your table or index. DynamoDB is automatically scaling your table or index so please try again shortly. If exceptions persist, check if you have a hot key: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html.
This message is returned when writes get throttled on an On-Demand table as DynamoDB is automatically scaling the table.
Throughput exceeds the current capacity for one or more global secondary indexes. DynamoDB is automatically scaling your index so please try again shortly.
This message is returned when writes get throttled on an On-Demand GSI as DynamoDB is automatically scaling the GSI.
Validation Error:
Code: ValidationError
Messages:
One or more parameter values were invalid.
The update expression attempted to update the secondary index key beyond allowed size limits.
The update expression attempted to update the secondary index key to unsupported type.
An operand in the update expression has an incorrect data type.
Item size to update has exceeded the maximum allowed size.
Number overflow. Attempting to store a number with magnitude larger than supported range.
Type mismatch for attribute to update.
Nesting Levels have exceeded supported limits.
The document path provided in the update expression is invalid for update.
The provided expression refers to an attribute that does not exist in the item.
Recommended Settings
This is a general recommendation for handling the TransactionInProgressException. These settings help ensure that the client retries will trigger completion of the ongoing TransactWriteItems request.
Set clientExecutionTimeout to a value that allows at least one retry to be processed after 5 seconds have elapsed since the first attempt for the TransactWriteItems operation.
Set socketTimeout to a value a little lower than the requestTimeout setting.
requestTimeout should be set based on the time taken for the individual retries of a single HTTP request for your use case, but setting it to 1 second or higher should work well to reduce chances of retries and TransactionInProgressException errors.
Use exponential backoff when retrying and tune backoff if needed.
Assuming default retry policy, example timeout settings based on the guidelines above are as follows:
Example timeline:
0-1000 first attempt
1000-1500 first sleep/delay (default retry policy uses 500 ms as base delay for 4xx errors)
1500-2500 second attempt
2500-3500 second sleep/delay (500 * 2, exponential backoff)
3500-4500 third attempt
4500-6500 third sleep/delay (500 * 2^2)
6500-7500 fourth attempt (this can trigger inline recovery since 5 seconds have elapsed since the first attempt reached TC)
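The timeout names above (clientExecutionTimeout, socketTimeout, requestTimeout) come from the v1 client configuration. A rough, non-authoritative sketch for this v2 async client, assuming apiCallTimeout and apiCallAttemptTimeout are the intended equivalents and using the guideline values above:
DynamoDbAsyncClient client = DynamoDbAsyncClient.builder()
        .overrideConfiguration(ClientOverrideConfiguration.builder()
                .apiCallAttemptTimeout(Duration.ofSeconds(1))       // per-attempt budget (roughly requestTimeout)
                .apiCallTimeout(Duration.ofSeconds(8))              // total budget, leaves room for retries past 5 s
                .retryPolicy(RetryPolicy.builder()
                        .numRetries(4)
                        .backoffStrategy(FullJitterBackoffStrategy.builder()
                                .baseDelay(Duration.ofMillis(500))  // matches the 500 ms base delay above
                                .maxBackoffTime(Duration.ofSeconds(5))
                                .build())
                        .build())
                .build())
        .build();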
default CompletableFuture<ExecuteTransactionResponse> executeTransaction(Consumer<ExecuteTransactionRequest.Builder> executeTransactionRequest)
This operation allows you to perform transactional reads or writes on data stored in DynamoDB, using PartiQL.
The entire transaction must consist of either read statements or write statements; you cannot mix both in one transaction. The EXISTS function is an exception and can be used to check the condition of specific attributes of the item in a similar manner to ConditionCheck in the TransactWriteItems API.
This is a convenience which creates an instance of the ExecuteTransactionRequest.Builder, avoiding the need to create one manually via ExecuteTransactionRequest.builder().
executeTransactionRequest - A Consumer that will call methods on ExecuteTransactionInput.Builder to create a request.
Throws ResourceNotFoundException if the operation tried to access a nonexistent table or index, or the resource's status is not ACTIVE.
DynamoDB cancels a TransactWriteItems request under the following circumstances:
A condition in one of the condition expressions is not met.
A table in the TransactWriteItems request is in a different account or region.
More than one action in the TransactWriteItems operation targets the same item.
There is insufficient provisioned capacity for the transaction to be completed.
An item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
There is a user error, such as an invalid data format.
DynamoDB cancels a TransactGetItems request under the following circumstances:
There is an ongoing TransactGetItems operation that conflicts with a concurrent PutItem, UpdateItem, DeleteItem or TransactWriteItems request. In this case the TransactGetItems operation fails with a TransactionCanceledException.
A table in the TransactGetItems request is in a different account or region.
There is insufficient provisioned capacity for the transaction to be completed.
There is a user error, such as an invalid data format.
If using Java, DynamoDB lists the cancellation reasons on the CancellationReasons property. This property is not set for other languages. Transaction cancellation reasons are ordered in the order of requested items; if an item has no error, it will have a None code and a Null message.
Cancellation reason codes and possible error messages:
No Errors:
Code: None
Message: null
Conditional Check Failed:
Code: ConditionalCheckFailed
Message: The conditional request failed.
Item Collection Size Limit Exceeded:
Code: ItemCollectionSizeLimitExceeded
Message: Collection size exceeded.
Transaction Conflict:
Code: TransactionConflict
Message: Transaction is ongoing for the item.
Provisioned Throughput Exceeded:
Code: ProvisionedThroughputExceeded
Messages:
The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API.
This message is returned when provisioned throughput is exceeded on a provisioned DynamoDB table.
The level of configured provisioned throughput for one or more global secondary indexes of the table was exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes with the UpdateTable API.
This message is returned when provisioned throughput is exceeded on a provisioned GSI.
Throttling Error:
Code: ThrottlingError
Messages:
Throughput exceeds the current capacity of your table or index. DynamoDB is automatically scaling your table or index so please try again shortly. If exceptions persist, check if you have a hot key: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html.
This message is returned when writes get throttled on an On-Demand table as DynamoDB is automatically scaling the table.
Throughput exceeds the current capacity for one or more global secondary indexes. DynamoDB is automatically scaling your index so please try again shortly.
This message is returned when writes get throttled on an On-Demand GSI as DynamoDB is automatically scaling the GSI.
Validation Error:
Code: ValidationError
Messages:
One or more parameter values were invalid.
The update expression attempted to update the secondary index key beyond allowed size limits.
The update expression attempted to update the secondary index key to unsupported type.
An operand in the update expression has an incorrect data type.
Item size to update has exceeded the maximum allowed size.
Number overflow. Attempting to store a number with magnitude larger than supported range.
Type mismatch for attribute to update.
Nesting Levels have exceeded supported limits.
The document path provided in the update expression is invalid for update.
The provided expression refers to an attribute that does not exist in the item.
Recommended Settings
This is a general recommendation for handling the TransactionInProgressException. These settings help ensure that the client retries will trigger completion of the ongoing TransactWriteItems request.
Set clientExecutionTimeout to a value that allows at least one retry to be processed after 5 seconds have elapsed since the first attempt for the TransactWriteItems operation.
Set socketTimeout to a value a little lower than the requestTimeout setting.
requestTimeout should be set based on the time taken for the individual retries of a single HTTP request for your use case, but setting it to 1 second or higher should work well to reduce chances of retries and TransactionInProgressException errors.
Use exponential backoff when retrying and tune backoff if needed.
Assuming default retry policy, example timeout settings based on the guidelines above are as follows:
Example timeline:
0-1000 first attempt
1000-1500 first sleep/delay (default retry policy uses 500 ms as base delay for 4xx errors)
1500-2500 second attempt
2500-3500 second sleep/delay (500 * 2, exponential backoff)
3500-4500 third attempt
4500-6500 third sleep/delay (500 * 2^2)
6500-7500 fourth attempt (this can trigger inline recovery since 5 seconds have elapsed since the first attempt reached TC)
default CompletableFuture<ExportTableToPointInTimeResponse> exportTableToPointInTime(ExportTableToPointInTimeRequest exportTableToPointInTimeRequest)
Exports table data to an S3 bucket. The table must have point in time recovery enabled, and you can export data from any time within the point in time recovery window.
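For illustration, a sketch of requesting an export; the table ARN, bucket name, and prefix are hypothetical, and ExportTime must fall inside the table's point in time recovery window:
ExportTableToPointInTimeRequest request = ExportTableToPointInTimeRequest.builder()
        .tableArn("arn:aws:dynamodb:us-east-1:123456789012:table/Music")   // hypothetical ARN
        .s3Bucket("my-export-bucket")                                      // hypothetical bucket
        .s3Prefix("exports/music")
        .exportFormat(ExportFormat.DYNAMODB_JSON)
        .exportTime(Instant.now().minus(Duration.ofHours(1)))
        .build();
client.exportTableToPointInTime(request)
      .thenAccept(r -> System.out.println(r.exportDescription().exportArn()));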
exportTableToPointInTimeRequest -
Throws an exception if a table with the name TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
ExportTime is outside of the point in time recovery window.
default CompletableFuture<ExportTableToPointInTimeResponse> exportTableToPointInTime(Consumer<ExportTableToPointInTimeRequest.Builder> exportTableToPointInTimeRequest)
Exports table data to an S3 bucket. The table must have point in time recovery enabled, and you can export data from any time within the point in time recovery window.
This is a convenience which creates an instance of the ExportTableToPointInTimeRequest.Builder, avoiding the need to create one manually via ExportTableToPointInTimeRequest.builder().
exportTableToPointInTimeRequest - A Consumer that will call methods on ExportTableToPointInTimeInput.Builder to create a request.
Throws an exception if a table with the name TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
ExportTime is outside of the point in time recovery window.
default CompletableFuture<GetItemResponse> getItem(GetItemRequest getItemRequest)
The GetItem operation returns a set of attributes for the item with the given primary key. If there is no matching item, GetItem does not return any data and there will be no Item element in the response.
GetItem provides an eventually consistent read by default. If your application requires a strongly consistent read, set ConsistentRead to true. Although a strongly consistent read might take more time than an eventually consistent read, it always returns the last updated value.
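A minimal sketch of a strongly consistent GetItem; the Music table and its Artist/SongTitle key attributes are hypothetical:
GetItemRequest request = GetItemRequest.builder()
        .tableName("Music")
        .key(Map.of("Artist", AttributeValue.builder().s("Acme Band").build(),
                    "SongTitle", AttributeValue.builder().s("Happy Day").build()))
        .consistentRead(true)    // strongly consistent read
        .build();
client.getItem(request).thenAccept(response -> {
    if (response.hasItem()) {
        System.out.println(response.item());
    } else {
        System.out.println("No matching item");
    }
});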
getItemRequest - Represents the input of a GetItem operation.
Throws ResourceNotFoundException if the operation tried to access a nonexistent table or index, or the resource's status is not ACTIVE.
default CompletableFuture<GetItemResponse> getItem(Consumer<GetItemRequest.Builder> getItemRequest)
The GetItem operation returns a set of attributes for the item with the given primary key. If there is no matching item, GetItem does not return any data and there will be no Item element in the response.
GetItem provides an eventually consistent read by default. If your application requires a strongly consistent read, set ConsistentRead to true. Although a strongly consistent read might take more time than an eventually consistent read, it always returns the last updated value.
This is a convenience which creates an instance of the GetItemRequest.Builder, avoiding the need to create one manually via GetItemRequest.builder().
getItemRequest - A Consumer that will call methods on GetItemInput.Builder to create a request. Represents the input of a GetItem operation.
Throws ResourceNotFoundException if the operation tried to access a nonexistent table or index, or the resource's status is not ACTIVE.
default CompletableFuture<ImportTableResponse> importTable(ImportTableRequest importTableRequest)
Imports table data from an S3 bucket.
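An illustrative sketch of an import request; the bucket, prefix, table name, and key schema are hypothetical:
ImportTableRequest request = ImportTableRequest.builder()
        .s3BucketSource(S3BucketSource.builder().s3Bucket("my-import-bucket").s3KeyPrefix("imports/music/").build())
        .inputFormat(InputFormat.DYNAMODB_JSON)
        .tableCreationParameters(TableCreationParameters.builder()
                .tableName("Music")
                .attributeDefinitions(AttributeDefinition.builder()
                        .attributeName("Artist").attributeType(ScalarAttributeType.S).build())
                .keySchema(KeySchemaElement.builder().attributeName("Artist").keyType(KeyType.HASH).build())
                .billingMode(BillingMode.PAY_PER_REQUEST)
                .build())
        .build();
client.importTable(request)
      .thenAccept(r -> System.out.println(r.importTableDescription().importStatus()));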
importTableRequest -
Throws an exception if the operation conflicts with the resource's availability, for example a table that is currently in the CREATING state.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<ImportTableResponse> importTable(Consumer<ImportTableRequest.Builder> importTableRequest)
Imports table data from an S3 bucket.
This is a convenience which creates an instance of the ImportTableRequest.Builder, avoiding the need to create one manually via ImportTableRequest.builder().
importTableRequest - A Consumer that will call methods on ImportTableInput.Builder to create a request.
Throws an exception if the operation conflicts with the resource's availability, for example a table that is currently in the CREATING state.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<ListBackupsResponse> listBackups(ListBackupsRequest listBackupsRequest)
List backups associated with an Amazon Web Services account. To list backups for a given table, specify TableName. ListBackups returns a paginated list of results with at most 1 MB worth of items in a page. You can also specify a maximum number of entries to be returned in a page.
In the request, start time is inclusive, but end time is exclusive. Note that these boundaries are for the time at which the original backup was requested.
You can call ListBackups a maximum of five times per second.
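For illustration, a sketch that lists the last week of backups for a hypothetical Music table and shows how to continue past the 1 MB page boundary:
ListBackupsRequest request = ListBackupsRequest.builder()
        .tableName("Music")
        .timeRangeLowerBound(Instant.now().minus(Duration.ofDays(7)))   // inclusive
        .timeRangeUpperBound(Instant.now())                             // exclusive
        .limit(10)
        .build();
client.listBackups(request).thenAccept(response -> {
    response.backupSummaries().forEach(b -> System.out.println(b.backupName()));
    if (response.lastEvaluatedBackupArn() != null) {
        // pass this value as exclusiveStartBackupArn on the next request to fetch the next page
    }
});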
listBackupsRequest -
default CompletableFuture<ListBackupsResponse> listBackups(Consumer<ListBackupsRequest.Builder> listBackupsRequest)
List backups associated with an Amazon Web Services account. To list backups for a given table, specify TableName. ListBackups returns a paginated list of results with at most 1 MB worth of items in a page. You can also specify a maximum number of entries to be returned in a page.
In the request, start time is inclusive, but end time is exclusive. Note that these boundaries are for the time at which the original backup was requested.
You can call ListBackups a maximum of five times per second.
This is a convenience which creates an instance of the ListBackupsRequest.Builder, avoiding the need to create one manually via ListBackupsRequest.builder().
listBackupsRequest - A Consumer that will call methods on ListBackupsInput.Builder to create a request.
default CompletableFuture<ListBackupsResponse> listBackups()
List backups associated with an Amazon Web Services account. To list backups for a given table, specify TableName. ListBackups returns a paginated list of results with at most 1 MB worth of items in a page. You can also specify a maximum number of entries to be returned in a page.
In the request, start time is inclusive, but end time is exclusive. Note that these boundaries are for the time at which the original backup was requested.
You can call ListBackups a maximum of five times per second.
default CompletableFuture<ListContributorInsightsResponse> listContributorInsights(ListContributorInsightsRequest listContributorInsightsRequest)
Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes.
listContributorInsightsRequest -
Throws ResourceNotFoundException if the operation tried to access a nonexistent table or index, or the resource's status is not ACTIVE.
default CompletableFuture<ListContributorInsightsResponse> listContributorInsights(Consumer<ListContributorInsightsRequest.Builder> listContributorInsightsRequest)
Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes.
This is a convenience which creates an instance of the ListContributorInsightsRequest.Builder, avoiding the need to create one manually via ListContributorInsightsRequest.builder().
listContributorInsightsRequest - A Consumer that will call methods on ListContributorInsightsInput.Builder to create a request.
Throws ResourceNotFoundException if the operation tried to access a nonexistent table or index, or the resource's status is not ACTIVE.
default ListContributorInsightsPublisher listContributorInsightsPaginator(ListContributorInsightsRequest listContributorInsightsRequest)
Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes.
This is a variant of
listContributorInsights(software.amazon.awssdk.services.dynamodb.model.ListContributorInsightsRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages.
SDK will internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber)
. Each call to the subscribe
method will result in a new Subscription
i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
software.amazon.awssdk.services.dynamodb.paginators.ListContributorInsightsPublisher publisher = client.listContributorInsightsPaginator(request);
CompletableFuture<Void> future = publisher.subscribe(res -> { /* Do something with the response */ });
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.dynamodb.paginators.ListContributorInsightsPublisher publisher = client.listContributorInsightsPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.ListContributorInsightsResponse>() {
public void onSubscribe(org.reactivestreams.Subscription subscription) { /* ... */ }
public void onNext(software.amazon.awssdk.services.dynamodb.model.ListContributorInsightsResponse response) { /* ... */ }
public void onError(Throwable t) { /* ... */ }
public void onComplete() { /* ... */ }
});
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
Please notice that the configuration of MaxResults won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the
listContributorInsights(software.amazon.awssdk.services.dynamodb.model.ListContributorInsightsRequest)
operation.
listContributorInsightsRequest -
Throws ResourceNotFoundException if the operation tried to access a nonexistent table or index, or the resource's status is not ACTIVE.
default ListContributorInsightsPublisher listContributorInsightsPaginator(Consumer<ListContributorInsightsRequest.Builder> listContributorInsightsRequest)
Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes.
This is a variant of
listContributorInsights(software.amazon.awssdk.services.dynamodb.model.ListContributorInsightsRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages.
SDK will internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber)
. Each call to the subscribe
method will result in a new Subscription
i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
software.amazon.awssdk.services.dynamodb.paginators.ListContributorInsightsPublisher publisher = client.listContributorInsightsPaginator(request);
CompletableFuture<Void> future = publisher.subscribe(res -> { /* Do something with the response */ });
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.dynamodb.paginators.ListContributorInsightsPublisher publisher = client.listContributorInsightsPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.ListContributorInsightsResponse>() {
public void onSubscribe(org.reactivestreams.Subscription subscription) { /* ... */ }
public void onNext(software.amazon.awssdk.services.dynamodb.model.ListContributorInsightsResponse response) { /* ... */ }
public void onError(Throwable t) { /* ... */ }
public void onComplete() { /* ... */ }
});
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
Please notice that the configuration of MaxResults won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the
listContributorInsights(software.amazon.awssdk.services.dynamodb.model.ListContributorInsightsRequest)
operation.
This is a convenience which creates an instance of the ListContributorInsightsRequest.Builder, avoiding the need to create one manually via ListContributorInsightsRequest.builder().
listContributorInsightsRequest - A Consumer that will call methods on ListContributorInsightsInput.Builder to create a request.
Throws ResourceNotFoundException if the operation tried to access a nonexistent table or index, or the resource's status is not ACTIVE.
default CompletableFuture<ListExportsResponse> listExports(ListExportsRequest listExportsRequest)
Lists completed exports within the past 90 days.
listExportsRequest -
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<ListExportsResponse> listExports(Consumer<ListExportsRequest.Builder> listExportsRequest)
Lists completed exports within the past 90 days.
This is a convenience which creates an instance of the ListExportsRequest.Builder, avoiding the need to create one manually via ListExportsRequest.builder().
listExportsRequest - A Consumer that will call methods on ListExportsInput.Builder to create a request.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default ListExportsPublisher listExportsPaginator(ListExportsRequest listExportsRequest)
Lists completed exports within the past 90 days.
This is a variant of listExports(software.amazon.awssdk.services.dynamodb.model.ListExportsRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages.
SDK will internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber)
. Each call to the subscribe
method will result in a new Subscription
i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
software.amazon.awssdk.services.dynamodb.paginators.ListExportsPublisher publisher = client.listExportsPaginator(request);
CompletableFuture<Void> future = publisher.subscribe(res -> { /* Do something with the response */ });
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.dynamodb.paginators.ListExportsPublisher publisher = client.listExportsPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.ListExportsResponse>() {
public void onSubscribe(org.reactivestreams.Subscription subscription) { /* ... */ }
public void onNext(software.amazon.awssdk.services.dynamodb.model.ListExportsResponse response) { /* ... */ }
public void onError(Throwable t) { /* ... */ }
public void onComplete() { /* ... */ }
});
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
Please notice that the configuration of MaxResults won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the
listExports(software.amazon.awssdk.services.dynamodb.model.ListExportsRequest)
operation.
listExportsRequest -
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default ListExportsPublisher listExportsPaginator(Consumer<ListExportsRequest.Builder> listExportsRequest)
Lists completed exports within the past 90 days.
This is a variant of listExports(software.amazon.awssdk.services.dynamodb.model.ListExportsRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages.
SDK will internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber)
. Each call to the subscribe
method will result in a new Subscription
i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
software.amazon.awssdk.services.dynamodb.paginators.ListExportsPublisher publisher = client.listExportsPaginator(request);
CompletableFuture<Void> future = publisher.subscribe(res -> { /* Do something with the response */ });
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.dynamodb.paginators.ListExportsPublisher publisher = client.listExportsPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.ListExportsResponse>() {
public void onSubscribe(org.reactivestreams.Subscription subscription) { /* ... */ }
public void onNext(software.amazon.awssdk.services.dynamodb.model.ListExportsResponse response) { /* ... */ }
public void onError(Throwable t) { /* ... */ }
public void onComplete() { /* ... */ }
});
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
Please notice that the configuration of MaxResults won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the
listExports(software.amazon.awssdk.services.dynamodb.model.ListExportsRequest)
operation.
This is a convenience which creates an instance of the ListExportsRequest.Builder, avoiding the need to create one manually via ListExportsRequest.builder().
listExportsRequest - A Consumer that will call methods on ListExportsInput.Builder to create a request.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<ListGlobalTablesResponse> listGlobalTables(ListGlobalTablesRequest listGlobalTablesRequest)
Lists all global tables that have a replica in the specified Region.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. We recommend using Version 2019.11.21 (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Updating global tables.
listGlobalTablesRequest -
default CompletableFuture<ListGlobalTablesResponse> listGlobalTables(Consumer<ListGlobalTablesRequest.Builder> listGlobalTablesRequest)
Lists all global tables that have a replica in the specified Region.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. We recommend using Version 2019.11.21 (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Updating global tables.
This is a convenience which creates an instance of the ListGlobalTablesRequest.Builder, avoiding the need to create one manually via ListGlobalTablesRequest.builder().
listGlobalTablesRequest - A Consumer that will call methods on ListGlobalTablesInput.Builder to create a request.
default CompletableFuture<ListGlobalTablesResponse> listGlobalTables()
Lists all global tables that have a replica in the specified Region.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. We recommend using Version 2019.11.21 (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Updating global tables.
default CompletableFuture<ListImportsResponse> listImports(ListImportsRequest listImportsRequest)
Lists completed imports within the past 90 days.
listImportsRequest -
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<ListImportsResponse> listImports(Consumer<ListImportsRequest.Builder> listImportsRequest)
Lists completed imports within the past 90 days.
This is a convenience which creates an instance of the ListImportsRequest.Builder, avoiding the need to create one manually via ListImportsRequest.builder().
listImportsRequest - A Consumer that will call methods on ListImportsInput.Builder to create a request.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default ListImportsPublisher listImportsPaginator(ListImportsRequest listImportsRequest)
Lists completed imports within the past 90 days.
This is a variant of listImports(software.amazon.awssdk.services.dynamodb.model.ListImportsRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages.
SDK will internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber)
. Each call to the subscribe
method will result in a new Subscription
i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
software.amazon.awssdk.services.dynamodb.paginators.ListImportsPublisher publisher = client.listImportsPaginator(request);
CompletableFuture<Void> future = publisher.subscribe(res -> { /* Do something with the response */ });
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.dynamodb.paginators.ListImportsPublisher publisher = client.listImportsPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.ListImportsResponse>() {
public void onSubscribe(org.reactivestreams.Subscription subscription) { /* ... */ }
public void onNext(software.amazon.awssdk.services.dynamodb.model.ListImportsResponse response) { /* ... */ }
public void onError(Throwable t) { /* ... */ }
public void onComplete() { /* ... */ }
});
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
Please notice that the configuration of PageSize won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the
listImports(software.amazon.awssdk.services.dynamodb.model.ListImportsRequest)
operation.
listImportsRequest -
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default ListImportsPublisher listImportsPaginator(Consumer<ListImportsRequest.Builder> listImportsRequest)
Lists completed imports within the past 90 days.
This is a variant of listImports(software.amazon.awssdk.services.dynamodb.model.ListImportsRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages.
SDK will internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber)
. Each call to the subscribe
method will result in a new Subscription
i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
software.amazon.awssdk.services.dynamodb.paginators.ListImportsPublisher publisher = client.listImportsPaginator(request);
CompletableFuture<Void> future = publisher.subscribe(res -> { /* Do something with the response */ });
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.dynamodb.paginators.ListImportsPublisher publisher = client.listImportsPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.ListImportsResponse>() {
public void onSubscribe(org.reactivestreams.Subscription subscription) { /* ... */ }
public void onNext(software.amazon.awssdk.services.dynamodb.model.ListImportsResponse response) { /* ... */ }
public void onError(Throwable t) { /* ... */ }
public void onComplete() { /* ... */ }
});
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
Please notice that the configuration of PageSize won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the
listImports(software.amazon.awssdk.services.dynamodb.model.ListImportsRequest)
operation.
This is a convenience which creates an instance of the ListImportsRequest.Builder, avoiding the need to create one manually via ListImportsRequest.builder().
listImportsRequest - A Consumer that will call methods on ListImportsInput.Builder to create a request.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<ListTablesResponse> listTables(ListTablesRequest listTablesRequest)
Returns an array of table names associated with the current account and endpoint. The output from ListTables is paginated, with each page returning a maximum of 100 table names.
listTablesRequest - Represents the input of a ListTables operation.
default CompletableFuture<ListTablesResponse> listTables(Consumer<ListTablesRequest.Builder> listTablesRequest)
Returns an array of table names associated with the current account and endpoint. The output from ListTables is paginated, with each page returning a maximum of 100 table names.
This is a convenience which creates an instance of the ListTablesRequest.Builder, avoiding the need to create one manually via ListTablesRequest.builder().
listTablesRequest - A Consumer that will call methods on ListTablesInput.Builder to create a request. Represents the input of a ListTables operation.
default CompletableFuture<ListTablesResponse> listTables()
Returns an array of table names associated with the current account and endpoint. The output from ListTables is paginated, with each page returning a maximum of 100 table names.
default ListTablesPublisher listTablesPaginator()
Returns an array of table names associated with the current account and endpoint. The output from ListTables is paginated, with each page returning a maximum of 100 table names.
This is a variant of listTables(software.amazon.awssdk.services.dynamodb.model.ListTablesRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages.
SDK will internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber)
. Each call to the subscribe
method will result in a new Subscription
i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
software.amazon.awssdk.services.dynamodb.paginators.ListTablesPublisher publisher = client.listTablesPaginator();
CompletableFuture<Void> future = publisher.subscribe(res -> { /* Do something with the response */ });
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.dynamodb.paginators.ListTablesPublisher publisher = client.listTablesPaginator();
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.ListTablesResponse>() {
public void onSubscribe(org.reactivestreams.Subscription subscription) { /* ... */ }
public void onNext(software.amazon.awssdk.services.dynamodb.model.ListTablesResponse response) { /* ... */ }
public void onError(Throwable t) { /* ... */ }
public void onComplete() { /* ... */ }
});
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
Please notice that the configuration of Limit won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the
listTables(software.amazon.awssdk.services.dynamodb.model.ListTablesRequest)
operation.
default ListTablesPublisher listTablesPaginator(ListTablesRequest listTablesRequest)
Returns an array of table names associated with the current account and endpoint. The output from ListTables is paginated, with each page returning a maximum of 100 table names.
This is a variant of listTables(software.amazon.awssdk.services.dynamodb.model.ListTablesRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages.
SDK will internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber)
. Each call to the subscribe
method will result in a new Subscription
i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
software.amazon.awssdk.services.dynamodb.paginators.ListTablesPublisher publisher = client.listTablesPaginator(request);
CompletableFuture<Void> future = publisher.subscribe(res -> { /* Do something with the response */ });
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.dynamodb.paginators.ListTablesPublisher publisher = client.listTablesPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.ListTablesResponse>() {
public void onSubscribe(org.reactivestreams.Subscription subscription) { /* ... */ }
public void onNext(software.amazon.awssdk.services.dynamodb.model.ListTablesResponse response) { /* ... */ }
public void onError(Throwable t) { /* ... */ }
public void onComplete() { /* ... */ }
});
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
Please notice that the configuration of Limit won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the
listTables(software.amazon.awssdk.services.dynamodb.model.ListTablesRequest)
operation.
listTablesRequest - Represents the input of a ListTables operation.
default ListTablesPublisher listTablesPaginator(Consumer<ListTablesRequest.Builder> listTablesRequest)
Returns an array of table names associated with the current account and endpoint. The output from ListTables is paginated, with each page returning a maximum of 100 table names.
This is a variant of listTables(software.amazon.awssdk.services.dynamodb.model.ListTablesRequest)
operation. The return type is a custom publisher that can be subscribed to request a stream of response pages.
SDK will internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber)
. Each call to the subscribe
method will result in a new Subscription
i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
software.amazon.awssdk.services.dynamodb.paginators.ListTablesPublisher publisher = client.listTablesPaginator(request);
CompletableFuture<Void> future = publisher.subscribe(res -> { /* Do something with the response */ });
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.dynamodb.paginators.ListTablesPublisher publisher = client.listTablesPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.ListTablesResponse>() {
public void onSubscribe(org.reactivestreams.Subscription subscription) { /* ... */ }
public void onNext(software.amazon.awssdk.services.dynamodb.model.ListTablesResponse response) { /* ... */ }
public void onError(Throwable t) { /* ... */ }
public void onComplete() { /* ... */ }
});
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
Please notice that the configuration of Limit won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the
listTables(software.amazon.awssdk.services.dynamodb.model.ListTablesRequest)
operation.
This is a convenience which creates an instance of the ListTablesRequest.Builder, avoiding the need to create one manually via ListTablesRequest.builder().
listTablesRequest - A Consumer that will call methods on ListTablesInput.Builder to create a request. Represents the input of a ListTables operation.
default CompletableFuture<ListTagsOfResourceResponse> listTagsOfResource(ListTagsOfResourceRequest listTagsOfResourceRequest)
List all tags on an Amazon DynamoDB resource. You can call ListTagsOfResource up to 10 times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.
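A minimal sketch; the resource ARN is hypothetical:
ListTagsOfResourceRequest request = ListTagsOfResourceRequest.builder()
        .resourceArn("arn:aws:dynamodb:us-east-1:123456789012:table/Music")   // hypothetical ARN
        .build();
client.listTagsOfResource(request)
      .thenAccept(r -> r.tags().forEach(tag -> System.out.println(tag.key() + "=" + tag.value())));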
listTagsOfResourceRequest -
Throws ResourceNotFoundException if the operation tried to access a nonexistent table or index, or the resource's status is not ACTIVE.
default CompletableFuture<ListTagsOfResourceResponse> listTagsOfResource(Consumer<ListTagsOfResourceRequest.Builder> listTagsOfResourceRequest)
List all tags on an Amazon DynamoDB resource. You can call ListTagsOfResource up to 10 times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.
This is a convenience which creates an instance of the ListTagsOfResourceRequest.Builder, avoiding the need to create one manually via ListTagsOfResourceRequest.builder().
listTagsOfResourceRequest - A Consumer that will call methods on ListTagsOfResourceInput.Builder to create a request.
Throws ResourceNotFoundException if the operation tried to access a nonexistent table or index, or the resource's status is not ACTIVE.
default CompletableFuture<PutItemResponse> putItem(PutItemRequest putItemRequest)
Creates a new item, or replaces an old item with a new item. If an item that has the same primary key as the new
item already exists in the specified table, the new item completely replaces the existing item. You can perform a
conditional put operation (add a new item if one with the specified primary key doesn't exist), or replace an
existing item if it has certain attribute values. You can return the item's attribute values in the same
operation, using the ReturnValues parameter.
When you add an item, the primary key attributes are the only required attributes.
Empty String and Binary attribute values are allowed. Attribute values of type String and Binary must have a length greater than zero if the attribute is used as a key attribute for a table or index. Set type attributes cannot be empty.
Invalid requests with empty values will be rejected with a ValidationException.
To prevent a new item from replacing an existing item, use a conditional expression that contains the attribute_not_exists function with the name of the attribute being used as the partition key for the table. Since every record must contain that attribute, the attribute_not_exists function will only succeed if no matching item exists.
For more information about PutItem, see Working with Items in the Amazon DynamoDB Developer Guide.
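An illustrative sketch of a conditional put that refuses to overwrite an existing item; the Music table and its attributes are hypothetical:
PutItemRequest request = PutItemRequest.builder()
        .tableName("Music")
        .item(Map.of("Artist", AttributeValue.builder().s("Acme Band").build(),
                     "SongTitle", AttributeValue.builder().s("Happy Day").build()))
        .conditionExpression("attribute_not_exists(Artist)")   // only put if no item has this partition key
        .build();
client.putItem(request).exceptionally(t -> {
    if (t.getCause() instanceof ConditionalCheckFailedException) {
        System.out.println("Item already exists; not overwritten");
    }
    return null;
});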
putItemRequest - Represents the input of a PutItem operation.
Throws ResourceNotFoundException if the operation tried to access a nonexistent table or index, or the resource's status is not ACTIVE.
default CompletableFuture<PutItemResponse> putItem(Consumer<PutItemRequest.Builder> putItemRequest)
Creates a new item, or replaces an old item with a new item. If an item that has the same primary key as the new
item already exists in the specified table, the new item completely replaces the existing item. You can perform a
conditional put operation (add a new item if one with the specified primary key doesn't exist), or replace an
existing item if it has certain attribute values. You can return the item's attribute values in the same
operation, using the ReturnValues parameter.
When you add an item, the primary key attributes are the only required attributes.
Empty String and Binary attribute values are allowed. Attribute values of type String and Binary must have a length greater than zero if the attribute is used as a key attribute for a table or index. Set type attributes cannot be empty.
Invalid requests with empty values will be rejected with a ValidationException.
To prevent a new item from replacing an existing item, use a conditional expression that contains the attribute_not_exists function with the name of the attribute being used as the partition key for the table. Since every record must contain that attribute, the attribute_not_exists function will only succeed if no matching item exists.
For more information about PutItem
, see Working with
Items in the Amazon DynamoDB Developer Guide.
This is a convenience which creates an instance of the PutItemRequest.Builder
avoiding the need to create
one manually via PutItemRequest.builder()
putItemRequest
- A Consumer
that will call methods on PutItemInput.Builder
to create a request. Represents
the input of a PutItem
operation.
default CompletableFuture<QueryResponse> query(QueryRequest queryRequest)
You must provide the name of the partition key attribute and a single value for that attribute.
Query
returns all items with that partition key value. Optionally, you can provide a sort key
attribute and use a comparison operator to refine the search results.
Use the KeyConditionExpression
parameter to provide a specific value for the partition key. The
Query
operation will return all of the items from the table or index with that partition key value.
You can optionally narrow the scope of the Query
operation by specifying a sort key value and a
comparison operator in KeyConditionExpression
. To further refine the Query
results, you
can optionally provide a FilterExpression
. A FilterExpression
determines which items
within the results should be returned to you. All of the other results are discarded.
A Query
operation always returns a result set. If no matching items are found, the result set will
be empty. Queries that do not return results consume the minimum number of read capacity units for that type of
read operation.
DynamoDB calculates the number of read capacity units consumed based on item size, not on the amount of data that
is returned to an application. The number of capacity units consumed will be the same whether you request all of
the attributes (the default behavior) or just some of them (using a projection expression). The number will also
be the same whether or not you use a FilterExpression
.
Query
results are always sorted by the sort key value. If the data type of the sort key is Number,
the results are returned in numeric order; otherwise, the results are returned in order of UTF-8 bytes. By
default, the sort order is ascending. To reverse the order, set the ScanIndexForward
parameter to
false.
A single Query
operation will read up to the maximum number of items set (if using the
Limit
parameter) or a maximum of 1 MB of data and then apply any filtering to the results using
FilterExpression
. If LastEvaluatedKey
is present in the response, you will need to
paginate the result set. For more information, see Paginating
the Results in the Amazon DynamoDB Developer Guide.
FilterExpression
is applied after a Query
finishes, but before the results are
returned. A FilterExpression
cannot contain partition key or sort key attributes. You need to
specify those attributes in the KeyConditionExpression
.
A Query
operation can return an empty result set and a LastEvaluatedKey
if all the
items read for the page of results are filtered out.
You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local
secondary index, you can set the ConsistentRead
parameter to true
and obtain a strongly
consistent result. Global secondary indexes support eventually consistent reads only, so do not specify
ConsistentRead
when querying a global secondary index.
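As an illustrative sketch (not part of the generated reference), a key-condition query might look like this; the table "Music" with partition key "Artist" and sort key "SongTitle" is hypothetical, and client is an existing DynamoDbAsyncClient:
// Returns one artist's items whose song titles start with "Call", in descending sort-key order.
CompletableFuture<QueryResponse> queryFuture = client.query(r -> r
        .tableName("Music")
        .keyConditionExpression("Artist = :artist AND begins_with(SongTitle, :prefix)")
        .expressionAttributeValues(Map.of(
                ":artist", AttributeValue.builder().s("No One You Know").build(),
                ":prefix", AttributeValue.builder().s("Call").build()))
        .scanIndexForward(false));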
queryRequest
- Represents the input of a Query
operation.
default CompletableFuture<QueryResponse> query(Consumer<QueryRequest.Builder> queryRequest)
You must provide the name of the partition key attribute and a single value for that attribute.
Query
returns all items with that partition key value. Optionally, you can provide a sort key
attribute and use a comparison operator to refine the search results.
Use the KeyConditionExpression
parameter to provide a specific value for the partition key. The
Query
operation will return all of the items from the table or index with that partition key value.
You can optionally narrow the scope of the Query
operation by specifying a sort key value and a
comparison operator in KeyConditionExpression
. To further refine the Query
results, you
can optionally provide a FilterExpression
. A FilterExpression
determines which items
within the results should be returned to you. All of the other results are discarded.
A Query
operation always returns a result set. If no matching items are found, the result set will
be empty. Queries that do not return results consume the minimum number of read capacity units for that type of
read operation.
DynamoDB calculates the number of read capacity units consumed based on item size, not on the amount of data that
is returned to an application. The number of capacity units consumed will be the same whether you request all of
the attributes (the default behavior) or just some of them (using a projection expression). The number will also
be the same whether or not you use a FilterExpression
.
Query
results are always sorted by the sort key value. If the data type of the sort key is Number,
the results are returned in numeric order; otherwise, the results are returned in order of UTF-8 bytes. By
default, the sort order is ascending. To reverse the order, set the ScanIndexForward
parameter to
false.
A single Query
operation will read up to the maximum number of items set (if using the
Limit
parameter) or a maximum of 1 MB of data and then apply any filtering to the results using
FilterExpression
. If LastEvaluatedKey
is present in the response, you will need to
paginate the result set. For more information, see Paginating
the Results in the Amazon DynamoDB Developer Guide.
FilterExpression
is applied after a Query
finishes, but before the results are
returned. A FilterExpression
cannot contain partition key or sort key attributes. You need to
specify those attributes in the KeyConditionExpression
.
A Query
operation can return an empty result set and a LastEvaluatedKey
if all the
items read for the page of results are filtered out.
You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local
secondary index, you can set the ConsistentRead
parameter to true
and obtain a strongly
consistent result. Global secondary indexes support eventually consistent reads only, so do not specify
ConsistentRead
when querying a global secondary index.
This is a convenience which creates an instance of the QueryRequest.Builder
avoiding the need to create
one manually via QueryRequest.builder()
queryRequest
- A Consumer
that will call methods on QueryInput.Builder
to create a request. Represents
the input of a Query
operation.
default QueryPublisher queryPaginator(QueryRequest queryRequest)
You must provide the name of the partition key attribute and a single value for that attribute.
Query
returns all items with that partition key value. Optionally, you can provide a sort key
attribute and use a comparison operator to refine the search results.
Use the KeyConditionExpression
parameter to provide a specific value for the partition key. The
Query
operation will return all of the items from the table or index with that partition key value.
You can optionally narrow the scope of the Query
operation by specifying a sort key value and a
comparison operator in KeyConditionExpression
. To further refine the Query
results, you
can optionally provide a FilterExpression
. A FilterExpression
determines which items
within the results should be returned to you. All of the other results are discarded.
A Query
operation always returns a result set. If no matching items are found, the result set will
be empty. Queries that do not return results consume the minimum number of read capacity units for that type of
read operation.
DynamoDB calculates the number of read capacity units consumed based on item size, not on the amount of data that
is returned to an application. The number of capacity units consumed will be the same whether you request all of
the attributes (the default behavior) or just some of them (using a projection expression). The number will also
be the same whether or not you use a FilterExpression
.
Query
results are always sorted by the sort key value. If the data type of the sort key is Number,
the results are returned in numeric order; otherwise, the results are returned in order of UTF-8 bytes. By
default, the sort order is ascending. To reverse the order, set the ScanIndexForward
parameter to
false.
A single Query
operation will read up to the maximum number of items set (if using the
Limit
parameter) or a maximum of 1 MB of data and then apply any filtering to the results using
FilterExpression
. If LastEvaluatedKey
is present in the response, you will need to
paginate the result set. For more information, see Paginating
the Results in the Amazon DynamoDB Developer Guide.
FilterExpression
is applied after a Query
finishes, but before the results are
returned. A FilterExpression
cannot contain partition key or sort key attributes. You need to
specify those attributes in the KeyConditionExpression
.
A Query
operation can return an empty result set and a LastEvaluatedKey
if all the
items read for the page of results are filtered out.
You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local
secondary index, you can set the ConsistentRead
parameter to true
and obtain a strongly
consistent result. Global secondary indexes support eventually consistent reads only, so do not specify
ConsistentRead
when querying a global secondary index.
This is a variant of query(software.amazon.awssdk.services.dynamodb.model.QueryRequest)
operation. The
return type is a custom publisher that can be subscribed to request a stream of response pages. SDK will
internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber)
. Each call to the subscribe
method will result in a new Subscription
i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
software.amazon.awssdk.services.dynamodb.paginators.QueryPublisher publisher = client.queryPaginator(request);
CompletableFuture<Void> future = publisher.subscribe(res -> {
    // Do something with the response
});
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.dynamodb.paginators.QueryPublisher publisher = client.queryPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.QueryResponse>() {
    public void onSubscribe(org.reactivestreams.Subscription subscription) { /* ... */ }
    public void onNext(software.amazon.awssdk.services.dynamodb.model.QueryResponse response) { /* ... */ }
});
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
Please notice that the configuration of Limit won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the
query(software.amazon.awssdk.services.dynamodb.model.QueryRequest)
operation.
queryRequest
- Represents the input of a Query
operation.
default QueryPublisher queryPaginator(Consumer<QueryRequest.Builder> queryRequest)
You must provide the name of the partition key attribute and a single value for that attribute.
Query
returns all items with that partition key value. Optionally, you can provide a sort key
attribute and use a comparison operator to refine the search results.
Use the KeyConditionExpression
parameter to provide a specific value for the partition key. The
Query
operation will return all of the items from the table or index with that partition key value.
You can optionally narrow the scope of the Query
operation by specifying a sort key value and a
comparison operator in KeyConditionExpression
. To further refine the Query
results, you
can optionally provide a FilterExpression
. A FilterExpression
determines which items
within the results should be returned to you. All of the other results are discarded.
A Query
operation always returns a result set. If no matching items are found, the result set will
be empty. Queries that do not return results consume the minimum number of read capacity units for that type of
read operation.
DynamoDB calculates the number of read capacity units consumed based on item size, not on the amount of data that
is returned to an application. The number of capacity units consumed will be the same whether you request all of
the attributes (the default behavior) or just some of them (using a projection expression). The number will also
be the same whether or not you use a FilterExpression
.
Query
results are always sorted by the sort key value. If the data type of the sort key is Number,
the results are returned in numeric order; otherwise, the results are returned in order of UTF-8 bytes. By
default, the sort order is ascending. To reverse the order, set the ScanIndexForward
parameter to
false.
A single Query
operation will read up to the maximum number of items set (if using the
Limit
parameter) or a maximum of 1 MB of data and then apply any filtering to the results using
FilterExpression
. If LastEvaluatedKey
is present in the response, you will need to
paginate the result set. For more information, see Paginating
the Results in the Amazon DynamoDB Developer Guide.
FilterExpression
is applied after a Query
finishes, but before the results are
returned. A FilterExpression
cannot contain partition key or sort key attributes. You need to
specify those attributes in the KeyConditionExpression
.
A Query
operation can return an empty result set and a LastEvaluatedKey
if all the
items read for the page of results are filtered out.
You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local
secondary index, you can set the ConsistentRead
parameter to true
and obtain a strongly
consistent result. Global secondary indexes support eventually consistent reads only, so do not specify
ConsistentRead
when querying a global secondary index.
This is a variant of query(software.amazon.awssdk.services.dynamodb.model.QueryRequest)
operation. The
return type is a custom publisher that can be subscribed to request a stream of response pages. SDK will
internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber)
. Each call to the subscribe
method will result in a new Subscription
i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
software.amazon.awssdk.services.dynamodb.paginators.QueryPublisher publisher = client.queryPaginator(request);
CompletableFuture<Void> future = publisher.subscribe(res -> {
    // Do something with the response
});
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.dynamodb.paginators.QueryPublisher publisher = client.queryPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.QueryResponse>() {
    public void onSubscribe(org.reactivestreams.Subscription subscription) { /* ... */ }
    public void onNext(software.amazon.awssdk.services.dynamodb.model.QueryResponse response) { /* ... */ }
});
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
Please notice that the configuration of Limit won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the
query(software.amazon.awssdk.services.dynamodb.model.QueryRequest)
operation.
This is a convenience which creates an instance of the QueryRequest.Builder
avoiding the need to create
one manually via QueryRequest.builder()
queryRequest
- A Consumer
that will call methods on QueryInput.Builder
to create a request. Represents
the input of a Query
operation.
default CompletableFuture<RestoreTableFromBackupResponse> restoreTableFromBackup(RestoreTableFromBackupRequest restoreTableFromBackupRequest)
Creates a new table from an existing backup. Any number of users can execute up to 50 concurrent restores (any type of restore) in a given account.
You can call RestoreTableFromBackup
at a maximum rate of 10 times per second.
You must manually set up the following on the restored table:
Auto scaling policies
IAM policies
Amazon CloudWatch metrics and alarms
Tags
Stream settings
Time to Live (TTL) settings
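A minimal sketch (not part of the generated reference) of restoring a backup into a new table; the backup ARN and table name are hypothetical, and client is an existing DynamoDbAsyncClient:
CompletableFuture<RestoreTableFromBackupResponse> restoreFuture = client.restoreTableFromBackup(r -> r
        .targetTableName("MusicRestored")
        .backupArn("arn:aws:dynamodb:us-east-1:123456789012:table/Music/backup/01234567890123-abcdefgh"));
The settings listed above (auto scaling, IAM policies, alarms, tags, streams, TTL) then have to be re-applied to the restored table by hand.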
restoreTableFromBackupRequest
-
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<RestoreTableFromBackupResponse> restoreTableFromBackup(Consumer<RestoreTableFromBackupRequest.Builder> restoreTableFromBackupRequest)
Creates a new table from an existing backup. Any number of users can execute up to 50 concurrent restores (any type of restore) in a given account.
You can call RestoreTableFromBackup
at a maximum rate of 10 times per second.
You must manually set up the following on the restored table:
Auto scaling policies
IAM policies
Amazon CloudWatch metrics and alarms
Tags
Stream settings
Time to Live (TTL) settings
This is a convenience which creates an instance of the RestoreTableFromBackupRequest.Builder
avoiding the
need to create one manually via RestoreTableFromBackupRequest.builder()
restoreTableFromBackupRequest
- A Consumer
that will call methods on RestoreTableFromBackupInput.Builder
to create a
request.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<RestoreTableToPointInTimeResponse> restoreTableToPointInTime(RestoreTableToPointInTimeRequest restoreTableToPointInTimeRequest)
Restores the specified table to the specified point in time within EarliestRestorableDateTime
and
LatestRestorableDateTime
. You can restore your table to any point in time during the last 35 days.
Any number of users can execute up to 4 concurrent restores (any type of restore) in a given account.
When you restore using point in time recovery, DynamoDB restores your table data to the state based on the selected date and time (day:hour:minute:second) to a new table.
Along with data, the following are also included on the new restored table using point in time recovery:
Global secondary indexes (GSIs)
Local secondary indexes (LSIs)
Provisioned read and write capacity
Encryption settings
All these settings come from the current settings of the source table at the time of restore.
You must manually set up the following on the restored table:
Auto scaling policies
IAM policies
Amazon CloudWatch metrics and alarms
Tags
Stream settings
Time to Live (TTL) settings
Point in time recovery settings
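A minimal sketch (not part of the generated reference); the table names and timestamp (a java.time.Instant) are hypothetical, and client is an existing DynamoDbAsyncClient:
// Restore "Music" as it existed at the given instant into a new table.
CompletableFuture<RestoreTableToPointInTimeResponse> pitrFuture = client.restoreTableToPointInTime(r -> r
        .sourceTableName("Music")
        .targetTableName("MusicRestored")
        .restoreDateTime(Instant.parse("2023-06-01T00:00:00Z")));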
restoreTableToPointInTimeRequest
-
TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<RestoreTableToPointInTimeResponse> restoreTableToPointInTime(Consumer<RestoreTableToPointInTimeRequest.Builder> restoreTableToPointInTimeRequest)
Restores the specified table to the specified point in time within EarliestRestorableDateTime
and
LatestRestorableDateTime
. You can restore your table to any point in time during the last 35 days.
Any number of users can execute up to 4 concurrent restores (any type of restore) in a given account.
When you restore using point in time recovery, DynamoDB restores your table data to the state based on the selected date and time (day:hour:minute:second) to a new table.
Along with data, the following are also included on the new restored table using point in time recovery:
Global secondary indexes (GSIs)
Local secondary indexes (LSIs)
Provisioned read and write capacity
Encryption settings
All these settings come from the current settings of the source table at the time of restore.
You must manually set up the following on the restored table:
Auto scaling policies
IAM policies
Amazon CloudWatch metrics and alarms
Tags
Stream settings
Time to Live (TTL) settings
Point in time recovery settings
This is a convenience which creates an instance of the RestoreTableToPointInTimeRequest.Builder
avoiding
the need to create one manually via RestoreTableToPointInTimeRequest.builder()
restoreTableToPointInTimeRequest
- A Consumer
that will call methods on RestoreTableToPointInTimeInput.Builder
to create a
request.
TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<ScanResponse> scan(ScanRequest scanRequest)
The Scan
operation returns one or more items and item attributes by accessing every item in a table
or a secondary index. To have DynamoDB return fewer items, you can provide a FilterExpression
operation.
If the total number of scanned items exceeds the maximum dataset size limit of 1 MB, the scan stops and results
are returned to the user as a LastEvaluatedKey
value to continue the scan in a subsequent operation.
The results also include the number of items exceeding the limit. A scan can result in no table data meeting the
filter criteria.
A single Scan
operation reads up to the maximum number of items set (if using the Limit
parameter) or a maximum of 1 MB of data and then applies any filtering to the results using
FilterExpression
. If LastEvaluatedKey
is present in the response, you need to paginate
the result set. For more information, see Paginating the
Results in the Amazon DynamoDB Developer Guide.
Scan
operations proceed sequentially; however, for faster performance on a large table or secondary
index, applications can request a parallel Scan
operation by providing the Segment
and
TotalSegments
parameters. For more information, see Parallel
Scan in the Amazon DynamoDB Developer Guide.
Scan
uses eventually consistent reads when accessing the data in a table; therefore, the result set
might not include the changes to data in the table immediately before the operation began. If you need a
consistent copy of the data, as of the time that the Scan
begins, you can set the
ConsistentRead
parameter to true
.
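As an illustrative sketch (not part of the generated reference), one worker's share of a parallel scan with a filter might look like this; the table and attribute names are hypothetical, and client is an existing DynamoDbAsyncClient:
// Worker 0 of 4: scans only its segment and filters out items without an AlbumTitle attribute.
CompletableFuture<ScanResponse> scanFuture = client.scan(r -> r
        .tableName("Music")
        .totalSegments(4)
        .segment(0)
        .filterExpression("attribute_exists(AlbumTitle)"));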
scanRequest
- Represents the input of a Scan
operation.
default CompletableFuture<ScanResponse> scan(Consumer<ScanRequest.Builder> scanRequest)
The Scan
operation returns one or more items and item attributes by accessing every item in a table
or a secondary index. To have DynamoDB return fewer items, you can provide a FilterExpression
operation.
If the total number of scanned items exceeds the maximum dataset size limit of 1 MB, the scan stops and results
are returned to the user as a LastEvaluatedKey
value to continue the scan in a subsequent operation.
The results also include the number of items exceeding the limit. A scan can result in no table data meeting the
filter criteria.
A single Scan
operation reads up to the maximum number of items set (if using the Limit
parameter) or a maximum of 1 MB of data and then applies any filtering to the results using
FilterExpression
. If LastEvaluatedKey
is present in the response, you need to paginate
the result set. For more information, see Paginating the
Results in the Amazon DynamoDB Developer Guide.
Scan
operations proceed sequentially; however, for faster performance on a large table or secondary
index, applications can request a parallel Scan
operation by providing the Segment
and
TotalSegments
parameters. For more information, see Parallel
Scan in the Amazon DynamoDB Developer Guide.
Scan
uses eventually consistent reads when accessing the data in a table; therefore, the result set
might not include the changes to data in the table immediately before the operation began. If you need a
consistent copy of the data, as of the time that the Scan
begins, you can set the
ConsistentRead
parameter to true
.
This is a convenience which creates an instance of the ScanRequest.Builder
avoiding the need to create
one manually via ScanRequest.builder()
scanRequest
- A Consumer
that will call methods on ScanInput.Builder
to create a request. Represents the
input of a Scan
operation.
default ScanPublisher scanPaginator(ScanRequest scanRequest)
The Scan
operation returns one or more items and item attributes by accessing every item in a table
or a secondary index. To have DynamoDB return fewer items, you can provide a FilterExpression
operation.
If the total number of scanned items exceeds the maximum dataset size limit of 1 MB, the scan stops and results
are returned to the user as a LastEvaluatedKey
value to continue the scan in a subsequent operation.
The results also include the number of items exceeding the limit. A scan can result in no table data meeting the
filter criteria.
A single Scan
operation reads up to the maximum number of items set (if using the Limit
parameter) or a maximum of 1 MB of data and then applies any filtering to the results using
FilterExpression
. If LastEvaluatedKey
is present in the response, you need to paginate
the result set. For more information, see Paginating the
Results in the Amazon DynamoDB Developer Guide.
Scan
operations proceed sequentially; however, for faster performance on a large table or secondary
index, applications can request a parallel Scan
operation by providing the Segment
and
TotalSegments
parameters. For more information, see Parallel
Scan in the Amazon DynamoDB Developer Guide.
Scan
uses eventually consistent reads when accessing the data in a table; therefore, the result set
might not include the changes to data in the table immediately before the operation began. If you need a
consistent copy of the data, as of the time that the Scan
begins, you can set the
ConsistentRead
parameter to true
.
This is a variant of scan(software.amazon.awssdk.services.dynamodb.model.ScanRequest)
operation. The
return type is a custom publisher that can be subscribed to request a stream of response pages. SDK will
internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber)
. Each call to the subscribe
method will result in a new Subscription
i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
software.amazon.awssdk.services.dynamodb.paginators.ScanPublisher publisher = client.scanPaginator(request);
CompletableFuture<Void> future = publisher.subscribe(res -> {
    // Do something with the response
});
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.dynamodb.paginators.ScanPublisher publisher = client.scanPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.ScanResponse>() {
    public void onSubscribe(org.reactivestreams.Subscription subscription) { /* ... */ }
    public void onNext(software.amazon.awssdk.services.dynamodb.model.ScanResponse response) { /* ... */ }
});
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
Please notice that the configuration of Limit won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the
scan(software.amazon.awssdk.services.dynamodb.model.ScanRequest)
operation.
scanRequest
- Represents the input of a Scan
operation.
default ScanPublisher scanPaginator(Consumer<ScanRequest.Builder> scanRequest)
The Scan
operation returns one or more items and item attributes by accessing every item in a table
or a secondary index. To have DynamoDB return fewer items, you can provide a FilterExpression
operation.
If the total number of scanned items exceeds the maximum dataset size limit of 1 MB, the scan stops and results
are returned to the user as a LastEvaluatedKey
value to continue the scan in a subsequent operation.
The results also include the number of items exceeding the limit. A scan can result in no table data meeting the
filter criteria.
A single Scan
operation reads up to the maximum number of items set (if using the Limit
parameter) or a maximum of 1 MB of data and then applies any filtering to the results using
FilterExpression
. If LastEvaluatedKey
is present in the response, you need to paginate
the result set. For more information, see Paginating the
Results in the Amazon DynamoDB Developer Guide.
Scan
operations proceed sequentially; however, for faster performance on a large table or secondary
index, applications can request a parallel Scan
operation by providing the Segment
and
TotalSegments
parameters. For more information, see Parallel
Scan in the Amazon DynamoDB Developer Guide.
Scan
uses eventually consistent reads when accessing the data in a table; therefore, the result set
might not include the changes to data in the table immediately before the operation began. If you need a
consistent copy of the data, as of the time that the Scan
begins, you can set the
ConsistentRead
parameter to true
.
This is a variant of scan(software.amazon.awssdk.services.dynamodb.model.ScanRequest)
operation. The
return type is a custom publisher that can be subscribed to request a stream of response pages. SDK will
internally handle making service calls for you.
When the operation is called, an instance of this class is returned. At this point, no service calls are made yet
and so there is no guarantee that the request is valid. If there are errors in your request, you will see the
failures only after you start streaming the data. The subscribe method should be called as a request to start
streaming data. For more info, see
Publisher.subscribe(org.reactivestreams.Subscriber)
. Each call to the subscribe
method will result in a new Subscription
i.e., a new contract to stream data from the
starting request.
The following are a few ways to use the response class:
1) Using the subscribe helper method
software.amazon.awssdk.services.dynamodb.paginators.ScanPublisher publisher = client.scanPaginator(request);
CompletableFuture<Void> future = publisher.subscribe(res -> {
    // Do something with the response
});
future.get();
2) Using a custom subscriber
software.amazon.awssdk.services.dynamodb.paginators.ScanPublisher publisher = client.scanPaginator(request);
publisher.subscribe(new Subscriber<software.amazon.awssdk.services.dynamodb.model.ScanResponse>() {
    public void onSubscribe(org.reactivestreams.Subscription subscription) { /* ... */ }
    public void onNext(software.amazon.awssdk.services.dynamodb.model.ScanResponse response) { /* ... */ }
});
As the response is a publisher, it can work well with third party reactive streams implementations like RxJava2.
Please notice that the configuration of Limit won't limit the number of results you get with the paginator. It only limits the number of results in each page.
Note: If you prefer to have control on service calls, use the
scan(software.amazon.awssdk.services.dynamodb.model.ScanRequest)
operation.
This is a convenience which creates an instance of the ScanRequest.Builder
avoiding the need to create
one manually via ScanRequest.builder()
scanRequest
- A Consumer
that will call methods on ScanInput.Builder
to create a request. Represents the
input of a Scan
operation.
default CompletableFuture<TagResourceResponse> tagResource(TagResourceRequest tagResourceRequest)
Associate a set of tags with an Amazon DynamoDB resource. You can then activate these user-defined tags so that they appear on the Billing and Cost Management console for cost allocation tracking. You can call TagResource up to five times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.
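A minimal sketch (not part of the generated reference); the ARN and tag values are hypothetical, and client is an existing DynamoDbAsyncClient:
CompletableFuture<TagResourceResponse> tagFuture = client.tagResource(r -> r
        .resourceArn("arn:aws:dynamodb:us-east-1:123456789012:table/Music")
        .tags(Tag.builder().key("Environment").value("Production").build()));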
tagResourceRequest
-
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<TagResourceResponse> tagResource(Consumer<TagResourceRequest.Builder> tagResourceRequest)
Associate a set of tags with an Amazon DynamoDB resource. You can then activate these user-defined tags so that they appear on the Billing and Cost Management console for cost allocation tracking. You can call TagResource up to five times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.
This is a convenience which creates an instance of the TagResourceRequest.Builder
avoiding the need to
create one manually via TagResourceRequest.builder()
tagResourceRequest
- A Consumer
that will call methods on TagResourceInput.Builder
to create a request.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<TransactGetItemsResponse> transactGetItems(TransactGetItemsRequest transactGetItemsRequest)
TransactGetItems
is a synchronous operation that atomically retrieves multiple items from one or
more tables (but not from indexes) in a single account and Region. A TransactGetItems
call can
contain up to 100 TransactGetItem
objects, each of which contains a Get
structure that
specifies an item to retrieve from a table in the account and Region. A call to TransactGetItems
cannot retrieve items from tables in more than one Amazon Web Services account or Region. The aggregate size of
the items in the transaction cannot exceed 4 MB.
DynamoDB rejects the entire TransactGetItems
request if any of the following is true:
A conflicting operation is in the process of updating an item to be read.
There is insufficient provisioned capacity for the transaction to be completed.
There is a user error, such as an invalid data format.
The aggregate size of the items in the transaction exceeded 4 MB.
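As an illustrative sketch (not part of the generated reference), a transactional read of two items from two hypothetical tables might look like this, with client an existing DynamoDbAsyncClient:
CompletableFuture<TransactGetItemsResponse> txGetFuture = client.transactGetItems(r -> r
        .transactItems(
                TransactGetItem.builder().get(Get.builder()
                        .tableName("Music")
                        .key(Map.of("Artist", AttributeValue.builder().s("No One You Know").build(),
                                    "SongTitle", AttributeValue.builder().s("Call Me Today").build()))
                        .build()).build(),
                TransactGetItem.builder().get(Get.builder()
                        .tableName("Customers")
                        .key(Map.of("CustomerId", AttributeValue.builder().s("cust-123").build()))
                        .build()).build()));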
transactGetItemsRequest
-
DynamoDB cancels a TransactWriteItems
request under the following circumstances:
A condition in one of the condition expressions is not met.
A table in the TransactWriteItems
request is in a different account or region.
More than one action in the TransactWriteItems
operation targets the same item.
There is insufficient provisioned capacity for the transaction to be completed.
An item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
There is a user error, such as an invalid data format.
DynamoDB cancels a TransactGetItems
request under the following circumstances:
There is an ongoing TransactGetItems
operation that conflicts with a concurrent
PutItem
, UpdateItem
, DeleteItem
or TransactWriteItems
request. In this case the TransactGetItems
operation fails with a
TransactionCanceledException
.
A table in the TransactGetItems
request is in a different account or region.
There is insufficient provisioned capacity for the transaction to be completed.
There is a user error, such as an invalid data format.
If using Java, DynamoDB lists the cancellation reasons on the CancellationReasons
property.
This property is not set for other languages. Transaction cancellation reasons are ordered in the order
of requested items, if an item has no error it will have None
code and Null
message.
Cancellation reason codes and possible error messages:
No Errors:
Code: None
Message: null
Conditional Check Failed:
Code: ConditionalCheckFailed
Message: The conditional request failed.
Item Collection Size Limit Exceeded:
Code: ItemCollectionSizeLimitExceeded
Message: Collection size exceeded.
Transaction Conflict:
Code: TransactionConflict
Message: Transaction is ongoing for the item.
Provisioned Throughput Exceeded:
Code: ProvisionedThroughputExceeded
Messages:
The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API.
This message is returned when provisioned throughput is exceeded on a provisioned DynamoDB table.
The level of configured provisioned throughput for one or more global secondary indexes of the table was exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes with the UpdateTable API.
This message is returned when provisioned throughput is exceeded on a provisioned GSI.
Throttling Error:
Code: ThrottlingError
Messages:
Throughput exceeds the current capacity of your table or index. DynamoDB is automatically scaling your table or index so please try again shortly. If exceptions persist, check if you have a hot key: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html.
This message is returned when writes get throttled on an On-Demand table as DynamoDB is automatically scaling the table.
Throughput exceeds the current capacity for one or more global secondary indexes. DynamoDB is automatically scaling your index so please try again shortly.
This message is returned when writes get throttled on an On-Demand GSI as DynamoDB is automatically scaling the GSI.
Validation Error:
Code: ValidationError
Messages:
One or more parameter values were invalid.
The update expression attempted to update the secondary index key beyond allowed size limits.
The update expression attempted to update the secondary index key to unsupported type.
An operand in the update expression has an incorrect data type.
Item size to update has exceeded the maximum allowed size.
Number overflow. Attempting to store a number with magnitude larger than supported range.
Type mismatch for attribute to update.
Nesting Levels have exceeded supported limits.
The document path provided in the update expression is invalid for update.
The provided expression refers to an attribute that does not exist in the item.
default CompletableFuture<TransactGetItemsResponse> transactGetItems(Consumer<TransactGetItemsRequest.Builder> transactGetItemsRequest)
TransactGetItems
is a synchronous operation that atomically retrieves multiple items from one or
more tables (but not from indexes) in a single account and Region. A TransactGetItems
call can
contain up to 100 TransactGetItem
objects, each of which contains a Get
structure that
specifies an item to retrieve from a table in the account and Region. A call to TransactGetItems
cannot retrieve items from tables in more than one Amazon Web Services account or Region. The aggregate size of
the items in the transaction cannot exceed 4 MB.
DynamoDB rejects the entire TransactGetItems
request if any of the following is true:
A conflicting operation is in the process of updating an item to be read.
There is insufficient provisioned capacity for the transaction to be completed.
There is a user error, such as an invalid data format.
The aggregate size of the items in the transaction exceeded 4 MB.
This is a convenience which creates an instance of the TransactGetItemsRequest.Builder
avoiding the need
to create one manually via TransactGetItemsRequest.builder()
transactGetItemsRequest
- A Consumer
that will call methods on TransactGetItemsInput.Builder
to create a request.
DynamoDB cancels a TransactWriteItems
request under the following circumstances:
A condition in one of the condition expressions is not met.
A table in the TransactWriteItems
request is in a different account or region.
More than one action in the TransactWriteItems
operation targets the same item.
There is insufficient provisioned capacity for the transaction to be completed.
An item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
There is a user error, such as an invalid data format.
DynamoDB cancels a TransactGetItems
request under the following circumstances:
There is an ongoing TransactGetItems
operation that conflicts with a concurrent
PutItem
, UpdateItem
, DeleteItem
or TransactWriteItems
request. In this case the TransactGetItems
operation fails with a
TransactionCanceledException
.
A table in the TransactGetItems
request is in a different account or region.
There is insufficient provisioned capacity for the transaction to be completed.
There is a user error, such as an invalid data format.
If using Java, DynamoDB lists the cancellation reasons on the CancellationReasons
property.
This property is not set for other languages. Transaction cancellation reasons are ordered in the order
of requested items, if an item has no error it will have None
code and Null
message.
Cancellation reason codes and possible error messages:
No Errors:
Code: None
Message: null
Conditional Check Failed:
Code: ConditionalCheckFailed
Message: The conditional request failed.
Item Collection Size Limit Exceeded:
Code: ItemCollectionSizeLimitExceeded
Message: Collection size exceeded.
Transaction Conflict:
Code: TransactionConflict
Message: Transaction is ongoing for the item.
Provisioned Throughput Exceeded:
Code: ProvisionedThroughputExceeded
Messages:
The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API.
This message is returned when provisioned throughput is exceeded on a provisioned DynamoDB table.
The level of configured provisioned throughput for one or more global secondary indexes of the table was exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes with the UpdateTable API.
This message is returned when provisioned throughput is exceeded on a provisioned GSI.
Throttling Error:
Code: ThrottlingError
Messages:
Throughput exceeds the current capacity of your table or index. DynamoDB is automatically scaling your table or index so please try again shortly. If exceptions persist, check if you have a hot key: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html.
This message is returned when writes get throttled on an On-Demand table as DynamoDB is automatically scaling the table.
Throughput exceeds the current capacity for one or more global secondary indexes. DynamoDB is automatically scaling your index so please try again shortly.
This message is returned when writes get throttled on an On-Demand GSI as DynamoDB is automatically scaling the GSI.
Validation Error:
Code: ValidationError
Messages:
One or more parameter values were invalid.
The update expression attempted to update the secondary index key beyond allowed size limits.
The update expression attempted to update the secondary index key to unsupported type.
An operand in the update expression has an incorrect data type.
Item size to update has exceeded the maximum allowed size.
Number overflow. Attempting to store a number with magnitude larger than supported range.
Type mismatch for attribute to update.
Nesting Levels have exceeded supported limits.
The document path provided in the update expression is invalid for update.
The provided expression refers to an attribute that does not exist in the item.
default CompletableFuture<TransactWriteItemsResponse> transactWriteItems(TransactWriteItemsRequest transactWriteItemsRequest)
TransactWriteItems
is a synchronous write operation that groups up to 100 action requests. These
actions can target items in different tables, but not in different Amazon Web Services accounts or Regions, and
no two actions can target the same item. For example, you cannot both ConditionCheck
and
Update
the same item. The aggregate size of the items in the transaction cannot exceed 4 MB.
The actions are completed atomically so that either all of them succeed, or all of them fail. They are defined by the following objects:
Put
 —  Initiates a PutItem
operation to write a new item. This structure specifies
the primary key of the item to be written, the name of the table to write it in, an optional condition expression
that must be satisfied for the write to succeed, a list of the item's attributes, and a field indicating whether
to retrieve the item's attributes if the condition is not met.
Update
 —  Initiates an UpdateItem
operation to update an existing item. This
structure specifies the primary key of the item to be updated, the name of the table where it resides, an
optional condition expression that must be satisfied for the update to succeed, an expression that defines one or
more attributes to be updated, and a field indicating whether to retrieve the item's attributes if the condition
is not met.
Delete
 —  Initiates a DeleteItem
operation to delete an existing item. This structure
specifies the primary key of the item to be deleted, the name of the table where it resides, an optional
condition expression that must be satisfied for the deletion to succeed, and a field indicating whether to
retrieve the item's attributes if the condition is not met.
ConditionCheck
 —  Applies a condition to an item that is not being modified by the transaction.
This structure specifies the primary key of the item to be checked, the name of the table where it resides, a
condition expression that must be satisfied for the transaction to succeed, and a field indicating whether to
retrieve the item's attributes if the condition is not met.
DynamoDB rejects the entire TransactWriteItems
request if any of the following is true:
A condition in one of the condition expressions is not met.
An ongoing operation is in the process of updating the same item.
There is insufficient provisioned capacity for the transaction to be completed.
An item size becomes too large (bigger than 400 KB), a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
The aggregate size of the items in the transaction exceeds 4 MB.
There is a user error, such as an invalid data format.
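As an illustrative sketch (not part of the generated reference), an atomic put plus conditional update across two hypothetical tables might look like this, with client an existing DynamoDbAsyncClient:
CompletableFuture<TransactWriteItemsResponse> txWriteFuture = client.transactWriteItems(r -> r
        .transactItems(
                TransactWriteItem.builder().put(Put.builder()
                        .tableName("Orders")
                        .item(Map.of("OrderId", AttributeValue.builder().s("order-1001").build()))
                        .build()).build(),
                TransactWriteItem.builder().update(Update.builder()
                        .tableName("Inventory")
                        .key(Map.of("ProductId", AttributeValue.builder().s("prod-42").build()))
                        .updateExpression("SET Stock = Stock - :qty")
                        .conditionExpression("Stock >= :qty")
                        .expressionAttributeValues(Map.of(":qty", AttributeValue.builder().n("1").build()))
                        .build()).build()));
If the stock condition fails, the whole transaction is canceled and neither table is changed.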
transactWriteItemsRequest
-
DynamoDB cancels a TransactWriteItems
request under the following circumstances:
A condition in one of the condition expressions is not met.
A table in the TransactWriteItems
request is in a different account or region.
More than one action in the TransactWriteItems
operation targets the same item.
There is insufficient provisioned capacity for the transaction to be completed.
An item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
There is a user error, such as an invalid data format.
DynamoDB cancels a TransactGetItems
request under the following circumstances:
There is an ongoing TransactGetItems
operation that conflicts with a concurrent
PutItem
, UpdateItem
, DeleteItem
or TransactWriteItems
request. In this case the TransactGetItems
operation fails with a
TransactionCanceledException
.
A table in the TransactGetItems
request is in a different account or region.
There is insufficient provisioned capacity for the transaction to be completed.
There is a user error, such as an invalid data format.
If using Java, DynamoDB lists the cancellation reasons on the CancellationReasons
property.
This property is not set for other languages. Transaction cancellation reasons are ordered in the order
of requested items, if an item has no error it will have None
code and Null
message.
Cancellation reason codes and possible error messages:
No Errors:
Code: None
Message: null
Conditional Check Failed:
Code: ConditionalCheckFailed
Message: The conditional request failed.
Item Collection Size Limit Exceeded:
Code: ItemCollectionSizeLimitExceeded
Message: Collection size exceeded.
Transaction Conflict:
Code: TransactionConflict
Message: Transaction is ongoing for the item.
Provisioned Throughput Exceeded:
Code: ProvisionedThroughputExceeded
Messages:
The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API.
This message is returned when provisioned throughput is exceeded on a provisioned DynamoDB table.
The level of configured provisioned throughput for one or more global secondary indexes of the table was exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes with the UpdateTable API.
This message is returned when provisioned throughput is exceeded on a provisioned GSI.
Throttling Error:
Code: ThrottlingError
Messages:
Throughput exceeds the current capacity of your table or index. DynamoDB is automatically scaling your table or index so please try again shortly. If exceptions persist, check if you have a hot key: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html.
This message is returned when writes get throttled on an On-Demand table as DynamoDB is automatically scaling the table.
Throughput exceeds the current capacity for one or more global secondary indexes. DynamoDB is automatically scaling your index so please try again shortly.
This message is returned when writes get throttled on an On-Demand GSI as DynamoDB is automatically scaling the GSI.
Validation Error:
Code: ValidationError
Messages:
One or more parameter values were invalid.
The update expression attempted to update the secondary index key beyond allowed size limits.
The update expression attempted to update the secondary index key to unsupported type.
An operand in the update expression has an incorrect data type.
Item size to update has exceeded the maximum allowed size.
Number overflow. Attempting to store a number with magnitude larger than supported range.
Type mismatch for attribute to update.
Nesting Levels have exceeded supported limits.
The document path provided in the update expression is invalid for update.
The provided expression refers to an attribute that does not exist in the item.
Recommended Settings
This is a general recommendation for handling the TransactionInProgressException
. These
settings help ensure that the client retries will trigger completion of the ongoing
TransactWriteItems
request.
Set clientExecutionTimeout
to a value that allows at least one retry to be processed after 5
seconds have elapsed since the first attempt for the TransactWriteItems
operation.
Set socketTimeout
to a value a little lower than the requestTimeout
setting.
requestTimeout
should be set based on the time taken for the individual retries of a single
HTTP request for your use case, but setting it to 1 second or higher should work well to reduce chances
of retries and TransactionInProgressException
errors.
Use exponential backoff when retrying and tune backoff if needed.
Assuming default retry policy, example timeout settings based on the guidelines above are as follows:
Example timeline:
0-1000 first attempt
1000-1500 first sleep/delay (default retry policy uses 500 ms as base delay for 4xx errors)
1500-2500 second attempt
2500-3500 second sleep/delay (500 * 2, exponential backoff)
3500-4500 third attempt
4500-6500 third sleep/delay (500 * 2^2)
6500-7500 fourth attempt (this can trigger inline recovery since 5 seconds have elapsed since the first attempt reached TC)
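As a sketch of how these guidelines might translate to this async client (an assumption: apiCallAttemptTimeout and apiCallTimeout in ClientOverrideConfiguration play the roles of the per-attempt and overall timeouts described above; the durations are illustrative only):
DynamoDbAsyncClient client = DynamoDbAsyncClient.builder()
        .overrideConfiguration(ClientOverrideConfiguration.builder()
                .apiCallAttemptTimeout(Duration.ofSeconds(1))  // roughly the per-attempt requestTimeout above
                .apiCallTimeout(Duration.ofSeconds(8))         // leaves room for retries past the 5-second mark
                .build())
        .build();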
default CompletableFuture<TransactWriteItemsResponse> transactWriteItems(Consumer<TransactWriteItemsRequest.Builder> transactWriteItemsRequest)
TransactWriteItems is a synchronous write operation that groups up to 100 action requests. These actions can target items in different tables, but not in different Amazon Web Services accounts or Regions, and no two actions can target the same item. For example, you cannot both ConditionCheck and Update the same item. The aggregate size of the items in the transaction cannot exceed 4 MB.
The actions are completed atomically so that either all of them succeed, or all of them fail. They are defined by the following objects:
Put  —  Initiates a PutItem operation to write a new item. This structure specifies the primary key of the item to be written, the name of the table to write it in, an optional condition expression that must be satisfied for the write to succeed, a list of the item's attributes, and a field indicating whether to retrieve the item's attributes if the condition is not met.
Update  —  Initiates an UpdateItem operation to update an existing item. This structure specifies the primary key of the item to be updated, the name of the table where it resides, an optional condition expression that must be satisfied for the update to succeed, an expression that defines one or more attributes to be updated, and a field indicating whether to retrieve the item's attributes if the condition is not met.
Delete  —  Initiates a DeleteItem operation to delete an existing item. This structure specifies the primary key of the item to be deleted, the name of the table where it resides, an optional condition expression that must be satisfied for the deletion to succeed, and a field indicating whether to retrieve the item's attributes if the condition is not met.
ConditionCheck  —  Applies a condition to an item that is not being modified by the transaction. This structure specifies the primary key of the item to be checked, the name of the table where it resides, a condition expression that must be satisfied for the transaction to succeed, and a field indicating whether to retrieve the item's attributes if the condition is not met.
DynamoDB rejects the entire TransactWriteItems request if any of the following is true:
A condition in one of the condition expressions is not met.
An ongoing operation is in the process of updating the same item.
There is insufficient provisioned capacity for the transaction to be completed.
An item size becomes too large (bigger than 400 KB), a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
The aggregate size of the items in the transaction exceeds 4 MB.
There is a user error, such as an invalid data format.
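To make the request shape concrete, here is a minimal sketch of this overload that groups a conditional Put and a ConditionCheck into one transaction. It assumes an existing DynamoDbAsyncClient named client, the usual software.amazon.awssdk.services.dynamodb.model imports, and hypothetical table names, keys, and condition expressions.

CompletableFuture<TransactWriteItemsResponse> future = client.transactWriteItems(b -> b
        .transactItems(
                TransactWriteItem.builder()
                        .put(Put.builder()
                                .tableName("Orders")  // hypothetical table
                                .item(Map.of(
                                        "orderId", AttributeValue.fromS("o-123"),
                                        "status", AttributeValue.fromS("NEW")))
                                // Write only if the item does not already exist.
                                .conditionExpression("attribute_not_exists(orderId)")
                                .build())
                        .build(),
                TransactWriteItem.builder()
                        .conditionCheck(ConditionCheck.builder()
                                .tableName("Customers")  // hypothetical table
                                .key(Map.of("customerId", AttributeValue.fromS("c-9")))
                                // Succeed only if the referenced customer exists.
                                .conditionExpression("attribute_exists(customerId)")
                                .build())
                        .build()));

Because no two actions may target the same item, the Put and the ConditionCheck above address different items.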
This is a convenience which creates an instance of the TransactWriteItemsRequest.Builder avoiding the need to create one manually via TransactWriteItemsRequest.builder().
transactWriteItemsRequest - A Consumer that will call methods on TransactWriteItemsInput.Builder to create a request.
DynamoDB cancels a TransactWriteItems request under the following circumstances:
A condition in one of the condition expressions is not met.
A table in the TransactWriteItems request is in a different account or region.
More than one action in the TransactWriteItems operation targets the same item.
There is insufficient provisioned capacity for the transaction to be completed.
An item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
There is a user error, such as an invalid data format.
DynamoDB cancels a TransactGetItems request under the following circumstances:
There is an ongoing TransactGetItems operation that conflicts with a concurrent PutItem, UpdateItem, DeleteItem or TransactWriteItems request. In this case the TransactGetItems operation fails with a TransactionCanceledException.
A table in the TransactGetItems request is in a different account or region.
There is insufficient provisioned capacity for the transaction to be completed.
There is a user error, such as an invalid data format.
If using Java, DynamoDB lists the cancellation reasons on the CancellationReasons property. This property is not set for other languages. Transaction cancellation reasons are ordered in the order of requested items; if an item has no error, it will have None code and Null message.
Cancellation reason codes and possible error messages:
No Errors:
Code: None
Message: null
Conditional Check Failed:
Code: ConditionalCheckFailed
Message: The conditional request failed.
Item Collection Size Limit Exceeded:
Code: ItemCollectionSizeLimitExceeded
Message: Collection size exceeded.
Transaction Conflict:
Code: TransactionConflict
Message: Transaction is ongoing for the item.
Provisioned Throughput Exceeded:
Code: ProvisionedThroughputExceeded
Messages:
The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API.
This message is returned when provisioned throughput is exceeded on a provisioned DynamoDB table.
The level of configured provisioned throughput for one or more global secondary indexes of the table was exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes with the UpdateTable API.
This message is returned when provisioned throughput is exceeded on a provisioned GSI.
Throttling Error:
Code: ThrottlingError
Messages:
Throughput exceeds the current capacity of your table or index. DynamoDB is automatically scaling your table or index so please try again shortly. If exceptions persist, check if you have a hot key: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html.
This message is returned when writes get throttled on an On-Demand table as DynamoDB is automatically scaling the table.
Throughput exceeds the current capacity for one or more global secondary indexes. DynamoDB is automatically scaling your index so please try again shortly.
This message is returned when writes get throttled on an On-Demand GSI as DynamoDB is automatically scaling the GSI.
Validation Error:
Code: ValidationError
Messages:
One or more parameter values were invalid.
The update expression attempted to update the secondary index key beyond allowed size limits.
The update expression attempted to update the secondary index key to unsupported type.
An operand in the update expression has an incorrect data type.
Item size to update has exceeded the maximum allowed size.
Number overflow. Attempting to store a number with magnitude larger than supported range.
Type mismatch for attribute to update.
Nesting Levels have exceeded supported limits.
The document path provided in the update expression is invalid for update.
The provided expression refers to an attribute that does not exist in the item.
Recommended Settings
This is a general recommendation for handling the TransactionInProgressException. These settings help ensure that the client retries will trigger completion of the ongoing TransactWriteItems request.
Set clientExecutionTimeout to a value that allows at least one retry to be processed after 5 seconds have elapsed since the first attempt for the TransactWriteItems operation.
Set socketTimeout to a value a little lower than the requestTimeout setting. requestTimeout should be set based on the time taken for the individual retries of a single HTTP request for your use case, but setting it to 1 second or higher should work well to reduce chances of retries and TransactionInProgressException errors.
Use exponential backoff when retrying and tune backoff if needed.
Assuming default retry policy, example timeout settings based on the guidelines above are as follows:
Example timeline:
0-1000 first attempt
1000-1500 first sleep/delay (default retry policy uses 500 ms as base delay for 4xx errors)
1500-2500 second attempt
2500-3500 second sleep/delay (500 * 2, exponential backoff)
3500-4500 third attempt
4500-6500 third sleep/delay (500 * 2^2)
6500-7500 fourth attempt (this can trigger inline recovery since 5 seconds have elapsed since the first attempt reached TC)
default CompletableFuture<UntagResourceResponse> untagResource(UntagResourceRequest untagResourceRequest)
Removes the association of tags from an Amazon DynamoDB resource. You can call UntagResource up to five times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.
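A minimal sketch of this call, assuming an existing DynamoDbAsyncClient named client and a hypothetical table ARN and tag keys:

CompletableFuture<UntagResourceResponse> response = client.untagResource(b -> b
        .resourceArn("arn:aws:dynamodb:us-east-1:123456789012:table/Music")  // hypothetical ARN
        .tagKeys("Project", "Owner"));  // tags to remove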
untagResourceRequest -
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<UntagResourceResponse> untagResource(Consumer<UntagResourceRequest.Builder> untagResourceRequest)
Removes the association of tags from an Amazon DynamoDB resource. You can call UntagResource up to five times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.
This is a convenience which creates an instance of the UntagResourceRequest.Builder avoiding the need to create one manually via UntagResourceRequest.builder().
untagResourceRequest - A Consumer that will call methods on UntagResourceInput.Builder to create a request.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<UpdateContinuousBackupsResponse> updateContinuousBackups(UpdateContinuousBackupsRequest updateContinuousBackupsRequest)
UpdateContinuousBackups enables or disables point in time recovery for the specified table. A successful UpdateContinuousBackups call returns the current ContinuousBackupsDescription. Continuous backups are ENABLED on all tables at table creation. If point in time recovery is enabled, PointInTimeRecoveryStatus will be set to ENABLED.
Once continuous backups and point in time recovery are enabled, you can restore to any point in time within EarliestRestorableDateTime and LatestRestorableDateTime.
LatestRestorableDateTime is typically 5 minutes before the current time. You can restore your table to any point in time during the last 35 days.
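For example, point in time recovery could be enabled as follows; the table name is hypothetical and client is an existing DynamoDbAsyncClient:

CompletableFuture<UpdateContinuousBackupsResponse> response = client.updateContinuousBackups(b -> b
        .tableName("Music")  // hypothetical table
        .pointInTimeRecoverySpecification(PointInTimeRecoverySpecification.builder()
                .pointInTimeRecoveryEnabled(true)
                .build()));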
updateContinuousBackupsRequest -
TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
default CompletableFuture<UpdateContinuousBackupsResponse> updateContinuousBackups(Consumer<UpdateContinuousBackupsRequest.Builder> updateContinuousBackupsRequest)
UpdateContinuousBackups enables or disables point in time recovery for the specified table. A successful UpdateContinuousBackups call returns the current ContinuousBackupsDescription. Continuous backups are ENABLED on all tables at table creation. If point in time recovery is enabled, PointInTimeRecoveryStatus will be set to ENABLED.
Once continuous backups and point in time recovery are enabled, you can restore to any point in time within EarliestRestorableDateTime and LatestRestorableDateTime.
LatestRestorableDateTime is typically 5 minutes before the current time. You can restore your table to any point in time during the last 35 days.
This is a convenience which creates an instance of the UpdateContinuousBackupsRequest.Builder avoiding the need to create one manually via UpdateContinuousBackupsRequest.builder().
updateContinuousBackupsRequest - A Consumer that will call methods on UpdateContinuousBackupsInput.Builder to create a request.
TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
default CompletableFuture<UpdateContributorInsightsResponse> updateContributorInsights(UpdateContributorInsightsRequest updateContributorInsightsRequest)
Updates the status for contributor insights for a specific table or index. CloudWatch Contributor Insights for DynamoDB graphs display the partition key and (if applicable) sort key of frequently accessed items and frequently throttled items in plaintext. If you require the use of Amazon Web Services Key Management Service (KMS) to encrypt this table’s partition key and sort key data with an Amazon Web Services managed key or customer managed key, you should not enable CloudWatch Contributor Insights for DynamoDB for this table.
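A minimal sketch of enabling Contributor Insights for a table; the table name is hypothetical and client is an existing DynamoDbAsyncClient:

CompletableFuture<UpdateContributorInsightsResponse> response = client.updateContributorInsights(b -> b
        .tableName("Music")  // hypothetical table; add indexName(...) to target a GSI instead
        .contributorInsightsAction(ContributorInsightsAction.ENABLE));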
updateContributorInsightsRequest -
default CompletableFuture<UpdateContributorInsightsResponse> updateContributorInsights(Consumer<UpdateContributorInsightsRequest.Builder> updateContributorInsightsRequest)
Updates the status for contributor insights for a specific table or index. CloudWatch Contributor Insights for DynamoDB graphs display the partition key and (if applicable) sort key of frequently accessed items and frequently throttled items in plaintext. If you require the use of Amazon Web Services Key Management Service (KMS) to encrypt this table’s partition key and sort key data with an Amazon Web Services managed key or customer managed key, you should not enable CloudWatch Contributor Insights for DynamoDB for this table.
This is a convenience which creates an instance of the UpdateContributorInsightsRequest.Builder avoiding the need to create one manually via UpdateContributorInsightsRequest.builder().
updateContributorInsightsRequest - A Consumer that will call methods on UpdateContributorInsightsInput.Builder to create a request.
default CompletableFuture<UpdateGlobalTableResponse> updateGlobalTable(UpdateGlobalTableRequest updateGlobalTableRequest)
Adds or removes replicas in the specified global table. The global table must already exist to be able to use this operation. Any replica to be added must be empty, have the same name as the global table, have the same key schema, have DynamoDB Streams enabled, and have the same provisioned and maximum write capacity units.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. We recommend using Version 2019.11.21 (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Updating global tables.
This operation only applies to Version 2017.11.29 of global tables. If you are using global tables Version 2019.11.21 you can use DescribeTable instead.
Although you can use UpdateGlobalTable to add replicas and remove replicas in a single request, for simplicity we recommend that you issue separate requests for adding or removing replicas.
If global secondary indexes are specified, then the following conditions must also be met:
The global secondary indexes must have the same name.
The global secondary indexes must have the same hash key and sort key (if present).
The global secondary indexes must have the same provisioned and maximum write capacity units.
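As a sketch of a single-replica change under the conditions above (2017.11.29 version), the following adds one replica Region to a hypothetical global table; client is an existing DynamoDbAsyncClient:

CompletableFuture<UpdateGlobalTableResponse> response = client.updateGlobalTable(b -> b
        .globalTableName("Music")  // hypothetical global table
        .replicaUpdates(ReplicaUpdate.builder()
                .create(CreateReplicaAction.builder()
                        .regionName("us-west-2")  // Region gaining a replica
                        .build())
                .build()));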
updateGlobalTableRequest -
TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
default CompletableFuture<UpdateGlobalTableResponse> updateGlobalTable(Consumer<UpdateGlobalTableRequest.Builder> updateGlobalTableRequest)
Adds or removes replicas in the specified global table. The global table must already exist to be able to use this operation. Any replica to be added must be empty, have the same name as the global table, have the same key schema, have DynamoDB Streams enabled, and have the same provisioned and maximum write capacity units.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. We recommend using Version 2019.11.21 (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Updating global tables.
This operation only applies to Version 2017.11.29 of global tables. If you are using global tables Version 2019.11.21 you can use DescribeTable instead.
Although you can use UpdateGlobalTable to add replicas and remove replicas in a single request, for simplicity we recommend that you issue separate requests for adding or removing replicas.
If global secondary indexes are specified, then the following conditions must also be met:
The global secondary indexes must have the same name.
The global secondary indexes must have the same hash key and sort key (if present).
The global secondary indexes must have the same provisioned and maximum write capacity units.
This is a convenience which creates an instance of the UpdateGlobalTableRequest.Builder avoiding the need to create one manually via UpdateGlobalTableRequest.builder().
updateGlobalTableRequest - A Consumer that will call methods on UpdateGlobalTableInput.Builder to create a request.
TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
default CompletableFuture<UpdateGlobalTableSettingsResponse> updateGlobalTableSettings(UpdateGlobalTableSettingsRequest updateGlobalTableSettingsRequest)
Updates settings for a global table.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. We recommend using Version 2019.11.21 (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Updating global tables.
updateGlobalTableSettingsRequest -
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<UpdateGlobalTableSettingsResponse> updateGlobalTableSettings(Consumer<UpdateGlobalTableSettingsRequest.Builder> updateGlobalTableSettingsRequest)
Updates settings for a global table.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. We recommend using Version 2019.11.21 (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Updating global tables.
This is a convenience which creates an instance of the UpdateGlobalTableSettingsRequest.Builder avoiding the need to create one manually via UpdateGlobalTableSettingsRequest.builder().
updateGlobalTableSettingsRequest - A Consumer that will call methods on UpdateGlobalTableSettingsInput.Builder to create a request.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<UpdateItemResponse> updateItem(UpdateItemRequest updateItemRequest)
Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values).
You can also return the item's attribute values in the same UpdateItem operation using the ReturnValues parameter.
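A minimal sketch of an update that increments a counter and returns the updated attributes via ReturnValues; the table, key attributes, and expression are hypothetical, and client is an existing DynamoDbAsyncClient:

CompletableFuture<UpdateItemResponse> response = client.updateItem(b -> b
        .tableName("Music")  // hypothetical table
        .key(Map.of(
                "artist", AttributeValue.fromS("No One You Know"),
                "songTitle", AttributeValue.fromS("Call Me Today")))
        // Create the counter on first use, then increment it.
        .updateExpression("SET plays = if_not_exists(plays, :zero) + :inc")
        .expressionAttributeValues(Map.of(
                ":inc", AttributeValue.fromN("1"),
                ":zero", AttributeValue.fromN("0")))
        // Return the item as it appears after the update.
        .returnValues(ReturnValue.ALL_NEW));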
updateItemRequest - Represents the input of an UpdateItem operation.
default CompletableFuture<UpdateItemResponse> updateItem(Consumer<UpdateItemRequest.Builder> updateItemRequest)
Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values).
You can also return the item's attribute values in the same UpdateItem operation using the ReturnValues parameter.
This is a convenience which creates an instance of the UpdateItemRequest.Builder avoiding the need to create one manually via UpdateItemRequest.builder().
updateItemRequest - A Consumer that will call methods on UpdateItemInput.Builder to create a request. Represents the input of an UpdateItem operation.
default CompletableFuture<UpdateTableResponse> updateTable(UpdateTableRequest updateTableRequest)
Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a given table.
This operation only applies to Version 2019.11.21 (Current) of global tables.
You can only perform one of the following operations at once:
Modify the provisioned throughput settings of the table.
Remove a global secondary index from the table.
Create a new global secondary index on the table. After the index begins backfilling, you can use UpdateTable to perform other operations.
UpdateTable is an asynchronous operation; while it is executing, the table status changes from ACTIVE to UPDATING. While it is UPDATING, you cannot issue another UpdateTable request. When the table returns to the ACTIVE state, the UpdateTable operation is complete.
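For instance, modifying only the provisioned throughput might look like the sketch below; the table name and capacity values are hypothetical, and client is an existing DynamoDbAsyncClient:

CompletableFuture<UpdateTableResponse> response = client.updateTable(b -> b
        .tableName("Music")  // hypothetical table
        .provisionedThroughput(ProvisionedThroughput.builder()
                .readCapacityUnits(10L)
                .writeCapacityUnits(10L)
                .build()));

The returned future completes once DynamoDB accepts the request; the table itself remains UPDATING until the change finishes.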
updateTableRequest - Represents the input of an UpdateTable operation.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<UpdateTableResponse> updateTable(Consumer<UpdateTableRequest.Builder> updateTableRequest)
Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a given table.
This operation only applies to Version 2019.11.21 (Current) of global tables.
You can only perform one of the following operations at once:
Modify the provisioned throughput settings of the table.
Remove a global secondary index from the table.
Create a new global secondary index on the table. After the index begins backfilling, you can use UpdateTable to perform other operations.
UpdateTable is an asynchronous operation; while it is executing, the table status changes from ACTIVE to UPDATING. While it is UPDATING, you cannot issue another UpdateTable request. When the table returns to the ACTIVE state, the UpdateTable operation is complete.
This is a convenience which creates an instance of the UpdateTableRequest.Builder avoiding the need to create one manually via UpdateTableRequest.builder().
updateTableRequest - A Consumer that will call methods on UpdateTableInput.Builder to create a request. Represents the input of an UpdateTable operation.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<UpdateTableReplicaAutoScalingResponse> updateTableReplicaAutoScaling(UpdateTableReplicaAutoScalingRequest updateTableReplicaAutoScalingRequest)
Updates auto scaling settings on your global tables at once.
This operation only applies to Version 2019.11.21 (Current) of global tables.
updateTableReplicaAutoScalingRequest -
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<UpdateTableReplicaAutoScalingResponse> updateTableReplicaAutoScaling(Consumer<UpdateTableReplicaAutoScalingRequest.Builder> updateTableReplicaAutoScalingRequest)
Updates auto scaling settings on your global tables at once.
This operation only applies to Version 2019.11.21 (Current) of global tables.
This is a convenience which creates an instance of the UpdateTableReplicaAutoScalingRequest.Builder avoiding the need to create one manually via UpdateTableReplicaAutoScalingRequest.builder().
updateTableReplicaAutoScalingRequest - A Consumer that will call methods on UpdateTableReplicaAutoScalingInput.Builder to create a request.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<UpdateTimeToLiveResponse> updateTimeToLive(UpdateTimeToLiveRequest updateTimeToLiveRequest)
The UpdateTimeToLive method enables or disables Time to Live (TTL) for the specified table. A successful UpdateTimeToLive call returns the current TimeToLiveSpecification. It can take up to one hour for the change to fully process. Any additional UpdateTimeToLive calls for the same table during this one hour duration result in a ValidationException.
TTL compares the current time in epoch time format to the time stored in the TTL attribute of an item. If the epoch time value stored in the attribute is less than the current time, the item is marked as expired and subsequently deleted.
The epoch time format is the number of seconds elapsed since 12:00:00 AM January 1, 1970 UTC.
DynamoDB deletes expired items on a best-effort basis to ensure availability of throughput for other data operations.
DynamoDB typically deletes expired items within two days of expiration. The exact duration within which an item gets deleted after expiration is specific to the nature of the workload. Items that have expired and not been deleted will still show up in reads, queries, and scans.
As items are deleted, they are removed from any local secondary index and global secondary index immediately in the same eventually consistent way as a standard delete operation.
For more information, see Time To Live in the Amazon DynamoDB Developer Guide.
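A minimal sketch of enabling TTL on a hypothetical table whose items carry an expiresAt attribute in epoch seconds; client is an existing DynamoDbAsyncClient:

CompletableFuture<UpdateTimeToLiveResponse> response = client.updateTimeToLive(b -> b
        .tableName("SessionData")  // hypothetical table
        .timeToLiveSpecification(TimeToLiveSpecification.builder()
                .enabled(true)
                .attributeName("expiresAt")  // attribute holding the epoch-seconds expiry
                .build()));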
updateTimeToLiveRequest - Represents the input of an UpdateTimeToLive operation.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default CompletableFuture<UpdateTimeToLiveResponse> updateTimeToLive(Consumer<UpdateTimeToLiveRequest.Builder> updateTimeToLiveRequest)
The UpdateTimeToLive method enables or disables Time to Live (TTL) for the specified table. A successful UpdateTimeToLive call returns the current TimeToLiveSpecification. It can take up to one hour for the change to fully process. Any additional UpdateTimeToLive calls for the same table during this one hour duration result in a ValidationException.
TTL compares the current time in epoch time format to the time stored in the TTL attribute of an item. If the epoch time value stored in the attribute is less than the current time, the item is marked as expired and subsequently deleted.
The epoch time format is the number of seconds elapsed since 12:00:00 AM January 1, 1970 UTC.
DynamoDB deletes expired items on a best-effort basis to ensure availability of throughput for other data operations.
DynamoDB typically deletes expired items within two days of expiration. The exact duration within which an item gets deleted after expiration is specific to the nature of the workload. Items that have expired and not been deleted will still show up in reads, queries, and scans.
As items are deleted, they are removed from any local secondary index and global secondary index immediately in the same eventually consistent way as a standard delete operation.
For more information, see Time To Live in the Amazon DynamoDB Developer Guide.
This is a convenience which creates an instance of the UpdateTimeToLiveRequest.Builder avoiding the need to create one manually via UpdateTimeToLiveRequest.builder().
updateTimeToLiveRequest - A Consumer that will call methods on UpdateTimeToLiveInput.Builder to create a request. Represents the input of an UpdateTimeToLive operation.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
include CreateTable
, UpdateTable
, DeleteTable
,
UpdateTimeToLive
, RestoreTableFromBackup
, and
RestoreTableToPointInTime
.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
default DynamoDbAsyncWaiter waiter()
Create an instance of DynamoDbAsyncWaiter using this client.
Waiters created via this method are managed by the SDK and resources will be released when the service client is closed.
Returns an instance of DynamoDbAsyncWaiter.
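For example, a newly created table can be awaited asynchronously; the table name is hypothetical and client is an existing DynamoDbAsyncClient:

DynamoDbAsyncWaiter waiter = client.waiter();
CompletableFuture<WaiterResponse<DescribeTableResponse>> done =
        waiter.waitUntilTableExists(r -> r.tableName("Music"));  // hypothetical table
done.join();  // block only for demonstration; normally compose the future instead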
default DynamoDbServiceClientConfiguration serviceClientConfiguration()
Specified by: serviceClientConfiguration in interface AwsClient; serviceClientConfiguration in interface SdkClient.
static DynamoDbAsyncClient create()
Create a DynamoDbAsyncClient with the region loaded from the DefaultAwsRegionProviderChain and credentials loaded from the DefaultCredentialsProvider.
static DynamoDbAsyncClientBuilder builder()
Create a builder that can be used to configure and create a DynamoDbAsyncClient.
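A minimal sketch of configuring a client through the builder, assuming an explicit Region and a named credentials profile (both hypothetical):

DynamoDbAsyncClient client = DynamoDbAsyncClient.builder()
        .region(Region.US_EAST_1)
        .credentialsProvider(ProfileCredentialsProvider.create("my-profile"))  // hypothetical profile
        .build();
// ... use the client, then release its resources ...
client.close();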
Copyright © 2023. All rights reserved.