String streamName
A name to identify the stream. The stream name is scoped to the AWS account used by the application that creates the stream. It is also scoped by region. That is, two streams in two different AWS accounts can have the same name, and two streams in the same AWS account but in two different regions can have the same name.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
Integer shardCount
The number of shards that the stream will use. The throughput of the stream is a function of the number of shards; more shards are required for greater provisioned throughput.
Constraints:
Range: 1 - 100000
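The name and shard-count constraints above can be checked client-side before calling CreateStream. A minimal sketch, assuming nothing beyond the JDK; the class and method names are illustrative, not part of the SDK:

```java
import java.util.regex.Pattern;

class CreateStreamValidation {
    // Pattern and bounds copied from the CreateStream constraints above.
    private static final Pattern NAME = Pattern.compile("[a-zA-Z0-9_.-]+");

    static boolean isValidStreamName(String name) {
        return name != null
                && name.length() >= 1 && name.length() <= 128
                && NAME.matcher(name).matches();
    }

    static boolean isValidShardCount(int shardCount) {
        return shardCount >= 1 && shardCount <= 100000;
    }
}
```

Validating locally avoids a round trip that would only fail with a validation error.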
String streamName
The name of the stream to delete.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
String streamName
The name of the stream to describe.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
Integer limit
The maximum number of shards to return.
Constraints:
Range: 1 - 10000
String exclusiveStartShardId
The shard ID of the shard to start with.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
StreamDescription streamDescription
The current status of the stream, the stream ARN, an array of shard objects that comprise the stream, and a flag indicating whether more shards are available.
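The Limit / ExclusiveStartShardId pagination contract above can be illustrated without calling the service. The sketch below pages through a stand-in shard list the way repeated DescribeStream calls would; the types and method names are local stand-ins, not the real SDK classes, and the real API signals the end via HasMoreShards rather than a short page:

```java
import java.util.ArrayList;
import java.util.List;

class ShardPager {
    // Stand-in for one DescribeStream call: returns up to 'limit' shard IDs
    // strictly after 'exclusiveStartShardId' (null means start at the beginning).
    static List<String> describePage(List<String> allShards, String exclusiveStartShardId, int limit) {
        int start = exclusiveStartShardId == null ? 0 : allShards.indexOf(exclusiveStartShardId) + 1;
        return allShards.subList(start, Math.min(start + limit, allShards.size()));
    }

    // Caller-side loop: keep passing the last shard ID seen until a short page
    // (the stand-in for HasMoreShards == false) comes back.
    static List<String> describeAll(List<String> allShards, int limit) {
        List<String> out = new ArrayList<>();
        String cursor = null;
        while (true) {
            List<String> page = describePage(allShards, cursor, limit);
            out.addAll(page);
            if (page.size() < limit) break;          // no more shards
            cursor = page.get(page.size() - 1);      // ExclusiveStartShardId for the next call
        }
        return out;
    }
}
```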
String streamName
The name of the Amazon Kinesis stream for which to disable enhanced monitoring.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
List<E> shardLevelMetrics
List of shard-level metrics to disable.
The following are the valid shard-level metrics. The value "ALL" disables every metric.
IncomingBytes
IncomingRecords
OutgoingBytes
OutgoingRecords
WriteProvisionedThroughputExceeded
ReadProvisionedThroughputExceeded
IteratorAgeMilliseconds
ALL
For more information, see Monitoring the Amazon Kinesis Streams Service with Amazon CloudWatch in the Amazon Kinesis Streams Developer Guide.
String streamName
The name of the Amazon Kinesis stream.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
List<E> currentShardLevelMetrics
Represents the current state of the metrics that are in the enhanced state before the operation.
List<E> desiredShardLevelMetrics
Represents the list of all the metrics that would be in the enhanced state after the operation.
String streamName
The name of the stream for which to enable enhanced monitoring.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
List<E> shardLevelMetrics
List of shard-level metrics to enable.
The following are the valid shard-level metrics. The value "ALL" enables every metric.
IncomingBytes
IncomingRecords
OutgoingBytes
OutgoingRecords
WriteProvisionedThroughputExceeded
ReadProvisionedThroughputExceeded
IteratorAgeMilliseconds
ALL
For more information, see Monitoring the Amazon Kinesis Streams Service with Amazon CloudWatch in the Amazon Kinesis Streams Developer Guide.
String streamName
The name of the Amazon Kinesis stream.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
List<E> currentShardLevelMetrics
Represents the current state of the metrics that are in the enhanced state before the operation.
List<E> desiredShardLevelMetrics
Represents the list of all the metrics that would be in the enhanced state after the operation.
List<E> shardLevelMetrics
List of shard-level metrics.
The following are the valid shard-level metrics. The value "ALL" enhances every metric.
IncomingBytes
IncomingRecords
OutgoingBytes
OutgoingRecords
WriteProvisionedThroughputExceeded
ReadProvisionedThroughputExceeded
IteratorAgeMilliseconds
ALL
For more information, see Monitoring the Amazon Kinesis Streams Service with Amazon CloudWatch in the Amazon Kinesis Streams Developer Guide.
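Since "ALL" is shorthand for the seven concrete metrics listed above, a caller comparing current and desired metric sets may want to expand it first. A small sketch; the helper name is illustrative, not part of the SDK:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

class ShardMetrics {
    // The seven concrete shard-level metrics listed above.
    static final List<String> ALL_METRICS = Arrays.asList(
            "IncomingBytes", "IncomingRecords",
            "OutgoingBytes", "OutgoingRecords",
            "WriteProvisionedThroughputExceeded",
            "ReadProvisionedThroughputExceeded",
            "IteratorAgeMilliseconds");

    // Expand "ALL" into the concrete metric names; pass other names through.
    static Set<String> expand(List<String> requested) {
        Set<String> out = new LinkedHashSet<>();
        for (String m : requested) {
            if ("ALL".equals(m)) out.addAll(ALL_METRICS);
            else out.add(m);
        }
        return out;
    }
}
```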
String shardIterator
The position in the shard from which you want to start sequentially reading data records. A shard iterator specifies this position using the sequence number of a data record in the shard.
Constraints:
Length: 1 - 512
Integer limit
The maximum number of records to return. Specify a value of up to 10,000. If you specify a value that is greater than 10,000, GetRecords throws InvalidArgumentException.
Constraints:
Range: 1 - 10000
List<E> records
The data records retrieved from the shard.
String nextShardIterator
The next position in the shard from which to start sequentially reading data records. If set to null, the shard has been closed and the requested iterator will not return any more data.
Constraints:
Length: 1 - 512
Long millisBehindLatest
The number of milliseconds the GetRecords response is from the tip of the stream, indicating how far behind current time the consumer is. A value of zero indicates record processing is caught up, and there are no new records to process at this moment.
Constraints:
Range: 0 or greater
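The NextShardIterator contract above (keep reading until the iterator comes back null) drives the standard consumer loop. The sketch below runs that loop against a stand-in record source; in real code the fetch function would be the SDK's GetRecords call, and the Page type here is a local stand-in for its result:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

class ConsumerLoop {
    // Stand-in for a GetRecords result: the records plus the next iterator
    // (null once the shard is closed and fully read).
    static class Page {
        final List<String> records;
        final String nextShardIterator;
        Page(List<String> records, String nextShardIterator) {
            this.records = records;
            this.nextShardIterator = nextShardIterator;
        }
    }

    // Read a shard to the end: follow NextShardIterator until it is null.
    static List<String> drain(String shardIterator, Function<String, Page> getRecords) {
        List<String> out = new ArrayList<>();
        String it = shardIterator;
        while (it != null) {
            Page page = getRecords.apply(it);
            out.addAll(page.records);   // a page may legitimately be empty
            it = page.nextShardIterator;
        }
        return out;
    }
}
```

A real consumer would also pause between calls when millisBehindLatest is zero, since there is nothing new to read.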
String streamName
The name of the Amazon Kinesis stream.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
String shardId
The shard ID of the Amazon Kinesis shard to get the iterator for.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
String shardIteratorType
Determines how the shard iterator is used to start reading data records from the shard.
The following are the valid Amazon Kinesis shard iterator types:
AT_SEQUENCE_NUMBER - Start reading from the position denoted by a specific sequence number, provided in the value StartingSequenceNumber.
AFTER_SEQUENCE_NUMBER - Start reading right after the position denoted by a specific sequence number, provided in the value StartingSequenceNumber.
AT_TIMESTAMP - Start reading from the position denoted by a specific timestamp, provided in the value Timestamp.
TRIM_HORIZON - Start reading at the last untrimmed record in the shard, which is the oldest data record in the shard.
LATEST - Start reading just after the most recent record in the shard, so that you always read the most recent data in the shard.
Constraints:
Allowed Values: AT_SEQUENCE_NUMBER, AFTER_SEQUENCE_NUMBER, TRIM_HORIZON, LATEST, AT_TIMESTAMP
String startingSequenceNumber
The sequence number of the data record in the shard from which to start reading. Used with shard iterator type AT_SEQUENCE_NUMBER and AFTER_SEQUENCE_NUMBER.
Constraints:
Pattern: 0|([1-9]\d{0,128})
Date timestamp
The timestamp of the data record from which to start reading. Used with shard iterator type AT_TIMESTAMP. A timestamp is the Unix epoch date with precision in milliseconds. For example, 2016-04-04T19:58:46.480-00:00 or 1459799926.480. If a record with this exact timestamp does not exist, the iterator returned is for the next (later) record. If the timestamp is older than the current trim horizon, the iterator returned is for the oldest untrimmed data record (TRIM_HORIZON).
String shardIterator
The position in the shard from which to start reading data records sequentially. A shard iterator specifies this position using the sequence number of a data record in a shard.
Constraints:
Length: 1 - 512
String streamName
The name of the stream.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
String exclusiveStartTagKey
The key to use as the starting point for the list of tags. If this parameter is set, ListTagsForStream gets all tags that occur after ExclusiveStartTagKey.
Constraints:
Length: 1 - 128
Integer limit
The number of tags to return. If this number is less than the total number of tags associated with the stream, HasMoreTags is set to true. To list additional tags, set ExclusiveStartTagKey to the last key in the response.
Constraints:
Range: 1 - 10
List<E> tags
A list of tags associated with StreamName, starting with the first tag after ExclusiveStartTagKey and up to the specified Limit.
Boolean hasMoreTags
If set to true, more tags are available. To request additional tags, set ExclusiveStartTagKey to the key of the last tag returned.
String streamName
The name of the stream for the merge.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
String shardToMerge
The shard ID of the shard to combine with the adjacent shard for the merge.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
String adjacentShardToMerge
The shard ID of the adjacent shard for the merge.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
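MergeShards combines two adjacent shards, i.e. shards whose hash key ranges together form one contiguous range. Under that reading, adjacency can be checked from each shard's HashKeyRange before issuing the call. A sketch with BigInteger hash keys; the class and method names are illustrative:

```java
import java.math.BigInteger;

class MergeCheck {
    // Two shards are adjacent when one range ends exactly where the other
    // begins: endingHashKey + 1 == the other shard's startingHashKey.
    static boolean adjacent(BigInteger startA, BigInteger endA,
                            BigInteger startB, BigInteger endB) {
        return endA.add(BigInteger.ONE).equals(startB)
                || endB.add(BigInteger.ONE).equals(startA);
    }
}
```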
String streamName
The name of the stream to put the data record into.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
ByteBuffer data
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
Constraints:
Length: 0 - 1048576
String partitionKey
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
Constraints:
Length: 1 - 256
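The MD5 mapping described above (partition key to 128-bit integer) can be reproduced directly with the JDK, which is useful for predicting which shard's hash key range a given key falls into. A minimal sketch:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class PartitionKeyHash {
    // Map a partition key to the 128-bit integer that Amazon Kinesis
    // compares against each shard's hash key range.
    static BigInteger hashKey(String partitionKey) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
            return new BigInteger(1, digest);   // treat the 16 bytes as unsigned
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 is a mandatory JDK algorithm", e);
        }
    }
}
```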
String explicitHashKey
The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.
Constraints:
Pattern: 0|([1-9]\d{0,38})
String sequenceNumberForOrdering
Guarantees strictly increasing sequence numbers for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records are coarsely ordered based on arrival time.
Constraints:
Pattern: 0|([1-9]\d{0,128})
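The usage note above (pass record n-1's returned sequence number when putting record n) looks like this as a loop. The putRecord function here is a stand-in for the SDK call, represented only by its shape: it takes the data and the previous sequence number and returns the assigned sequence number:

```java
import java.util.List;
import java.util.function.BiFunction;

class OrderedPuts {
    // Put records one at a time, threading each returned sequence number into
    // the next call's SequenceNumberForOrdering (null on the first call).
    // putRecord: (data, sequenceNumberForOrdering) -> assigned sequence number.
    static String putInOrder(List<String> records,
                             BiFunction<String, String, String> putRecord) {
        String lastSequenceNumber = null;
        for (String data : records) {
            lastSequenceNumber = putRecord.apply(data, lastSequenceNumber);
        }
        return lastSequenceNumber;   // sequence number of the final record
    }
}
```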
String shardId
The shard ID of the shard where the data record was placed.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
String sequenceNumber
The sequence number identifier that was assigned to the put data record. The sequence number for the record is unique across all records in the stream. A sequence number is the identifier associated with every record put into the stream.
Constraints:
Pattern: 0|([1-9]\d{0,128})
ByteBuffer data
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
Constraints:
Length: 0 - 1048576
String explicitHashKey
The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
Constraints:
Pattern: 0|([1-9]\d{0,38})
String partitionKey
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
Constraints:
Length: 1 - 256
Integer failedRecordCount
The number of unsuccessfully processed records in a PutRecords request.
Constraints:
Range: 1 - 100000
List<E> records
An array of successfully and unsuccessfully processed record results, correlated with the request by natural ordering. A record that is successfully added to a stream includes SequenceNumber and ShardId in the result. A record that fails to be added to a stream includes ErrorCode and ErrorMessage in the result.
String sequenceNumber
The sequence number for an individual record result.
Constraints:
Pattern: 0|([1-9]\d{0,128})
String shardId
The shard ID for an individual record result.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
String errorCode
The error code for an individual record result. ErrorCodes can be either ProvisionedThroughputExceededException or InternalFailure.
String errorMessage
The error message for an individual record result. An ErrorCode value of ProvisionedThroughputExceededException has an error message that includes the account ID, stream name, and shard ID. An ErrorCode value of InternalFailure has the error message "Internal Service Failure".
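Because PutRecords reports failures per record, correlated with the request by position, the usual pattern is to collect the request entries whose result carries an ErrorCode and resend only those. The EntryResult type below is a local stand-in for the SDK's per-record result:

```java
import java.util.ArrayList;
import java.util.List;

class PutRecordsRetry {
    // Stand-in for one entry of the PutRecords result array:
    // errorCode is null for a successfully added record.
    static class EntryResult {
        final String errorCode;
        EntryResult(String errorCode) { this.errorCode = errorCode; }
    }

    // Return the request entries to resend, matched to results by position.
    static <T> List<T> failedEntries(List<T> requestEntries, List<EntryResult> results) {
        List<T> retry = new ArrayList<>();
        for (int i = 0; i < results.size(); i++) {
            if (results.get(i).errorCode != null) {
                retry.add(requestEntries.get(i));
            }
        }
        return retry;
    }
}
```

A production retry loop would also back off between attempts when the error code is ProvisionedThroughputExceededException.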
String sequenceNumber
The unique identifier of the record in the stream.
Constraints:
Pattern: 0|([1-9]\d{0,128})
Date approximateArrivalTimestamp
The approximate time that the record was inserted into the stream.
ByteBuffer data
The data blob. The data in the blob is both opaque and immutable to the Amazon Kinesis service, which does not inspect, interpret, or change the data in the blob in any way. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
Constraints:
Length: 0 - 1048576
String partitionKey
Identifies which shard in the stream the data record is assigned to.
Constraints:
Length: 1 - 256
String startingSequenceNumber
The starting sequence number for the range.
Constraints:
Pattern: 0|([1-9]\d{0,128})
String endingSequenceNumber
The ending sequence number for the range. Shards that are in the OPEN state have an ending sequence number of null.
Constraints:
Pattern: 0|([1-9]\d{0,128})
String shardId
The unique identifier of the shard within the stream.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
String parentShardId
The shard ID of the shard's parent.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
String adjacentParentShardId
The shard ID of the shard adjacent to the shard's parent.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
HashKeyRange hashKeyRange
The range of possible hash key values for the shard, which is a set of ordered contiguous positive integers.
SequenceNumberRange sequenceNumberRange
The range of possible sequence numbers for the shard.
String streamName
The name of the stream for the shard split.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
String shardToSplit
The shard ID of the shard to split.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
String newStartingHashKey
A hash key value for the starting hash key of one of the child shards created by the split. The hash key range for a given shard constitutes a set of ordered contiguous positive integers. The value for NewStartingHashKey must be in the range of hash keys being mapped into the shard. The NewStartingHashKey hash key value and all higher hash key values in the range are distributed to one of the child shards. All the lower hash key values in the range are distributed to the other child shard.
Constraints:
Pattern: 0|([1-9]\d{0,38})
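A common choice for NewStartingHashKey is the midpoint of the parent shard's hash key range, which splits its key space roughly in half. A minimal sketch; the class and method names are illustrative:

```java
import java.math.BigInteger;

class SplitPoint {
    // Midpoint of [startingHashKey, endingHashKey]; passing it as
    // NewStartingHashKey gives the two children roughly equal key ranges.
    static BigInteger midpoint(BigInteger startingHashKey, BigInteger endingHashKey) {
        return startingHashKey.add(endingHashKey).shiftRight(1);
    }
}
```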
String streamName
The name of the stream being described.
Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+
String streamARN
The Amazon Resource Name (ARN) for the stream being described.
String streamStatus
The current status of the stream being described. The stream status is one of the following states:
CREATING - The stream is being created. Amazon Kinesis immediately returns and sets StreamStatus to CREATING.
DELETING - The stream is being deleted. The specified stream is in the DELETING state until Amazon Kinesis completes the deletion.
ACTIVE - The stream exists and is ready for read and write operations or deletion. You should perform read and write operations only on an ACTIVE stream.
UPDATING - Shards in the stream are being merged or split. Read and write operations continue to work while the stream is in the UPDATING state.
Constraints:
Allowed Values: CREATING, DELETING, ACTIVE, UPDATING
List<E> shards
The shards that comprise the stream.
Boolean hasMoreShards
If set to true, more shards in the stream are available to describe.
Integer retentionPeriodHours
The current retention period, in hours.
Constraints:
Range: 24 - 168
List<E> enhancedMonitoring
Represents the current enhanced monitoring settings of the stream.
String key
A unique identifier for the tag. Maximum length: 128 characters. Valid characters: Unicode letters, digits, white space, _ . / = + - % @
Constraints:
Length: 1 - 128
String value
An optional string, typically used to describe or define the tag. Maximum length: 256 characters. Valid characters: Unicode letters, digits, white space, _ . / = + - % @
Constraints:
Length: 0 - 256
ByteBuffer data
The data blob, which is base64-encoded when the blob is serialized. The maximum size of the data blob, before base64-encoding, is 1,000 KB.
Constraints:
Length: 0 - 1024000
Copyright © 2017. All rights reserved.