@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class PutRecordRequest extends AmazonWebServiceRequest implements Serializable, Cloneable
Represents the input for PutRecord.
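The following is a usage sketch, not part of the generated API reference: a request is typically built with the fluent with* methods and passed to an AmazonKinesis client's putRecord method. The region, stream name, partition key, and payload below are placeholder values.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.PutRecordRequest;
import com.amazonaws.services.kinesis.model.PutRecordResult;

public class PutRecordExample {
    public static void main(String[] args) {
        // Placeholder region and stream name; substitute your own values.
        AmazonKinesis kinesis = AmazonKinesisClientBuilder.standard()
                .withRegion("us-east-1")
                .build();

        PutRecordRequest request = new PutRecordRequest()
                .withStreamName("example-stream")
                .withPartitionKey("user-1234")
                // The SDK base64-encodes the data blob when it serializes the request.
                .withData(ByteBuffer.wrap("hello, kinesis".getBytes(StandardCharsets.UTF_8)));

        PutRecordResult result = kinesis.putRecord(request);
        System.out.println("shardId=" + result.getShardId()
                + " sequenceNumber=" + result.getSequenceNumber());
    }
}
```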
| Constructor and Description |
| --- |
| PutRecordRequest() |
| Modifier and Type | Method and Description |
| --- | --- |
| PutRecordRequest | clone() |
| boolean | equals(Object obj) |
| ByteBuffer | getData() - The data blob to put into the record, which is base64-encoded when the blob is serialized. |
| String | getExplicitHashKey() - The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash. |
| String | getPartitionKey() - Determines which shard in the stream the data record is assigned to. |
| String | getSequenceNumberForOrdering() - Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. |
| String | getStreamARN() - The ARN of the stream. |
| String | getStreamName() - The name of the stream to put the data record into. |
| int | hashCode() |
| void | setData(ByteBuffer data) - The data blob to put into the record, which is base64-encoded when the blob is serialized. |
| void | setExplicitHashKey(String explicitHashKey) - The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash. |
| void | setPartitionKey(String partitionKey) - Determines which shard in the stream the data record is assigned to. |
| void | setSequenceNumberForOrdering(String sequenceNumberForOrdering) - Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. |
| void | setStreamARN(String streamARN) - The ARN of the stream. |
| void | setStreamName(String streamName) - The name of the stream to put the data record into. |
| String | toString() - Returns a string representation of this object. |
| PutRecordRequest | withData(ByteBuffer data) - The data blob to put into the record, which is base64-encoded when the blob is serialized. |
| PutRecordRequest | withExplicitHashKey(String explicitHashKey) - The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash. |
| PutRecordRequest | withPartitionKey(String partitionKey) - Determines which shard in the stream the data record is assigned to. |
| PutRecordRequest | withSequenceNumberForOrdering(String sequenceNumberForOrdering) - Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. |
| PutRecordRequest | withStreamARN(String streamARN) - The ARN of the stream. |
| PutRecordRequest | withStreamName(String streamName) - The name of the stream to put the data record into. |
Methods inherited from class com.amazonaws.AmazonWebServiceRequest:
addHandlerContext, copyBaseTo, getCloneRoot, getCloneSource, getCustomQueryParameters, getCustomRequestHeaders, getGeneralProgressListener, getHandlerContext, getReadLimit, getRequestClientOptions, getRequestCredentials, getRequestCredentialsProvider, getRequestMetricCollector, getSdkClientExecutionTimeout, getSdkRequestTimeout, putCustomQueryParameter, putCustomRequestHeader, setGeneralProgressListener, setRequestCredentials, setRequestCredentialsProvider, setRequestMetricCollector, setSdkClientExecutionTimeout, setSdkRequestTimeout, withGeneralProgressListener, withRequestCredentialsProvider, withRequestMetricCollector, withSdkClientExecutionTimeout, withSdkRequestTimeout
public void setStreamName(String streamName)
The name of the stream to put the data record into.
Parameters:
streamName - The name of the stream to put the data record into.

public String getStreamName()
The name of the stream to put the data record into.

public PutRecordRequest withStreamName(String streamName)
The name of the stream to put the data record into.
Parameters:
streamName - The name of the stream to put the data record into.
public void setData(ByteBuffer data)
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).
The AWS SDK for Java performs a Base64 encoding on this field before sending this request to the AWS service. Users of the SDK should not perform Base64 encoding on this field.
Warning: ByteBuffers returned by the SDK are mutable. Changes to the content or position of the byte buffer will be seen by all objects that have a reference to this object. It is recommended to call ByteBuffer.duplicate() or ByteBuffer.asReadOnlyBuffer() before using or reading from the buffer. This behavior will be changed in a future major version of the SDK.
Parameters:
data - The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).
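As a minimal sketch of the size constraint above, a caller might validate the raw payload plus the UTF-8 bytes of the partition key against the 1 MiB limit before building the request. RecordSizeCheck and toDataBlob are hypothetical helper names, not part of the SDK.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical client-side guard for the record size limit described above.
// Assumes the 1 MiB limit is measured against the raw payload plus the
// UTF-8 bytes of the partition key, before the SDK's Base64 encoding.
final class RecordSizeCheck {
    private static final int MAX_RECORD_BYTES = 1024 * 1024; // 1 MiB

    static ByteBuffer toDataBlob(byte[] payload, String partitionKey) {
        int total = payload.length + partitionKey.getBytes(StandardCharsets.UTF_8).length;
        if (total > MAX_RECORD_BYTES) {
            throw new IllegalArgumentException(
                    "Record of " + total + " bytes exceeds the 1 MiB record size limit");
        }
        // Pass the raw bytes to setData/withData; the SDK performs the Base64 encoding itself.
        return ByteBuffer.wrap(payload);
    }
}
```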
public ByteBuffer getData()
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).
ByteBuffers are stateful. Calling their get methods changes their position. We recommend using ByteBuffer.asReadOnlyBuffer() to create a read-only view of the buffer with an independent position, and calling get methods on this rather than directly on the returned ByteBuffer. Doing so will ensure that anyone else using the ByteBuffer will not be affected by changes to the position.
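A short sketch of the read pattern recommended above: take a read-only view with an independent position before consuming the bytes, so other holders of the buffer are unaffected. ReadDataSafely is a hypothetical helper, not part of the SDK.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.kinesis.model.PutRecordRequest;

// Hypothetical helper illustrating the recommendation above.
final class ReadDataSafely {
    static String dataAsString(PutRecordRequest request) {
        // asReadOnlyBuffer() has an independent position, so reading here does
        // not move the position seen by other users of the original ByteBuffer.
        ByteBuffer view = request.getData().asReadOnlyBuffer();
        byte[] bytes = new byte[view.remaining()];
        view.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```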
public PutRecordRequest withData(ByteBuffer data)
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).
The AWS SDK for Java performs a Base64 encoding on this field before sending this request to the AWS service. Users of the SDK should not perform Base64 encoding on this field.
Warning: ByteBuffers returned by the SDK are mutable. Changes to the content or position of the byte buffer will be seen by all objects that have a reference to this object. It is recommended to call ByteBuffer.duplicate() or ByteBuffer.asReadOnlyBuffer() before using or reading from the buffer. This behavior will be changed in a future major version of the SDK.
Parameters:
data - The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).
public void setPartitionKey(String partitionKey)
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
Parameters:
partitionKey - Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
public String getPartitionKey()
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
public PutRecordRequest withPartitionKey(String partitionKey)
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
Parameters:
partitionKey - Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
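The hashing behavior described above can be illustrated with standard-library MD5: equal partition keys always map to the same 128-bit value. This sketch only reproduces the key-to-integer mapping; which shard owns a given hash value depends on the stream's shard hash key ranges, which it does not model. PartitionKeyHash is a hypothetical name.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical illustration of the MD5 mapping described above.
final class PartitionKeyHash {
    static BigInteger hashKeyFor(String partitionKey) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
        // Interpret the 16-byte digest as an unsigned 128-bit integer.
        return new BigInteger(1, digest);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // The same partition key always produces the same hash value.
        System.out.println(hashKeyFor("user-1234"));
        System.out.println(hashKeyFor("user-1234"));
    }
}
```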
public void setExplicitHashKey(String explicitHashKey)
The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.
Parameters:
explicitHashKey - The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.
public String getExplicitHashKey()
The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.
public PutRecordRequest withExplicitHashKey(String explicitHashKey)
The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.
Parameters:
explicitHashKey - The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.
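For illustration, an explicit hash key is a 128-bit value supplied as a decimal string; when set, it overrides the hash of the partition key for shard routing. The stream name, partition key, payload, hash value, and class name below are placeholders.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.kinesis.model.PutRecordRequest;

// Placeholder values throughout; ExplicitHashKeyExample is a hypothetical class name.
final class ExplicitHashKeyExample {
    static PutRecordRequest buildRequest() {
        return new PutRecordRequest()
                .withStreamName("example-stream")
                .withPartitionKey("user-1234")
                // Overrides the partition key hash; 2^127 shown here as an arbitrary example value.
                .withExplicitHashKey("170141183460469231731687303715884105728")
                .withData(ByteBuffer.wrap("payload".getBytes(StandardCharsets.UTF_8)));
    }
}
```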
public void setSequenceNumberForOrdering(String sequenceNumberForOrdering)
Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records are coarsely ordered based on arrival time.
Parameters:
sequenceNumberForOrdering - Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records are coarsely ordered based on arrival time.
public String getSequenceNumberForOrdering()
Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records are coarsely ordered based on arrival time.
Returns:
Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records are coarsely ordered based on arrival time.
public PutRecordRequest withSequenceNumberForOrdering(String sequenceNumberForOrdering)
Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records are coarsely ordered based on arrival time.
Parameters:
sequenceNumberForOrdering - Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records are coarsely ordered based on arrival time.
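A sketch of the usage pattern described above: each put for a given partition key carries the sequence number returned by the previous put, so sequence numbers are strictly increasing. The stream name, partition key, payloads, and the OrderedPuts class are placeholders, not SDK code.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.List;

import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.model.PutRecordRequest;
import com.amazonaws.services.kinesis.model.PutRecordResult;

// Hypothetical helper showing the SequenceNumberForOrdering chaining described above.
final class OrderedPuts {
    static void putInOrder(AmazonKinesis kinesis, List<String> payloads) {
        String previousSequenceNumber = null;
        for (String payload : payloads) {
            PutRecordRequest request = new PutRecordRequest()
                    .withStreamName("example-stream")
                    .withPartitionKey("user-1234")
                    .withData(ByteBuffer.wrap(payload.getBytes(StandardCharsets.UTF_8)));
            if (previousSequenceNumber != null) {
                // Sequence number of record n-1, as returned when record n-1 was put.
                request.setSequenceNumberForOrdering(previousSequenceNumber);
            }
            PutRecordResult result = kinesis.putRecord(request);
            previousSequenceNumber = result.getSequenceNumber();
        }
    }
}
```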
public void setStreamARN(String streamARN)
The ARN of the stream.
Parameters:
streamARN - The ARN of the stream.

public String getStreamARN()
The ARN of the stream.
public PutRecordRequest withStreamARN(String streamARN)
The ARN of the stream.
Parameters:
streamARN - The ARN of the stream.
public String toString()
Returns a string representation of this object.
Overrides:
toString in class Object
See Also:
Object.toString()
public PutRecordRequest clone()
Overrides:
clone in class AmazonWebServiceRequest