@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class PutRecordsRequestEntry extends Object implements Serializable, Cloneable, StructuredPojo
Represents a single record entry in the input for PutRecords.
| Constructor and Description |
|---|
| PutRecordsRequestEntry() |
| Modifier and Type | Method and Description |
|---|---|
| PutRecordsRequestEntry | clone() |
| boolean | equals(Object obj) |
| ByteBuffer | getData() The data blob to put into the record, which is base64-encoded when the blob is serialized. |
| String | getExplicitHashKey() The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash. |
| String | getPartitionKey() Determines which shard in the stream the data record is assigned to. |
| int | hashCode() |
| void | marshall(ProtocolMarshaller protocolMarshaller) |
| void | setData(ByteBuffer data) The data blob to put into the record, which is base64-encoded when the blob is serialized. |
| void | setExplicitHashKey(String explicitHashKey) The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash. |
| void | setPartitionKey(String partitionKey) Determines which shard in the stream the data record is assigned to. |
| String | toString() Returns a string representation of this object. |
| PutRecordsRequestEntry | withData(ByteBuffer data) The data blob to put into the record, which is base64-encoded when the blob is serialized. |
| PutRecordsRequestEntry | withExplicitHashKey(String explicitHashKey) The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash. |
| PutRecordsRequestEntry | withPartitionKey(String partitionKey) Determines which shard in the stream the data record is assigned to. |
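A minimal usage sketch of the fluent builders listed above. The stream name, payload, and partition key below are hypothetical, and the client call assumes default credentials and region configuration:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.PutRecordsRequest;
import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;

public class PutRecordsExample {
    public static void main(String[] args) {
        // Raw payload; the SDK base64-encodes it during serialization,
        // so do NOT base64-encode it yourself.
        ByteBuffer payload = ByteBuffer.wrap(
                "{\"event\":\"click\"}".getBytes(StandardCharsets.UTF_8));

        PutRecordsRequestEntry entry = new PutRecordsRequestEntry()
                .withData(payload)
                .withPartitionKey("user-42"); // <= 256 Unicode characters

        // Payload plus partition key must stay within the 1 MiB record limit.
        PutRecordsRequest request = new PutRecordsRequest()
                .withStreamName("my-stream") // hypothetical stream name
                .withRecords(entry);

        AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();
        kinesis.putRecords(request);
    }
}
```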
public void setData(ByteBuffer data)

The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).

The AWS SDK for Java performs a Base64 encoding on this field before sending this request to the AWS service. Users of the SDK should not perform Base64 encoding on this field.

Warning: ByteBuffers returned by the SDK are mutable. Changes to the content or position of the byte buffer will be seen by all objects that have a reference to this object. It is recommended to call ByteBuffer.duplicate() or ByteBuffer.asReadOnlyBuffer() before using or reading from the buffer. This behavior will be changed in a future major version of the SDK.

Parameters:
data - The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).

public ByteBuffer getData()

The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).

ByteBuffers are stateful. Calling their get methods changes their position. We recommend using ByteBuffer.asReadOnlyBuffer() to create a read-only view of the buffer with an independent position, and calling get methods on this rather than directly on the returned ByteBuffer. Doing so will ensure that anyone else using the ByteBuffer will not be affected by changes to the position.
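The read-only-view recommendation can be followed like this (a stdlib-only sketch; the shared buffer below stands in for one returned by getData()):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ReadOnlyViewExample {
    public static void main(String[] args) {
        // A buffer that other code may also hold a reference to,
        // e.g. the ByteBuffer returned by getData().
        ByteBuffer shared = ByteBuffer.wrap(
                "payload".getBytes(StandardCharsets.UTF_8));

        // Read through an independent view so the shared buffer's
        // position is left untouched for other readers.
        ByteBuffer view = shared.asReadOnlyBuffer();
        byte[] copy = new byte[view.remaining()];
        view.get(copy); // advances view's position, not shared's

        System.out.println(new String(copy, StandardCharsets.UTF_8)); // prints "payload"
        System.out.println(shared.position()); // prints 0
    }
}
```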
public PutRecordsRequestEntry withData(ByteBuffer data)

The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).

The AWS SDK for Java performs a Base64 encoding on this field before sending this request to the AWS service. Users of the SDK should not perform Base64 encoding on this field.

Warning: ByteBuffers returned by the SDK are mutable. Changes to the content or position of the byte buffer will be seen by all objects that have a reference to this object. It is recommended to call ByteBuffer.duplicate() or ByteBuffer.asReadOnlyBuffer() before using or reading from the buffer. This behavior will be changed in a future major version of the SDK.

Parameters:
data - The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).

public void setExplicitHashKey(String explicitHashKey)

The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.

Parameters:
explicitHashKey - The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.

public String getExplicitHashKey()

The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
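As an illustration, an explicit hash key is the decimal string form of a 128-bit integer; a record carrying one is routed to whichever shard's hash-key range contains that value, regardless of the partition key. The midpoint value below is hypothetical; in practice you would pick a value inside the target shard's StartingHashKey..EndingHashKey range:

```java
import java.math.BigInteger;

import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;

public class ExplicitHashKeyExample {
    public static void main(String[] args) {
        // Midpoint of the full 128-bit hash key space [0, 2^128 - 1].
        BigInteger midpoint = BigInteger.ONE.shiftLeft(127);

        PutRecordsRequestEntry entry = new PutRecordsRequestEntry()
                .withPartitionKey("user-42")               // still required
                .withExplicitHashKey(midpoint.toString()); // overrides its hash

        System.out.println(entry.getExplicitHashKey());
    }
}
```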
public PutRecordsRequestEntry withExplicitHashKey(String explicitHashKey)

The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.

Parameters:
explicitHashKey - The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.

public void setPartitionKey(String partitionKey)
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.

Parameters:
partitionKey - Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.

public String getPartitionKey()

Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
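The MD5 routing described above can be observed with the standard library alone (no service call); the point is only that equal partition keys always hash to the same 128-bit value, and therefore to the same shard:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PartitionKeyHashExample {
    public static void main(String[] args) throws Exception {
        BigInteger h1 = hash("user-42");
        BigInteger h2 = hash("user-42");
        BigInteger h3 = hash("user-43");

        System.out.println(h1.equals(h2)); // true: same key, same shard
        System.out.println(h1.equals(h3)); // false: a different key may land elsewhere
    }

    // Map a partition key to a non-negative 128-bit integer via MD5,
    // mirroring how Kinesis assigns keys to shard hash-key ranges.
    static BigInteger hash(String partitionKey) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, digest);
    }
}
```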
public PutRecordsRequestEntry withPartitionKey(String partitionKey)

Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.

Parameters:
partitionKey - Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.

public String toString()
Returns a string representation of this object.

Overrides:
toString in class Object
See Also:
Object.toString()
public PutRecordsRequestEntry clone()

public void marshall(ProtocolMarshaller protocolMarshaller)

Specified by:
marshall in interface StructuredPojo
Copyright © 2024. All rights reserved.