String streamName
The name of the stream.
SdkInternalMap<K,V> tags
A set of up to 10 key-value pairs to use to create the tags.
String streamARN
The ARN of the stream.
String shardId
The shard ID of the existing child shard of the current shard.
SdkInternalList<T> parentShards
The current shard that is the parent of the existing child shard.
HashKeyRange hashKeyRange
String consumerName
The name of the consumer is something you choose when you register the consumer.
String consumerARN
When you register a consumer, Kinesis Data Streams generates an ARN for it. You need this ARN to be able to call SubscribeToShard.
If you delete a consumer and then create a new one with the same name, it won't have the same ARN. That's because consumer ARNs contain the creation timestamp. This is important to keep in mind if you have IAM policies that reference consumer ARNs.
String consumerStatus
A consumer can't read data while in the CREATING or DELETING states.
Date consumerCreationTimestamp
String consumerName
The name of the consumer is something you choose when you register the consumer.
String consumerARN
When you register a consumer, Kinesis Data Streams generates an ARN for it. You need this ARN to be able to call SubscribeToShard.
If you delete a consumer and then create a new one with the same name, it won't have the same ARN. That's because consumer ARNs contain the creation timestamp. This is important to keep in mind if you have IAM policies that reference consumer ARNs.
String consumerStatus
A consumer can't read data while in the CREATING or DELETING states.
Date consumerCreationTimestamp
String streamARN
The ARN of the stream with which you registered the consumer.
String streamName
A name to identify the stream. The stream name is scoped to the Amazon Web Services account used by the application that creates the stream. It is also scoped by Amazon Web Services Region. That is, two streams in two different Amazon Web Services accounts can have the same name. Two streams in the same Amazon Web Services account but in two different Regions can also have the same name.
Integer shardCount
The number of shards that the stream will use. The throughput of the stream is a function of the number of shards; more shards are required for greater provisioned throughput.
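As a rough illustration of the shard-count/throughput relationship, the sketch below estimates how many shards a provisioned-mode stream needs. The per-shard write limits used here (1 MiB/s and 1,000 records/s) are assumptions taken from the Kinesis quotas documentation; verify current limits before relying on them.

```java
// Sketch: estimate the shard count needed for a provisioned-mode stream.
// ASSUMPTION: per-shard write limits of 1 MiB/s and 1,000 records/s,
// as documented in the Kinesis Data Streams quotas.
public class ShardCountEstimator {
    static int requiredShards(double writeMiBPerSec, double recordsPerSec) {
        int byThroughput = (int) Math.ceil(writeMiBPerSec / 1.0);   // 1 MiB/s per shard
        int byRecords = (int) Math.ceil(recordsPerSec / 1000.0);    // 1,000 records/s per shard
        return Math.max(1, Math.max(byThroughput, byRecords));
    }

    public static void main(String[] args) {
        // 5 MiB/s of writes at 2,500 records/s is throughput-bound: 5 shards.
        System.out.println(requiredShards(5.0, 2500.0)); // 5
    }
}
```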
StreamModeDetails streamModeDetails
Indicates the capacity mode of the data stream. Currently, in Kinesis Data Streams, you can choose between an on-demand capacity mode and a provisioned capacity mode for your data streams.
String resourceARN
The Amazon Resource Name (ARN) of the data stream or consumer.
String streamName
The name of the stream to delete.
Boolean enforceConsumerDeletion
If this parameter is unset (null) or if you set it to false, and the stream has registered consumers, the call to DeleteStream fails with a ResourceInUseException.
String streamARN
The ARN of the stream.
String streamARN
The ARN of the Kinesis data stream that the consumer is registered with. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String consumerName
The name that you gave to the consumer.
String consumerARN
The ARN returned by Kinesis Data Streams when you registered the consumer. If you don't know the ARN of the consumer that you want to deregister, you can use the ListStreamConsumers operation to get a list of the descriptions of all the consumers that are currently registered with a given data stream. The description of a consumer contains its ARN.
Integer shardLimit
The maximum number of shards.
Integer openShardCount
The number of open shards.
Integer onDemandStreamCount
Indicates the number of data streams with the on-demand capacity mode.
Integer onDemandStreamCountLimit
The maximum number of data streams with the on-demand capacity mode.
String streamARN
The ARN of the Kinesis data stream that the consumer is registered with. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String consumerName
The name that you gave to the consumer.
String consumerARN
The ARN returned by Kinesis Data Streams when you registered the consumer.
ConsumerDescription consumerDescription
An object that represents the details of the consumer.
String streamName
The name of the stream to describe.
Integer limit
The maximum number of shards to return in a single call. The default value is 100. If you specify a value greater than 100, at most 100 results are returned.
String exclusiveStartShardId
The shard ID of the shard to start with.
Specify this parameter to indicate that you want to describe the stream starting with the shard whose ID immediately follows ExclusiveStartShardId.
If you don't specify this parameter, the default behavior for DescribeStream is to describe the stream starting with the first shard in the stream.
String streamARN
The ARN of the stream.
StreamDescription streamDescription
The current status of the stream, the stream Amazon Resource Name (ARN), an array of shard objects that comprise the stream, and whether there are more shards available.
StreamDescriptionSummary streamDescriptionSummary
A StreamDescriptionSummary containing information about the stream.
String streamName
The name of the Kinesis data stream for which to disable enhanced monitoring.
SdkInternalList<T> shardLevelMetrics
List of shard-level metrics to disable.
The following are the valid shard-level metrics. The value "ALL" disables every metric.
IncomingBytes
IncomingRecords
OutgoingBytes
OutgoingRecords
WriteProvisionedThroughputExceeded
ReadProvisionedThroughputExceeded
IteratorAgeMilliseconds
ALL
For more information, see Monitoring the Amazon Kinesis Data Streams Service with Amazon CloudWatch in the Amazon Kinesis Data Streams Developer Guide.
String streamARN
The ARN of the stream.
String streamName
The name of the Kinesis data stream.
SdkInternalList<T> currentShardLevelMetrics
Represents the current state of the metrics that are in the enhanced state before the operation.
SdkInternalList<T> desiredShardLevelMetrics
Represents the list of all the metrics that would be in the enhanced state after the operation.
String streamARN
The ARN of the stream.
String streamName
The name of the stream for which to enable enhanced monitoring.
SdkInternalList<T> shardLevelMetrics
List of shard-level metrics to enable.
The following are the valid shard-level metrics. The value "ALL" enables every metric.
IncomingBytes
IncomingRecords
OutgoingBytes
OutgoingRecords
WriteProvisionedThroughputExceeded
ReadProvisionedThroughputExceeded
IteratorAgeMilliseconds
ALL
For more information, see Monitoring the Amazon Kinesis Data Streams Service with Amazon CloudWatch in the Amazon Kinesis Data Streams Developer Guide.
String streamARN
The ARN of the stream.
String streamName
The name of the Kinesis data stream.
SdkInternalList<T> currentShardLevelMetrics
Represents the current state of the metrics that are in the enhanced state before the operation.
SdkInternalList<T> desiredShardLevelMetrics
Represents the list of all the metrics that would be in the enhanced state after the operation.
String streamARN
The ARN of the stream.
SdkInternalList<T> shardLevelMetrics
List of shard-level metrics.
The following are the valid shard-level metrics. The value "ALL" enhances every metric.
IncomingBytes
IncomingRecords
OutgoingBytes
OutgoingRecords
WriteProvisionedThroughputExceeded
ReadProvisionedThroughputExceeded
IteratorAgeMilliseconds
ALL
For more information, see Monitoring the Amazon Kinesis Data Streams Service with Amazon CloudWatch in the Amazon Kinesis Data Streams Developer Guide.
String shardIterator
The position in the shard from which you want to start sequentially reading data records. A shard iterator specifies this position using the sequence number of a data record in the shard.
Integer limit
The maximum number of records to return. Specify a value of up to 10,000. If you specify a value that is greater than 10,000, GetRecords throws InvalidArgumentException. The default value is 10,000.
String streamARN
The ARN of the stream.
SdkInternalList<T> records
The data records retrieved from the shard.
String nextShardIterator
The next position in the shard from which to start sequentially reading data records. If set to null, the shard has been closed and the requested iterator does not return any more data.
Long millisBehindLatest
The number of milliseconds the GetRecords response is from the tip of the stream, indicating how far behind current time the consumer is. A value of zero indicates that record processing is caught up, and there are no new records to process at this moment.
SdkInternalList<T> childShards
The list of the current shard's child shards, returned in the GetRecords API's response only when the end of the current shard is reached.
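The fields above suggest the usual consumption loop: call GetRecords with the iterator returned by the previous call, and stop when nextShardIterator comes back null. The sketch below models that control flow with a stand-in interface (KinesisLike is hypothetical, not the real SDK client):

```java
import java.util.*;

// Sketch of the GetRecords consumption pattern: chain the iterator from
// each response into the next call; a null nextShardIterator means the
// shard is closed. KinesisLike is a stand-in for the real client.
public class ShardReader {
    record GetRecordsResult(List<String> records, String nextShardIterator) {}
    interface KinesisLike { GetRecordsResult getRecords(String iterator); }

    static List<String> readShard(KinesisLike client, String startIterator) {
        List<String> all = new ArrayList<>();
        String iterator = startIterator;
        while (iterator != null) {                 // null => shard closed
            GetRecordsResult result = client.getRecords(iterator);
            all.addAll(result.records());
            iterator = result.nextShardIterator(); // chain the iterators
        }
        return all;
    }

    public static void main(String[] args) {
        // Fake two pages of records, then a closed shard.
        Map<String, GetRecordsResult> pages = Map.of(
            "it-0", new GetRecordsResult(List.of("a", "b"), "it-1"),
            "it-1", new GetRecordsResult(List.of("c"), null));
        System.out.println(readShard(pages::get, "it-0")); // [a, b, c]
    }
}
```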
String resourceARN
The Amazon Resource Name (ARN) of the data stream or consumer.
String policy
Details of the resource policy. This is formatted as a JSON string.
String streamName
The name of the Amazon Kinesis data stream.
String shardId
The shard ID of the Kinesis Data Streams shard to get the iterator for.
String shardIteratorType
Determines how the shard iterator is used to start reading data records from the shard.
The following are the valid Amazon Kinesis shard iterator types:
AT_SEQUENCE_NUMBER - Start reading from the position denoted by a specific sequence number, provided in the value StartingSequenceNumber.
AFTER_SEQUENCE_NUMBER - Start reading right after the position denoted by a specific sequence number, provided in the value StartingSequenceNumber.
AT_TIMESTAMP - Start reading from the position denoted by a specific time stamp, provided in the value Timestamp.
TRIM_HORIZON - Start reading at the last untrimmed record in the shard in the system, which is the oldest data record in the shard.
LATEST - Start reading just after the most recent record in the shard, so that you always read the most recent data in the shard.
String startingSequenceNumber
The sequence number of the data record in the shard from which to start reading. Used with shard iterator type AT_SEQUENCE_NUMBER and AFTER_SEQUENCE_NUMBER.
Date timestamp
The time stamp of the data record from which to start reading. Used with shard iterator type AT_TIMESTAMP. A time stamp is the Unix epoch date with precision in milliseconds. For example, 2016-04-04T19:58:46.480-00:00 or 1459799926.480. If a record with this exact time stamp does not exist, the iterator returned is for the next (later) record. If the time stamp is older than the current trim horizon, the iterator returned is for the oldest untrimmed data record (TRIM_HORIZON).
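As a sanity check of the two timestamp forms given above, the ISO-8601 form and the fractional-epoch form denote the same instant (the -00:00 offset is UTC, so it parses as a trailing Z):

```java
import java.time.Instant;

// Sketch: the two example timestamp forms from the docs are the same
// instant: 2016-04-04T19:58:46.480-00:00 (UTC) and 1459799926.480
// (fractional epoch seconds) both mean 1459799926480 ms since the epoch.
public class IteratorTimestamp {
    public static void main(String[] args) {
        long fromIso = Instant.parse("2016-04-04T19:58:46.480Z").toEpochMilli();
        long fromEpoch = Math.round(1459799926.480 * 1000.0);
        System.out.println(fromIso == fromEpoch); // true
        System.out.println(fromIso);              // 1459799926480
    }
}
```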
String streamARN
The ARN of the stream.
String shardIterator
The position in the shard from which to start reading data records sequentially. A shard iterator specifies this position using the sequence number of a data record in a shard.
String streamName
The name of the data stream whose shards you want to list.
You cannot specify this parameter if you specify the NextToken parameter.
String nextToken
When the number of shards in the data stream is greater than the default value for the MaxResults parameter, or if you explicitly specify a value for MaxResults that is less than the number of shards in the data stream, the response includes a pagination token named NextToken. You can specify this NextToken value in a subsequent call to ListShards to list the next set of shards.
Don't specify StreamName or StreamCreationTimestamp if you specify NextToken because the latter unambiguously identifies the stream.
You can optionally specify a value for the MaxResults parameter when you specify NextToken. If you specify a MaxResults value that is less than the number of shards that the operation returns if you don't specify MaxResults, the response will contain a new NextToken value. You can use the new NextToken value in a subsequent call to the ListShards operation.
Tokens expire after 300 seconds. When you obtain a value for NextToken in the response to a call to ListShards, you have 300 seconds to use that value. If you specify an expired token in a call to ListShards, you get ExpiredNextTokenException.
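The NextToken flow described above is the standard pagination loop. The sketch below models it with a stand-in page function rather than the real ListShards call:

```java
import java.util.*;
import java.util.function.BiFunction;

// Sketch of the NextToken pagination pattern: keep calling until the
// response no longer carries a token. The page function stands in for
// ListShards: it takes (nextToken, maxResults) and returns a page plus
// the token for the following page, or null when there are no more pages.
public class ShardPager {
    record Page(List<String> shardIds, String nextToken) {}

    static List<String> listAllShards(BiFunction<String, Integer, Page> listShards,
                                      int maxResults) {
        List<String> shards = new ArrayList<>();
        String token = null;
        do {
            Page page = listShards.apply(token, maxResults);
            shards.addAll(page.shardIds());
            token = page.nextToken();   // null => no more pages
        } while (token != null);
        return shards;
    }

    public static void main(String[] args) {
        List<String> all = List.of("shardId-0", "shardId-1", "shardId-2");
        // Fake pages of size maxResults carved out of the list above;
        // the "token" is simply the next start index as a string.
        BiFunction<String, Integer, Page> fake = (tok, max) -> {
            int start = tok == null ? 0 : Integer.parseInt(tok);
            int end = Math.min(start + max, all.size());
            String next = end < all.size() ? String.valueOf(end) : null;
            return new Page(all.subList(start, end), next);
        };
        System.out.println(listAllShards(fake, 2)); // [shardId-0, shardId-1, shardId-2]
    }
}
```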
String exclusiveStartShardId
Specify this parameter to indicate that you want to list the shards starting with the shard whose ID immediately follows ExclusiveStartShardId.
If you don't specify this parameter, the default behavior is for ListShards to list the shards starting with the first one in the stream.
You cannot specify this parameter if you specify NextToken.
Integer maxResults
The maximum number of shards to return in a single call to ListShards. The default value is 1000. If you specify a value greater than 1000, at most 1000 results are returned.
When the number of shards to be listed is greater than the value of MaxResults, the response contains a NextToken value that you can use in a subsequent call to ListShards to list the next set of shards.
Date streamCreationTimestamp
Specify this input parameter to distinguish data streams that have the same name. For example, if you create a data stream and then delete it, and you later create another data stream with the same name, you can use this input parameter to specify which of the two streams you want to list the shards for.
You cannot specify this parameter if you specify the NextToken parameter.
ShardFilter shardFilter
Enables you to filter out the response of the ListShards API. You can only specify one filter at a time.
If you use the ShardFilter parameter when invoking the ListShards API, the Type is the required property and must be specified. If you specify the AT_TRIM_HORIZON, FROM_TRIM_HORIZON, or AT_LATEST types, you do not need to specify either the ShardId or the Timestamp optional properties.
If you specify the AFTER_SHARD_ID type, you must also provide the value for the optional ShardId property. The ShardId property is identical in functionality to the ExclusiveStartShardId parameter of the ListShards API. When the ShardId property is specified, the response includes the shards starting with the shard whose ID immediately follows the ShardId that you provided.
If you specify the AT_TIMESTAMP or FROM_TIMESTAMP type, you must also provide the value for the optional Timestamp property. If you specify the AT_TIMESTAMP type, then all shards that were open at the provided timestamp are returned. If you specify the FROM_TIMESTAMP type, then all shards starting from the provided timestamp to TIP are returned.
String streamARN
The ARN of the stream.
SdkInternalList<T> shards
An array of JSON objects. Each object represents one shard and specifies the IDs of the shard, the shard's parent, and the shard that's adjacent to the shard's parent. Each object also contains the starting and ending hash keys and the starting and ending sequence numbers for the shard.
String nextToken
When the number of shards in the data stream is greater than the default value for the MaxResults parameter, or if you explicitly specify a value for MaxResults that is less than the number of shards in the data stream, the response includes a pagination token named NextToken. You can specify this NextToken value in a subsequent call to ListShards to list the next set of shards.
For more information about the use of this pagination token when calling the ListShards operation, see ListShardsInput$NextToken.
Tokens expire after 300 seconds. When you obtain a value for NextToken in the response to a call to ListShards, you have 300 seconds to use that value. If you specify an expired token in a call to ListShards, you get ExpiredNextTokenException.
String streamARN
The ARN of the Kinesis data stream for which you want to list the registered consumers. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String nextToken
When the number of consumers that are registered with the data stream is greater than the default value for the MaxResults parameter, or if you explicitly specify a value for MaxResults that is less than the number of consumers that are registered with the data stream, the response includes a pagination token named NextToken. You can specify this NextToken value in a subsequent call to ListStreamConsumers to list the next set of registered consumers.
Don't specify StreamName or StreamCreationTimestamp if you specify NextToken because the latter unambiguously identifies the stream.
You can optionally specify a value for the MaxResults parameter when you specify NextToken. If you specify a MaxResults value that is less than the number of consumers that the operation returns if you don't specify MaxResults, the response will contain a new NextToken value. You can use the new NextToken value in a subsequent call to the ListStreamConsumers operation to list the next set of consumers.
Tokens expire after 300 seconds. When you obtain a value for NextToken in the response to a call to ListStreamConsumers, you have 300 seconds to use that value. If you specify an expired token in a call to ListStreamConsumers, you get ExpiredNextTokenException.
Integer maxResults
The maximum number of consumers that you want a single call of ListStreamConsumers to return. The default value is 100. If you specify a value greater than 100, at most 100 results are returned.
Date streamCreationTimestamp
Specify this input parameter to distinguish data streams that have the same name. For example, if you create a data stream and then delete it, and you later create another data stream with the same name, you can use this input parameter to specify which of the two streams you want to list the consumers for.
You can't specify this parameter if you specify the NextToken parameter.
SdkInternalList<T> consumers
An array of JSON objects. Each object represents one registered consumer.
String nextToken
When the number of consumers that are registered with the data stream is greater than the default value for the MaxResults parameter, or if you explicitly specify a value for MaxResults that is less than the number of registered consumers, the response includes a pagination token named NextToken. You can specify this NextToken value in a subsequent call to ListStreamConsumers to list the next set of registered consumers. For more information about the use of this pagination token when calling the ListStreamConsumers operation, see ListStreamConsumersInput$NextToken.
Tokens expire after 300 seconds. When you obtain a value for NextToken in the response to a call to ListStreamConsumers, you have 300 seconds to use that value. If you specify an expired token in a call to ListStreamConsumers, you get ExpiredNextTokenException.
SdkInternalList<T> streamNames
The names of the streams that are associated with the Amazon Web Services account making the ListStreams request.
Boolean hasMoreStreams
If set to true, there are more streams available to list.
String nextToken
SdkInternalList<T> streamSummaries
String streamName
The name of the stream.
String exclusiveStartTagKey
The key to use as the starting point for the list of tags. If this parameter is set, ListTagsForStream gets all tags that occur after ExclusiveStartTagKey.
Integer limit
The number of tags to return. If this number is less than the total number of tags associated with the stream, HasMoreTags is set to true. To list additional tags, set ExclusiveStartTagKey to the last key in the response.
String streamARN
The ARN of the stream.
SdkInternalList<T> tags
A list of tags associated with StreamName, starting with the first tag after ExclusiveStartTagKey and up to the specified Limit.
Boolean hasMoreTags
If set to true, more tags are available. To request additional tags, set ExclusiveStartTagKey to the key of the last tag returned.
String streamName
The name of the stream for the merge.
String shardToMerge
The shard ID of the shard to combine with the adjacent shard for the merge.
String adjacentShardToMerge
The shard ID of the adjacent shard for the merge.
String streamARN
The ARN of the stream.
String streamName
The name of the stream to put the data record into.
ByteBuffer data
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).
String partitionKey
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
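The MD5 mapping described above can be reproduced directly; the shard assignment itself is done by the service, so the sketch only shows the key-to-128-bit-integer step:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of the hashing described above: MD5 maps a partition key to a
// 128-bit unsigned integer, and the shard whose hash key range contains
// that value receives the record. The service performs the real
// assignment; this only reproduces the key-to-integer step.
public class PartitionKeyHash {
    static BigInteger hashKey(String partitionKey) {
        try {
            byte[] md5 = MessageDigest.getInstance("MD5")
                    .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
            return new BigInteger(1, md5); // signum 1 => unsigned 128-bit value
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always available
        }
    }

    public static void main(String[] args) {
        BigInteger h = hashKey("user-42");
        // The value always fits in 128 bits, and equal keys hash equally,
        // which is why same-key records land on the same shard.
        System.out.println(h.bitLength() <= 128);         // true
        System.out.println(h.equals(hashKey("user-42"))); // true
    }
}
```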
String explicitHashKey
The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.
String sequenceNumberForOrdering
Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key.
Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records are coarsely ordered based on arrival time.
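The chaining rule above can be sketched as a loop that feeds each returned sequence number into the next put; putRecord here is a stand-in function, not the real SDK call:

```java
import java.util.*;
import java.util.function.BiFunction;

// Sketch of SequenceNumberForOrdering chaining: pass the sequence number
// returned for record n-1 when putting record n. putRecord stands in for
// the real PutRecord call and returns the assigned sequence number.
public class OrderedPuts {
    static List<String> putInOrder(BiFunction<String, String, String> putRecord,
                                   List<String> payloads) {
        List<String> sequenceNumbers = new ArrayList<>();
        String previous = null; // the first record has no predecessor
        for (String payload : payloads) {
            previous = putRecord.apply(payload, previous); // chain n-1 into n
            sequenceNumbers.add(previous);
        }
        return sequenceNumbers;
    }

    public static void main(String[] args) {
        // Fake service that hands out increasing sequence numbers.
        long[] counter = {100};
        BiFunction<String, String, String> fake =
            (data, seqForOrdering) -> String.valueOf(counter[0]++);
        System.out.println(putInOrder(fake, List.of("a", "b", "c"))); // [100, 101, 102]
    }
}
```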
String streamARN
The ARN of the stream.
String shardId
The shard ID of the shard where the data record was placed.
String sequenceNumber
The sequence number identifier that was assigned to the put data record. The sequence number for the record is unique across all records in the stream. A sequence number is the identifier associated with every record put into the stream.
String encryptionType
The encryption type to use on the record. This parameter can be one of the following values:
NONE: Do not encrypt the records in the stream.
KMS: Use server-side encryption on the records in the stream using a customer-managed Amazon Web Services KMS key.
SdkInternalList<T> records
The records associated with the request.
String streamName
The stream name associated with the request.
String streamARN
The ARN of the stream.
ByteBuffer data
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).
String explicitHashKey
The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
String partitionKey
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
Integer failedRecordCount
The number of unsuccessfully processed records in a PutRecords request.
SdkInternalList<T> records
An array of successfully and unsuccessfully processed record results. A record that is successfully added to a stream includes SequenceNumber and ShardId in the result. A record that fails to be added to a stream includes ErrorCode and ErrorMessage in the result.
String encryptionType
The encryption type used on the records. This parameter can be one of the following values:
NONE: Do not encrypt the records.
KMS: Use server-side encryption on the records using a customer-managed Amazon Web Services KMS key.
String sequenceNumber
The sequence number for an individual record result.
String shardId
The shard ID for an individual record result.
String errorCode
The error code for an individual record result. ErrorCodes can be either ProvisionedThroughputExceededException or InternalFailure.
String errorMessage
The error message for an individual record result. An ErrorCode value of ProvisionedThroughputExceededException has an error message that includes the account ID, stream name, and shard ID. An ErrorCode value of InternalFailure has the error message "Internal Service Failure".
String sequenceNumber
The unique identifier of the record within its shard.
Date approximateArrivalTimestamp
The approximate time that the record was inserted into the stream.
ByteBuffer data
The data blob. The data in the blob is both opaque and immutable to Kinesis Data Streams, which does not inspect, interpret, or change the data in the blob in any way. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).
String partitionKey
Identifies which shard in the stream the data record is assigned to.
String encryptionType
The encryption type used on the record. This parameter can be one of the following values:
NONE: Do not encrypt the records in the stream.
KMS: Use server-side encryption on the records in the stream using a customer-managed Amazon Web Services KMS key.
String streamARN
The ARN of the Kinesis data stream that you want to register the consumer with. For more info, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String consumerName
For a given Kinesis data stream, each consumer must have a unique name. However, consumer names don't have to be unique across data streams.
Consumer consumer
An object that represents the details of the consumer you registered. When you register a consumer, it gets an ARN that is generated by Kinesis Data Streams.
String streamName
The name of the stream.
SdkInternalList<T> tagKeys
A list of tag keys. Each corresponding tag is removed from the stream.
String streamARN
The ARN of the stream.
String shardId
The unique identifier of the shard within the stream.
String parentShardId
The shard ID of the shard's parent.
String adjacentParentShardId
The shard ID of the shard adjacent to the shard's parent.
HashKeyRange hashKeyRange
The range of possible hash key values for the shard, which is a set of ordered contiguous positive integers.
SequenceNumberRange sequenceNumberRange
The range of possible sequence numbers for the shard.
String type
The shard type specified in the ShardFilter parameter. This is a required property of the ShardFilter parameter.
You can specify the following valid values:
AFTER_SHARD_ID - the response includes all the shards, starting with the shard whose ID immediately follows the ShardId that you provided.
AT_TRIM_HORIZON - the response includes all the shards that were open at TRIM_HORIZON.
FROM_TRIM_HORIZON - (default) the response includes all the shards within the retention period of the data stream (trim to tip).
AT_LATEST - the response includes only the currently open shards of the data stream.
AT_TIMESTAMP - the response includes all shards whose start timestamp is less than or equal to the given timestamp and whose end timestamp is greater than or equal to the given timestamp or still open.
FROM_TIMESTAMP - the response includes all closed shards whose end timestamp is greater than or equal to the given timestamp and also all open shards. Corrected to the TRIM_HORIZON of the data stream if FROM_TIMESTAMP is less than the TRIM_HORIZON value.
String shardId
The exclusive start shard ID specified in the ShardFilter parameter. This property can only be used if the AFTER_SHARD_ID shard type is specified.
Date timestamp
The timestamp specified in the ShardFilter parameter. A timestamp is a Unix epoch date with precision in milliseconds. For example, 2016-04-04T19:58:46.480-00:00 or 1459799926.480. This property can only be used if the FROM_TIMESTAMP or AT_TIMESTAMP shard types are specified.
String streamName
The name of the stream for the shard split.
String shardToSplit
The shard ID of the shard to split.
String newStartingHashKey
A hash key value for the starting hash key of one of the child shards created by the split. The hash key range for a given shard constitutes a set of ordered contiguous positive integers. The value for NewStartingHashKey must be in the range of hash keys being mapped into the shard. The NewStartingHashKey hash key value and all higher hash key values in the hash key range are distributed to one of the child shards. All the lower hash key values in the range are distributed to the other child shard.
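For an even split, a common convention is to pick the midpoint of the parent shard's hash key range as NewStartingHashKey; the midpoint choice below is an illustration, since any value inside the range is accepted:

```java
import java.math.BigInteger;

// Sketch of picking NewStartingHashKey for an even split: the midpoint of
// the parent shard's hash key range. Hash keys are 128-bit unsigned
// integers, so BigInteger avoids overflow. The midpoint convention is an
// assumption for illustration, not a requirement of the API.
public class EvenSplit {
    static final BigInteger MAX_HASH_KEY =
        BigInteger.TWO.pow(128).subtract(BigInteger.ONE); // 2^128 - 1

    static BigInteger midpoint(BigInteger startingHashKey, BigInteger endingHashKey) {
        return startingHashKey.add(endingHashKey).divide(BigInteger.TWO);
    }

    public static void main(String[] args) {
        // A full-range shard splits at roughly 2^127.
        BigInteger mid = midpoint(BigInteger.ZERO, MAX_HASH_KEY);
        System.out.println(
            mid.equals(BigInteger.TWO.pow(127).subtract(BigInteger.ONE))); // true
    }
}
```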
String streamARN
The ARN of the stream.
String streamName
The name of the stream for which to start encrypting records.
String encryptionType
The encryption type to use. The only valid value is KMS.
String keyId
The GUID for the customer-managed Amazon Web Services KMS key to use for encryption. This value can be a globally unique identifier, a fully specified Amazon Resource Name (ARN) to either an alias or a key, or an alias name prefixed by "alias/". You can also use a master key owned by Kinesis Data Streams by specifying the alias aws/kinesis.
Key ARN example: arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012
Alias ARN example: arn:aws:kms:us-east-1:123456789012:alias/MyAliasName
Globally unique key ID example: 12345678-1234-1234-1234-123456789012
Alias name example: alias/MyAliasName
Master key owned by Kinesis Data Streams: alias/aws/kinesis
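A rough way to tell the listed keyId forms apart; the patterns below are illustrative approximations of the example formats, not official KMS validation:

```java
// Sketch: classify a keyId string into the forms listed above. The checks
// are approximations based on the example formats, not an authoritative
// validator for KMS key identifiers.
public class KeyIdForm {
    static String classify(String keyId) {
        if (keyId.startsWith("arn:") && keyId.contains(":key/"))   return "key ARN";
        if (keyId.startsWith("arn:") && keyId.contains(":alias/")) return "alias ARN";
        if (keyId.equals("alias/aws/kinesis"))                     return "service-owned alias";
        if (keyId.startsWith("alias/"))                            return "alias name";
        return "key ID";
    }

    public static void main(String[] args) {
        System.out.println(classify(
            "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012")); // key ARN
        System.out.println(classify("alias/aws/kinesis"));                   // service-owned alias
        System.out.println(classify("12345678-1234-1234-1234-123456789012")); // key ID
    }
}
```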
String streamARN
The ARN of the stream.
String streamName
The name of the stream on which to stop encrypting records.
String encryptionType
The encryption type. The only valid value is KMS.
String keyId
The GUID for the customer-managed Amazon Web Services KMS key to use for encryption. This value can be a globally unique identifier, a fully specified Amazon Resource Name (ARN) to either an alias or a key, or an alias name prefixed by "alias/". You can also use a master key owned by Kinesis Data Streams by specifying the alias aws/kinesis.
Key ARN example: arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012
Alias ARN example: arn:aws:kms:us-east-1:123456789012:alias/MyAliasName
Globally unique key ID example: 12345678-1234-1234-1234-123456789012
Alias name example: alias/MyAliasName
Master key owned by Kinesis Data Streams: alias/aws/kinesis
String streamARN
The ARN of the stream.
String streamName
The name of the stream being described.
String streamARN
The Amazon Resource Name (ARN) for the stream being described.
String streamStatus
The current status of the stream being described. The stream status is one of the following states:
CREATING - The stream is being created. Kinesis Data Streams immediately returns and sets StreamStatus to CREATING.
DELETING - The stream is being deleted. The specified stream is in the DELETING state until Kinesis Data Streams completes the deletion.
ACTIVE - The stream exists and is ready for read and write operations or deletion. You should perform read and write operations only on an ACTIVE stream.
UPDATING - Shards in the stream are being merged or split. Read and write operations continue to work while the stream is in the UPDATING state.
StreamModeDetails streamModeDetails
Specifies the capacity mode to which you want to set your data stream. Currently, in Kinesis Data Streams, you can choose between an on-demand capacity mode and a provisioned capacity mode for your data streams.
SdkInternalList<T> shards
The shards that comprise the stream.
Boolean hasMoreShards
If set to true, more shards in the stream are available to describe.
Integer retentionPeriodHours
The current retention period, in hours. Minimum value of 24. Maximum value of 168.
Date streamCreationTimestamp
The approximate time that the stream was created.
SdkInternalList<T> enhancedMonitoring
Represents the current enhanced monitoring settings of the stream.
String encryptionType
The server-side encryption type used on the stream. This parameter can be one of the following values:
NONE: Do not encrypt the records in the stream.
KMS: Use server-side encryption on the records in the stream using a customer-managed Amazon Web Services KMS key.
String keyId
The GUID for the customer-managed Amazon Web Services KMS key to use for encryption. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by "alias/". You can also use a master key owned by Kinesis Data Streams by specifying the alias aws/kinesis.
Key ARN example: arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012
Alias ARN example: arn:aws:kms:us-east-1:123456789012:alias/MyAliasName
Globally unique key ID example: 12345678-1234-1234-1234-123456789012
Alias name example: alias/MyAliasName
Master key owned by Kinesis Data Streams: alias/aws/kinesis
String streamName
The name of the stream being described.
String streamARN
The Amazon Resource Name (ARN) for the stream being described.
String streamStatus
The current status of the stream being described. The stream status is one of the following states:
CREATING
- The stream is being created. Kinesis Data Streams immediately returns and sets
StreamStatus
to CREATING
.
DELETING
- The stream is being deleted. The specified stream is in the DELETING
state
until Kinesis Data Streams completes the deletion.
ACTIVE
- The stream exists and is ready for read and write operations or deletion. You should
perform read and write operations only on an ACTIVE
stream.
UPDATING
- Shards in the stream are being merged or split. Read and write operations continue to
work while the stream is in the UPDATING
state.
StreamModeDetails streamModeDetails
Specifies the capacity mode to which you want to set your data stream. Currently, in Kinesis Data Streams, you can choose between an on-demand capacity mode and a provisioned capacity mode for your data streams.
Integer retentionPeriodHours
The current retention period, in hours.
Date streamCreationTimestamp
The approximate time that the stream was created.
SdkInternalList<T> enhancedMonitoring
Represents the current enhanced monitoring settings of the stream.
String encryptionType
The encryption type used. This value is one of the following:
KMS
NONE
String keyId
The GUID for the customer-managed Amazon Web Services KMS key to use for encryption. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by "alias/". You can also use a master key owned by Kinesis Data Streams by specifying the alias aws/kinesis.
Key ARN example: arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012
Alias ARN example: arn:aws:kms:us-east-1:123456789012:alias/MyAliasName
Globally unique key ID example: 12345678-1234-1234-1234-123456789012
Alias name example: alias/MyAliasName
Master key owned by Kinesis Data Streams: alias/aws/kinesis
Integer openShardCount
The number of open shards in the stream.
Integer consumerCount
The number of enhanced fan-out consumers registered with the stream.
String streamMode
Specifies the capacity mode to which you want to set your data stream. Currently, in Kinesis Data Streams, you can choose between an on-demand capacity mode and a provisioned capacity mode for your data streams.
String streamName
The name of a stream.
String streamARN
The ARN of the stream.
String streamStatus
The status of the stream.
StreamModeDetails streamModeDetails
Date streamCreationTimestamp
The timestamp at which the stream was created.
String key
A unique identifier for the tag. Maximum length: 128 characters. Valid characters: Unicode letters, digits, white space, _ . / = + - % @
String value
An optional string, typically used to describe or define the tag. Maximum length: 256 characters. Valid characters: Unicode letters, digits, white space, _ . / = + - % @
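The tag limits above (key up to 128 characters, value up to 256, a restricted character set) can be checked client-side before calling the API. This is a simplified sketch; the character-class check is an assumption based on the description above, and the service's own validation is authoritative.

```java
// Sketch of client-side checks mirroring the documented tag limits:
// key <= 128 chars, value <= 256 chars, and the listed character set
// (Unicode letters, digits, white space, _ . / = + - % @).
public class TagLimits {
    private static final String ALLOWED_SPECIALS = "_./=+-%@";

    static boolean allowedChars(String s) {
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (!Character.isLetterOrDigit(c) && !Character.isWhitespace(c)
                    && ALLOWED_SPECIALS.indexOf(c) < 0) {
                return false;
            }
        }
        return true;
    }

    public static boolean isValidTag(String key, String value) {
        return key != null && !key.isEmpty() && key.length() <= 128
                && allowedChars(key)
                && value != null && value.length() <= 256
                && allowedChars(value);
    }

    public static void main(String[] args) {
        assert isValidTag("environment", "production");
        assert !isValidTag("bad*key", "value"); // '*' is not in the allowed set
        System.out.println("ok");
    }
}
```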
String streamName
The name of the stream.
Integer targetShardCount
The new number of shards. This value has the following default limits. By default, you cannot do the following:
Set this value to more than double your current shard count for a stream.
Set this value below half your current shard count for a stream.
Set this value to more than 10000 shards in a stream (the default limit for shard count per stream is 10000 per account per region), unless you request a limit increase.
Scale a stream with more than 10000 shards down unless you set this value to less than 10000 shards.
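The default limits listed above can be expressed as a small validation routine. Note these are the documented *default* limits; an account's actual quota can be raised on request, so treat this as an illustrative sketch rather than authoritative validation.

```java
// Sketch of the default targetShardCount limits described above.
// These are defaults only; actual account quotas may differ.
public class ShardScalingLimits {
    static final int DEFAULT_MAX_SHARDS = 10_000;

    public static boolean withinDefaultLimits(int current, int target) {
        if (target > current * 2) return false;        // more than double current
        if (target * 2 < current) return false;        // below half of current
        if (target > DEFAULT_MAX_SHARDS) return false; // above per-stream default cap
        // Scaling a >10,000-shard stream down requires a target below the cap.
        if (current > DEFAULT_MAX_SHARDS && target < current
                && target >= DEFAULT_MAX_SHARDS) return false;
        return true;
    }

    public static void main(String[] args) {
        assert withinDefaultLimits(4, 8);    // doubling is allowed
        assert !withinDefaultLimits(4, 9);   // more than double is not
        assert withinDefaultLimits(8, 4);    // halving is allowed
        assert !withinDefaultLimits(8, 3);   // below half is not
        System.out.println("ok");
    }
}
```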
String scalingType
The scaling type. Uniform scaling creates shards of equal size.
String streamARN
The ARN of the stream.
String streamARN
Specifies the ARN of the data stream whose capacity mode you want to update.
StreamModeDetails streamModeDetails
Specifies the capacity mode to which you want to set your data stream. Currently, in Kinesis Data Streams, you can choose between an on-demand capacity mode and a provisioned capacity mode for your data streams.
String applicationName
The Kinesis Analytics application name.
Long currentApplicationVersionId
The version ID of the Kinesis Analytics application.
CloudWatchLoggingOption cloudWatchLoggingOption
Provides the CloudWatch log stream Amazon Resource Name (ARN) and the IAM role ARN. Note: To write application messages to CloudWatch, the IAM role that is used must have the PutLogEvents policy action enabled.
String applicationName
Name of the application to which you want to add the input processing configuration.
Long currentApplicationVersionId
Version of the application to which you want to add the input processing configuration. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, the ConcurrentModificationException is returned.
String inputId
The ID of the input configuration to add the input processing configuration to. You can get a list of the input IDs for an application using the DescribeApplication operation.
InputProcessingConfiguration inputProcessingConfiguration
The InputProcessingConfiguration to add to the application.
String applicationName
Name of your existing Amazon Kinesis Analytics application to which you want to add the streaming source.
Long currentApplicationVersionId
Current version of your Amazon Kinesis Analytics application. You can use the DescribeApplication operation to find the current application version.
Input input
The Input to add.
String applicationName
Name of the application to which you want to add the output configuration.
Long currentApplicationVersionId
Version of the application to which you want to add the output configuration. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, the ConcurrentModificationException is returned.
Output output
An array of objects, each describing one output configuration. In the output configuration, you specify the name of an in-application stream, a destination (that is, an Amazon Kinesis stream, an Amazon Kinesis Firehose delivery stream, or an AWS Lambda function), and the record format to use when writing to the destination.
String applicationName
Name of an existing application.
Long currentApplicationVersionId
Version of the application for which you are adding the reference data source. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, the ConcurrentModificationException is returned.
ReferenceDataSource referenceDataSource
The reference data source can be an object in your Amazon S3 bucket. Amazon Kinesis Analytics reads the object and copies the data into the in-application table that is created. You provide an S3 bucket, object key name, and the resulting in-application table that is created. You must also provide an IAM role with the necessary permissions that Amazon Kinesis Analytics can assume to read the object from your S3 bucket on your behalf.
String applicationName
Name of the application.
String applicationDescription
Description of the application.
String applicationARN
ARN of the application.
String applicationStatus
Status of the application.
Date createTimestamp
Time stamp when the application version was created.
Date lastUpdateTimestamp
Time stamp when the application was last updated.
List<E> inputDescriptions
Describes the application input configuration. For more information, see Configuring Application Input.
List<E> outputDescriptions
Describes the application output configuration. For more information, see Configuring Application Output.
List<E> referenceDataSourceDescriptions
Describes reference data sources configured for the application. For more information, see Configuring Application Input.
List<E> cloudWatchLoggingOptionDescriptions
Describes the CloudWatch log streams that are configured to receive application messages. For more information about using CloudWatch log streams with Amazon Kinesis Analytics applications, see Working with Amazon CloudWatch Logs.
String applicationCode
Returns the application code that you provided to perform data analysis on any of the in-application streams in your application.
Long applicationVersionId
Provides the current application version.
List<E> inputUpdates
Describes application input configuration updates.
String applicationCodeUpdate
Describes application code updates.
List<E> outputUpdates
Describes application output configuration updates.
List<E> referenceDataSourceUpdates
Describes application reference data source updates.
List<E> cloudWatchLoggingOptionUpdates
Describes application CloudWatch logging option updates.
String cloudWatchLoggingOptionId
ID of the CloudWatch logging option description.
String logStreamARN
ARN of the CloudWatch log to receive application messages.
String roleARN
IAM ARN of the role to use to send application messages. Note: To write application messages to CloudWatch, the IAM role used must have the PutLogEvents policy action enabled.
String cloudWatchLoggingOptionId
ID of the CloudWatch logging option to update
String logStreamARNUpdate
ARN of the CloudWatch log to receive application messages.
String roleARNUpdate
IAM ARN of the role to use to send application messages. Note: To write application messages to CloudWatch, the IAM role used must have the PutLogEvents policy action enabled.
String applicationName
Name of your Amazon Kinesis Analytics application (for example, sample-app).
String applicationDescription
Summary description of the application.
List<E> inputs
Use this parameter to configure the application input.
You can configure your application to receive input from a single streaming source. In this configuration, you map this streaming source to an in-application stream that is created. Your application code can then query the in-application stream like a table (you can think of it as a constantly updating table).
For the streaming source, you provide its Amazon Resource Name (ARN) and format of data on the stream (for example, JSON, CSV, etc.). You also must provide an IAM role that Amazon Kinesis Analytics can assume to read this stream on your behalf.
To create the in-application stream, you need to specify a schema to transform your data into a schematized version used in SQL. In the schema, you provide the necessary mapping of the data elements in the streaming source to record columns in the in-application stream.
List<E> outputs
You can configure application output to write data from any of the in-application streams to up to three destinations.
These destinations can be Amazon Kinesis streams, Amazon Kinesis Firehose delivery streams, AWS Lambda destinations, or any combination of the three.
In the configuration, you specify the in-application stream name, the destination stream or Lambda function Amazon Resource Name (ARN), and the format to use when writing data. You must also provide an IAM role that Amazon Kinesis Analytics can assume to write to the destination stream or Lambda function on your behalf.
In the output configuration, you also provide the output stream or Lambda function ARN. For stream destinations, you provide the format of data in the stream (for example, JSON, CSV). You also must provide an IAM role that Amazon Kinesis Analytics can assume to write to the stream or Lambda function on your behalf.
List<E> cloudWatchLoggingOptions
Use this parameter to configure a CloudWatch log stream to monitor application configuration errors. For more information, see Working with Amazon CloudWatch Logs.
String applicationCode
One or more SQL statements that read input data, transform it, and generate output. For example, you can write a SQL statement that reads data from one in-application stream, generates a running average of the number of advertisement clicks by vendor, and inserts the resulting rows into another in-application stream using pumps. For more information about the typical pattern, see Application Code.
You can provide a series of such SQL statements, where the output of one statement can be used as the input for the next statement. You store intermediate results by creating in-application streams and pumps.
Note that the application code must create the streams with the names specified in the Outputs. For example, if your Outputs defines output streams named ExampleOutputStream1 and ExampleOutputStream2, then your application code must create these streams.
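The stream-and-pump pattern described above can be sketched as follows. The SQL is held in a Java string the way application code is typically passed to CreateApplication; the stream and column names here are hypothetical examples, and the exact SQL is a sketch of the documented pattern, not verbatim service syntax.

```java
// Sketch of application code following the pump pattern described above:
// create an output in-application stream, then a pump that inserts rows
// into it from the source stream. Names and columns are hypothetical.
public class ApplicationCodeExample {
    static final String APPLICATION_CODE =
        "CREATE OR REPLACE STREAM \"ExampleOutputStream1\" "
            + "(ticker VARCHAR(4), click_count INTEGER);\n"
        + "CREATE OR REPLACE PUMP \"STREAM_PUMP\" AS "
            + "INSERT INTO \"ExampleOutputStream1\" "
            + "SELECT STREAM ticker, COUNT(*) "
            + "FROM \"SOURCE_SQL_STREAM_001\" "
            + "GROUP BY ticker, "
            + "STEP(\"SOURCE_SQL_STREAM_001\".ROWTIME BY INTERVAL '10' SECOND);";

    public static void main(String[] args) {
        // The application code must create the streams named in Outputs.
        assert APPLICATION_CODE.contains("ExampleOutputStream1");
        System.out.println("ok");
    }
}
```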
List<E> tags
A list of one or more tags to assign to the application. A tag is a key-value pair that identifies an application. Note that the maximum number of application tags includes system tags. The maximum number of user-defined application tags is 50. For more information, see Using Tagging.
ApplicationSummary applicationSummary
In response to your CreateApplication request, Amazon Kinesis Analytics returns a response with a summary of the application it created, including the application Amazon Resource Name (ARN), name, and status.
String applicationName
The Kinesis Analytics application name.
Long currentApplicationVersionId
The version ID of the Kinesis Analytics application.
String cloudWatchLoggingOptionId
The CloudWatchLoggingOptionId of the CloudWatch logging option to delete. You can get the CloudWatchLoggingOptionId by using the DescribeApplication operation.
String applicationName
The Kinesis Analytics application name.
Long currentApplicationVersionId
The version ID of the Kinesis Analytics application.
String inputId
The ID of the input configuration from which to delete the input processing configuration. You can get a list of the input IDs for an application by using the DescribeApplication operation.
String applicationName
Amazon Kinesis Analytics application name.
Long currentApplicationVersionId
Amazon Kinesis Analytics application version. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, the ConcurrentModificationException is returned.
String outputId
The ID of the configuration to delete. Each output configuration that is added to the application, either when the application is created or later using the AddApplicationOutput operation, has a unique ID. You need to provide the ID to uniquely identify the output configuration that you want to delete from the application configuration. You can use the DescribeApplication operation to get the specific OutputId.
String applicationName
Name of an existing application.
Long currentApplicationVersionId
Version of the application. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, the ConcurrentModificationException is returned.
String referenceId
ID of the reference data source. When you add a reference data source to your application using the AddApplicationReferenceDataSource, Amazon Kinesis Analytics assigns an ID. You can use the DescribeApplication operation to get the reference ID.
String applicationName
Name of the application.
ApplicationDetail applicationDetail
Provides a description of the application, such as the application Amazon Resource Name (ARN), status, latest version, and input and output configuration details.
String recordFormatType
Specifies the format of the records on the output stream.
String resourceARN
Amazon Resource Name (ARN) of the streaming source.
String roleARN
ARN of the IAM role that Amazon Kinesis Analytics can assume to access the stream on your behalf.
InputStartingPositionConfiguration inputStartingPositionConfiguration
Point at which you want Amazon Kinesis Analytics to start reading records from the specified streaming source for discovery purposes.
S3Configuration s3Configuration
Specify this parameter to discover a schema from data in an Amazon S3 object.
InputProcessingConfiguration inputProcessingConfiguration
The InputProcessingConfiguration to use to preprocess the records before discovering the schema of the records.
SourceSchema inputSchema
Schema inferred from the streaming source. It identifies the format of the data in the streaming source and how each data element maps to corresponding columns in the in-application stream that you can create.
List<E> parsedInputRecords
An array of elements, where each element corresponds to a row in a stream record (a stream record can have more than one row).
List<E> processedInputRecords
Stream data that was modified by the processor specified in the InputProcessingConfiguration
parameter.
List<E> rawInputRecords
Raw stream data that was sampled to infer the schema.
String namePrefix
Name prefix to use when creating an in-application stream. Suppose that you specify a prefix "MyInApplicationStream." Amazon Kinesis Analytics then creates one or more (as per the InputParallelism count you specified) in-application streams with names "MyInApplicationStream_001," "MyInApplicationStream_002," and so on.
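The naming scheme described above can be illustrated with a short sketch. The zero-padded three-digit suffix is inferred from the documentation's examples, so treat this as a demonstration of the pattern rather than the service's implementation.

```java
// Illustrative sketch of the in-application stream names described above:
// "<prefix>_001", "<prefix>_002", ... up to the InputParallelism count.
import java.util.ArrayList;
import java.util.List;

public class InAppStreamNames {
    public static List<String> names(String prefix, int parallelism) {
        List<String> result = new ArrayList<>();
        for (int i = 1; i <= parallelism; i++) {
            result.add(String.format("%s_%03d", prefix, i));
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> generated = names("MyInApplicationStream", 2);
        assert generated.get(0).equals("MyInApplicationStream_001");
        assert generated.get(1).equals("MyInApplicationStream_002");
        System.out.println("ok");
    }
}
```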
InputProcessingConfiguration inputProcessingConfiguration
The InputProcessingConfiguration for the input. An input processor transforms records as they are received from the stream, before the application's SQL code executes. Currently, the only input processing configuration available is InputLambdaProcessor.
KinesisStreamsInput kinesisStreamsInput
If the streaming source is an Amazon Kinesis stream, identifies the stream's Amazon Resource Name (ARN) and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.
Note: Either KinesisStreamsInput or KinesisFirehoseInput is required.
KinesisFirehoseInput kinesisFirehoseInput
If the streaming source is an Amazon Kinesis Firehose delivery stream, identifies the delivery stream's ARN and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.
Note: Either KinesisStreamsInput or KinesisFirehoseInput is required.
InputParallelism inputParallelism
Describes the number of in-application streams to create.
Data from your source is routed to these in-application input streams.
SourceSchema inputSchema
Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created.
Also used to describe the format of the reference data source.
String id
Input source ID. You can get this ID by calling the DescribeApplication operation.
InputStartingPositionConfiguration inputStartingPositionConfiguration
Point at which you want the application to start processing records from the streaming source.
String inputId
Input ID associated with the application input. This is the ID that Amazon Kinesis Analytics assigns to each input configuration you add to your application.
String namePrefix
In-application name prefix.
List<E> inAppStreamNames
Returns the in-application stream names that are mapped to the stream source.
InputProcessingConfigurationDescription inputProcessingConfigurationDescription
The description of the preprocessor that executes on records in this input before the application's code is run.
KinesisStreamsInputDescription kinesisStreamsInputDescription
If an Amazon Kinesis stream is configured as streaming source, provides Amazon Kinesis stream's Amazon Resource Name (ARN) and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.
KinesisFirehoseInputDescription kinesisFirehoseInputDescription
If an Amazon Kinesis Firehose delivery stream is configured as a streaming source, provides the delivery stream's ARN and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.
SourceSchema inputSchema
Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created.
InputParallelism inputParallelism
Describes the configured parallelism (number of in-application streams mapped to the streaming source).
InputStartingPositionConfiguration inputStartingPositionConfiguration
Point at which the application is configured to read from the input stream.
String resourceARN
The ARN of the AWS Lambda function that operates on records in the stream.
To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: AWS Lambda
String roleARN
The ARN of the IAM role that is used to access the AWS Lambda function.
String resourceARN
The ARN of the AWS Lambda function that is used to preprocess the records in the stream.
String roleARN
The ARN of the IAM role that is used to access the AWS Lambda function.
String resourceARNUpdate
The Amazon Resource Name (ARN) of the new AWS Lambda function that is used to preprocess the records in the stream.
To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: AWS Lambda
String roleARNUpdate
The ARN of the new IAM role that is used to access the AWS Lambda function.
Integer countUpdate
Number of in-application streams to create for the specified streaming source.
InputLambdaProcessor inputLambdaProcessor
The InputLambdaProcessor that is used to preprocess the records in the stream before being processed by your application code.
InputLambdaProcessorDescription inputLambdaProcessorDescription
Provides configuration information about the associated InputLambdaProcessorDescription.
InputLambdaProcessorUpdate inputLambdaProcessorUpdate
Provides update information for an InputLambdaProcessor.
RecordFormat recordFormatUpdate
Specifies the format of the records on the streaming source.
String recordEncodingUpdate
Specifies the encoding of the records in the streaming source. For example, UTF-8.
List<E> recordColumnUpdates
A list of RecordColumn objects. Each object describes the mapping of the streaming source element to the corresponding column in the in-application stream.
String inputStartingPosition
The starting position on the stream.
NOW - Start reading just after the most recent record in the stream, at the request timestamp that the customer issued.
TRIM_HORIZON - Start reading at the last untrimmed record in the stream, which is the oldest record available in the stream. This option is not available for an Amazon Kinesis Firehose delivery stream.
LAST_STOPPED_POINT - Resume reading from where the application last stopped reading.
String inputId
Input ID of the application input to be updated.
String namePrefixUpdate
Name prefix for in-application streams that Amazon Kinesis Analytics creates for the specific streaming source.
InputProcessingConfigurationUpdate inputProcessingConfigurationUpdate
Describes updates for an input processing configuration.
KinesisStreamsInputUpdate kinesisStreamsInputUpdate
If an Amazon Kinesis stream is the streaming source to be updated, provides an updated stream Amazon Resource Name (ARN) and IAM role ARN.
KinesisFirehoseInputUpdate kinesisFirehoseInputUpdate
If an Amazon Kinesis Firehose delivery stream is the streaming source to be updated, provides an updated stream ARN and IAM role ARN.
InputSchemaUpdate inputSchemaUpdate
Describes the data format on the streaming source, and how record elements on the streaming source map to columns of the in-application stream that is created.
InputParallelismUpdate inputParallelismUpdate
Describes the parallelism updates (the number of in-application streams Amazon Kinesis Analytics creates for the specific streaming source).
String recordRowPath
Path to the top-level parent that contains the records.
String resourceARNUpdate
Amazon Resource Name (ARN) of the input Amazon Kinesis Firehose delivery stream to read.
String roleARNUpdate
ARN of the IAM role that Amazon Kinesis Analytics can assume to access the stream on your behalf. You need to grant the necessary permissions to this role.
String resourceARNUpdate
Amazon Resource Name (ARN) of the Amazon Kinesis Firehose delivery stream to write to.
String roleARNUpdate
ARN of the IAM role that Amazon Kinesis Analytics can assume to access the stream on your behalf. You need to grant the necessary permissions to this role.
String resourceARNUpdate
Amazon Resource Name (ARN) of the input Amazon Kinesis stream to read.
String roleARNUpdate
ARN of the IAM role that Amazon Kinesis Analytics can assume to access the stream on your behalf. You need to grant the necessary permissions to this role.
String resourceARNUpdate
Amazon Resource Name (ARN) of the Amazon Kinesis stream where you want to write the output.
String roleARNUpdate
ARN of the IAM role that Amazon Kinesis Analytics can assume to access the stream on your behalf. You need to grant the necessary permissions to this role.
String resourceARN
Amazon Resource Name (ARN) of the destination Lambda function to write to.
To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: AWS Lambda
String roleARN
ARN of the IAM role that Amazon Kinesis Analytics can assume to write to the destination function on your behalf. You need to grant the necessary permissions to this role.
String resourceARNUpdate
Amazon Resource Name (ARN) of the destination Lambda function.
To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: AWS Lambda
String roleARNUpdate
ARN of the IAM role that Amazon Kinesis Analytics can assume to write to the destination function on your behalf. You need to grant the necessary permissions to this role.
Integer limit
Maximum number of applications to list.
String exclusiveStartApplicationName
Name of the application to start the list with. When using pagination to retrieve the list, you don't need to specify this parameter in the first request. However, in subsequent requests, you add the last application name from the previous response to get the next page of applications.
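The exclusive-start pagination contract described above can be simulated locally: each page begins strictly after the supplied name. This sketch models the paging logic over a sorted list; it is an illustration of the contract, not a call to the real ListApplications API.

```java
// Sketch of exclusive-start pagination: return up to `limit` names that
// sort strictly after `exclusiveStart` (pass null for the first page).
import java.util.ArrayList;
import java.util.List;

public class ApplicationPaging {
    public static List<String> page(List<String> sortedNames,
                                    String exclusiveStart, int limit) {
        List<String> result = new ArrayList<>();
        for (String name : sortedNames) {
            if (exclusiveStart != null && name.compareTo(exclusiveStart) <= 0) {
                continue; // skip names up to and including the start marker
            }
            if (result.size() == limit) break;
            result.add(name);
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> apps = List.of("app-a", "app-b", "app-c", "app-d");
        List<String> first = page(apps, null, 2);
        // The next request passes the last name from the previous response.
        List<String> next = page(apps, "app-b", 2);
        assert first.equals(List.of("app-a", "app-b"));
        assert next.equals(List.of("app-c", "app-d"));
        System.out.println("ok");
    }
}
```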
String resourceARN
The ARN of the application for which to retrieve tags.
JSONMappingParameters jSONMappingParameters
Provides additional mapping information when JSON is the record format on the streaming source.
CSVMappingParameters cSVMappingParameters
Provides additional mapping information when the record format uses delimiters (for example, CSV).
String name
Name of the in-application stream.
KinesisStreamsOutput kinesisStreamsOutput
Identifies an Amazon Kinesis stream as the destination.
KinesisFirehoseOutput kinesisFirehoseOutput
Identifies an Amazon Kinesis Firehose delivery stream as the destination.
LambdaOutput lambdaOutput
Identifies an AWS Lambda function as the destination.
DestinationSchema destinationSchema
Describes the data format when records are written to the destination. For more information, see Configuring Application Output.
String outputId
A unique identifier for the output configuration.
String name
Name of the in-application stream configured as output.
KinesisStreamsOutputDescription kinesisStreamsOutputDescription
Describes Amazon Kinesis stream configured as the destination where output is written.
KinesisFirehoseOutputDescription kinesisFirehoseOutputDescription
Describes the Amazon Kinesis Firehose delivery stream configured as the destination where output is written.
LambdaOutputDescription lambdaOutputDescription
Describes the AWS Lambda function configured as the destination where output is written.
DestinationSchema destinationSchema
Data format used for writing data to the destination.
String outputId
Identifies the specific output configuration that you want to update.
String nameUpdate
If you want to specify a different in-application stream for this output configuration, use this field to specify the new in-application stream name.
KinesisStreamsOutputUpdate kinesisStreamsOutputUpdate
Describes an Amazon Kinesis stream as the destination for the output.
KinesisFirehoseOutputUpdate kinesisFirehoseOutputUpdate
Describes an Amazon Kinesis Firehose delivery stream as the destination for the output.
LambdaOutputUpdate lambdaOutputUpdate
Describes an AWS Lambda function as the destination for the output.
DestinationSchema destinationSchemaUpdate
Describes the data format when records are written to the destination. For more information, see Configuring Application Output.
String name
Name of the column created in the in-application input stream or reference table.
String mapping
Reference to the data element in the streaming input or the reference data source. This element is required if the RecordFormatType is JSON.
String sqlType
Type of column created in the in-application input stream or reference table.
String recordFormatType
The type of record format.
MappingParameters mappingParameters
When configuring application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source.
String tableName
Name of the in-application table to create.
S3ReferenceDataSource s3ReferenceDataSource
Identifies the S3 bucket and object that contains the reference data. Also identifies the IAM role Amazon Kinesis Analytics can assume to read this object on your behalf. An Amazon Kinesis Analytics application loads reference data only once. If the data changes, you call the UpdateApplication operation to trigger reloading of data into your application.
SourceSchema referenceSchema
Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream.
String referenceId
ID of the reference data source. This is the ID that Amazon Kinesis Analytics assigns when you add the reference data source to your application using the AddApplicationReferenceDataSource operation.
String tableName
The in-application table name created by the specific reference data source configuration.
S3ReferenceDataSourceDescription s3ReferenceDataSourceDescription
Provides the S3 bucket name and the object key name that contains the reference data. It also provides the Amazon Resource Name (ARN) of the IAM role that Amazon Kinesis Analytics can assume to read the Amazon S3 object and populate the in-application reference table.
SourceSchema referenceSchema
Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream.
String referenceId
ID of the reference data source being updated. You can use the DescribeApplication operation to get this value.
String tableNameUpdate
In-application table name that is created by this update.
S3ReferenceDataSourceUpdate s3ReferenceDataSourceUpdate
Describes the S3 bucket name, object key name, and IAM role that Amazon Kinesis Analytics can assume to read the Amazon S3 object on your behalf and populate the in-application reference table.
SourceSchema referenceSchemaUpdate
Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream.
String bucketARN
Amazon Resource Name (ARN) of the S3 bucket.
String fileKey
Object key name containing reference data.
String referenceRoleARN
ARN of the IAM role that the service can assume to read data on your behalf. This role must have permission for the s3:GetObject action on the object, and a trust policy that allows the Amazon Kinesis Analytics service principal to assume this role.
String bucketARN
Amazon Resource Name (ARN) of the S3 bucket.
String fileKey
Amazon S3 object key name.
String referenceRoleARN
ARN of the IAM role that Amazon Kinesis Analytics can assume to read the Amazon S3 object on your behalf to populate the in-application reference table.
String bucketARNUpdate
Amazon Resource Name (ARN) of the S3 bucket.
String fileKeyUpdate
Object key name.
String referenceRoleARNUpdate
ARN of the IAM role that Amazon Kinesis Analytics can assume to read the Amazon S3 object and populate the in-application reference table.
RecordFormat recordFormat
Specifies the format of the records on the streaming source.
String recordEncoding
Specifies the encoding of the records in the streaming source. For example, UTF-8.
List<E> recordColumns
A list of RecordColumn objects.
String applicationName
Name of the application.
List<E> inputConfigurations
Identifies the specific input, by ID, that the application starts consuming. Amazon Kinesis Analytics starts reading the streaming source associated with the input. You can also specify where in the streaming source you want Amazon Kinesis Analytics to start reading.
String applicationName
Name of the running application to stop.
String applicationName
Name of the Amazon Kinesis Analytics application to update.
Long currentApplicationVersionId
The current application version ID. You can use the DescribeApplication operation to get this value.
ApplicationUpdate applicationUpdate
Describes application updates.
Integer intervalInSeconds
Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
Integer sizeInMBs
Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
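The sizing guidance above is simple arithmetic: buffer at least 10 seconds of your typical ingest rate, never below the 5 MB default. A minimal sketch of that calculation (hypothetical helper, not part of the SDK):

```java
// Sketch of the buffering-hint sizing guidance: at least 10 seconds of
// typical throughput, with the 5 MB service default as a floor.
// The method name and the 10-second window come from the recommendation
// above; nothing here calls the actual Firehose API.
public class BufferSizing {
    public static int recommendedSizeInMBs(double typicalIngestMBPerSec) {
        int tenSecondsOfData = (int) Math.ceil(typicalIngestMBPerSec * 10);
        return Math.max(5, tenSecondsOfData); // 5 MB is the service default
    }

    public static void main(String[] args) {
        // 1 MB/sec of ingest -> buffer 10 MB or higher, as the doc suggests.
        System.out.println(recommendedSizeInMBs(1.0)); // 10
        // Low-throughput streams just keep the 5 MB default.
        System.out.println(recommendedSizeInMBs(0.3)); // 5
    }
}
```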
String roleARN
The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
String collectionEndpoint
The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
String indexName
The Serverless offering for Amazon OpenSearch Service index name.
AmazonOpenSearchServerlessBufferingHints bufferingHints
The buffering options. If no value is specified, the default values for AmazonOpenSearchServerlessBufferingHints are used.
AmazonOpenSearchServerlessRetryOptions retryOptions
The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
String s3BackupMode
Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
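The two backup modes route documents as described above. The following sketch models that routing; the AmazonOpenSearchService-failed/ prefix literal comes from the description, but the helper itself is hypothetical and is not how Firehose is implemented:

```java
// Illustration of the S3 backup modes: FailedDocumentsOnly sends only
// failed documents to S3, AllDocuments sends everything, and failed
// documents always get the failed/ suffix appended to the key prefix.
public class BackupModeSketch {
    // Returns the S3 key prefix a document would be written under,
    // or null if the document is not backed up to S3 at all.
    public static String s3KeyPrefix(String basePrefix, String mode, boolean indexedOk) {
        if (mode.equals("FailedDocumentsOnly")) {
            return indexedOk ? null : basePrefix + "AmazonOpenSearchService-failed/";
        }
        // AllDocuments: every record lands in S3; failures get the failed/ suffix.
        return indexedOk ? basePrefix : basePrefix + "AmazonOpenSearchService-failed/";
    }

    public static void main(String[] args) {
        System.out.println(s3KeyPrefix("firehose/", "FailedDocumentsOnly", false));
        System.out.println(s3KeyPrefix("firehose/", "AllDocuments", true));
    }
}
```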
S3DestinationConfiguration s3Configuration
ProcessingConfiguration processingConfiguration
CloudWatchLoggingOptions cloudWatchLoggingOptions
VpcConfiguration vpcConfiguration
String roleARN
The Amazon Resource Name (ARN) of the Amazon Web Services credentials.
String collectionEndpoint
The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
String indexName
The Serverless offering for Amazon OpenSearch Service index name.
AmazonOpenSearchServerlessBufferingHints bufferingHints
The buffering options.
AmazonOpenSearchServerlessRetryOptions retryOptions
The Serverless offering for Amazon OpenSearch Service retry options.
String s3BackupMode
The Amazon S3 backup mode.
S3DestinationDescription s3DestinationDescription
ProcessingConfiguration processingConfiguration
CloudWatchLoggingOptions cloudWatchLoggingOptions
VpcConfigurationDescription vpcConfigurationDescription
String roleARN
The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
String collectionEndpoint
The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
String indexName
The Serverless offering for Amazon OpenSearch Service index name.
AmazonOpenSearchServerlessBufferingHints bufferingHints
The buffering options. If no value is specified, the default values for AmazonOpenSearchServerlessBufferingHints are used.
AmazonOpenSearchServerlessRetryOptions retryOptions
The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
S3DestinationUpdate s3Update
ProcessingConfiguration processingConfiguration
CloudWatchLoggingOptions cloudWatchLoggingOptions
Integer durationInSeconds
After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
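The DurationInSeconds semantics above amount to a retry window measured from the first attempt: retries continue while the window is open, and a value of 0 means no retries at all. A sketch of that behavior (hypothetical model, not the Firehose implementation; the fixed per-attempt interval is an assumption for illustration):

```java
// Models the retry window: the first attempt always happens, and further
// attempts are made only while elapsed time is still inside the window.
// A durationInSeconds of 0 therefore yields exactly one attempt.
public class RetryWindow {
    public static int attemptsWithin(int durationInSeconds, int secondsPerAttempt) {
        int attempts = 1;                      // the first attempt always happens
        int elapsed = secondsPerAttempt;
        while (elapsed < durationInSeconds) {  // keep retrying inside the window
            attempts++;
            elapsed += secondsPerAttempt;
        }
        return attempts;
    }

    public static void main(String[] args) {
        // Default 300-second window, one attempt per minute -> 5 attempts total.
        System.out.println(attemptsWithin(300, 60)); // 5
        // A value of 0 results in no retries, only the initial attempt.
        System.out.println(attemptsWithin(0, 60));   // 1
    }
}
```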
Integer intervalInSeconds
Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
Integer sizeInMBs
Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
String roleARN
The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
String domainARN
The ARN of the Amazon OpenSearch Service domain. The IAM role must have permissions for DescribeElasticsearchDomain, DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after assuming the role specified in RoleARN.
String clusterEndpoint
The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
String indexName
The Amazon OpenSearch Service index name.
String typeName
The Amazon OpenSearch Service type name. For Elasticsearch 6.x, there can be only one type per index. If you try to specify a new type for an existing index that already has another type, Firehose returns an error during run time.
String indexRotationPeriod
The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
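Index rotation appends a timestamp to IndexName, so each rotation period writes to a fresh index that can later be expired. A sketch of what a rotated name looks like; the exact timestamp layout per rotation period is an assumption for illustration (a daily layout is shown), so consult the service documentation for the formats Firehose actually emits:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Illustration of index rotation: a timestamp suffix is appended to the
// configured index name so old indexes can be dropped wholesale.
public class IndexRotationSketch {
    // Assumed daily (OneDay-style) layout: <index>-YYYY-MM-dd
    public static String rotatedIndexName(String indexName, LocalDate date) {
        return indexName + "-" + date.format(DateTimeFormatter.ISO_LOCAL_DATE);
    }

    public static void main(String[] args) {
        System.out.println(rotatedIndexName("weblogs", LocalDate.of(2024, 1, 15)));
        // weblogs-2024-01-15
    }
}
```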
AmazonopensearchserviceBufferingHints bufferingHints
The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
AmazonopensearchserviceRetryOptions retryOptions
The retry behavior in case Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
String s3BackupMode
Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
S3DestinationConfiguration s3Configuration
ProcessingConfiguration processingConfiguration
CloudWatchLoggingOptions cloudWatchLoggingOptions
VpcConfiguration vpcConfiguration
DocumentIdOptions documentIdOptions
Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
String roleARN
The Amazon Resource Name (ARN) of the Amazon Web Services credentials.
String domainARN
The ARN of the Amazon OpenSearch Service domain.
String clusterEndpoint
The endpoint to use when communicating with the cluster. Firehose uses either this ClusterEndpoint or the DomainARN field to send data to Amazon OpenSearch Service.
String indexName
The Amazon OpenSearch Service index name.
String typeName
The Amazon OpenSearch Service type name. This applies to Elasticsearch 6.x and lower versions. For Elasticsearch 7.x and OpenSearch Service 1.x, there's no value for TypeName.
String indexRotationPeriod
The Amazon OpenSearch Service index rotation period.
AmazonopensearchserviceBufferingHints bufferingHints
The buffering options.
AmazonopensearchserviceRetryOptions retryOptions
The Amazon OpenSearch Service retry options.
String s3BackupMode
The Amazon S3 backup mode.
S3DestinationDescription s3DestinationDescription
ProcessingConfiguration processingConfiguration
CloudWatchLoggingOptions cloudWatchLoggingOptions
VpcConfigurationDescription vpcConfigurationDescription
DocumentIdOptions documentIdOptions
Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
String roleARN
The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
String domainARN
The ARN of the Amazon OpenSearch Service domain. The IAM role must have permissions for DescribeDomain, DescribeDomains, and DescribeDomainConfig after assuming the IAM role specified in RoleARN.
String clusterEndpoint
The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
String indexName
The Amazon OpenSearch Service index name.
String typeName
The Amazon OpenSearch Service type name. For Elasticsearch 6.x, there can be only one type per index. If you try to specify a new type for an existing index that already has another type, Firehose returns an error during runtime.
If you upgrade Elasticsearch from 6.x to 7.x and don’t update your delivery stream, Firehose still delivers data to Elasticsearch with the old index name and type name. If you want to update your delivery stream with a new index name, provide an empty string for TypeName.
String indexRotationPeriod
The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to IndexName to facilitate the expiration of old data.
AmazonopensearchserviceBufferingHints bufferingHints
The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
AmazonopensearchserviceRetryOptions retryOptions
The retry behavior in case Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
S3DestinationUpdate s3Update
ProcessingConfiguration processingConfiguration
CloudWatchLoggingOptions cloudWatchLoggingOptions
DocumentIdOptions documentIdOptions
Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
Integer durationInSeconds
After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
Integer sizeInMBs
Buffer incoming data to the specified size, in MiBs, before delivering it to the destination. The default value is 5. This parameter is optional but if you specify a value for it, you must also specify a value for IntervalInSeconds, and vice versa.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MiB/sec, the value should be 10 MiB or higher.
Integer intervalInSeconds
Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300. This parameter is optional but if you specify a value for it, you must also specify a value for SizeInMBs, and vice versa.
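The two buffering hints are coupled: if you set one, you must also set the other. A sketch of that client-side check (hypothetical helper; the service performs its own validation):

```java
// Validates the "both or neither" coupling between SizeInMBs and
// IntervalInSeconds described above. Null stands for "not specified",
// in which case the service defaults apply.
public class BufferingHintsValidation {
    public static boolean isValid(Integer sizeInMBs, Integer intervalInSeconds) {
        // Either set both hints or set neither.
        return (sizeInMBs == null) == (intervalInSeconds == null);
    }

    public static void main(String[] args) {
        System.out.println(isValid(10, 300));   // true: both set
        System.out.println(isValid(10, null));  // false: IntervalInSeconds missing
        System.out.println(isValid(null, null)); // true: service defaults apply
    }
}
```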
Boolean enabled
Enables or disables CloudWatch logging.
String logGroupName
The CloudWatch group name for logging. This value is required if CloudWatch logging is enabled.
String logStreamName
The CloudWatch log stream name for logging. This value is required if CloudWatch logging is enabled.
String dataTableName
The name of the target table. The table must already exist in the database.
String dataTableColumns
A comma-separated list of column names.
String copyOptions
Optional parameters to use with the Amazon Redshift COPY command. For more information, see the "Optional Parameters" section of Amazon Redshift COPY command. Some possible examples that would apply to Firehose are as follows:
delimiter '\t' lzop; - fields are delimited with "\t" (TAB character) and compressed using lzop.
delimiter '|' - fields are delimited with "|" (this is the default delimiter).
delimiter '|' escape - the delimiter should be escaped.
fixedwidth 'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6' - fields are fixed width in the source, with each width specified after every column in the table.
JSON 's3://mybucket/jsonpaths.txt' - data is in JSON format, and the path specified is the format of the data.
For more examples, see Amazon Redshift COPY command examples.
String deliveryStreamName
The name of the delivery stream. This name must be unique per Amazon Web Services account in the same Amazon Web Services Region. If the delivery streams are in different accounts or different Regions, you can have multiple delivery streams with the same name.
String deliveryStreamType
The delivery stream type. This parameter can be one of the following values:
DirectPut: Provider applications access the delivery stream directly.
KinesisStreamAsSource: The delivery stream uses a Kinesis data stream as a source.
KinesisStreamSourceConfiguration kinesisStreamSourceConfiguration
When a Kinesis data stream is used as the source for the delivery stream, a KinesisStreamSourceConfiguration containing the Kinesis data stream Amazon Resource Name (ARN) and the role ARN for the source stream.
DeliveryStreamEncryptionConfigurationInput deliveryStreamEncryptionConfigurationInput
Used to specify the type and Amazon Resource Name (ARN) of the KMS key needed for Server-Side Encryption (SSE).
S3DestinationConfiguration s3DestinationConfiguration
[Deprecated] The destination in Amazon S3. You can specify only one destination.
ExtendedS3DestinationConfiguration extendedS3DestinationConfiguration
The destination in Amazon S3. You can specify only one destination.
RedshiftDestinationConfiguration redshiftDestinationConfiguration
The destination in Amazon Redshift. You can specify only one destination.
ElasticsearchDestinationConfiguration elasticsearchDestinationConfiguration
The destination in Amazon ES. You can specify only one destination.
AmazonopensearchserviceDestinationConfiguration amazonopensearchserviceDestinationConfiguration
The destination in Amazon OpenSearch Service. You can specify only one destination.
SplunkDestinationConfiguration splunkDestinationConfiguration
The destination in Splunk. You can specify only one destination.
HttpEndpointDestinationConfiguration httpEndpointDestinationConfiguration
Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
List<E> tags
A set of tags to assign to the delivery stream. A tag is a key-value pair that you can define and assign to Amazon Web Services resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the delivery stream. For more information about tags, see Using Cost Allocation Tags in the Amazon Web Services Billing and Cost Management User Guide.
You can specify up to 50 tags when creating a delivery stream.
If you specify tags in the CreateDeliveryStream action, Amazon Data Firehose performs an additional authorization on the firehose:TagDeliveryStream action to verify if users have permissions to create tags. If you do not provide this permission, requests to create new Firehose delivery streams with IAM resource tags will fail with an AccessDeniedException such as the following.
AccessDeniedException
User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy.
For an example IAM policy, see Tag example.
AmazonOpenSearchServerlessDestinationConfiguration amazonOpenSearchServerlessDestinationConfiguration
The destination in the Serverless offering for Amazon OpenSearch Service. You can specify only one destination.
MSKSourceConfiguration mSKSourceConfiguration
SnowflakeDestinationConfiguration snowflakeDestinationConfiguration
Configures the Snowflake destination.
String deliveryStreamARN
The ARN of the delivery stream.
SchemaConfiguration schemaConfiguration
Specifies the Amazon Web Services Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
InputFormatConfiguration inputFormatConfiguration
Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
OutputFormatConfiguration outputFormatConfiguration
Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
Boolean enabled
Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
String deliveryStreamName
The name of the delivery stream.
Boolean allowForceDelete
Set this to true if you want to delete the delivery stream even if Firehose is unable to retire the grant for the CMK. Firehose might be unable to retire the grant due to a customer error, such as when the CMK or the grant are in an invalid state. If you force deletion, you can then use the RevokeGrant operation to revoke the grant you gave to Firehose. If a failure to retire the grant happens due to an Amazon Web Services KMS issue, Firehose keeps retrying the delete operation.
The default value is false.
String deliveryStreamName
The name of the delivery stream.
String deliveryStreamARN
The Amazon Resource Name (ARN) of the delivery stream. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String deliveryStreamStatus
The status of the delivery stream. If the status of a delivery stream is CREATING_FAILED, this status doesn't change, and you can't invoke CreateDeliveryStream again on it. However, you can invoke the DeleteDeliveryStream operation to delete it.
FailureDescription failureDescription
Provides details in case one of the following operations fails due to an error related to KMS: CreateDeliveryStream, DeleteDeliveryStream, StartDeliveryStreamEncryption, StopDeliveryStreamEncryption.
DeliveryStreamEncryptionConfiguration deliveryStreamEncryptionConfiguration
Indicates the server-side encryption (SSE) status for the delivery stream.
String deliveryStreamType
The delivery stream type. This can be one of the following values:
DirectPut: Provider applications access the delivery stream directly.
KinesisStreamAsSource: The delivery stream uses a Kinesis data stream as a source.
String versionId
Each time the destination is updated for a delivery stream, the version ID is changed, and the current version ID is required when updating the destination. This is so that the service knows it is applying the changes to the correct version of the delivery stream.
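The version ID described above is a standard optimistic-concurrency scheme: an update succeeds only if the caller supplies the current version, and each successful update bumps it. A sketch of that scheme as a plain in-memory model (hypothetical class and fields; not the SDK):

```java
// Optimistic concurrency: updates must carry the current version ID, and
// every applied update advances the version, so a stale caller is rejected
// and must re-read before retrying.
public class VersionedDestination {
    private String versionId = "1";
    private String endpoint = "https://old.example.com"; // hypothetical destination field

    public synchronized boolean updateDestination(String expectedVersionId, String newEndpoint) {
        if (!versionId.equals(expectedVersionId)) {
            return false; // stale version: caller must re-read and retry
        }
        endpoint = newEndpoint;
        versionId = String.valueOf(Integer.parseInt(versionId) + 1);
        return true;
    }

    public String getVersionId() { return versionId; }

    public static void main(String[] args) {
        VersionedDestination d = new VersionedDestination();
        System.out.println(d.updateDestination("1", "https://new.example.com"));   // true, version is now "2"
        System.out.println(d.updateDestination("1", "https://stale.example.com")); // false: stale version ID
    }
}
```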
Date createTimestamp
The date and time that the delivery stream was created.
Date lastUpdateTimestamp
The date and time that the delivery stream was last updated.
SourceDescription source
If the DeliveryStreamType
parameter is KinesisStreamAsSource
, a
SourceDescription object describing the source Kinesis data stream.
List<E> destinations
The destinations.
Boolean hasMoreDestinations
Indicates whether there are more destinations available to list.
String keyARN
If KeyType is CUSTOMER_MANAGED_CMK, this field contains the ARN of the customer managed CMK. If KeyType is AWS_OWNED_CMK, DeliveryStreamEncryptionConfiguration doesn't contain a value for KeyARN.
String keyType
Indicates the type of customer master key (CMK) that is used for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs).
String status
This is the server-side encryption (SSE) status for the delivery stream. For a full description of the different values of this status, see StartDeliveryStreamEncryption and StopDeliveryStreamEncryption. If this status is ENABLING_FAILED or DISABLING_FAILED, it is the status of the most recent attempt to enable or disable SSE, respectively.
FailureDescription failureDescription
Provides details in case one of the following operations fails due to an error related to KMS: CreateDeliveryStream, DeleteDeliveryStream, StartDeliveryStreamEncryption, StopDeliveryStreamEncryption.
String keyARN
If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
String keyType
Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs). When you invoke CreateDeliveryStream or StartDeliveryStreamEncryption with KeyType set to CUSTOMER_MANAGED_CMK, Firehose invokes the Amazon KMS operation CreateGrant to create a grant that allows the Firehose service to use the customer managed CMK to perform encryption and decryption. Firehose manages that grant.
When you invoke StartDeliveryStreamEncryption to change the CMK for a delivery stream that is encrypted with a customer managed CMK, Firehose schedules the grant it had on the old CMK for retirement.
You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. If a CreateDeliveryStream or StartDeliveryStreamEncryption operation exceeds this limit, Firehose throws a LimitExceededException.
To encrypt your delivery stream, use symmetric CMKs. Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the Amazon Web Services Key Management Service developer guide.
String deliveryStreamName
The name of the delivery stream.
Integer limit
The limit on the number of destinations to return. You can have one destination per delivery stream.
String exclusiveStartDestinationId
The ID of the destination to start returning the destination information. Firehose supports one destination per delivery stream.
DeliveryStreamDescription deliveryStreamDescription
Information about the delivery stream.
OpenXJsonSerDe openXJsonSerDe
The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
HiveJsonSerDe hiveJsonSerDe
The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
String destinationId
The ID of the destination.
S3DestinationDescription s3DestinationDescription
[Deprecated] The destination in Amazon S3.
ExtendedS3DestinationDescription extendedS3DestinationDescription
The destination in Amazon S3.
RedshiftDestinationDescription redshiftDestinationDescription
The destination in Amazon Redshift.
ElasticsearchDestinationDescription elasticsearchDestinationDescription
The destination in Amazon ES.
AmazonopensearchserviceDestinationDescription amazonopensearchserviceDestinationDescription
The destination in Amazon OpenSearch Service.
SplunkDestinationDescription splunkDestinationDescription
The destination in Splunk.
HttpEndpointDestinationDescription httpEndpointDestinationDescription
Describes the specified HTTP endpoint destination.
SnowflakeDestinationDescription snowflakeDestinationDescription
Optional description for the destination
AmazonOpenSearchServerlessDestinationDescription amazonOpenSearchServerlessDestinationDescription
The destination in the Serverless offering for Amazon OpenSearch Service.
String defaultDocumentIdFormat
When the FIREHOSE_DEFAULT option is chosen, Firehose generates a unique document ID for each record based on a unique internal identifier. The generated document ID is stable across multiple delivery attempts, which helps prevent the same record from being indexed multiple times with different document IDs.
When the NO_DOCUMENT_ID option is chosen, Firehose does not include any document IDs in the requests it sends to the Amazon OpenSearch Service. This causes the Amazon OpenSearch Service domain to generate document IDs. In case of multiple delivery attempts, this may cause the same record to be indexed more than once with different document IDs. This option enables write-heavy operations, such as the ingestion of logs and observability data, to consume fewer resources in the Amazon OpenSearch Service domain, resulting in improved performance.
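The duplicate-on-redelivery effect described above can be seen with a toy model: with a stable document ID, redeliveries overwrite the same document; with service-generated IDs, each redelivery creates a new document. Everything here is hypothetical; a plain map stands in for an OpenSearch domain:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.UUID;

// FIREHOSE_DEFAULT: a stable ID per record, so redeliveries overwrite the
// same document. NO_DOCUMENT_ID: a fresh ID per request, so redeliveries
// can index the same record more than once.
public class DocumentIdSketch {
    public static int indexedCount(String mode, String record, int deliveryAttempts) {
        Map<String, String> index = new LinkedHashMap<>(); // stand-in for the domain
        for (int i = 0; i < deliveryAttempts; i++) {
            String id = mode.equals("FIREHOSE_DEFAULT")
                    ? "doc-" + record.hashCode()        // stable across attempts
                    : UUID.randomUUID().toString();     // new ID every attempt
            index.put(id, record);
        }
        return index.size();
    }

    public static void main(String[] args) {
        System.out.println(indexedCount("FIREHOSE_DEFAULT", "log-line", 3)); // 1
        System.out.println(indexedCount("NO_DOCUMENT_ID", "log-line", 3));   // 3
    }
}
```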
RetryOptions retryOptions
The retry behavior in case Firehose is unable to deliver data to an Amazon S3 prefix.
Boolean enabled
Specifies that the dynamic partitioning is enabled for this Firehose delivery stream.
Integer intervalInSeconds
Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
Integer sizeInMBs
Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
String roleARN
The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Grant Firehose Access to an Amazon S3 Destination and Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String domainARN
The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeDomain, DescribeDomains, and DescribeDomainConfig after assuming the role specified in RoleARN. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
Specify either ClusterEndpoint or DomainARN.
String clusterEndpoint
The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
String indexName
The Elasticsearch index name.
String typeName
The Elasticsearch type name. For Elasticsearch 6.x, there can be only one type per index. If you try to specify a new type for an existing index that already has another type, Firehose returns an error during run time.
For Elasticsearch 7.x, don't specify a TypeName.
String indexRotationPeriod
The Elasticsearch index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data. For more information, see Index Rotation for the Amazon ES Destination. The default value is OneDay.
ElasticsearchBufferingHints bufferingHints
The buffering options. If no value is specified, the default values for ElasticsearchBufferingHints are used.
ElasticsearchRetryOptions retryOptions
The retry behavior in case Firehose is unable to deliver documents to Amazon ES. The default value is 300 (5 minutes).
String s3BackupMode
Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix. For more information, see Amazon S3 Backup for the Amazon ES Destination. Default value is FailedDocumentsOnly.
You can't change this backup mode after you create the delivery stream.
S3DestinationConfiguration s3Configuration
The configuration for the backup Amazon S3 location.
ProcessingConfiguration processingConfiguration
The data processing configuration.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
VpcConfiguration vpcConfiguration
The details of the VPC of the Amazon ES destination.
DocumentIdOptions documentIdOptions
Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
String roleARN
The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String domainARN
The ARN of the Amazon ES domain. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
Firehose uses either ClusterEndpoint or DomainARN to send data to Amazon ES.
String clusterEndpoint
The endpoint to use when communicating with the cluster. Firehose uses either this ClusterEndpoint or the DomainARN field to send data to Amazon ES.
String indexName
The Elasticsearch index name.
String typeName
The Elasticsearch type name. This applies to Elasticsearch 6.x and lower versions. For Elasticsearch 7.x and OpenSearch Service 1.x, there's no value for TypeName.
String indexRotationPeriod
The Elasticsearch index rotation period.
ElasticsearchBufferingHints bufferingHints
The buffering options.
ElasticsearchRetryOptions retryOptions
The Amazon ES retry options.
String s3BackupMode
The Amazon S3 backup mode.
S3DestinationDescription s3DestinationDescription
The Amazon S3 destination.
ProcessingConfiguration processingConfiguration
The data processing configuration.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options.
VpcConfigurationDescription vpcConfigurationDescription
The details of the VPC of the Amazon OpenSearch or the Amazon OpenSearch Serverless destination.
DocumentIdOptions documentIdOptions
Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
String roleARN
The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Grant Firehose Access to an Amazon S3 Destination and Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String domainARN
The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeDomain, DescribeDomains, and DescribeDomainConfig after assuming the IAM role specified in RoleARN. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
Specify either ClusterEndpoint or DomainARN.
String clusterEndpoint
The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
String indexName
The Elasticsearch index name.
String typeName
The Elasticsearch type name. For Elasticsearch 6.x, there can be only one type per index. If you try to specify a new type for an existing index that already has another type, Firehose returns an error during runtime.
If you upgrade Elasticsearch from 6.x to 7.x and don't update your delivery stream, Firehose still delivers data to Elasticsearch with the old index name and type name. If you want to update your delivery stream with a new index name, provide an empty string for TypeName.
String indexRotationPeriod
The Elasticsearch index rotation period. Index rotation appends a timestamp to IndexName to facilitate the expiration of old data. For more information, see Index Rotation for the Amazon ES Destination. Default value is OneDay.
ElasticsearchBufferingHints bufferingHints
The buffering options. If no value is specified, ElasticsearchBufferingHints object default values are used.
ElasticsearchRetryOptions retryOptions
The retry behavior in case Firehose is unable to deliver documents to Amazon ES. The default value is 300 (5 minutes).
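The fields above map onto a single destination-configuration parameter. The sketch below shows the boto3-style request shape for an Amazon ES destination, assuming these field names follow the API exactly; the ARNs, index name, and bucket are hypothetical:

```python
# Hypothetical names throughout; the dictionary shape follows the
# ElasticsearchDestinationConfiguration fields described above.
es_destination = {
    "RoleARN": "arn:aws:iam::111122223333:role/firehose-es-role",
    # Specify either DomainARN or ClusterEndpoint, never both:
    "DomainARN": "arn:aws:es:us-east-1:111122223333:domain/my-domain",
    "IndexName": "web-logs",
    "IndexRotationPeriod": "OneDay",                               # the default
    "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},  # defaults
    "RetryOptions": {"DurationInSeconds": 300},                    # default: 300 (5 minutes)
    "S3BackupMode": "FailedDocumentsOnly",  # can't be changed after creation
    "S3Configuration": {
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-es-role",
        "BucketARN": "arn:aws:s3:::my-backup-bucket",
    },
}

# Exactly one of DomainARN / ClusterEndpoint should be present:
assert ("DomainARN" in es_destination) != ("ClusterEndpoint" in es_destination)
```

This dictionary would be passed as the Elasticsearch destination argument of a CreateDeliveryStream call; nothing here validates against the live service.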
S3DestinationUpdate s3Update
The Amazon S3 destination.
ProcessingConfiguration processingConfiguration
The data processing configuration.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The CloudWatch logging options for your delivery stream.
DocumentIdOptions documentIdOptions
Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
Integer durationInSeconds
After an initial failure to deliver to Amazon ES, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
String noEncryptionConfig
Specifically override existing encryption information to ensure that no encryption is used.
KMSEncryptionConfig kMSEncryptionConfig
The encryption key.
String roleARN
The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String bucketARN
The ARN of the S3 bucket. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered Amazon S3 files. You can also specify a custom prefix, as described in Custom Prefixes for Amazon S3 Objects.
String errorOutputPrefix
A prefix that Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
BufferingHints bufferingHints
The buffering option.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
ProcessingConfiguration processingConfiguration
The data processing configuration.
String s3BackupMode
The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
S3DestinationConfiguration s3BackupConfiguration
The configuration for backup in Amazon S3.
DataFormatConversionConfiguration dataFormatConversionConfiguration
The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
DynamicPartitioningConfiguration dynamicPartitioningConfiguration
The configuration of the dynamic partitioning mechanism that creates smaller data sets from the streaming data by partitioning it based on partition keys. Currently, dynamic partitioning is only supported for Amazon S3 destinations.
String fileExtension
Specify a file extension. It overrides the default file extension.
String customTimeZone
The time zone you prefer. UTC is the default.
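Taken together, these fields form the extended S3 destination configuration. A minimal sketch in boto3 request style, assuming the field names above map one-to-one; the role, bucket, and prefixes are hypothetical:

```python
# Hypothetical ARNs and prefixes; shape follows the
# ExtendedS3DestinationConfiguration fields described above.
extended_s3 = {
    "RoleARN": "arn:aws:iam::111122223333:role/firehose-s3-role",
    "BucketARN": "arn:aws:s3:::my-delivery-bucket",
    "Prefix": "logs/",               # "YYYY/MM/DD/HH" time prefix is added automatically
    "ErrorOutputPrefix": "errors/",  # failed records land under this prefix
    "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
    "CompressionFormat": "GZIP",     # default is UNCOMPRESSED
    "S3BackupMode": "Disabled",      # once Enabled, backup can't be disabled again
    "FileExtension": ".gz",          # overrides the default file extension
    "CustomTimeZone": "UTC",         # UTC is the default
}
```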
String roleARN
The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String bucketARN
The ARN of the S3 bucket. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered Amazon S3 files. You can also specify a custom prefix, as described in Custom Prefixes for Amazon S3 Objects.
String errorOutputPrefix
A prefix that Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
BufferingHints bufferingHints
The buffering option.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
ProcessingConfiguration processingConfiguration
The data processing configuration.
String s3BackupMode
The Amazon S3 backup mode.
S3DestinationDescription s3BackupDescription
The configuration for backup in Amazon S3.
DataFormatConversionConfiguration dataFormatConversionConfiguration
The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
DynamicPartitioningConfiguration dynamicPartitioningConfiguration
The configuration of the dynamic partitioning mechanism that creates smaller data sets from the streaming data by partitioning it based on partition keys. Currently, dynamic partitioning is only supported for Amazon S3 destinations.
String fileExtension
Specify a file extension. It overrides the default file extension.
String customTimeZone
The time zone you prefer. UTC is the default.
String roleARN
The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String bucketARN
The ARN of the S3 bucket. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered Amazon S3 files. You can also specify a custom prefix, as described in Custom Prefixes for Amazon S3 Objects.
String errorOutputPrefix
A prefix that Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
BufferingHints bufferingHints
The buffering option.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
ProcessingConfiguration processingConfiguration
The data processing configuration.
String s3BackupMode
You can update a delivery stream to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
S3DestinationUpdate s3BackupUpdate
The Amazon S3 destination for backup.
DataFormatConversionConfiguration dataFormatConversionConfiguration
The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
DynamicPartitioningConfiguration dynamicPartitioningConfiguration
The configuration of the dynamic partitioning mechanism that creates smaller data sets from the streaming data by partitioning it based on partition keys. Currently, dynamic partitioning is only supported for Amazon S3 destinations.
String fileExtension
Specify a file extension. It overrides the default file extension.
String customTimeZone
The time zone you prefer. UTC is the default.
List<E> timestampFormats
Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
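In request terms, the timestamp formats sit on the input-format deserializer. A sketch assuming the HiveJsonSerDe carries the TimestampFormats list (the pattern string is illustrative JodaTime syntax):

```python
# Input format for converting JSON records; the "millis" special value
# parses epoch milliseconds, the other entry is a JodaTime pattern.
input_format = {
    "Deserializer": {
        "HiveJsonSerDe": {
            "TimestampFormats": [
                "millis",                      # epoch milliseconds
                "yyyy-MM-dd'T'HH:mm:ss.SSSZ",  # JodaTime DateTimeFormat pattern
            ]
        }
    }
}
```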
Integer sizeInMBs
Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
Integer intervalInSeconds
Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
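The sizing guidance above (buffer at least 10 seconds of typical throughput, never below the 5 MB default) can be expressed as a small helper. This function is not part of any SDK, just the arithmetic from the recommendation:

```python
import math

def recommended_size_in_mbs(ingest_mb_per_sec: float, default_mb: int = 5) -> int:
    """Recommend a SizeInMBs value: at least 10 seconds of typical
    ingest, and never below the service default of 5 MB."""
    ten_seconds_of_data = math.ceil(ingest_mb_per_sec * 10)
    return max(default_mb, ten_seconds_of_data)

# At 1 MB/sec the guidance says 10 MB or higher:
assert recommended_size_in_mbs(1.0) == 10
# Low-throughput streams fall back to the 5 MB default:
assert recommended_size_in_mbs(0.1) == 5
```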
String url
The URL of the HTTP endpoint selected as the destination.
If you choose an HTTP endpoint as your destination, review and follow the instructions in the Appendix - HTTP Endpoint Delivery Request and Response Specifications.
String name
The name of the HTTP endpoint selected as the destination.
String accessKey
The access key required for Firehose to authenticate with the HTTP endpoint selected as the destination.
HttpEndpointConfiguration endpointConfiguration
The configuration of the HTTP endpoint selected as the destination.
HttpEndpointBufferingHints bufferingHints
The buffering options that can be used before data is delivered to the specified destination. Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
CloudWatchLoggingOptions cloudWatchLoggingOptions
HttpEndpointRequestConfiguration requestConfiguration
The configuration of the request sent to the HTTP endpoint specified as the destination.
ProcessingConfiguration processingConfiguration
String roleARN
Firehose uses this IAM role for all the permissions that the delivery stream needs.
HttpEndpointRetryOptions retryOptions
Describes the retry behavior in case Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
String s3BackupMode
Describes the S3 bucket backup options for the data that Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
S3DestinationConfiguration s3Configuration
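A sketch of the HTTP endpoint destination configuration described by these fields, in boto3 request style. The URL, names, and key are hypothetical; note the rule that SizeInMBs and IntervalInSeconds must be set together or not at all:

```python
# Hypothetical endpoint and credentials; shape follows the
# HttpEndpointDestinationConfiguration fields described above.
http_destination = {
    "EndpointConfiguration": {
        "Url": "https://example.com/firehose",   # hypothetical endpoint
        "Name": "my-endpoint",
        "AccessKey": "example-access-key",       # supplied by the endpoint owner
    },
    # Hints only; set both parameters or neither:
    "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
    "RequestConfiguration": {
        "ContentEncoding": "GZIP",
        "CommonAttributes": [{"AttributeName": "env", "AttributeValue": "prod"}],
    },
    "RoleARN": "arn:aws:iam::111122223333:role/firehose-http-role",
    "RetryOptions": {"DurationInSeconds": 300},
    "S3BackupMode": "FailedDataOnly",  # or AllData
    "S3Configuration": {
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-http-role",
        "BucketARN": "arn:aws:s3:::my-backup-bucket",
    },
}

# The two buffering parameters must be provided together:
hints = http_destination["BufferingHints"]
assert ("SizeInMBs" in hints) == ("IntervalInSeconds" in hints)
```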
HttpEndpointDescription endpointConfiguration
The configuration of the specified HTTP endpoint destination.
HttpEndpointBufferingHints bufferingHints
Describes buffering options that can be applied to the data before it is delivered to the HTTPS endpoint destination. Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
CloudWatchLoggingOptions cloudWatchLoggingOptions
HttpEndpointRequestConfiguration requestConfiguration
The configuration of the request sent to the HTTP endpoint specified as the destination.
ProcessingConfiguration processingConfiguration
String roleARN
Firehose uses this IAM role for all the permissions that the delivery stream needs.
HttpEndpointRetryOptions retryOptions
Describes the retry behavior in case Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
String s3BackupMode
Describes the S3 bucket backup options for the data that Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
S3DestinationDescription s3DestinationDescription
HttpEndpointConfiguration endpointConfiguration
Describes the configuration of the HTTP endpoint destination.
HttpEndpointBufferingHints bufferingHints
Describes buffering options that can be applied to the data before it is delivered to the HTTPS endpoint destination. Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
CloudWatchLoggingOptions cloudWatchLoggingOptions
HttpEndpointRequestConfiguration requestConfiguration
The configuration of the request sent to the HTTP endpoint specified as the destination.
ProcessingConfiguration processingConfiguration
String roleARN
Firehose uses this IAM role for all the permissions that the delivery stream needs.
HttpEndpointRetryOptions retryOptions
Describes the retry behavior in case Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
String s3BackupMode
Describes the S3 bucket backup options for the data that Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
S3DestinationUpdate s3Update
String contentEncoding
Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
List<E> commonAttributes
Describes the metadata sent to the HTTP endpoint destination.
Integer durationInSeconds
The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Firehose waits for acknowledgment from the specified destination after each attempt.
Deserializer deserializer
Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
String code
String code
String kinesisStreamARN
The ARN of the source Kinesis data stream. For more information, see Amazon Kinesis Data Streams ARN Format.
String roleARN
The ARN of the role that provides access to the source Kinesis data stream. For more information, see Amazon Web Services Identity and Access Management (IAM) ARN Format.
String kinesisStreamARN
The Amazon Resource Name (ARN) of the source Kinesis data stream. For more information, see Amazon Kinesis Data Streams ARN Format.
String roleARN
The ARN of the role used by the source Kinesis data stream. For more information, see Amazon Web Services Identity and Access Management (IAM) ARN Format.
Date deliveryStartTimestamp
Firehose starts retrieving records from the Kinesis data stream starting with this timestamp.
String aWSKMSKeyARN
The Amazon Resource Name (ARN) of the encryption key. Must belong to the same Amazon Web Services Region as the destination Amazon S3 bucket. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
Integer limit
The maximum number of delivery streams to list. The default value is 10.
String deliveryStreamType
The delivery stream type. This can be one of the following values:
DirectPut: Provider applications access the delivery stream directly.
KinesisStreamAsSource: The delivery stream uses a Kinesis data stream as a source.
This parameter is optional. If this parameter is omitted, delivery streams of all types are returned.
String exclusiveStartDeliveryStreamName
The list of delivery streams returned by this call to ListDeliveryStreams will start with the delivery stream whose name comes alphabetically immediately after the name you specify in ExclusiveStartDeliveryStreamName.
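The pagination contract (Limit, ExclusiveStartDeliveryStreamName, HasMoreDeliveryStreams) is typically driven in a loop. A sketch where list_page stands in for the real boto3 list_delivery_streams call; fake_page is a local stub, not part of any SDK:

```python
def list_all_delivery_streams(list_page):
    """Collect every stream name, passing the last name seen as
    ExclusiveStartDeliveryStreamName until HasMoreDeliveryStreams is False."""
    names, start = [], None
    while True:
        kwargs = {"Limit": 10}
        if start is not None:
            kwargs["ExclusiveStartDeliveryStreamName"] = start
        page = list_page(**kwargs)
        names.extend(page["DeliveryStreamNames"])
        if not page["HasMoreDeliveryStreams"]:
            return names
        start = names[-1]

# Stub standing in for the service: three streams, two per page.
def fake_page(Limit, ExclusiveStartDeliveryStreamName=None):
    all_names = ["alpha", "beta", "gamma"]
    i = (0 if ExclusiveStartDeliveryStreamName is None
         else all_names.index(ExclusiveStartDeliveryStreamName) + 1)
    chunk = all_names[i:i + 2]
    return {"DeliveryStreamNames": chunk,
            "HasMoreDeliveryStreams": i + 2 < len(all_names)}

assert list_all_delivery_streams(fake_page) == ["alpha", "beta", "gamma"]
```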
String deliveryStreamName
The name of the delivery stream whose tags you want to list.
String exclusiveStartTagKey
The key to use as the starting point for the list of tags. If you set this parameter, ListTagsForDeliveryStream gets all tags that occur after ExclusiveStartTagKey.
Integer limit
The number of tags to return. If this number is less than the total number of tags associated with the delivery stream, HasMoreTags is set to true in the response. To list additional tags, set ExclusiveStartTagKey to the last key in the response.
List<E> tags
A list of tags associated with DeliveryStreamName, starting with the first tag after ExclusiveStartTagKey and up to the specified Limit.
Boolean hasMoreTags
If this is true in the response, more tags are available. To list the remaining tags, set ExclusiveStartTagKey to the key of the last tag returned and call ListTagsForDeliveryStream again.
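The HasMoreTags / ExclusiveStartTagKey loop described above can be sketched the same way; list_tags_page stands in for the real boto3 list_tags_for_delivery_stream call, and the stub data is hypothetical:

```python
def list_all_tags(list_tags_page, stream_name):
    """Collect all tags for a stream, following HasMoreTags by passing
    the key of the last tag returned as ExclusiveStartTagKey."""
    tags, start = [], None
    while True:
        kwargs = {"DeliveryStreamName": stream_name, "Limit": 2}
        if start is not None:
            kwargs["ExclusiveStartTagKey"] = start
        page = list_tags_page(**kwargs)
        tags.extend(page["Tags"])
        if not page["HasMoreTags"]:
            return tags
        start = tags[-1]["Key"]

# Stub standing in for the service, with three hypothetical tags:
ALL_TAGS = [{"Key": k, "Value": "v"} for k in ("env", "owner", "team")]

def fake_tags_page(DeliveryStreamName, Limit, ExclusiveStartTagKey=None):
    keys = [t["Key"] for t in ALL_TAGS]
    i = 0 if ExclusiveStartTagKey is None else keys.index(ExclusiveStartTagKey) + 1
    return {"Tags": ALL_TAGS[i:i + Limit],
            "HasMoreTags": i + Limit < len(ALL_TAGS)}

assert list_all_tags(fake_tags_page, "my-stream") == ALL_TAGS
```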
String mSKClusterARN
The ARN of the Amazon MSK cluster.
String topicName
The topic name within the Amazon MSK cluster.
AuthenticationConfiguration authenticationConfiguration
The authentication configuration of the Amazon MSK cluster.
String mSKClusterARN
The ARN of the Amazon MSK cluster.
String topicName
The topic name within the Amazon MSK cluster.
AuthenticationConfiguration authenticationConfiguration
The authentication configuration of the Amazon MSK cluster.
Date deliveryStartTimestamp
Firehose starts retrieving records from the topic within the Amazon MSK cluster starting with this timestamp.
Boolean convertDotsInJsonKeysToUnderscores
When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option.
The default is false.
Boolean caseInsensitive
When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
Map<K,V> columnToJsonKeyMappings
Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
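Putting the three OpenX JSON SerDe options together, a sketch of the deserializer fragment in boto3 request style (the mapping entry is the timestamp example from above):

```python
# Deserializer fragment; shape follows the OpenXJsonSerDe fields above.
openx = {
    "OpenXJsonSerDe": {
        "ConvertDotsInJsonKeysToUnderscores": True,  # "a.b" -> column "a_b"; default is False
        "CaseInsensitive": True,                     # the default: lowercase keys first
        "ColumnToJsonKeyMappings": {"ts": "timestamp"},  # Hive-keyword JSON key -> column ts
    }
}
```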
Integer stripeSizeBytes
The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
Integer blockSizeBytes
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
Integer rowIndexStride
The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
Boolean enablePadding
Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
Double paddingTolerance
A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size.
For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task.
Firehose ignores this parameter when OrcSerDe$EnablePadding is false.
String compression
The compression code to use over data blocks. The default is SNAPPY.
List<E> bloomFilterColumns
The column names for which you want Firehose to create bloom filters. The default is null.
Double bloomFilterFalsePositiveProbability
The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
Double dictionaryKeyThreshold
Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
String formatVersion
The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
Serializer serializer
Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
Integer blockSizeBytes
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
Integer pageSizeBytes
The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
String compression
The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
Boolean enableDictionaryCompression
Indicates whether to enable dictionary compression.
Integer maxPaddingBytes
The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
String writerVersion
Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
String parameterName
The name of the parameter. Currently the following default values are supported: 3 for NumberOfRetries and 60 for BufferIntervalInSeconds. BufferSizeInMBs ranges from 0.2 MB up to 3 MB. The default buffering hint is 1 MB for all destinations except Splunk, where it is 256 KB.
String parameterValue
The parameter value.
Integer failedPutCount
The number of records that might have failed processing. This number might be greater than 0 even if the PutRecordBatch call succeeds. Check FailedPutCount to determine whether there are records that you need to resend.
Boolean encrypted
Indicates whether server-side encryption (SSE) was enabled during this operation.
List<E> requestResponses
The results array. For each record, the index of the response element is the same as the index used in the request array.
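Because each response entry shares its index with the corresponding request record, resend logic pairs the two arrays and keeps the entries carrying an error. A sketch assuming entries for failed records include an ErrorCode key, as in the service response shape; the sample data is hypothetical:

```python
def failed_records(records, response):
    """Pair each request record with its response entry (same index)
    and return the records that need to be resent."""
    if response["FailedPutCount"] == 0:
        return []
    return [rec for rec, entry in zip(records, response["RequestResponses"])
            if "ErrorCode" in entry]

# Hypothetical batch: the second record failed.
records = [{"Data": b"a"}, {"Data": b"b"}, {"Data": b"c"}]
response = {
    "FailedPutCount": 1,
    "RequestResponses": [
        {"RecordId": "1"},
        {"ErrorCode": "ServiceUnavailableException", "ErrorMessage": "retry"},
        {"RecordId": "3"},
    ],
}

assert failed_records(records, response) == [{"Data": b"b"}]
```

A caller would loop, resending failed_records(...) until FailedPutCount reaches zero, typically with backoff.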
ByteBuffer data
The data blob, which is base64-encoded when the blob is serialized. The maximum size of the data blob, before base64-encoding, is 1,000 KiB.
String roleARN
The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String clusterJDBCURL
The database connection string.
CopyCommand copyCommand
The COPY command.
String username
The name of the user.
String password
The user password.
RedshiftRetryOptions retryOptions
The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
S3DestinationConfiguration s3Configuration
The configuration for the intermediate Amazon S3 location from which Amazon Redshift obtains data. Restrictions are described in the topic for CreateDeliveryStream.
The compression formats SNAPPY or ZIP cannot be specified in RedshiftDestinationConfiguration.S3Configuration because the Amazon Redshift COPY operation that reads from the S3 bucket doesn't support these compression formats.
ProcessingConfiguration processingConfiguration
The data processing configuration.
String s3BackupMode
The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
S3DestinationConfiguration s3BackupConfiguration
The configuration for backup in Amazon S3.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The CloudWatch logging options for your delivery stream.
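A sketch of the Redshift destination configuration these fields describe, including the SNAPPY/ZIP restriction on the intermediate S3 location. All ARNs, URLs, and credentials are hypothetical placeholders:

```python
# Hypothetical values throughout; shape follows the
# RedshiftDestinationConfiguration fields described above.
redshift_destination = {
    "RoleARN": "arn:aws:iam::111122223333:role/firehose-redshift-role",
    "ClusterJDBCURL": "jdbc:redshift://my-cluster.example.com:5439/mydb",
    "CopyCommand": {"DataTableName": "events"},
    "Username": "firehose_user",
    "Password": "example-password",              # placeholder
    "RetryOptions": {"DurationInSeconds": 3600},  # default: 3600 (60 minutes)
    "S3Configuration": {
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-redshift-role",
        "BucketARN": "arn:aws:s3:::my-staging-bucket",
        "CompressionFormat": "GZIP",  # SNAPPY and ZIP are rejected here
    },
}

# Redshift COPY can't read SNAPPY or ZIP from the intermediate bucket:
assert redshift_destination["S3Configuration"]["CompressionFormat"] not in ("SNAPPY", "ZIP")
```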
String roleARN
The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String clusterJDBCURL
The database connection string.
CopyCommand copyCommand
The COPY command.
String username
The name of the user.
RedshiftRetryOptions retryOptions
The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
S3DestinationDescription s3DestinationDescription
The Amazon S3 destination.
ProcessingConfiguration processingConfiguration
The data processing configuration.
String s3BackupMode
The Amazon S3 backup mode.
S3DestinationDescription s3BackupDescription
The configuration for backup in Amazon S3.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
String roleARN
The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String clusterJDBCURL
The database connection string.
CopyCommand copyCommand
The COPY command.
String username
The name of the user.
String password
The user password.
RedshiftRetryOptions retryOptions
The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
S3DestinationUpdate s3Update
The Amazon S3 destination.
The compression formats SNAPPY or ZIP cannot be specified in RedshiftDestinationUpdate.S3Update because the Amazon Redshift COPY operation that reads from the S3 bucket doesn't support these compression formats.
ProcessingConfiguration processingConfiguration
The data processing configuration.
String s3BackupMode
You can update a delivery stream to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
S3DestinationUpdate s3BackupUpdate
The Amazon S3 destination for backup.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
Integer durationInSeconds
The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
Integer durationInSeconds
The period of time during which Firehose retries delivery of data to the specified Amazon S3 prefix.
String roleARN
The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String bucketARN
The ARN of the S3 bucket. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered Amazon S3 files. You can also specify a custom prefix, as described in Custom Prefixes for Amazon S3 Objects.
String errorOutputPrefix
A prefix that Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
BufferingHints bufferingHints
The buffering option. If no value is specified, BufferingHints object default values are used.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift destinations because they are not supported by the Amazon Redshift COPY operation that reads from the S3 bucket.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The CloudWatch logging options for your delivery stream.
String roleARN
The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String bucketARN
The ARN of the S3 bucket. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered Amazon S3 files. You can also specify a custom prefix, as described in Custom Prefixes for Amazon S3 Objects.
String errorOutputPrefix
A prefix that Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
BufferingHints bufferingHints
The buffering option. If no value is specified, BufferingHints object default values are used.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
String roleARN
The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String bucketARN
The ARN of the S3 bucket. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered Amazon S3 files. You can also specify a custom prefix, as described in Custom Prefixes for Amazon S3 Objects.
String errorOutputPrefix
A prefix that Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
BufferingHints bufferingHints
The buffering option. If no value is specified, BufferingHints object default values are used.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift destinations because they are not supported by the Amazon Redshift COPY operation that reads from the S3 bucket.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The CloudWatch logging options for your delivery stream.
String roleARN
The role that Firehose can use to access Amazon Web Services Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed.
If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.
String catalogId
The ID of the Amazon Web Services Glue Data Catalog. If you don't supply this, the Amazon Web Services account ID is used by default.
String databaseName
Specifies the name of the Amazon Web Services Glue database that contains the schema for the output data.
If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.
String tableName
Specifies the Amazon Web Services Glue table that contains the column information that constitutes your data schema.
If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.
String region
If you don't specify an Amazon Web Services Region, the default is the current Region.
String versionId
Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
ParquetSerDe parquetSerDe
A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
OrcSerDe orcSerDe
A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
String accountUrl
URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
String privateKey
The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation.
String keyPassphrase
Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation.
String user
User login name for the Snowflake account.
String database
All data in Snowflake is maintained in databases.
String schema
Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
String table
All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
SnowflakeRoleConfiguration snowflakeRoleConfiguration
Optionally configure a Snowflake role. Otherwise the default user role will be used.
String dataLoadingOption
Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
String metaDataColumnName
The name of the record metadata column.
String contentColumnName
The name of the record content column.
SnowflakeVpcConfiguration snowflakeVpcConfiguration
The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
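A client-side sanity check for the VPCE ID format described above can be sketched with a regular expression. The pattern below is an assumption derived from the documented shape (com.amazonaws.vpce.[region].vpce-svc-<id>), not an official validation rule.

```java
import java.util.regex.Pattern;

// Sketch: checks that a VPCE service ID string follows the documented shape
// com.amazonaws.vpce.<region>.vpce-svc-<hex id>. Illustrative only.
class VpceIdCheck {
    private static final Pattern VPCE_SVC = Pattern.compile(
            "com\\.amazonaws\\.vpce\\.[a-z0-9-]+\\.vpce-svc-[0-9a-f]+");

    public static boolean looksValid(String vpceId) {
        return vpceId != null && VPCE_SVC.matcher(vpceId).matches();
    }

    public static void main(String[] args) {
        System.out.println(looksValid("com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0")); // true
        System.out.println(looksValid("vpce-svc-0123456789abcdef0")); // false: missing the com.amazonaws.vpce.<region> prefix
    }
}
```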
CloudWatchLoggingOptions cloudWatchLoggingOptions
ProcessingConfiguration processingConfiguration
String roleARN
The Amazon Resource Name (ARN) of the Snowflake role.
SnowflakeRetryOptions retryOptions
The time period during which Firehose will retry sending data to the chosen HTTP endpoint.
String s3BackupMode
Choose an S3 backup mode.
S3DestinationConfiguration s3Configuration
String accountUrl
URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
String user
User login name for the Snowflake account.
String database
All data in Snowflake is maintained in databases.
String schema
Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
String table
All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
SnowflakeRoleConfiguration snowflakeRoleConfiguration
Optionally configure a Snowflake role. Otherwise the default user role will be used.
String dataLoadingOption
Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
String metaDataColumnName
The name of the record metadata column.
String contentColumnName
The name of the record content column.
SnowflakeVpcConfiguration snowflakeVpcConfiguration
The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
CloudWatchLoggingOptions cloudWatchLoggingOptions
ProcessingConfiguration processingConfiguration
String roleARN
The Amazon Resource Name (ARN) of the Snowflake role.
SnowflakeRetryOptions retryOptions
The time period during which Firehose will retry sending data to the chosen HTTP endpoint.
String s3BackupMode
Choose an S3 backup mode.
S3DestinationDescription s3DestinationDescription
String accountUrl
URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
String privateKey
The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation.
String keyPassphrase
Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation.
String user
User login name for the Snowflake account.
String database
All data in Snowflake is maintained in databases.
String schema
Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
String table
All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
SnowflakeRoleConfiguration snowflakeRoleConfiguration
Optionally configure a Snowflake role. Otherwise the default user role will be used.
String dataLoadingOption
Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
String metaDataColumnName
The name of the record metadata column.
String contentColumnName
The name of the record content column.
CloudWatchLoggingOptions cloudWatchLoggingOptions
ProcessingConfiguration processingConfiguration
String roleARN
The Amazon Resource Name (ARN) of the Snowflake role.
SnowflakeRetryOptions retryOptions
Specify how long Firehose retries sending data to the HTTP endpoint. After sending data, Firehose first waits for an acknowledgment from the HTTP endpoint. If an error occurs or the acknowledgment doesn't arrive within the acknowledgment timeout period, Firehose starts the retry duration counter. It keeps retrying until the retry duration expires. After that, Firehose considers it a data delivery failure and backs up the data to your Amazon S3 bucket. Every time that Firehose sends data to the HTTP endpoint (either the initial attempt or a retry), it restarts the acknowledgment timeout counter and waits for an acknowledgment from the HTTP endpoint. Even if the retry duration expires, Firehose still waits for the acknowledgment until it receives it or the acknowledgment timeout period is reached. If the acknowledgment times out, Firehose determines whether there's time left in the retry counter. If there is time left, it retries again and repeats the logic until it receives an acknowledgment or determines that the retry time has expired. If you don't want Firehose to retry sending data, set this value to 0.
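The retry/acknowledgment interplay described above can be sketched as a loop: each send (initial or retry) waits up to an acknowledgment timeout, and retries continue only while the retry-duration budget has time left. This is a simplified illustrative model, not the SDK's internals; it assumes each failed attempt consumes a full acknowledgment window.

```java
import java.util.function.IntPredicate;

// Sketch of the documented retry semantics: retry until acknowledged or the
// retry duration is exhausted, with each attempt bounded by an ack timeout.
class RetrySketch {
    /** Returns the number of attempts made before success or giving up. */
    public static int deliver(int retryDurationSec, int ackTimeoutSec,
                              IntPredicate ackArrivesOnAttempt) {
        int attempts = 0;
        int elapsedSec = 0; // time charged against the retry duration
        while (true) {
            attempts++;
            if (ackArrivesOnAttempt.test(attempts)) {
                return attempts; // acknowledged: delivery succeeded
            }
            // No ack within the timeout: the full ack window counts as elapsed.
            elapsedSec += ackTimeoutSec;
            if (elapsedSec >= retryDurationSec) {
                return attempts; // retry budget exhausted: back the data up to S3
            }
        }
    }

    public static void main(String[] args) {
        // Ack arrives on the 3rd attempt; 300 s budget, 60 s ack timeout.
        System.out.println(deliver(300, 60, attempt -> attempt == 3)); // 3
        // Ack never arrives: 300 / 60 = 5 attempts, then give up.
        System.out.println(deliver(300, 60, attempt -> false)); // 5
    }
}
```

Setting the retry duration to 0 under this model means a single attempt with no retries, matching the documented behavior.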
String s3BackupMode
Choose an S3 backup mode.
S3DestinationUpdate s3Update
Integer durationInSeconds
The time period during which Firehose will retry sending data to the chosen HTTP endpoint.
String privateLinkVpceId
The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
KinesisStreamSourceDescription kinesisStreamSourceDescription
The KinesisStreamSourceDescription value for the source Kinesis data stream.
MSKSourceDescription mSKSourceDescription
The configuration description for the Amazon MSK cluster to be used as the source for a delivery stream.
Integer intervalInSeconds
Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
Integer sizeInMBs
Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
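The two buffering hints above combine into a single flush condition: deliver the buffer once either the size threshold (default 5 MB) or the age threshold (default 60 seconds) is reached, whichever comes first. A minimal sketch of that condition, with illustrative names:

```java
// Sketch: the flush condition implied by BufferingHints. Firehose-internal
// behavior is more involved; this only models the documented thresholds.
class BufferFlushSketch {
    public static boolean shouldFlush(long bufferedBytes, long bufferAgeSec,
                                      long sizeInMBs, long intervalInSeconds) {
        return bufferedBytes >= sizeInMBs * 1024L * 1024L
                || bufferAgeSec >= intervalInSeconds;
    }

    public static void main(String[] args) {
        System.out.println(shouldFlush(6L * 1024 * 1024, 10, 5, 60)); // true: size threshold reached
        System.out.println(shouldFlush(1024, 60, 5, 60));             // true: interval reached
        System.out.println(shouldFlush(1024, 10, 5, 60));             // false: neither threshold reached
    }
}
```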
String hECEndpoint
The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
String hECEndpointType
This type can be either "Raw" or "Event."
String hECToken
This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
Integer hECAcknowledgmentTimeoutInSeconds
The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
SplunkRetryOptions retryOptions
The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
String s3BackupMode
Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly.
You can update this backup mode from FailedEventsOnly to AllEvents. You can't update it from AllEvents to FailedEventsOnly.
S3DestinationConfiguration s3Configuration
The configuration for the backup Amazon S3 location.
ProcessingConfiguration processingConfiguration
The data processing configuration.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
SplunkBufferingHints bufferingHints
The buffering options. If no value is specified, the default values for Splunk are used.
String hECEndpoint
The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
String hECEndpointType
This type can be either "Raw" or "Event."
String hECToken
A GUID you obtain from your Splunk cluster when you create a new HEC endpoint.
Integer hECAcknowledgmentTimeoutInSeconds
The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
SplunkRetryOptions retryOptions
The retry behavior in case Firehose is unable to deliver data to Splunk or if it doesn't receive an acknowledgment of receipt from Splunk.
String s3BackupMode
Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly.
S3DestinationDescription s3DestinationDescription
The Amazon S3 destination.
ProcessingConfiguration processingConfiguration
The data processing configuration.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
SplunkBufferingHints bufferingHints
The buffering options. If no value is specified, the default values for Splunk are used.
String hECEndpoint
The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
String hECEndpointType
This type can be either "Raw" or "Event."
String hECToken
A GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
Integer hECAcknowledgmentTimeoutInSeconds
The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
SplunkRetryOptions retryOptions
The retry behavior in case Firehose is unable to deliver data to Splunk or if it doesn't receive an acknowledgment of receipt from Splunk.
String s3BackupMode
Specifies how you want Firehose to back up documents to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly.
You can update this backup mode from FailedEventsOnly to AllEvents. You can't update it from AllEvents to FailedEventsOnly.
S3DestinationUpdate s3Update
Your update to the configuration of the backup Amazon S3 location.
ProcessingConfiguration processingConfiguration
The data processing configuration.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
SplunkBufferingHints bufferingHints
The buffering options. If no value is specified, the default values for Splunk are used.
Integer durationInSeconds
The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
String deliveryStreamName
The name of the delivery stream for which you want to enable server-side encryption (SSE).
DeliveryStreamEncryptionConfigurationInput deliveryStreamEncryptionConfigurationInput
Used to specify the type and Amazon Resource Name (ARN) of the KMS key needed for Server-Side Encryption (SSE).
String deliveryStreamName
The name of the delivery stream for which you want to disable server-side encryption (SSE).
String key
A unique identifier for the tag. Maximum length: 128 characters. Valid characters: Unicode letters, digits, white space, _ . / = + - % @
String value
An optional string, which you can use to describe or define the tag. Maximum length: 256 characters. Valid characters: Unicode letters, digits, white space, _ . / = + - % @
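The tag key and value constraints above (key up to 128 characters, value up to 256, restricted to Unicode letters, digits, white space, and `_ . / = + - % @`) can be checked client-side before calling the API. A sketch; the character class is an approximation of "Unicode letters, digits" using regex Unicode categories:

```java
import java.util.regex.Pattern;

// Sketch: validates a tag key/value against the documented constraints.
// Illustrative helper, not part of the SDK.
class TagCheck {
    private static final Pattern ALLOWED =
            Pattern.compile("[\\p{L}\\p{N}\\s_./=+\\-%@]*");

    public static boolean isValidKey(String key) {
        return key != null && !key.isEmpty() && key.length() <= 128
                && ALLOWED.matcher(key).matches();
    }

    public static boolean isValidValue(String value) {
        return value != null && value.length() <= 256
                && ALLOWED.matcher(value).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidKey("cost-center")); // true
        System.out.println(isValidKey("bad|key"));     // false: '|' is not an allowed character
    }
}
```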
String deliveryStreamName
The name of the delivery stream.
String currentDeliveryStreamVersionId
Obtain this value from the VersionId result of DeliveryStreamDescription. This value is required, and it helps the service perform conditional operations. For example, if there is an interleaving update and this value is null, then the update destination fails. After the update is successful, the VersionId value is updated. The service then performs a merge of the old configuration with the new configuration.
String destinationId
The ID of the destination.
S3DestinationUpdate s3DestinationUpdate
[Deprecated] Describes an update for a destination in Amazon S3.
ExtendedS3DestinationUpdate extendedS3DestinationUpdate
Describes an update for a destination in Amazon S3.
RedshiftDestinationUpdate redshiftDestinationUpdate
Describes an update for a destination in Amazon Redshift.
ElasticsearchDestinationUpdate elasticsearchDestinationUpdate
Describes an update for a destination in Amazon ES.
AmazonopensearchserviceDestinationUpdate amazonopensearchserviceDestinationUpdate
Describes an update for a destination in Amazon OpenSearch Service.
SplunkDestinationUpdate splunkDestinationUpdate
Describes an update for a destination in Splunk.
HttpEndpointDestinationUpdate httpEndpointDestinationUpdate
Describes an update to the specified HTTP endpoint destination.
AmazonOpenSearchServerlessDestinationUpdate amazonOpenSearchServerlessDestinationUpdate
Describes an update for a destination in the Serverless offering for Amazon OpenSearch Service.
SnowflakeDestinationUpdate snowflakeDestinationUpdate
An update to the Snowflake destination configuration settings.
List<E> subnetIds
The IDs of the subnets that you want Firehose to use to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs.
The number of ENIs that Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here. For more information about ENI quota, see Network Interfaces in the Amazon VPC Quotas topic.
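The quota guidance above says to assume up to three ENIs per delivery stream per subnet. The worst-case arithmetic for a planned deployment is a simple product; a small helper makes the assumption explicit (names are illustrative):

```java
// Sketch: worst-case ENI count for planning quota, per the guidance of
// up to 3 ENIs per delivery stream per subnet.
class EniQuotaSketch {
    private static final int MAX_ENIS_PER_SUBNET_PER_STREAM = 3;

    public static int worstCaseEnis(int deliveryStreams, int subnetsPerStream) {
        return deliveryStreams * subnetsPerStream * MAX_ENIS_PER_SUBNET_PER_STREAM;
    }

    public static void main(String[] args) {
        // 2 delivery streams, each using 3 subnets -> plan for up to 18 ENIs.
        System.out.println(worstCaseEnis(2, 3)); // 18
    }
}
```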
String roleARN
The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC. You can use your existing Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Firehose service principal and that it grants the following permissions:
ec2:DescribeVpcs
ec2:DescribeVpcAttribute
ec2:DescribeSubnets
ec2:DescribeSecurityGroups
ec2:DescribeNetworkInterfaces
ec2:CreateNetworkInterface
ec2:CreateNetworkInterfacePermission
ec2:DeleteNetworkInterface
When you specify subnets for delivering data to the destination in a private VPC, make sure you have a sufficient number of free IP addresses in the chosen subnets. If there are no free IP addresses available in a specified subnet, Firehose cannot create or add ENIs for data delivery in the private VPC, and delivery will be degraded or fail.
List<E> securityGroupIds
The IDs of the security groups that you want Firehose to use when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's security group. Also ensure that the Amazon ES domain's security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic. For more information about security group rules, see Security group rules in the Amazon VPC documentation.
List<E> subnetIds
The IDs of the subnets that Firehose uses to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs.
The number of ENIs that Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here. For more information about ENI quota, see Network Interfaces in the Amazon VPC Quotas topic.
String roleARN
The ARN of the IAM role that the delivery stream uses to create endpoints in the destination VPC. You can use your existing Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Firehose service principal and that it grants the following permissions:
ec2:DescribeVpcs
ec2:DescribeVpcAttribute
ec2:DescribeSubnets
ec2:DescribeSecurityGroups
ec2:DescribeNetworkInterfaces
ec2:CreateNetworkInterface
ec2:CreateNetworkInterfacePermission
ec2:DeleteNetworkInterface
If you revoke these permissions after you create the delivery stream, Firehose can't scale out by creating more ENIs when necessary. You might therefore see a degradation in performance.
List<E> securityGroupIds
The IDs of the security groups that Firehose uses when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's security group. Also ensure that the Amazon ES domain's security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic. For more information about security group rules, see Security group rules in the Amazon VPC documentation.
String vpcId
The ID of the Amazon ES destination's VPC.
Copyright © 2024. All rights reserved.