@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class KafkaSettings extends Object implements Serializable, Cloneable, StructuredPojo
Provides information that describes an Apache Kafka endpoint. This information includes the output format of records applied to the endpoint and details of transaction and control table data.
| Constructor and Description |
|---|
| KafkaSettings() |
| Modifier and Type | Method and Description |
|---|---|
| KafkaSettings | clone() |
| boolean | equals(Object obj) |
| String | getBroker() - The broker location and port of the Kafka broker that hosts your Kafka instance. |
| Boolean | getIncludeControlDetails() - Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. |
| Boolean | getIncludeNullAndEmpty() - Include NULL and empty columns for records migrated to the endpoint. |
| Boolean | getIncludePartitionValue() - Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. |
| Boolean | getIncludeTableAlterOperations() - Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. |
| Boolean | getIncludeTransactionDetails() - Provides detailed transaction information from the source database. |
| String | getMessageFormat() - The output format for the records created on the endpoint. |
| Integer | getMessageMaxBytes() - The maximum size in bytes for records created on the endpoint. The default is 1,000,000. |
| Boolean | getPartitionIncludeSchemaTable() - Prefixes schema and table names to partition values, when the partition type is primary-key-type. |
| String | getTopic() - The topic to which you migrate the data. |
| int | hashCode() |
| Boolean | isIncludeControlDetails() - Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. |
| Boolean | isIncludeNullAndEmpty() - Include NULL and empty columns for records migrated to the endpoint. |
| Boolean | isIncludePartitionValue() - Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. |
| Boolean | isIncludeTableAlterOperations() - Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. |
| Boolean | isIncludeTransactionDetails() - Provides detailed transaction information from the source database. |
| Boolean | isPartitionIncludeSchemaTable() - Prefixes schema and table names to partition values, when the partition type is primary-key-type. |
| void | marshall(ProtocolMarshaller protocolMarshaller) - Marshalls this structured data using the given ProtocolMarshaller. |
| void | setBroker(String broker) - The broker location and port of the Kafka broker that hosts your Kafka instance. |
| void | setIncludeControlDetails(Boolean includeControlDetails) - Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. |
| void | setIncludeNullAndEmpty(Boolean includeNullAndEmpty) - Include NULL and empty columns for records migrated to the endpoint. |
| void | setIncludePartitionValue(Boolean includePartitionValue) - Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. |
| void | setIncludeTableAlterOperations(Boolean includeTableAlterOperations) - Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. |
| void | setIncludeTransactionDetails(Boolean includeTransactionDetails) - Provides detailed transaction information from the source database. |
| void | setMessageFormat(String messageFormat) - The output format for the records created on the endpoint. |
| void | setMessageMaxBytes(Integer messageMaxBytes) - The maximum size in bytes for records created on the endpoint. The default is 1,000,000. |
| void | setPartitionIncludeSchemaTable(Boolean partitionIncludeSchemaTable) - Prefixes schema and table names to partition values, when the partition type is primary-key-type. |
| void | setTopic(String topic) - The topic to which you migrate the data. |
| String | toString() - Returns a string representation of this object. |
| KafkaSettings | withBroker(String broker) - The broker location and port of the Kafka broker that hosts your Kafka instance. |
| KafkaSettings | withIncludeControlDetails(Boolean includeControlDetails) - Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. |
| KafkaSettings | withIncludeNullAndEmpty(Boolean includeNullAndEmpty) - Include NULL and empty columns for records migrated to the endpoint. |
| KafkaSettings | withIncludePartitionValue(Boolean includePartitionValue) - Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. |
| KafkaSettings | withIncludeTableAlterOperations(Boolean includeTableAlterOperations) - Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. |
| KafkaSettings | withIncludeTransactionDetails(Boolean includeTransactionDetails) - Provides detailed transaction information from the source database. |
| KafkaSettings | withMessageFormat(MessageFormatValue messageFormat) - The output format for the records created on the endpoint. |
| KafkaSettings | withMessageFormat(String messageFormat) - The output format for the records created on the endpoint. |
| KafkaSettings | withMessageMaxBytes(Integer messageMaxBytes) - The maximum size in bytes for records created on the endpoint. The default is 1,000,000. |
| KafkaSettings | withPartitionIncludeSchemaTable(Boolean partitionIncludeSchemaTable) - Prefixes schema and table names to partition values, when the partition type is primary-key-type. |
| KafkaSettings | withTopic(String topic) - The topic to which you migrate the data. |
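All of the with* methods above store the value and return the same instance, so a KafkaSettings object can be configured fluently. The sketch below illustrates that pattern with a minimal, hypothetical stand-in class; it is not the SDK class itself (which requires the AWS SDK for Java on the classpath), and only a few of the documented properties are modeled.

```java
// Minimal stand-in mimicking the fluent "wither" pattern of KafkaSettings.
// NOT the SDK class; it only illustrates how the with* calls chain.
public class KafkaSettingsSketch {
    private String broker;
    private String topic;
    private String messageFormat;

    public KafkaSettingsSketch withBroker(String broker) {
        this.broker = broker;   // store the value...
        return this;            // ...and return this, enabling chaining
    }

    public KafkaSettingsSketch withTopic(String topic) {
        this.topic = topic;
        return this;
    }

    public KafkaSettingsSketch withMessageFormat(String messageFormat) {
        this.messageFormat = messageFormat;
        return this;
    }

    public String getBroker() { return broker; }
    public String getTopic() { return topic; }
    public String getMessageFormat() { return messageFormat; }

    public static void main(String[] args) {
        // One chained expression configures every field.
        KafkaSettingsSketch settings = new KafkaSettingsSketch()
                .withBroker("ec2-12-345-678-901.compute-1.amazonaws.com:2345")
                .withTopic("kafka-default-topic")
                .withMessageFormat("JSON");
        System.out.println(settings.getBroker());
        System.out.println(settings.getMessageFormat());
    }
}
```

The real class works the same way: each with* method delegates to the corresponding setter and returns this, which is why the return type of every with* method in the table above is KafkaSettings.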
public void setBroker(String broker)

The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".

Parameters:
broker - The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".

public String getBroker()

The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".

Returns:
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".

public KafkaSettings withBroker(String broker)

The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".

Parameters:
broker - The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".

public void setTopic(String topic)
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.

Parameters:
topic - The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.

public String getTopic()

The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.

Returns:
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.

public KafkaSettings withTopic(String topic)

The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.

Parameters:
topic - The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.

public void setMessageFormat(String messageFormat)
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

Parameters:
messageFormat - The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

See Also:
MessageFormatValue

public String getMessageFormat()

The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

Returns:
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

See Also:
MessageFormatValue

public KafkaSettings withMessageFormat(String messageFormat)

The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

Parameters:
messageFormat - The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

See Also:
MessageFormatValue

public KafkaSettings withMessageFormat(MessageFormatValue messageFormat)

The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

Parameters:
messageFormat - The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

See Also:
MessageFormatValue
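The difference between the two message formats is purely presentational: JSON is multi-line and indented, while JSON_UNFORMATTED is the same document on a single line with no tabs. A rough plain-Java illustration of the two shapes (hand-built strings, no SDK or Kafka involved; the unformat helper is hypothetical, not part of any library):

```java
public class MessageFormatDemo {
    // Collapse a pretty-printed JSON string into a single-line shape,
    // roughly what JSON_UNFORMATTED looks like: drop each newline together
    // with the indentation whitespace that follows it.
    static String unformat(String prettyJson) {
        return prettyJson.replaceAll("\\n\\s*", "");
    }

    public static void main(String[] args) {
        // A record rendered the way the JSON (default) format lays it out.
        String pretty = "{\n\t\"id\": 1,\n\t\"name\": \"row\"\n}";
        System.out.println(pretty);            // multi-line, indented
        System.out.println(unformat(pretty));  // single line, no tabs
    }
}
```

Either format carries the same fields; consumers that parse the payload with a JSON library see no difference, so the choice mainly matters for log readability and per-message size.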
public void setIncludeTransactionDetails(Boolean includeTransactionDetails)

Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.

Parameters:
includeTransactionDetails - Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.

public Boolean getIncludeTransactionDetails()

Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.

Returns:
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.

public KafkaSettings withIncludeTransactionDetails(Boolean includeTransactionDetails)

Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.

Parameters:
includeTransactionDetails - Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.

public Boolean isIncludeTransactionDetails()

Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.

Returns:
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.

public void setIncludePartitionValue(Boolean includePartitionValue)
Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is false.

Parameters:
includePartitionValue - Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is false.

public Boolean getIncludePartitionValue()

Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is false.

Returns:
Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is false.

public KafkaSettings withIncludePartitionValue(Boolean includePartitionValue)

Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is false.

Parameters:
includePartitionValue - Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is false.

public Boolean isIncludePartitionValue()

Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is false.

Returns:
Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is false.

public void setPartitionIncludeSchemaTable(Boolean partitionIncludeSchemaTable)
Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.

Parameters:
partitionIncludeSchemaTable - Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.

public Boolean getPartitionIncludeSchemaTable()

Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.

Returns:
Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.

public KafkaSettings withPartitionIncludeSchemaTable(Boolean partitionIncludeSchemaTable)

Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.

Parameters:
partitionIncludeSchemaTable - Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.

public Boolean isPartitionIncludeSchemaTable()

Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.

Returns:
Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.

public void setIncludeTableAlterOperations(Boolean includeTableAlterOperations)
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.

Parameters:
includeTableAlterOperations - Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.

public Boolean getIncludeTableAlterOperations()

Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.

Returns:
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.

public KafkaSettings withIncludeTableAlterOperations(Boolean includeTableAlterOperations)

Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.

Parameters:
includeTableAlterOperations - Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.

public Boolean isIncludeTableAlterOperations()

Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.

Returns:
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.

public void setIncludeControlDetails(Boolean includeControlDetails)
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.

Parameters:
includeControlDetails - Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.

public Boolean getIncludeControlDetails()

Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.

Returns:
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.

public KafkaSettings withIncludeControlDetails(Boolean includeControlDetails)

Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.

Parameters:
includeControlDetails - Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.

public Boolean isIncludeControlDetails()

Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.

Returns:
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.

public void setMessageMaxBytes(Integer messageMaxBytes)
The maximum size in bytes for records created on the endpoint. The default is 1,000,000.

Parameters:
messageMaxBytes - The maximum size in bytes for records created on the endpoint. The default is 1,000,000.

public Integer getMessageMaxBytes()

The maximum size in bytes for records created on the endpoint. The default is 1,000,000.

public KafkaSettings withMessageMaxBytes(Integer messageMaxBytes)

The maximum size in bytes for records created on the endpoint. The default is 1,000,000.

Parameters:
messageMaxBytes - The maximum size in bytes for records created on the endpoint. The default is 1,000,000.

public void setIncludeNullAndEmpty(Boolean includeNullAndEmpty)
Include NULL and empty columns for records migrated to the endpoint. The default is false.

Parameters:
includeNullAndEmpty - Include NULL and empty columns for records migrated to the endpoint. The default is false.

public Boolean getIncludeNullAndEmpty()

Include NULL and empty columns for records migrated to the endpoint. The default is false.

Returns:
Include NULL and empty columns for records migrated to the endpoint. The default is false.

public KafkaSettings withIncludeNullAndEmpty(Boolean includeNullAndEmpty)

Include NULL and empty columns for records migrated to the endpoint. The default is false.

Parameters:
includeNullAndEmpty - Include NULL and empty columns for records migrated to the endpoint. The default is false.

public Boolean isIncludeNullAndEmpty()

Include NULL and empty columns for records migrated to the endpoint. The default is false.

Returns:
Include NULL and empty columns for records migrated to the endpoint. The default is false.

public String toString()
Returns a string representation of this object.

Overrides:
toString in class Object

See Also:
Object.toString()

public KafkaSettings clone()

public void marshall(ProtocolMarshaller protocolMarshaller)

Marshalls this structured data using the given ProtocolMarshaller.

Specified by:
marshall in interface StructuredPojo

Parameters:
protocolMarshaller - Implementation of ProtocolMarshaller used to marshall this object's data.
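Generated SDK model classes such as this one behave as value objects: equals and hashCode compare field values rather than object identity, and clone returns an independent copy. A minimal self-contained sketch of that contract, using a hypothetical single-field stand-in class rather than the SDK class itself:

```java
import java.util.Objects;

// Hypothetical value object following the same contract as generated SDK
// model classes: field-based equals/hashCode and a field-copying clone.
public class PojoSketch implements Cloneable {
    private String topic;

    public PojoSketch withTopic(String topic) { this.topic = topic; return this; }
    public String getTopic() { return topic; }

    @Override public boolean equals(Object obj) {
        if (this == obj) return true;
        if (!(obj instanceof PojoSketch)) return false;
        return Objects.equals(topic, ((PojoSketch) obj).topic);
    }

    @Override public int hashCode() { return Objects.hash(topic); }

    @Override public PojoSketch clone() {
        // Independent copy: mutating the clone never affects the original.
        return new PojoSketch().withTopic(topic);
    }

    public static void main(String[] args) {
        PojoSketch a = new PojoSketch().withTopic("t1");
        PojoSketch b = a.clone();
        System.out.println(a.equals(b));  // equal by value right after cloning
        b.withTopic("t2");
        System.out.println(a.equals(b));  // diverges once the clone is mutated
    }
}
```

This is why two KafkaSettings instances configured with the same values compare equal, and why a cloned settings object can be modified for a second endpoint without disturbing the first.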