Package io.debezium.relational.history
Class KafkaDatabaseHistory
- java.lang.Object
-
- io.debezium.relational.history.AbstractDatabaseHistory
-
- io.debezium.relational.history.KafkaDatabaseHistory
-
- All Implemented Interfaces:
DatabaseHistory
@NotThreadSafe public class KafkaDatabaseHistory extends AbstractDatabaseHistory
A DatabaseHistory
implementation that records schema changes as normal SourceRecords on the specified topic, and that recovers the history by establishing a Kafka Consumer that re-processes all messages on that topic.
- Author:
- Randall Hauch
-
-
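As an illustration of how this class is typically wired up, the fragment below sketches the Kafka history settings in a Debezium connector configuration. The property names follow the pre-2.0 `database.history.*` naming convention; treat the exact names, values, and pass-through prefixes as assumptions to verify against your Debezium version (they correspond to the TOPIC, BOOTSTRAP_SERVERS, RECOVERY_POLL_INTERVAL_MS, RECOVERY_POLL_ATTEMPTS, CONSUMER_PREFIX, and PRODUCER_PREFIX fields documented below).

```properties
# Kafka-backed schema history for a Debezium connector (illustrative;
# property names per the pre-2.0 database.history.* naming convention)
database.history=io.debezium.relational.history.KafkaDatabaseHistory
database.history.kafka.bootstrap.servers=kafka:9092
database.history.kafka.topic=dbhistory.inventory
# Recovery tuning: how often and how many times to poll during recovery
database.history.kafka.recovery.poll.interval.ms=100
database.history.kafka.recovery.attempts=100
# Pass-through prefixes for the internal history consumer and producer
database.history.consumer.security.protocol=PLAINTEXT
database.history.producer.security.protocol=PLAINTEXT
```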
Field Summary
Fields
- static Field.Set ALL_FIELDS
- static Field BOOTSTRAP_SERVERS
- private ExecutorService checkTopicSettingsExecutor
- private static String CLEANUP_POLICY_NAME
- private static String CLEANUP_POLICY_VALUE
- private static String CONSUMER_PREFIX
- private Configuration consumerConfig
- private static short DEFAULT_TOPIC_REPLICATION_FACTOR
The default replication factor for the history topic, used when the value cannot be retrieved from the broker.
- private static String DEFAULT_TOPIC_REPLICATION_FACTOR_PROP_NAME
The name of the broker property defining the default replication factor for topics without an explicit setting.
- static Field INTERNAL_CONNECTOR_CLASS
- static Field INTERNAL_CONNECTOR_ID
- private static Duration KAFKA_QUERY_TIMEOUT
- private static org.slf4j.Logger LOGGER
- private int maxRecoveryAttempts
- private static Integer PARTITION
The one and only partition of the history topic.
- private static short PARTITION_COUNT
- private Duration pollInterval
- private org.apache.kafka.clients.producer.KafkaProducer<String,String> producer
- private static String PRODUCER_PREFIX
- private Configuration producerConfig
- private DocumentReader reader
- static Field RECOVERY_POLL_ATTEMPTS
- static Field RECOVERY_POLL_INTERVAL_MS
- private static String RETENTION_BYTES_NAME
- private static long RETENTION_MS_MAX
- private static long RETENTION_MS_MIN
- private static String RETENTION_MS_NAME
- static Field TOPIC
- private String topicName
- private static int UNLIMITED_VALUE
-
Fields inherited from class io.debezium.relational.history.AbstractDatabaseHistory
config, INTERNAL_PREFER_DDL, logger
-
Fields inherited from interface io.debezium.relational.history.DatabaseHistory
CONFIGURATION_FIELD_PREFIX_STRING, DDL_FILTER, NAME, SKIP_UNPARSEABLE_DDL_STATEMENTS, STORE_ONLY_CAPTURED_TABLES_DDL, STORE_ONLY_MONITORED_TABLES_DDL
-
-
Constructor Summary
Constructors Constructor Description KafkaDatabaseHistory()
-
Method Summary
Methods
- private void checkTopicSettings(String topicName)
- void configure(Configuration config, HistoryRecordComparator comparator, DatabaseHistoryListener listener, boolean useCatalogBeforeSchema)
Configure this instance.
- protected static String consumerConfigPropertyName(String kafkaConsumerPropertyName)
- boolean exists()
Determines if the database history entity exists; i.e. the storage has been initialized and the history has been populated.
- private static Field.Validator forKafka(Field.Validator validator)
- private short getDefaultTopicReplicationFactor(org.apache.kafka.clients.admin.AdminClient admin)
- private Long getEndOffsetOfDbHistoryTopic(Long previousEndOffset, org.apache.kafka.clients.consumer.KafkaConsumer<String,String> historyConsumer)
- private org.apache.kafka.clients.admin.Config getKafkaBrokerConfig(org.apache.kafka.clients.admin.AdminClient admin)
- void initializeStorage()
Called to initialize permanent storage of the history.
- protected void recoverRecords(Consumer<HistoryRecord> records)
- void start()
Start the history.
- void stop()
Stop recording history and release any resources acquired since configure(Configuration, HistoryRecordComparator, DatabaseHistoryListener).
- private void stopCheckTopicSettingsExecutor()
- boolean storageExists()
Determines if the underlying storage exists (e.g. a Kafka topic, file, or similar).
- protected void storeRecord(HistoryRecord record)
- String toString()
-
Methods inherited from class io.debezium.relational.history.AbstractDatabaseHistory
record, record, recover, skipUnparseableDdlStatements, storeOnlyCapturedTables
-
-
-
-
Field Detail
-
LOGGER
private static final org.slf4j.Logger LOGGER
-
CLEANUP_POLICY_NAME
private static final String CLEANUP_POLICY_NAME
- See Also:
- Constant Field Values
-
CLEANUP_POLICY_VALUE
private static final String CLEANUP_POLICY_VALUE
- See Also:
- Constant Field Values
-
RETENTION_MS_NAME
private static final String RETENTION_MS_NAME
- See Also:
- Constant Field Values
-
RETENTION_MS_MAX
private static final long RETENTION_MS_MAX
- See Also:
- Constant Field Values
-
RETENTION_MS_MIN
private static final long RETENTION_MS_MIN
-
RETENTION_BYTES_NAME
private static final String RETENTION_BYTES_NAME
- See Also:
- Constant Field Values
-
UNLIMITED_VALUE
private static final int UNLIMITED_VALUE
- See Also:
- Constant Field Values
-
PARTITION_COUNT
private static final short PARTITION_COUNT
- See Also:
- Constant Field Values
-
DEFAULT_TOPIC_REPLICATION_FACTOR_PROP_NAME
private static final String DEFAULT_TOPIC_REPLICATION_FACTOR_PROP_NAME
The name of the broker property defining the default replication factor for topics without an explicit setting.
- See Also:
kafka.server.KafkaConfig.DefaultReplicationFactorProp, Constant Field Values
-
DEFAULT_TOPIC_REPLICATION_FACTOR
private static final short DEFAULT_TOPIC_REPLICATION_FACTOR
The default replication factor for the history topic, used when the value cannot be retrieved from the broker.
- See Also:
- Constant Field Values
-
TOPIC
public static final Field TOPIC
-
BOOTSTRAP_SERVERS
public static final Field BOOTSTRAP_SERVERS
-
RECOVERY_POLL_INTERVAL_MS
public static final Field RECOVERY_POLL_INTERVAL_MS
-
RECOVERY_POLL_ATTEMPTS
public static final Field RECOVERY_POLL_ATTEMPTS
-
INTERNAL_CONNECTOR_CLASS
public static final Field INTERNAL_CONNECTOR_CLASS
-
INTERNAL_CONNECTOR_ID
public static final Field INTERNAL_CONNECTOR_ID
-
ALL_FIELDS
public static Field.Set ALL_FIELDS
-
CONSUMER_PREFIX
private static final String CONSUMER_PREFIX
- See Also:
- Constant Field Values
-
PRODUCER_PREFIX
private static final String PRODUCER_PREFIX
- See Also:
- Constant Field Values
-
KAFKA_QUERY_TIMEOUT
private static final Duration KAFKA_QUERY_TIMEOUT
-
PARTITION
private static final Integer PARTITION
The one and only partition of the history topic.
-
reader
private final DocumentReader reader
-
topicName
private String topicName
-
consumerConfig
private Configuration consumerConfig
-
producerConfig
private Configuration producerConfig
-
maxRecoveryAttempts
private int maxRecoveryAttempts
-
pollInterval
private Duration pollInterval
-
checkTopicSettingsExecutor
private ExecutorService checkTopicSettingsExecutor
-
-
Method Detail
-
configure
public void configure(Configuration config, HistoryRecordComparator comparator, DatabaseHistoryListener listener, boolean useCatalogBeforeSchema)
Description copied from interface: DatabaseHistory
Configure this instance.
- Specified by:
configure
in interfaceDatabaseHistory
- Overrides:
configure
in classAbstractDatabaseHistory
- Parameters:
config - the configuration for this history store
comparator - the function that should be used to compare history records during recovery; may be null if the default comparator is to be used
listener - TODO
useCatalogBeforeSchema - true if the parsed string for a table contains only 2 items and the first should be used as the catalog and the second as the table name, or false if the first should be used as the schema and the second as the table name
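The useCatalogBeforeSchema flag can be illustrated with a small, self-contained sketch. The parse helper below is hypothetical (not Debezium code): it only shows how a two-part table identifier is read either as catalog.table (MySQL-style) or schema.table (PostgreSQL-style).

```java
// Hypothetical sketch of the useCatalogBeforeSchema semantics; not Debezium code.
public class TableIdSketch {

    // Returns {catalog, schema, table} for a dotted identifier.
    static String[] parse(String id, boolean useCatalogBeforeSchema) {
        String[] parts = id.split("\\.");
        if (parts.length == 2) {
            return useCatalogBeforeSchema
                    ? new String[] { parts[0], null, parts[1] }  // first part is the catalog
                    : new String[] { null, parts[0], parts[1] }; // first part is the schema
        }
        return new String[] { null, null, id };                  // unqualified table name
    }

    public static void main(String[] args) {
        // MySQL-style: "inventory" is a catalog (database)
        String[] mysql = parse("inventory.customers", true);
        // PostgreSQL-style: "public" is a schema
        String[] pg = parse("public.customers", false);
        System.out.println(mysql[0] + "/" + mysql[2]); // inventory/customers
        System.out.println(pg[1] + "/" + pg[2]);       // public/customers
    }
}
```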
-
start
public void start()
Description copied from interface: DatabaseHistory
Start the history.
- Specified by:
start
in interfaceDatabaseHistory
- Overrides:
start
in classAbstractDatabaseHistory
-
storeRecord
protected void storeRecord(HistoryRecord record) throws DatabaseHistoryException
- Specified by:
storeRecord
in classAbstractDatabaseHistory
- Throws:
DatabaseHistoryException
-
recoverRecords
protected void recoverRecords(Consumer<HistoryRecord> records)
- Specified by:
recoverRecords
in classAbstractDatabaseHistory
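Conceptually, recovery re-reads every record on the history topic from the beginning and hands each one to the supplied consumer; the fields above (maxRecoveryAttempts, pollInterval) suggest the real implementation polls a KafkaConsumer repeatedly until the topic's end offset is reached. A minimal, self-contained sketch of that idea, with an in-memory list standing in for the topic:

```java
import java.util.List;
import java.util.function.Consumer;

// Self-contained sketch of the recovery idea; an in-memory list stands in
// for the Kafka history topic (the real code polls a KafkaConsumer until
// the end offset of the topic is reached).
public class RecoverySketch {

    static void recoverRecords(List<String> topic, Consumer<String> records) {
        // In Kafka terms: seekToBeginning, then poll until position == endOffset.
        for (String historyRecord : topic) {
            records.accept(historyRecord);
        }
    }

    public static void main(String[] args) {
        List<String> topic = List.of(
                "CREATE TABLE customers (id INT)",
                "ALTER TABLE customers ADD COLUMN name VARCHAR(255)");
        StringBuilder replayed = new StringBuilder();
        recoverRecords(topic, ddl -> replayed.append(ddl).append(';'));
        System.out.println(replayed); // the full DDL history, replayed in order
    }
}
```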
-
getEndOffsetOfDbHistoryTopic
private Long getEndOffsetOfDbHistoryTopic(Long previousEndOffset, org.apache.kafka.clients.consumer.KafkaConsumer<String,String> historyConsumer)
-
storageExists
public boolean storageExists()
Description copied from interface: DatabaseHistory
Determines if the underlying storage exists (e.g. a Kafka topic, file, or similar). Note: storage may exist while history entities have not yet been written; see DatabaseHistory.exists().
-
exists
public boolean exists()
Description copied from interface: DatabaseHistory
Determines if the database history entity exists; i.e. the storage must have been initialized and the history must have been populated.
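The distinction between storageExists() and exists() can be sketched with an in-memory stand-in (hypothetical code, not Debezium's): the topic may already exist while no history records have been written to it yet.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the storageExists()/exists() distinction;
// a List stands in for the history topic.
public class ExistenceSketch {

    static List<String> topic;                  // null means the topic was never created

    static boolean storageExists() {            // was the storage (topic) created?
        return topic != null;
    }

    static boolean exists() {                   // was any history actually written?
        return topic != null && !topic.isEmpty();
    }

    public static void main(String[] args) {
        topic = new ArrayList<>();              // like initializeStorage(): create empty topic
        System.out.println(storageExists());    // true
        System.out.println(exists());           // false: storage exists, but no records yet
        topic.add("CREATE TABLE t (id INT)");   // like storeRecord(...)
        System.out.println(exists());           // true
    }
}
```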
-
checkTopicSettings
private void checkTopicSettings(String topicName)
-
stop
public void stop()
Description copied from interface: DatabaseHistory
Stop recording history and release any resources acquired since configure(Configuration, HistoryRecordComparator, DatabaseHistoryListener).
- Specified by:
stop
in interfaceDatabaseHistory
- Overrides:
stop
in classAbstractDatabaseHistory
-
stopCheckTopicSettingsExecutor
private void stopCheckTopicSettingsExecutor()
-
consumerConfigPropertyName
protected static String consumerConfigPropertyName(String kafkaConsumerPropertyName)
-
initializeStorage
public void initializeStorage()
Description copied from interface: DatabaseHistory
Called to initialize permanent storage of the history.
- Specified by:
initializeStorage
in interfaceDatabaseHistory
- Overrides:
initializeStorage
in classAbstractDatabaseHistory
-
getDefaultTopicReplicationFactor
private short getDefaultTopicReplicationFactor(org.apache.kafka.clients.admin.AdminClient admin) throws Exception
- Throws:
Exception
-
getKafkaBrokerConfig
private org.apache.kafka.clients.admin.Config getKafkaBrokerConfig(org.apache.kafka.clients.admin.AdminClient admin) throws Exception
- Throws:
Exception
-
forKafka
private static Field.Validator forKafka(Field.Validator validator)
-
-