All Classes and Interfaces

An abstract implementation of ChangeRecordEmitter.
 
 
An incremental snapshot change event source that emits events from a DB log interleaved with snapshot events.
A class describing the current state of an incremental snapshot.
An abstract implementation of Partition which provides default facilities for logging.
An abstract regex implementation of TopicNamingStrategy.
 
An abstract implementation of SnapshotChangeEventSource that all implementations should extend to inherit common functionality.
Mutable context which is populated in the course of snapshotting.
A configuration describing the task to be performed during snapshotting.
 
 
Common information provided by all connectors in either source field or offsets.
Common information provided by all connectors in either source field or offsets.
An abstract implementation of TopicNamingStrategy.
An abstract unicode converter topic naming strategy implementation of TopicNamingStrategy.
ActivateTracingSpan<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
This SMT enables integration with a tracing system.
Validator for additional fields in the outbox event router.
 
An array of Values.
 
Reads Array instances from a variety of input forms.
A Kafka Deserializer and Serializer that operates upon Debezium Arrays.
Writes Array instances to a variety of output forms.
An immutable attribute associated with a relational table.
An editor for Attribute instances.
Implementation of the AttributeEditor contract.
Relational model implementation of Attribute.
 
Base class for Debezium's CDC SourceTask implementations.
 
Package-level implementation of Array.
Package-level implementation of Document.
Package-level implementation of Array.Entry in an Array.
Package-level implementation of a Document.Field inside a Document.
A custom value converter that allows Avro messages to be delivered as raw binary data to Kafka.
A specialization of Value that represents a binary value.
A set of bits of arbitrary length.
A variant of Consumer that can be blocked and interrupted.
A variant of Function that can be blocked and interrupted.
Represents an operation that accepts a single boolean-valued argument and returns no result.
A hash table supporting full concurrency of retrievals and adjustable expected concurrency for updates.
 
 
 
ConcurrentHashMap list entry.
 
Adapted for Infinispan's BoundedConcurrentHashMap using LIRS implementation ideas from Charles Fry ([email protected]); see http://code.google.com/p/concurrentlinkedhashmap/source/browse/trunk/src/test/java/com/googlecode/concurrentlinkedhashmap/caches/LirsMap.java for the original sources.
 
 
 
 
Segments are specialized versions of hash tables.
A BlockingConsumer that retains a maximum number of values in a buffer before sending them to a delegate consumer.
ByLogicalTableRouter<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
A logical table consists of one or more physical tables with the same schema.
A customized value converter that allows an Avro message to be delivered as-is (byte[]) to Kafka. This is used for the outbox pattern, where the payload is serialized by KafkaAvroSerializer and the consumer needs to receive the deserialized payload.
 
A decorator for a ResultSet that cancels the parent Statement before the delegate set is closed.
Contains contextual information and objects scoped to the lifecycle of Debezium's SourceTask implementations.
 
A queue which serves as a handover point between producer threads (e.g. a database log reader) and consumer threads (e.g. the Kafka Connect polling loop).
 
 
 
 
Coordinates one or more ChangeEventSources and executes them in order.
A factory for creating ChangeEventSources specific to one database.
Common API for all change event source metrics regardless of the connector phase.
Metrics that are common for both snapshot and streaming change event sources
Represents a change applied to a source database and emits one or more corresponding change records.
Callback passed to ChangeRecordEmitters, allowing them to produce one or more change records.
A logical representation of a change table containing changes for a given source table.
A wrapper around a JDBC ResultSet for a change table for processing rows.
An abstraction for a clock.
 
An implementation of Converter that expresses schemas and objects with the CloudEvents specification.
Builder of a CloudEvents envelope schema.
Builder of a CloudEvents value.
Configuration options for CloudEventsConverter instances.
An abstract class that builds CloudEvents attributes using fields of change records provided by RecordParser.
The constants for the names of CloudEvents attributes.
A ServiceLoader interface that connectors should implement if they wish to provide a way to emit change events using the CloudEvents converter and format.
A set of utilities for more easily creating various kinds of collections.
An immutable definition of a column.
An editor for Column instances.
 
Modes for column name filters, either including a catalog (database) or schema name.
Unique identifier for a column in a database table.
 
A factory for a function used to map values of a column.
A set of ColumnMapper objects for columns.
A builder of Selectors.
 
Utility class for mapping columns to various data structures from Table and ResultSet.
 
 
Configuration options common to all Debezium connectors.
The set of predefined BinaryHandlingMode options or aliases.
The set of predefined modes for dealing with failures during event processing.
The set of predefined FieldNameAdjustmentMode options.
The set of predefined SchemaNameAdjustmentMode options.
The set of predefined versions, e.g. for the source struct maker.
Carries common event metrics.
Exposes common event metrics.
 
A specialization of Value that wraps another Value that is not comparable.
ComputePartition<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
Deprecated.
This SMT will soon be removed.
Deprecated.
This Configuration will soon be removed.
Deprecated.
Defines the configuration options of a connector.
Editor for creating ConfigDefinitions.
An immutable representation of a Debezium configuration.
A builder of Configuration objects.
The basic interface for configuration builders.
 
A special channel that is strictly connected to the Kafka Connect API.
An on-demand provider of a JDBC connection.
Carries connection metrics.
Exposes connection metrics.
 
A marker interface for an event with the connector that isn't dispatched to the change event stream but instead is potentially of interest to other parts of the framework such as metrics.
 
 
The serializer responsible for converting TableChanges into an array of Structs.
Temporal conversion constants.
A specialization of Value that wraps another Value to allow conversion of types.
A read-only result of a counter.
Conflict-free Replicated Data Types (CRDTs).
The registry of all converters that were provided by the connector configuration.
Implementation of the heartbeat feature that allows for a DB query to be executed with every heartbeat.
The schema of a database.
 
A class invoked by EventDispatcher whenever an event is available for processing.
A class describing a DataCollection for an incremental snapshot.
Provides factory methods for obtaining DataCollectionFilters.DataCollectionFilter instances as per the current connector configuration.
 
 
An immutable representation of a data type
 
A utility for converting various Java temporal object representations into the signed INT32 number of days since January 1, 1970, at 00:00:00 UTC, and for defining a Kafka Connect Schema for date values with no time or timezone information.
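To make the day-based representation concrete, here is a minimal sketch using plain java.time (not the Debezium utility itself) that computes the signed number of days since January 1, 1970:

    import java.time.LocalDate;

    public class EpochDayExample {
        public static void main(String[] args) {
            // toEpochDay() yields the count of days since 1970-01-01,
            // i.e. the INT32 value described above.
            LocalDate date = LocalDate.of(2024, 3, 15);
            long epochDay = date.toEpochDay(); // 19797
            System.out.println(epochDay);
        }
    }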
A DdlParserListener that accumulates changes, allowing them to be consumed in the same order, grouped by database.
 
A parser interface for DDL statements.
An interface that can listen to various actions of a DdlParser.
An event describing the altering of a database.
An event describing the creation of a database.
An event describing the dropping of a database.
The base class for all table-related events.
An event describing the switching of a database.
The base class for all concrete events.
The type of concrete DdlParserListener.Events.
An event describing the setting of a variable.
An event describing the altering of a table.
An event describing the creation (or replacement) of a table.
An event describing the dropping of a table.
The base class for all table-related events.
An event describing the creation of an index on a table.
An event describing the dropping of an index on a table.
The abstract base class for all index-related events.
An event describing the truncating of a table.
A factory class for Debezium provided serializers/deserializers.
 
 
 
Implements a regular expression strategy to determine data event topic names using DataCollectionId.databaseParts().
The default implementation of metrics related to the snapshot phase of a connector.
The default implementation of metrics related to the streaming phase of a connector.
Determines data event topic names using DataCollectionId.databaseParts().
Implements a unicode converter strategy to determine data event topic names using DataCollectionId.databaseParts().
This interface is used to convert the string default value to a Java type recognized by value converters for a subset of types.
Converts the raw JDBC default value expression for a column into an object.
Encapsulates the logic of determining a delay when some criterion is met.
A Count that also tracks changes to the value within the last interval.
A simple counter that maintains a single changing value by separately tracking the positive and negative changes, and by tracking recent changes in this value since last reset.
A document contains multiple Document.Fields, each with a name and possibly-null Value.
 
Reads Document instances from a variety of input forms.
A Kafka Deserializer and Serializer that operates upon Debezium Documents.
Writes Document instances to a variety of output forms.
Encapsulates the logic of determining a delay when some criterion is met.
A semantic type for an enumeration, where the string values are one of the enumeration's values.
A configuration option with a fixed set of possible values, i.e. an enumeration.
A semantic type for a set of enumerated values, where the string values contain comma-separated values from an enumeration.
An immutable descriptor for the structure of Debezium message envelopes.
A builder of an envelope schema.
The constants for the names of the fields in the message envelope.
The constants for the values for the operation field in the message envelope.
 
Central dispatcher for data change and schema change events.
 
Reaction to an incoming change event for which the schema is not found.
Change record receiver used during snapshotting.
 
An interface implemented by each connector that enables metrics metadata to be extracted from an event.
EventRouter<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
Debezium Outbox Transform Event Router
Debezium Outbox Transform configuration definition
 
 
 
 
Defines a contract allowing a connector to override specific Outbox configuration behavior.
EventRouterDelegate<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
A delegate class containing common logic shared between Outbox Event Routers for SQL databases and MongoDB.
 
 
The action to trigger an ad-hoc snapshot.
ExtractChangedRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
This SMT extracts the changed and unchanged field names into Connect Headers by comparing the before and after values.
ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
Debezium generates CDC (Envelope) records that are structs containing the values before and after the change.
Represents a field that should be added to the outgoing record as a header attribute or struct field.
 
 
An immutable definition of a field that may appear within a Configuration instance.
 
 
 
A Field.Recommender that will look at several fields that are deemed to be exclusive, such that when the first of them has a value the others are made invisible.
A Field.Recommender that will look at several fields that are deemed to be exclusive, such that when the first of them has a value the others are made invisible.
Validation logic for numeric ranges
A component that is able to provide recommended values for a field given a configuration.
A set of fields.
A functional interface that accepts validation results.
A functional interface that can be used to validate field values.
Implementations return names for fields.
A field namer that caches names it has obtained from a delegate
Implementations determine the field name corresponding to a given column.
A field namer that replaces any characters invalid in a field with _.
An underscore replacement implementation of ReplacementFunction for field names.
A unicode replacement implementation for field names, extending UnicodeReplacementFunction.
The class responsible for processing of signals delivered to Debezium via a file.
A form of a read-write lock that has methods that allow lambdas to be performed while the read or write lock is acquired and held.
A read-only result of the state of a grow-only GCounter.
A simple grow-only counter that maintains a single changing value by tracking the positive changes to the value.
A semantic type for a Geography class.
A semantic type for an OGC Simple Features for SQL Geometry.
 
Utilities for easily computing hash codes.
HeaderToValue<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
 
 
A class that is able to generate periodic heartbeat messages based on a pre-configured interval.
Returns the offset to be used when emitting a heartbeat event.
Defines a contract for providing a connection to the DatabaseHeartbeatImpl.
 
A factory for creating the appropriate Heartbeat implementation based on the connector type and its configured properties.
Default implementation of Heartbeat
A utility class for mapping between byte arrays and their hex representation and back again (copied from https://github.com/undertow-io/undertow/blob/master/core/src/main/java/io/undertow/util/HexConverter.java).
A database schema that is historized, i.e. its changes are tracked in a schema history so that past states can be recovered.
 
Configuration options shared across the relational CDC connectors which use a persistent database schema history.
A DatabaseSchema of a relational database which has a schema history that can be recovered to the current state when restarting a connector.
 
 
Compares HistoryRecord instances to determine which came first.
 
A Contract t
 
 
 
Instantiates given classes reflectively.
A utility for representing a duration as a string value formatted using the ISO string format.
A set of utilities for more easily performing I/O.
A utility for creating iterators.
A read only iterator that is able to preview the next value without consuming it or altering the behavior or semantics of the normal Iterator methods.
An iterator that is able to transform its contents to another type.
A DocumentReader and ArrayReader that uses the Jackson library to read JSON.
A DocumentWriter and ArrayWriter that uses the Jackson library to write JSON.
 
A specialized configuration for the Debezium driver.
The JDBC-specific builder used to construct and/or alter JDBC configuration instances.
A utility that simplifies using a JDBC connection and executing transactions composed of multiple statements.
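As a rough illustration of how such a utility is typically used, here is a minimal sketch; it assumes JdbcConnection exposes a fluent execute(String...) and a query(String, ResultSetConsumer) method, so treat the exact signatures and the table/statement names as assumptions rather than a definitive reference:

    import java.sql.SQLException;
    import io.debezium.jdbc.JdbcConnection;

    public class MultiStatementExample {
        // Runs several statements as one unit of work and then queries the result.
        static void populateAndCount(JdbcConnection conn) throws SQLException {
            conn.execute(
                    "CREATE TABLE IF NOT EXISTS audit (id INT, note VARCHAR(255))",
                    "INSERT INTO audit VALUES (1, 'initial')");
            conn.query("SELECT COUNT(*) FROM audit", rs -> {
                if (rs.next()) {
                    System.out.println("rows: " + rs.getLong(1));
                }
            });
        }
    }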
 
 
 
Establishes JDBC connections.
 
Defines multiple JDBC operations.
 
 
Extracts data from a ResultSet.
 
A function to create a statement from a connection.
 
RuntimeException which is raised for various SQLException instances and which retains the error code from the original exception.
A provider of ValueConverters and SchemaBuilders for various column types.
 
 
 
 
 
 
 
A utility for joining multiple character sequences together.
A semantic type for a JSON string.
 
A Serde that (de-)serializes JSON.
A configuration for the JsonSerde serializer/deserializer.
The serializer responsible for converting TableChanges into a JSON format.
Utility class dealing with Java version information.
The class responsible for processing of signals delivered to Debezium via a dedicated Kafka topic.
An immutable definition of a table's key.
 
Custom key mapper used to override or define a custom key.
Default key mapper using the primary key as the key.
Provides the column(s) that should be used within the message key for a given table.
Legacy source info that does not enforce presence of the version and connector fields
Log<P extends Partition>
 
A utility that provides a consistent set of properties for the Mapped Diagnostic Context (MDC) properties used by Debezium components.
A snapshot of an MDC context that can be restored via LoggingContext.PreviousContext.restore().
Functionality for dealing with logging.
 
A custom implementation of LRUCache that allows exposure to the underlying delegate's key or values collections.
An on-demand provider of a JDBC connection.
A ColumnMapper implementation that ensures that string values are masked.
V1 default and previous version.
 
 
Utilities for performing math operations with mixed native and advanced numeric types.
A SchemaHistory implementation that stores the schema history in memory.
Base for metrics implementations.
A class that can be used to perform an action at a regular interval.
A utility for representing a duration as a corresponding INT64 number of microseconds, and for defining a Kafka Connect Schema for duration values.
A utility for converting various Java time representations into the INT64 number of microseconds since midnight, and for defining a Kafka Connect Schema for time values with no date or timezone information.
A utility for converting various Java time representations into the signed INT64 number of microseconds past epoch, and for defining a Kafka Connect Schema for timestamp values with no timezone information.
Representation of multiple ParsingExceptions.
 
A utility for representing a duration as a corresponding INT64 number of nanoseconds, and for defining a Kafka Connect Schema for duration values.
A utility for converting various Java time representations into the INT64 number of nanoseconds since midnight, and for defining a Kafka Connect Schema for time values with no date or timezone information.
A utility for converting various Java time representations into the signed INT64 number of nanoseconds past epoch, and for defining a Kafka Connect Schema for timestamp values with no timezone information.
 
 
 
This interface is used to provide custom write channels for the Debezium notification feature. Implementations must define the name of the channel in NotificationChannel.name(), initialize channel-specific configuration/variables/connections in the NotificationChannel.init(CommonConnectorConfig connectorConfig) method, and implement sending of the notification on the channel in the NotificationChannel.send(Notification notification) method.
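A minimal sketch of a custom channel built around the three methods named above; the package locations of NotificationChannel, CommonConnectorConfig, and Notification, the extra close() lifecycle method, and the channel name are assumptions for illustration:

    import io.debezium.config.CommonConnectorConfig;
    import io.debezium.pipeline.notification.Notification;
    import io.debezium.pipeline.notification.channels.NotificationChannel;

    public class LogNotificationChannel implements NotificationChannel {

        @Override
        public String name() {
            return "log"; // unique channel name referenced from configuration
        }

        @Override
        public void init(CommonConnectorConfig connectorConfig) {
            // read channel-specific configuration, open connections, etc.
        }

        @Override
        public void send(Notification notification) {
            // deliver the notification; this sketch simply logs it
            System.out.println("Notification: " + notification);
        }

        @Override
        public void close() {
            // release any resources (assumed lifecycle hook)
        }
    }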
This service can be used to send notifications to available and enabled channels.
Denotes that the annotated type isn't safe for concurrent access from multiple threads without external synchronization.
 
A specialization of Value that represents a null value.
A set of numeric conversion methods.
Keeps track of the current offset within the source DB's change stream.
Implementations load a connector-specific offset context based on the offset values stored in Kafka.
Provides access to the partition offsets stored by connectors.
Offsets<P extends Partition,O extends OffsetContext>
Keeps track of the source partitions to be processed by the connector task and their respective offsets.
 
Indicates that the annotated element intentionally uses default visibility.
An exception representing a problem during parsing of text.
Describes the source partition to be processed by the connector in connector-specific terms and provides its representation as a Kafka Connect source partition.
Implementations provide a set of connector-specific partitions based on the connector task configuration.
PartitionRouting<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
This SMT allows using payload fields to calculate the destination partition.
 
A representation of multiple name segments that together form a path within Document.
 
A package-level utility that implements useful operations to create paths.
 
 
 
 
 
 
Base for metrics implementations.
A read-only result of the state of a PNCounter.
A simple counter that maintains a single changing value by separately tracking the positive and negative changes.
A semantic type for a geometric Point, defined as a set of (x,y) coordinates.
A class that represents the position of a particular character in terms of the lines and columns of a character sequence.
Utilities for constructing various predicates.
Exposes queue metrics.
Annotation that can be used to specify that the target field, method, constructor, package or type is read-only.
An abstract parser of change records.
Base class for Debezium's relational CDC SourceConnector implementations.
Base class for ChangeRecordEmitter implementations based on a relational database.
Configuration options shared across the relational CDC connectors.
The set of predefined DecimalHandlingMode options or aliases.
The set of predefined DecimalHandlingMode options or aliases.
A DatabaseSchema of a relational database such as Postgres.
A map of schemas by table id.
Base class for SnapshotChangeEventSource for relational databases with or without a schema history.
Mutable context which is populated in the course of snapshotting.
 
This interface allows the code to optionally pass a value between two parts of the application.
 
Represents a structural change to a database schema.
Type describing the content of the event.
Emits one or more change records - specific to a given DataCollectionSchema.
 
 
A factory for creating SchemaBuilder structs.
A history of the database schema described by a Tables.
 
Listener receiving lifecycle and data events from SchemaHistory.
Implementation of DatabaseSchema metrics.
 
Metrics describing SchemaHistory use.
Exposes schema metrics.
An adjuster for the names of change data message schemas and for the names of the fields in the schemas.
Function used to report that an original value was replaced with an Avro-compatible string.
Implements a regular expression strategy to determine data event topic names using DataCollectionId.schemaParts().
Determines data event topic names using DataCollectionId.schemaParts().
Implements a unicode converter strategy to determine data event topic names using DataCollectionId.schemaParts().
Utilities for obtaining JSON string representations of Schema, Struct, and Field objects.
 
Defines predicates that determine whether tables or columns should be used.
A builder of a database predicate.
Implementations convert given TableIds to strings, so regular expressions can be applied to them for the purpose of table filtering.
A builder of a table predicate.
Utility methods for obtaining streams of integers.
A set of available serializer types for CloudEvents or the data attribute of CloudEvents.
 
 
A class describing the current state of an incremental snapshot.
This interface is used to provide custom read channels for the Debezium signaling feature. Implementations must define the name of the reader in SignalChannelReader.name(), initialize channel-specific configuration/variables/connections in the SignalChannelReader.init(CommonConnectorConfig connectorConfig) method, implement reset logic for the specific channel in the SignalChannelReader.reset(Object) method if already-processed signals need to be reset, and provide a list of signal records in the SignalChannelReader.read() method.
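A minimal sketch of a custom reader following the contract described above; the package locations, the close() method, and the polling behavior are assumptions for illustration:

    import java.util.List;
    import io.debezium.config.CommonConnectorConfig;
    import io.debezium.pipeline.signal.SignalRecord;
    import io.debezium.pipeline.signal.channels.SignalChannelReader;

    public class NoOpSignalChannel implements SignalChannelReader {

        @Override
        public String name() {
            return "no-op"; // hypothetical channel name
        }

        @Override
        public void init(CommonConnectorConfig connectorConfig) {
            // read channel-specific configuration, open resources
        }

        @Override
        public List<SignalRecord> read() {
            // poll the underlying medium and return any pending signals
            return List.of();
        }

        @Override
        public void reset(Object reference) {
            // forget signals processed up to the given reference, if needed
        }

        @Override
        public void close() {
            // release resources (assumed lifecycle hook)
        }
    }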
 
This class processes signals coming from the different channels.
The class represents the signal sent on a channel: id STRING - the unique identifier of the signal, usually a UUID, which can be used for deduplication; type STRING - the unique logical name of the code executing the signal; data STRING - the data in JSON format that is passed to the signal code.
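To make the three fields concrete, the following sketch shows plausible values for an ad-hoc snapshot signal; the 'execute-snapshot' type and the data payload follow Debezium's documented signal format, but treat the exact layout as an example only:

    String id   = java.util.UUID.randomUUID().toString();        // unique, used for deduplication
    String type = "execute-snapshot";                             // logical name of the signal
    String data = "{\"data-collections\": [\"public.orders\"]}";  // JSON passed to the signal code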
Denotes that the annotated element of a class that's meant for multi-threaded usage is accessed only by a single thread and thus doesn't need to be guarded via synchronization or similar.
 
 
Calculates or estimates the size of an object.
SmtManager<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
A class used by all Debezium supplied SMTs to centralize common logic.
A change event source that emits events for taking a consistent snapshot of the captured tables, which may include schema and data information.
Metrics related to the snapshot phase of a connector.
 
Emits change data based on a single row read via JDBC.
Carries snapshot metrics.
Exposes snapshot metrics.
Invoked whenever an important event or change of state happens during the snapshot phase.
Describes whether the change record comes from a snapshot and whether it is the last one.
 
 
Converts the connector SourceInfo into publicly visible source field of the message.
The class responsible for processing of signals delivered to Debezium via a dedicated signaling table.
An extension of the plain BigDecimal type that adds support for new features like special value handling (NaN, infinity).
Special values for floating-point and numeric types
 
 
 
The action to stop an ad-hoc snapshot.
A stopwatch for measuring durations.
Abstract base class for Stopwatch.Durations implementations.
The average and total durations as measured by one or more stopwatches.
A Stopwatch.Durations implementation that accumulates all added durations.
A Stopwatch.Durations implementation that only remembers the most recently added duration.
The timing statistics for a recorded set of samples.
A set of stopwatches whose durations are combined.
A change event source that emits events from a DB log, such as MySQL's binlog or similar.
Metrics related to the streaming phase of a connector.
Metrics specific to streaming change event sources
Carries streaming metrics.
Exposes streaming metrics.
Invoked whenever an important event or change of state happens during the streaming phase.
String-related utility methods.
Represents a predicate (boolean-valued function) of one character argument.
 
A tokenization class used to split a comma-separated list of regular expressions.
A function that converts one change event row (from a snapshot select, or from before/after state of a log event) into the corresponding Kafka Connect key or value Struct.
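The following self-contained sketch uses the plain Kafka Connect API to show the kind of key or value Struct such a function produces; the schema name and field layout are illustrative assumptions:

    import org.apache.kafka.connect.data.Schema;
    import org.apache.kafka.connect.data.SchemaBuilder;
    import org.apache.kafka.connect.data.Struct;

    public class RowToStructExample {
        public static void main(String[] args) {
            // Describe the row's shape, then populate a Struct from column values.
            Schema valueSchema = SchemaBuilder.struct().name("server.db.orders.Value")
                    .field("id", Schema.INT32_SCHEMA)
                    .field("total", Schema.OPTIONAL_FLOAT64_SCHEMA)
                    .build();
            Object[] row = {42, 9.99}; // one change event row
            Struct value = new Struct(valueSchema)
                    .put("id", row[0])
                    .put("total", row[1]);
            System.out.println(value); // Struct{id=42,total=9.99}
        }
    }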
Encapsulates a set of a database's system variables.
 
Interface that is used for enums defining the customized scope values for specific DBMSs.
An immutable definition of a table.
An abstract representation of one or more changes to the structure of the tables of a relational database.
 
The interface that defines conversion of TableChanges into a serialized format for persistent storage or delivering as a message.
 
An editor for Table instances, normally obtained from a Tables instance.
 
Unique identifier for a database table.
Parses identifiers into the corresponding parts of a TableId.
 
 
 
Collection of predicate methods used for parsing TableId.
 
Structural definitions for a set of tables in a JDBC database.
A filter for columns.
 
A filter for tables.
A set of table ids.
A map of tables by id.
Defines the Kafka Connect Schema functionality associated with a given table definition, and which can be used to send rows of data that match the table definition to Kafka Connect.
Builder that constructs TableSchema instances for Table definitions.
The set of predefined TemporalPrecisionMode options.
Misc.
Utilities related to threads and threading.
Expires after a defined time period.
Measures the amount of time that has elapsed since the last reset.
Denotes that the annotated type is safe for concurrent access from multiple threads.
Functionality for dealing with Throwables.
A utility for converting various Java time representations into the INT32 number of milliseconds since midnight, and for defining a Kafka Connect Schema for time values with no date or timezone information.
A utility for converting various Java time representations into the signed INT64 number of milliseconds past epoch, and for defining a Kafka Connect Schema for timestamp values with no timezone information.
A foundation for basic parsers that tokenize input content and allows parsers to easily access and use those tokens.
A basic TokenStream.Tokenizer implementation that ignores whitespace but includes tokens for individual symbols, the period ('.'), single-quoted strings, double-quoted strings, whitespace-delimited words, and optionally comments.
An implementation of TokenStream.CharacterStream that works with a single character array.
Interface used by a TokenStream.Tokenizer to iterate through the characters in the content input to the TokenStream.
An opaque marker for a position within the token stream.
The interface defining a token, which references the characters in the actual input character stream.
Interface for a Tokenizer component responsible for processing the characters in a TokenStream.CharacterStream and constructing the appropriate TokenStream.Token objects.
A factory for Token objects, used by a TokenStream.Tokenizer to create tokens in the correct order.
Deprecated.
Use TopicNamingStrategy instead.
Implementations determine the topic name corresponding to a given data collection.
A topic namer that caches names it has obtained from a delegate.
A topic namer that replaces any characters invalid in a topic name with _.
The context holds internal state necessary for book-keeping of events in an active transaction.
The class has externalized its state in the TransactionContext class so it can be stored in and recovered from offsets.
Describes the transition of transaction from start to end.
A ColumnMapper implementation that ensures that string values longer than a specified length will be truncated.
 
A unicode replacement implementation of ReplacementFunction.
A semantic type for a Uuid string.
A value in a Document or Array.
 
 
Invoked to convert incoming SQL column values into Kafka Connect values.
A function that converts from a column data value into another value.
A provider of ValueConverter functions and the SchemaBuilder used to describe them.
Provides access to the original encapsulated value obtained, for example, from JDBC.
A latch that works similarly to CountDownLatch except that it can also increase the count dynamically.
Synchronization control for CountDownLatch.
An arbitrary precision decimal value with variable scale.
Indicates that the visibility of the annotated element is raised for the purposes of testing (e.g. a member that would otherwise be private).
A semantic type for an XML string.
A utility class for determining the validity of various XML names, per the XML 1.0 Specification.
A utility for defining a Kafka Connect Schema that represents year values.
A utility for converting various Java time representations into the STRING representation of the time in a particular time zone, and for defining a Kafka Connect Schema for zoned time values.
A utility for converting various Java time representations into the STRING representation of the time and date in a particular time zone, and for defining a Kafka Connect Schema for zoned timestamp values.
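As a plain java.time illustration (not the Debezium utility itself) of the STRING representation described above:

    import java.time.ZoneOffset;
    import java.time.ZonedDateTime;
    import java.time.format.DateTimeFormatter;

    public class ZonedTimestampExample {
        public static void main(String[] args) {
            // Render a timestamp in a particular zone as an ISO-8601 string.
            ZonedDateTime ts = ZonedDateTime.of(2024, 3, 15, 10, 30, 0, 0, ZoneOffset.UTC);
            System.out.println(ts.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME));
            // -> 2024-03-15T10:30:00Z
        }
    }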