All Classes
Class |
Description |
AbstractChangeRecordEmitter<T extends DataCollectionSchema> |
|
AbstractDatabaseHistory |
|
AbstractDdlParser |
|
AbstractSnapshotChangeEventSource<O extends OffsetContext> |
An abstract implementation of SnapshotChangeEventSource that all implementations should extend
to inherit common functionality.
|
AbstractSnapshotChangeEventSource.SnapshotContext<O extends OffsetContext> |
Mutable context which is populated in the course of snapshotting
|
AbstractSnapshotChangeEventSource.SnapshottingTask |
A configuration describing the task to be performed during snapshotting.
|
AbstractSourceInfo |
Common information provided by all connectors in either source field or offsets
|
AbstractSourceInfoStructMaker<T extends AbstractSourceInfo> |
Common information provided by all connectors in either source field or offsets.
|
ActivateTracingSpan<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> |
This SMT enables integration with a tracing system.
|
Array |
|
Array.Entry |
|
ArrayReader |
Reads Array instances from a variety of input forms.
|
ArraySerdes |
A Kafka Deserializer and Serializer that operates upon Debezium Arrays.
|
ArrayWriter |
Writes Array instances to a variety of output forms.
|
BaseSourceInfo |
|
BaseSourceTask<O extends OffsetContext> |
Base class for Debezium's CDC SourceTask implementations.
|
BaseSourceTask.State |
|
BasicArray |
Package-level implementation of Array .
|
BasicDocument |
Package-level implementation of Document .
|
BasicEntry |
|
BasicField |
|
BinaryValue |
A specialization of Value that represents a binary value.
|
Bits |
A set of bits of arbitrary length.
|
BlockingConsumer<T> |
A variant of Consumer that can be blocked and interrupted.
|
BooleanConsumer |
Represents an operation that accepts a single boolean -valued argument and
returns no result.
|
BoundedConcurrentHashMap<K,V> |
A hash table supporting full concurrency of retrievals and
adjustable expected concurrency for updates.
|
BoundedConcurrentHashMap.Eviction |
|
BoundedConcurrentHashMap.EvictionListener<K,V> |
|
BoundedConcurrentHashMap.EvictionPolicy<K,V> |
|
BoundedConcurrentHashMap.HashEntry<K,V> |
ConcurrentHashMap list entry.
|
BoundedConcurrentHashMap.LIRS<K,V> |
|
BoundedConcurrentHashMap.LIRSHashEntry<K,V> |
Adapted to Infinispan BoundedConcurrentHashMap using LIRS implementation ideas from Charles Fry ( [email protected])
See http://code.google.com/p/concurrentlinkedhashmap/source/browse/trunk/src/test/java/com/googlecode/concurrentlinkedhashmap/caches/LirsMap.java
for original sources
|
BoundedConcurrentHashMap.LRU<K,V> |
|
BoundedConcurrentHashMap.NullEvictionListener<K,V> |
|
BoundedConcurrentHashMap.NullEvictionPolicy<K,V> |
|
BoundedConcurrentHashMap.Recency |
|
BoundedConcurrentHashMap.Segment<K,V> |
Segments are specialized versions of hash tables.
|
BufferedBlockingConsumer<T> |
A BlockingConsumer that retains a maximum number of values in a buffer before sending them to
a delegate consumer.
|
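The buffering idea behind BufferedBlockingConsumer can be sketched as follows. This is an illustrative, simplified class (names and shape are assumptions, not the actual Debezium implementation): values are retained until a maximum is reached, then the whole batch goes to a delegate consumer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch: retain up to maxBuffered values, then hand the whole
// batch to a delegate. The real class also supports blocking and interruption;
// only the batching behavior is shown here.
public class MiniBufferedConsumer<T> {
    private final int maxBuffered;
    private final Consumer<List<T>> delegate;
    private final List<T> buffer = new ArrayList<>();

    public MiniBufferedConsumer(int maxBuffered, Consumer<List<T>> delegate) {
        this.maxBuffered = maxBuffered;
        this.delegate = delegate;
    }

    public void accept(T value) {
        buffer.add(value);
        if (buffer.size() >= maxBuffered) {
            flush();
        }
    }

    public void flush() {
        if (!buffer.isEmpty()) {
            delegate.accept(new ArrayList<>(buffer)); // copy so the caller can keep the batch
            buffer.clear();
        }
    }
}
```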
ByLogicalTableRouter<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> |
A logical table consists of one or more physical tables with the same schema.
|
ByteBufferConverter |
A customized value converter that allows an Avro message to be delivered as-is (byte[]) to Kafka. This is used
for the outbox pattern, where the payload is serialized by KafkaAvroSerializer and the consumer needs to receive the deserialized payload.
|
Callable |
|
CdcSourceTaskContext |
Contains contextual information and objects scoped to the lifecycle of Debezium's SourceTask implementations.
|
ChangeEventCreator |
|
ChangeEventQueue<T> |
A queue which serves as a handover point between producer and consumer threads.
|
ChangeEventQueue.Builder<T> |
|
ChangeEventQueueMetrics |
|
ChangeEventSource |
|
ChangeEventSource.ChangeEventSourceContext |
|
ChangeEventSourceCoordinator<O extends OffsetContext> |
|
ChangeEventSourceFactory<O extends OffsetContext> |
|
ChangeEventSourceMetricsFactory |
|
ChangeEventSourceMetricsMXBean |
Metrics that are common for both snapshot and streaming change event sources
|
ChangeRecordEmitter |
Represents a change applied to a source database and emits one or more corresponding change records.
|
ChangeRecordEmitter.Receiver |
|
ChangeTable |
A logical representation of a change table containing changes for a given source table.
|
ChangeTableResultSet<C extends ChangeTable,T extends Comparable<T>> |
A wrapper around a JDBC ResultSet for a change table for processing rows.
|
Clock |
An abstraction for a clock.
|
CloseIncrementalSnapshotWindow |
|
CloudEventsConverter |
Implementation of Converter that expresses schemas and objects following the CloudEvents specification.
|
CloudEventsConverter.CESchemaBuilder |
Builder of a CloudEvents envelope schema.
|
CloudEventsConverter.CEValueBuilder |
Builder of a CloudEvents value.
|
CloudEventsConverterConfig |
|
CloudEventsMaker |
An abstract class that builds CloudEvents attributes using fields of change records provided by
RecordParser .
|
CloudEventsMaker.FieldName |
The constants for the names of CloudEvents attributes.
|
CloudEventsMaker.MongodbCloudEventsMaker |
CloudEvents maker for records produced by MongoDB connector.
|
CloudEventsMaker.MysqlCloudEventsMaker |
CloudEvents maker for records produced by MySQL connector.
|
CloudEventsMaker.PostgresCloudEventsMaker |
CloudEvents maker for records produced by PostgreSQL connector.
|
CloudEventsMaker.SqlserverCloudEventsMaker |
CloudEvents maker for records produced by SQL Server connector.
|
Collect |
A set of utilities for more easily creating various kinds of collections.
|
Column |
An immutable definition of a column.
|
ColumnEditor |
An editor for Column instances.
|
ColumnEditorImpl |
|
ColumnFilterMode |
Modes for column name filters, either including a catalog (database) or schema name.
|
ColumnId |
Unique identifier for a column in a database table.
|
ColumnImpl |
|
ColumnMapper |
A factory for a function used to map values of a column.
|
ColumnMappers |
|
ColumnMappers.Builder |
|
ColumnMappers.MapperRule |
|
ColumnUtils |
Utility class for mapping columns to various data structures from Table and ResultSet.
|
ColumnUtils.ColumnArray |
|
ColumnUtils.MappedColumns |
|
CommonConnectorConfig |
Configuration options common to all Debezium connectors.
|
CommonConnectorConfig.BinaryHandlingMode |
The set of predefined BinaryHandlingMode options or aliases
|
CommonConnectorConfig.EventProcessingFailureHandlingMode |
The set of predefined modes for dealing with failures during event processing.
|
CommonConnectorConfig.Version |
The set of predefined versions.
|
ComparableValue |
A specialization of Value that wraps another Value that is not comparable.
|
ConfigDefinition |
Defines the configuration options of a connector.
|
ConfigDefinitionEditor |
|
Configuration |
An immutable representation of a Debezium configuration.
|
Configuration.Builder |
A builder of Configuration objects.
|
Configuration.ConfigBuilder<C extends Configuration,B extends Configuration.ConfigBuilder<C,B>> |
The basic interface for configuration builders.
|
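The immutable-configuration-plus-fluent-builder pattern that Configuration and Configuration.Builder describe can be sketched like this. The class below is a hypothetical miniature, not the actual Debezium API; only the fluent `create().with(...).build()` shape mirrors the entries above.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of an immutable configuration built via a fluent builder.
public class MiniConfig {
    private final Map<String, String> props;

    private MiniConfig(Map<String, String> props) {
        // Defensive, unmodifiable copy: once built, the configuration is immutable.
        this.props = Collections.unmodifiableMap(new HashMap<>(props));
    }

    public String getString(String key) {
        return props.get(key);
    }

    public static Builder create() {
        return new Builder();
    }

    public static class Builder {
        private final Map<String, String> props = new HashMap<>();

        public Builder with(String key, String value) {
            props.put(key, value);
            return this; // fluent style, as described for Configuration.Builder
        }

        public MiniConfig build() {
            return new MiniConfig(props);
        }
    }
}
```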
ConfigurationDefaults |
|
ConnectorEvent |
A marker interface for an event with the connector that isn't dispatched to the change event stream but
instead is potentially of interest to other parts of the framework such as metrics.
|
ConnectTableChangeSerializer |
The serializer responsible for converting TableChanges into an array of Structs.
|
Conversions |
Temporal conversion constants.
|
ConvertingValue |
A specialization of Value that wraps another Value to allow conversion of types.
|
Count |
A read-only result of a counter.
|
CRDT |
Conflict-free Replicated Data Types (CRDT)s.
|
CustomConverterRegistry |
The registry of all converters that were provided by the connector configuration.
|
DatabaseHeartbeatImpl |
Implementation of the heartbeat feature that allows for a DB query to be executed with every heartbeat.
|
DatabaseHistory |
A history of the database schema described by a Tables .
|
DatabaseHistoryException |
|
DatabaseHistoryListener |
|
DatabaseHistoryMetrics |
|
DatabaseHistoryMetrics.DatabaseHistoryStatus |
|
DatabaseHistoryMXBean |
|
DatabaseSchema<I extends DataCollectionId> |
The schema of a database.
|
DataChangeEvent |
|
DataChangeEventListener |
A class invoked by EventDispatcher whenever an event is available for processing.
|
DataCollectionFilters |
|
DataCollectionFilters.DataCollectionFilter<T extends DataCollectionId> |
|
DataCollectionId |
Common contract for all identifiers of data collections (RDBMS tables, MongoDB collections etc.)
|
DataCollectionSchema |
|
DataType |
An immutable representation of a data type
|
DataTypeBuilder |
|
Date |
A utility for converting various Java temporal object representations into the signed INT32
number of days since January 1, 1970, at 00:00:00 UTC, and for defining a Kafka Connect Schema for date values
with no time or timezone information.
|
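The conversion Date describes, a calendar date encoded as the signed INT32 count of days since the epoch, can be sketched with java.time (the class and method names here are illustrative, not the Debezium API):

```java
import java.time.LocalDate;

// Sketch of the Date encoding: a date with no time or timezone information,
// represented as the signed INT32 number of days since 1970-01-01.
public class EpochDays {
    public static int toEpochDays(LocalDate date) {
        return (int) date.toEpochDay(); // days since the epoch; negative before 1970
    }
}
```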
DdlChanges |
A DdlParserListener that accumulates changes, allowing them to be consumed in the same order by database.
|
DdlChanges.DatabaseEventConsumer |
|
DdlChanges.DatabaseStatementConsumer |
|
DdlChanges.DatabaseStatementStringConsumer |
|
DdlParser |
A parser interface for DDL statements.
|
DdlParserListener |
An interface that can listen to various actions of a DdlParser .
|
DdlParserListener.DatabaseAlteredEvent |
An event describing the altering of a database.
|
DdlParserListener.DatabaseCreatedEvent |
An event describing the creation of a database.
|
DdlParserListener.DatabaseDroppedEvent |
An event describing the dropping of a database.
|
DdlParserListener.DatabaseEvent |
The base class for all database-related events.
|
DdlParserListener.DatabaseSwitchedEvent |
An event describing the switching of a database.
|
DdlParserListener.Event |
The base class for all concrete events.
|
DdlParserListener.EventType |
|
DdlParserListener.SetVariableEvent |
An event describing the setting of a variable.
|
DdlParserListener.TableAlteredEvent |
An event describing the altering of a table.
|
DdlParserListener.TableCreatedEvent |
An event describing the creation (or replacement) of a table.
|
DdlParserListener.TableDroppedEvent |
An event describing the dropping of a table.
|
DdlParserListener.TableEvent |
The base class for all table-related events.
|
DdlParserListener.TableIndexCreatedEvent |
An event describing the creation of an index on a table.
|
DdlParserListener.TableIndexDroppedEvent |
An event describing the dropping of an index on a table.
|
DdlParserListener.TableIndexEvent |
The abstract base class for all index-related events.
|
DdlParserListener.TableTruncatedEvent |
An event describing the truncating of a table.
|
DebeziumSerdes |
A factory class for Debezium provided serializers/deserializers.
|
DebeziumTextMap |
|
DefaultChangeEventSourceMetricsFactory |
|
DelayStrategy |
Encapsulates the logic of determining a delay when some criteria is met.
|
DeltaCount |
A Count that also tracks changes to the value within the last interval.
|
DeltaCounter |
A simple counter that maintains a single changing value by separately tracking the positive and negative changes, and by
tracking recent changes in this value since the last reset.
|
Document |
|
Document.Field |
|
DocumentReader |
Reads Document instances from a variety of input forms.
|
DocumentSerdes |
A Kafka Deserializer and Serializer that operates upon Debezium Documents.
|
DocumentWriter |
Writes Document instances to a variety of output forms.
|
ElapsedTimeStrategy |
Encapsulates the logic of determining a delay when some criteria is met.
|
Enum |
A semantic type for an enumeration, where the string values are one of the enumeration's values.
|
EnumeratedValue |
A configuration option with a fixed set of possible values, i.e. an enumeration.
|
EnumSet |
A semantic type for a set of enumerated values, where the string values contain comma-separated values from an enumeration.
|
Envelope |
An immutable descriptor for the structure of Debezium message envelopes.
|
Envelope.Builder |
A builder of an envelope schema.
|
Envelope.FieldName |
The constants for the names of the fields in the message envelope.
|
Envelope.Operation |
The constants for the values for the operation field in the message envelope.
|
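The envelope layout that Envelope, Envelope.FieldName, and Envelope.Operation describe can be sketched as a plain map. This is an illustrative shape only, built from the field and operation constants named above; the real envelope is a Kafka Connect Struct with a schema.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a Debezium change-event value: the operation code
// plus the row state before and after the change, and a timestamp.
public class MiniEnvelope {
    public static Map<String, Object> update(Object before, Object after, long tsMs) {
        Map<String, Object> value = new HashMap<>();
        value.put("op", "u");        // "c"=create, "u"=update, "d"=delete, "r"=read (snapshot)
        value.put("before", before); // row state before the change
        value.put("after", after);   // row state after the change
        value.put("ts_ms", tsMs);    // time the connector processed the event
        return value;
    }
}
```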
ErrorHandler |
|
EventDispatcher<T extends DataCollectionId> |
Central dispatcher for data change and schema change events.
|
EventDispatcher.InconsistentSchemaHandler<T extends DataCollectionId> |
Reaction to an incoming change event for which schema is not found
|
EventDispatcher.SnapshotReceiver |
Change record receiver used during snapshotting.
|
EventFormatter |
|
EventMetadataProvider |
An interface implemented by each connector that enables metrics metadata to be extracted
from an event.
|
EventRouter<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> |
Debezium Outbox Transform Event Router
|
EventRouterConfigDefinition |
Debezium Outbox Transform configuration definition
|
EventRouterConfigDefinition.AdditionalField |
|
EventRouterConfigDefinition.AdditionalFieldPlacement |
|
EventRouterConfigDefinition.InvalidOperationBehavior |
|
ExecuteSnapshot |
The action to trigger an ad-hoc snapshot.
|
ExecuteSnapshot.SnapshotType |
|
ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> |
Debezium generates CDC (Envelope) records that are structs of values containing the values
before and after the change.
|
ExtractNewRecordState.FieldReference |
Represents a field that should be added to the outgoing record as a header
attribute or struct field.
|
ExtractNewRecordStateConfigDefinition |
|
ExtractNewRecordStateConfigDefinition.DeleteHandling |
|
Field |
An immutable definition of a field that may appear within a Configuration instance.
|
Field.EnumRecommender<T extends Enum<T>> |
|
Field.InvisibleRecommender |
A Field.Recommender that will look at several fields that are deemed to be exclusive, such that when the first of
them has a value the others are made invisible.
|
Field.OneOfRecommender |
A Field.Recommender that will look at several fields that are deemed to be exclusive, such that when the first of
them has a value the others are made invisible.
|
Field.RangeValidator |
Validation logic for numeric ranges
|
Field.Recommender |
A component that is able to provide recommended values for a field given a configuration.
|
Field.Set |
A set of fields.
|
Field.ValidationOutput |
A functional interface that accepts validation results.
|
Field.Validator |
A functional interface that can be used to validate field values.
|
FieldNameSelector |
Implementations return names for fields.
|
FieldNameSelector.FieldNameCache<T> |
A field namer that caches names it has obtained from a delegate
|
FieldNameSelector.FieldNamer<T> |
Implementations determine the field name corresponding to a given column.
|
FieldNameSelector.FieldNameSanitizer<T> |
A field namer that replaces any characters invalid in a field name with _.
|
FileDatabaseHistory |
A DatabaseHistory implementation that stores the schema history in a local file.
|
FunctionalReadWriteLock |
A form of a read-write lock that has methods that allow lambdas to be performed while the read or write lock is acquired and
held.
|
GCount |
A read-only result of the state of a grow-only GCounter .
|
GCounter |
A simple grow-only counter that maintains a single changing value by tracking the positive changes to the value.
|
Geography |
A semantic type for a Geography class.
|
Geometry |
A semantic type for an OGC Simple Features for SQL Geometry.
|
GuardedBy |
|
HashCode |
Utilities for easily computing hash codes.
|
Heartbeat |
A class that is able to generate periodic heartbeat messages based on a pre-configured interval.
|
Heartbeat.OffsetProducer |
Returns the offset to be used when emitting a heartbeat event.
|
HeartbeatErrorHandler |
|
HeartbeatImpl |
Default implementation of Heartbeat
|
HexConverter |
COPIED FROM https://github.com/undertow-io/undertow/blob/master/core/src/main/java/io/undertow/util/HexConverter.java
A utility class for mapping between byte arrays and their hex representation and back again.
|
HistorizedDatabaseSchema<I extends DataCollectionId> |
A database schema that is historized, i.e. whose schema changes over time are recorded.
|
HistorizedDatabaseSchema.SchemaChangeEventConsumer |
|
HistorizedRelationalDatabaseConnectorConfig |
Configuration options shared across the relational CDC connectors which use a persistent database schema history.
|
HistorizedRelationalDatabaseSchema |
A DatabaseSchema of a relational database which has a schema history that can be recovered to the current
state when restarting a connector.
|
HistoryRecord |
|
HistoryRecord.Fields |
|
HistoryRecordComparator |
Compares HistoryRecord instances to determine which came first.
|
Immutable |
|
IncrementalSnapshotChangeEventSource<T extends DataCollectionId> |
A contract for change event sources that perform incremental snapshots.
|
IncrementalSnapshotContext<T> |
A class describing the current state of an incremental snapshot.
|
Instantiator |
Instantiates given classes reflectively.
|
Interval |
A utility for representing a duration as a string value formatted using the ISO format.
|
IoUtil |
A set of utilities for more easily performing I/O.
|
Iterators |
A utility for creating iterators.
|
Iterators.PreviewIterator<T> |
A read only iterator that is able to preview the next value without consuming it or altering the behavior or semantics
of the normal Iterator methods.
|
Iterators.TransformedIterator<F,T> |
An iterator that is able to transform its contents to another type.
|
JacksonReader |
|
JacksonWriter |
|
JacksonWriter.WritingError |
|
JdbcConfiguration |
A specialized configuration for the Debezium driver.
|
JdbcConfiguration.Builder |
The JDBC-specific builder used to construct and/or alter JDBC configuration instances.
|
JdbcConnection |
A utility that simplifies using a JDBC connection and executing transactions composed of multiple statements.
|
JdbcConnection.BlockingMultiResultSetConsumer |
|
JdbcConnection.BlockingResultSetConsumer |
|
JdbcConnection.CallPreparer |
|
JdbcConnection.ConnectionFactory |
Establishes JDBC connections.
|
JdbcConnection.MultiResultSetConsumer |
|
JdbcConnection.Operations |
Defines multiple JDBC operations.
|
JdbcConnection.ParameterResultSetConsumer |
|
JdbcConnection.ResultSetConsumer |
|
JdbcConnection.ResultSetExtractor<T> |
Extracts data from a ResultSet.
|
JdbcConnection.ResultSetMapper<T> |
|
JdbcConnection.StatementFactory |
A function to create a statement from a connection.
|
JdbcConnection.StatementPreparer |
|
JdbcConnectionException |
|
JdbcValueConverters |
A provider of ValueConverters and SchemaBuilders for various column types.
|
JdbcValueConverters.BigIntUnsignedMode |
|
JdbcValueConverters.DecimalMode |
|
Joiner |
|
Json |
A semantic type for a JSON string.
|
JsonSerde<T> |
A Serde that (de-)serializes JSON.
|
JsonSerdeConfig |
A configuration for the JsonSerde serializer/deserializer.
|
JsonTableChangeSerializer |
The serializer responsible for converting TableChanges into a JSON format.
|
KafkaDatabaseHistory |
A DatabaseHistory implementation that records schema changes as normal SourceRecords on the specified topic,
and that recovers the history by establishing a Kafka consumer that re-processes all messages on that topic.
|
Key |
An immutable definition of a table's key.
|
Key.Builder |
|
Key.CustomKeyMapper |
A custom Key mapper used to override or define a custom Key.
|
Key.IdentityKeyMapper |
The default Key mapper, using the primary key as the message key.
|
Key.KeyMapper |
Provides the column(s) that should be used within the message key for a given table.
|
LegacyV1AbstractSourceInfoStructMaker<T extends AbstractSourceInfo> |
Legacy source info that does not enforce presence of the version and connector fields
|
Log |
|
LoggingContext |
A utility that provides a consistent set of properties for the Mapped Diagnostic Context (MDC) properties used by Debezium
components.
|
LoggingContext.PreviousContext |
|
MaskStrings |
A ColumnMapper implementation that ensures that string values are masked.
|
MaskStrings.HashValueConverter |
|
MaskStrings.MaskingValueConverter |
|
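The column-masking behavior MaskStrings describes can be sketched in a few lines. This is an illustrative stand-in (the real ColumnMapper produces a ValueConverter wired into the pipeline; the fixed-length '*' replacement shown here is an assumed strategy):

```java
// Illustrative sketch of string masking: every non-null value is replaced by
// a fixed-length run of '*', so the original value never reaches the topic.
public class MiniMasker {
    public static String mask(String value, int maskLength) {
        return value == null ? null : "*".repeat(maskLength);
    }
}
```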
MathOps |
Utilities for performing math operations with mixed native and advanced numeric types.
|
MemoryDatabaseHistory |
A DatabaseHistory implementation that stores the schema history in memory.
|
Metrics |
Base for metrics implementations.
|
Metronome |
A class that can be used to perform an action at a regular interval.
|
MicroDuration |
A utility for representing a duration as a corresponding INT64
number of microseconds, and for defining a Kafka Connect Schema for duration values.
|
MicroTime |
A utility for converting various Java time representations into the INT64 number of
microseconds since midnight, and for defining a Kafka Connect Schema for time values with no date or timezone
information.
|
MicroTimestamp |
A utility for converting various Java time representations into the signed INT64 number of
microseconds past epoch, and for defining a Kafka Connect Schema for timestamp values with no timezone
information.
|
MultipleParsingExceptions |
|
NanoDuration |
A utility for representing a duration as a corresponding INT64
number of nanoseconds, and for defining a Kafka Connect Schema for duration values.
|
NanoTime |
A utility for converting various Java time representations into the INT64 number of
nanoseconds since midnight, and for defining a Kafka Connect Schema for time values with no date or timezone
information.
|
NanoTimestamp |
A utility for converting various Java time representations into the signed INT64 number of
nanoseconds past epoch, and for defining a Kafka Connect Schema for timestamp values with no timezone
information.
|
NoOpTableEditorImpl |
|
NotThreadSafe |
Denotes that the annotated type isn't safe for concurrent access from
multiple threads without external synchronization.
|
Nullable |
|
NullValue |
A specialization of Value that represents a null value.
|
NumberConversions |
A set of numeric conversion methods.
|
ObjectSizeCalculator |
Contains utility methods for calculating the memory usage of objects.
|
ObjectSizeCalculator.ArrayElementsVisitor |
|
ObjectSizeCalculator.CurrentLayout |
|
ObjectSizeCalculator.MemoryLayoutSpecification |
Describes constant memory overheads for various constructs in a JVM implementation.
|
OffsetContext |
Keeps track of the current offset within the source DB's change stream.
|
OffsetContext.Loader<O extends OffsetContext> |
Implementations load a connector-specific offset context based on the offset values stored in Kafka.
|
OpenIncrementalSnapshotWindow |
|
PackagePrivate |
Indicates that the annotated element intentionally uses default visibility.
|
ParsingException |
An exception representing a problem during parsing of text.
|
Path |
A representation of multiple name segments that together form a path within Document .
|
Path.Segments |
|
Paths |
A package-level utility that implements useful operations to create paths.
|
Paths.ChildPath |
|
Paths.InnerPath |
|
Paths.MultiSegmentPath |
|
Paths.RootPath |
|
Paths.SingleSegmentPath |
|
PipelineMetrics |
Base for metrics implementations.
|
PNCount |
A read-only result of the state of a PNCounter .
|
PNCounter |
A simple counter that maintains a single changing value by separately tracking the positive and negative changes.
|
Point |
A semantic type for a geometric Point, defined as a set of (x,y) coordinates.
|
Position |
A class that represents the position of a particular character in terms of the lines and columns of a character sequence.
|
Predicates |
Utilities for constructing various predicates.
|
PropagateSourceTypeToSchemaParameter |
|
ReadOnly |
Annotation that can be used to specify that the target field, method, constructor, package or type is read-only.
|
RecordParser |
An abstract parser of change records.
|
RecordParser.MongodbRecordParser |
Parser for records produced by MongoDB connectors.
|
RecordParser.MysqlRecordParser |
Parser for records produced by MySQL connectors.
|
RecordParser.PostgresRecordParser |
Parser for records produced by PostgreSQL connectors.
|
RecordParser.SqlserverRecordParser |
Parser for records produced by Sql Server connectors.
|
RelationalBaseSourceConnector |
Base class for Debezium's relational CDC SourceConnector implementations.
|
RelationalChangeRecordEmitter |
|
RelationalDatabaseConnectorConfig |
Configuration options shared across the relational CDC connectors.
|
RelationalDatabaseConnectorConfig.DecimalHandlingMode |
The set of predefined DecimalHandlingMode options or aliases.
|
RelationalDatabaseSchema |
|
RelationalDatabaseSchema.SchemasByTableId |
A map of schemas by table id.
|
RelationalSnapshotChangeEventSource<O extends OffsetContext> |
|
RelationalSnapshotChangeEventSource.RelationalSnapshotContext<O extends OffsetContext> |
Mutable context which is populated in the course of snapshotting.
|
RelationalTableFilters |
|
ResultReceiver |
This interface allows the code to optionally pass a value between two parts of the application.
|
SchemaChangeEvent |
Represents a structural change to a database schema.
|
SchemaChangeEvent.SchemaChangeEventType |
Type describing the content of the event.
|
SchemaChangeEventEmitter |
|
SchemaChangeEventEmitter.Receiver |
|
SchemaChanges |
|
SchemaNameAdjuster |
An adjuster for the names of change data message schemas.
|
SchemaNameAdjuster.ReplacementFunction |
Function used to determine the replacement for a character that is not valid per Avro rules.
|
SchemaNameAdjuster.ReplacementOccurred |
Function used to report that an original value was replaced with an Avro-compatible string.
|
SchemaUtil |
Utilities for obtaining JSON string representations of Schema , Struct , and Field objects.
|
SchemaUtil.RecordWriter |
|
Selectors |
Defines predicates that determine whether tables or columns should be used.
|
Selectors.DatabaseSelectionPredicateBuilder |
A builder of a database predicate.
|
Selectors.TableIdToStringMapper |
Implementations convert given TableIds to strings so that regular expressions can be applied to them for the
purpose of table filtering.
|
Selectors.TableSelectionPredicateBuilder |
A builder of a table predicate.
|
Sequences |
Utility methods for obtaining streams of integers.
|
SerializerType |
A set of available serializer types for CloudEvents or the data attribute of CloudEvents.
|
Signal |
The class responsible for processing of signals delivered to Debezium via a dedicated signaling table.
|
Signal.Action |
|
Signal.Payload |
|
SignalBasedIncrementalSnapshotChangeEventSource<T extends DataCollectionId> |
|
SingleThreadAccess |
Denotes that the annotated element of a class that's meant for multi-threaded
usage is accessed only by single thread and thus doesn't need to be guarded
via synchronization or similar.
|
SmtManager<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> |
A class used by all Debezium supplied SMTs to centralize common logic.
|
SnapshotChangeEventSource<O extends OffsetContext> |
A change event source that emits events for taking a consistent snapshot of the captured tables, which may include
schema and data information.
|
SnapshotChangeEventSourceMetrics |
Metrics related to the initial snapshot of a connector.
|
SnapshotChangeEventSourceMetricsMXBean |
|
SnapshotChangeRecordEmitter |
Emits change data based on a single row read via JDBC.
|
SnapshotProgressListener |
|
SnapshotRecord |
Describes whether the change record comes from a snapshot and whether it is the last one.
|
SnapshotResult<O extends OffsetContext> |
|
SnapshotResult.SnapshotResultStatus |
|
SourceInfoStructMaker<T extends AbstractSourceInfo> |
Converts the connector SourceInfo into publicly visible source field of the message.
|
SpecialValueDecimal |
An extension of the plain BigDecimal type that adds support for special values
such as NaN and infinity.
|
SpecialValueDecimal.SpecialValue |
Special values for floating-point and numeric types
|
StateBasedGCounter |
|
StateBasedPNCounter |
|
StateBasedPNDeltaCounter |
|
Stopwatch |
A stopwatch for measuring durations.
|
Stopwatch.BaseDurations |
|
Stopwatch.Durations |
The average and total durations as measured by one or more stopwatches.
|
Stopwatch.MultipleDurations |
|
Stopwatch.SingleDuration |
|
Stopwatch.Statistics |
The timing statistics for a recorded set of samples.
|
Stopwatch.StopwatchSet |
A set of stopwatches whose durations are combined.
|
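The duration-measuring idea behind Stopwatch can be sketched with System.nanoTime(). A minimal illustrative version, not the Debezium implementation (which also aggregates statistics across stopwatch sets):

```java
// Illustrative sketch of a reusable stopwatch: accumulate elapsed nanoseconds
// across start/stop cycles.
public class MiniStopwatch {
    private long startNanos;
    private long elapsedNanos;

    public MiniStopwatch start() {
        startNanos = System.nanoTime();
        return this;
    }

    public MiniStopwatch stop() {
        elapsedNanos += System.nanoTime() - startNanos; // accumulate this interval
        return this;
    }

    public long elapsedNanos() {
        return elapsedNanos;
    }
}
```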
StreamingChangeEventSource<O extends OffsetContext> |
A change event source that emits events from a DB log, such as MySQL's binlog or similar.
|
StreamingChangeEventSourceMetrics |
|
StreamingChangeEventSourceMetricsMXBean |
Metrics specific to streaming change event sources
|
Strings |
String-related utility methods.
|
Strings.CharacterPredicate |
Represents a predicate (boolean-valued function) of one character argument.
|
Strings.Justify |
|
Strings.RegExSplitter |
A tokenization class used to split a comma-separated list of regular expressions.
|
StructGenerator |
A function that converts one change event row (from a snapshot select, or
from before/after state of a log event) into the corresponding Kafka Connect
key or value Struct .
|
SystemVariables |
Encapsulates a set of a database's system variables.
|
SystemVariables.DefaultScope |
|
SystemVariables.Scope |
Interface that is used for enums defining the customized scope values for specific DBMSs.
|
Table |
An immutable definition of a table.
|
TableChanges |
An abstract representation of one or more changes to the structure of the tables of a relational database.
|
TableChanges.TableChange |
|
TableChanges.TableChangesSerializer<T> |
The interface that defines conversion of TableChanges into a serialized format for
persistent storage or delivering as a message.
|
TableChanges.TableChangeType |
|
TableEditor |
An editor for Table instances, normally obtained from a Tables instance.
|
TableEditorImpl |
|
TableId |
Unique identifier for a database table.
|
TableIdParser |
Parses identifiers into the corresponding parts of a TableId .
|
TableIdParser.ParsingContext |
|
TableIdParser.ParsingState |
|
TableIdParser.TableIdTokenizer |
|
TableImpl |
|
Tables |
Structural definitions for a set of tables in a JDBC database.
|
Tables.ColumnNameFilter |
A filter for columns.
|
Tables.ColumnNameFilterFactory |
|
Tables.TableFilter |
A filter for tables.
|
Tables.TableIds |
A set of table ids.
|
Tables.TablesById |
A map of tables by id.
|
TableSchema |
Defines the Kafka Connect Schema functionality associated with a given table definition , and which can
be used to send rows of data that match the table definition to Kafka Connect.
|
TableSchemaBuilder |
|
TemporalPrecisionMode |
The set of predefined TemporalPrecisionMode options.
|
Temporals |
Miscellaneous utilities for dealing with temporal types.
|
Threads |
Utilities related to threads and threading.
|
Threads.Timer |
Expires after a defined time period.
|
Threads.TimeSince |
Measures the amount of time that has elapsed since the last reset.
|
ThreadSafe |
Denotes that the annotated type is safe for concurrent access from multiple
threads.
|
Throwables |
|
Time |
A utility for converting various Java time representations into the INT32 number of
milliseconds since midnight, and for defining a Kafka Connect Schema for time values with no date or timezone
information.
|
Timestamp |
A utility for converting various Java time representations into the signed INT64 number of
milliseconds past epoch, and for defining a Kafka Connect Schema for timestamp values with no timezone
information.
|
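The Timestamp encoding, a zone-less date-time as signed INT64 milliseconds past the epoch, can be sketched with java.time (illustrative names; interpreting the local date-time as UTC is the stated "no timezone information" convention):

```java
import java.time.LocalDateTime;
import java.time.ZoneOffset;

// Sketch of the Timestamp encoding: a date-time with no timezone information,
// interpreted as UTC and represented as signed INT64 milliseconds past epoch.
public class EpochMillis {
    public static long toEpochMillis(LocalDateTime dateTime) {
        return dateTime.toInstant(ZoneOffset.UTC).toEpochMilli();
    }
}
```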
TokenStream |
A foundation for basic parsers that tokenize input content and allows parsers to easily access and use those tokens.
|
TokenStream.BasicTokenizer |
A basic TokenStream.Tokenizer implementation that ignores whitespace but includes tokens for individual symbols, the period
('.'), single-quoted strings, double-quoted strings, whitespace-delimited words, and optionally comments.
|
TokenStream.CharacterArrayStream |
|
TokenStream.CharacterStream |
|
TokenStream.Marker |
An opaque marker for a position within the token stream.
|
TokenStream.Token |
The interface defining a token, which references the characters in the actual input character stream.
|
TokenStream.Tokenizer |
|
TokenStream.Tokens |
|
TopicSelector<I extends DataCollectionId> |
Implementations return names for Kafka topics (data and meta-data).
|
TopicSelector.DataCollectionTopicNamer<I extends DataCollectionId> |
Implementations determine the topic name corresponding to a given data collection.
|
TopicSelector.TopicNameCache<I extends DataCollectionId> |
A topic namer that caches names it has obtained from a delegate.
|
TopicSelector.TopicNameSanitizer<I extends DataCollectionId> |
A topic namer that replaces any characters invalid in a topic name with _.
|
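The sanitizing step TopicSelector.TopicNameSanitizer describes can be sketched with a single regex: Kafka topic names may only contain ASCII alphanumerics, '.', '_' and '-', so everything else is replaced with '_'. An illustrative one-liner, not the Debezium implementation:

```java
// Illustrative sketch: replace every character that is not legal in a Kafka
// topic name (ASCII alphanumerics, '.', '_', '-') with an underscore.
public class MiniTopicSanitizer {
    public static String sanitize(String topicName) {
        return topicName.replaceAll("[^a-zA-Z0-9._-]", "_");
    }
}
```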
TransactionContext |
The context holds internal state necessary for book-keeping of events in an active transaction.
|
TransactionMonitor |
This class externalizes its state in the TransactionContext class so that it can be stored in and recovered from offsets.
|
TransactionStatus |
Describes the transition of a transaction from start to end.
|
TruncateStrings |
A ColumnMapper implementation that ensures that string values longer than a specified length will be truncated.
|
TruncateStrings.TruncatingValueConverter |
|
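The truncating behavior of TruncateStrings is simple enough to show directly. An illustrative sketch (the real class wraps this logic in a ValueConverter configured per column):

```java
// Illustrative sketch: values longer than the configured maximum are cut down
// to that length; shorter (or null) values pass through unchanged.
public class MiniTruncator {
    public static String truncate(String value, int maxLength) {
        return value == null || value.length() <= maxLength
                ? value
                : value.substring(0, maxLength);
    }
}
```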
Uuid |
A semantic type for a UUID string.
|
Value |
|
Value.NullHandler |
|
Value.Type |
|
ValueConversionCallback |
Invoked to convert incoming SQL column values into Kafka Connect values.
|
ValueConverter |
A function that converts from a column data value into another value.
|
ValueConverterProvider |
A provider of ValueConverter functions and the SchemaBuilder used to describe them.
|
VariableLatch |
A latch that works similarly to CountDownLatch except that it can also increase the count dynamically.
|
VariableLatch.Sync |
Synchronization control for CountDownLatch.
|
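The behavior that distinguishes VariableLatch from CountDownLatch, a count that can also grow, can be sketched with intrinsic locking. This is a hypothetical miniature (method names are illustrative), not the AQS-based implementation the Sync entry refers to:

```java
// Hypothetical sketch of a latch whose count can increase as well as decrease.
// Threads blocked in await() are released once the count reaches zero.
public class MiniVariableLatch {
    private int count;

    public MiniVariableLatch(int initialCount) {
        this.count = initialCount;
    }

    public synchronized void countUp() {
        count++;
    }

    public synchronized void countDown() {
        if (count > 0 && --count == 0) {
            notifyAll(); // wake any threads blocked in await()
        }
    }

    public synchronized void await() throws InterruptedException {
        while (count > 0) {
            wait();
        }
    }

    public synchronized int getCount() {
        return count;
    }
}
```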
VariableScaleDecimal |
An arbitrary precision decimal value with variable scale.
|
VisibleForTesting |
Indicates that the visibility of the annotated element is raised for the purposes of testing.
|
Xml |
A semantic type for an XML string.
|
XmlCharacters |
|
Year |
A utility for defining a Kafka Connect Schema that represents year values.
|
ZonedTime |
A utility for converting various Java time representations into the STRING representation of
the time in a particular time zone, and for defining a Kafka Connect Schema for zoned time values.
|
ZonedTimestamp |
A utility for converting various Java time representations into the STRING representation of
the time and date in a particular time zone, and for defining a Kafka Connect Schema for zoned timestamp values.
|
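The STRING representation ZonedTimestamp describes can be sketched with java.time's ISO formatter, which preserves the zone offset in the output. Illustrative names, not the Debezium API:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

// Sketch of the ZonedTimestamp encoding: the timestamp is kept as an ISO-8601
// string that retains its zone offset (e.g. "...+02:00" or "...Z" for UTC).
public class ZonedTs {
    public static String toIsoString(OffsetDateTime timestamp) {
        return timestamp.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
    }
}
```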