Package io.debezium.transforms
Class ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
java.lang.Object
io.debezium.transforms.ExtractNewRecordState<R>
- Type Parameters:
  R - the subtype of ConnectRecord on which this transformation will operate
- All Implemented Interfaces:
  Closeable, AutoCloseable, org.apache.kafka.common.Configurable, org.apache.kafka.connect.transforms.Transformation<R>
public class ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
extends Object
implements org.apache.kafka.connect.transforms.Transformation<R>
Debezium generates CDC (Envelope) records that are structs containing the values before and after the change. Sink connectors are usually unable to work with such a complex structure, so a user can apply this SMT to extract the after value and send it downstream unwrapped from the Envelope.
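For illustration, a minimal sketch of unwrapping a record programmatically; the helper below is hypothetical, the SourceRecord passed to it is assumed to be a Debezium Envelope record, and in a real deployment the SMT would be declared in the connector configuration rather than instantiated by hand:

    import java.util.Collections;

    import org.apache.kafka.connect.source.SourceRecord;

    import io.debezium.transforms.ExtractNewRecordState;

    public class UnwrapSketch {
        // Hypothetical helper: applies the SMT with its default settings.
        static SourceRecord unwrap(SourceRecord record) {
            ExtractNewRecordState<SourceRecord> smt = new ExtractNewRecordState<>();
            smt.configure(Collections.emptyMap());      // defaults as described above
            SourceRecord unwrapped = smt.apply(record); // value is now the flat "after" struct
            smt.close();
            return unwrapped;
        }
    }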
The functionality is similar to the ExtractField SMT, but with special semantics for handling delete events: when a delete event is emitted by the database, Debezium emits two messages, a delete message and a tombstone message that serves as a signal to the Kafka log compaction process. By default the SMT drops the tombstone message created by Debezium and converts the delete message into a tombstone message that can be dropped too, if required.
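As a sketch, this behaviour can be tuned through the SMT's configuration. The option names below (drop.tombstones, delete.handling.mode) follow the Debezium documentation for this SMT but may differ between Debezium versions:

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.connect.source.SourceRecord;

    import io.debezium.transforms.ExtractNewRecordState;

    public class DeleteHandlingSketch {
        static ExtractNewRecordState<SourceRecord> configuredSmt() {
            Map<String, String> config = new HashMap<>();
            config.put("drop.tombstones", "false");        // forward Debezium's tombstones unchanged
            config.put("delete.handling.mode", "rewrite"); // keep delete records instead of converting them to tombstones
            ExtractNewRecordState<SourceRecord> smt = new ExtractNewRecordState<>();
            smt.configure(config);
            return smt;
        }
    }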
The SMT also offers the option to insert fields from the original record (e.g. 'op' or 'source.ts_ms') into the unwrapped record, or to add them as header attributes.
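For example (again a sketch; the add.fields and add.headers option names follow the Debezium documentation for this SMT and may vary by version):

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.connect.source.SourceRecord;

    import io.debezium.transforms.ExtractNewRecordState;

    public class AddFieldsSketch {
        static ExtractNewRecordState<SourceRecord> configuredSmt() {
            Map<String, String> config = new HashMap<>();
            config.put("add.fields", "op,source.ts_ms"); // copied into the unwrapped value, prefixed with "__" by default
            config.put("add.headers", "op");             // added to the record as a header attribute
            ExtractNewRecordState<SourceRecord> smt = new ExtractNewRecordState<>();
            smt.configure(config);
            return smt;
        }
    }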
- Author:
- Jiri Pechanec
Nested Class Summary
private static class ExtractNewRecordState.FieldReference
Represents a field that should be added to the outgoing record as a header attribute or struct field.
Field Summary
private List<ExtractNewRecordState.FieldReference> additionalFields
private List<ExtractNewRecordState.FieldReference> additionalHeaders
private final org.apache.kafka.connect.transforms.ExtractField<R> afterDelegate
private final org.apache.kafka.connect.transforms.ExtractField<R> beforeDelegate
private boolean dropTombstones
private static final Pattern FIELD_SEPARATOR
private ExtractNewRecordStateConfigDefinition.DeleteHandling handleDeletes
private static final org.slf4j.Logger LOGGER
private static final Pattern NEW_FIELD_SEPARATOR
private static final String PURPOSE
private final org.apache.kafka.connect.transforms.InsertField<R> removedDelegate
private String routeByField
private static final int SCHEMA_CACHE_SIZE
private BoundedConcurrentHashMap<org.apache.kafka.connect.data.Schema,org.apache.kafka.connect.data.Schema> schemaUpdateCache
private SmtManager<R> smtManager
private final org.apache.kafka.connect.transforms.InsertField<R> updatedDelegate
Constructor Summary
ExtractNewRecordState()
Method Summary
private R addFields(List<ExtractNewRecordState.FieldReference> additionalFields, R originalRecord, R unwrappedRecord)
R apply(R record)
void close()
org.apache.kafka.common.config.ConfigDef config()
void configure(Map<String,?> configs)
private org.apache.kafka.connect.header.Headers makeHeaders(List<ExtractNewRecordState.FieldReference> additionalHeaders, org.apache.kafka.connect.data.Struct originalRecordValue)
    Creates a Headers object which contains the headers to be added.
private org.apache.kafka.connect.data.Schema makeUpdatedSchema(List<ExtractNewRecordState.FieldReference> additionalFields, org.apache.kafka.connect.data.Schema schema, org.apache.kafka.connect.data.Struct originalRecordValue)
private R setTopic(...)
private org.apache.kafka.connect.data.SchemaBuilder updateSchema(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.SchemaBuilder builder, org.apache.kafka.connect.data.Schema originalRecordSchema)
private org.apache.kafka.connect.data.Struct updateValue(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.Struct updatedValue, org.apache.kafka.connect.data.Struct struct)
Field Details
LOGGER
private static final org.slf4j.Logger LOGGER
PURPOSE
private static final String PURPOSE
See Also:
Constant Field Values
SCHEMA_CACHE_SIZE
private static final int SCHEMA_CACHE_SIZE
See Also:
Constant Field Values
FIELD_SEPARATOR
private static final Pattern FIELD_SEPARATOR
NEW_FIELD_SEPARATOR
private static final Pattern NEW_FIELD_SEPARATOR
dropTombstones
private boolean dropTombstones
handleDeletes
private ExtractNewRecordStateConfigDefinition.DeleteHandling handleDeletes
additionalHeaders
private List<ExtractNewRecordState.FieldReference> additionalHeaders
additionalFields
private List<ExtractNewRecordState.FieldReference> additionalFields
routeByField
private String routeByField
afterDelegate
private final org.apache.kafka.connect.transforms.ExtractField<R> afterDelegate
beforeDelegate
private final org.apache.kafka.connect.transforms.ExtractField<R> beforeDelegate
removedDelegate
private final org.apache.kafka.connect.transforms.InsertField<R> removedDelegate
updatedDelegate
private final org.apache.kafka.connect.transforms.InsertField<R> updatedDelegate
schemaUpdateCache
private BoundedConcurrentHashMap<org.apache.kafka.connect.data.Schema,org.apache.kafka.connect.data.Schema> schemaUpdateCache
smtManager
private SmtManager<R> smtManager
Constructor Details
ExtractNewRecordState
public ExtractNewRecordState()
Method Details
configure
public void configure(Map<String,?> configs)
Specified by:
configure in interface org.apache.kafka.common.Configurable
apply
public R apply(R record)
Specified by:
apply in interface org.apache.kafka.connect.transforms.Transformation<R>
setTopic
-
makeHeaders
private org.apache.kafka.connect.header.Headers makeHeaders(List<ExtractNewRecordState.FieldReference> additionalHeaders, org.apache.kafka.connect.data.Struct originalRecordValue)
Creates a Headers object which contains the headers to be added.
addFields
private R addFields(List<ExtractNewRecordState.FieldReference> additionalFields, R originalRecord, R unwrappedRecord)
makeUpdatedSchema
private org.apache.kafka.connect.data.Schema makeUpdatedSchema(List<ExtractNewRecordState.FieldReference> additionalFields, org.apache.kafka.connect.data.Schema schema, org.apache.kafka.connect.data.Struct originalRecordValue)
updateSchema
private org.apache.kafka.connect.data.SchemaBuilder updateSchema(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.SchemaBuilder builder, org.apache.kafka.connect.data.Schema originalRecordSchema)
updateValue
private org.apache.kafka.connect.data.Struct updateValue(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.Struct updatedValue, org.apache.kafka.connect.data.Struct struct)
config
public org.apache.kafka.common.config.ConfigDef config()
close
public void close()