Package io.debezium.transforms
Class ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
- java.lang.Object
-
- io.debezium.transforms.ExtractNewRecordState<R>
-
- Type Parameters:
R - the subtype of ConnectRecord on which this transformation will operate
- All Implemented Interfaces:
Closeable, AutoCloseable, org.apache.kafka.common.Configurable, org.apache.kafka.connect.transforms.Transformation<R>
public class ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> extends Object implements org.apache.kafka.connect.transforms.Transformation<R>
Debezium generates CDC (Envelope) records that are structs containing the before and after change values. Sink connectors are usually not able to work with such a complex structure, so a user can apply this SMT to extract the after value and send it downstream unwrapped from the Envelope.

The functionality is similar to the ExtractField SMT but has special semantics for handling delete events: when a delete event is emitted by the database, Debezium emits two messages: a delete message and a tombstone message that serves as a signal to the Kafka log compaction process. By default the SMT drops the tombstone message created by Debezium and converts the delete message into a tombstone message that can be dropped, too, if required.

The SMT also has the option to insert fields from the original record (e.g. 'op' or 'source.ts_ms') into the unwrapped record or to add them as header attributes.
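In practice this SMT is enabled through the connector configuration rather than instantiated directly. A minimal sketch follows; the alias `unwrap` and the option values are illustrative, and the property names should be verified against the configuration definition of the Debezium version in use:

```properties
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
# drop Debezium's tombstone messages (corresponds to the dropTombstones field)
transforms.unwrap.drop.tombstones=true
# how to treat delete messages (corresponds to the handleDeletes field)
transforms.unwrap.delete.handling.mode=rewrite
# copy envelope fields into the unwrapped record (additionalFields)
transforms.unwrap.add.fields=op,source.ts_ms
# add envelope fields as header attributes (additionalHeaders)
transforms.unwrap.add.headers=db,table
```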
- Author:
- Jiri Pechanec
-
-
Nested Class Summary
private static class ExtractNewRecordState.FieldReference
Represents a field that should be added to the outgoing record as a header attribute or struct field.
-
Field Summary
private List<ExtractNewRecordState.FieldReference> additionalFields
private List<ExtractNewRecordState.FieldReference> additionalHeaders
private org.apache.kafka.connect.transforms.ExtractField<R> afterDelegate
private org.apache.kafka.connect.transforms.ExtractField<R> beforeDelegate
private boolean dropTombstones
private static Pattern FIELD_SEPARATOR
private ExtractNewRecordStateConfigDefinition.DeleteHandling handleDeletes
private static org.slf4j.Logger LOGGER
private static Pattern NEW_FIELD_SEPARATOR
private static String PURPOSE
private org.apache.kafka.connect.transforms.InsertField<R> removedDelegate
private String routeByField
private static int SCHEMA_CACHE_SIZE
private BoundedConcurrentHashMap<org.apache.kafka.connect.data.Schema,org.apache.kafka.connect.data.Schema> schemaUpdateCache
private SmtManager<R> smtManager
private org.apache.kafka.connect.transforms.InsertField<R> updatedDelegate
-
Constructor Summary
ExtractNewRecordState()
-
Method Summary
private R addFields(List<ExtractNewRecordState.FieldReference> additionalFields, R originalRecord, R unwrappedRecord)
R apply(R record)
void close()
org.apache.kafka.common.config.ConfigDef config()
void configure(Map<String,?> configs)
private org.apache.kafka.connect.header.Headers makeHeaders(List<ExtractNewRecordState.FieldReference> additionalHeaders, org.apache.kafka.connect.data.Struct originalRecordValue)
    Create a Headers object which contains the headers to be added.
private org.apache.kafka.connect.data.Schema makeUpdatedSchema(List<ExtractNewRecordState.FieldReference> additionalFields, org.apache.kafka.connect.data.Schema schema, org.apache.kafka.connect.data.Struct originalRecordValue)
private R setTopic(String updatedTopicValue, R record)
private org.apache.kafka.connect.data.SchemaBuilder updateSchema(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.SchemaBuilder builder, org.apache.kafka.connect.data.Schema originalRecordSchema)
private org.apache.kafka.connect.data.Struct updateValue(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.Struct updatedValue, org.apache.kafka.connect.data.Struct struct)
-
-
-
Field Detail
-
LOGGER
private static final org.slf4j.Logger LOGGER
-
PURPOSE
private static final String PURPOSE
- See Also:
- Constant Field Values
-
SCHEMA_CACHE_SIZE
private static final int SCHEMA_CACHE_SIZE
- See Also:
- Constant Field Values
-
FIELD_SEPARATOR
private static final Pattern FIELD_SEPARATOR
-
NEW_FIELD_SEPARATOR
private static final Pattern NEW_FIELD_SEPARATOR
-
dropTombstones
private boolean dropTombstones
-
handleDeletes
private ExtractNewRecordStateConfigDefinition.DeleteHandling handleDeletes
-
additionalHeaders
private List<ExtractNewRecordState.FieldReference> additionalHeaders
-
additionalFields
private List<ExtractNewRecordState.FieldReference> additionalFields
-
routeByField
private String routeByField
-
afterDelegate
private final org.apache.kafka.connect.transforms.ExtractField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> afterDelegate
-
beforeDelegate
private final org.apache.kafka.connect.transforms.ExtractField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> beforeDelegate
-
removedDelegate
private final org.apache.kafka.connect.transforms.InsertField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> removedDelegate
-
updatedDelegate
private final org.apache.kafka.connect.transforms.InsertField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> updatedDelegate
-
schemaUpdateCache
private BoundedConcurrentHashMap<org.apache.kafka.connect.data.Schema,org.apache.kafka.connect.data.Schema> schemaUpdateCache
-
smtManager
private SmtManager<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> smtManager
-
-
Method Detail
-
configure
public void configure(Map<String,?> configs)
- Specified by:
configure in interface org.apache.kafka.common.Configurable
-
makeHeaders
private org.apache.kafka.connect.header.Headers makeHeaders(List<ExtractNewRecordState.FieldReference> additionalHeaders, org.apache.kafka.connect.data.Struct originalRecordValue)
Create a Headers object which contains the headers to be added.
-
addFields
private R addFields(List<ExtractNewRecordState.FieldReference> additionalFields, R originalRecord, R unwrappedRecord)
-
makeUpdatedSchema
private org.apache.kafka.connect.data.Schema makeUpdatedSchema(List<ExtractNewRecordState.FieldReference> additionalFields, org.apache.kafka.connect.data.Schema schema, org.apache.kafka.connect.data.Struct originalRecordValue)
-
updateSchema
private org.apache.kafka.connect.data.SchemaBuilder updateSchema(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.SchemaBuilder builder, org.apache.kafka.connect.data.Schema originalRecordSchema)
-
updateValue
private org.apache.kafka.connect.data.Struct updateValue(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.Struct updatedValue, org.apache.kafka.connect.data.Struct struct)
-
config
public org.apache.kafka.common.config.ConfigDef config()
-
-