Package io.debezium.transforms
Class ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
java.lang.Object
io.debezium.transforms.AbstractExtractNewRecordState<R>
io.debezium.transforms.ExtractNewRecordState<R>
- Type Parameters:
R - the subtype of ConnectRecord on which this transformation will operate
- All Implemented Interfaces:
Closeable, AutoCloseable, org.apache.kafka.common.Configurable, org.apache.kafka.connect.transforms.Transformation<R>
public class ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
extends AbstractExtractNewRecordState<R>
Debezium generates CDC (Envelope) records that are structs containing the values before and after the change. Sink connectors are usually not able to work with such a complex structure, so a user can apply this SMT to extract the after value and propagate it downstream unwrapped, without the Envelope.
The functionality is similar to the ExtractField SMT but has special semantics for handling delete events: when a delete event is emitted by the database, Debezium emits two messages: a delete message and a tombstone message that serves as a signal to the Kafka log compaction process. By default, the SMT drops the tombstone message created by Debezium and converts the delete message into a tombstone message that can also be dropped, if required. The SMT also has the option to insert fields from the original record (e.g. 'op' or 'source.ts_ms') into the unwrapped record or to add them as header attributes.
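As a sketch of typical usage (option names as documented for this SMT in the Debezium reference documentation; the transform alias `unwrap` and the exact set of options shown are illustrative choices), the transformation is enabled in a connector configuration like this:

```properties
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
# retain the tombstone messages Debezium emits after deletes (default drops them)
transforms.unwrap.drop.tombstones=false
# convert delete events into rewritten records instead of tombstones
transforms.unwrap.delete.handling.mode=rewrite
# copy metadata from the original Envelope into the flattened record / headers
transforms.unwrap.add.fields=op,source.ts_ms
transforms.unwrap.add.headers=db
```

Note that the exact option names vary between Debezium versions (newer releases consolidate the delete/tombstone options), so the connector's own documentation should be consulted for the deployed version.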
- Author:
- Jiri Pechanec
-
Nested Class Summary
Nested classes/interfaces inherited from class io.debezium.transforms.AbstractExtractNewRecordState
AbstractExtractNewRecordState.FieldReference, AbstractExtractNewRecordState.NewRecordValueMetadata
-
Field Summary
Modifier and Type / Field
private final Field.Set configFields
private static final Field DROP_FIELDS_FROM_KEY
private static final Field DROP_FIELDS_HEADER
private static final Field DROP_FIELDS_KEEP_SCHEMA_COMPATIBLE
private boolean dropFieldsFromKey
private String dropFieldsHeaderName
private boolean dropFieldsKeepSchemaCompatible
private static final String EXCLUDE
private static final org.slf4j.Logger LOGGER
private static final int SCHEMA_CACHE_SIZE
private BoundedConcurrentHashMap<AbstractExtractNewRecordState.NewRecordValueMetadata,org.apache.kafka.connect.data.Schema> schemaUpdateCache
Fields inherited from class io.debezium.transforms.AbstractExtractNewRecordState
additionalFields, additionalHeaders, config, extractRecordStrategy, PURPOSE, routeByField, smtManager
-
Constructor Summary
-
Method Summary
Modifier and Type / Method
private R addFields(List<AbstractExtractNewRecordState.FieldReference> additionalFields, R originalRecord, R unwrappedRecord)
private AbstractExtractNewRecordState.NewRecordValueMetadata buildCacheKey(org.apache.kafka.connect.data.Struct value, R originalRecord)
org.apache.kafka.common.config.ConfigDef config()
void configure(Map<String,?> configs)
private R dropFields(R record)
private R dropKeyFields(R record, List<String> fieldNames)
private R dropValueFields(R record, List<String> fieldNames)
getFieldsToDropFromSchema(org.apache.kafka.connect.data.Schema schema, List<String> fieldNames)
private org.apache.kafka.connect.data.Schema makeUpdatedSchema(List<AbstractExtractNewRecordState.FieldReference> additionalFields, org.apache.kafka.connect.data.Schema schema, org.apache.kafka.connect.data.Struct originalRecordValue)
private org.apache.kafka.connect.data.SchemaBuilder updateSchema(AbstractExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.SchemaBuilder builder, org.apache.kafka.connect.data.Schema fieldSchema)
private org.apache.kafka.connect.data.Struct updateValue(AbstractExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.Struct updatedValue, org.apache.kafka.connect.data.Struct struct)
Methods inherited from class io.debezium.transforms.AbstractExtractNewRecordState
apply, close, getHeaderByName, makeHeaders, setTopic
-
Field Details
-
LOGGER
private static final org.slf4j.Logger LOGGER
-
EXCLUDE
private static final String EXCLUDE
- See Also:
-
SCHEMA_CACHE_SIZE
private static final int SCHEMA_CACHE_SIZE
- See Also:
-
DROP_FIELDS_HEADER
private static final Field DROP_FIELDS_HEADER
-
DROP_FIELDS_FROM_KEY
private static final Field DROP_FIELDS_FROM_KEY
-
DROP_FIELDS_KEEP_SCHEMA_COMPATIBLE
private static final Field DROP_FIELDS_KEEP_SCHEMA_COMPATIBLE
-
dropFieldsHeaderName
private String dropFieldsHeaderName
-
dropFieldsFromKey
private boolean dropFieldsFromKey
-
dropFieldsKeepSchemaCompatible
private boolean dropFieldsKeepSchemaCompatible
-
schemaUpdateCache
private BoundedConcurrentHashMap<AbstractExtractNewRecordState.NewRecordValueMetadata,org.apache.kafka.connect.data.Schema> schemaUpdateCache
-
configFields
private final Field.Set configFields
-
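The DROP_FIELDS_* fields back configuration options that let a message header drive which fields are removed from the unwrapped record. As a hedged sketch (option names as exposed by this SMT in recent Debezium releases; the header name `x-drop-fields` is an arbitrary example):

```properties
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
# name of the message header whose value lists the fields to drop
transforms.unwrap.drop.fields.header.name=x-drop-fields
# also drop the listed fields from the record key, not only the value
transforms.unwrap.drop.fields.from.key=true
# keep the emitted schema compatible by respecting optionality constraints
# when removing fields
transforms.unwrap.drop.fields.keep.schema.compatible=true
```

The corresponding instance fields (dropFieldsHeaderName, dropFieldsFromKey, dropFieldsKeepSchemaCompatible) are populated from this configuration in configure(), and the dropFields/dropKeyFields/dropValueFields methods apply them per record.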
-
Constructor Details
-
ExtractNewRecordState
public ExtractNewRecordState()
-
-
Method Details
-
configure
- Specified by:
configure
in interface org.apache.kafka.common.Configurable
- Overrides:
configure
in class AbstractExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
-
doApply
- Specified by:
doApply
in class AbstractExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
-
validateConfigFields
- Specified by:
validateConfigFields
in class AbstractExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
-
config
public org.apache.kafka.common.config.ConfigDef config()
-
addFields
private R addFields(List<AbstractExtractNewRecordState.FieldReference> additionalFields, R originalRecord, R unwrappedRecord)
-
buildCacheKey
private AbstractExtractNewRecordState.NewRecordValueMetadata buildCacheKey(org.apache.kafka.connect.data.Struct value, R originalRecord)
-
dropFields
-
dropKeyFields
-
dropValueFields
-
getFieldsToDropFromSchema
-
makeUpdatedSchema
private org.apache.kafka.connect.data.Schema makeUpdatedSchema(List<AbstractExtractNewRecordState.FieldReference> additionalFields, org.apache.kafka.connect.data.Schema schema, org.apache.kafka.connect.data.Struct originalRecordValue)
-
updateSchema
private org.apache.kafka.connect.data.SchemaBuilder updateSchema(AbstractExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.SchemaBuilder builder, org.apache.kafka.connect.data.Schema fieldSchema)
-
updateValue
private org.apache.kafka.connect.data.Struct updateValue(AbstractExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.Struct updatedValue, org.apache.kafka.connect.data.Struct struct)
-