Package io.debezium.transforms
Class ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
java.lang.Object
io.debezium.transforms.ExtractNewRecordState<R>
- Type Parameters:
R - the subtype of ConnectRecord on which this transformation will operate
- All Implemented Interfaces:
Closeable, AutoCloseable, org.apache.kafka.common.Configurable, org.apache.kafka.connect.transforms.Transformation<R>
public class ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
extends Object
implements org.apache.kafka.connect.transforms.Transformation<R>
Debezium generates CDC (Envelope) records that are structs of values containing the values before and after the change. Sink connectors are usually not able to work with such a complex structure, so a user can apply this SMT to extract the after value and send it downstream unwrapped from the Envelope.
The functionality is similar to the ExtractField SMT but with special semantics for handling delete events: when a delete event is emitted by the database, Debezium emits two messages: a delete message and a tombstone message that serves as a signal to the Kafka log compaction process.
The SMT by default drops the tombstone message created by Debezium and converts the delete message into a tombstone message that can be dropped, too, if required.
The SMT also has the option to insert fields from the original record (e.g. 'op' or 'source.ts_ms') into the unwrapped record or add them as header attributes.
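As a minimal sketch of typical usage, the following example configures the SMT programmatically and applies it to a single change record. The class and method names (UnwrapExample, unwrap) are illustrative, and the option keys (drop.tombstones, delete.handling.mode, add.fields, add.headers) are assumed from Debezium's documented SMT properties rather than taken from this page; in a Kafka Connect deployment the same keys would normally appear in the connector configuration under a transforms.<name>. prefix.

    import java.util.Map;

    import org.apache.kafka.connect.source.SourceRecord;

    import io.debezium.transforms.ExtractNewRecordState;

    public class UnwrapExample {

        // Minimal sketch: flatten one Debezium change record with ExtractNewRecordState.
        // The configuration keys below are assumptions based on Debezium's SMT documentation.
        public static SourceRecord unwrap(SourceRecord change) {
            try (ExtractNewRecordState<SourceRecord> transform = new ExtractNewRecordState<>()) {
                transform.configure(Map.of(
                        "drop.tombstones", "true",          // discard the tombstone Debezium emits after a delete
                        "delete.handling.mode", "rewrite",  // keep delete records but mark them as deleted
                        "add.fields", "op,source.ts_ms",    // copy Envelope fields into the flattened value
                        "add.headers", "db,table"));        // expose source fields as header attributes
                // The result carries only the 'after' state (plus any added fields);
                // deletes become tombstones or rewritten records depending on the mode above.
                return transform.apply(change);
            }
        }
    }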
- Author:
- Jiri Pechanec
-
Nested Class Summary
Modifier and Type    Class    Description
private static class    ExtractNewRecordState.FieldReference    Represents a field that should be added to the outgoing record as a header attribute or struct field.
private static class    ExtractNewRecordState.NewRecordValueMetadata
-
Field Summary
Modifier and Type    Field
private List<ExtractNewRecordState.FieldReference>    additionalFields
private List<ExtractNewRecordState.FieldReference>    additionalHeaders
private final org.apache.kafka.connect.transforms.ExtractField<R>    afterDelegate
private final org.apache.kafka.connect.transforms.ExtractField<R>    beforeDelegate
private static final Field    DROP_FIELDS_FROM_KEY
private static final Field    DROP_FIELDS_HEADER
private static final Field    DROP_FIELDS_KEEP_SCHEMA_COMPATIBLE
private boolean    dropFieldsFromKey
private String    dropFieldsHeaderName
private boolean    dropFieldsKeepSchemaCompatible
private boolean    dropTombstones
private static final String    EXCLUDE
private static final Pattern    FIELD_SEPARATOR
private static final org.slf4j.Logger    LOGGER
private static final Pattern    NEW_FIELD_SEPARATOR
private static final String    PURPOSE
private final org.apache.kafka.connect.transforms.InsertField<R>    removedDelegate
private String    routeByField
private static final int    SCHEMA_CACHE_SIZE
private BoundedConcurrentHashMap<ExtractNewRecordState.NewRecordValueMetadata,org.apache.kafka.connect.data.Schema>    schemaUpdateCache
private SmtManager<R>    smtManager
private final org.apache.kafka.connect.transforms.InsertField<R>    updatedDelegate
-
Constructor Summary
-
Method Summary
Modifier and Type    Method    Description
private R    addFields(List<ExtractNewRecordState.FieldReference> additionalFields, R originalRecord, R unwrappedRecord)
R    apply(R record)
private ExtractNewRecordState.NewRecordValueMetadata    buildCacheKey(org.apache.kafka.connect.data.Struct value, R originalRecord)
void    close()
org.apache.kafka.common.config.ConfigDef    config()
void    configure(Map<String,?> configs)
private R    dropFields(R record)
private R    dropKeyFields(R record, List<String> fieldNames)
private R    dropValueFields(R record, List<String> fieldNames)
getFieldsToDropFromSchema(org.apache.kafka.connect.data.Schema schema, List<String> fieldNames)
private org.apache.kafka.connect.header.Header    getHeaderByName(R record, String headerName)
private org.apache.kafka.connect.header.Headers    makeHeaders(List<ExtractNewRecordState.FieldReference> additionalHeaders, org.apache.kafka.connect.data.Struct originalRecordValue)    Create a Headers object which contains the headers to be added.
private org.apache.kafka.connect.data.Schema    makeUpdatedSchema(List<ExtractNewRecordState.FieldReference> additionalFields, org.apache.kafka.connect.data.Schema schema, org.apache.kafka.connect.data.Struct originalRecordValue)
private R    setTopic
private org.apache.kafka.connect.data.SchemaBuilder    updateSchema(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.SchemaBuilder builder, org.apache.kafka.connect.data.Schema fieldSchema)
private org.apache.kafka.connect.data.Struct    updateValue(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.Struct updatedValue, org.apache.kafka.connect.data.Struct struct)
-
Field Details
-
LOGGER
private static final org.slf4j.Logger LOGGER -
PURPOSE
- See Also:
-
EXCLUDE
- See Also:
-
SCHEMA_CACHE_SIZE
private static final int SCHEMA_CACHE_SIZE- See Also:
-
FIELD_SEPARATOR
-
NEW_FIELD_SEPARATOR
-
DROP_FIELDS_HEADER
-
DROP_FIELDS_FROM_KEY
-
DROP_FIELDS_KEEP_SCHEMA_COMPATIBLE
-
dropTombstones
private boolean dropTombstones -
dropFieldsHeaderName
-
dropFieldsFromKey
private boolean dropFieldsFromKey -
dropFieldsKeepSchemaCompatible
private boolean dropFieldsKeepSchemaCompatible -
handleDeletes
-
additionalHeaders
-
additionalFields
-
routeByField
-
afterDelegate
-
beforeDelegate
-
removedDelegate
-
updatedDelegate
-
schemaUpdateCache
private BoundedConcurrentHashMap<ExtractNewRecordState.NewRecordValueMetadata,org.apache.kafka.connect.data.Schema> schemaUpdateCache -
smtManager
-
-
Constructor Details
-
ExtractNewRecordState
public ExtractNewRecordState()
-
-
Method Details
-
configure
- Specified by:
configure
in interface org.apache.kafka.common.Configurable
-
apply
-
setTopic
-
makeHeaders
private org.apache.kafka.connect.header.Headers makeHeaders(List<ExtractNewRecordState.FieldReference> additionalHeaders, org.apache.kafka.connect.data.Struct originalRecordValue) Create a Headers object which contains the headers to be added. -
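Because ExtractNewRecordState.FieldReference is a private nested type, the following sketch only illustrates the general pattern such a method follows: reading values out of the original Envelope struct and adding them to a Kafka Connect Headers object. The helper name, the hard-coded field choices, and the '__'-prefixed header names are hypothetical and not the actual implementation.

    import org.apache.kafka.connect.data.Schema;
    import org.apache.kafka.connect.data.Struct;
    import org.apache.kafka.connect.header.ConnectHeaders;
    import org.apache.kafka.connect.header.Headers;

    final class HeaderSketch {

        // Hypothetical helper: copy the Envelope's 'op' field and the nested
        // 'source.ts_ms' field into headers, roughly what add.headers=op,source.ts_ms requests.
        static Headers headersFor(Struct envelope) {
            Headers headers = new ConnectHeaders();

            // Top-level Envelope field, e.g. the operation type ('c', 'u', 'd').
            Schema opSchema = envelope.schema().field("op").schema();
            headers.add("__op", envelope.get("op"), opSchema);

            // Nested field taken from the 'source' info block.
            Struct source = envelope.getStruct("source");
            Schema tsSchema = source.schema().field("ts_ms").schema();
            headers.add("__source_ts_ms", source.get("ts_ms"), tsSchema);

            return headers;
        }
    }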
addFields
private R addFields(List<ExtractNewRecordState.FieldReference> additionalFields, R originalRecord, R unwrappedRecord) -
buildCacheKey
private ExtractNewRecordState.NewRecordValueMetadata buildCacheKey(org.apache.kafka.connect.data.Struct value, R originalRecord) -
dropFields
-
dropKeyFields
-
dropValueFields
-
getFieldsToDropFromSchema
-
getHeaderByName
-
makeUpdatedSchema
private org.apache.kafka.connect.data.Schema makeUpdatedSchema(List<ExtractNewRecordState.FieldReference> additionalFields, org.apache.kafka.connect.data.Schema schema, org.apache.kafka.connect.data.Struct originalRecordValue) -
updateSchema
private org.apache.kafka.connect.data.SchemaBuilder updateSchema(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.SchemaBuilder builder, org.apache.kafka.connect.data.Schema fieldSchema) -
updateValue
private org.apache.kafka.connect.data.Struct updateValue(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.Struct updatedValue, org.apache.kafka.connect.data.Struct struct) -
config
public org.apache.kafka.common.config.ConfigDef config() -
close
public void close()
-