Class ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>

  • Type Parameters:
    R - the subtype of ConnectRecord on which this transformation will operate

  • All Implemented Interfaces:
    Closeable, AutoCloseable, org.apache.kafka.common.Configurable, org.apache.kafka.connect.transforms.Transformation<R>

    public class ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
    extends Object
    implements org.apache.kafka.connect.transforms.Transformation<R>
    Debezium generates CDC (Envelope) records that are structs containing the values before and after a change. Sink connectors are usually unable to work with such a complex structure, so a user can apply this SMT to extract the after value and send it downstream without the Envelope wrapper.

    The functionality is similar to the ExtractField SMT, but with special semantics for handling delete events: when a delete event is emitted by the database, Debezium emits two messages, a delete message and a tombstone message that serves as a signal to the Kafka log compaction process.

    By default, the SMT drops the tombstone message created by Debezium and converts the delete message into a tombstone message, which can also be dropped if required.

    The SMT also has the option to insert fields from the original record (e.g. 'op' or 'source.ts_ms') into the unwrapped record, or to add them as header attributes.
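
    As a sketch of typical usage, the SMT is registered in a connector configuration; the option names below follow the Debezium documentation, while the values are illustrative:

    ```properties
    transforms=unwrap
    transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
    # keep Debezium's tombstone messages instead of dropping them (default is to drop)
    transforms.unwrap.drop.tombstones=false
    # copy fields from the original Envelope into the flattened record
    transforms.unwrap.add.fields=op,source.ts_ms
    # alternatively, expose fields as header attributes
    transforms.unwrap.add.headers=op
    ```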

    Author:
    Jiri Pechanec
    • Field Detail

      • LOGGER

        private static final org.slf4j.Logger LOGGER
      • FIELD_SEPARATOR

        private static final Pattern FIELD_SEPARATOR
      • NEW_FIELD_SEPARATOR

        private static final Pattern NEW_FIELD_SEPARATOR
      • dropTombstones

        private boolean dropTombstones
      • routeByField

        private String routeByField
      • afterDelegate

        private final org.apache.kafka.connect.transforms.ExtractField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> afterDelegate
      • beforeDelegate

        private final org.apache.kafka.connect.transforms.ExtractField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> beforeDelegate
      • removedDelegate

        private final org.apache.kafka.connect.transforms.InsertField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> removedDelegate
      • updatedDelegate

        private final org.apache.kafka.connect.transforms.InsertField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> updatedDelegate
      • schemaUpdateCache

        private BoundedConcurrentHashMap<org.apache.kafka.connect.data.Schema,​org.apache.kafka.connect.data.Schema> schemaUpdateCache
      • smtManager

        private SmtManager<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> smtManager
    • Constructor Detail

      • ExtractNewRecordState

        public ExtractNewRecordState()
    • Method Detail

      • configure

        public void configure​(Map<String,​?> configs)
        Specified by:
        configure in interface org.apache.kafka.common.Configurable
      • apply

        public R apply​(R record)
        Specified by:
        apply in interface org.apache.kafka.connect.transforms.Transformation<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
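        Conceptually, apply turns a Debezium update Envelope value like the sketch below into just its flattened after state (field contents are illustrative):

        ```json
        {
          "before": { "id": 1, "name": "old" },
          "after":  { "id": 1, "name": "new" },
          "source": { "ts_ms": 1588161600000, "table": "customers" },
          "op": "u",
          "ts_ms": 1588161600123
        }
        ```

        The transformed record value would be the after struct alone, i.e. {"id": 1, "name": "new"}; delete events are converted to tombstones or rewritten according to the configured delete handling.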
      • setTopic

        private R setTopic​(String updatedTopicValue,
                           R record)
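        The topic rewrite performed by setTopic is driven by the route.by.field option, which names a field of the flattened record to use as the destination topic; a hedged configuration sketch (the field name is illustrative):

        ```properties
        # route each flattened record to a topic named after one of its fields
        transforms.unwrap.route.by.field=name
        ```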
      • makeHeaders

        private org.apache.kafka.connect.header.Headers makeHeaders​(List<ExtractNewRecordState.FieldReference> additionalHeaders,
                                                                    org.apache.kafka.connect.data.Struct originalRecordValue)
        Creates a Headers object containing the headers to be added.
      • makeUpdatedSchema

        private org.apache.kafka.connect.data.Schema makeUpdatedSchema​(List<ExtractNewRecordState.FieldReference> additionalFields,
                                                                       org.apache.kafka.connect.data.Schema schema,
                                                                       org.apache.kafka.connect.data.Struct originalRecordValue)
      • updateSchema

        private org.apache.kafka.connect.data.SchemaBuilder updateSchema​(ExtractNewRecordState.FieldReference fieldReference,
                                                                         org.apache.kafka.connect.data.SchemaBuilder builder,
                                                                         org.apache.kafka.connect.data.Schema originalRecordSchema)
      • updateValue

        private org.apache.kafka.connect.data.Struct updateValue​(ExtractNewRecordState.FieldReference fieldReference,
                                                                 org.apache.kafka.connect.data.Struct updatedValue,
                                                                 org.apache.kafka.connect.data.Struct struct)
      • config

        public org.apache.kafka.common.config.ConfigDef config()
        Specified by:
        config in interface org.apache.kafka.connect.transforms.Transformation<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
      • close

        public void close()
        Specified by:
        close in interface AutoCloseable
        Specified by:
        close in interface Closeable
        Specified by:
        close in interface org.apache.kafka.connect.transforms.Transformation<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>