This is a copy of AbstractKafkaAvroDeserializer#DeserializationContext.read with some modifications: we pass in the record buffer data and the schema that will be used to convert the record.
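The modified read step can be sketched roughly as follows. This is an illustrative sketch, not the actual implementation: the class and method names (`RecordReader`, `read`) are hypothetical, and it assumes Apache Avro on the classpath. Unlike Confluent's original, the buffer here contains only the Avro payload (no MagicByte / schema id header) and both schemas are supplied by the caller.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DecoderFactory;

// Hypothetical sketch of the modified DeserializationContext.read:
// the buffer holds a bare Avro payload and the schemas are passed in explicitly.
class RecordReader {
    static Object read(ByteBuffer buffer, Schema writerSchema, Schema readerSchema)
            throws IOException {
        // Copy the remaining buffer contents into a byte array for the decoder.
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes, 0, bytes.length);
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
        // Reader with both schemas performs Avro schema resolution during the read.
        return new GenericDatumReader<>(writerSchema, readerSchema).read(null, decoder);
    }
}
```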
We use it to check whether writerSchema is primitive when creating the DatumReader (createDatumReader).
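The primitive check can be sketched like this. The class and method names (`SchemaChecks`, `isPrimitive`) are illustrative, not the actual ones, and Apache Avro is assumed on the classpath; the set of primitive types follows the Avro specification.

```java
import org.apache.avro.Schema;

// Hypothetical sketch of the writerSchema primitive check used when creating
// the DatumReader: Avro's primitive types are exactly the eight listed below.
class SchemaChecks {
    static boolean isPrimitive(Schema schema) {
        switch (schema.getType()) {
            case NULL:
            case BOOLEAN:
            case INT:
            case LONG:
            case FLOAT:
            case DOUBLE:
            case BYTES:
            case STRING:
                return true;
            default:
                // Records, enums, arrays, maps, unions and fixed are complex types.
                return false;
        }
    }
}
```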
Record serialization method, loosely copied from AbstractKafkaAvroSerializer#serializeImpl. We use the Confluent serialization mechanism but skip some of its specifics:
- fetching the schema from the registry
- fetching the schema Id
- serializing the MagicByte and version
For serialization we use the schema from the record itself.
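A sketch of this serialization path is below. The names (`RecordSerializer`, `serialize`) are hypothetical and Apache Avro is assumed on the classpath; the point is that the writer schema comes from the record and no MagicByte / schema id header is emitted.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

// Hypothetical sketch: serialize with the schema carried by the record itself,
// producing only the Avro binary payload (no MagicByte, no schema id, no version).
class RecordSerializer {
    static byte[] serialize(GenericRecord record) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        // The writer schema is taken from the record, not fetched from a registry.
        new GenericDatumWriter<GenericRecord>(record.getSchema()).write(record, encoder);
        encoder.flush();
        return out.toByteArray();
    }
}
```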
In the future we may make this configurable.
(Since version ) See the corresponding Javadoc for more information.
This is the base implementation of AvroSchemaEvolution. Its strategy for evolving a record to a schema is as follows:
serialize the record using the record's own schema -> deserialize it using the provided (final) schema
This strategy is based on Confluent's implementation of the serialization and deserialization methods, but we do not allocate bytes for the MagicByte and schema Id, because we do not need them.
For now this is the easiest way to convert a GenericContainer record to the desired schema.
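The serialize-then-deserialize round trip described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the names (`AvroEvolutionSketch`, `alignTo`) are hypothetical and Apache Avro is assumed on the classpath. Avro's schema resolution fills in reader-schema defaults for fields missing from the writer schema.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

// Hypothetical sketch of the evolution strategy:
// serialize with the record's schema, then deserialize with the target schema.
class AvroEvolutionSketch {
    static GenericRecord alignTo(GenericRecord record, Schema targetSchema)
            throws IOException {
        // Step 1: serialize using the record's own (writer) schema --
        // no MagicByte, no schema id.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(record.getSchema()).write(record, encoder);
        encoder.flush();
        // Step 2: deserialize with writer schema = record schema,
        // reader schema = target (final) schema; Avro resolves the differences.
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        return new GenericDatumReader<GenericRecord>(record.getSchema(), targetSchema)
                .read(null, decoder);
    }
}
```

For example, evolving a record to a target schema that adds a field with a default value yields a record carrying that default.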