This object defines the schema retention options applied during Avro/Spark conversions.
When za.co.absa.abris.avro.schemas.policy.SchemaRetentionPolicies.RETAIN_SELECTED_COLUMN_ONLY is used, the column containing the Avro data is extracted, and the schema of that Avro record becomes the schema of the whole Dataframe.
When za.co.absa.abris.avro.schemas.policy.SchemaRetentionPolicies.RETAIN_ORIGINAL_SCHEMA is used, the current Dataframe schema is kept, and the column containing the Avro data is decoded "in place", i.e. as a nested structure inside that column.
A good example is a Kafka Dataframe, which contains fields such as key, value, and partition. If RETAIN_SELECTED_COLUMN_ONLY is used, the current schema is discarded and replaced by the schema of the decoded value column, which means the other fields (key, partition, etc.) are dropped. If RETAIN_ORIGINAL_SCHEMA is used, the original schema is kept, and the decoded Avro record is nested inside the value column.
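The difference between the two policies can be sketched with a small toy model. This is not the ABRiS implementation: rows are modeled as plain Scala `Map`s rather than Spark `Row`s, and `decodeAvro` is a hypothetical stand-in that always returns the same record instead of actually decoding Avro bytes. It only illustrates which columns survive under each policy.

```scala
// Toy model of the two retention policies (NOT the ABRiS implementation).
// A "row" is a Map from column name to value; a decoded Avro record is
// itself a Map, standing in for a nested structure.
object RetentionPolicyDemo {
  type Row = Map[String, Any]

  // Hypothetical decoder stand-in: pretend any payload decodes to this record.
  def decodeAvro(payload: Any): Row = Map("id" -> 1, "name" -> "abc")

  // RETAIN_SELECTED_COLUMN_ONLY: the decoded record REPLACES the whole row,
  // so every other column (key, partition, ...) is dropped.
  def retainSelectedColumnOnly(row: Row, column: String): Row =
    decodeAvro(row(column))

  // RETAIN_ORIGINAL_SCHEMA: all original columns are kept; only the selected
  // column is replaced by the decoded record, nested "in place".
  def retainOriginalSchema(row: Row, column: String): Row =
    row.updated(column, decodeAvro(row(column)))

  def main(args: Array[String]): Unit = {
    val kafkaRow: Row = Map("key" -> "k1", "value" -> "raw-bytes", "partition" -> 0)

    println(retainSelectedColumnOnly(kafkaRow, "value"))
    // only the decoded record's fields remain (id, name)

    println(retainOriginalSchema(kafkaRow, "value"))
    // key and partition survive; value now holds the nested record
  }
}
```

In a real Kafka Dataframe the same contrast appears in the schema itself: the first policy yields a Dataframe whose top-level columns are the Avro record's fields, while the second yields the original Kafka columns with value typed as a struct.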