This method extracts a hierarchical record from the specified raw bytes. The copybook for the record must already be parsed.
This extractor expects multiple segments to be provided as a list of (segmentId, data) pairs. Raw data for each segment should be provided as an array of bytes.
This method reconstructs the hierarchical record structure by placing each provided segment in its corresponding position in the hierarchy.
The parsed copybook
The data bits containing the record
A list of segment redefine GROUPs
A mapping from segment ids to segment redefine groups
A mapping from a segment field name to its parents
The offset to the beginning of the record (in bits)
A schema retention policy to be applied to the extracted record
If true, the size of OCCURS DEPENDING ON arrays will depend on the actual number of elements
If true, a record id field will be added as the first field of the record
A file id to be put into the extracted record if generateRecordId == true
The record id to be saved to the record id field
If true, a field containing the input file name will be generated
An input file name to store if its generation is enabled
A Spark org.apache.spark.sql.Row object corresponding to the hierarchical record schema
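The per-segment input described above can be sketched as a list of (segmentId, data) pairs. The segment ids and byte values below are hypothetical, chosen only to illustrate the shape of the input, not taken from the actual API:

```scala
// A minimal sketch of the segments input described above.
// Segment ids and byte contents are made up for illustration only.
val segments: List[(String, Array[Byte])] = List(
  ("COMPANY", Array[Byte](0xC1.toByte, 0xC2.toByte)), // root segment raw data
  ("DEPT", Array[Byte](0xC4.toByte))                  // child segment raw data
)
```

Each pair's first element identifies which segment redefine GROUP the raw bytes belong to, so the extractor can place the segment in its position in the hierarchy.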
This method extracts a record from the specified array of bytes. The copybook for the record must already be parsed.
The parsed copybook.
The data bits containing the record.
The offset to the beginning of the record (in bits).
A schema retention policy to be applied to the extracted record.
If true, the size of OCCURS DEPENDING ON arrays will depend on the actual number of elements.
If true, a record id field will be added as the first field of the record.
Segment ids to put into the extracted record if id generation is turned on.
A file id to be put into the extracted record if generateRecordId == true.
The record id to be saved to the record id field.
An active segment redefine (the one that will be parsed). All other segment redefines will be skipped.
If true, a field containing the input file name will be generated.
An input file name to store if its generation is enabled.
A Spark org.apache.spark.sql.Row object corresponding to the record schema.
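The active segment redefine described above selects which redefine GROUP is parsed for a given record. The mapping and GROUP names below are hypothetical, intended only to illustrate the relationship between segment ids and segment redefine GROUPs:

```scala
// A minimal sketch, assuming segment ids map to segment redefine GROUP names.
// All ids and GROUP names here are illustrative, not part of the actual API.
val segmentIdToRedefine: Map[String, String] = Map(
  "C" -> "COMPANY-DETAILS", // records with segment id "C" use this GROUP
  "P" -> "PERSON-DETAILS"   // records with segment id "P" use this GROUP
)

// Only the active redefine is parsed; all other segment redefines are skipped.
val activeSegmentRedefine: String = segmentIdToRedefine("C")
```

Passing one GROUP name as the active redefine lets the extractor decode only the fields that are valid for that record's segment, leaving the other redefines null.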