Class

za.co.absa.cobrix.cobol.reader.parameters

VariableLengthParameters

case class VariableLengthParameters(isRecordSequence: Boolean, bdw: Option[Bdw], isRdwBigEndian: Boolean, isRdwPartRecLength: Boolean, rdwAdjustment: Int, recordHeaderParser: Option[String], recordExtractor: Option[String], rhpAdditionalInfo: Option[String], reAdditionalInfo: String, recordLengthField: String, fileStartOffset: Int, fileEndOffset: Int, generateRecordId: Boolean, isUsingIndex: Boolean, inputSplitRecords: Option[Int], inputSplitSizeMB: Option[Int], improveLocality: Boolean, optimizeAllocation: Boolean, inputFileNameColumn: String, occursMappings: Map[String, Map[String, Int]]) extends Product with Serializable

This class holds the parameters currently used for parsing variable-length records.

isRecordSequence

Do input files have 4-byte record length headers?

bdw

Block descriptor word (if specified), for FB and VB record formats

isRdwBigEndian

Is the RDW big-endian? This may depend on the mainframe flavor and/or the mainframe-to-PC transfer method

isRdwPartRecLength

Does the RDW count itself as part of the record length?

rdwAdjustment

An adjustment that compensates for a mismatch between the RDW and the actual record length

recordHeaderParser

An optional custom record header parser for non-standard RDWs

recordExtractor

An optional custom raw record parser class for non-standard record types

rhpAdditionalInfo

An optional string of additional options passed to a custom record header parser

reAdditionalInfo

An optional string of additional options passed to a custom record extractor

recordLengthField

A field that stores record length

fileStartOffset

A number of bytes to skip at the beginning of each file

fileEndOffset

A number of bytes to skip at the end of each file

generateRecordId

Generate a sequential record number for each record so that the order of the original data can be retained

isUsingIndex

Whether to index the input files before processing

inputSplitRecords

The number of records to include in each partition. Note that mainframe records may have variable size; inputSplitSizeMB is the recommended option

inputSplitSizeMB

A target partition size. In certain circumstances the actual size may differ, but the library makes a best effort to hit the target

improveLocality

Tries to improve locality by extracting preferred locations for variable-length records

optimizeAllocation

Optimizes cluster usage when locality optimization is enabled and new nodes are present (nodes that do not contain any blocks of the files being processed)

inputFileNameColumn

A column name to add to the dataframe. The column will contain the input file name for each record, similar to Spark's input_file_name() function
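If the reader parameters are constructed programmatically, a call might look like the following sketch. All values are illustrative only, not recommended defaults: they describe a hypothetical input with standard big-endian RDW headers, indexing enabled, and partitions targeting roughly 100 MB.

```scala
import za.co.absa.cobrix.cobol.reader.parameters.VariableLengthParameters

val params = VariableLengthParameters(
  isRecordSequence = true,       // files have 4-byte RDW headers
  bdw = None,                    // no block descriptor words
  isRdwBigEndian = true,
  isRdwPartRecLength = false,    // the RDW does not count itself
  rdwAdjustment = 0,
  recordHeaderParser = None,     // use the standard RDW parser
  recordExtractor = None,
  rhpAdditionalInfo = None,
  reAdditionalInfo = "",
  recordLengthField = "",
  fileStartOffset = 0,
  fileEndOffset = 0,
  generateRecordId = true,       // retain the original record order
  isUsingIndex = true,           // index input files before processing
  inputSplitRecords = None,
  inputSplitSizeMB = Some(100),  // target ~100 MB partitions
  improveLocality = true,
  optimizeAllocation = false,
  inputFileNameColumn = "",      // do not add a file name column
  occursMappings = Map.empty
)
```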

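To illustrate how the RDW-related parameters interact, the following is a minimal self-contained sketch (not Cobrix code) of decoding a 4-byte record descriptor word into a payload length. The helper `rdwToRecordLength` is hypothetical and exists only for this example.

```scala
// Hypothetical helper: decode a 4-byte RDW into the record payload length.
def rdwToRecordLength(rdw: Array[Byte],
                      isRdwBigEndian: Boolean,
                      isRdwPartRecLength: Boolean,
                      rdwAdjustment: Int): Int = {
  val b0 = rdw(0) & 0xFF
  val b1 = rdw(1) & 0xFF
  // The first two bytes hold the length; the remaining two are typically zero.
  val raw = if (isRdwBigEndian) (b0 << 8) | b1 else (b1 << 8) | b0
  // If the RDW counts itself, subtract its own 4 bytes to get the payload size.
  val payload = if (isRdwPartRecLength) raw - 4 else raw
  // rdwAdjustment compensates for any remaining mismatch.
  payload + rdwAdjustment
}

// Big-endian RDW 0x00 0x64 0x00 0x00 that does not count itself:
println(rdwToRecordLength(Array(0x00, 0x64, 0x00, 0x00).map(_.toByte),
  isRdwBigEndian = true, isRdwPartRecLength = false, rdwAdjustment = 0)) // prints 100
```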
Linear Supertypes
Serializable, Serializable, Product, Equals, AnyRef, Any

Instance Constructors

  1. new VariableLengthParameters(isRecordSequence: Boolean, bdw: Option[Bdw], isRdwBigEndian: Boolean, isRdwPartRecLength: Boolean, rdwAdjustment: Int, recordHeaderParser: Option[String], recordExtractor: Option[String], rhpAdditionalInfo: Option[String], reAdditionalInfo: String, recordLengthField: String, fileStartOffset: Int, fileEndOffset: Int, generateRecordId: Boolean, isUsingIndex: Boolean, inputSplitRecords: Option[Int], inputSplitSizeMB: Option[Int], improveLocality: Boolean, optimizeAllocation: Boolean, inputFileNameColumn: String, occursMappings: Map[String, Map[String, Int]])


Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. val bdw: Option[Bdw]

    Block descriptor word (if specified), for FB and VB record formats

  6. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  7. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  8. val fileEndOffset: Int

    The number of bytes to skip at the end of each file

  9. val fileStartOffset: Int

    The number of bytes to skip at the beginning of each file

  10. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. val generateRecordId: Boolean

    Generate a sequential record number for each record so that the order of the original data can be retained

  12. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  13. val improveLocality: Boolean

    Tries to improve locality by extracting preferred locations for variable-length records

  14. val inputFileNameColumn: String

    A column name to add to the dataframe. The column will contain the input file name for each record, similar to Spark's input_file_name() function

  15. val inputSplitRecords: Option[Int]

    The number of records to include in each partition. Note that mainframe records may have variable size; inputSplitSizeMB is the recommended option

  16. val inputSplitSizeMB: Option[Int]

    A target partition size. In certain circumstances the actual size may differ, but the library makes a best effort to hit the target

  17. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  18. val isRdwBigEndian: Boolean

    Is the RDW big-endian? This may depend on the mainframe flavor and/or the mainframe-to-PC transfer method

  19. val isRdwPartRecLength: Boolean

    Does the RDW count itself as part of the record length?

  20. val isRecordSequence: Boolean

    Do input files have 4-byte record length headers?

  21. val isUsingIndex: Boolean

    Whether to index the input files before processing

  22. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  23. final def notify(): Unit

    Definition Classes
    AnyRef
  24. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  25. val occursMappings: Map[String, Map[String, Int]]

  26. val optimizeAllocation: Boolean

    Optimizes cluster usage when locality optimization is enabled and new nodes are present (nodes that do not contain any blocks of the files being processed)

  27. val rdwAdjustment: Int

    An adjustment that compensates for a mismatch between the RDW and the actual record length

  28. val reAdditionalInfo: String

    An optional string of additional options passed to a custom record extractor

  29. val recordExtractor: Option[String]

    An optional custom raw record parser class for non-standard record types

  30. val recordHeaderParser: Option[String]

    An optional custom record header parser for non-standard RDWs

  31. val recordLengthField: String

    A field that stores the record length

  32. val rhpAdditionalInfo: Option[String]

    An optional string of additional options passed to a custom record header parser

  33. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  34. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  35. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  36. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Inherited from Serializable

Inherited from Serializable

Inherited from Product

Inherited from Equals

Inherited from AnyRef

Inherited from Any