trait FileFormat extends AnyRef

Used to read and write data stored in files to/from the InternalRow format.

Linear Supertypes
AnyRef, Any

Abstract Value Members

  1. abstract def inferSchema(sparkSession: SparkSession, options: Map[String, String], files: Seq[FileStatus]): Option[StructType]

    When possible, this method should return the schema of the given files. When the format does not support schema inference, or no valid files are given, it should return None; in that case Spark will require the user to specify the schema manually.
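
    A minimal sketch of an implementation, assuming a hypothetical single-column text format (the one-string-column schema is illustrative, not part of this trait):

      import org.apache.hadoop.fs.FileStatus
      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.types.{StringType, StructField, StructType}

      // Inside a hypothetical FileFormat implementation:
      override def inferSchema(
          sparkSession: SparkSession,
          options: Map[String, String],
          files: Seq[FileStatus]): Option[StructType] = {
        if (files.isEmpty) {
          None  // no valid files: Spark will require a user-specified schema
        } else {
          // This toy format always produces a single string column; a format
          // like Parquet would instead read file footers and merge schemas.
          Some(StructType(Seq(StructField("value", StringType, nullable = true))))
        }
      }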

  2. abstract def prepareWrite(sparkSession: SparkSession, job: Job, options: Map[String, String], dataSchema: StructType): OutputWriterFactory

    Prepares a write job and returns an OutputWriterFactory. Client-side job preparation can be put here; for example, a user-defined output committer can be configured by setting the output committer class in the conf key spark.sql.sources.outputCommitterClass.
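
    A sketch of a possible implementation; MyTextOutputWriter stands in for a hypothetical OutputWriter subclass, not part of this API:

      import org.apache.hadoop.mapreduce.{Job, TaskAttemptContext}
      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.execution.datasources.{OutputWriter, OutputWriterFactory}
      import org.apache.spark.sql.types.StructType

      override def prepareWrite(
          sparkSession: SparkSession,
          job: Job,
          options: Map[String, String],
          dataSchema: StructType): OutputWriterFactory = {
        // Client-side job preparation goes here, e.g. configuring a custom
        // output committer via spark.sql.sources.outputCommitterClass.
        new OutputWriterFactory {
          override def getFileExtension(context: TaskAttemptContext): String =
            ".mytxt"
          override def newInstance(
              path: String,
              dataSchema: StructType,
              context: TaskAttemptContext): OutputWriter =
            new MyTextOutputWriter(path, dataSchema, context)  // hypothetical writer
        }
      }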

Concrete Value Members

  1. def buildReader(sparkSession: SparkSession, dataSchema: StructType, partitionSchema: StructType, requiredSchema: StructType, filters: Seq[Filter], options: Map[String, String], hadoopConf: Configuration): (PartitionedFile) ⇒ Iterator[InternalRow]

    Returns a function that can be used to read a single file as an Iterator of InternalRow.

    dataSchema

    The global data schema. It can be either specified by the user, or reconciled/merged from all underlying data files. If any partition columns are contained in the files, they are preserved in this schema.

    partitionSchema

    The schema of the partition column row that will be present in each PartitionedFile. These columns should be appended to the rows that are produced by the iterator.

    requiredSchema

    The schema of the data that should be output for each row. This may be a subset of the columns that are present in the file if column pruning has occurred.

    filters

    A set of filters that can optionally be used to reduce the number of rows output.

    options

    A set of string -> string configuration options.

    Attributes
    protected
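
    A sketch of the expected shape, assuming the degenerate case of a reader that emits no rows; a real implementation opens the file slice and produces rows matching requiredSchema:

      import org.apache.hadoop.conf.Configuration
      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.catalyst.InternalRow
      import org.apache.spark.sql.execution.datasources.PartitionedFile
      import org.apache.spark.sql.sources.Filter
      import org.apache.spark.sql.types.StructType

      override protected def buildReader(
          sparkSession: SparkSession,
          dataSchema: StructType,
          partitionSchema: StructType,
          requiredSchema: StructType,
          filters: Seq[Filter],
          options: Map[String, String],
          hadoopConf: Configuration): (PartitionedFile) => Iterator[InternalRow] = {
        // Anything captured by the returned closure is shipped to executors,
        // so captured state must be serializable (hadoopConf itself is not).
        (file: PartitionedFile) => {
          // Open the file, read the slice starting at file.start for
          // file.length bytes, apply filters where possible, and emit rows
          // projected to requiredSchema.
          Iterator.empty
        }
      }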
  2. def buildReaderWithPartitionValues(sparkSession: SparkSession, dataSchema: StructType, partitionSchema: StructType, requiredSchema: StructType, filters: Seq[Filter], options: Map[String, String], hadoopConf: Configuration): (PartitionedFile) ⇒ Iterator[InternalRow]

    Exactly the same as buildReader, except that the reader function returned by this method appends partition values to the InternalRows produced by the reader function that buildReader returns.

  3. def isSplitable(sparkSession: SparkSession, options: Map[String, String], path: Path): Boolean

    Returns whether a file with the given path can be split or not.
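
    A sketch, assuming a text-like format where only uncompressed files can be split (the .gz check is illustrative):

      import org.apache.hadoop.fs.Path
      import org.apache.spark.sql.SparkSession

      override def isSplitable(
          sparkSession: SparkSession,
          options: Map[String, String],
          path: Path): Boolean = {
        // A gzip stream cannot be read from an arbitrary offset, so such a
        // file must be handled as a single partition.
        !path.getName.endsWith(".gz")
      }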

  4. def supportBatch(sparkSession: SparkSession, dataSchema: StructType): Boolean

    Returns whether this format supports returning a columnar batch or not.

    TODO: we should just have different traits for the different formats.
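
    A sketch of the kind of check a columnar format might perform, assuming batches are produced only when whole-stage codegen is enabled and every column has an atomic (non-nested) type:

      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.types.{AtomicType, StructType}

      override def supportBatch(
          sparkSession: SparkSession,
          dataSchema: StructType): Boolean = {
        sparkSession.sessionState.conf.wholeStageEnabled &&
          dataSchema.forall(_.dataType.isInstanceOf[AtomicType])
      }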

  5. def supportDataType(dataType: DataType): Boolean

    Returns whether this format supports the given DataType in the read/write path. By default all data types are supported.
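
    A sketch, assuming a flat text-oriented format that cannot represent nested types:

      import org.apache.spark.sql.types.{ArrayType, DataType, MapType, StructType}

      override def supportDataType(dataType: DataType): Boolean = dataType match {
        // Reject nested types; everything else is accepted.
        case _: StructType | _: ArrayType | _: MapType => false
        case _ => true
      }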

  6. def supportFieldName(name: String): Boolean

    Returns whether this format supports the given field name in the read/write path. By default all field names are supported.
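
    A sketch, assuming Parquet-style restrictions on special characters in field names:

      override def supportFieldName(name: String): Boolean = {
        // Disallow characters the on-disk schema cannot represent.
        !name.exists(c => " ,;{}()\n\t=".contains(c))
      }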

  7. def vectorTypes(requiredSchema: StructType, partitionSchema: StructType, sqlConf: SQLConf): Option[Seq[String]]

    Returns the concrete column vector class names for each column to be used in a columnar batch, if this format supports returning columnar batches.
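
    A sketch of what a vectorized format might return: one class name per output column, covering the required columns plus the appended partition columns (OnHeapColumnVector is illustrative):

      import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector
      import org.apache.spark.sql.internal.SQLConf
      import org.apache.spark.sql.types.StructType

      override def vectorTypes(
          requiredSchema: StructType,
          partitionSchema: StructType,
          sqlConf: SQLConf): Option[Seq[String]] = {
        Option(Seq.fill(requiredSchema.fields.length + partitionSchema.fields.length)(
          classOf[OnHeapColumnVector].getName))
      }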

