Package org.apache.spark.sql.execution.datasources

package datasources

Type Members

  1. class CatalogFileIndex extends FileIndex

    A FileIndex for a metastore catalog table.

  2. case class CreateTable(tableDesc: CatalogTable, mode: SaveMode, query: Option[LogicalPlan]) extends LogicalPlan with Product with Serializable

    Create a table and optionally insert some data into it. Note that this plan is unresolved and has to be replaced by the concrete implementations during analysis.

    tableDesc

    the metadata of the table to be created.

    mode

    the data writing mode

    query

    an optional logical plan representing data to write into the created table.

  3. case class CreateTempViewUsing(tableIdent: TableIdentifier, userSpecifiedSchema: Option[StructType], replace: Boolean, global: Boolean, provider: String, options: Map[String, String]) extends LeafNode with RunnableCommand with Product with Serializable

    Create or replace a local/global temporary view with the given data source.
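
    A hedged usage sketch of the DDL this command backs, assuming a SparkSession named spark and a hypothetical Parquet path; OR REPLACE corresponds to the replace flag and GLOBAL to the global flag.

      // Local temporary view over an external data source (the path is hypothetical).
      spark.sql("""
        CREATE OR REPLACE TEMPORARY VIEW recent_events
        USING parquet
        OPTIONS (path '/tmp/events')
      """)

      // Global temporary views are registered in the global_temp database instead.
      spark.sql("""
        CREATE GLOBAL TEMPORARY VIEW shared_events
        USING parquet
        OPTIONS (path '/tmp/events')
      """)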

  4. case class DataSource(sparkSession: SparkSession, className: String, paths: Seq[String] = Nil, userSpecifiedSchema: Option[StructType] = None, partitionColumns: Seq[String] = Seq.empty, bucketSpec: Option[BucketSpec] = None, options: Map[String, String] = Map.empty, catalogTable: Option[CatalogTable] = None) extends Logging with Product with Serializable

    The main class responsible for representing a pluggable Data Source in Spark SQL. In addition to acting as the canonical set of parameters that can describe a Data Source, this class is used to resolve a description to a concrete implementation that can be used in a query plan (either batch or streaming) or to write out data using an external library.

    From an end user's perspective a DataSource description can be created explicitly using org.apache.spark.sql.DataFrameReader or CREATE TABLE USING DDL. Additionally, this class is used when resolving a description from a metastore to a concrete implementation.

    Many of the arguments to this class are optional, though depending on the specific API being used these optional arguments might be filled in during resolution using either inference or external metadata. For example, when reading a partitioned table from a file system, partition columns will be inferred from the directory layout even if they are not specified.

    paths

    A list of file system paths that hold data. These will be globbed and qualified before use. This option only works when reading from a FileFormat.

    userSpecifiedSchema

    An optional specification of the schema of the data. When present we skip attempting to infer the schema.

    partitionColumns

    A list of column names that the relation is partitioned by. This list is generally empty during the read path, unless this DataSource is managed by Hive. In these cases, during resolveRelation, we will call getOrInferFileFormatSchema for file based DataSources to infer the partitioning. In other cases, if this list is empty, then this table is unpartitioned.

    bucketSpec

    An optional specification for bucketing (hash-partitioning) of the data.

    catalogTable

    Optional catalog table reference that can be used to push down operations over the datasource to the catalog service.
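
    To make the two creation paths mentioned above concrete, here is a hedged sketch assuming a SparkSession named spark and a hypothetical Parquet directory; both forms end up described by a DataSource with className "parquet" and the given paths/options.

      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder().appName("datasource-sketch").getOrCreate()

      // Explicit description through DataFrameReader: format, options and load paths
      // map onto the className, options and paths parameters documented above.
      val events = spark.read
        .format("parquet")
        .option("mergeSchema", "true")
        .load("/tmp/events")                  // hypothetical path

      // Equivalent description through CREATE TABLE ... USING DDL; the catalog entry
      // is later resolved back to a concrete relation (see FindDataSourceTable below).
      spark.sql("CREATE TABLE events USING parquet OPTIONS (path '/tmp/events')")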

  5. case class DataSourceAnalysis(conf: SQLConf) extends Rule[LogicalPlan] with CastSupport with Product with Serializable

    Replaces generic operations with specific variants that are designed to work with Spark SQL Data Sources.

    Note that this rule must be run after PreprocessTableCreation and PreprocessTableInsertion.

  6. case class DataSourceStrategy(conf: SQLConf) extends Strategy with Logging with CastSupport with Product with Serializable

    A Strategy for planning scans over data sources defined using the sources API.

  7. class FailureSafeParser[IN] extends AnyRef

  8. trait FileFormat extends AnyRef

    Used to read and write data stored in files to/from the InternalRow format.
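
    As a rough illustration, a skeleton implementation is sketched below. The two method signatures shown are assumed from the Spark 2.x sources (inferSchema and prepareWrite being the abstract members); a real format would also override the reader-building methods to produce InternalRow iterators.

      import org.apache.hadoop.fs.FileStatus
      import org.apache.hadoop.mapreduce.Job
      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.execution.datasources.{FileFormat, OutputWriterFactory}
      import org.apache.spark.sql.types.{StringType, StructField, StructType}

      // Hypothetical format that exposes every file as a single string column.
      class SingleColumnTextFormat extends FileFormat {

        // Infer (or, here, hard-code) the schema from a sample of the input files.
        override def inferSchema(
            sparkSession: SparkSession,
            options: Map[String, String],
            files: Seq[FileStatus]): Option[StructType] =
          Some(StructType(StructField("value", StringType) :: Nil))

        // Configure the Hadoop write job and return a factory that opens one
        // OutputWriter per output file; left unimplemented in this sketch.
        override def prepareWrite(
            sparkSession: SparkSession,
            job: Job,
            options: Map[String, String],
            dataSchema: StructType): OutputWriterFactory =
          throw new UnsupportedOperationException("write path not implemented in this sketch")
      }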

  9. trait FileIndex extends AnyRef

    An interface for objects capable of enumerating the root paths of a relation as well as the partitions of a relation subject to some pruning expressions.

  10. case class FilePartition(index: Int, files: Seq[PartitionedFile]) extends Partition with Product with Serializable

    A collection of file blocks that should be read as a single task (possibly from multiple partitioned directories).

  11. class FileScanRDD extends RDD[InternalRow]

    An RDD that scans a list of file partitions.

  12. abstract class FileStatusCache extends AnyRef

    A cache of the leaf files of partition directories. We cache these files in order to speed up iterated queries over the same set of partitions. Otherwise, each query would have to hit remote storage in order to gather file statistics for physical planning.

    Each resolved catalog table has its own FileStatusCache. When the backing relation for the table is refreshed via refreshTable() or refreshByPath(), this cache will be invalidated.

  13. class FindDataSourceTable extends Rule[LogicalPlan]

    Replaces UnresolvedCatalogRelation with concrete relation logical plans.

    TODO: we should remove the special handling for Hive tables after completely making Hive a data source.

  14. class HadoopFileLinesReader extends Iterator[Text] with Closeable

    An adaptor from a PartitionedFile to an Iterator of Text over all of the lines in that file.

  15. case class HadoopFsRelation(location: FileIndex, partitionSchema: StructType, dataSchema: StructType, bucketSpec: Option[BucketSpec], fileFormat: FileFormat, options: Map[String, String])(sparkSession: SparkSession) extends BaseRelation with FileRelation with Product with Serializable

    Acts as a container for all of the metadata required to read from a datasource. All discovery, resolution and merging logic for schemas and partitions has been removed.

    location

    A FileIndex that can enumerate the locations of all the files that comprise this relation.

    partitionSchema

    The schema of the columns (if any) that are used to partition the relation

    dataSchema

    The schema of any remaining columns. Note that if any partition columns are present in the actual data files as well, they are preserved.

    bucketSpec

    Describes the bucketing (hash-partitioning of the files by some column values).

    fileFormat

    A file format that can be used to read and write the data in files.

    options

    Configuration used when reading / writing data.
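
    Putting the parameters above together, a hedged construction sketch; spark and fileIndex are assumed to already be in scope, and ParquetFileFormat is used only as one example of a concrete FileFormat.

      import org.apache.spark.sql.execution.datasources.HadoopFsRelation
      import org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat
      import org.apache.spark.sql.types.{DoubleType, LongType, StringType, StructField, StructType}

      // Assumed in scope: spark: SparkSession, fileIndex: FileIndex
      val relation = HadoopFsRelation(
        location = fileIndex,                               // enumerates the data files
        partitionSchema = StructType(StructField("dt", StringType) :: Nil),
        dataSchema = StructType(
          StructField("id", LongType) :: StructField("value", DoubleType) :: Nil),
        bucketSpec = None,                                  // no bucketing
        fileFormat = new ParquetFileFormat(),
        options = Map.empty)(spark)                         // curried SparkSession argument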

  16. class InMemoryFileIndex extends PartitioningAwareFileIndex

    A FileIndex that generates the list of files to process by recursively listing all the files present in paths.

  17. case class InsertIntoDataSourceCommand(logicalRelation: LogicalRelation, query: LogicalPlan, overwrite: Boolean) extends LeafNode with RunnableCommand with Product with Serializable

    Inserts the results of query into a relation that extends InsertableRelation.

  18. case class InsertIntoHadoopFsRelationCommand(outputPath: Path, staticPartitions: TablePartitionSpec, ifPartitionNotExists: Boolean, partitionColumns: Seq[Attribute], bucketSpec: Option[BucketSpec], fileFormat: FileFormat, options: Map[String, String], query: LogicalPlan, mode: SaveMode, catalogTable: Option[CatalogTable], fileIndex: Option[FileIndex]) extends LeafNode with RunnableCommand with Product with Serializable

    A command for writing data to a HadoopFsRelation. Supports both overwriting and appending. Writing to dynamic partitions is also supported.

    staticPartitions

    partial partitioning spec for write. This defines the scope of partition overwrites: when the spec is empty, all partitions are overwritten. When it covers a prefix of the partition keys, only partitions matching the prefix are overwritten.

    ifPartitionNotExists

    If true, only write if the partition does not exist. Only valid for static partitions.

  19. case class LogicalRelation(relation: BaseRelation, output: Seq[AttributeReference], catalogTable: Option[CatalogTable]) extends LeafNode with MultiInstanceRelation with Product with Serializable

    Used to link a BaseRelation into a logical query plan.

  20. abstract class OutputWriter extends AnyRef

    OutputWriter is used together with HadoopFsRelation for persisting rows to the underlying file system. Subclasses of OutputWriter must provide a zero-argument constructor. An OutputWriter instance is created and initialized when a new output file is opened on executor side. This instance is used to persist rows to this single output file.

  21. abstract class OutputWriterFactory extends Serializable

    A factory that produces OutputWriters. A new OutputWriterFactory is created on driver side for each write job issued when writing to a HadoopFsRelation, and then gets serialized to executor side to create actual OutputWriters on the fly.
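
    A minimal no-op sketch of the writer/factory pair is shown below; the abstract method signatures used (getFileExtension, newInstance, write, close) are assumed from the Spark 2.x sources rather than stated on this page.

      import org.apache.hadoop.mapreduce.TaskAttemptContext
      import org.apache.spark.sql.catalyst.InternalRow
      import org.apache.spark.sql.execution.datasources.{OutputWriter, OutputWriterFactory}
      import org.apache.spark.sql.types.StructType

      // Hypothetical factory: created once on the driver, then serialized to executors.
      class NoOpOutputWriterFactory extends OutputWriterFactory {
        override def getFileExtension(context: TaskAttemptContext): String = ".noop"

        override def newInstance(
            path: String,
            dataSchema: StructType,
            context: TaskAttemptContext): OutputWriter = new NoOpOutputWriter
      }

      // Hypothetical writer with a zero-argument constructor, one instance per output file.
      class NoOpOutputWriter extends OutputWriter {
        override def write(row: InternalRow): Unit = ()   // discard every row
        override def close(): Unit = ()                   // nothing to flush
      }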

  22. case class PartitionDirectory(values: InternalRow, files: Seq[FileStatus]) extends Product with Serializable

    A collection of data files from a partitioned relation, along with the partition values in the form of an InternalRow.

  23. case class PartitionPath(values: InternalRow, path: Path) extends Product with Serializable

    Holds a directory in a partitioned collection of files as well as the partition values in the form of a Row. Before scanning, the files at path need to be enumerated.

  24. case class PartitionSpec(partitionColumns: StructType, partitions: Seq[PartitionPath]) extends Product with Serializable

  25. case class PartitionedFile(partitionValues: InternalRow, filePath: String, start: Long, length: Long, locations: Array[String] = Array.empty) extends Product with Serializable

    A part (i.e. "block") of a single file that should be read, along with partition column values that need to be prepended to each row.

    partitionValues

    value of partition columns to be prepended to each row.

    filePath

    path of the file to read

    start

    the beginning offset (in bytes) of the block.

    length

    number of bytes to read.

    locations

    locality information (list of nodes that have the data).
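
    The constructor parameters documented above are enough to sketch how a read task is assembled; the path, sizes and host name below are hypothetical.

      import org.apache.spark.sql.catalyst.InternalRow
      import org.apache.spark.sql.execution.datasources.{FilePartition, PartitionedFile}

      // One 128 MB block of a single file, with no partition-column values to prepend.
      val block = PartitionedFile(
        partitionValues = InternalRow.empty,
        filePath = "/tmp/events/part-00000.parquet",   // hypothetical path
        start = 0L,                                    // byte offset where the block begins
        length = 128L * 1024 * 1024,                   // number of bytes to read
        locations = Array("worker-1"))                 // hosts holding the block locally

      // A FilePartition bundles one or more such blocks into a single scan task.
      val task = FilePartition(index = 0, files = Seq(block))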

  26. abstract class PartitioningAwareFileIndex extends FileIndex with Logging

    An abstract class that represents FileIndexes that are aware of partitioned tables. It provides the necessary methods to parse partition data based on a set of files.

  27. case class PreprocessTableCreation(sparkSession: SparkSession) extends Rule[LogicalPlan] with Product with Serializable

    Preprocess CreateTable, to do some normalization and checking.

  28. case class PreprocessTableInsertion(conf: SQLConf) extends Rule[LogicalPlan] with CastSupport with Product with Serializable

    Preprocess the InsertIntoTable plan. Throws an exception if the number of columns does not match, or if the specified partition columns differ from the existing partition columns in the target table. It also performs data type casting and field renaming, to make sure that the columns to be inserted have the correct data types and that the fields have the correct names.

  29. class RecordReaderIterator[T] extends Iterator[T] with Closeable

    An adaptor from a Hadoop RecordReader to an Iterator over the values returned.

    Note that this returns Objects instead of InternalRow because we rely on erasure to pass column batches by pretending they are rows.

  30. case class RefreshResource(path: String) extends LeafNode with RunnableCommand with Product with Serializable

  31. case class RefreshTable(tableIdent: TableIdentifier) extends LeafNode with RunnableCommand with Product with Serializable

  32. class ResolveSQLOnFile extends Rule[LogicalPlan]

    Tries to replace UnresolvedRelations if the plan is for a direct query on files.

  33. class SQLHadoopMapReduceCommitProtocol extends HadoopMapReduceCommitProtocol with Serializable with Logging

    A variant of HadoopMapReduceCommitProtocol that allows specifying the actual Hadoop output committer using an option specified in SQLConf.

  34. case class SaveIntoDataSourceCommand(query: LogicalPlan, provider: String, partitionColumns: Seq[String], options: Map[String, String], mode: SaveMode) extends LeafNode with RunnableCommand with Product with Serializable

    Saves the results of query into a data source.

    Note that this command is different from InsertIntoDataSourceCommand. This command will call CreatableRelationProvider.createRelation to write out the data, while InsertIntoDataSourceCommand calls InsertableRelation.insert. Ideally these 2 data source interfaces should do the same thing, but as we've already published these 2 interfaces and the implementations may have different logic, we have to keep these 2 different commands.

  35. abstract class TextBasedFileFormat extends FileFormat

    The base class for file formats that are based on text files.

Value Members

  1. object BucketingUtils

  2. object CodecStreams

  3. object DataSource extends Logging with Serializable

  4. object DataSourceStrategy extends Serializable

  5. object FileFormatWriter extends Logging

    A helper object for writing FileFormat data out to a location.

  6. object FileSourceStrategy extends Strategy with Logging

    A strategy for planning scans over collections of files that might be partitioned or bucketed by user specified columns.

    At a high level, planning occurs in several phases:

    • Split filters by when they need to be evaluated.
    • Prune the schema of the data requested based on any projections present. Today this pruning is only done on top level columns, but formats should support pruning of nested columns as well.
    • Construct a reader function by passing filters and the schema into the FileFormat.
    • Using the partition pruning predicates, enumerate the list of files that should be read.
    • Split the files into tasks and construct a FileScanRDD.
    • Add any projection or filters that must be evaluated after the scan.

    Files are assigned into tasks using the following algorithm (a simplified sketch of this packing loop follows the list below):

    • If the table is bucketed, group files by bucket id into the correct number of partitions.
    • If the table is not bucketed or bucketing is turned off:
      • If any file is larger than the threshold, split it into pieces based on that threshold
      • Sort the files by decreasing file size.
      • Assign the ordered files to buckets using the following algorithm. If the current partition is under the threshold with the addition of the next file, add it. If not, open a new bucket and add it. Proceed to the next file.
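
    The packing loop described above can be sketched in isolation as follows. This is a simplified, hypothetical restatement of the algorithm, not the actual FileSourceStrategy code; maxSplitBytes stands in for the threshold Spark derives from its configuration.

      // Simplified sketch: split oversized files, sort slices by decreasing size,
      // then greedily pack them into tasks up to the threshold.
      case class FileSlice(path: String, sizeInBytes: Long)

      def packIntoTasks(files: Seq[FileSlice], maxSplitBytes: Long): Seq[Seq[FileSlice]] = {
        // 1. Split any file larger than the threshold into threshold-sized pieces.
        val slices = files.flatMap { f =>
          (0L until f.sizeInBytes by maxSplitBytes).map { offset =>
            FileSlice(f.path, math.min(maxSplitBytes, f.sizeInBytes - offset))
          }
        }
        // 2. Sort by decreasing size, then keep adding slices to the current task
        //    while it stays under the threshold; otherwise open a new task.
        val tasks = scala.collection.mutable.ArrayBuffer.empty[Vector[FileSlice]]
        var current = Vector.empty[FileSlice]
        var currentBytes = 0L
        for (slice <- slices.sortBy(-_.sizeInBytes)) {
          if (current.nonEmpty && currentBytes + slice.sizeInBytes > maxSplitBytes) {
            tasks += current
            current = Vector.empty
            currentBytes = 0L
          }
          current :+= slice
          currentBytes += slice.sizeInBytes
        }
        if (current.nonEmpty) tasks += current
        tasks.toSeq
      }

    For example, packIntoTasks(Seq(FileSlice("a", 300), FileSlice("b", 50)), maxSplitBytes = 128) splits "a" into slices of 128, 128 and 44 bytes and packs them with "b" into three tasks: (128), (128) and (50, 44).
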
  7. object FileStatusCache

    Use FileStatusCache.getOrCreate() to construct a globally shared file status cache.
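
    A minimal usage line, assuming an active SparkSession named spark; taking the session as the argument is an assumption based on the Spark 2.x sources.

      import org.apache.spark.sql.execution.datasources.FileStatusCache

      // Per-session shared cache of leaf-file listings.
      val cache: FileStatusCache = FileStatusCache.getOrCreate(spark)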

  8. object HiveOnlyCheck extends (LogicalPlan) ⇒ Unit

    A rule to check whether the functions are supported only when Hive support is enabled.

  9. object InMemoryFileIndex extends Logging

  10. object LogicalRelation extends Serializable

  11. object NoopCache extends FileStatusCache

    A non-caching implementation used when partition file status caching is disabled.

  12. object PartitionPath extends Serializable

  13. object PartitionSpec extends Serializable

  14. object PartitioningAwareFileIndex

  15. object PartitioningUtils

  16. object PreWriteCheck extends (LogicalPlan) ⇒ Unit

    A rule to do various checks before inserting into or writing to a data source table.

  17. package csv

  18. package jdbc

  19. package json

  20. package parquet

  21. package text
