org.apache.spark.sql

execution

package execution

:: DeveloperApi :: An execution engine for relational query plans that runs on top of Spark and returns RDDs.

Note that the operators in this package are created automatically by a query planner using a SQLContext and are not intended to be used directly by end users of Spark SQL. They are documented here in order to make it easier for others to understand the performance characteristics of query plans that are generated by Spark SQL.

Linear Supertypes
AnyRef, Any

Type Members

  1. case class Aggregate(partial: Boolean, groupingExpressions: Seq[Expression], aggregateExpressions: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Groups input data by groupingExpressions and computes the aggregateExpressions for each group.

    partial

    if true, aggregation is done partially on local data, skipping the shuffle that would otherwise ensure all rows with equal groupingExpressions values are co-located.

    groupingExpressions

    expressions that are evaluated to determine grouping.

    aggregateExpressions

    expressions that are computed for each group.

    child

    the input data source.

    Annotations
    @DeveloperApi()
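
    For example, a grouped aggregation query is typically planned as a pair of these operators: a partial Aggregate computed per partition, followed by a final Aggregate after an Exchange has co-located rows with equal grouping keys. A usage sketch, assuming a SQLContext bound to sqlContext and a registered table src with columns key and value:

    // Usually compiled to: Aggregate(partial = true) -> Exchange -> Aggregate(partial = false)
    sqlContext.sql("SELECT key, SUM(value) FROM src GROUP BY key")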
  2. case class AggregateEvaluation(schema: Seq[Attribute], initialValues: Seq[Expression], update: Seq[Expression], result: Expression) extends Product with Serializable

  3. case class BatchPythonEvaluation(udf: PythonUDF, output: Seq[Attribute], child: SparkPlan) extends SparkPlan with Product with Serializable

    :: DeveloperApi :: Uses PythonRDD to evaluate a PythonUDF, one partition of tuples at a time. The input data is cached and zipped with the result of the udf evaluation.

    Annotations
    @DeveloperApi()
  4. case class CacheTableCommand(tableName: String, plan: Option[LogicalPlan], isLazy: Boolean) extends Command with RunnableCommand with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
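
    A usage sketch, assuming a SQLContext bound to sqlContext and a registered table src; the LAZY and AS SELECT variants populate the isLazy and plan fields respectively:

    sqlContext.sql("CACHE TABLE src")        // eager caching of an existing table
    sqlContext.sql("CACHE LAZY TABLE src")   // isLazy = true: cache on first use
    sqlContext.sql("CACHE TABLE evens AS SELECT * FROM src WHERE key % 2 = 0")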
  5. case class DescribeCommand(child: SparkPlan, output: Seq[Attribute], isExtended: Boolean) extends Command with RunnableCommand with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  6. case class Distinct(partial: Boolean, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Computes the set of distinct input rows using a HashSet.

    partial

    when true the distinct operation is performed partially, per partition, without shuffling the data.

    child

    the input query plan.

    Annotations
    @DeveloperApi()
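
    A usage sketch, assuming a SQLContext bound to sqlContext and a registered table src; a DISTINCT query is typically planned as a partial Distinct, an Exchange, and then a final Distinct:

    sqlContext.sql("SELECT DISTINCT key FROM src")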
  7. case class EvaluatePython(udf: PythonUDF, child: LogicalPlan, resultAttribute: AttributeReference) extends catalyst.plans.logical.UnaryNode with Product with Serializable

    :: DeveloperApi :: Evaluates a PythonUDF, appending the result to the end of the input tuple.

    Annotations
    @DeveloperApi()
  8. case class Except(left: SparkPlan, right: SparkPlan) extends SparkPlan with BinaryNode with Product with Serializable

    :: DeveloperApi :: Returns a table with the elements from left that are not in right, using Spark's built-in subtract function.

    Annotations
    @DeveloperApi()
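
    A usage sketch, assuming a SQLContext bound to sqlContext, registered tables t1 and t2, and a SQL dialect that supports EXCEPT:

    // Rows of t1 that do not appear in t2.
    sqlContext.sql("SELECT key FROM t1 EXCEPT SELECT key FROM t2")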
  9. case class Exchange(newPartitioning: Partitioning, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  10. case class ExecutedCommand(cmd: RunnableCommand) extends SparkPlan with Product with Serializable

    A physical operator that executes the run method of a RunnableCommand and saves the result to prevent multiple executions.

  11. case class Expand(projections: Seq[GroupExpression], output: Seq[Attribute], child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    Applies all of the GroupExpressions to every input row, hence we will get multiple output rows for each input row.

    projections

    The groups of expressions; all of the group expressions should output the same schema as specified by the output parameter.

    output

    The output Schema

    child

    Child operator

    Annotations
    @DeveloperApi()
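
    Expand typically shows up in plans for multi-grouping aggregations, where each input row must be replicated once per grouping set. A usage sketch, assuming a HiveContext bound to hiveContext (GROUPING SETS is a HiveQL construct in this version) and a table src with columns key and value:

    // Each input row is expanded into one output row per grouping set.
    hiveContext.sql(
      "SELECT key, value, COUNT(*) FROM src " +
      "GROUP BY key, value GROUPING SETS ((key, value), (key))")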
  12. case class ExplainCommand(logicalPlan: LogicalPlan, output: Seq[Attribute] = ..., extended: Boolean = false) extends Command with RunnableCommand with Product with Serializable

    :: DeveloperApi :: An explain command for users to see how a command will be executed.

    Note that this command takes in a logical plan and runs the optimizer on it, but does NOT actually execute the plan.

    Annotations
    @DeveloperApi()
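
    A usage sketch, assuming a SQLContext bound to sqlContext and a registered table src:

    sqlContext.sql("EXPLAIN SELECT key FROM src").collect().foreach(println)
    sqlContext.sql("EXPLAIN EXTENDED SELECT key FROM src")   // extended = true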
  13. case class ExternalSort(sortOrder: Seq[SortOrder], global: Boolean, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Performs a sort, spilling to disk as needed.

    global

    when true performs a global sort of all partitions by shuffling the data first if necessary.

    Annotations
    @DeveloperApi()
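
    A configuration sketch; this assumes the spark.sql.externalSort flag, which in this version selects ExternalSort over the on-heap Sort operator:

    sqlContext.setConf("spark.sql.externalSort", "true")
    sqlContext.sql("SELECT * FROM src ORDER BY key")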
  14. case class Filter(condition: Expression, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  15. case class Generate(generator: Generator, join: Boolean, outer: Boolean, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Applies a Generator to a stream of input rows, combining the output of each into a new stream of rows. This operation is similar to a flatMap in functional programming with one important additional feature, which allows the input rows to be joined with their output.

    join

    when true, each output row is implicitly joined with the input tuple that produced it.

    outer

    when true, each input row will be output at least once, even if the output of the given generator is empty. outer has no effect when join is false.

    Annotations
    @DeveloperApi()
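
    A usage sketch, assuming a HiveContext bound to hiveContext (LATERAL VIEW is HiveQL syntax) and a hypothetical table people with an array-typed column scores:

    // explode produces one output row per array element; with join = true each
    // output row is combined with the input row that produced it.
    hiveContext.sql(
      "SELECT name, score FROM people LATERAL VIEW explode(scores) t AS score")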
  16. case class GeneratedAggregate(partial: Boolean, groupingExpressions: Seq[Expression], aggregateExpressions: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Alternate version of aggregation that leverages projection and thus code generation. Aggregations are converted into a set of projections from an aggregation buffer tuple back onto itself. Currently only simple aggregations like SUM, COUNT, or AVERAGE are supported.

    partial

    if true, aggregation is done partially on local data, skipping the shuffle that would otherwise ensure all rows with equal groupingExpressions values are co-located.

    groupingExpressions

    expressions that are evaluated to determine grouping.

    aggregateExpressions

    expressions that are computed for each group.

    child

    the input data source.

    Annotations
    @DeveloperApi()
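
    A configuration sketch; this assumes the spark.sql.codegen flag, which in this version enables code-generated expression evaluation and thus this operator for the supported aggregates:

    sqlContext.setConf("spark.sql.codegen", "true")
    sqlContext.sql("SELECT key, SUM(value) FROM src GROUP BY key")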
  17. case class Intersect(left: SparkPlan, right: SparkPlan) extends SparkPlan with BinaryNode with Product with Serializable

    :: DeveloperApi :: Returns the rows in left that also appear in right, using Spark's built-in intersection function.

    Annotations
    @DeveloperApi()
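
    A usage sketch, assuming the same setup as for Except above and a dialect that supports INTERSECT:

    // Rows of t1 that also appear in t2.
    sqlContext.sql("SELECT key FROM t1 INTERSECT SELECT key FROM t2")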
  18. case class Limit(limit: Int, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Take the first limit elements. Note that the implementation is different depending on whether this is a terminal operator or not. If it is terminal and is invoked using executeCollect, this operator uses something similar to Spark's take method on the Spark driver. If it is not terminal or is invoked using execute, we first take the limit on each partition, and then repartition all the data to a single partition to compute the global limit.

    Annotations
    @DeveloperApi()
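
    A usage sketch, assuming a SQLContext bound to sqlContext and a registered table src, illustrating the two code paths described above:

    // Terminal limit: collect() invokes executeCollect(), which takes rows
    // directly on the driver, similar to RDD.take.
    sqlContext.sql("SELECT * FROM src LIMIT 10").collect()
    // Non-terminal limit: each partition is limited first, then the data is
    // repartitioned to a single partition for the global limit.
    sqlContext.sql("SELECT COUNT(*) FROM (SELECT * FROM src LIMIT 10) t")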
  19. case class LocalTableScan(output: Seq[Attribute], rows: Seq[Row]) extends SparkPlan with LeafNode with Product with Serializable

    Physical plan node for scanning data from a local collection.

  20. case class LogicalLocalTable(output: Seq[Attribute], rows: Seq[Row])(sqlContext: SQLContext) extends LogicalPlan with MultiInstanceRelation with Product with Serializable

    Logical plan node for scanning data from a local collection.

  21. case class LogicalRDD(output: Seq[Attribute], rdd: RDD[Row])(sqlContext: SQLContext) extends LogicalPlan with MultiInstanceRelation with Product with Serializable

    Logical plan node for scanning data from an RDD.

  22. case class OutputFaker(output: Seq[Attribute], child: SparkPlan) extends SparkPlan with Product with Serializable

    :: DeveloperApi :: A plan node that does nothing but lie about the output of its child. Used to splice a (hopefully structurally equivalent) tree from a different optimization sequence into an already resolved tree.

    Annotations
    @DeveloperApi()
  23. case class PhysicalRDD(output: Seq[Attribute], rdd: RDD[Row]) extends SparkPlan with LeafNode with Product with Serializable

    Physical plan node for scanning data from an RDD.
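
    A sketch of how such a node arises, assuming a SparkContext bound to sc and a SQLContext bound to sqlContext (Spark 1.3-style createDataFrame):

    case class Record(key: Int, value: String)
    val rdd = sc.parallelize(Seq(Record(1, "a"), Record(2, "b")))
    // Scans of this DataFrame are planned as a PhysicalRDD leaf node.
    val df = sqlContext.createDataFrame(rdd)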

  24. case class Project(projectList: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  25. class QueryExecutionException extends Exception

  26. trait RunnableCommand extends Command

    A logical command that is executed for its side-effects. RunnableCommands are wrapped in ExecutedCommand during execution.

  27. case class Sample(fraction: Double, withReplacement: Boolean, seed: Long, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  28. case class SetCommand(kv: Option[(String, Option[String])], output: Seq[Attribute]) extends Command with RunnableCommand with Logging with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
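
    A usage sketch; SET follows the usual key=value syntax:

    sqlContext.sql("SET spark.sql.shuffle.partitions=10")   // kv = Some((key, Some(value)))
    sqlContext.sql("SET spark.sql.shuffle.partitions")      // query a single property
    sqlContext.sql("SET")                                   // list all properties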
  29. case class ShowTablesCommand(databaseName: Option[String]) extends Command with RunnableCommand with Product with Serializable

    :: DeveloperApi :: A command for users to get tables in the given database. If a databaseName is not given, the current database will be used. The syntax of using this command in SQL is:

    SHOW TABLES [IN databaseName]

    Annotations
    @DeveloperApi()
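
    A usage sketch, assuming a SQLContext bound to sqlContext; mydb is a hypothetical database name:

    sqlContext.sql("SHOW TABLES").collect()   // tables in the current database
    sqlContext.sql("SHOW TABLES IN mydb")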
  30. case class Sort(sortOrder: Seq[SortOrder], global: Boolean, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Performs a sort on-heap.

    global

    when true performs a global sort of all partitions by shuffling the data first if necessary.

    Annotations
    @DeveloperApi()
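
    A usage sketch, assuming a SQLContext bound to sqlContext and a registered table src:

    // ORDER BY produces a global sort (global = true), shuffling the data
    // into range partitions before sorting each partition.
    sqlContext.sql("SELECT * FROM src ORDER BY key DESC")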
  31. abstract class SparkPlan extends QueryPlan[SparkPlan] with Logging with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  32. case class TakeOrdered(limit: Int, sortOrder: Seq[SortOrder], child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Take the first limit elements as defined by the sortOrder. This is logically equivalent to having a Limit operator after a Sort operator. This could have been named TopK, but Spark's top operator does the opposite in ordering so we name it TakeOrdered to avoid confusion.

    Annotations
    @DeveloperApi()
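
    A usage sketch, assuming a SQLContext bound to sqlContext and a registered table src; an ORDER BY combined with LIMIT is typically planned as a single TakeOrdered rather than a Sort followed by a Limit:

    sqlContext.sql("SELECT * FROM src ORDER BY key LIMIT 5")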
  33. case class UncacheTableCommand(tableName: String) extends Command with RunnableCommand with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  34. case class Union(children: Seq[SparkPlan]) extends SparkPlan with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()

Value Members

  1. object ClearCacheCommand extends Command with RunnableCommand with Product with Serializable

    :: DeveloperApi :: Clear all cached data from the in-memory cache.

    Annotations
    @DeveloperApi()
  2. object EvaluatePython extends Serializable

  3. object RDDConversions

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  4. object SparkPlan extends Serializable

  5. package debug

    :: DeveloperApi :: Contains methods for debugging query execution.

    Usage:

    import org.apache.spark.sql.execution.debug._
    sql("SELECT key FROM src").debug()
    dataFrame.typeCheck()
  6. package joins

    :: DeveloperApi :: Physical execution operators for join operations.
