Package

org.apache.spark.sql.execution.columnar

impl

package impl

Type Members

  1. abstract class BaseColumnFormatRelation extends JDBCAppendableRelation with PartitionedDataSourceScan with RowInsertableRelation with MutableRelation

    This class acts as a DataSource provider for column format tables provided by Snappy. It uses GemFireXD as the actual datastore to physically locate the tables. Column tables can be used for storing data in a columnar, compressed format. An example usage is given below.

    val data = Seq(Data(1, 2, 3), Data(7, 8, 9), Data(9, 2, 3),
      Data(4, 2, 3), Data(5, 6, 7))
    val dataDF = snc.createDataset(data)(Encoders.product)
    snc.createTable(tableName, "column", dataDF.schema, props)
    dataDF.write.insertInto(tableName)

    This provider scans the underlying tables in parallel and is aware of the data partitioning. It does not introduce a shuffle if a simple table query is fired. One can insert single or multiple rows into this table, as well as do a bulk insert via a Spark DataFrame. A bulk insert example is shown above; a single-row insert is sketched below.
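
    A minimal sketch of a single-row insert using plain SQL, assuming the snc session, tableName and the three-column table from the example above (the inserted values are hypothetical):

    // hypothetical values for the three integer columns of Data
    snc.sql(s"INSERT INTO $tableName VALUES (11, 12, 13)")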

  2. abstract class ClusteredColumnIterator extends CloseableIterator[RegionEntry]

    Base trait for iterators that are capable of reading and returning the entire set of columns of a column batch. These can be local region iterators or those fetching entries from remote nodes.

  3. final class ColumnFormatEncoder extends RowEncoder with Logging

    A RowEncoder implementation for ColumnFormatValue and child classes.

  4. final class ColumnFormatIterator extends ClusteredColumnIterator with DiskRegionIterator

    A customized iterator for column store tables that projects out the required columns and returns first those column batches that have all their columns in memory. Further, this will make use of DiskBlockSortManager to allow concurrent partition iterators to do cross-partition disk block sorting and fault-in for best disk read performance (SNAP-2012).

  5. class ColumnFormatRelation extends BaseColumnFormatRelation with BulkPutRelation

  6. final class ColumnarStorePartitionedRDD extends RDDKryo[Any] with KryoSerializable

  7. case class CompactionResult(batchKey: ColumnFormatKey, bucketId: Int, success: Boolean) extends Product with Serializable

    Result of compaction of a column batch added to transaction pre-commit results.

    NOTE: if the layout of this class or ColumnFormatKey changes, then update the regex pattern in SnapshotConnectionListener.parseCompactionResult that parses the toString() of this class.

  8. final class DefaultSource extends ExternalSchemaRelationProvider with SchemaRelationProvider with CreatableRelationProvider with DataSourceRegister with Logging

    Column tables don't support any extensions over regular Spark schema syntax, but the support for ExternalSchemaRelationProvider has been added as a workaround to allow for specifying schema in a CREATE TABLE AS SELECT statement.

    Normally Spark does not allow specifying schema in a CTAS statement for DataSources (except its special "hive" provider), so the schema is passed here as a string which is parsed locally in the CreatableRelationProvider implementation.
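
    A minimal sketch of such a CTAS with an explicit schema (the table and column names are illustrative, not part of this API):

    // hypothetical tables; the explicit (id, name) schema in the CTAS
    // is the part this provider parses as a string
    snc.sql("CREATE TABLE target_table (id INT, name STRING) " +
      "USING column AS SELECT id, name FROM source_table")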

  9. class IndexColumnFormatRelation extends BaseColumnFormatRelation

    Currently this is the same as ColumnFormatRelation but has been kept as a separate class to allow adding any index-specific functionality in the future.

  10. class JDBCSourceAsColumnarStore extends ExternalStore with KryoSerializable

    Column Store implementation for GemFireXD.

  11. final class LongObjectHashMapWithState[V] extends Long2ObjectOpenHashMap[V]

  12. final class RemoteEntriesIterator extends ClusteredColumnIterator with Logging

    A ClusteredColumnIterator that fetches entries from a remote bucket.

    TODO: PERF: instead of fetching using getAll, this should instead open a named ColumnFormatIterator on the remote node hosting the bucket, then step through the iterator to fetch a batch (or batches) at a time using Function/GfxdFunctionMessage invocations. As of now, the getAll invocation does not honour ordered disk reads, proper fault-in etc.

  13. final class SingleColumnBatchIterator extends ColumnBatchIterator

    Provides a ColumnBatchIterator over a single column batch for ColumnTableScan.

  14. final class SmartConnectorColumnRDD extends RDDKryo[Any] with KryoSerializable

  15. class SmartConnectorRowRDD extends RowFormatScanRDD

  16. trait StatsPredicate extends AnyRef

    The type of the generated class used by the column stats check for a column batch.

  17. trait StoreCallback extends Serializable

Value Members

  1. object ColumnCompactor extends Logging

    Compacts column batches, if required, and inserts new compacted column batches, or, if the compacted data is too small, pushes it into the row delta buffer.

  2. object ColumnFormatRelation extends Logging with StoreCallback

  3. object StatsFilter extends Predicate[AnyRef] with Serializable

  4. object StoreCallbacksImpl extends StoreCallbacks with Logging with Serializable
