org.apache.spark.sql.execution.columnar.impl
Used by DependentRelations to register with the parent.
Base table of this relation.
Table definition:

create table columnTable (
  id varchar(36) not null,
  partitionId integer,
  numRows integer not null,
  data blob
)

For a table with n columns, there will be n+1 region entries: a base entry and one entry per column. The data column of the base entry will contain the stats. The id of the base entry would be the uuid, while for column entries it would be uuid_colName.
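The key scheme above can be sketched in plain Scala. This is an illustrative model only (the object and method names are invented, not the actual SnappyData implementation): a batch with n columns yields n+1 region entry keys, the base entry keyed by the uuid and each column entry keyed by uuid_colName.

```scala
// Illustrative sketch of the region-entry key scheme described above;
// RegionEntryKeys and entryKeys are hypothetical names.
object RegionEntryKeys {
  // One base key (the uuid) plus one key per column (uuid_colName).
  def entryKeys(uuid: String, columnNames: Seq[String]): Seq[String] =
    uuid +: columnNames.map(col => s"${uuid}_$col")
}
```

For a batch with uuid "b1" and two columns c1 and c2, this yields the three keys "b1", "b1_c1", and "b1_c2".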
Create an index on a table.
Index Identifier which goes in the catalog
Table identifier on which the index is created.
Columns on which the index is to be created, along with the direction of sorting. The direction can be specified as None.
Options for indexes, e.g. for a column table index: ("COLOCATE_WITH" -> "CUSTOMER"); for a row table index: ("INDEX_TYPE" -> "GLOBAL HASH") or ("INDEX_TYPE" -> "UNIQUE").
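The createIndex contract described above can be sketched as follows. The trait name, signature, and types here are simplified assumptions (the real SnappyData API uses catalog identifier and expression types), and the in-memory implementation only records the index metadata.

```scala
// Hedged sketch of the createIndex contract; names and types are assumptions.
trait IndexableRelation {
  def createIndex(indexIdent: String,
                  tableIdent: String,
                  indexColumns: Seq[(String, Option[Boolean])], // column -> ascending? (None = unspecified)
                  options: Map[String, String]): Unit
}

// Toy implementation that just records the index definition.
class InMemoryRelation extends IndexableRelation {
  var indexes = Map.empty[String, (String, Seq[(String, Option[Boolean])], Map[String, String])]
  def createIndex(indexIdent: String, tableIdent: String,
                  indexColumns: Seq[(String, Option[Boolean])],
                  options: Map[String, String]): Unit =
    indexes += indexIdent -> ((tableIdent, indexColumns, options))
}
```

A column table index colocated with CUSTOMER would then be created as, e.g., `rel.createIndex("idx1", "ORDERS", Seq("C_ID" -> Some(true)), Map("COLOCATE_WITH" -> "CUSTOMER"))`.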
Destroy and clean up this relation. This may include, but is not limited to, dropping the external table that this relation represents.
Drops an index on this table.
Index identifier
Table identifier
Drop if exists
Execute a DML SQL statement and return the number of rows affected.
Get a Spark plan to delete rows from the relation. The result of the SparkPlan execution should be a count of the number of deleted rows.
Get the dependent child.
Get a Spark plan for insert. The result of the SparkPlan execution should be a count of the number of inserted rows.
Get the "key" columns for the table that need to be projected out by UPDATE and DELETE operations for affecting the selected rows.
Get key columns of the column table.
Get a Spark plan for puts. If a row is already present it gets updated, otherwise it gets inserted into the table represented by this relation. The result of the SparkPlan execution should be a count of the number of rows put.
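The put (upsert) semantics described above can be modeled with a simple keyed store. This is a minimal sketch under the assumption of a plain key/value map, not the actual column-store implementation; the returned count mirrors the SparkPlan result described above.

```scala
// Toy keyed store illustrating put-as-upsert; KeyedStore is a hypothetical name.
class KeyedStore[K, V] {
  private var rows = Map.empty[K, V]

  // Insert new keys, overwrite existing ones; return the count of rows put.
  def put(batch: Seq[(K, V)]): Long = {
    batch.foreach { case (k, v) => rows += k -> v }
    batch.size.toLong
  }

  def get(k: K): Option[V] = rows.get(k)
  def size: Int = rows.size
}
```

Putting a batch containing an existing key updates that row in place rather than creating a duplicate, which is why the store's size can be smaller than the sum of the put counts.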
Get a Spark plan to update rows in the relation. The result of the SparkPlan execution should be a count of the number of updated rows.
Insert a sequence of rows into the table represented by this relation.
the rows to be inserted
number of rows inserted
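The insert contract above (take a sequence of rows, return the number inserted) can be sketched as an append-only table. Rows are modeled here as Seq[Any] purely for illustration; the real relation works with Spark Row objects.

```scala
// Toy append-only table illustrating the insert contract; AppendOnlyTable
// is a hypothetical name, and Seq[Any] stands in for Spark's Row type.
class AppendOnlyTable {
  private var data = Vector.empty[Seq[Any]]

  // Append the rows and return the number of rows inserted.
  def insert(rows: Seq[Seq[Any]]): Int = {
    data ++= rows
    rows.size
  }

  def rowCount: Int = data.size
}
```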
Name of this relation in the catalog.
Get the partitioning columns for the table, if any.
Recover/re-create the dependent child relations. This callback is used to recreate the dependent relations when the ParentRelation is being created.
Used by DependentRelations to unregister with the parent.
Return true if the table was created by this relation. This is used to check whether to destroy the table in case of a failure while creating it.
Return true if the table already existed when the relation object was created.
Truncate the table represented by this relation.
Returns the list of Expressions that this data source may not be able to handle. By default, this function returns all filters, as it is always safe to double-evaluate an Expression.
If required, inject the key columns into the original relation.