Creates an ExasolRelation after saving an org.apache.spark.sql.DataFrame into an Exasol table.
A Spark org.apache.spark.sql.SQLContext context
One of Spark save modes, org.apache.spark.sql.SaveMode
The parameters provided as options; the table parameter is required when writing
A Spark org.apache.spark.sql.DataFrame to save as an Exasol table
An ExasolRelation relation
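As an illustration, a write through this relation provider could look like the following sketch. The `exasol` format short name and the connection options (`host`, `port`, `username`, `password`) are assumptions here and may differ between connector versions; only the `table` option is confirmed by the description above as required for writes.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().appName("exasol-write-example").getOrCreate()
val df = spark.range(10).toDF("ID")

df.write
  .mode(SaveMode.Append)                  // one of the Spark save modes
  .format("exasol")                       // assumed data source short name
  .option("host", "10.0.0.11")            // assumed connection options
  .option("port", "8563")
  .option("username", "sys")
  .option("password", "exasol")
  .option("table", "MY_SCHEMA.MY_TABLE")  // `table` is required for writes
  .save()
```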
Creates an ExasolRelation using the provided Spark org.apache.spark.sql.SQLContext, parameters and schema.
A Spark org.apache.spark.sql.SQLContext context
The parameters provided as options; the query parameter is required when reading
A user provided schema used to select columns for the relation
An ExasolRelation relation
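A read with a user-provided schema could be sketched as follows; supplying the schema restricts the relation to the listed columns and skips schema inference. The `exasol` format name and connection options are assumptions, as are the column names; only the `query` option is confirmed above as required for reads.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

val spark = SparkSession.builder().appName("exasol-read-example").getOrCreate()

// User-provided schema: only these columns are selected for the relation.
val userSchema = StructType(Seq(
  StructField("ID", LongType),
  StructField("NAME", StringType)
))

val df = spark.read
  .format("exasol")                                  // assumed short name
  .schema(userSchema)                                // skip schema inference
  .option("query", "SELECT * FROM MY_SCHEMA.MY_TAB") // required for reads
  .option("host", "10.0.0.11")                       // assumed options
  .option("port", "8563")
  .load()
```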
Creates an ExasolRelation using the provided Spark org.apache.spark.sql.SQLContext and parameters.
Since the schema is not provided, it is inferred by running an Exasol query with a LIMIT 1 clause.
A Spark org.apache.spark.sql.SQLContext context
The parameters provided as options; the query parameter is required when reading
An ExasolRelation relation
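Without a user-provided schema the call looks the same, minus the `.schema(...)` step; per the description above, the connector then infers the schema by wrapping the user query with a LIMIT 1 clause before the full read. The format name and connection options below are again assumptions:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("exasol-infer-example").getOrCreate()

// No .schema(...) call: the schema is inferred from a `... LIMIT 1` probe query.
val df = spark.read
  .format("exasol")                                  // assumed short name
  .option("query", "SELECT * FROM MY_SCHEMA.MY_TAB") // required for reads
  .option("host", "10.0.0.11")                       // assumed options
  .option("port", "8563")
  .load()

df.printSchema() // inferred column names and types
```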
Rearranges the DataFrame partitions to match the Exasol data node count.
If nodesCnt < df.rdd.getNumPartitions, then perform df.coalesce(nodesCnt) in order to reduce the partition count.
If nodesCnt > df.rdd.getNumPartitions, then perform df.repartition(nodesCnt) so that there is a partition for each data node.
If the number of partitions and the number of nodes are the same, then do nothing.
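The three cases above can be sketched as a small helper; `repartitionPerNode` is a hypothetical name for illustration, but `coalesce` and `repartition` are the standard Spark Dataset operations named in the description:

```scala
import org.apache.spark.sql.DataFrame

// Minimal sketch of the partition-adjustment rule described above.
def repartitionPerNode(df: DataFrame, nodesCnt: Int): DataFrame = {
  val currentCnt = df.rdd.getNumPartitions
  if (nodesCnt < currentCnt) {
    df.coalesce(nodesCnt)     // shrink partitions without a full shuffle
  } else if (nodesCnt > currentCnt) {
    df.repartition(nodesCnt)  // shuffle so each data node gets a partition
  } else {
    df                        // counts already match; do nothing
  }
}
```

Using `coalesce` for the shrinking case avoids a full shuffle, while growing the partition count necessarily triggers one via `repartition`.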
The default entry point for creating an integration between Exasol and Spark.
Additionally, it serves as a factory class to create ExasolRelation instances for Spark applications.