org.apache.spark.sql.cassandra
(Since version 1.6.0) use org.apache.spark.sql.QueryExecution
(Since version 1.6.0) use org.apache.spark.sql.SparkPlanner
Executes a SQL query against Cassandra and returns a DataFrame representing the result.
A catalyst metadata catalog that points to Cassandra.
Gets the name of the cluster currently in use.
Returns the keyspace/database set previously by setKeyspace, or throws an IllegalStateException if no keyspace has been set yet.
Sets the name of the cluster to use.
Sets the Spark Cassandra Connector configuration parameters to be used when accessing a given keyspace in a given cluster.
Sets the Spark Cassandra Connector configuration parameters to be used when accessing a given cluster.
Sets the Spark Cassandra Connector configuration parameters.
Sets the current database name.
Sets the current database name. Database is equivalent to keyspace.
Sets the default Cassandra keyspace to be used when accessing tables with unqualified names.
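A minimal sketch of the keyspace accessors above, assuming an already constructed CassandraSQLContext named `cc` (the keyspace name `test` is illustrative):

```scala
// Assumes an existing CassandraSQLContext `cc`; construction is shown in the
// example at the end of this page.
cc.setKeyspace("test")    // default keyspace for unqualified table names
println(cc.getKeyspace)   // returns the keyspace set above
// If no keyspace has been set, getKeyspace throws IllegalStateException instead.
```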
Delegates to cassandraSql.
(Since version 1.3.0) Use createDataFrame. This will be removed in Spark 2.0.
(Since version 1.4.0) Use read.jdbc(). This will be removed in Spark 2.0.
(Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.
(Since version 1.4.0) Use read.format(source).schema(schema).options(options).load(). This will be removed in Spark 2.0.
(Since version 1.4.0) Use read.format(source).options(options).load(). This will be removed in Spark 2.0.
(Since version 1.4.0) Use read.format(source).load(path). This will be removed in Spark 2.0.
(Since version 1.4.0) Use read.load(path). This will be removed in Spark 2.0.
(Since version 1.4.0) Use read.parquet(). This will be removed in Spark 2.0.
Allows executing SQL queries against Cassandra and accessing the results as
SchemaRDD
collections. Predicate pushdown to Cassandra is supported. Example:
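A minimal usage sketch, assuming a Cassandra node reachable at 127.0.0.1 and a table `words` in keyspace `test` (the host, keyspace, and table names are illustrative, not part of the API):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.cassandra.CassandraSQLContext

// Illustrative connection settings; point these at your own cluster.
val conf = new SparkConf(true)
  .set("spark.cassandra.connection.host", "127.0.0.1")
val sc = new SparkContext("local[2]", "CassandraSqlExample", conf)

val cc = new CassandraSQLContext(sc)
cc.setKeyspace("test") // default keyspace for unqualified table names

// The WHERE predicate is pushed down to Cassandra where possible.
val df = cc.cassandraSql("SELECT word, count FROM words WHERE word > 'and'")
df.collect().foreach(println)
```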