Returns a new HiveContext as a new session, which has a separate SQLConf, UDF/UDAF registry, temporary tables, and SessionState, but shares the same CacheManager, IsolatedClientLoader, and Hive clients (both execution and metadata) with the existing HiveContext.
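A minimal sketch of session isolation, assuming a running Spark 1.x application where `sc` is an existing SparkContext (the table name and setting below are illustrative):

```scala
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)

// The new session has its own SQLConf and temporary tables...
val session2 = hiveContext.newSession()

session2.setConf("spark.sql.shuffle.partitions", "50")
// ...so this setting does not affect the original context.

// A temp table registered in one session is invisible in the other:
hiveContext.range(10).registerTempTable("t")
// session2.table("t") would fail, because "t" is not defined in session2.
```

Both sessions still hit the same Hive metastore and share cached data, since the CacheManager and Hive clients are shared.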
Invalidates and refreshes all the cached metadata of the given table. For performance reasons, Spark SQL or the external data source library it uses might cache certain metadata about a table, such as the location of blocks. When those change outside of Spark SQL, users should call this function to invalidate the cache.
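A sketch of the typical usage, assuming an existing `hiveContext` and a hypothetical table name:

```scala
// After files backing a table change outside Spark SQL (e.g. new
// Parquet part-files written directly into the table's location),
// refresh the cached metadata before querying again:
hiveContext.refreshTable("my_db.my_table")

// Subsequent reads see the new file listing.
val df = hiveContext.table("my_db.my_table")
```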
1.3.0
(Since version 1.3.0) Use createDataFrame instead.
(Since version 1.4.0) Use read.jdbc() instead.
(Since version 1.4.0) Use read.json() instead.
(Since version 1.4.0) Use read.format(source).schema(schema).options(options).load() instead.
(Since version 1.4.0) Use read.format(source).options(options).load() instead.
(Since version 1.4.0) Use read.format(source).load(path) instead.
(Since version 1.4.0) Use read.load(path) instead.
(Since version 1.4.0) Use read.parquet() instead.
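A sketch of migrating from the deprecated loader methods to the unified DataFrameReader API that the notes above point to (paths, the CSV source name, and `mySchema` are hypothetical):

```scala
// Old (deprecated since 1.4.0):
//   hiveContext.jsonFile("events.json")
//   hiveContext.parquetFile("events.parquet")
//   hiveContext.load("source", options)

// New, via hiveContext.read (a DataFrameReader):
val jsonDf    = hiveContext.read.json("events.json")
val parquetDf = hiveContext.read.parquet("events.parquet")

val csvDf = hiveContext.read
  .format("com.databricks.spark.csv") // source name is an example
  .schema(mySchema)                   // hypothetical StructType
  .options(Map("header" -> "true"))
  .load("events.csv")
```

The DataFrameReader variants compose format, schema, and options fluently, which is why the per-format methods were deprecated.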
An instance of the Spark SQL execution engine that integrates with data stored in Hive. Configuration for Hive is read from hive-site.xml on the classpath.
(Since version 2.0.0) Use SparkSession.builder.enableHiveSupport instead.
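A sketch of the Spark 2.0+ replacement: a single SparkSession with Hive support enabled takes the place of HiveContext, and Hive configuration is still read from hive-site.xml on the classpath (the app name below is arbitrary):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("example")
  .enableHiveSupport() // requires Hive classes on the classpath
  .getOrCreate()

// SQL queries against Hive tables now go through the session:
val df = spark.sql("SELECT 1")
```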