org.apache.spark.sql.catalyst.catalog

CatalogTable

case class CatalogTable(identifier: TableIdentifier, tableType: CatalogTableType, storage: CatalogStorageFormat, schema: StructType, provider: Option[String] = None, partitionColumnNames: Seq[String] = Seq.empty, bucketSpec: Option[BucketSpec] = None, owner: String = "", createTime: Long = ..., lastAccessTime: Long = -1, properties: Map[String, String] = ..., stats: Option[Statistics] = None, viewOriginalText: Option[String] = None, viewText: Option[String] = None, comment: Option[String] = None, unsupportedFeatures: Seq[String] = Seq.empty, tracksPartitionsInCatalog: Boolean = false) extends Product with Serializable

A table defined in the catalog.

Note that Hive's metastore also tracks skewed columns. We should consider adding that in the future once we have a better understanding of how we want to handle skewed columns.

provider

the name of the data source provider for this table, e.g. parquet, json, etc. Can be None if this table is a view; should be "hive" for Hive serde tables.

unsupportedFeatures

a list of string descriptions of features used by the underlying table but not yet supported by Spark SQL.

tracksPartitionsInCatalog

whether this table's partition metadata is stored in the catalog. If false, partition metadata is inferred automatically from the file structure.
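
As a minimal sketch of how these parameters fit together, the following constructs a hypothetical managed, parquet-backed table with one partition column (the table name, database, schema, and location are invented for illustration; only the four required parameters plus a few relevant defaults are supplied, and everything else falls back to the case-class defaults):

```scala
import org.apache.spark.sql.catalyst.TableIdentifier
import org.apache.spark.sql.catalyst.catalog.{CatalogStorageFormat, CatalogTable, CatalogTableType}
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

// Hypothetical managed table "analytics.events", partitioned by "country".
val table = CatalogTable(
  identifier = TableIdentifier("events", Some("analytics")),
  tableType = CatalogTableType.MANAGED,
  storage = CatalogStorageFormat.empty.copy(locationUri = Some("/warehouse/events")),
  schema = new StructType()
    .add("id", IntegerType)
    .add("country", StringType),
  provider = Some("parquet"),
  partitionColumnNames = Seq("country"),
  tracksPartitionsInCatalog = true)
```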

Linear Supertypes
Serializable, Serializable, Product, Equals, AnyRef, Any

Instance Constructors

  1. new CatalogTable(identifier: TableIdentifier, tableType: CatalogTableType, storage: CatalogStorageFormat, schema: StructType, provider: Option[String] = None, partitionColumnNames: Seq[String] = Seq.empty, bucketSpec: Option[BucketSpec] = None, owner: String = "", createTime: Long = ..., lastAccessTime: Long = -1, properties: Map[String, String] = ..., stats: Option[Statistics] = None, viewOriginalText: Option[String] = None, viewText: Option[String] = None, comment: Option[String] = None, unsupportedFeatures: Seq[String] = Seq.empty, tracksPartitionsInCatalog: Boolean = false)

    provider

    the name of the data source provider for this table, e.g. parquet, json, etc. Can be None if this table is a view; should be "hive" for Hive serde tables.

    unsupportedFeatures

    a list of string descriptions of features used by the underlying table but not yet supported by Spark SQL.

    tracksPartitionsInCatalog

    whether this table's partition metadata is stored in the catalog. If false, partition metadata is inferred automatically from the file structure.

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. val bucketSpec: Option[BucketSpec]

  8. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  9. val comment: Option[String]

  10. val createTime: Long

  11. def database: String

    Return the database this table was specified to belong to, assuming it exists.

  12. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  13. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  14. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  15. val identifier: TableIdentifier

  16. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  17. val lastAccessTime: Long

  18. def location: String

    Return the table location, assuming it is specified.

  19. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  20. final def notify(): Unit

    Definition Classes
    AnyRef
  21. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  22. val owner: String

  23. val partitionColumnNames: Seq[String]

  24. def partitionSchema: StructType

    The schema of this table's partition columns.

  25. val properties: Map[String, String]

  26. val provider: Option[String]

    The name of the data source provider for this table, e.g. parquet, json, etc. Can be None if this table is a view; should be "hive" for Hive serde tables.

  27. def qualifiedName: String

    Return the fully qualified name of this table, assuming the database was specified.

  28. val schema: StructType

  29. val stats: Option[Statistics]

  30. val storage: CatalogStorageFormat

  31. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  32. val tableType: CatalogTableType

  33. def toString(): String

    Definition Classes
    CatalogTable → AnyRef → Any
  34. val tracksPartitionsInCatalog: Boolean

    Whether this table's partition metadata is stored in the catalog. If false, partition metadata is inferred automatically from the file structure.

  35. val unsupportedFeatures: Seq[String]

    A list of string descriptions of features used by the underlying table but not yet supported by Spark SQL.

  36. val viewOriginalText: Option[String]

  37. val viewText: Option[String]

  38. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  39. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  40. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  41. def withNewStorage(locationUri: Option[String] = storage.locationUri, inputFormat: Option[String] = storage.inputFormat, outputFormat: Option[String] = storage.outputFormat, compressed: Boolean = false, serde: Option[String] = storage.serde, properties: Map[String, String] = storage.properties): CatalogTable

    Syntactic sugar to update a field in storage.
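
The accessor methods and withNewStorage can be sketched as follows, again using a hypothetical table (names and paths are invented for illustration):

```scala
import org.apache.spark.sql.catalyst.TableIdentifier
import org.apache.spark.sql.catalyst.catalog.{CatalogStorageFormat, CatalogTable, CatalogTableType}
import org.apache.spark.sql.types.{StringType, StructType}

val table = CatalogTable(
  identifier = TableIdentifier("events", Some("analytics")),
  tableType = CatalogTableType.MANAGED,
  storage = CatalogStorageFormat.empty.copy(locationUri = Some("/warehouse/events")),
  schema = new StructType().add("id", StringType).add("country", StringType),
  partitionColumnNames = Seq("country"))

table.database        // "analytics"
table.qualifiedName   // "analytics.events"
table.partitionSchema // StructType containing only the "country" field

// withNewStorage copies the table, replacing only the named storage fields
// and preserving the rest of the storage format; here only the location changes.
val relocated = table.withNewStorage(locationUri = Some("/warehouse/events_v2"))
```

Note that the accessors database and location throw if the corresponding field was never specified, as their descriptions above indicate.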
