
io.smartdatalake.workflow.dataobject

HiveTableDataObject

case class HiveTableDataObject(id: DataObjectId, path: Option[String] = None, partitions: Seq[String] = Seq(), analyzeTableAfterWrite: Boolean = false, dateColumnType: DateColumnType = DateColumnType.Date, schemaMin: Option[StructType] = None, table: Table, numInitialHdfsPartitions: Int = 16, saveMode: SDLSaveMode = SDLSaveMode.Overwrite, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, expectedPartitionsCondition: Option[String] = None, housekeepingMode: Option[HousekeepingMode] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends TableDataObject with CanWriteDataFrame with CanHandlePartitions with HasHadoopStandardFilestore with SmartDataLakeLogger with Product with Serializable

DataObject of type Hive. Provides details needed by an Action to access Hive tables.

id

unique name of this data object

path

hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. If the DataObject is only used for reading or if the Hive table already exists, the path can be omitted. If the Hive table already exists but with a different path, a warning is issued.

partitions

partition columns for this data object

analyzeTableAfterWrite

enable computing statistics after writing data (default=false)

dateColumnType

type of date column

schemaMin

An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.

table

hive table to be written by this output

numInitialHdfsPartitions

number of files created when writing into an empty table (otherwise the number will be derived from the existing data)

saveMode

spark SaveMode to use when writing files, default is "overwrite"

acl

override the connection's permissions for files created in this table's hadoop directory

connectionId

optional id of io.smartdatalake.workflow.connection.HiveTableConnection

expectedPartitionsCondition

Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.

housekeepingMode

Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.

metadata

meta data
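
A minimal construction sketch in Scala, matching the case class signature above. Assumptions not taken from this page: an implicit InstanceRegistry with a no-arg constructor, the shown import paths, the DataObjectId(String) and Table(db, name) constructors, and the elements['...'] expression syntax for expectedPartitionsCondition; these follow typical Smart Data Lake Builder releases but should be verified against your version.

    import io.smartdatalake.config.InstanceRegistry
    import io.smartdatalake.config.SdlConfigObject.DataObjectId
    import io.smartdatalake.definitions.SDLSaveMode
    import io.smartdatalake.workflow.dataobject.{HiveTableDataObject, Table}

    implicit val instanceRegistry: InstanceRegistry = new InstanceRegistry()

    // A partitioned Hive table that is overwritten on each run.
    // path is resolved against the connection's pathPrefix if it has no scheme/authority.
    val customers = HiveTableDataObject(
      id = DataObjectId("stg-customers"),
      path = Some("/data/stg/customers"),
      partitions = Seq("dt"),
      table = Table(db = Some("stg"), name = "customers"),
      numInitialHdfsPartitions = 16,
      saveMode = SDLSaveMode.Overwrite,
      // Spark SQL expression evaluated against a PartitionValues instance (assumed elements[...] syntax):
      expectedPartitionsCondition = Some("elements['dt'] > '20240101'")
    )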

Linear Supertypes
Serializable, Serializable, Product, Equals, HasHadoopStandardFilestore, CanHandlePartitions, CanWriteDataFrame, TableDataObject, SchemaValidation, CanCreateDataFrame, DataObject, AtlasExportable, SmartDataLakeLogger, ParsableFromConfig[DataObject], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new HiveTableDataObject(id: DataObjectId, path: Option[String] = None, partitions: Seq[String] = Seq(), analyzeTableAfterWrite: Boolean = false, dateColumnType: DateColumnType = DateColumnType.Date, schemaMin: Option[StructType] = None, table: Table, numInitialHdfsPartitions: Int = 16, saveMode: SDLSaveMode = SDLSaveMode.Overwrite, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, expectedPartitionsCondition: Option[String] = None, housekeepingMode: Option[HousekeepingMode] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry)

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. val acl: Option[AclDef]

    override the connection's permissions for files created in this table's hadoop directory

  5. def addFieldIfNotExisting(writeSchema: StructType, colName: String, dataType: DataType): StructType

    Attributes
    protected
    Definition Classes
    CanCreateDataFrame
  6. val analyzeTableAfterWrite: Boolean

    enable computing statistics after writing data (default=false)

  7. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  8. def atlasName: String

    Definition Classes
    TableDataObject → DataObject → AtlasExportable
  9. def atlasQualifiedName(prefix: String): String

    Definition Classes
    TableDataObject → AtlasExportable
  10. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  11. def compactPartitions(partitionValues: Seq[PartitionValues])(implicit session: SparkSession, actionPipelineContext: ActionPipelineContext): Unit

    Compact given partitions, combining smaller files into bigger ones. This is used to compact partitions by housekeeping. Note: this is optional to implement.

    Definition Classes
    HiveTableDataObject → CanHandlePartitions
  12. val connectionId: Option[ConnectionId]


    optional id of io.smartdatalake.workflow.connection.HiveTableConnection

  13. def createEmptyPartition(partitionValues: PartitionValues)(implicit session: SparkSession): Unit

    create empty partition

    Definition Classes
    HiveTableDataObject → CanHandlePartitions
  14. def createReadSchema(writeSchema: StructType)(implicit session: SparkSession): StructType

    Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove & add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.

    Definition Classes
    CanCreateDataFrame
  15. val dateColumnType: DateColumnType


    type of date column

  16. def deletePartitions(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Delete given partitions. This is used to clean up partitions by housekeeping. Note: this is optional to implement.

    Definition Classes
    HiveTableDataObject → CanHandlePartitions
  17. def deletePartitionsIfExisting(partitionValues: Seq[PartitionValues])(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Checks if partition exists and deletes it. Note that partition values to check don't need to have a key/value defined for every partition column.

  18. def dropTable(implicit session: SparkSession): Unit

    Definition Classes
    HiveTableDataObject → TableDataObject
  19. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  20. val expectedPartitionsCondition: Option[String]

    Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.

    Definition Classes
    HiveTableDataObject → CanHandlePartitions
  21. def factory: FromConfigFactory[DataObject]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.

    Definition Classes
    HiveTableDataObject → ParsableFromConfig
  22. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  23. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  24. def getConnection[T <: Connection](connectionId: ConnectionId)(implicit registry: InstanceRegistry, ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Handle class cast exception when getting objects from instance registry

    Attributes
    protected
    Definition Classes
    DataObject
  25. def getConnectionReg[T <: Connection](connectionId: ConnectionId, registry: InstanceRegistry)(implicit ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Attributes
    protected
    Definition Classes
    DataObject
  26. def getDataFrame(partitionValues: Seq[PartitionValues] = Seq())(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    Definition Classes
    HiveTableDataObject → CanCreateDataFrame
  27. def getPKduplicates(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    Definition Classes
    TableDataObject
  28. def getPKnulls(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    Definition Classes
    TableDataObject
  29. def getPKviolators(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    Definition Classes
    TableDataObject
  30. def hadoopPath(implicit session: SparkSession): Path

  31. val housekeepingMode: Option[HousekeepingMode]

    Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.

    Definition Classes
    HiveTableDataObject → DataObject
  32. val id: DataObjectId

    unique name of this data object

    Definition Classes
    HiveTableDataObject → DataObject → SdlConfigObject
  33. def init(df: DataFrame, partitionValues: Seq[PartitionValues], saveModeOptions: Option[SaveModeOptions] = None)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Called during init phase for checks and initialization. If possible, don't change the system until the execution phase.

    Definition Classes
    HiveTableDataObject → CanWriteDataFrame
  34. implicit val instanceRegistry: InstanceRegistry

  35. def isDbExisting(implicit session: SparkSession): Boolean

    Definition Classes
    HiveTableDataObject → TableDataObject
  36. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  37. def isPKcandidateKey(implicit session: SparkSession, context: ActionPipelineContext): Boolean

    Definition Classes
    TableDataObject
  38. def isTableExisting(implicit session: SparkSession): Boolean

    Definition Classes
    HiveTableDataObject → TableDataObject
  39. def listPartitions(implicit session: SparkSession, context: ActionPipelineContext): Seq[PartitionValues]

    list hive table partitions

    Definition Classes
    HiveTableDataObject → CanHandlePartitions
  40. lazy val logger: Logger

    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
  41. val metadata: Option[DataObjectMetadata]

    meta data

    Definition Classes
    HiveTableDataObject → DataObject
  42. def movePartitions(partitionValues: Seq[(PartitionValues, PartitionValues)])(implicit session: SparkSession): Unit

    Move given partitions. This is used to archive partitions by housekeeping. Note: this is optional to implement.

    Definition Classes
    HiveTableDataObject → CanHandlePartitions
  43. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  44. final def notify(): Unit

    Definition Classes
    AnyRef
  45. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  46. val numInitialHdfsPartitions: Int


    number of files created when writing into an empty table (otherwise the number will be derived from the existing data)

  47. def partitionLayout(): Option[String]

    Return a String specifying the partition layout. For Hadoop the default partition layout is colname1=<value1>/colname2=<value2>/.../

    Definition Classes
    HasHadoopStandardFilestore
  48. val partitions: Seq[String]

    partition columns for this data object

    Definition Classes
    HiveTableDataObject → CanHandlePartitions
  49. val path: Option[String]

    hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. If the DataObject is only used for reading or if the Hive table already exists, the path can be omitted. If the Hive table already exists but with a different path, a warning is issued.

  50. def preWrite(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Runs operations before writing to the DataObject. Note: As the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.

    Definition Classes
    HiveTableDataObject → DataObject
  51. def prepare(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Prepare & test the DataObject's prerequisites

    This runs during the "prepare" operation of the DAG.

    Definition Classes
    HiveTableDataObject → DataObject
  52. val saveMode: SDLSaveMode


    spark SaveMode to use when writing files, default is "overwrite"

  53. val schemaMin: Option[StructType]

    An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.

    Definition Classes
    HiveTableDataObject → SchemaValidation
  54. def streamingOptions: Map[String, String]

    Definition Classes
    CanWriteDataFrame
  55. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  56. var table: Table

    hive table to be written by this output

    Definition Classes
    HiveTableDataObject → TableDataObject
  57. var tableSchema: StructType

    Definition Classes
    TableDataObject
  58. def toStringShort: String

    Definition Classes
    DataObject
  59. def validateSchema(df: DataFrame, schemaExpected: StructType, role: String): Unit

    Validate the schema of a given Spark DataFrame df against a given expected schema.

    df

    The data frame to validate.

    schemaExpected

    The expected schema to validate against.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    SchemaValidation
    Exceptions thrown

    SchemaViolationException if the schema does not validate.

  60. def validateSchemaHasPartitionCols(df: DataFrame, role: String): Unit

    Validate that the schema of a given Spark DataFrame df contains the specified partition columns

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    CanHandlePartitions
    Exceptions thrown

    SchemaViolationException if the partition columns are not included.

  61. def validateSchemaHasPrimaryKeyCols(df: DataFrame, primaryKeyCols: Seq[String], role: String): Unit

    Validate that the schema of a given Spark DataFrame df contains the specified primary key columns

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    CanHandlePartitions
    Exceptions thrown

    SchemaViolationException if the primary key columns are not included.

  62. def validateSchemaMin(df: DataFrame, role: String): Unit

    Validate the schema of a given Spark DataFrame df against schemaMin.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    SchemaValidation
    Exceptions thrown

    SchemaViolationException if the schemaMin does not validate.

  63. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  64. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  65. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  66. def writeDataFrame(df: DataFrame, partitionValues: Seq[PartitionValues] = Seq(), isRecursiveInput: Boolean = false, saveModeOptions: Option[SaveModeOptions] = None)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Write DataFrame to DataObject. A usage sketch follows the member list below.

    df

    the DataFrame to write

    partitionValues

    partition values included in DataFrames data

    isRecursiveInput

    if DataFrame needs this DataObject as input - special treatment might be needed in this case.

    Definition Classes
    HiveTableDataObject → CanWriteDataFrame
  67. def writeDataFrameToPath(df: DataFrame, path: Path, finalSaveMode: SDLSaveMode)(implicit session: SparkSession): Unit

    Write DataFrame to a specific Path with the properties of this DataObject. This is needed for compacting partitions by housekeeping. Note: this is optional to implement.

    Definition Classes
    HiveTableDataObject → CanWriteDataFrame
  68. def writeStreamingDataFrame(df: DataFrame, trigger: Trigger, options: Map[String, String], checkpointLocation: String, queryName: String, outputMode: OutputMode = OutputMode.Append, saveModeOptions: Option[SaveModeOptions] = None)(implicit session: SparkSession, context: ActionPipelineContext): StreamingQuery

    Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects will override this with specific implementations (Kafka).

    df

    The Streaming DataFrame to write

    trigger

    Trigger frequency for stream

    checkpointLocation

    location for checkpoints of streaming query

    Definition Classes
    CanWriteDataFrame
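
A usage sketch for the write/read path, based on the member signatures above. It assumes a Hive-enabled SparkSession and an ActionPipelineContext are available implicitly (their construction is SDLB-internal and not shown), and that PartitionValues lives in io.smartdatalake.util.hdfs and is built from a Map of partition column to value; treat those as assumptions, not facts from this page.

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import io.smartdatalake.util.hdfs.PartitionValues
    import io.smartdatalake.workflow.ActionPipelineContext
    import io.smartdatalake.workflow.dataobject.HiveTableDataObject

    def writeAndReadBack(dataObject: HiveTableDataObject, df: DataFrame)
                        (implicit session: SparkSession, context: ActionPipelineContext): DataFrame = {
      val partitionValues = Seq(PartitionValues(Map("dt" -> "20240101"))) // assumed constructor
      dataObject.prepare                                // test prerequisites (path, connection, existing table)
      dataObject.init(df, partitionValues)              // init-phase checks, no changes to the system yet
      dataObject.validateSchemaMin(df, role = "write")  // validate against schemaMin (assumed no-op if undefined)
      dataObject.writeDataFrame(df, partitionValues)    // write using the configured saveMode and acl
      dataObject.getDataFrame(partitionValues)          // read back the written partitions
    }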
