Class/Object

io.smartdatalake.workflow.dataobject

TickTockHiveTableDataObject


case class TickTockHiveTableDataObject(id: DataObjectId, path: Option[String] = None, partitions: Seq[String] = Seq(), analyzeTableAfterWrite: Boolean = false, dateColumnType: DateColumnType = DateColumnType.Date, schemaMin: Option[StructType] = None, table: Table, numInitialHdfsPartitions: Int = 16, saveMode: SDLSaveMode = SDLSaveMode.Overwrite, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, expectedPartitionsCondition: Option[String] = None, housekeepingMode: Option[HousekeepingMode] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends TransactionalSparkTableDataObject with CanHandlePartitions with Product with Serializable

Linear Supertypes
Serializable, Serializable, Product, Equals, CanHandlePartitions, TransactionalSparkTableDataObject, CanWriteDataFrame, TableDataObject, SchemaValidation, CanCreateDataFrame, DataObject, AtlasExportable, SmartDataLakeLogger, ParsableFromConfig[DataObject], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new TickTockHiveTableDataObject(id: DataObjectId, path: Option[String] = None, partitions: Seq[String] = Seq(), analyzeTableAfterWrite: Boolean = false, dateColumnType: DateColumnType = DateColumnType.Date, schemaMin: Option[StructType] = None, table: Table, numInitialHdfsPartitions: Int = 16, saveMode: SDLSaveMode = SDLSaveMode.Overwrite, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, expectedPartitionsCondition: Option[String] = None, housekeepingMode: Option[HousekeepingMode] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry)

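    For orientation, the sketch below shows how such a DataObject might be constructed programmatically (most users declare it in the SDL configuration instead). The import locations of InstanceRegistry, DataObjectId and Table, and the exact Table constructor, are assumptions that can differ between SDL versions; the id, path and table names are purely hypothetical.

      // Minimal sketch, not a definitive example: constructing a TickTockHiveTableDataObject in code.
      // Import paths and the Table constructor are assumptions; id, path and table names are hypothetical.
      import io.smartdatalake.config.InstanceRegistry
      import io.smartdatalake.config.SdlConfigObject.DataObjectId
      import io.smartdatalake.workflow.dataobject.{Table, TickTockHiveTableDataObject}

      implicit val instanceRegistry: InstanceRegistry = new InstanceRegistry()

      val reportTable = TickTockHiveTableDataObject(
        id = DataObjectId("btl-report"),                      // hypothetical DataObject id
        path = Some("/data/btl/report"),                      // base path on HDFS
        partitions = Seq("run_id"),                           // partition columns of the Hive table
        table = Table(db = Some("default"), name = "report"), // target Hive database and table
        numInitialHdfsPartitions = 16                         // keeps the default repartitioning for writes
      )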

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. val acl: Option[AclDef]

  5. def addFieldIfNotExisting(writeSchema: StructType, colName: String, dataType: DataType): StructType

    Attributes
    protected
    Definition Classes
    CanCreateDataFrame
  6. val analyzeTableAfterWrite: Boolean

  7. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  8. def atlasName: String

    Definition Classes
    TableDataObject → DataObject → AtlasExportable
  9. def atlasQualifiedName(prefix: String): String

    Definition Classes
    TableDataObject → AtlasExportable
  10. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  11. val connectionId: Option[ConnectionId]

  12. def createEmptyPartition(partitionValues: PartitionValues)(implicit session: SparkSession): Unit

    Create an empty partition.

    Definition Classes
    TickTockHiveTableDataObject → CanHandlePartitions
  13. def createReadSchema(writeSchema: StructType)(implicit session: SparkSession): StructType

    Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove and add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.

    Definition Classes
    CanCreateDataFrame
  14. val dateColumnType: DateColumnType

  15. def deletePartitions(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Delete the given partitions. This is used by housekeeping to clean up partitions. Note: implementing this is optional.

    Definition Classes
    TickTockHiveTableDataObject → CanHandlePartitions
  16. def dropTable(implicit session: SparkSession): Unit

    Definition Classes
    TickTockHiveTableDataObject → TableDataObject
  17. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  18. val expectedPartitionsCondition: Option[String]

    Definition of partitions that are expected to exist. This is used to validate that partitions being read exist and do not return empty results. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false, for example: "elements['yourColName'] > 2017"

    returns

    true if the partition is expected to exist.

    Definition Classes
    TickTockHiveTableDataObject → CanHandlePartitions
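
    As a concrete illustration of the condition above, the hedged sketch below supplies such an expression when constructing the DataObject (reusing the imports and implicit registry from the constructor sketch further up); the partition column dt and the threshold are hypothetical.

      // Sketch only: declare which partitions are expected to exist. The expression is a Spark SQL
      // predicate evaluated against the elements map of each PartitionValues instance.
      val partitionedTable = TickTockHiveTableDataObject(
        id = DataObjectId("btl-report-partitioned"),
        partitions = Seq("dt"),
        table = Table(db = Some("default"), name = "report"),
        // only partitions newer than 2017 are expected to exist and to contain data
        expectedPartitionsCondition = Some("elements['dt'] > '2017'")
      )
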
  19. def factory: FromConfigFactory[DataObject]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.

    Definition Classes
    TickTockHiveTableDataObject → ParsableFromConfig
  20. def filesystem(implicit session: SparkSession): FileSystem

  21. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  22. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  23. def getConnection[T <: Connection](connectionId: ConnectionId)(implicit registry: InstanceRegistry, ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Handles class cast exceptions when getting objects from the instance registry.

    Attributes
    protected
    Definition Classes
    DataObject
  24. def getConnectionReg[T <: Connection](connectionId: ConnectionId, registry: InstanceRegistry)(implicit ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Attributes
    protected
    Definition Classes
    DataObject
  25. def getDataFrame(partitionValues: Seq[PartitionValues] = Seq())(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    Definition Classes
    TickTockHiveTableDataObject → CanCreateDataFrame
  26. def getPKduplicates(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    Definition Classes
    TableDataObject
  27. def getPKnulls(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    Definition Classes
    TableDataObject
  28. def getPKviolators(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    Definition Classes
    TableDataObject
  29. def hadoopPath(implicit session: SparkSession): Path

  30. val housekeepingMode: Option[HousekeepingMode]

    Configure a housekeeping mode to e.g. clean up, archive and compact partitions. Default is None.

    Definition Classes
    TickTockHiveTableDataObject → DataObject
  31. val id: DataObjectId

    A unique identifier for this instance.

    Definition Classes
    TickTockHiveTableDataObject → DataObject → SdlConfigObject
  32. def init(df: DataFrame, partitionValues: Seq[PartitionValues], saveModeOptions: Option[SaveModeOptions] = None)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Called during the init phase for checks and initialization. If possible, don't change the system until the execution phase.

    Definition Classes
    TickTockHiveTableDataObject → CanWriteDataFrame
  33. implicit val instanceRegistry: InstanceRegistry

  34. def isDbExisting(implicit session: SparkSession): Boolean

    Definition Classes
    TickTockHiveTableDataObject → TableDataObject
  35. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  36. def isPKcandidateKey(implicit session: SparkSession, context: ActionPipelineContext): Boolean

    Definition Classes
    TableDataObject
  37. def isTableExisting(implicit session: SparkSession): Boolean

    Definition Classes
    TickTockHiveTableDataObject → TableDataObject
  38. def listPartitions(implicit session: SparkSession, context: ActionPipelineContext): Seq[PartitionValues]

    List Hive table partitions.

    Definition Classes
    TickTockHiveTableDataObject → CanHandlePartitions
  39. lazy val logger: Logger

    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
  40. val metadata: Option[DataObjectMetadata]

    Additional metadata for the DataObject.

    Definition Classes
    TickTockHiveTableDataObject → DataObject
  41. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  42. final def notify(): Unit

    Definition Classes
    AnyRef
  43. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  44. val numInitialHdfsPartitions: Int

  45. val partitions: Seq[String]

    Definition of partition columns.

    Definition Classes
    TickTockHiveTableDataObject → CanHandlePartitions
  46. val path: Option[String]

  47. def preWrite(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Runs operations before writing to the DataObject. Note: as the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters, unlike in preRead.

    Definition Classes
    TickTockHiveTableDataObject → DataObject
  48. def prepare(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Prepare and test the DataObject's prerequisites.

    This runs during the "prepare" operation of the DAG.

    Definition Classes
    TickTockHiveTableDataObject → DataObject
  49. val saveMode: SDLSaveMode

  50. val schemaMin: Option[StructType]

    An optional, minimal schema that a DataObject schema must have to pass schema validation.

    The schema validation semantics are: Schema A is valid with respect to a minimal schema B when B is a subset of A. This means: the whole column set of B is contained in the column set of A.

    • A column of B is contained in A when A contains a column with equal name and data type.
    • Column order is ignored.
    • Column nullability is ignored.
    • Duplicate columns in terms of name and data type are eliminated (set semantics).

    Note: This is mainly used by the functionality defined in CanCreateDataFrame and CanWriteDataFrame, that is, when reading or writing Spark data frames from/to the underlying data container. io.smartdatalake.workflow.action.Actions that work with files ignore the schemaMin attribute if it is defined. Additionally, schemaMin can be used to define the schema used if there is no data or the table doesn't yet exist.

    Definition Classes
    TickTockHiveTableDataObject → SchemaValidation
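
    To make the subset semantics above concrete with plain Spark types (column names are hypothetical), the sketch below checks a DataFrame schema A against a minimal schema B: every column of B must appear in A with the same name and data type, while order, nullability and extra columns in A are irrelevant.

      import org.apache.spark.sql.types._

      // Hypothetical minimal schema B: the columns a DataFrame must at least provide.
      val schemaMinExample = StructType(Seq(
        StructField("id", LongType),
        StructField("name", StringType)
      ))

      // Hypothetical actual schema A: different column order plus an extra column, which is still valid.
      val actualSchema = StructType(Seq(
        StructField("name", StringType),
        StructField("created_at", TimestampType),
        StructField("id", LongType)
      ))

      // A is valid with respect to B when every (name, dataType) pair of B is contained in A.
      val isValid = schemaMinExample.fields.forall(b =>
        actualSchema.fields.exists(a => a.name == b.name && a.dataType == b.dataType))

    With such a schemaMin configured, validateSchemaMin below would throw a SchemaViolationException if, for example, the id column were missing or had a different data type.
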
  51. def streamingOptions: Map[String, String]

    Definition Classes
    CanWriteDataFrame
  52. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  53. var table: Table

    Definition Classes
    TickTockHiveTableDataObject → TableDataObject
  54. var tableSchema: StructType

    Definition Classes
    TableDataObject
  55. def toStringShort: String

    Definition Classes
    DataObject
  56. def validateSchema(df: DataFrame, schemaExpected: StructType, role: String): Unit

    Validate the schema of a given Spark DataFrame df against a given expected schema.

    df

    The data frame to validate.

    schemaExpected

    The expected schema to validate against.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    SchemaValidation
    Exceptions thrown

    SchemaViolationException if the schema does not validate.

  57. def validateSchemaHasPartitionCols(df: DataFrame, role: String): Unit

    Validate that the schema of a given Spark DataFrame df contains the specified partition columns.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    CanHandlePartitions
    Exceptions thrown

    SchemaViolationException if the partition columns are not included.

  58. def validateSchemaHasPrimaryKeyCols(df: DataFrame, primaryKeyCols: Seq[String], role: String): Unit

    Validate that the schema of a given Spark DataFrame df contains the specified primary key columns.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    CanHandlePartitions
    Exceptions thrown

    SchemaViolationException if the primary key columns are not included.

  59. def validateSchemaMin(df: DataFrame, role: String): Unit

    Validate the schema of a given Spark DataFrame df against schemaMin.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    SchemaValidation
    Exceptions thrown

    SchemaViolationException if the schemaMin does not validate.

  60. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  61. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  62. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  63. def writeDataFrame(df: DataFrame, partitionValues: Seq[PartitionValues] = Seq(), isRecursiveInput: Boolean = false, saveModeOptions: Option[SaveModeOptions] = None)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Write a DataFrame to the DataObject.

    df

    the DataFrame to write

    partitionValues

    partition values included in the DataFrame's data

    isRecursiveInput

    set to true if the DataFrame needs this DataObject as input; special treatment might be needed in this case.

    Definition Classes
    TickTockHiveTableDataObject → CanWriteDataFrame
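
    The hedged sketch below shows a write followed by a read through this DataObject. It assumes an implicit SparkSession named session and an implicit ActionPipelineContext are already in scope (normally provided by the SDL Action framework), reuses the reportTable instance from the constructor sketch further up, and uses hypothetical column and partition values; the import location of PartitionValues is an assumption that may differ between versions.

      import io.smartdatalake.util.hdfs.PartitionValues // location assumed
      import session.implicits._

      // hypothetical input data with a run_id partition column
      val input = Seq((1L, "a", "2021-01-01"), (2L, "b", "2021-01-01")).toDF("id", "name", "run_id")

      // write the DataFrame, declaring which partition the data belongs to
      reportTable.writeDataFrame(input, partitionValues = Seq(PartitionValues(Map("run_id" -> "2021-01-01"))))

      // read the same partition back via getDataFrame
      val output = reportTable.getDataFrame(Seq(PartitionValues(Map("run_id" -> "2021-01-01"))))
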
  64. def writeDataFrameInternal(df: DataFrame, createTableOnly: Boolean, partitionValues: Seq[PartitionValues], isRecursiveInput: Boolean, saveModeOptions: Option[SaveModeOptions])(implicit session: SparkSession): Unit

    Writes the DataFrame to HDFS/Parquet and creates the Hive table. DataFrames are repartitioned in order not to write too many small files, or only a few HDFS files that are too large.
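
    The repartitioning mentioned above is internal to this method; purely as an illustration of the idea (not the actual implementation), bounding the number of output files before a write could look like the sketch below, which reuses numInitialHdfsPartitions and the hypothetical input DataFrame from the previous sketch.

      // Simplified illustration only: avoid many small files (coalesce down) and
      // avoid a few oversized files (repartition up) before writing.
      val target = reportTable.numInitialHdfsPartitions
      val repartitioned =
        if (input.rdd.getNumPartitions > target) input.coalesce(target)
        else input.repartition(target)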

  65. def writeStreamingDataFrame(df: DataFrame, trigger: Trigger, options: Map[String, String], checkpointLocation: String, queryName: String, outputMode: OutputMode = OutputMode.Append, saveModeOptions: Option[SaveModeOptions] = None)(implicit session: SparkSession, context: ActionPipelineContext): StreamingQuery

    Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects will override this with specific implementations (e.g. Kafka).

    df

    The Streaming DataFrame to write

    trigger

    Trigger frequency for stream

    checkpointLocation

    location for checkpoints of streaming query

    Definition Classes
    CanWriteDataFrame
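
    As a rough sketch of the foreachBatch mechanism described above (an illustration of the default behaviour, not the library's exact code), a streaming DataFrame can be routed through the batch writeDataFrame method as shown below; streamingDf, the checkpoint location and the query name are hypothetical, and implicit SparkSession and ActionPipelineContext are assumed to be in scope.

      import org.apache.spark.sql.DataFrame
      import org.apache.spark.sql.streaming.{OutputMode, Trigger}

      // each micro-batch is handed to the batch writeDataFrame method of the DataObject
      val writeBatch: (DataFrame, Long) => Unit =
        (batchDf, batchId) => reportTable.writeDataFrame(batchDf)

      val query = streamingDf.writeStream
        .trigger(Trigger.ProcessingTime("1 minute"))          // trigger frequency for the stream
        .queryName("report-stream")                           // hypothetical query name
        .outputMode(OutputMode.Append)
        .option("checkpointLocation", "/checkpoints/report")  // hypothetical checkpoint location
        .foreachBatch(writeBatch)
        .start()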
