io.smartdatalake.workflow.dataobject

JdbcTableDataObject

case class JdbcTableDataObject(id: DataObjectId, createSql: Option[String] = None, preReadSql: Option[String] = None, postReadSql: Option[String] = None, preWriteSql: Option[String] = None, postWriteSql: Option[String] = None, schemaMin: Option[StructType] = None, table: Table, jdbcFetchSize: Int = 1000, saveMode: SDLSaveMode = SDLSaveMode.Overwrite, allowSchemaEvolution: Boolean = false, connectionId: ConnectionId, jdbcOptions: Map[String, String] = Map(), virtualPartitions: Seq[String] = Seq(), expectedPartitionsCondition: Option[String] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends TransactionalSparkTableDataObject with CanHandlePartitions with CanEvolveSchema with CanMergeDataFrame with Product with Serializable

DataObject of type JDBC. Provides details for an action to access tables in a database through JDBC.
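
The sketch below shows how an instance could be constructed programmatically with the documented constructor parameters. It is only an illustrative example: in practice data objects are usually declared in the SDL configuration, and the import paths, the Table fields (db, name) and the no-argument InstanceRegistry constructor are assumptions, not taken from this page.

  // Hypothetical construction of a JdbcTableDataObject (import paths and helper constructors assumed)
  import io.smartdatalake.config.InstanceRegistry
  import io.smartdatalake.config.SdlConfigObject.{ConnectionId, DataObjectId}
  import io.smartdatalake.definitions.SDLSaveMode
  import io.smartdatalake.workflow.dataobject.{JdbcTableDataObject, Table}

  implicit val registry: InstanceRegistry = new InstanceRegistry() // assumed no-arg constructor

  val ordersDO = JdbcTableDataObject(
    id = DataObjectId("ext-orders"),
    table = Table(db = Some("sales"), name = "orders"),                     // assumed Table fields
    connectionId = ConnectionId("my-jdbc-connection"),                      // must reference a JdbcConnection
    jdbcFetchSize = 5000,                                                   // rows fetched per round trip
    saveMode = SDLSaveMode.Append,
    jdbcOptions = Map("createTableColumnTypes" -> "comment VARCHAR(1024)")  // column types for auto-created table
  )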

id

unique name of this data object

createSql

DDL-statement to be executed in prepare phase, using output jdbc connection. Note that it is also possible to let Spark create the table in the init phase. See jdbcOptions to customize column data types for the auto-created DDL-statement.

preReadSql

SQL-statement to be executed in exec phase before reading input table, using input jdbc connection. Use tokens with syntax %{<spark sql expression>} to substitute with values from DefaultExpressionData (see the example after this parameter list).

postReadSql

SQL-statement to be executed in exec phase after reading input table and before action is finished, using input jdbc connection. Use tokens with syntax %{<spark sql expression>} to substitute with values from DefaultExpressionData.

preWriteSql

SQL-statement to be executed in exec phase before writing output table, using output jdbc connection. Use tokens with syntax %{<spark sql expression>} to substitute with values from DefaultExpressionData.

postWriteSql

SQL-statement to be executed in exec phase after writing output table, using output jdbc connection. Use tokens with syntax %{<spark sql expression>} to substitute with values from DefaultExpressionData.

schemaMin

An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.

table

The jdbc table to be read

jdbcFetchSize

Number of rows to be fetched together by the Jdbc driver

saveMode

SDLSaveMode to use when writing table, default is "Overwrite". Only "Append" and "Overwrite" are supported.

allowSchemaEvolution

If set to true, schema evolution will automatically occur when writing to this DataObject with a different schema; otherwise SDL will stop with an error.

connectionId

Id of JdbcConnection configuration

jdbcOptions

Any jdbc options according to https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html. Note that some options above set and override some of these options explicitly. Use "createTableOptions" and "createTableColumnTypes" to control automatic creation of database tables.

virtualPartitions

Virtual partition columns. Note that this doesn't need to be the same as the database partition columns for this table. But it is important that there is an index on these columns to efficiently list existing "partitions".

expectedPartitionsCondition

Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
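
The snippet below illustrates the %{<spark sql expression>} token syntax described for the pre/post read and write SQL parameters above (the example referenced from preReadSql). It is only a sketch: the runId attribute is assumed to be available in DefaultExpressionData, and the audit table is hypothetical.

  // Hypothetical pre-write statement; '%{runId}' is substituted from DefaultExpressionData (runId assumed)
  val preWriteAudit: Option[String] =
    Some("INSERT INTO sales.load_audit (run_id, target_table) VALUES ('%{runId}', 'orders')")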

Linear Supertypes
Serializable, Serializable, Product, Equals, CanMergeDataFrame, CanEvolveSchema, CanHandlePartitions, TransactionalSparkTableDataObject, CanWriteDataFrame, TableDataObject, SchemaValidation, CanCreateDataFrame, DataObject, AtlasExportable, SmartDataLakeLogger, ParsableFromConfig[DataObject], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new JdbcTableDataObject(id: DataObjectId, createSql: Option[String] = None, preReadSql: Option[String] = None, postReadSql: Option[String] = None, preWriteSql: Option[String] = None, postWriteSql: Option[String] = None, schemaMin: Option[StructType] = None, table: Table, jdbcFetchSize: Int = 1000, saveMode: SDLSaveMode = SDLSaveMode.Overwrite, allowSchemaEvolution: Boolean = false, connectionId: ConnectionId, jdbcOptions: Map[String, String] = Map(), virtualPartitions: Seq[String] = Seq(), expectedPartitionsCondition: Option[String] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry)


Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def addFieldIfNotExisting(writeSchema: StructType, colName: String, dataType: DataType): StructType

    Attributes
    protected
    Definition Classes
    CanCreateDataFrame
  5. val allowSchemaEvolution: Boolean

    If set to true, schema evolution will automatically occur when writing to this DataObject with a different schema; otherwise SDL will stop with an error.

    Definition Classes
    JdbcTableDataObject → CanEvolveSchema
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def atlasName: String

    Definition Classes
    TableDataObject → DataObject → AtlasExportable
  8. def atlasQualifiedName(prefix: String): String

    Definition Classes
    TableDataObject → AtlasExportable
  9. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  10. val connection: JdbcTableConnection

    Connection defines driver, url and db in a central location.

  11. val connectionId: ConnectionId


    Id of JdbcConnection configuration

  12. def createReadSchema(writeSchema: StructType)(implicit session: SparkSession): StructType

    Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove & add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.

    Definition Classes
    CanCreateDataFrame
  13. val createSql: Option[String]

    DDL-statement to be executed in prepare phase, using output jdbc connection. Note that it is also possible to let Spark create the table in the init phase. See jdbcOptions to customize column data types for the auto-created DDL-statement.

  14. def deleteAllData(implicit session: SparkSession): Unit

  15. def deletePartitions(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Delete virtual partitions by a "delete from" statement

    Definition Classes
    JdbcTableDataObject → CanHandlePartitions
  16. def dropTable(implicit session: SparkSession): Unit

    Definition Classes
    JdbcTableDataObject → TableDataObject
  17. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  18. val expectedPartitionsCondition: Option[String]

    Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.

    Definition Classes
    JdbcTableDataObject → CanHandlePartitions
  19. def factory: FromConfigFactory[DataObject]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.

    Definition Classes
    JdbcTableDataObject → ParsableFromConfig
  20. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  21. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  22. def getConnection[T <: Connection](connectionId: ConnectionId)(implicit registry: InstanceRegistry, ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Handle class cast exception when getting objects from instance registry

    Attributes
    protected
    Definition Classes
    DataObject
  23. def getConnectionReg[T <: Connection](connectionId: ConnectionId, registry: InstanceRegistry)(implicit ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Attributes
    protected
    Definition Classes
    DataObject
  24. def getDataFrame(partitionValues: Seq[PartitionValues] = Seq())(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    Definition Classes
    JdbcTableDataObject → CanCreateDataFrame
  25. def getPKduplicates(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    Definition Classes
    TableDataObject
  26. def getPKnulls(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    Definition Classes
    TableDataObject
  27. def getPKviolators(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    Definition Classes
    TableDataObject
  28. def housekeepingMode: Option[HousekeepingMode]

    Configure a housekeeping mode to e.g. cleanup, archive and compact partitions. Default is None.

    Definition Classes
    DataObject
  29. val id: DataObjectId

    unique name of this data object

    Definition Classes
    JdbcTableDataObject → DataObject → SdlConfigObject
  30. def init(df: DataFrame, partitionValues: Seq[PartitionValues], saveModeOptions: Option[SaveModeOptions] = None)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Called during init phase for checks and initialization. If possible, don't change the system until the execution phase.

    Definition Classes
    JdbcTableDataObject → CanWriteDataFrame
  31. implicit val instanceRegistry: InstanceRegistry

  32. def isDbExisting(implicit session: SparkSession): Boolean

    Definition Classes
    JdbcTableDataObject → TableDataObject
  33. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  34. def isPKcandidateKey(implicit session: SparkSession, context: ActionPipelineContext): Boolean

    Definition Classes
    TableDataObject
  35. def isTableExisting(implicit session: SparkSession): Boolean

    Definition Classes
    JdbcTableDataObject → TableDataObject
  36. val jdbcFetchSize: Int


    Number of rows to be fetched together by the Jdbc driver

  37. val jdbcOptions: Map[String, String]

    Any jdbc options according to https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html. Note that some options above set and override some of these options explicitly. Use "createTableOptions" and "createTableColumnTypes" to control automatic creation of database tables.

  38. def listPartitions(implicit session: SparkSession, context: ActionPipelineContext): Seq[PartitionValues]

    Listing virtual partitions by a "select distinct partition-columns" query

    Definition Classes
    JdbcTableDataObject → CanHandlePartitions
  39. lazy val logger: Logger

    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
  40. def mergeDataFrameByPrimaryKey(df: DataFrame, saveModeOptions: SaveModeMergeOptions)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Merges DataFrame with existing table data by writing the DataFrame to a temp-table and using an SQL Merge-statement. Table.primaryKey is used as condition to check if a record is matched or not. If it is matched it gets updated (or deleted), otherwise it is inserted. This is all done in one transaction.

  41. val metadata: Option[DataObjectMetadata]

    Additional metadata for the DataObject

    Definition Classes
    JdbcTableDataObject → DataObject
  42. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  43. final def notify(): Unit

    Definition Classes
    AnyRef
  44. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  45. val partitions: Seq[String]

    Definition of partition columns

    Definition Classes
    JdbcTableDataObject → CanHandlePartitions
  46. def postRead(partitionValues: Seq[PartitionValues])(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Runs operations after reading from DataObject

    Definition Classes
    JdbcTableDataObject → DataObject
  47. val postReadSql: Option[String]

    SQL-statement to be executed in exec phase after reading input table and before action is finished, using input jdbc connection. Use tokens with syntax %{<spark sql expression>} to substitute with values from DefaultExpressionData.

  48. def postWrite(partitionValues: Seq[PartitionValues])(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Runs operations after writing to DataObject

    Definition Classes
    JdbcTableDataObject → DataObject
  49. val postWriteSql: Option[String]

    SQL-statement to be executed in exec phase after writing output table, using output jdbc connection. Use tokens with syntax %{<spark sql expression>} to substitute with values from DefaultExpressionData.

  50. def preRead(partitionValues: Seq[PartitionValues])(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Runs operations before reading from DataObject

    Definition Classes
    JdbcTableDataObject → DataObject
  51. val preReadSql: Option[String]

    SQL-statement to be executed in exec phase before reading input table, using input jdbc connection. Use tokens with syntax %{<spark sql expression>} to substitute with values from DefaultExpressionData.

  52. def preWrite(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Runs operations before writing to DataObject. Note: As the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.

    Definition Classes
    JdbcTableDataObject → DataObject
  53. val preWriteSql: Option[String]

    SQL-statement to be executed in exec phase before writing output table, using output jdbc connection. Use tokens with syntax %{<spark sql expression>} to substitute with values from DefaultExpressionData.

  54. def prepare(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Prepare & test DataObject's prerequisites

    This runs during the "prepare" operation of the DAG.

    Definition Classes
    JdbcTableDataObject → DataObject
  55. val saveMode: SDLSaveMode

    SDLSaveMode to use when writing table, default is "Overwrite". Only "Append" and "Overwrite" are supported.

  56. val schemaMin: Option[StructType]

    An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.

    Definition Classes
    JdbcTableDataObject → SchemaValidation
  57. def streamingOptions: Map[String, String]

    Definition Classes
    CanWriteDataFrame
  58. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  59. var table: Table

    The jdbc table to be read

    Definition Classes
    JdbcTableDataObject → TableDataObject
  60. var tableSchema: StructType

    Definition Classes
    TableDataObject
  61. def toStringShort: String

    Definition Classes
    DataObject
  62. def validateSchema(df: DataFrame, schemaExpected: StructType, role: String): Unit

    Validate the schema of a given Spark Data Frame df against a given expected schema.

    df

    The data frame to validate.

    schemaExpected

    The expected schema to validate against.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    SchemaValidation
    Exceptions thrown

    SchemaViolationException if the schemaMin does not validate.

  63. def validateSchemaHasPartitionCols(df: DataFrame, role: String): Unit

    Validate that the schema of a given Spark Data Frame df contains the specified partition columns.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    CanHandlePartitions
    Exceptions thrown

    SchemaViolationException if the partition columns are not included.

  64. def validateSchemaHasPrimaryKeyCols(df: DataFrame, primaryKeyCols: Seq[String], role: String): Unit

    Validate that the schema of a given Spark Data Frame df contains the specified primary key columns.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    CanHandlePartitions
    Exceptions thrown

    SchemaViolationException if the primary key columns are not included.

  65. def validateSchemaMin(df: DataFrame, role: String): Unit

    Validate the schema of a given Spark Data Frame df against schemaMin.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    SchemaValidation
    Exceptions thrown

    SchemaViolationException if the schemaMin does not validate.

  66. val virtualPartitions: Seq[String]

    Virtual partition columns. Note that this doesn't need to be the same as the database partition columns for this table. But it is important that there is an index on these columns to efficiently list existing "partitions".

  67. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  68. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  69. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  70. def writeDataFrame(df: DataFrame, partitionValues: Seq[PartitionValues] = Seq(), isRecursiveInput: Boolean = false, saveModeOptions: Option[SaveModeOptions] = None)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Write DataFrame to DataObject (see the usage sketch at the end of this member list).

    df

    the DataFrame to write

    partitionValues

    partition values included in DataFrames data

    isRecursiveInput

    if DataFrame needs this DataObject as input - special treatment might be needed in this case.

    Definition Classes
    JdbcTableDataObject → CanWriteDataFrame
  71. def writeStreamingDataFrame(df: DataFrame, trigger: Trigger, options: Map[String, String], checkpointLocation: String, queryName: String, outputMode: OutputMode = OutputMode.Append, saveModeOptions: Option[SaveModeOptions] = None)(implicit session: SparkSession, context: ActionPipelineContext): StreamingQuery

    Write Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects will override this with specific implementations (Kafka).

    df

    The Streaming DataFrame to write

    trigger

    Trigger frequency for stream

    checkpointLocation

    location for checkpoints of streaming query

    Definition Classes
    CanWriteDataFrame
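
Usage sketch (illustrative, not from this page): a small helper that exercises the read/write members documented above, as referenced from writeDataFrame. A SparkSession and ActionPipelineContext are assumed to be supplied by the surrounding SDL run; the import path of ActionPipelineContext and the idea of calling prepare/init manually (normally done by the framework) are assumptions.

  // Hypothetical helper combining the documented members; not part of the library
  import org.apache.spark.sql.{DataFrame, SparkSession}
  import io.smartdatalake.workflow.ActionPipelineContext
  import io.smartdatalake.workflow.dataobject.JdbcTableDataObject

  def copyIntoJdbc(src: DataFrame, target: JdbcTableDataObject)
                  (implicit session: SparkSession, context: ActionPipelineContext): Unit = {
    target.prepare                                       // prepare phase: test prerequisites, run createSql if configured
    target.init(src, partitionValues = Seq())            // init phase: schema checks, possible schema evolution
    target.writeDataFrame(src, partitionValues = Seq())  // exec phase: write via the JDBC connection
    target.getDataFrame().show(10)                       // read back through the same data object
  }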
