io.smartdatalake.workflow.dataobject
unique name of this data object
hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. If the DataObject is only used for reading or if the HiveTable already exists, the path can be omitted. If the HiveTable already exists but with a different path, a warning is issued.
partition columns for this data object
enable compute statistics after writing data (default=false)
type of date column
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
hive table to be written by this output
number of files created when writing into an empty table (otherwise the number will be derived from the existing data)
spark SaveMode to use when writing files, default is "overwrite"
override the connection's permissions for files created in this table's hadoop directory
optional id of io.smartdatalake.workflow.connection.HiveTableConnection
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.
meta data
override the connection's permissions for files created in this table's hadoop directory
enable compute statistics after writing data (default=false)
Compact given partitions combining smaller files into bigger ones.
Compact given partitions combining smaller files into bigger ones. This is used to compact partitions by housekeeping. Note: this is optional to implement.
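As an illustration only (not the SDLB housekeeping implementation), compacting a partition in plain Spark amounts to rewriting its files with fewer output files; all paths below are hypothetical:

  import org.apache.spark.sql.{SaveMode, SparkSession}

  // Illustrative sketch: rewrite one Hadoop partition directory into fewer, larger files.
  // Paths and format are examples; SDLB's housekeeping performs this via writeDataFrameToPath.
  def compactPartition(spark: SparkSession, partitionPath: String, tempPath: String): Unit = {
    val df = spark.read.parquet(partitionPath)
    // write the compacted copy to a temporary location first, since Spark cannot
    // overwrite a path while reading from it
    df.coalesce(1).write.mode(SaveMode.Overwrite).parquet(tempPath)
    // after verification, the compacted files would replace the original partition files
  }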
optional id of io.smartdatalake.workflow.connection.HiveTableConnection
create empty partition
create empty partition
Creates the read schema based on a given write schema.
Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove & add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.
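Breaking the lineage typically means substituting an empty DataFrame that only carries the read schema. A minimal plain-Spark sketch (not the SDLB implementation):

  import org.apache.spark.sql.{DataFrame, Row, SparkSession}
  import org.apache.spark.sql.types.StructType

  // Create an empty "dummy" DataFrame that carries only the expected read schema,
  // so init-phase checks can run without keeping the real DataFrame lineage.
  def dummyDataFrame(spark: SparkSession, readSchema: StructType): DataFrame =
    spark.createDataFrame(spark.sparkContext.emptyRDD[Row], readSchema)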
type of date column
Delete given partitions.
Delete given partitions. This is used to clean up partitions by housekeeping. Note: this is optional to implement.
Checks if partition exists and deletes it.
Checks if partition exists and deletes it. Note that partition values to check don't need to have a key/value defined for every partition column.
Optional definition of partitions expected to exist.
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
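As a rough analogy for how such a boolean expression behaves, the sketch below filters a one-row DataFrame of partition values with a Spark SQL expression; the column name dt and the condition string are examples, and this is not how SDLB evaluates the condition against PartitionValues:

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.expr

  // Illustrative analogy only: evaluate a boolean Spark SQL expression against a
  // single partition value. Column name "dt" and the condition string are examples.
  def partitionExpected(spark: SparkSession, dt: String, condition: String): Boolean = {
    import spark.implicits._
    Seq(dt).toDF("dt").where(expr(condition)).count() == 1
  }

  // partitionExpected(spark, "20240101", "dt > '20231231'")  // => true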
Returns the factory that can parse this type (that is, type CO).
Returns the factory that can parse this type (that is, type CO).
Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.
the factory (object) for this class.
Handle class cast exception when getting objects from instance registry
Handle class cast exception when getting objects from instance registry
Optional definition of a housekeeping mode applied after every write.
Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.
unique name of this data object
unique name of this data object
Called during init phase for checks and initialization.
Called during init phase for checks and initialization. If possible, don't change the system until the execution phase.
list hive table partitions
list hive table partitions
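In plain Spark SQL, listing the partitions of a Hive table can be pictured as follows (illustrative only; database and table names are examples):

  import org.apache.spark.sql.SparkSession

  // Illustrative only: list the partitions of a Hive table via Spark SQL.
  def listTablePartitions(spark: SparkSession, db: String, table: String): Seq[String] =
    spark.sql(s"SHOW PARTITIONS $db.$table").collect().map(_.getString(0)).toSeq

  // listTablePartitions(spark, "default", "my_table")
  // => Seq("dt=20240101", "dt=20240102", ...)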
meta data
meta data
Move given partitions.
Move given partitions. This is used to archive partitions by housekeeping. Note: this is optional to implement.
number of files created when writing into an empty table (otherwise the number will be derived from the existing data)
Return a String specifying the partition layout.
Return a String specifying the partition layout. For Hadoop the default partition layout is colname1=<value1>/colname2=<value2>/.../
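The default Hadoop layout can be reproduced with a small helper like the following (illustrative, not the SDLB method):

  // Build the default Hadoop partition layout colname1=<value1>/colname2=<value2>/.../
  // from ordered partition column/value pairs. Illustrative helper only.
  def hadoopPartitionLayout(partitionValues: Seq[(String, String)]): String =
    partitionValues.map { case (col, value) => s"$col=$value" }.mkString("", "/", "/")

  // hadoopPartitionLayout(Seq("dt" -> "20240101", "country" -> "CH"))
  // => "dt=20240101/country=CH/"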
partition columns for this data object
partition columns for this data object
hadoop directory for this table.
hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. If the DataObject is only used for reading or if the HiveTable already exists, the path can be omitted. If the HiveTable already exists but with a different path, a warning is issued.
Runs operations before writing to the DataObject. Note: As the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.
Runs operations before writing to the DataObject. Note: As the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.
Prepare & test the DataObject's prerequisites
Prepare & test the DataObject's prerequisites
This runs during the "prepare" operation of the DAG.
spark SaveMode to use when writing files, default is "overwrite"
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
hive table to be written by this output
hive table to be written by this output
Validate the schema of a given Spark Data Frame df against a given expected schema.
Validate the schema of a given Spark Data Frame df against a given expected schema.
The data frame to validate.
The expected schema to validate against.
role used in exception message. Set to read or write.
SchemaViolationException
if the schema does not validate.
Validate that the schema of a given Spark Data Frame df contains the specified partition columns.
Validate that the schema of a given Spark Data Frame df contains the specified partition columns.
The data frame to validate.
role used in exception message. Set to read or write.
SchemaViolationException
if the partition columns are not included.
Validate that the schema of a given Spark Data Frame df contains the specified primary key columns.
Validate that the schema of a given Spark Data Frame df contains the specified primary key columns.
The data frame to validate.
role used in exception message. Set to read or write.
SchemaViolationException
if the primary key columns are not included.
Validate the schema of a given Spark Data Frame df against schemaMin.
Validate the schema of a given Spark Data Frame df against schemaMin.
The data frame to validate.
role used in exception message. Set to read or write.
SchemaViolationException
if the schemaMin does not validate.
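The idea of a minimal-schema check can be sketched in plain Spark as follows; the real method throws a SchemaViolationException, while this hypothetical helper uses a standard exception:

  import org.apache.spark.sql.DataFrame
  import org.apache.spark.sql.types.StructType

  // Illustrative minimal-schema check (not the SDLB implementation): every column of
  // schemaMin must be present in the DataFrame, otherwise the validation fails.
  def assertSchemaMin(df: DataFrame, schemaMin: StructType, role: String): Unit = {
    val missing = schemaMin.fieldNames.toSet -- df.schema.fieldNames.toSet
    if (missing.nonEmpty)
      throw new IllegalArgumentException(
        s"($role) DataFrame is missing columns required by schemaMin: ${missing.mkString(", ")}")
  }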
Write DataFrame to DataObject
Write DataFrame to DataObject
the DataFrame to write
partition values included in the DataFrame's data
true if the DataFrame needs this DataObject as input; special treatment might be needed in this case.
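A plain-Spark analogue of writing a DataFrame into a partitioned Hive table looks like this (illustrative only; table name, partition column and format are examples, not the SDLB write path):

  import org.apache.spark.sql.{DataFrame, SaveMode}

  // Illustrative analogue only: write a DataFrame into a partitioned Hive table.
  def writeToHiveTable(df: DataFrame): Unit =
    df.write
      .mode(SaveMode.Overwrite)      // corresponds to saveMode = "overwrite"
      .partitionBy("dt")             // partition columns of the data object
      .format("parquet")
      .saveAsTable("default.my_table")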
Write DataFrame to specific Path with properties of this DataObject.
Write DataFrame to specific Path with properties of this DataObject. This is needed for compacting partitions by housekeeping. Note: this is optional to implement.
Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame.
Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects will override this with specific implementations (e.g. Kafka).
The Streaming DataFrame to write
Trigger frequency for stream
location for checkpoints of streaming query
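A sketch of the foreachBatch-based default described above, in plain Spark Structured Streaming (illustrative; the batch write function stands in for this trait's writeDataFrame):

  import org.apache.spark.sql.DataFrame
  import org.apache.spark.sql.streaming.{StreamingQuery, Trigger}

  // Illustrative sketch of the foreachBatch default: each micro-batch is handed to a
  // batch write function, which stands in for writeDataFrame.
  def writeStreaming(df: DataFrame, checkpointLocation: String)(writeBatch: DataFrame => Unit): StreamingQuery = {
    val handleBatch: (DataFrame, Long) => Unit = (batchDf, _) => writeBatch(batchDf)
    df.writeStream
      .trigger(Trigger.ProcessingTime("10 seconds"))     // trigger frequency for the stream
      .option("checkpointLocation", checkpointLocation)  // location for checkpoints of the query
      .foreachBatch(handleBatch)
      .start()
  }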
DataObject of type Hive. Provides details to access Hive tables to an Action. A hypothetical instantiation sketch is given after the parameter list below.
unique name of this data object
hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. If the DataObject is only used for reading or if the HiveTable already exists, the path can be omitted. If the HiveTable already exists but with a different path, a warning is issued.
partition columns for this data object
enable compute statistics after writing data (default=false)
type of date column
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
hive table to be written by this output
number of files created when writing into an empty table (otherwise the number will be derived from the existing data)
spark SaveMode to use when writing files, default is "overwrite"
override the connection's permissions for files created in this table's hadoop directory
optional id of io.smartdatalake.workflow.connection.HiveTableConnection
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.
meta data
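To make the parameters above concrete, here is a minimal, hypothetical instantiation sketch. All parameter names, helper types and package paths (DataObjectId, ConnectionId, Table, InstanceRegistry) are assumptions based on typical SDLB usage rather than taken from this page, and should be checked against the actual API:

  import io.smartdatalake.config.InstanceRegistry
  import io.smartdatalake.config.SdlConfigObject.{ConnectionId, DataObjectId}
  import io.smartdatalake.workflow.dataobject.{HiveTableDataObject, Table}

  // Hypothetical sketch only: names, types and defaults are assumptions, not the
  // documented API. An implicit InstanceRegistry is assumed to be required.
  implicit val instanceRegistry: InstanceRegistry = new InstanceRegistry()

  val stgMyTable = HiveTableDataObject(
    id = DataObjectId("stg-my-table"),                       // unique name of this data object
    path = Some("hdfs:///data/stg/my_table"),                // hadoop directory for this table
    partitions = Seq("dt"),                                  // partition columns
    table = Table(db = Some("default"), name = "my_table"),  // hive table to be written
    connectionId = Some(ConnectionId("hiveCon"))             // optional HiveTableConnection id
  )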