io.smartdatalake.workflow.dataobject
Create an empty partition.
Creates the read schema based on a given write schema.
Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove & add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.
Delete given partitions.
Delete given partitions. This is used by housekeeping to clean up partitions. Note: implementing this is optional.
Definition of partitions that are expected to exist.
Definition of partitions that are expected to exist. This is used to validate that partitions being read exist and don't return empty data. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Example: "elements['yourColName'] > 2017"
true if partition is expected to exist.
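As a hedged illustration, such a condition could appear in a HOCON DataObject configuration roughly like this (the DataObject name, table and partition column are made up for the example):

```hocon
dataObjects {
  myTable {
    type = HiveTableDataObject
    table = { db = "default", name = "my_table" }
    partitions = [year]
    # Evaluated against each PartitionValues instance; partitions for which
    # this returns true are expected to exist and to return data.
    expectedPartitionsCondition = "elements['year'] > 2017"
  }
}
```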
Returns the factory that can parse this type (that is, type CO).
Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.
the factory (object) for this class.
Handle class cast exceptions when getting objects from the instance registry.
Configure a housekeeping mode to e.g. clean up, archive and compact partitions.
Configure a housekeeping mode to e.g. clean up, archive and compact partitions. Default is None.
A unique identifier for this instance.
Called during init phase for checks and initialization.
Called during init phase for checks and initialization. If possible, don't change the system until the execution phase.
List Hive table partitions.
Additional metadata for the DataObject
Definition of partition columns
Runs operations before writing to the DataObject. Note: as the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.
Prepare & test the DataObject's prerequisites.
This runs during the "prepare" operation of the DAG.
An optional, minimal schema that a DataObject schema must have to pass schema validation.
The schema validation semantics are: Schema A is valid with respect to a minimal schema B when B is a subset of A. This means the whole column set of B is contained in the column set of A.
Note: This is mainly used by the functionality defined in CanCreateDataFrame and CanWriteDataFrame, that is, when reading or writing Spark data frames from/to the underlying data container. io.smartdatalake.workflow.action.Actions that work with files ignore the schemaMin attribute if it is defined.
Additionally, schemaMin can be used to define the schema used if there is no data or the table doesn't yet exist.
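The subset semantics above can be sketched in plain Scala. This is a simplified, hypothetical helper (the real validation works on Spark StructTypes), shown only to make the "B is a subset of A" rule concrete:

```scala
// Simplified sketch of minimal-schema validation: schema A passes
// against minimal schema B when every column of B (name and type)
// also appears in A. Col is a stand-in for a Spark StructField.
case class Col(name: String, dataType: String)

def isValidAgainstSchemaMin(schema: Seq[Col], schemaMin: Seq[Col]): Boolean =
  schemaMin.forall(minCol =>
    schema.exists(c => c.name == minCol.name && c.dataType == minCol.dataType))

val schema    = Seq(Col("id", "long"), Col("name", "string"), Col("dt", "date"))
val schemaMin = Seq(Col("id", "long"), Col("dt", "date"))
// schemaMin's columns are all contained in schema, so validation passes;
// a minimal schema with a column missing from schema would fail.
```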
Validate the schema of a given Spark DataFrame df against a given expected schema.
The data frame to validate.
The expected schema to validate against.
Role used in the exception message. Set to read or write.
SchemaViolationException if the schemaMin does not validate.
Validate that the schema of a given Spark DataFrame df contains the specified partition columns.
The data frame to validate.
Role used in the exception message. Set to read or write.
SchemaViolationException if the partition columns are not included.
Validate that the schema of a given Spark DataFrame df contains the specified primary key columns.
The data frame to validate.
Role used in the exception message. Set to read or write.
SchemaViolationException if the primary key columns are not included.
Validate the schema of a given Spark DataFrame df against schemaMin.
The data frame to validate.
Role used in the exception message. Set to read or write.
SchemaViolationException if the schemaMin does not validate.
Write a DataFrame to the DataObject.
the DataFrame to write
partition values included in the DataFrame's data
if DataFrame needs this DataObject as input - special treatment might be needed in this case.
Writes DataFrame to HDFS/Parquet and creates Hive table.
Writes DataFrame to HDFS/Parquet and creates Hive table. DataFrames are repartitioned in order not to write too many small files or only a few HDFS files that are too large.
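A hedged sketch of how such a repartitioning target could be derived. The helper, the default file size and the usage line are illustrative assumptions, not SDLB's actual implementation:

```scala
// Illustrative only: choose a partition count so that written files land
// near a target size, avoiding both many tiny files and a few huge ones.
def targetNumPartitions(estimatedDataBytes: Long,
                        targetFileBytes: Long = 128L * 1024 * 1024): Int =
  math.max(1, math.ceil(estimatedDataBytes.toDouble / targetFileBytes).toInt)

// Assumed usage before writing (df is a Spark DataFrame):
// df.repartition(targetNumPartitions(estimatedBytes)).write.parquet(path)
```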
Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame.
Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects will override this with specific implementations (Kafka).
The Streaming DataFrame to write
Trigger frequency for stream
location for checkpoints of streaming query
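A minimal, non-runnable sketch of the foreachBatch default described above, using Spark's structured streaming API. The writeDataFrame call stands in for this trait's batch write method and is an assumption of the sketch, not the literal SDLB code:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.streaming.Trigger

// Sketch: default streaming write via foreachBatch, delegating each
// micro-batch to the DataObject's batch write method (assumed name).
def writeStreamingDataFrame(df: DataFrame, trigger: Trigger, checkpointLocation: String): Unit = {
  df.writeStream
    .trigger(trigger)                                  // e.g. Trigger.ProcessingTime("10 seconds")
    .option("checkpointLocation", checkpointLocation)  // recovery state of the streaming query
    .foreachBatch { (batchDf: DataFrame, batchId: Long) =>
      writeDataFrame(batchDf)                          // assumed batch write method of this trait
    }
    .start()
}
```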