Connect by custom authorization header
Authentication modes define how an application authenticates itself to a given data object/connection
Connect by basic authentication
Definition of a Spark SQL condition with description. This is used for example to define failConditions of PartitionDiffMode.
Condition formulated as Spark SQL. The attributes available are dependent on the context.
A textual description of the condition to be shown in error messages.
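For example, a failConditions entry might be sketched as follows (the expression is illustrative and uses the attribute selectedOutputPartitionValues described further below; available attributes depend on the context):
failConditions = [{
  expression = "size(selectedOutputPartitionValues) = 0"
  description = "no new partitions selected for processing"
}]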
Connect with custom HTTP authentication
class name implementing trait CustomHttpAuthModeLogic
Options to pass to the custom auth mode logic in the prepare function
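A hedged configuration sketch; the class name and option keys are hypothetical:
authMode {
  type = CustomHttpAuthMode
  className = "com.mycompany.auth.MyHttpAuthLogic"   # hypothetical implementation of CustomHttpAuthModeLogic
  options = { tokenEndpoint = "https://auth.example.com/token" }   # illustrative options passed to the prepare function
}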
Execution mode to create custom partition execution mode logic. Define a function which receives the main input and output DataObjects and returns the partition values to process as Seq[Map[String,String]] (see the sketch below).
class name implementing trait CustomPartitionModeLogic
optional alternative outputId of DataObject later in the DAG. This replaces the mainOutputId. It can be used to ensure processing all partitions over multiple actions in case of errors.
Options specified in the configuration for this execution mode
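A hedged configuration sketch; the class name, DataObject id and option keys are hypothetical:
executionMode {
  type = CustomPartitionMode
  className = "com.mycompany.execmode.MyPartitionModeLogic"   # hypothetical implementation of CustomPartitionModeLogic
  alternativeOutputId = finalOutput                           # optional DataObject later in the DAG
  options = { lookbackDays = "7" }                            # illustrative options passed to the custom logic
}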
An execution mode for incremental processing by remembering DataObjects state from last increment.
Attributes definition for spark expressions used as ExecutionMode conditions.
Partition values specified with command line (start action) or passed from previous action
True if the current action is a start node of the DAG.
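For example, an applyCondition expression (parameter name assumed) can reference these attributes; the following sketch applies the execution mode only for start nodes of the DAG:
applyCondition = "isStartNode"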
Execution mode defines how data is selected when running a data pipeline. You need to select one of the subclasses by defining type, e.g.
executionMode = { type = SparkIncrementalMode compareCol = "id" }
Result of execution mode application
An execution mode which just validates that partition values are given. Note: For start nodes of the DAG partition values can be defined by command line, for subsequent nodes partition values are passed on from previous nodes.
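A minimal sketch selecting this mode; no additional parameters are shown:
executionMode = { type = FailIfNoPartitionValuesMode }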
Execution mode to incrementally process file-based DataObjects. It takes all existing files in the input DataObject and removes (deletes) them after processing. Input partition values are applied when searching for files and also used as output partition values.
Connect by using Keycloak to manage token and token refresh giving clientId/secret as information. For HTTP Connection this is used as Bearer token in Authorization header.
Partition difference execution mode lists partitions on the mainInput and mainOutput DataObject and starts loading all missing partitions. Partition columns to be used for comparison need to be a common 'init' of input and output partition columns. This mode needs mainInput/Output DataObjects that implement CanHandlePartitions to list partitions. Partition values are passed to following actions for partition columns which they have in common. A configuration sketch is shown below, after the attribute descriptions.
optional number of partition columns to use as a common 'init'.
optional alternative outputId of DataObject later in the DAG. This replaces the mainOutputId. It can be used to ensure processing all partitions over multiple actions in case of errors.
optional restriction of the number of partition values per run.
Condition to decide if execution mode should be applied or not. Define a spark sql expression working with attributes of DefaultExecutionModeExpressionData returning a boolean. Default is to apply the execution mode if given partition values (partition values from command line or passed from previous action) are not empty.
List of conditions that fail the application of the execution mode if they evaluate to true. Define them as Spark SQL expressions working with attributes of PartitionDiffModeExpressionData and returning a boolean. By default the application of the PartitionDiffMode does not fail the action; if there is no data to process, the following actions are skipped. Multiple conditions are evaluated individually and any of them may fail the execution mode (or-logic).
optional expression to define or refine the list of selected output partitions. Define a spark sql expression working with the attributes of PartitionDiffModeExpressionData returning a list<map<string,string>>. Default is to return the originally selected output partitions found in attribute selectedOutputPartitionValues.
If true, the partition values transform of custom transformations is applied to input partition values before they are compared with output partition values. If enabled, input and output partition columns can be different. Default is to disable the transformation of partition values.
optional expression to refine the list of selected input partitions. Note that primarily output partitions are selected by PartitionDiffMode. The selected output partitions are then transformed back to the input partitions needed to create the selected output partitions. This is one-to-one except if applyPartitionValuesTransform=true. And sometimes there is a need for additional input data to create the output partitions, e.g. if you aggregate a window of 7 days for every day. You can customize selected input partitions by defining a spark sql expression working with the attributes of PartitionDiffModeExpressionData returning a list<map<string,string>>. Default is to return the originally selected input partitions found in attribute selectedInputPartitionValues.
partition values received by main input or command line
all partition values existing in main input DataObject
all partition values existing in main output DataObject
input partition values selected by PartitionDiffMode
output partition values selected by PartitionDiffMode
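A hedged PartitionDiffMode configuration sketch; the parameter names are inferred from the option descriptions above and the values are illustrative:
executionMode {
  type = PartitionDiffMode
  partitionColNb = 1                 # use only the first partition column as common 'init'
  nbOfPartitionValuesPerRun = 10     # restrict the number of partition values processed per run
}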
An execution mode which forces processing all data from its inputs.
Validate by user and private/public key. The private key is read from .ssh.
Validate by SASL_SSL authentication: user/password and truststore.
Validate by SSL certificates: only location and credentials. Additional attributes should be supplied via the options map.
This class can be used to override save mode without further special parameters.
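A hedged sketch, assuming the parameter is called saveMode:
saveModeOptions {
  type = SaveModeGenericOptions
  saveMode = Overwrite
}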
Options to control detailed behaviour of SaveMode.Merge. In Spark expressions use table alias 'existing' to reference columns of the existing table data, and table alias 'new' to reference columns of new data set.
A condition to control if matched records are deleted. If no condition is given, *no* records are deleted.
A condition to control if matched records are updated. If no condition is given all matched records are updated (default). Note that delete is applied before update. Records selected for deletion are automatically excluded from the updates.
List of column names to update in update clause. If empty all columns (except primary keys) are updated (default)
A condition to control if unmatched records are inserted. If no condition is given all unmatched records are inserted (default).
List of column names to ignore in insert clause. If empty all columns are inserted (default).
To optimize performance for SDLSaveMode.Merge it can help to limit the records read from the existing table data, e.g. a merge operation might only need the last 7 days.
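A hedged SaveModeMergeOptions sketch; the parameter names are inferred from the option descriptions above, and all column names are hypothetical:
saveModeOptions {
  type = SaveModeMergeOptions
  deleteCondition = "new.deleted = true"                                  # hypothetical flag column in the new data set
  updateColumns = [value, last_updated]                                   # hypothetical columns to update
  insertColumnsToIgnore = [technical_id]                                  # hypothetical column excluded from insert
  additionalMergePredicate = "existing.dt > date_sub(current_date(), 7)"  # limit existing data read to the last 7 days
}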
Override and control detailed behaviour of saveMode, especially SaveMode.Merge for now.
Compares max entry in "compare column" between mainOutput and mainInput and incrementally loads the delta. This mode works only with SparkSubFeeds. The filter is not propagated to following actions.
a comparable column name existing in mainInput and mainOutput used to identify the delta. Column content should be bigger for newer records.
optional alternative outputId of DataObject later in the DAG. This replaces the mainOutputId. It can be used to ensure processing all partitions over multiple actions in case of errors.
Condition to decide if execution mode should be applied or not. Define a spark sql expression working with attributes of DefaultExecutionModeExpressionData returning a boolean. Default is to apply the execution mode if given partition values (partition values from command line or passed from previous action) are not empty.
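A hedged sketch extending the example shown earlier; the column name is hypothetical and the applyCondition parameter name is assumed:
executionMode {
  type = SparkIncrementalMode
  compareCol = "last_modified"        # hypothetical timestamp column existing in mainInput and mainOutput
  applyCondition = "not isStartNode"  # apply the mode only when this is not a start node of the DAG
}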
Spark streaming execution mode uses Spark Structured Streaming to incrementally execute data loads and keep track of processed data. This mode needs a DataObject implementing CanCreateStreamingDataFrame and works only with SparkSubFeeds. This mode can be executed synchronously in the DAG by using triggerType=Once, or asynchronously as Streaming Query with triggerType = ProcessingTime or Continuous.
location for checkpoints of streaming query to keep state
define execution interval of Spark streaming query. Possible values are Once (default), ProcessingTime & Continuous. See Trigger for details. Note that this is only applied if SDL is executed in streaming mode. If SDL is executed in normal mode, TriggerType=Once is used always. If triggerType=Once, the action is repeated with Trigger.Once in SDL streaming mode.
Time as String in triggerType = ProcessingTime or Continuous. See Trigger for details.
Additional options to apply when reading from the streaming source. These overwrite options set by the DataObjects.
Additional options to apply when writing to the streaming sink. These overwrite options set by the DataObjects.
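A hedged configuration sketch; triggerType is named in the description above, the other parameter names and values are assumptions:
executionMode {
  type = SparkStreamingMode
  checkpointLocation = "/checkpoints/my-streaming-action"   # hypothetical path to keep streaming state
  triggerType = ProcessingTime
  triggerTime = "10 seconds"
  inputOptions = { maxFilesPerTrigger = "100" }             # illustrative Spark file-source reader option
}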
Connect by token. For HTTP Connection this is used as Bearer token in Authorization header.
Datatype for date columns in Hive
Environment dependent configurations. They can be set
- by Java system properties (prefixed with "sdl.", e.g. "sdl.hadoopAuthoritiesWithAclsRequired")
- by environment variables (prefixed with "SDL_" and camelCase converted to uppercase, e.g. "SDL_HADOOP_AUTHORITIES_WITH_ACLS_REQUIRED")
- by a custom io.smartdatalake.app.SmartDataLakeBuilder implementation for your environment, which sets these variables directly.
Hive conventions
Suffix used for alternating parquet HDFS paths (usually in TickTockHiveTableDataObject for integration layer)
Options for HDFS output
SDL supports more SaveModes than Spark, which is why it has its own definition SDLSaveMode.
Column names specific to historization of Hive tables
Authentication modes define how an application authenticates itself to a given data object/connection
You need to define one of the AuthModes (subclasses) as type, e.g.
authMode { type = BasicAuthMode user = myUser password = myPassword }