io.smartdatalake.workflow.dataobject

RawFileDataObject

Related Docs: object RawFileDataObject | package dataobject

case class RawFileDataObject(id: DataObjectId, path: String, customFormat: Option[String] = None, options: Map[String, String] = Map(), fileName: String = "*", partitions: Seq[String] = Seq(), schema: Option[StructType] = None, schemaMin: Option[StructType] = None, saveMode: SDLSaveMode = SDLSaveMode.Overwrite, sparkRepartition: Option[SparkRepartitionDef] = None, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, filenameColumn: Option[String] = None, expectedPartitionsCondition: Option[String] = None, housekeepingMode: Option[HousekeepingMode] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends SparkFileDataObject with CanCreateDataFrame with CanWriteDataFrame with Product with Serializable

DataObject of type raw for files with unknown content. Provides details to an Action to access raw files. By specifying customFormat you can use a custom Spark data format to read and write these files with Spark.

customFormat

Custom Spark data source format, e.g. binaryFile or text. Only needed if you want to read/write this DataObject with Spark.

options

Options for custom Spark data source format. Only of use if you want to read/write this DataObject with Spark.

fileName

Definition of fileName. This is concatenated with path and partition layout to search for files. Default is an asterisk to match everything.

saveMode

Overwrite or Append new data.

expectedPartitionsCondition

Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.

housekeepingMode

Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.
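
A minimal construction sketch in Scala (not part of the generated documentation). The constructor call follows the signature above; the import paths, the no-arg InstanceRegistry constructor, the elements accessor used in expectedPartitionsCondition and all concrete values are assumptions for illustration only.

    import io.smartdatalake.config.InstanceRegistry
    import io.smartdatalake.config.SdlConfigObject.DataObjectId
    import io.smartdatalake.definitions.SDLSaveMode
    import io.smartdatalake.workflow.dataobject.RawFileDataObject

    // DataObjects are registered in an InstanceRegistry (no-arg constructor assumed)
    implicit val instanceRegistry: InstanceRegistry = new InstanceRegistry()

    val rawEvents = RawFileDataObject(
      id = DataObjectId("raw-events"),          // unique id of this DataObject
      path = "hdfs:///data/raw/events",         // root path of the handled files
      customFormat = Some("binaryFile"),        // only needed to read/write with Spark
      fileName = "*.bin",                       // concatenated with path and partition layout
      partitions = Seq("dt"),                   // partition columns
      saveMode = SDLSaveMode.Overwrite,         // Overwrite or Append new data
      // expect only partitions from 2017 onward (assumed `elements` accessor of PartitionValues)
      expectedPartitionsCondition = Some("elements['dt'] > '2017-01-01'")
    )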

Linear Supertypes
Serializable, Serializable, Product, Equals, SparkFileDataObject, SchemaValidation, UserDefinedSchema, CanCreateStreamingDataFrame, CanWriteDataFrame, CanCreateDataFrame, HadoopFileDataObject, HasHadoopStandardFilestore, CanCreateOutputStream, CanCreateInputStream, FileRefDataObject, FileDataObject, CanHandlePartitions, DataObject, AtlasExportable, SmartDataLakeLogger, ParsableFromConfig[DataObject], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new RawFileDataObject(id: DataObjectId, path: String, customFormat: Option[String] = None, options: Map[String, String] = Map(), fileName: String = "*", partitions: Seq[String] = Seq(), schema: Option[StructType] = None, schemaMin: Option[StructType] = None, saveMode: SDLSaveMode = SDLSaveMode.Overwrite, sparkRepartition: Option[SparkRepartitionDef] = None, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, filenameColumn: Option[String] = None, expectedPartitionsCondition: Option[String] = None, housekeepingMode: Option[HousekeepingMode] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry)

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. val acl: Option[AclDef]

    Return the ACL definition for the Hadoop path of this DataObject

    Definition Classes
    RawFileDataObject → HadoopFileDataObject
    See also

    org.apache.hadoop.fs.permission.AclEntry

  5. def addFieldIfNotExisting(writeSchema: StructType, colName: String, dataType: DataType): StructType

    Attributes
    protected
    Definition Classes
    CanCreateDataFrame
  6. def afterRead(df: DataFrame)(implicit session: SparkSession): DataFrame

    Callback that enables potential transformation to be applied to df after the data is read.

    Default is to validate the schemaMin and not apply any modification.

    Definition Classes
    SparkFileDataObject
  7. def applyAcls(implicit session: SparkSession): Unit

    Attributes
    protected[io.smartdatalake.workflow]
    Definition Classes
    HadoopFileDataObject
  8. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  9. def atlasName: String

    Definition Classes
    DataObject → AtlasExportable
  10. def atlasQualifiedName(prefix: String): String

    Definition Classes
    AtlasExportable
  11. def beforeWrite(df: DataFrame)(implicit session: SparkSession): DataFrame

    Callback that enables potential transformation to be applied to df before the data is written.

    Default is to validate the schemaMin and not apply any modification.

    Definition Classes
    SparkFileDataObject
  12. def checkFilesExisting(implicit session: SparkSession): Boolean

    Check if the input files exist.

    Attributes
    protected
    Definition Classes
    HadoopFileDataObject
    Exceptions thrown

    IllegalArgumentException if failIfFilesMissing = true and no files found at path.

  13. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  14. def compactPartitions(partitionValues: Seq[PartitionValues])(implicit session: SparkSession, actionPipelineContext: ActionPipelineContext): Unit

    Compact partitions using Spark

    Definition Classes
    SparkFileDataObject → CanHandlePartitions
  15. val connection: Option[HadoopFileConnection]

    Attributes
    protected
    Definition Classes
    HadoopFileDataObject
  16. val connectionId: Option[ConnectionId]

    Return the connection id.

    Connection defines the path prefix (scheme, authority, base path) and ACLs in a central location.

    Definition Classes
    RawFileDataObject → HadoopFileDataObject
  17. def createEmptyPartition(partitionValues: PartitionValues)(implicit session: SparkSession): Unit

    create empty partition

    Definition Classes
    HadoopFileDataObject → CanHandlePartitions
  18. def createInputStream(path: String)(implicit session: SparkSession): InputStream

    Definition Classes
    HadoopFileDataObject → CanCreateInputStream
  19. def createOutputStream(path: String, overwrite: Boolean)(implicit session: SparkSession): OutputStream

    Definition Classes
    HadoopFileDataObject → CanCreateOutputStream
  20. def createReadSchema(writeSchema: StructType)(implicit session: SparkSession): StructType

    Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove & add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.

    Definition Classes
    SparkFileDataObject → CanCreateDataFrame
  21. val customFormat: Option[String]

    Custom Spark data source format, e.g. binaryFile or text. Only needed if you want to read/write this DataObject with Spark.

  22. def deleteAll(implicit session: SparkSession): Unit

    Delete all data. This is used to implement SaveMode.Overwrite.

    Definition Classes
    HadoopFileDataObject → FileRefDataObject
  23. def deleteAllFiles(path: Path)(implicit session: SparkSession): Unit

    delete all files inside given path recursively

    Definition Classes
    HadoopFileDataObject
  24. def deleteFileRefs(fileRefs: Seq[FileRef])(implicit session: SparkSession): Unit

    Delete given files. This is used to cleanup files after they are processed.

    Definition Classes
    HadoopFileDataObject → FileRefDataObject
  25. def deletePartitions(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Delete Hadoop Partitions.

    if there is no value for a partition column before the last partition column given, the partition path will be exploded

    Definition Classes
    HadoopFileDataObject → CanHandlePartitions
  26. def deletePartitionsFiles(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Delete files inside Hadoop Partitions, but keep partition directory to preserve ACLs

    if there is no value for a partition column before the last partition column given, the partition path will be exploded

    Definition Classes
    HadoopFileDataObject
  27. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  28. val expectedPartitionsCondition: Option[String]

    Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.

    Definition Classes
    RawFileDataObject → CanHandlePartitions
  29. def extractPartitionValuesFromPath(filePath: String)(implicit session: SparkSession): PartitionValues

    Extract partition values from a given file path

    Attributes
    protected
    Definition Classes
    FileRefDataObject
  30. def factory: FromConfigFactory[DataObject]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.

    Definition Classes
    RawFileDataObject → ParsableFromConfig
  31. def failIfFilesMissing: Boolean

    Configure whether io.smartdatalake.workflow.action.Actions should fail if the input file(s) are missing on the file system.

    Default is false.

    Definition Classes
    HadoopFileDataObject
  32. val fileName: String

    Definition of fileName. This is concatenated with path and partition layout to search for files. Default is an asterisk to match everything.

    Definition Classes
    RawFileDataObject → FileRefDataObject
  33. val filenameColumn: Option[String]

    The name of the (optional) additional column containing the source filename

    Definition Classes
    RawFileDataObject → SparkFileDataObject
  34. def filterPartitionsExisting(partitionValues: Seq[PartitionValues])(implicit session: SparkSession, context: ActionPipelineContext): Seq[PartitionValues]

    Filters only existing partitions. Note that partition values to check don't need to have a key/value defined for every partition column.

    Definition Classes
    SparkFileDataObject
  35. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  36. def format: String

    The Spark-Format provider to be used

    Definition Classes
    RawFileDataObject → SparkFileDataObject
  37. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  38. def getConcretePaths(pv: PartitionValues)(implicit session: SparkSession): Seq[Path]

    Generate all paths for given partition values, exploding undefined partitions before the last given partition value. Use case: when reading all files from a given path with Spark, the path cannot contain wildcards. If there are partitions without a given partition value before the last partition value given, they must be searched with globs.

    Definition Classes
    HadoopFileDataObject
  39. def getConnection[T <: Connection](connectionId: ConnectionId)(implicit registry: InstanceRegistry, ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Handle class cast exception when getting objects from instance registry

    Attributes
    protected
    Definition Classes
    DataObject
  40. def getConnectionReg[T <: Connection](connectionId: ConnectionId, registry: InstanceRegistry)(implicit ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Attributes
    protected
    Definition Classes
    DataObject
  41. def getDataFrame(partitionValues: Seq[PartitionValues] = Seq())(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    Constructs an Apache Spark DataFrame from the underlying file content.

    session

    the current SparkSession.

    returns

    a new DataFrame containing the data stored in the file at path

    Definition Classes
    SparkFileDataObject → CanCreateDataFrame
    See also

    DataFrameReader
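
    A minimal read sketch, reusing the hypothetical rawEvents object from the construction example above. It assumes an implicit SparkSession and ActionPipelineContext are in scope (as provided by the SDL framework when running an Action) and that PartitionValues lives in io.smartdatalake.util.hdfs and wraps a Map of partition column values.

      import io.smartdatalake.util.hdfs.PartitionValues

      // Read all files of the DataObject into a DataFrame
      val dfAll = rawEvents.getDataFrame()

      // Read only the files of a single partition (partition column "dt" is illustrative)
      val dfDay = rawEvents.getDataFrame(Seq(PartitionValues(Map("dt" -> "2024-01-01"))))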

  42. def getFileRefs(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Seq[FileRef]

    List files for given partition values

    partitionValues

    List of partition values to be filtered. If empty all files in root path of DataObject will be listed.

    returns

    List of FileRefs

    Definition Classes
    HadoopFileDataObject → FileRefDataObject
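
    A brief listing sketch, again reusing the hypothetical rawEvents object and the assumed PartitionValues class from the read sketch above; an implicit SparkSession is assumed.

      import io.smartdatalake.util.hdfs.PartitionValues

      // List all files under the DataObject's root path (an empty Seq means no partition filter)
      val allFiles = rawEvents.getFileRefs(Seq())

      // List only the files of one partition
      val dayFiles = rawEvents.getFileRefs(Seq(PartitionValues(Map("dt" -> "2024-01-01"))))
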
  43. def getPartitionString(partitionValues: PartitionValues)(implicit session: SparkSession): Option[String]

    get partition values formatted by partition layout

    Definition Classes
    FileRefDataObject
  44. def getPath(implicit session: SparkSession): String

    Method for subclasses to override the base path for this DataObject. This is for instance needed if pathPrefix is defined in a connection.

    Definition Classes
    HadoopFileDataObject → FileRefDataObject
  45. def getSchema(sourceExists: Boolean): Option[StructType]

    Returns the user-defined schema for reading from the data source. By default, this should return schema but it may be customized by data objects that have a source schema and ignore the user-defined schema on read operations.

    If a user-defined schema is returned, it overrides any schema inference. If no user-defined schema is set, the schema may be inferred depending on the configuration and type of data frame reader.

    sourceExists

    Whether the source file/table exists already. Existing sources may have a source schema.

    returns

    The schema to use for the data frame reader when reading from the source.

    Definition Classes
    SparkFileDataObject
  46. def getSearchPaths(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Seq[(PartitionValues, String)]

    prepare paths to be searched

    Attributes
    protected
    Definition Classes
    FileRefDataObject
  47. def getStreamingDataFrame(options: Map[String, String], pipelineSchema: Option[StructType])(implicit session: SparkSession): DataFrame

    Definition Classes
    SparkFileDataObject → CanCreateStreamingDataFrame
  48. def hadoopPath(implicit session: SparkSession): Path

    Definition Classes
    HadoopFileDataObject → HasHadoopStandardFilestore
  49. val housekeepingMode: Option[HousekeepingMode]

    Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.

    Definition Classes
    RawFileDataObject → DataObject
  50. val id: DataObjectId

    A unique identifier for this instance.

    Definition Classes
    RawFileDataObject → DataObject → SdlConfigObject
  51. def init(df: DataFrame, partitionValues: Seq[PartitionValues], saveModeOptions: Option[SaveModeOptions] = None)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Called during the init phase for checks and initialization. If possible, don't change the system until the execution phase.

    Definition Classes
    SparkFileDataObject → CanWriteDataFrame
  52. implicit val instanceRegistry: InstanceRegistry

    Return the InstanceRegistry parsed from the SDL configuration used for this run.

    returns

    the current InstanceRegistry.

    Definition Classes
    RawFileDataObject → HadoopFileDataObject
  53. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  54. def listPartitions(implicit session: SparkSession, context: ActionPipelineContext): Seq[PartitionValues]

    List partitions on data object's root path

    Definition Classes
    HadoopFileDataObject → CanHandlePartitions
  55. lazy val logger: Logger

    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
  56. val metadata: Option[DataObjectMetadata]

    Additional metadata for the DataObject

    Definition Classes
    RawFileDataObject → DataObject
  57. def movePartitions(partitionValuesMapping: Seq[(PartitionValues, PartitionValues)])(implicit session: SparkSession): Unit

    Move given partitions. This is used to archive partitions by housekeeping. Note: this is optional to implement.

    Definition Classes
    HadoopFileDataObject → CanHandlePartitions
  58. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  59. final def notify(): Unit

    Definition Classes
    AnyRef
  60. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  61. val options: Map[String, String]

    Options for custom Spark data source format. Only of use if you want to read/write this DataObject with Spark.

    Definition Classes
    RawFileDataObject → SparkFileDataObject
  62. def partitionLayout(): Option[String]

    Return a String specifying the partition layout. For Hadoop the default partition layout is colname1=<value1>/colname2=<value2>/.../

    Definition Classes
    HasHadoopStandardFilestore
  63. val partitions: Seq[String]

    Definition of partition columns

    Definition Classes
    RawFileDataObject → CanHandlePartitions
  64. val path: String

    The root path of the files that are handled by this DataObject.

    Definition Classes
    RawFileDataObject → FileDataObject
  65. def postWrite(partitionValues: Seq[PartitionValues])(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Runs operations after writing to DataObject

    Definition Classes
    HadoopFileDataObject → DataObject
  66. def preWrite(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Runs operations before writing to the DataObject. Note: as the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.

    Definition Classes
    HadoopFileDataObject → DataObject
  67. def prepare(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Prepare & test the DataObject's prerequisites.

    This runs during the "prepare" operation of the DAG.

    Definition Classes
    HadoopFileDataObject → FileDataObject → DataObject
  68. def relativizePath(path: String)(implicit session: SparkSession): String

    Make a given path relative to this DataObject's base path.

    Definition Classes
    HadoopFileDataObject → FileDataObject
  69. val saveMode: SDLSaveMode

    Overwrite or Append new data.

    Definition Classes
    RawFileDataObject → FileRefDataObject
  70. val schema: Option[StructType]

    An optional DataObject user-defined schema definition.

    Some DataObjects support optional schema inference. Specifying this attribute disables automatic schema inference. When the wrapped data source contains a source schema, this schema attribute is ignored.

    Note: This is only used by the functionality defined in CanCreateDataFrame, that is, when reading Spark data frames from the underlying data container. io.smartdatalake.workflow.action.Actions that bypass Spark data frames ignore the schema attribute if it is defined.

    Definition Classes
    RawFileDataObject → UserDefinedSchema
  71. val schemaMin: Option[StructType]

    An optional, minimal schema that a DataObject schema must have to pass schema validation.

    The schema validation semantics are: Schema A is valid with respect to a minimal schema B when B is a subset of A. This means: the whole column set of B is contained in the column set of A.

    • A column of B is contained in A when A contains a column with equal name and data type.
    • Column order is ignored.
    • Column nullability is ignored.
    • Duplicate columns in terms of name and data type are eliminated (set semantics).

    Note: This is mainly used by the functionality defined in CanCreateDataFrame and CanWriteDataFrame, that is, when reading or writing Spark data frames from/to the underlying data container. io.smartdatalake.workflow.action.Actions that work with files ignore the schemaMin attribute if it is defined. Additionally, schemaMin can be used to define the schema used if there is no data or the table doesn't yet exist.

    Definition Classes
    RawFileDataObject → SchemaValidation
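
    A small Scala illustration of the subset semantics (a sketch; column names and types are made up):

      import org.apache.spark.sql.types._

      // Minimal schema this DataObject requires
      val schemaMin = StructType(Seq(
        StructField("id", StringType),
        StructField("payload", BinaryType)
      ))

      // A schema with an extra column and a different column order still validates,
      // because every column of schemaMin (same name and data type) is contained in it.
      val dfSchema = StructType(Seq(
        StructField("payload", BinaryType),
        StructField("ingested_at", TimestampType),
        StructField("id", StringType)
      ))
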
  72. val separator: Char

    default separator for paths

    Attributes
    protected
    Definition Classes
    FileDataObject
  73. val sparkRepartition: Option[SparkRepartitionDef]

    Definition of repartition operation before writing DataFrame with Spark to Hadoop.

    Definition Classes
    RawFileDataObject → SparkFileDataObject
  74. def streamingOptions: Map[String, String]

    Definition Classes
    CanWriteDataFrame
  75. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  76. def toStringShort: String

    Definition Classes
    DataObject
  77. def translateFileRefs(fileRefs: Seq[FileRef])(implicit session: SparkSession, context: ActionPipelineContext): Seq[FileRef]

    Given some FileRefs for another DataObject, translate the paths to the root path of this DataObject

    Definition Classes
    FileRefDataObject
  78. def validateSchema(df: DataFrame, schemaExpected: StructType, role: String): Unit

    Validate the schema of a given Spark Data Frame df against a given expected schema.

    df

    The data frame to validate.

    schemaExpected

    The expected schema to validate against.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    SchemaValidation
    Exceptions thrown

    SchemaViolationException if the schema does not validate.

  79. def validateSchemaHasPartitionCols(df: DataFrame, role: String): Unit

    Validate that the schema of a given Spark DataFrame df contains the specified partition columns.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    CanHandlePartitions
    Exceptions thrown

    SchemaViolationException if the partition columns are not included.

  80. def validateSchemaHasPrimaryKeyCols(df: DataFrame, primaryKeyCols: Seq[String], role: String): Unit

    Validate that the schema of a given Spark DataFrame df contains the specified primary key columns.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    CanHandlePartitions
    Exceptions thrown

    SchemaViolationException if the primary key columns are not included.

  81. def validateSchemaMin(df: DataFrame, role: String): Unit

    Validate the schema of a given Spark Data Frame df against schemaMin.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    SchemaValidation
    Exceptions thrown

    SchemaViolationException if the schemaMin does not validate.

  82. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  83. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  84. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  85. final def writeDataFrame(df: DataFrame, partitionValues: Seq[PartitionValues] = Seq(), isRecursiveInput: Boolean = false, saveModeOptions: Option[SaveModeOptions] = None)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Writes the provided DataFrame to the filesystem.

    The partitionValues attribute is used to partition the output by the given columns on the file system.

    df

    the DataFrame to write to the file system.

    partitionValues

    The partition layout to write.

    isRecursiveInput

    if the DataFrame needs this DataObject as input - special treatment might be needed in this case.

    session

    the current SparkSession.

    Definition Classes
    SparkFileDataObject → CanWriteDataFrame
    See also

    DataFrameWriter.partitionBy
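
    A minimal write sketch, reusing the hypothetical rawEvents object and assumed PartitionValues class from above; df is an existing DataFrame, and the implicit SparkSession and ActionPipelineContext are assumed to be provided by the SDL framework.

      import io.smartdatalake.util.hdfs.PartitionValues

      // Write df below the DataObject's path; the configured saveMode (Overwrite or Append)
      // determines how existing data is handled.
      rawEvents.writeDataFrame(df, partitionValues = Seq(PartitionValues(Map("dt" -> "2024-01-01"))))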

  86. def writeStreamingDataFrame(df: DataFrame, trigger: Trigger, options: Map[String, String], checkpointLocation: String, queryName: String, outputMode: OutputMode = OutputMode.Append, saveModeOptions: Option[SaveModeOptions] = None)(implicit session: SparkSession, context: ActionPipelineContext): StreamingQuery

    Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects will override this with specific implementations (Kafka).

    df

    The Streaming DataFrame to write

    trigger

    Trigger frequency for stream

    checkpointLocation

    location for checkpoints of streaming query

    Definition Classes
    SparkFileDataObject → CanWriteDataFrame
