object SDLSaveMode extends Enumeration
SDL supports more save modes than Spark, which is why SDL defines its own SDLSaveMode enumeration.
Linear Supertypes: Enumeration, Serializable, AnyRef, Any
Type Members
- type SDLSaveMode = Value
- class SDLSaveModeValue extends AnyRef
- protected class Val extends Value with Serializable (Enumeration, @SerialVersionUID())
- abstract class Value extends Ordered[Value] with Serializable (Enumeration, @SerialVersionUID())
- class ValueSet extends AbstractSet[Value] with SortedSet[Value] with SortedSetLike[Value, ValueSet] with Serializable (Enumeration)
Value Members
- final def !=(arg0: Any): Boolean (AnyRef → Any)
- final def ##(): Int (AnyRef → Any)
- final def ==(arg0: Any): Boolean (AnyRef → Any)
- val Append: Value (see also: SaveMode)
- val ErrorIfExists: Value (see also: SaveMode)
- val Ignore: Value (see also: SaveMode)
val
Merge: Value
Merge new data with existing data by insert new records and update (or delete) existing records.
Merge new data with existing data by insert new records and update (or delete) existing records. DataObjects need primary key defined to check if a record is new. To delete existing records add column '_deleted' to DataFrame and set its value to 'true' for the records which should be deleted.
Note that only few DataObjects are able to merge new data, e.g. DeltaLakeTableDataObject and JdbcTableDataObject
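  As a sketch of how records could be flagged for deletion before a merge write (assuming a local Spark session; the primary key column "id" and the sample data are hypothetical, while "_deleted" is the column named above):

  ```scala
  import org.apache.spark.sql.{DataFrame, SparkSession}
  import org.apache.spark.sql.functions.lit

  val spark = SparkSession.builder.master("local[1]").appName("merge-sketch").getOrCreate()
  import spark.implicits._

  // New or changed records, keyed by the DataObject's primary key "id" (hypothetical).
  val upserts: DataFrame = Seq((1, "changed"), (2, "new")).toDF("id", "value")
  // Records that should be removed from the target.
  val deletions: DataFrame = Seq((3, "obsolete")).toDF("id", "value")

  // Mark deletions with _deleted = true, everything else with _deleted = false.
  val toWrite = upserts.withColumn("_deleted", lit(false))
    .union(deletions.withColumn("_deleted", lit(true)))
  // toWrite would then be written to a merge-capable DataObject
  // (e.g. DeltaLakeTableDataObject) configured with saveMode Merge.
  ```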
- val Overwrite: Value (see also: SaveMode)
- val OverwriteOptimized: Value
  This is like SDLSaveMode.Overwrite, but processed partitions are deleted manually instead of using dynamic partitioning mode; Spark's append mode is then used to add the new partitions. This helps if there are performance problems when using dynamic partitioning mode with Hive tables and many partitions.
  Implementation: this save mode deletes processed partition directories manually. If no partition values are present when writing to a partitioned data object, all partitions are deleted. This differs from Spark's dynamic partitioning (enabled by default in SDL), which only deletes partitions for which data is present in the DataFrame to be written. To stop if no partition values are present, configure executionMode.type = FailIfNoPartitionValuesMode on the Action.
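  A hedged configuration sketch of the guard mentioned above; only executionMode.type = FailIfNoPartitionValuesMode is stated by this page, while the action name and surrounding keys are hypothetical placeholders:

  ```
  actions {
    myPartitionedCopy {              # hypothetical action name
      ...
      # Fail the Action instead of deleting everything when no
      # partition values are supplied for a partitioned data object:
      executionMode.type = FailIfNoPartitionValuesMode
    }
  }
  ```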
- val OverwritePreserveDirectories: Value
  This is like SDLSaveMode.Overwrite, but it does not delete the directory of the DataObject and its partitions, only the files inside; Spark's append mode is then used to add the new files. This way, ACLs set on the base directory are preserved.
  Implementation: this save mode deletes all files inside the base directory, but not the directory itself. If no partition values are present when writing to a partitioned data object, all files in all partitions are deleted, but not the partition directories themselves. This differs from Spark's dynamic partitioning (enabled by default in SDL), which only deletes partitions for which data is present in the DataFrame to be written. To stop if no partition values are present, configure executionMode.type = FailIfNoPartitionValuesMode on the Action.
- protected final def Value(i: Int, name: String): Value (Enumeration)
- protected final def Value(name: String): Value (Enumeration)
- protected final def Value(i: Int): Value (Enumeration)
- protected final def Value: Value (Enumeration)
- final def apply(x: Int): Value (Enumeration)
- final def asInstanceOf[T0]: T0 (Any)
- protected[lang] def clone(): AnyRef (AnyRef, @throws( ... ) @native())
- final def eq(arg0: AnyRef): Boolean (AnyRef)
- def equals(arg0: Any): Boolean (AnyRef → Any)
- protected[lang] def finalize(): Unit (AnyRef, @throws( classOf[java.lang.Throwable] ))
- final def getClass(): Class[_] (AnyRef → Any, @native())
- def hashCode(): Int (AnyRef → Any, @native())
- final def isInstanceOf[T0]: Boolean (Any)
- final def maxId: Int (Enumeration)
- final def ne(arg0: AnyRef): Boolean (AnyRef)
- protected var nextId: Int (Enumeration)
- protected var nextName: Iterator[String] (Enumeration)
- final def notify(): Unit (AnyRef, @native())
- final def notifyAll(): Unit (AnyRef, @native())
- protected def readResolve(): AnyRef (Enumeration)
- final def synchronized[T0](arg0: ⇒ T0): T0 (AnyRef)
- def toString(): String (Enumeration → AnyRef → Any)
- implicit def value2SparkSaveMode(mode: Value): SDLSaveModeValue
- def values: ValueSet (Enumeration)
- final def wait(): Unit (AnyRef, @throws( ... ))
- final def wait(arg0: Long, arg1: Int): Unit (AnyRef, @throws( ... ))
- final def wait(arg0: Long): Unit (AnyRef, @throws( ... ) @native())
- final def withName(s: String): Value (Enumeration)
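Since SDLSaveMode is a standard scala.Enumeration, the inherited members above behave as usual. A minimal, self-contained sketch (a stand-in object mirroring the values listed on this page, not the real smartdatalake class):

```scala
// Stand-in mirroring SDLSaveMode's values, to illustrate the Enumeration members above.
object SDLSaveMode extends Enumeration {
  type SDLSaveMode = Value
  val Append, ErrorIfExists, Ignore, Merge, Overwrite,
      OverwriteOptimized, OverwritePreserveDirectories = Value
}

// withName resolves a value by its string name (throws NoSuchElementException if unknown).
assert(SDLSaveMode.withName("Merge") == SDLSaveMode.Merge)
// apply resolves by id; ids are assigned in declaration order starting at 0.
assert(SDLSaveMode(0) == SDLSaveMode.Append)
// values returns the ValueSet of all members, sorted by id.
assert(SDLSaveMode.values.size == 7)
```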