frameless

TypedDataset

Related Docs: object TypedDataset | package frameless

class TypedDataset[T] extends TypedDatasetForwarded[T]

TypedDataset is a safer interface for working with Dataset.

NOTE: Prefer TypedDataset.create over new TypedDataset unless you know what you are doing.
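For instance, a minimal sketch of creating one (assumes an implicit SQLContext — or a SparkSession in newer frameless versions — is in scope; the TypedEncoder for the case class is derived automatically):

    case class Person(name: String, age: Int)

    val people: TypedDataset[Person] =
      TypedDataset.create(Seq(Person("Alice", 42), Person("Bob", 23)))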

Documentation marked "apache/spark" is thanks to apache/spark Contributors at https://github.com/apache/spark, licensed under Apache v2.0 available at http://www.apache.org/licenses/LICENSE-2.0

Self Type
TypedDataset[T]
Linear Supertypes
TypedDatasetForwarded[T], AnyRef, Any

Instance Constructors

  1. new TypedDataset(dataset: Dataset[T])(implicit encoder: TypedEncoder[T])

    Attributes
    protected[frameless]

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def apply[A](column: Witness.Lt[Symbol])(implicit exists: Exists[T, column.T, A], encoder: TypedEncoder[A]): TypedColumn[T, A]

    Returns a TypedColumn of type A given its name.

    tf('id)

    It is statically checked that a column with this name exists and has type A.
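    For example, with a hypothetical case class, the column reference is checked at compile time:

    case class Foo(id: Long, name: String)
    val fooDs: TypedDataset[Foo] = ???

    val idCol: TypedColumn[Foo, Long] = fooDs('id)   // compiles
    // fooDs('i)  // does not compile: Foo has no column named 'i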

  5. def as[U]()(implicit as: As[T, U]): TypedDataset[U]


    Returns a new TypedDataset where each record has been mapped on to the specified type.
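    For example (a sketch; the As[T, U] instance is available when the two record shapes match):

    case class Point(x: Double, y: Double)
    case class Coord(x: Double, y: Double)

    val points: TypedDataset[Point] = ???
    val coords: TypedDataset[Coord] = points.as[Coord]()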

  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def cache(): TypedDataset[T]

    Persist this TypedDataset with the default storage level (MEMORY_AND_DISK).

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  8. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  9. def coalesce(numPartitions: Int): TypedDataset[T]

    Returns a new TypedDataset that has exactly numPartitions partitions. Similar to coalesce defined on an RDD, this operation results in a narrow dependency, e.g. if you go from 1000 partitions to 100 partitions, there will not be a shuffle; instead each of the 100 new partitions will claim 10 of the current partitions.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  10. def col[A](column: Witness.Lt[Symbol])(implicit exists: Exists[T, column.T, A], encoder: TypedEncoder[A]): TypedColumn[T, A]

    Returns a TypedColumn of type A given its name.

    tf.col('id)

    It is statically checked that a column with this name exists and has type A.

  11. object colMany extends SingletonProductArgs

  12. def collect(): Job[Seq[T]]

    Returns a Seq that contains all the elements in this TypedDataset.

    Running this Job requires moving all the data into the application's driver process, and doing so on a very large TypedDataset can crash the driver process with OutOfMemoryError.

    Differs from Dataset#collect by wrapping its result into a Job.
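    For instance (a sketch, reusing the hypothetical people: TypedDataset[Person] from above; Job is lazy, so nothing executes until run() is called):

    val job: Job[Seq[Person]] = people.collect()
    val result: Seq[Person] = job.run()   // the Spark action happens here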

  13. def count(): Job[Long]

    Returns the number of elements in the TypedDataset.

    Differs from Dataset#count by wrapping its result into a Job.

  14. val dataset: Dataset[T]

  15. def distinct: TypedDataset[T]

    Returns a new TypedDataset that contains only the unique elements of this TypedDataset.

    Note that equality checking is performed directly on the encoded representation of the data and thus is not affected by a custom equals function defined on T.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  16. implicit val encoder: TypedEncoder[T]

  17. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  18. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  19. def except(other: TypedDataset[T]): TypedDataset[T]

    Returns a new Dataset containing rows in this Dataset but not in another Dataset. This is equivalent to EXCEPT in SQL.

    Note that equality checking is performed directly on the encoded representation of the data and thus is not affected by a custom equals function defined on T.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  20. def explain(extended: Boolean = false): Unit

    Prints the plans (logical and physical) to the console for debugging purposes.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  21. def filter(column: TypedColumn[T, Boolean]): TypedDataset[T]

    Returns a new frameless.TypedDataset that only contains elements where column is true.

    Differs from TypedDatasetForwarded#filter by taking a TypedColumn[T, Boolean] instead of a T => Boolean. Using a column expression instead of a regular function saves one Spark → Scala deserialization, which leads to better performance.
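    For example (a sketch, reusing the hypothetical people: TypedDataset[Person] with its name: String column):

    val alices: TypedDataset[Person] = people.filter(people('name) === "Alice")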

  22. def filter(func: (T) ⇒ Boolean): TypedDataset[T]

    Returns a new TypedDataset that only contains elements where func returns true.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  23. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  24. def firstOption(): Job[Option[T]]

    Optionally returns the first element in this TypedDataset.

    Differs from Dataset#first by wrapping its result into an Option and a Job.

  25. def flatMap[U](func: (T) ⇒ TraversableOnce[U])(implicit arg0: TypedEncoder[U]): TypedDataset[U]

    Returns a new TypedDataset by first applying a function to all elements of this TypedDataset, and then flattening the results.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  26. def foreach(func: (T) ⇒ Unit): Job[Unit]

    Runs func on each element of this TypedDataset.

    Differs from Dataset#foreach by wrapping its result into a Job.

  27. def foreachPartition(func: (Iterator[T]) ⇒ Unit): Job[Unit]

    Runs func on each partition of this TypedDataset.

    Differs from Dataset#foreachPartition by wrapping its result into a Job.

  28. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  29. def groupBy[K1, K2](c1: TypedColumn[T, K1], c2: TypedColumn[T, K2]): GroupedBy2Ops[K1, K2, T]

  30. def groupBy[K1](c1: TypedColumn[T, K1]): GroupedBy1Ops[K1, T]

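    Groups this TypedDataset by the given column, exposing typed aggregations. A minimal sketch (hypothetical Employee type; aggregate functions live in frameless.functions.aggregate, and their exact signatures vary slightly across frameless versions):

    import frameless.functions.aggregate.sum

    case class Employee(name: String, salary: Double)
    val emps: TypedDataset[Employee] = ???

    val totalByName: TypedDataset[(String, Double)] =
      emps.groupBy(emps('name)).agg(sum(emps('salary)))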
  31. object groupByMany extends ProductArgs

  32. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  33. def intersect(other: TypedDataset[T]): TypedDataset[T]

    Returns a new TypedDataset that contains only the elements of this TypedDataset that are also present in other.

    Note that equality checking is performed directly on the encoded representation of the data and thus is not affected by a custom equals function defined on T.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  34. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  35. def join[A, B](right: TypedDataset[A], leftCol: TypedColumn[T, B], rightCol: TypedColumn[A, B]): TypedDataset[(T, A)]

  36. def joinLeft[A, B](right: TypedDataset[A], leftCol: TypedColumn[T, B], rightCol: TypedColumn[A, B])(implicit arg0: TypedEncoder[A], e: TypedEncoder[(T, Option[A])]): TypedDataset[(T, Option[A])]

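    Joins this dataset with right by equating leftCol to rightCol; joinLeft keeps unmatched left rows as None. A sketch (hypothetical types):

    case class Dept(id: Int, dept: String)
    case class Emp(name: String, deptId: Int)

    val emps: TypedDataset[Emp] = ???
    val depts: TypedDataset[Dept] = ???

    val inner: TypedDataset[(Emp, Dept)] =
      emps.join(depts, emps('deptId), depts('id))

    val left: TypedDataset[(Emp, Option[Dept])] =
      emps.joinLeft(depts, emps('deptId), depts('id))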
  37. def limit(n: Int): TypedDataset[T]

    Returns a new Dataset by taking the first n rows. The difference between this function and head is that head is an action and returns an array (by triggering query execution) while limit returns a new Dataset.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  38. def makeUDF[A1, A2, A3, A4, A5, R](f: (A1, A2, A3, A4, A5) ⇒ R)(implicit arg0: TypedEncoder[A1], arg1: TypedEncoder[A2], arg2: TypedEncoder[A3], arg3: TypedEncoder[A4], arg4: TypedEncoder[A5], arg5: TypedEncoder[R]): (TypedColumn[T, A1], TypedColumn[T, A2], TypedColumn[T, A3], TypedColumn[T, A4], TypedColumn[T, A5]) ⇒ TypedColumn[T, R]


    Takes a function from (A1, A2, A3, A4, A5) => R and converts it to a UDF for (TypedColumn[T, A1], TypedColumn[T, A2], TypedColumn[T, A3], TypedColumn[T, A4], TypedColumn[T, A5]) => TypedColumn[T, R].

  39. def makeUDF[A1, A2, A3, A4, R](f: (A1, A2, A3, A4) ⇒ R)(implicit arg0: TypedEncoder[A1], arg1: TypedEncoder[A2], arg2: TypedEncoder[A3], arg3: TypedEncoder[A4], arg4: TypedEncoder[R]): (TypedColumn[T, A1], TypedColumn[T, A2], TypedColumn[T, A3], TypedColumn[T, A4]) ⇒ TypedColumn[T, R]


    Takes a function from (A1, A2, A3, A4) => R and converts it to a UDF for (TypedColumn[T, A1], TypedColumn[T, A2], TypedColumn[T, A3], TypedColumn[T, A4]) => TypedColumn[T, R].

  40. def makeUDF[A1, A2, A3, R](f: (A1, A2, A3) ⇒ R)(implicit arg0: TypedEncoder[A1], arg1: TypedEncoder[A2], arg2: TypedEncoder[A3], arg3: TypedEncoder[R]): (TypedColumn[T, A1], TypedColumn[T, A2], TypedColumn[T, A3]) ⇒ TypedColumn[T, R]


    Takes a function from (A1, A2, A3) => R and converts it to a UDF for (TypedColumn[T, A1], TypedColumn[T, A2], TypedColumn[T, A3]) => TypedColumn[T, R].

  41. def makeUDF[A1, A2, R](f: (A1, A2) ⇒ R)(implicit arg0: TypedEncoder[A1], arg1: TypedEncoder[A2], arg2: TypedEncoder[R]): (TypedColumn[T, A1], TypedColumn[T, A2]) ⇒ TypedColumn[T, R]


    Takes a function from (A1, A2) => R and converts it to a UDF for (TypedColumn[T, A1], TypedColumn[T, A2]) => TypedColumn[T, R].

  42. def makeUDF[A, R](f: (A) ⇒ R)(implicit arg0: TypedEncoder[A], arg1: TypedEncoder[R]): (TypedColumn[T, A]) ⇒ TypedColumn[T, R]


    Takes a function from A => R and converts it to a UDF for TypedColumn[T, A] => TypedColumn[T, R].
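    For example (a sketch, reusing the hypothetical people: TypedDataset[Person] with its name: String column):

    val strLen = people.makeUDF((s: String) => s.length)
    val lengths: TypedDataset[Int] = people.select(strLen(people('name)))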

  43. def map[U](func: (T) ⇒ U)(implicit arg0: TypedEncoder[U]): TypedDataset[U]

    Returns a new TypedDataset that contains the result of applying func to each element.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  44. def mapPartitions[U](func: (Iterator[T]) ⇒ Iterator[U])(implicit arg0: TypedEncoder[U]): TypedDataset[U]

    Returns a new TypedDataset that contains the result of applying func to each partition.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  45. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  46. final def notify(): Unit

    Definition Classes
    AnyRef
  47. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  48. def persist(newLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK): TypedDataset[T]

    Persist this TypedDataset with the given storage level.

    newLevel

    One of: MEMORY_ONLY, MEMORY_AND_DISK, MEMORY_ONLY_SER, MEMORY_AND_DISK_SER, DISK_ONLY, MEMORY_ONLY_2, MEMORY_AND_DISK_2, etc.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  49. def printSchema(): Unit

    Prints the schema of the underlying Dataset to the console in a nice tree format.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  50. def project[U](implicit projector: SmartProject[T, U]): TypedDataset[U]

    Returns a new TypedDataset where each record has been mapped on to the specified type. Unlike as, the projection U may include a subset of the columns of T, and the column names and types must agree.

    case class Foo(i: Int, j: String)
    case class Bar(j: String)
    
    val t: TypedDataset[Foo] = ...
    val b: TypedDataset[Bar] = t.project[Bar]
    
    case class BarErr(e: String)
    // The following does not compile because `Foo` doesn't have a field with name `e`
    val e: TypedDataset[BarErr] = t.project[BarErr]
  51. def rdd: RDD[T]

    Converts this TypedDataset to an RDD.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  52. def reduceOption(func: (T, T) ⇒ T): Job[Option[T]]

    Optionally reduces the elements of this TypedDataset using the specified binary function. The given func must be commutative and associative or the result may be non-deterministic.

    Differs from Dataset#reduce by wrapping its result into an Option and a Job.
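    For instance (a sketch over a numeric dataset; the Option is None when the dataset is empty):

    val nums: TypedDataset[Long] = ???
    val total: Job[Option[Long]] = nums.reduceOption(_ + _)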

  53. def repartition(numPartitions: Int): TypedDataset[T]

    Returns a new TypedDataset that has exactly numPartitions partitions.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  54. def sample(withReplacement: Boolean, fraction: Double, seed: Long = Random.nextLong): TypedDataset[T]

    Returns a new TypedDataset by sampling a fraction of records.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  55. def schema: StructType

    Returns the schema of this Dataset.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  56. def select[A, B, C, D, E, F, G, H, I, J](ca: TypedColumn[T, A], cb: TypedColumn[T, B], cc: TypedColumn[T, C], cd: TypedColumn[T, D], ce: TypedColumn[T, E], cf: TypedColumn[T, F], cg: TypedColumn[T, G], ch: TypedColumn[T, H], ci: TypedColumn[T, I], cj: TypedColumn[T, J])(implicit arg0: TypedEncoder[A], arg1: TypedEncoder[B], arg2: TypedEncoder[C], arg3: TypedEncoder[D], arg4: TypedEncoder[E], arg5: TypedEncoder[F], arg6: TypedEncoder[G], arg7: TypedEncoder[H], arg8: TypedEncoder[I], arg9: TypedEncoder[J]): TypedDataset[(A, B, C, D, E, F, G, H, I, J)]

    Type-safe projection from type T to Tuple10[A,B,...]

    d.select( d('a), d('a)+d('b), ... )
  57. def select[A, B, C, D, E, F, G, H, I](ca: TypedColumn[T, A], cb: TypedColumn[T, B], cc: TypedColumn[T, C], cd: TypedColumn[T, D], ce: TypedColumn[T, E], cf: TypedColumn[T, F], cg: TypedColumn[T, G], ch: TypedColumn[T, H], ci: TypedColumn[T, I])(implicit arg0: TypedEncoder[A], arg1: TypedEncoder[B], arg2: TypedEncoder[C], arg3: TypedEncoder[D], arg4: TypedEncoder[E], arg5: TypedEncoder[F], arg6: TypedEncoder[G], arg7: TypedEncoder[H], arg8: TypedEncoder[I]): TypedDataset[(A, B, C, D, E, F, G, H, I)]

    Type-safe projection from type T to Tuple9[A,B,...]

    d.select( d('a), d('a)+d('b), ... )
  58. def select[A, B, C, D, E, F, G, H](ca: TypedColumn[T, A], cb: TypedColumn[T, B], cc: TypedColumn[T, C], cd: TypedColumn[T, D], ce: TypedColumn[T, E], cf: TypedColumn[T, F], cg: TypedColumn[T, G], ch: TypedColumn[T, H])(implicit arg0: TypedEncoder[A], arg1: TypedEncoder[B], arg2: TypedEncoder[C], arg3: TypedEncoder[D], arg4: TypedEncoder[E], arg5: TypedEncoder[F], arg6: TypedEncoder[G], arg7: TypedEncoder[H]): TypedDataset[(A, B, C, D, E, F, G, H)]

    Type-safe projection from type T to Tuple8[A,B,...]

    d.select( d('a), d('a)+d('b), ... )
  59. def select[A, B, C, D, E, F, G](ca: TypedColumn[T, A], cb: TypedColumn[T, B], cc: TypedColumn[T, C], cd: TypedColumn[T, D], ce: TypedColumn[T, E], cf: TypedColumn[T, F], cg: TypedColumn[T, G])(implicit arg0: TypedEncoder[A], arg1: TypedEncoder[B], arg2: TypedEncoder[C], arg3: TypedEncoder[D], arg4: TypedEncoder[E], arg5: TypedEncoder[F], arg6: TypedEncoder[G]): TypedDataset[(A, B, C, D, E, F, G)]

    Type-safe projection from type T to Tuple7[A,B,...]

    d.select( d('a), d('a)+d('b), ... )
  60. def select[A, B, C, D, E, F](ca: TypedColumn[T, A], cb: TypedColumn[T, B], cc: TypedColumn[T, C], cd: TypedColumn[T, D], ce: TypedColumn[T, E], cf: TypedColumn[T, F])(implicit arg0: TypedEncoder[A], arg1: TypedEncoder[B], arg2: TypedEncoder[C], arg3: TypedEncoder[D], arg4: TypedEncoder[E], arg5: TypedEncoder[F]): TypedDataset[(A, B, C, D, E, F)]

    Type-safe projection from type T to Tuple6[A,B,...]

    d.select( d('a), d('a)+d('b), ... )
  61. def select[A, B, C, D, E](ca: TypedColumn[T, A], cb: TypedColumn[T, B], cc: TypedColumn[T, C], cd: TypedColumn[T, D], ce: TypedColumn[T, E])(implicit arg0: TypedEncoder[A], arg1: TypedEncoder[B], arg2: TypedEncoder[C], arg3: TypedEncoder[D], arg4: TypedEncoder[E]): TypedDataset[(A, B, C, D, E)]

    Type-safe projection from type T to Tuple5[A,B,...]

    d.select( d('a), d('a)+d('b), ... )
  62. def select[A, B, C, D](ca: TypedColumn[T, A], cb: TypedColumn[T, B], cc: TypedColumn[T, C], cd: TypedColumn[T, D])(implicit arg0: TypedEncoder[A], arg1: TypedEncoder[B], arg2: TypedEncoder[C], arg3: TypedEncoder[D]): TypedDataset[(A, B, C, D)]

    Type-safe projection from type T to Tuple4[A,B,...]

    d.select( d('a), d('a)+d('b), ... )
  63. def select[A, B, C](ca: TypedColumn[T, A], cb: TypedColumn[T, B], cc: TypedColumn[T, C])(implicit arg0: TypedEncoder[A], arg1: TypedEncoder[B], arg2: TypedEncoder[C]): TypedDataset[(A, B, C)]

    Type-safe projection from type T to Tuple3[A,B,...]

    d.select( d('a), d('a)+d('b), ... )
  64. def select[A, B](ca: TypedColumn[T, A], cb: TypedColumn[T, B])(implicit arg0: TypedEncoder[A], arg1: TypedEncoder[B]): TypedDataset[(A, B)]

    Type-safe projection from type T to Tuple2[A,B]

    d.select( d('a), d('a)+d('b), ... )
  65. def select[A](ca: TypedColumn[T, A])(implicit arg0: TypedEncoder[A]): TypedDataset[A]

    Type-safe projection from type T to Tuple1[A]

    d.select( d('a), d('a)+d('b), ... )
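    A concrete sketch (reusing the hypothetical people: TypedDataset[Person] from above; the encoders for the result tuple are resolved implicitly):

    val pairs: TypedDataset[(String, Int)] =
      people.select(people('name), people('age))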
  66. object selectMany extends ProductArgs

  67. def show(numRows: Int = 20, truncate: Boolean = true): Job[Unit]

    Displays the content of this TypedDataset in a tabular form. Strings longer than 20 characters will be truncated, and all cells will be aligned right. For example:

    year  month AVG('Adj Close) MAX('Adj Close)
    1980  12    0.503218        0.595103
    1981  01    0.523289        0.570307
    1982  02    0.436504        0.475256
    1983  03    0.410516        0.442194
    1984  04    0.450090        0.483521
    numRows

    Number of rows to show.

    truncate

    Whether to truncate long strings. If true, strings longer than 20 characters will be truncated and all cells will be aligned right.

    Differs from Dataset#show by wrapping its result into a Job.

    apache/spark
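    For instance (a sketch; run() executes the wrapped Job, as above):

    people.show(numRows = 5, truncate = false).run()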

  68. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  69. def take(num: Int): Job[Seq[T]]

    Returns the first num elements of this TypedDataset as a Seq.

    Running take requires moving data into the application's driver process, and doing so with a very large num can crash the driver process with OutOfMemoryError.

    Differs from Dataset#take by wrapping its result into a Job.

    apache/spark

  70. def toDF(): DataFrame

    Converts this strongly typed collection of data to a generic DataFrame. In contrast to the strongly typed objects that Dataset operations work on, a DataFrame returns generic Row objects that allow fields to be accessed by ordinal or name.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  71. def toString(): String

    Definition Classes
    TypedDatasetForwarded → AnyRef → Any
  72. def transform[U](t: (TypedDataset[T]) ⇒ TypedDataset[U]): TypedDataset[U]

    Concise syntax for chaining custom transformations.
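    For instance (a sketch with a hypothetical transformation function over the people dataset from above):

    def onlyAdults(ds: TypedDataset[Person]): TypedDataset[Person] = ???

    val adults: TypedDataset[Person] = people.transform(onlyAdults)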

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  73. def union(other: TypedDataset[T]): TypedDataset[T]

    Returns a new TypedDataset that contains the elements of both this and the other TypedDataset combined.

    Note that this function is not a typical set union operation, in that it does not eliminate duplicate items. As such, it is analogous to UNION ALL in SQL.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  74. def unpersist(blocking: Boolean = false): TypedDataset[T]

    Mark the TypedDataset as non-persistent, and remove all blocks for it from memory and disk.

    blocking

    Whether to block until all blocks are deleted.

    apache/spark

    Definition Classes
    TypedDatasetForwarded
  75. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  76. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  77. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Inherited from TypedDatasetForwarded[T]

Inherited from AnyRef

Inherited from Any
