Object

com.holdenkarau.spark.testing

RDDGenerator


object RDDGenerator

Annotations: @Experimental()
Linear Supertypes: AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
     Definition Classes: AnyRef → Any
  2. final def ##(): Int
     Definition Classes: AnyRef → Any
  3. final def ==(arg0: Any): Boolean
     Definition Classes: AnyRef → Any
  4. def arbitraryRDD[T](sc: SparkContext, minPartitions: Int = 1)(generator: ⇒ Gen[T])(implicit arg0: ClassTag[T]): Arbitrary[RDD[T]]

     Generate an RDD of the desired type. Tries different numbers of partitions so as to catch problems with empty partitions, etc. minPartitions defaults to 1; when generating more data than fits on a single machine, choose a larger value.

     T: the element type of the RDD
     sc: the SparkContext
     minPartitions: lower bound on the number of partitions to try (defaults to 1)
     generator: a by-name expression producing the element generator; it will be evaluated as many times as required

  5. final def asInstanceOf[T0]: T0
     Definition Classes: Any
  6. def clone(): AnyRef
     Attributes: protected[java.lang]
     Definition Classes: AnyRef
     Annotations: @throws( ... )
  7. final def eq(arg0: AnyRef): Boolean
     Definition Classes: AnyRef
  8. def equals(arg0: Any): Boolean
     Definition Classes: AnyRef → Any
  9. def finalize(): Unit
     Attributes: protected[java.lang]
     Definition Classes: AnyRef
     Annotations: @throws( classOf[java.lang.Throwable] )
  10. def genRDD[T](sc: SparkContext, minPartitions: Int = 1)(generator: ⇒ Gen[T])(implicit arg0: ClassTag[T]): Gen[RDD[T]]

      Generate an RDD of the desired type. Tries different numbers of partitions so as to catch problems with empty partitions, etc. minPartitions defaults to 1; when generating more data than fits on a single machine, choose a larger value.

      T: the element type of the RDD
      sc: the SparkContext
      minPartitions: lower bound on the number of partitions to try (defaults to 1)
      generator: a by-name expression producing the element generator; it will be evaluated as many times as required

  11. final def getClass(): Class[_]
      Definition Classes: AnyRef → Any
  12. def hashCode(): Int
      Definition Classes: AnyRef → Any
  13. final def isInstanceOf[T0]: Boolean
      Definition Classes: Any
  14. final def ne(arg0: AnyRef): Boolean
      Definition Classes: AnyRef
  15. final def notify(): Unit
      Definition Classes: AnyRef
  16. final def notifyAll(): Unit
      Definition Classes: AnyRef
  17. final def synchronized[T0](arg0: ⇒ T0): T0
      Definition Classes: AnyRef
  18. def toString(): String
      Definition Classes: AnyRef → Any
  19. final def wait(): Unit
      Definition Classes: AnyRef
      Annotations: @throws( ... )
  20. final def wait(arg0: Long, arg1: Int): Unit
      Definition Classes: AnyRef
      Annotations: @throws( ... )
  21. final def wait(arg0: Long): Unit
      Definition Classes: AnyRef
      Annotations: @throws( ... )
Inherited from AnyRef

Inherited from Any