peapod

Peapod

class Peapod extends AnyRef

Main access point for Peapod functionality. All Tasks and Peas must belong to a Peapod instance, which manages them. This allows the same Task to exist multiple times within a single JVM rather than restricting each Task or Pea to a single copy per JVM.

Linear Supertypes
AnyRef, Any

Instance Constructors

  1. new Peapod(path: String, raw: String, conf: Config = ...)(_sc: ⇒ SparkContext)

    path

The path where Peapod stores internal outputs such as those from StorableTask; should be in Hadoop format (e.g. "file://", "hdfs://")

    raw

The path for input files not managed by Peapod; should be in Hadoop format (e.g. "file://", "hdfs://")

    conf

    An optional set of configuration parameters for this Peapod object

    _sc

The SparkContext used by this Peapod instance, passed by name so it is only evaluated when first needed
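
    A minimal construction sketch in Scala, assuming the class lives in the peapod package, that conf is a Typesafe Config, and using placeholder local paths and a local SparkContext; none of these specific values are required by Peapod itself.

        import com.typesafe.config.ConfigFactory
        import org.apache.spark.SparkContext
        import peapod.Peapod

        // Placeholder local SparkContext for illustration only
        val sc = new SparkContext("local[*]", "peapod-example")

        val p = new Peapod(
          path = "file:///tmp/peapod/outputs",  // internal outputs, e.g. from StorableTask
          raw  = "file:///tmp/peapod/raw",      // input files not managed by Peapod
          conf = ConfigFactory.load()           // optional configuration
        )(sc)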

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. def addTask(t: Task[_]): Unit

    Attributes
    protected
  7. def apply[D](t: Task[D])(implicit arg0: ClassTag[D]): Pea[D]

Returns a Pea for a Task and caches the Pea; see the usage sketch at the end of the member list

  8. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  9. def clear(): Unit

    Clear this Peapod instance of all stored data

  10. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  11. val conf: Config

    An optional set of configuration parameters for this Peapod object

  12. def deleteOtherVersions(): Unit

Removes the output of all Tasks in this Peapod instance from persistent storage if their recursive version differs from the current version. If there is no output, this is a no-op rather than an error.

  13. def dotFormatDiagram(): String

Returns the Peapod's Tasks as a graph in DOT format

  14. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  15. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  16. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  17. def generatePea(t: Task[_]): Pea[_]

    Attributes
    protected
  18. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  19. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  20. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  21. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  22. final def notify(): Unit

    Definition Classes
    AnyRef
  23. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  24. val path: String

The path where Peapod stores internal outputs such as those from StorableTask; should be in Hadoop format (e.g. "file://", "hdfs://")

  25. def pea[D](t: Task[D])(implicit arg0: ClassTag[D]): Pea[D]

    Returns a Pea for a Task and caches the Pea

  26. val peas: ConcurrentMap[String, Pea[_]]

    Attributes
    protected
  27. val raw: String

The path for input files not managed by Peapod; should be in Hadoop format (e.g. "file://", "hdfs://")

  28. val recursiveVersioning: Boolean

Whether recursive versioning is enabled; used by classes that extend Peapod

  29. lazy val sc: SparkContext

The SparkContext for this Peapod instance, evaluated lazily from the constructor's _sc parameter

  30. def setLinkages(t: Task[_], p: Pea[_]): Unit

    Attributes
    protected
  31. def size(): Int

    Returns the number of Tasks that have been cached by this Peapod instance

  32. lazy val sqlCtx: SQLContext

The SQLContext for this Peapod instance

  33. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  34. val tasks: ConcurrentMap[String, Task[_]]

    Attributes
    protected
  35. def toString(): String

    Definition Classes
    AnyRef → Any
  36. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  37. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  38. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
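
    A usage sketch for the apply, pea, size, dotFormatDiagram, and clear members above, assuming p is the Peapod instance built in the constructor example and myTask is a hypothetical Task[Long] defined elsewhere; neither name is part of the Peapod API.

        // myTask: Task[Long] is a placeholder Task instance, not part of Peapod itself
        val pea: Pea[Long] = p(myTask)     // equivalent to p.pea(myTask); the Pea is cached
        println(p.size())                  // number of Tasks cached by this Peapod
        println(p.dotFormatDiagram())      // Task dependency graph in DOT format
        p.clear()                          // clear all stored data from this instance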
