The path where Peapod stores internal outputs, such as those from StorableTask; should be in Hadoop format (i.e. "file://", "hdfs://", etc.)
The path for input files not managed by Peapod; should be in Hadoop format (i.e. "file://", "hdfs://", etc.)
An optional set of configuration parameters for this Peapod object
A SparkContext
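The entries above describe the constructor inputs: an output path, a raw input path, an optional configuration, and a SparkContext. As a minimal sketch (the named parameters and the implicit-SparkContext constructor shape are assumptions for illustration), a Peapod might be constructed as follows:

    import org.apache.spark.{SparkConf, SparkContext}
    import peapod.Peapod

    // Construction sketch; parameter names and the implicit SparkContext are assumed.
    implicit val sc: SparkContext =
      new SparkContext(new SparkConf().setMaster("local[2]").setAppName("peapod-setup"))

    implicit val p: Peapod = new Peapod(
      path = "hdfs://namenode/peapod/output", // internal outputs, e.g. StorableTask results
      raw  = "hdfs://namenode/data/raw"       // input files not managed by Peapod
    )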
Returns a Pea for a Task and caches the Pea
Clears this Peapod instance of all stored data
An optional set of configuration parameters for this Peapod object
Removes the output of all Tasks in this Peapod instance from persistent storage if the recursive version differs from the current version; if there is no output this is a no-op rather than an error
Returns the Peapod's Tasks as a graph in DOT format
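Continuing the construction sketch above, a hedged illustration of these methods; MyTask is a hypothetical Task, and the call forms shown (p(...), dotFormatDiagram(), clear()) are assumptions based on the descriptions here:

    val pea  = p(new MyTask)      // returns the Pea for this Task and caches it
    val same = p(new MyTask)      // an equivalent Task resolves to the same cached Pea

    println(p.dotFormatDiagram()) // Task dependency graph in DOT format, viewable with Graphviz

    p.clear()                     // drop all data cached by this Peapod instance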
The path where Peapod stores internal outputs, such as those from StorableTask; should be in Hadoop format (i.e. "file://", "hdfs://", etc.)
The path where Peapod stores internal outputs, such as those from StorableTask; should be in Hadoop format (i.e. "file://", "hdfs://", etc.)
Returns a Pea for a Task and caches the Pea
The path for input files not managed by Peapod; should be in Hadoop format (i.e. "file://", "hdfs://", etc.)
The path for input files not managed by Peapod; should be in Hadoop format (i.e. "file://", "hdfs://", etc.)
Whether recursive versioning is enabled; used by classes that extend Peapod
The SparkContext
Returns the number of Tasks that have been cached by this Peapod instance
The Spark SQLContext
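A short continuation illustrating the value members above; the member names used below (size, recursiveVersioning, sc, sqlCtx) are inferred from these descriptions and should be checked against the actual API:

    println(p.size())                  // number of Tasks cached by this Peapod instance
    println(p.recursiveVersioning)     // whether recursive versioning is enabled
    val lines  = p.sc.textFile(p.raw + "/lines.txt")          // the SparkContext
    val people = p.sqlCtx.read.json(p.raw + "/people.json")   // the Spark SQLContext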
Main access point for Peapod functionality; all Tasks and Peas must belong to a Peapod instance, which manages them. This allows the same Task to exist multiple times within a single JVM rather than limiting each Task or Pea to a single copy per JVM.
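Putting the pieces together, a minimal self-contained sketch of the pattern this describes: Tasks are defined against an implicit Peapod, dependencies between Tasks are declared through it, and outputs are retrieved from the cached Peas. The names StorableTask, generate, pea, and get are assumptions for illustration; consult the library's own documentation for the exact API.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.rdd.RDD
    import peapod.{Peapod, StorableTask}

    // A Task that reads an unmanaged input file from the raw path.
    class RawLines(implicit val p: Peapod) extends StorableTask[RDD[String]] {
      def generate: RDD[String] = p.sc.textFile(p.raw + "/lines.txt")
    }

    // A Task depending on RawLines; the dependency is registered via pea(...).
    class NonEmptyLines(implicit val p: Peapod) extends StorableTask[RDD[String]] {
      val raw = pea(new RawLines)
      def generate: RDD[String] = raw.get().filter(_.nonEmpty)
    }

    object PeapodUsageExample {
      def main(args: Array[String]): Unit = {
        implicit val sc: SparkContext =
          new SparkContext(new SparkConf().setMaster("local[2]").setAppName("peapod-usage"))
        implicit val p: Peapod = new Peapod(
          path = "file:///tmp/peapod/output",
          raw  = "file:///tmp/peapod/raw")

        // Each Peapod instance manages its own Peas, so the same Task classes can
        // coexist in other Peapod instances within the same JVM.
        println(p(new NonEmptyLines).get().count())
      }
    }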