Object

org.apache.spark.bagel

Bagel


object Bagel extends Logging

Annotations
@deprecated
Deprecated

(Since version 1.6.0) Uses of Bagel should migrate to GraphX

Linear Supertypes
Logging, AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. val DEFAULT_STORAGE_LEVEL: StorageLevel

  5. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  6. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  7. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  8. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  9. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  10. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  11. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  12. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  13. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  14. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  15. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  16. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  17. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  18. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  19. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  20. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  21. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  22. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  23. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  24. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  25. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  26. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  27. final def notify(): Unit

    Definition Classes
    AnyRef
  28. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  29. def run[K, V <: Vertex, M <: Message[K]](sc: SparkContext, vertices: RDD[(K, V)], messages: RDD[(K, M)], numPartitions: Int, storageLevel: StorageLevel)(compute: (V, Option[Array[M]], Int) ⇒ (V, Array[M]))(implicit arg0: Manifest[K], arg1: Manifest[V], arg2: Manifest[M]): RDD[(K, V)]


    Runs a Bagel program with no org.apache.spark.bagel.Aggregator, the default org.apache.spark.HashPartitioner and org.apache.spark.bagel.DefaultCombiner
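    The shape of the compute function this overload expects can be illustrated in plain Scala, without Spark (CounterVertex and Ping are illustrative stand-ins, not Bagel classes):

    ```scala
    // Illustrative stand-ins for a vertex type V and a message type M.
    case class CounterVertex(id: Int, seen: Int)
    case class Ping(targetId: Int)

    // In this overload, compute receives the messages for a vertex as an
    // Option[Array[M]]; None means the vertex received nothing this superstep.
    val compute: (CounterVertex, Option[Array[Ping]], Int) => (CounterVertex, Array[Ping]) =
      (v, inbox, superstep) => {
        val received = inbox.map(_.length).getOrElse(0)
        // Update the vertex state and send no outgoing messages.
        (v.copy(seen = v.seen + received), Array.empty[Ping])
      }

    val (updated, out) = compute(CounterVertex(1, 0), Some(Array(Ping(1), Ping(1))), 0)
    // updated.seen == 2, out is empty
    ```

    In the real API this function is passed as the curried `compute` argument of `run`, and Bagel invokes it once per vertex per superstep.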

  30. def run[K, V <: Vertex, M <: Message[K]](sc: SparkContext, vertices: RDD[(K, V)], messages: RDD[(K, M)], numPartitions: Int)(compute: (V, Option[Array[M]], Int) ⇒ (V, Array[M]))(implicit arg0: Manifest[K], arg1: Manifest[V], arg2: Manifest[M]): RDD[(K, V)]


    Runs a Bagel program with no org.apache.spark.bagel.Aggregator, default org.apache.spark.HashPartitioner, org.apache.spark.bagel.DefaultCombiner and the default storage level

  31. def run[K, V <: Vertex, M <: Message[K], C](sc: SparkContext, vertices: RDD[(K, V)], messages: RDD[(K, M)], combiner: Combiner[M, C], numPartitions: Int, storageLevel: StorageLevel)(compute: (V, Option[C], Int) ⇒ (V, Array[M]))(implicit arg0: Manifest[K], arg1: Manifest[V], arg2: Manifest[M], arg3: Manifest[C]): RDD[(K, V)]


    Runs a Bagel program with no org.apache.spark.bagel.Aggregator and the default org.apache.spark.HashPartitioner

  32. def run[K, V <: Vertex, M <: Message[K], C](sc: SparkContext, vertices: RDD[(K, V)], messages: RDD[(K, M)], combiner: Combiner[M, C], numPartitions: Int)(compute: (V, Option[C], Int) ⇒ (V, Array[M]))(implicit arg0: Manifest[K], arg1: Manifest[V], arg2: Manifest[M], arg3: Manifest[C]): RDD[(K, V)]


    Runs a Bagel program with no org.apache.spark.bagel.Aggregator, default org.apache.spark.HashPartitioner and default storage level

  33. def run[K, V <: Vertex, M <: Message[K], C](sc: SparkContext, vertices: RDD[(K, V)], messages: RDD[(K, M)], combiner: Combiner[M, C], partitioner: Partitioner, numPartitions: Int, storageLevel: StorageLevel)(compute: (V, Option[C], Int) ⇒ (V, Array[M]))(implicit arg0: Manifest[K], arg1: Manifest[V], arg2: Manifest[M], arg3: Manifest[C]): RDD[(K, V)]


    Runs a Bagel program with no org.apache.spark.bagel.Aggregator

  34. def run[K, V <: Vertex, M <: Message[K], C](sc: SparkContext, vertices: RDD[(K, V)], messages: RDD[(K, M)], combiner: Combiner[M, C], partitioner: Partitioner, numPartitions: Int)(compute: (V, Option[C], Int) ⇒ (V, Array[M]))(implicit arg0: Manifest[K], arg1: Manifest[V], arg2: Manifest[M], arg3: Manifest[C]): RDD[(K, V)]


    Runs a Bagel program with no org.apache.spark.bagel.Aggregator and the default storage level

  35. def run[K, V <: Vertex, M <: Message[K], C, A](sc: SparkContext, vertices: RDD[(K, V)], messages: RDD[(K, M)], combiner: Combiner[M, C], aggregator: Option[Aggregator[V, A]], partitioner: Partitioner, numPartitions: Int, storageLevel: StorageLevel = DEFAULT_STORAGE_LEVEL)(compute: (V, Option[C], Option[A], Int) ⇒ (V, Array[M]))(implicit arg0: Manifest[K], arg1: Manifest[V], arg2: Manifest[M], arg3: Manifest[C], arg4: Manifest[A]): RDD[(K, V)]


    Runs a Bagel program.

    K: key type

    V: vertex type

    M: message type

    C: combined message type

    A: aggregated value type

    sc: the org.apache.spark.SparkContext to use for the program

    vertices: the vertices of the graph, represented as an RDD of (Key, Vertex) pairs; the key is often the vertex id

    messages: the initial set of messages, represented as an RDD of (Key, Message) pairs; often this will be empty, i.e. sc.parallelize(Array[(K, M)]())

    combiner: the org.apache.spark.bagel.Combiner that combines multiple individual messages to a given vertex into one message before sending (which often involves network I/O)

    aggregator: the org.apache.spark.bagel.Aggregator that performs a reduce across all vertices after each superstep and provides the result to each vertex in the next superstep

    partitioner: the org.apache.spark.Partitioner that partitions values by key

    numPartitions: the number of partitions across which to split the graph; defaults to the default parallelism of the SparkContext

    storageLevel: the org.apache.spark.storage.StorageLevel to use for caching intermediate RDDs in each superstep; defaults to caching in memory

    compute: a function that takes a vertex, an optional set of (possibly combined) messages to that vertex, an optional aggregated value, and the current superstep number, and returns the updated vertex together with its outgoing messages

    returns: an RDD of (K, V) pairs representing the graph after completion of the program
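    The superstep semantics described above can be sketched as a single-machine loop over plain Scala collections, ignoring Spark's distribution, combining, and caching (V, M, runLocal, and the example graph are all illustrative, not part of the Bagel API):

    ```scala
    // Illustrative stand-ins for a vertex and a message.
    case class V(id: Int, value: Double, active: Boolean)
    case class M(targetId: Int, value: Double)

    // Local analogue of Bagel.run: deliver messages, call compute on every
    // vertex, and stop when no vertex is active and no messages are in flight.
    def runLocal(vertices: Map[Int, V], messages: Seq[M], maxSupersteps: Int)(
        compute: (V, Option[Array[M]], Int) => (V, Array[M])): Map[Int, V] = {
      var verts = vertices
      var msgs = messages
      var superstep = 0
      while (superstep < maxSupersteps &&
             (msgs.nonEmpty || verts.values.exists(_.active))) {
        // Group in-flight messages by destination, like the shuffle step.
        val inbox = msgs.groupBy(_.targetId).map { case (k, ms) => (k, ms.toArray) }
        val results = verts.values.toSeq.map(v => compute(v, inbox.get(v.id), superstep))
        verts = results.map(_._1).map(v => v.id -> v).toMap
        msgs = results.flatMap(_._2)
        superstep += 1
      }
      verts
    }

    // Example: propagate the maximum value along the chain 1 -> 2 -> 3.
    val edges = Map(1 -> Seq(2), 2 -> Seq(3), 3 -> Seq.empty[Int])
    val init = Map(1 -> V(1, 5.0, true), 2 -> V(2, 1.0, true), 3 -> V(3, 2.0, true))
    val result = runLocal(init, Seq.empty, maxSupersteps = 10) { (v, inboxOpt, step) =>
      val received = inboxOpt.getOrElse(Array.empty[M]).map(_.value)
      val newValue = (received :+ v.value).max
      // Send only when the value changed (or on the first superstep), so the
      // computation reaches a fixed point and the loop terminates.
      val out =
        if (step == 0 || newValue > v.value) edges(v.id).map(t => M(t, newValue)).toArray
        else Array.empty[M]
      (v.copy(value = newValue, active = false), out)
    }
    // result: every vertex now holds 5.0
    ```

    The real `run` differs mainly in that vertices and messages are RDDs partitioned by the given Partitioner, messages to a vertex are first merged by the Combiner, and intermediate RDDs are cached at the given StorageLevel between supersteps.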

  36. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  37. def toString(): String

    Definition Classes
    AnyRef → Any
  38. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  39. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  40. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
