com.twitter.scalding

Execution

object Execution extends Serializable

Execution has many methods for creating Execution[T] instances, which are the preferred way to compose computations in scalding libraries.

Linear Supertypes
Serializable, Serializable, AnyRef, Any

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. implicit object ExecutionMonad extends Monad[Execution]

    This is an instance of Monad for Execution, so it can be used in functions that apply to all Monads.
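
    For example, a function written against algebird's generic Monad typeclass can be reused with Execution. A minimal sketch (the pair helper is hypothetical, not scalding API):

    ```scala
    import com.twitter.algebird.Monad
    import com.twitter.scalding.Execution

    // Generic over any Monad; `pair` is a hypothetical helper, not scalding API
    def pair[M[_], A, B](ma: M[A], mb: M[B])(implicit M: Monad[M]): M[(A, B)] =
      M.flatMap(ma) { a => M.map(mb) { b => (a, b) } }

    // Because ExecutionMonad is implicit, Execution[T] can be used directly
    val both: Execution[(Int, String)] = pair(Execution.from(1), Execution.from("a"))
    ```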

  7. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  8. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  9. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  10. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  11. def failFastSequence[T](t: Iterable[Future[T]])(implicit cec: scala.concurrent.ExecutionContext): Future[List[T]]

    Use our internal fail-fast zip function rather than the standard one, since the standard zip can wait forever on one future even after another has already failed.

  12. def failFastZip[T, U](ft: Future[T], fu: Future[U])(implicit cec: scala.concurrent.ExecutionContext): Future[(T, U)]

    The standard Scala zip waits forever on the left side, even if the right side fails.
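
    A sketch of the difference, assuming a left future that never completes:

    ```scala
    import scala.concurrent.{Future, Promise}
    import scala.concurrent.ExecutionContext.Implicits.global
    import com.twitter.scalding.Execution

    val never: Future[Int] = Promise[Int]().future        // never completes
    val boom: Future[String] = Future.failed(new Exception("boom"))

    // never.zip(boom) would wait forever on the left side;
    // failFastZip fails as soon as either input fails
    val zipped: Future[(Int, String)] = Execution.failFastZip(never, boom)
    ```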

  13. def failed(t: Throwable): Execution[Nothing]

    This creates a definitely failed Execution.
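
    A small sketch; pairing failed with recoverWith (a method on Execution instances) gives a fallback:

    ```scala
    import com.twitter.scalding.Execution

    val bad: Execution[Int] = Execution.failed(new IllegalStateException("no data"))

    // Supply a fallback value if the failure matches
    val safe: Execution[Int] =
      bad.recoverWith { case _: IllegalStateException => Execution.from(0) }
    ```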

  14. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  15. def from[T](t: ⇒ T): Execution[T]

    This makes a constant execution that runs no job. Note this is a lazy parameter that is evaluated every time run is called.
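
    A sketch of the laziness: the by-name argument is not evaluated at construction, and is re-evaluated on every run:

    ```scala
    import com.twitter.scalding.Execution

    var calls = 0
    val counted: Execution[Int] = Execution.from { calls += 1; calls }
    // `calls` is still 0 here; it increments once each time `counted` is run
    ```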

  16. def fromFn(fn: (Config, Mode) ⇒ FlowDef): Execution[Unit]

    This converts a function into an Execution monad. The flowDef returned is never mutated.

  17. def fromFuture[T](fn: (scala.concurrent.ExecutionContext) ⇒ Future[T]): Execution[T]

    The call to fn will happen when the run method on the result is called. The ConcurrentExecutionContext will be the same one used on run. This is intended for cases where you need to make asynchronous calls in the middle or end of execution. Presumably this is used with flatMap either before or after.
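
    A sketch, where fetchRemote stands in for some asynchronous call (it is hypothetical, not scalding API):

    ```scala
    import scala.concurrent.{ExecutionContext, Future}
    import com.twitter.scalding.Execution

    // Hypothetical asynchronous service call
    def fetchRemote(id: Long)(implicit ec: ExecutionContext): Future[String] = ???

    // fn is only invoked when the resulting Execution is run,
    // with the same ExecutionContext that run uses
    val ex: Execution[String] = Execution.fromFuture { implicit cec => fetchRemote(42L) }
    ```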

  18. def fromTry[T](t: ⇒ Try[T]): Execution[T]

  19. def getArgs: Execution[Args]

    Convenience method to get the Args

  20. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  21. def getConfig: Execution[Config]

    Use this to read the configuration, which may contain Args or options that describe the input on which to run.

  22. def getConfigMode: Execution[(Config, Mode)]

    Use this to get the config and mode.

  23. def getMode: Execution[Mode]

    Use this to get the mode, which may contain the job conf

  24. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  25. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  26. implicit def monoid[T](implicit arg0: Monoid[T]): Monoid[Execution[T]]

    This is the standard monoid on an Applicative (zip, then inside the Execution do plus), useful to combine unit Executions: Monoid.sum(ex1, ex2, ex3, ex4): Execution[Unit], where each exi is an Execution[Unit].
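
    For example, combining several unit Executions with algebird's Monoid.sum (a sketch; the executions here are trivial placeholders):

    ```scala
    import com.twitter.algebird.Monoid
    import com.twitter.scalding.Execution

    // Placeholders for e.g. independent writeExecution results
    val ex1, ex2, ex3 = Execution.unit

    // Zips them (parallel when run) and sums the Unit results
    val all: Execution[Unit] = Monoid.sum(List(ex1, ex2, ex3))
    ```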

  27. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  28. final def notify(): Unit

    Definition Classes
    AnyRef
  29. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  30. def run[C](flow: Flow[C]): Future[JobStats]

  31. implicit def semigroup[T](implicit arg0: Semigroup[T]): Semigroup[Execution[T]]

    This is the standard semigroup on an Applicative (zip, then inside the Execution do plus)

  32. def sequence[T](exs: Seq[Execution[T]]): Execution[Seq[T]]
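
    A sketch of sequence, which collects a Seq of Executions into one Execution of a Seq:

    ```scala
    import com.twitter.scalding.Execution

    val parts: Seq[Execution[Int]] = Seq(Execution.from(1), Execution.from(2), Execution.from(3))

    // Nothing runs yet; when run, the results are collected in order
    val all: Execution[Seq[Int]] = Execution.sequence(parts)
    ```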

  33. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  34. def toString(): String

    Definition Classes
    AnyRef → Any
  35. val unit: Execution[Unit]

    Returns a constant Execution[Unit]

  36. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  37. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  38. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  39. def waitFor[C](flow: Flow[C]): Try[JobStats]

  40. def withArgs[T](fn: (Args) ⇒ Execution[T]): Execution[T]

    This is a convenience method, only here to make it slightly cleaner to get Args, which are in the Config.
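
    A sketch, assuming --input and --output paths are passed on the command line (the sources here are illustrative):

    ```scala
    import com.twitter.scalding.{Args, Execution, TypedPipe, TypedTsv}

    val job: Execution[Unit] =
      Execution.withArgs { args: Args =>
        TypedPipe.from(TypedTsv[String](args("input")))
          .writeExecution(TypedTsv[String](args("output")))
      }
    ```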

  41. def withConfig[T](ex: Execution[T])(c: (Config) ⇒ Config): Execution[T]

  42. def withId[T](fn: (UniqueID) ⇒ Execution[T]): Execution[T]

    Use this to use counters/stats with Execution. You do this:

        Execution.withId { implicit uid =>
          val myStat = Stat("myStat") // uid is implicitly pulled in
          pipe.map { t =>
            if (someCase(t)) myStat.inc
            fn(t)
          }
          .writeExecution(mySink)
        }

  43. def withNewCache[T](ex: Execution[T]): Execution[T]

    This function allows running the passed execution with its own cache. This will mean anything inside won't benefit from Execution's global attempts to avoid repeated executions.

    The main use case here is when generating a lot of Execution results which are large. Executions caching in this case can lead to out of memory errors as the cache keeps references to many heap objects.

    Ex.:

        (0 until 1000).map { _ => Execution.withNewCache(myLargeObjectProducingExecution) }

  44. def withParallelism[T](executions: Seq[Execution[T]], parallelism: Int): Execution[Seq[T]]

    Run a sequence of executions, but only permitting parallelism of them to run at the same time.

    executions: List of executions to run
    parallelism: Number to run in parallel
    returns: Execution[Seq[T]]
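
    A sketch throttling 100 trivial executions to at most 4 running at once:

    ```scala
    import com.twitter.scalding.Execution

    val work: Seq[Execution[Int]] = (0 until 100).map(i => Execution.from(i * i))

    // At most 4 of these are in flight at any time when this is run
    val throttled: Execution[Seq[Int]] = Execution.withParallelism(work, 4)
    ```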

  45. def zip[A, B, C, D, E](ax: Execution[A], bx: Execution[B], cx: Execution[C], dx: Execution[D], ex: Execution[E]): Execution[(A, B, C, D, E)]

    Combine several executions and run them in parallel when .run is called.

  46. def zip[A, B, C, D](ax: Execution[A], bx: Execution[B], cx: Execution[C], dx: Execution[D]): Execution[(A, B, C, D)]

    Combine several executions and run them in parallel when .run is called.

  47. def zip[A, B, C](ax: Execution[A], bx: Execution[B], cx: Execution[C]): Execution[(A, B, C)]

    Combine several executions and run them in parallel when .run is called.

  48. def zip[A, B](ax: Execution[A], bx: Execution[B]): Execution[(A, B)]

    Combine several executions and run them in parallel when .run is called.
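
    A sketch of the two-argument form; nothing runs until the result's run method is called:

    ```scala
    import com.twitter.scalding.Execution

    val a: Execution[Int] = Execution.from(1)
    val b: Execution[String] = Execution.from("two")

    // a and b run in parallel when ab is run
    val ab: Execution[(Int, String)] = Execution.zip(a, b)
    ```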
