This is an instance of Monad for Execution, so it can be used with functions that work over any Monad.
Use our internal fail-fast zip function rather than the standard one, to avoid unnecessary waiting: the standard Scala zip waits forever on the left side, even if the right side has already failed.
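The difference can be illustrated with plain Scala Futures. This is a minimal, self-contained sketch of the fail-fast idea, not Scalding's actual implementation: it surfaces the right side's failure immediately instead of waiting on a left side that may never complete.

```scala
import scala.concurrent.{ExecutionContext, Future, Promise}
import scala.util.Failure
import ExecutionContext.Implicits.global

// Fail-fast zip: completes with the first failure from either side,
// rather than waiting for the left future before noticing the right failed.
def failFastZip[A, B](fa: Future[A], fb: Future[B]): Future[(A, B)] = {
  val p = Promise[(A, B)]()
  fa.onComplete { case Failure(e) => p.tryFailure(e); case _ => () }
  fb.onComplete { case Failure(e) => p.tryFailure(e); case _ => () }
  p.completeWith(fa.zip(fb))
  p.future
}

val neverDone: Future[Int] = Promise[Int]().future       // left side never completes
val failed: Future[Int] = Future.failed(new Exception("boom"))

// fa.zip(fb) would wait on the left side forever here;
// failFastZip(neverDone, failed) fails promptly with "boom".
```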
This creates a definitely failed Execution.
This makes a constant Execution that runs no job. Note that the parameter is lazy and is evaluated every time run is called.
This converts a function into an Execution monad. The flowDef returned is never mutated.
The call to fn will happen when the run method on the result is called. The ConcurrentExecutionContext will be the same one used in run. This is intended for cases where you need to make asynchronous calls in the middle or at the end of execution. It is presumably used with flatMap, either before or after.
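As a sketch (assuming scalding-core on the classpath; `fetchThreshold` is a hypothetical asynchronous call standing in for a real service request):

```scala
import com.twitter.scalding.Execution
import scala.concurrent.Future

// Hypothetical async call made mid-execution; a real one might hit a service.
def fetchThreshold(): Future[Int] = Future.successful(42)  // stand-in value

val ex: Execution[Int] =
  Execution.fromFuture { implicit cec =>
    // cec is the same ConcurrentExecutionContext passed to run
    fetchThreshold()
  }
```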
Convenience method to get the Args
Use this to read the configuration, which may contain Args or options that describe the input on which to run.
Use this to get the config and mode.
Use this to get the mode, which may contain the job conf
This is the standard monoid on an Applicative (zip, then plus inside the Execution). It is useful for combining unit Executions: Monoid.sum(ex1, ex2, ex3, ex4): Execution[Unit], where each exi is an Execution[Unit].
This is the standard semigroup on an Applicative (zip, then inside the Execution do plus)
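For instance (a sketch assuming scalding-core and algebird; writeA and writeB stand in for unit Executions such as real sink writes):

```scala
import com.twitter.algebird.Monoid
import com.twitter.scalding.Execution

// Two independent side-effecting jobs, each an Execution[Unit].
val writeA: Execution[Unit] = Execution.from(())  // stand-in for a real write
val writeB: Execution[Unit] = Execution.from(())

// zip, then combine the Unit results with their monoid: still Execution[Unit].
val both: Execution[Unit] = Monoid.sum(List(writeA, writeB))
```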
Returns a constant Execution[Unit]
This is a convenience method, only here to make it slightly cleaner to get the Args, which are in the Config.
Use this to use counters/stats with Execution. You do this:

  Execution.withId { implicit uid =>
    val myStat = Stat("myStat") // uid is implicitly pulled in
    pipe.map { t =>
      if (someCase(t)) myStat.inc
      fn(t)
    }
    .writeExecution(mySink)
  }
This function allows running the passed execution with its own cache. This means anything inside won't benefit from Execution's global attempts to avoid repeated executions.
The main use case is generating many large Execution results. Execution's caching in this case can lead to out-of-memory errors, as the cache keeps references to many heap objects.
Ex.:

  (0 until 1000).map { _ => Execution.withNewCache(myLargeObjectProducingExecution) }
Run a sequence of executions, but only permitting parallelism of them to run at the same time.
List of executions to run
Number to run in parallel
The resulting Execution of the Seq of results
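A sketch (assuming scalding-core): at most two of the ten executions below are in flight at once.

```scala
import com.twitter.scalding.Execution

val work: Seq[Execution[Int]] =
  (1 to 10).map { i => Execution.from(i * i) }

// Runs the sequence, limited to 2 executions in parallel at any time.
val limited: Execution[Seq[Int]] = Execution.withParallelism(work, 2)
```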
Combine several executions and run them in parallel when .run is called.
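For example (a sketch assuming scalding-core; Execution.zip is overloaded for several arities):

```scala
import com.twitter.scalding.Execution

val a: Execution[Int] = Execution.from(1)
val b: Execution[String] = Execution.from("two")
val c: Execution[Double] = Execution.from(3.0)

// All three run in parallel when .run is called; the results are tupled.
val all: Execution[(Int, String, Double)] = Execution.zip(a, b, c)
```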
Execution has many methods for creating Execution[T] instances, which are the preferred way to compose computations in Scalding libraries.