An asynchronous FIFO queue.
An AsyncSemaphore is a traditional semaphore but with asynchronous execution. Grabbing a permit returns a Future[Permit].
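To illustrate the idea, here is a minimal sketch of an asynchronous semaphore built on the standard library's scala.concurrent Promise/Future -- an assumption-laden model, not the library's actual implementation (whose Permit and queueing details may differ):

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._
import scala.collection.mutable

// Sketch only: acquire() returns a Future[Permit] that completes
// immediately if a permit is free, or later when one is released.
class AsyncSemaphoreSketch(initialPermits: Int) {
  final class Permit private[AsyncSemaphoreSketch] () {
    def release(): Unit = AsyncSemaphoreSketch.this.release()
  }

  private[this] var permits = initialPermits
  private[this] val waiters = mutable.Queue.empty[Promise[Permit]]

  def acquire(): Future[Permit] = synchronized {
    if (permits > 0) {
      permits -= 1
      Future.successful(new Permit)
    } else {
      val p = Promise[Permit]()
      waiters.enqueue(p)
      p.future
    }
  }

  private def release(): Unit = synchronized {
    if (waiters.nonEmpty) waiters.dequeue().success(new Permit)
    else permits += 1
  }
}

// Usage: the second acquire stays pending until the first permit is released.
val sem = new AsyncSemaphoreSketch(1)
val first = sem.acquire()
val second = sem.acquire()
val secondPendingBefore = !second.isCompleted
Await.result(first, 1.second).release()
val secondReadyAfter = second.isCompleted
```

Returning a future rather than blocking lets callers compose permit acquisition with other asynchronous work.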
A scheduler that bridges tasks submitted by external threads into local executor threads. All tasks submitted locally are executed on local threads.
Note: This scheduler expects to create executors with unbounded capacity. Thus it does not expect, and has undefined behavior for, any RejectedExecutionExceptions other than those encountered after executor shutdown.
An unbuffered FIFO queue, brokered by Offers. Note that the queue is ordered by successful operations, not initiations, so one and two may not be received in that order with this code:

{{{
val b: Broker[Int]
b ! 1
b ! 2
}}}
But rather we need to explicitly sequence them:
{{{
val b: Broker[Int]
for {
  () <- b ! 1
  () <- b ! 2
} ()
}}}
BUGS: the implementation would be much simpler in the absence of cancellation.
A named Scheduler mix-in that causes submitted tasks to be dispatched according to a java.util.concurrent.ExecutorService created by an abstract factory function.
An efficient thread-local, direct-dispatch scheduler.
A java.util.concurrent.ThreadFactory which creates threads with a name indicating the pool from which they originated.
A new java.lang.ThreadGroup (named {{name}}) is created as a sub-group of the group to which the thread that created the factory belongs. Each thread created by this factory will be a member of this group and will have a unique name composed of the group name and a monotonically increasing number. The intention of this naming is to ease thread identification in debugging output.
For example, a NamedPoolThreadFactory with name="writer" will create a ThreadGroup named "writer", and new threads will be named "writer-1", "writer-2", etc.
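The behavior described above can be sketched with a plain java.util.concurrent.ThreadFactory. This is an illustrative reimplementation, and the makeDaemons parameter is an assumption rather than the documented signature:

```scala
import java.util.concurrent.ThreadFactory
import java.util.concurrent.atomic.AtomicInteger

// Sketch of a pool-naming thread factory: threads are created in a
// dedicated ThreadGroup and named "<pool>-<n>" with n increasing.
class NamedPoolThreadFactorySketch(name: String, makeDaemons: Boolean = false)
    extends ThreadFactory {
  private[this] val group =
    new ThreadGroup(Thread.currentThread().getThreadGroup, name)
  private[this] val threadNumber = new AtomicInteger(1)

  def newThread(r: Runnable): Thread = {
    val t = new Thread(group, r, name + "-" + threadNumber.getAndIncrement())
    t.setDaemon(makeDaemons)
    t
  }
}

// Usage: a "writer" factory yields threads "writer-1", "writer-2", ...
val factory = new NamedPoolThreadFactorySketch("writer")
val t1 = factory.newThread(new Runnable { def run(): Unit = () })
val t2 = factory.newThread(new Runnable { def run(): Unit = () })
```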
An offer to communicate with another process. The offer is parameterized on the type of the value communicated. An offer that sends a value typically has type {{Unit}}. An offer is activated by synchronizing it, which is done with {{sync()}}.
Note that Offers are persistent values -- they may be synchronized multiple times. They represent a standing offer of communication, not a one-shot event.
Synchronization is performed via a two-phase commit process. {{prepare()}} commences the transaction, and when the other party is ready, it returns with a transaction object, {{Tx[T]}}. This must then be acked or nacked. If both parties acknowledge, {{Tx.ack()}} returns with a commit object containing the value. This finalizes the transaction. Please see the Tx documentation for more details on that phase of the protocol.

Note that a user should never perform this protocol themselves -- synchronization should always be done with {{sync()}}.
Future interrupts are propagated, and failure is passed through. It is up to the implementor of the Offer to decide on failure semantics, but they are always passed through in all of the combinators.
An interface for scheduling java.lang.Runnable tasks.
Efficient ordered serialization of operations.

Note: This should not be used in place of Scala's {{synchronized}}, but rather only when serialization semantics are required.
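One common way to get these semantics (and roughly the shape such a mix-in can take) is a lock-free queue drained by whichever thread wins an atomic counter race. This is a hedged sketch, not the library's exact code:

```scala
import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.atomic.AtomicInteger

// Sketch of ordered serialization: operations are enqueued, and the
// thread that bumps the counter from 0 drains the queue, so operations
// run one at a time in submission order.
trait SerializedSketch {
  private[this] val nwaiters = new AtomicInteger(0)
  private[this] val queue = new ConcurrentLinkedQueue[() => Unit]()

  protected def serialized(f: => Unit): Unit = {
    queue.offer(() => f)
    if (nwaiters.getAndIncrement() == 0) {
      var draining = true
      while (draining) {
        queue.poll()()
        draining = nwaiters.decrementAndGet() > 0
      }
    }
  }
}

// Usage: updates to `sum` are applied strictly in submission order.
object Counter extends SerializedSketch {
  var sum = 0
  def add(n: Int): Unit = serialized { sum += n }
}
Counter.add(1)
Counter.add(2)
```

Unlike {{synchronized}}, a caller never blocks waiting for a lock; an operation submitted while another is running is simply queued behind it.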
A spool is an asynchronous stream. It more or less mimics the scala {{Stream}} collection, but with cons cells that have either eager or deferred tails.
Construction of eager Spools is done with either Spool.cons or the {{**::}} operator. To construct a lazy/deferred Spool which materializes its tail on demand, use the {{*::}} operator. In order to use these operators for deconstruction, they must be imported explicitly (ie: {{import Spool.{*::, **:: }}})

{{{
def fill(rest: Promise[Spool[Int]]) {
  asyncProcess foreach { result =>
    if (result.last) {
      rest() = Return(result **:: Spool.empty)
    } else {
      val next = new Promise[Spool[Int]]
      rest() = Return(result *:: next)
      fill(next)
    }
  }
}
val rest = new Promise[Spool[Int]]
fill(rest)
firstElem *:: rest
}}}
A SpoolSource is a simple object for creating and populating a Spool-chain. apply() returns a Future[Spool] that is populated by calls to offer(). This class is thread-safe.
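A tiny model of the idea, using scala.concurrent and a toy Spool ADT rather than the real classes; all names and signatures here are illustrative assumptions:

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._

// Toy async-stream cells: a cons cell whose tail is a Future.
sealed trait Spool[+A]
case object Empty extends Spool[Nothing]
final case class Cons[A](head: A, tail: Future[Spool[A]]) extends Spool[A]

// Sketch of a SpoolSource: apply() hands out the head of the chain,
// offer() appends an element by fulfilling the current tail promise.
class SpoolSourceSketch[A] {
  private[this] var current = Promise[Spool[A]]()
  private[this] val head = current.future

  def apply(): Future[Spool[A]] = head

  def offer(a: A): Unit = synchronized {
    val next = Promise[Spool[A]]()
    current.success(Cons(a, next.future))
    current = next
  }

  def close(): Unit = synchronized { current.success(Empty) }
}

// Usage: offer two elements, close, then walk the fully resolved chain.
val source = new SpoolSourceSketch[Int]
source.offer(1)
source.offer(2)
source.close()

def toList(f: Future[Spool[Int]]): List[Int] =
  Await.result(f, 1.second) match {
    case Empty      => Nil
    case Cons(h, t) => h :: toList(t)
  }
val collected = toList(source())
```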
A scheduler that dispatches directly to an underlying Java cached threadpool executor.
A Tx is used to mediate multi-party transactions with the following protocol:
(Since version 6.2.x) use guava's Multimaps.synchronizedMultimap
The ConcurrentPool provides a concurrent object pool on top of the java.util.concurrent primitives.
The pool currently supports only FIFO ordering of items, and does not yet clean up per-key object lists.
(Since version 6.2.x) use finagle's BufferingPool
A global scheduler.
An asynchronous FIFO queue. In addition to providing {{offer()}} and {{poll()}}, the queue can be "failed", flushing current pollers.
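A minimal sketch of such a queue with scala.concurrent; the real API differs in details (e.g. Try-based results and what happens to buffered elements on failure), so treat this as an illustration of the offer/poll/fail contract only:

```scala
import scala.collection.mutable
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._

// Sketch of an async FIFO queue: poll() returns a Future satisfied
// immediately if an element is buffered, or later by a matching
// offer(). fail() flushes current pollers with the error and causes
// subsequent polls on an empty queue to fail.
class AsyncQueueSketch[A] {
  private[this] val buffered = mutable.Queue.empty[A]
  private[this] val pollers = mutable.Queue.empty[Promise[A]]
  private[this] var failed: Option[Throwable] = None

  def offer(a: A): Unit = synchronized {
    if (failed.isEmpty) {
      if (pollers.nonEmpty) pollers.dequeue().success(a)
      else buffered.enqueue(a)
    }
  }

  def poll(): Future[A] = synchronized {
    if (buffered.nonEmpty) Future.successful(buffered.dequeue())
    else failed match {
      case Some(t) => Future.failed(t)
      case None =>
        val p = Promise[A]()
        pollers.enqueue(p)
        p.future
    }
  }

  def fail(t: Throwable): Unit = synchronized {
    failed = Some(t)
    while (pollers.nonEmpty) pollers.dequeue().failure(t)
  }
}

// Usage: a pending poll is satisfied by a later offer; fail() flushes pollers.
val q = new AsyncQueueSketch[Int]
q.offer(1)
val r1 = Await.result(q.poll(), 1.second)
val pending = q.poll()
q.offer(2)
val r2 = Await.result(pending, 1.second)
q.fail(new Exception("stop"))
val r3Failed = q.poll().value.exists(_.isFailure)
```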