package internal


Type Members

  1. abstract class ExecutionMetrics extends AnyRef
  2. abstract class Executor extends ExecutorPlatformSpecific

    An executor is responsible for executing actions. Each action is guaranteed to begin execution on a fresh stack frame.

  3. trait ExecutorPlatformSpecific extends AnyRef
  4. abstract class MutableConcurrentQueue[A] extends AnyRef

    A MutableConcurrentQueue interface to use under the hood in ZIO.

    The implementation should, at a minimum:

    1. be non-blocking and ideally lock-free;
    2. provide basic metrics such as how many elements were enqueued/dequeued.

    Attributes
    protected[zio]
    Note

    This is declared as an abstract class since invokevirtual is slightly cheaper than invokeinterface.
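
    A rough sketch of the kind of contract described above (the exact members of the real abstract class may differ; treat these names and signatures as illustrative only):

      abstract class ConcurrentQueueSketch[A] {
        val capacity: Int

        // Non-blocking enqueue: returns false instead of blocking when full.
        def offer(a: A): Boolean

        // Non-blocking dequeue: returns `default` instead of blocking when empty.
        def poll(default: A): A

        // Basic metrics, as required above.
        def enqueuedCount(): Long
        def dequeuedCount(): Long

        def size(): Int = (enqueuedCount() - dequeuedCount()).toInt
      }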

  5. abstract class MutableQueueFieldsPadding[A] extends TailPadding[A] with Serializable
  6. final class OneElementConcurrentQueue[A] extends MutableConcurrentQueue[A] with Serializable

    This is a specialized implementation of MutableConcurrentQueue of capacity 1. Since capacity-1 queues are used by default under the hood in Streams as intermediate resources, they should be very cheap to create and throw away. Hence this queue is optimized (unlike RingBuffer*) for a very small footprint, while still being plenty fast.

    Allocating an object takes only 24 bytes + 8+ bytes in the LongAdder (so 32+ bytes total), which is 15x less than the smallest RingBuffer.

    zio.internal.OneElementConcurrentQueue object internals:

      OFFSET  SIZE                                          TYPE  DESCRIPTION
           0     4                                                (object header)
           4     4                                                (object header)
           8     4                                                (object header)
          12     4                                           int  OneElementConcurrentQueue.capacity
          16     4  java.util.concurrent.atomic.AtomicReference   OneElementConcurrentQueue.ref
          20     4  java.util.concurrent.atomic.LongAdder         OneElementConcurrentQueue.deqAdder

      Instance size: 24 bytes
      Space losses: 0 bytes internal + 0 bytes external = 0 bytes total
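
    A hedged sketch of a capacity-1 queue with the footprint described above, built from an AtomicReference plus a LongAdder (illustrative only; the class and member names here are hypothetical and the real class carries more state and methods):

      import java.util.concurrent.atomic.{ AtomicReference, LongAdder }

      final class OneSlotQueueSketch[A] {
        private val ref      = new AtomicReference[AnyRef](null) // the single slot
        private val deqAdder = new LongAdder                     // counts successful polls
        val capacity: Int    = 1

        // Succeeds only if the slot is currently empty.
        def offer(a: A): Boolean =
          ref.compareAndSet(null, a.asInstanceOf[AnyRef])

        // Returns `default` if the slot is empty.
        def poll(default: A): A = {
          val v = ref.getAndSet(null)
          if (v eq null) default
          else {
            deqAdder.increment()
            v.asInstanceOf[A]
          }
        }

        def dequeuedCount(): Long = deqAdder.sum()
      }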

  7. abstract class Platform extends AnyRef

    A Platform provides the minimum capabilities necessary to bootstrap execution of ZIO tasks.

  8. abstract class RingBuffer[A] extends MutableQueueFieldsPadding[A] with Serializable

    A lock-free array-based bounded queue. It is thread-safe and can be used in a multiple-producer/multiple-consumer (MPMC) setting.

    Main concepts

    A simple array-based queue of size N uses an array buf of size N as the underlying storage. There are 2 pointers, head and tail. An element is enqueued into buf at position tail % N and dequeued from head % N. Each time an enqueue happens tail is incremented; similarly, when a dequeue happens head is incremented.

    Since the pointers wrap around the array as they get incremented, such a data structure is also called a circular buffer or a ring buffer.

    Because the queue is bounded, enqueue and dequeue may fail, which is captured in the semantics of the offer and poll methods.
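
    As an illustration of the head/tail scheme described above, here is a minimal single-threaded sketch (the class and its names are hypothetical, not the actual zio.internal implementation, which is concurrent and padded):

      final class SimpleRingBuffer[A](n: Int) {
        private val buf  = new Array[AnyRef](n)
        private var head = 0L // next position to dequeue from
        private var tail = 0L // next position to enqueue into

        // Fails (returns false) instead of blocking when the queue is full.
        def offer(a: A): Boolean =
          if (tail - head == n) false
          else {
            buf((tail % n).toInt) = a.asInstanceOf[AnyRef]
            tail += 1
            true
          }

        // Returns `default` instead of blocking when the queue is empty.
        def poll(default: A): A =
          if (head == tail) default
          else {
            val idx = (head % n).toInt
            val a   = buf(idx).asInstanceOf[A]
            buf(idx) = null // drop the reference so the element can be GC'd
            head += 1
            a
          }
      }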

    Using offer as an example, the algorithm can be broken down roughly into three steps:

    1. Find a place to insert an element.
    2. Reserve this place, put an element there, and make it visible to other threads (store and publish).
    3. If there was no place in step 1, return false; otherwise return true.

    Steps 1 and 2 are usually done in a loop to accommodate the possibility of failure due to a race. Depending on the implementation of these steps, the resulting queue will have different characteristics. For instance, the more sub-steps there are between reserve and publish in step 2, the higher the chance that one thread will delay other threads by being descheduled.

    Notes on the design

    The queue uses a buf array to store elements and a seq array to store longs, which serve as:

    1. an indicator to producer/consumer threads whether the slot is right for enqueue/dequeue,
    2. an indicator whether the queue is empty/full,
    3. a mechanism to publish changes to buf via a volatile write (which can even be relaxed to an ordered store).

    See comments in the offer/poll methods for more details on seq.

    The benefit of using seq plus head/tail counters is that there are no allocations during enqueue/dequeue and very little overhead. The downside is that it doubles (on 64-bit) or triples (with compressed OOPs) the amount of memory needed for a queue.

    Concurrent enqueues and concurrent dequeues are possible. However, there is no helping, so threads can delay other threads, and thus the queue doesn't provide the full set of lock-free guarantees. In practice this is usually not a problem, since the benefits are simplicity, zero GC pressure, and speed.
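
    A hedged sketch of the seq scheme just described, following D. Vyukov's bounded MPMC design (the real RingBuffer uses padded counters and field updaters rather than the AtomicLong wrappers used here, and all names are illustrative):

      import java.util.concurrent.atomic.{ AtomicLong, AtomicLongArray, AtomicReferenceArray }

      final class MpmcRingSketch[A](capacity: Int) {
        private val buf  = new AtomicReferenceArray[AnyRef](capacity)
        private val seq  = new AtomicLongArray(capacity)
        private val head = new AtomicLong(0L)
        private val tail = new AtomicLong(0L)

        // seq(i) == t     : slot i is free for the producer that owns tail value t
        // seq(i) == t + 1 : slot i holds a published element for the matching head
        (0 until capacity).foreach(i => seq.set(i, i.toLong))

        def offer(a: A): Boolean = {
          while (true) {
            val t   = tail.get()
            val idx = (t % capacity).toInt
            val s   = seq.get(idx)
            if (s == t) {
              // Steps 1-2: reserve the slot by advancing tail, then store and publish.
              if (tail.compareAndSet(t, t + 1)) {
                buf.set(idx, a.asInstanceOf[AnyRef])
                seq.set(idx, t + 1) // volatile write makes the element visible
                return true
              } // else another producer won the race; retry
            } else if (s < t) return false // slot not yet recycled by a consumer: full
            // s > t: we read a stale tail; retry with a fresh one
          }
          false // unreachable
        }

        def poll(default: A): A = {
          while (true) {
            val h   = head.get()
            val idx = (h % capacity).toInt
            val s   = seq.get(idx)
            if (s == h + 1) {
              if (head.compareAndSet(h, h + 1)) {
                val a = buf.get(idx).asInstanceOf[A]
                buf.set(idx, null)
                seq.set(idx, h + capacity) // recycle the slot for a future enqueue
                return a
              }
            } else if (s < h + 1) return default // nothing published here yet: empty
          }
          default // unreachable
        }
      }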

    There are 2 implementations of a RingBuffer:

    1. RingBufferArb, which supports queues with arbitrary capacity;
    2. RingBufferPow2, which supports only power-of-2 capacities.

    The reason is that head % N and tail % N are rather cheap when they can be done as a simple mask (N is a power of 2), and pretty expensive when they involve an idiv instruction. The difference is especially pronounced in tight loops (see RoundtripBenchmark).
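
    A quick illustration of the mask trick, for example in a REPL (assuming N is a power of 2, i & (N - 1) computes the same slot index as i % N for non-negative i):

      val n    = 8
      val mask = n - 1 // 0b0111
      assert((0L until 100L).forall(i => i % n == (i & mask)))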

    To ensure good performance, reads and writes to the head and tail fields need to be independent, i.e. they shouldn't fall on the same (or adjacent) cache line.

    We could make those counters regular volatile long fields and space them out, but we would still need a way to do CAS on them. The only way to do this, short of Unsafe, is to use an AtomicLongFieldUpdater, which is exactly what we have here.

    See also

    See zio.internal.MutableQueueFieldsPadding for more details on padding and the object's memory layout.

    The design is heavily inspired by libraries such as https://github.com/LMAX-Exchange/disruptor and https://github.com/JCTools/JCTools, which is based on D. Vyukov's design: http://www.1024cores.net/home/lock-free-algorithms/queues/bounded-mpmc-queue

    Compared to JCTools, this implementation doesn't rely on sun.misc.Unsafe, so it is arguably more portable and should be easier to read. It's also very extensively commented, including reasoning, assumptions, and hacks.

    Alternative designs

    There is an alternative design described in the paper "A Portable Lock-Free Bounded Queue" by Pirkelbauer et al. It provides full lock-free guarantees, which generally means that one out of many contending threads is guaranteed to make progress in a finite number of steps. The design is thus not susceptible to threads delaying other threads. However, the helping scheme is rather involved and cannot be implemented without allocations (at least I couldn't come up with a way yet). This translates into worse performance on average, but better performance in some very specific situations.

  9. final class RingBufferArb[A] extends RingBuffer[A]
  10. final class RingBufferPow2[A] extends RingBuffer[A]
  11. final case class Tracing(tracer: Tracer, tracingConfig: TracingConfig) extends Product with Serializable
  12. final class UniqueKey extends AnyRef

    This can be used whenever an arbitrary number of unique keys needs to be generated, as it just uses the memory location (reference identity) for equality.
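
    A tiny illustration of the idea, for example in a REPL (the class name is hypothetical): any class that keeps the default AnyRef equality compares by reference, so every allocation yields a distinct key.

      final class IdentityKey

      val k1 = new IdentityKey
      val k2 = new IdentityKey
      assert(k1 == k1) // a key equals itself
      assert(k1 != k2) // distinct allocations are never equal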

Value Members

  1. def ZIOSucceedNow[A](a: A): UIO[A]

    Returns an effect that models success with the specified value.

  2. def ZManagedSucceedNow[A](r: A): ZManaged[Any, Nothing, A]

    Lifts an eager, pure value into a Managed.

  3. def ZSTMSucceedNow[A](a: A): ZSTM[Any, Nothing, A]

    Returns an STM effect that succeeds with the specified value.

  4. object Executor extends DefaultExecutors with Serializable
  5. object MutableConcurrentQueue
  6. object OneShot
  7. object Platform extends PlatformSpecific
  8. object RingBuffer extends Serializable
  9. object RingBufferArb extends Serializable
  10. object RingBufferPow2 extends Serializable
  11. object SingleThreadedRingBuffer
  12. object Tracing extends Serializable
  13. object UniqueKey
