package cats.effect.kernel

Source: package.scala

Type Members

  1. type ApplicativeThrow[F[_]] = ApplicativeError[F, Throwable]
  2. trait Async[F[_]] extends AsyncPlatform[F] with Sync[F] with Temporal[F]
  3. sealed trait CancelScope extends Product with Serializable
  4. trait Clock[F[_]] extends ClockPlatform[F]
  5. type Concurrent[F[_]] = GenConcurrent[F, Throwable]
  6. trait Cont[F[_], K, R] extends AnyRef

    This construction supports Async.cont:

    trait Async[F[_]] {
      ...
      def cont[A](body: Cont[F, A]): F[A]
    }

    It's a low-level operation meant for implementors; end users should use async, start or Deferred instead, depending on the use case.

    It can be understood as providing an operation to resume an F asynchronously, of type Either[Throwable, A] => Unit, and an (interruptible) operation to semantically block until resumption, of type F[A]. We will refer to the former as resume, and the latter as get.

    These two operations capture the essence of semantic blocking, and can be used to build async, which in turn can be used to build Fiber, start, Deferred and so on.

    Refer to the default implementation of Async[F].async for an example of usage.

    The reason for the shape of the Cont construction in Async[F].cont, as opposed to simply:

    trait Async[F[_]] {
      ...
      def cont[A]: F[(Either[Throwable, A] => Unit, F[A])]
    }

    is that it's not safe to use concurrent operations such as get.start.

    The Cont encoding therefore simulates higher-rank polymorphism to ensure that you cannot call start on get, but only use operations up to MonadCancel (flatMap, onCancel, uncancelable, etc.).

    If you are an implementor, and you have an implementation of async but not cont, you can override Async[F].async with your implementation, and use Async.defaultCont to implement Async[F].cont.
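
    As a minimal sketch of the resume/get idea above (not part of this API), a hypothetical callback-based function register can be adapted with the public async_ combinator: its callback plays the role of resume, and the returned F[A] plays the role of the semantically blocking get.

    import cats.effect.kernel.Async

    // `register` is a hypothetical callback-based API; an Async instance is assumed for F.
    def fromCallback[F[_], A](register: (Either[Throwable, A] => Unit) => Unit)(implicit F: Async[F]): F[A] =
      F.async_ { resume =>
        register(resume) // `resume` must be invoked at most once to complete the F[A]
      }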

  7. abstract class Deferred[F[_], A] extends DeferredSource[F, A] with DeferredSink[F, A]

    A purely functional synchronization primitive which represents a single value which may not yet be available.

    When created, a Deferred is empty. It can then be completed exactly once, and never be made empty again.

    get on an empty Deferred will block until the Deferred is completed. get on a completed Deferred will always immediately return its content.

    complete(a) on an empty Deferred will set it to a, and notify any and all readers currently blocked on a call to get. complete(a) on a Deferred that has already been completed will not modify its content, and result in a failed F.

    Albeit simple, Deferred can be used in conjunction with Ref to build complex concurrent behaviour and data structures like queues and semaphores.

    Finally, the blocking mentioned above is semantic only, no actual threads are blocked by the implementation.
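
    A minimal sketch of the handoff described above (the names produce and handoff are illustrative): one fiber completes the Deferred while the caller semantically blocks on get.

    import cats.effect.kernel.GenConcurrent
    import cats.syntax.all._

    def handoff[F[_], A](produce: F[A])(implicit F: GenConcurrent[F, Throwable]): F[A] =
      for {
        d <- F.deferred[A]
        _ <- F.start(produce.flatMap(d.complete)) // completes exactly once
        a <- d.get                                // semantically blocks until completed
      } yield a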

  8. trait DeferredSink[F[_], A] extends AnyRef
  9. trait DeferredSource[F[_], A] extends AnyRef
  10. trait Fiber[F[_], E, A] extends AnyRef

    A datatype that represents a handle to a fiber and allows for waiting and cancellation against that fiber.

    See also

    GenSpawn documentation for more detailed information on the concurrency of fibers.

  11. trait GenConcurrent[F[_], E] extends GenSpawn[F, E]
  12. trait GenSpawn[F[_], E] extends MonadCancel[F, E] with Unique[F]

    A typeclass that characterizes monads which support spawning and racing of fibers. GenSpawn extends the capabilities of MonadCancel, so an instance of this typeclass must also provide a lawful instance for MonadCancel.

    This documentation builds upon concepts introduced in the MonadCancel documentation.

    Concurrency

    GenSpawn introduces a notion of concurrency that enables fibers to safely interact with each other via three special functions. start spawns a fiber that executes concurrently with the spawning fiber. join semantically blocks the joining fiber until the joinee fiber terminates, after which the outcome of the joinee is returned. cancel requests a fiber to abnormally terminate, and semantically blocks the canceller until the cancellee has completed finalization.

    Just like threads, fibers can execute concurrently with respect to each other. This means that the effects of independent fibers may be interleaved nondeterministically. This mode of concurrency reaps benefits for modular program design; fibers that are described separately can execute simultaneously without requiring programmers to explicitly yield back to the runtime system.

    The interleaving of effects is illustrated in the following program:

    for {
      fa <- (println("A1") *> println("A2")).start
      fb <- (println("B1") *> println("B2")).start
    } yield ()

    In this program, two fibers A and B are spawned concurrently. There are six possible executions, each of which exhibits a different ordering of effects. The observed output of each execution is shown below:

    1. A1, A2, B1, B2
    2. A1, B1, A2, B2
    3. A1, B1, B2, A2
    4. B1, B2, A1, A2
    5. B1, A1, B2, A2
    6. B1, A1, A2, B2

    Notice how every execution preserves sequential consistency of the effects within each fiber: A1 always prints before A2, and B1 always prints before B2. However, there are no guarantees around how the effects of both fibers will be ordered with respect to each other; it is entirely nondeterministic.
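
    A minimal sketch of start, join and cancel together (fast and slow are illustrative): the spawning fiber waits for one child and cancels the other.

    import cats.effect.kernel.{GenSpawn, Outcome}
    import cats.syntax.all._

    def firstOnly[F[_], E, A](fast: F[A], slow: F[A])(implicit F: GenSpawn[F, E]): F[Outcome[F, E, A]] =
      for {
        f1      <- F.start(fast)
        f2      <- F.start(slow)
        outcome <- f1.join   // semantically blocks until `fast` terminates
        _       <- f2.cancel // requests termination and waits for finalization
      } yield outcome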

    Cancellation

    MonadCancel introduces a simple means of cancellation, particularly self-cancellation, where a fiber can request the abnormal termination of its own execution. This is achieved by calling canceled.

    GenSpawn expands on the cancellation model described by MonadCancel by introducing a means of external cancellation. With external cancellation, a fiber can request the abnormal termination of another fiber by calling Fiber#cancel.

    The cancellation model dictates that external cancellation behaves identically to self-cancellation. To guarantee consistent behavior between the two, the following semantics are shared:

    1. Masking: if a fiber is canceled while it is masked, cancellation is suppressed until it reaches a completely unmasked state. See MonadCancel documentation for more details.
    2. Backpressure: cancel semantically blocks all callers until finalization is complete.
    3. Idempotency: once a fiber's cancellation has been requested, subsequent cancellations have no effect on cancellation status.
    4. Terminal: Cancellation of a fiber that has terminated immediately returns.

    External cancellation contrasts with self-cancellation in one aspect: the former may require synchronization between multiple threads to communicate a cancellation request. As a result, cancellation may not be immediately observed by a fiber. Implementations are free to decide how and when this synchronization takes place.

    Cancellation safety

    A function or effect is considered to be cancellation-safe if it can be run in the absence of masking without violating effectful lifecycles or leaking resources. Functions that are not cancellation-safe require extra attention and care from users to ensure safe usage.

    start and racePair are both considered to be cancellation-unsafe effects because they return a Fiber, which is a resource that has a lifecycle.

    // Start a fiber that continuously prints "A".
    // After 10 seconds, cancel the fiber.
    F.start(F.delay(println("A")).foreverM).flatMap { fiber =>
      F.sleep(10.seconds) *> fiber.cancel
    }

    In the above example, imagine the spawning fiber is canceled after it starts the printing fiber, but before the latter is canceled. In this situation, the printing fiber is not canceled and will continue executing forever, contending with other fibers for system resources. Fiber leaks like this typically happen because some fiber that holds a reference to a child fiber is canceled before the child terminates; like threads, fibers will not automatically be cleaned up.

    Resource leaks like this are unfavorable when writing applications. In the case of start and racePair, it is recommended not to use these methods; instead, use background and race respectively.

    The following example depicts a safer version of the start example above:

    // Starts a fiber that continuously prints "A".
    // After 10 seconds, the resource scope exits so the fiber is canceled.
    F.background(F.delay(println("A")).foreverM).use { _ =>
      F.sleep(10.seconds)
    }

    Scheduling

    Fibers are commonly referred to as lightweight threads or green threads. This alludes to the nature by which fibers are scheduled by runtime systems: many fibers are multiplexed onto one or more native threads.

    For applications running on the JVM, the scheduler typically manages a thread pool onto which fibers are scheduled. These fibers are executed simultaneously by the threads in the pool, achieving both concurrency and parallelism. For applications running on JavaScript platforms, all compute is restricted to a single worker thread, so multiple fibers must share that worker thread (dictated by fairness properties), achieving concurrency without parallelism.

    cede is a special function that interacts directly with the underlying scheduler. It is a means of cooperative multitasking by which a fiber signals to the runtime system that it intends to pause execution and resume at some later time at the discretion of the scheduler. This is in contrast to preemptive multitasking, in which threads of control are forcibly yielded after a well-defined time slice.

    Preemptive and cooperative multitasking are both features of runtime systems that influence the fairness and throughput properties of an application. These modes of scheduling are not necessarily mutually exclusive: a runtime system may incorporate a blend of the two, where fibers can explicitly yield back to the scheduler, but the runtime preempts a fiber if it has not yielded for some time.
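
    A minimal sketch of explicit yielding with cede (step and times are illustrative): a long-running loop that gives the scheduler a chance to run other fibers between iterations.

    import cats.effect.kernel.GenSpawn
    import cats.syntax.all._

    def repeatWithYield[F[_], A](step: F[A], times: Int)(implicit F: GenSpawn[F, Throwable]): F[Unit] =
      if (times <= 0) F.unit
      else step >> F.cede >> repeatWithYield(step, times - 1) // cooperative yield between steps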

    For more details on schedulers, see the following resources:

    1. https://gist.github.com/djspiewak/3ac3f3f55a780e8ab6fa2ca87160ca40
    2. https://gist.github.com/djspiewak/46b543800958cf61af6efa8e072bfd5c
  13. trait GenTemporal[F[_], E] extends GenConcurrent[F, E] with Clock[F]
  14. trait MonadCancel[F[_], E] extends MonadError[F, E]

    A typeclass that characterizes monads which support safe cancellation, masking, and finalization. MonadCancel extends the capabilities of cats.MonadError, so an instance of this typeclass must also provide a lawful instance for cats.MonadError.

    Fibers

    A fiber is a sequence of effects which are bound together by flatMap. The execution of a fiber of an effect F[E, A] terminates with one of three outcomes, which are encoded by the datatype Outcome:

    1. Outcome.Succeeded: indicates success with a value of type A
    2. Outcome.Errored: indicates failure with a value of type E
    3. Outcome.Canceled: indicates abnormal termination

    Additionally, a fiber may never produce an outcome, in which case it is said to be non-terminating.
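
    A minimal sketch of handling the three outcomes by pattern matching:

    import cats.effect.kernel.Outcome

    def describe[F[_], E, A](oc: Outcome[F, E, A]): String =
      oc match {
        case Outcome.Succeeded(_) => "completed with a value"
        case Outcome.Errored(e)   => s"failed with $e"
        case Outcome.Canceled()   => "was canceled"
      }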

    Cancellation

    Cancellation refers to the act of requesting that the execution of a fiber be abnormally terminated. MonadCancel exposes a means of self-cancellation, with which a fiber can request that its own execution be terminated. Self-cancellation is achieved via canceled.

    Cancellation is vaguely similar to the short-circuiting behavior introduced by cats.MonadError, but there are several key differences:

    1. Cancellation is effective; if it is observed it must be respected, and it cannot be reversed. In contrast, handleError exposes the ability to catch and recover from errors, and then proceed with normal execution.
    2. Cancellation can be masked via MonadCancel#uncancelable. Masking is discussed in the next section.
    3. GenSpawn introduces external cancellation, another cancellation mechanism by which fibers can be canceled by external parties.

    Masking

    Masking allows a fiber to suppress cancellation for a period of time, which is achieved via uncancelable. If a fiber is canceled while it is masked, the cancellation is suppressed for as long as the fiber remains masked. Once the fiber reaches a completely unmasked state, it responds to the cancellation.

    While a fiber is masked, it may optionally unmask by "polling", rendering itself cancelable again.

    F.uncancelable { poll =>
      // can only observe cancellation within `fb`
      fa *> poll(fb) *> fc
    }

    These semantics allow users to precisely mark what regions of code are cancelable within a larger code block.

    Finalization

    Finalization refers to the act of running finalizers in response to a cancellation. Finalizers are those effects whose evaluation is guaranteed in the event of cancellation. After a fiber has completed finalization, it terminates with an outcome of Canceled.

    Finalizers can be registered to a fiber for the duration of some effect via onCancel. If a fiber is canceled while running that effect, the registered finalizer is guaranteed to be run before terminating.
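
    A minimal sketch of registering a finalizer with onCancel (releaseLock is illustrative): if the fiber is canceled while running work, the finalizer is guaranteed to run before termination.

    import cats.effect.kernel.MonadCancel

    def withFinalizer[F[_], E, A](work: F[A], releaseLock: F[Unit])(implicit F: MonadCancel[F, E]): F[A] =
      F.onCancel(work, releaseLock) // releaseLock runs only if `work` is canceled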

    Bracket pattern

    The aforementioned concepts work together to unlock a powerful pattern for safely interacting with effectful lifecycles: the bracket pattern. This is analogous to the try-with-resources/finally construct in Java.

    A lifecycle refers to a pair of actions, which are called the acquisition action and the release action respectively. The relationship between these two actions is that if the former completes successfully, then the latter is guaranteed to be run eventually, even in the presence of errors and cancellation. While the lifecycle is active, other work can be performed, but this invariant is always respected.

    The bracket pattern is an invaluable tool for safely handling resource lifecycles. Imagine an application that opens network connections to a database server to do work. If a task in the application is canceled while it holds an open database connection, the connection would never be released or returned to a pool, causing a resource leak.

    To illustrate the compositional nature of MonadCancel and its combinators, the implementation of bracket is shown below:

    def bracket[A, B](acquire: F[A])(use: A => F[B])(release: A => F[Unit]): F[B] =
      uncancelable { poll =>
        flatMap(acquire) { a =>
          val finalized = onCancel(poll(use(a)), release(a).uncancelable)
          val handled = onError(finalized) { case e => void(attempt(release(a).uncancelable)) }
          flatMap(handled)(b => as(attempt(release(a).uncancelable), b))
        }
      }

    See bracketCase and bracketFull for other variants of the bracket pattern. If more specialized behavior is necessary, it is recommended to use uncancelable and onCancel directly.
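
    A minimal usage sketch of bracket with an effect that has a Sync instance (the file path and the readLine logic are illustrative):

    import java.io.{BufferedReader, FileReader}
    import cats.effect.kernel.Sync

    def firstLine[F[_]](path: String)(implicit F: Sync[F]): F[String] =
      F.bracket(F.delay(new BufferedReader(new FileReader(path))))(
        reader => F.delay(reader.readLine()) // use: may fail or be canceled
      )(reader => F.delay(reader.close()))   // release: guaranteed to run eventually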

  15. type MonadCancelThrow[F[_]] = MonadCancel[F, Throwable]
  16. type MonadThrow[F[_]] = MonadError[F, Throwable]
  17. sealed trait Outcome[F[_], E, A] extends Product with Serializable
  18. type ParallelF[F[_], A] = T[F, A]
  19. trait Poll[F[_]] extends ~>[F, F]
  20. abstract class Ref[F[_], A] extends RefSource[F, A] with RefSink[F, A]

    An asynchronous, concurrent mutable reference.

    Provides safe concurrent access and modification of its content, but no functionality for synchronisation, which is instead handled by Deferred. For this reason, a Ref is always initialised to a value.

    The default implementation is nonblocking and lightweight, consisting essentially of a purely functional wrapper over an AtomicReference.
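
    A minimal sketch of a concurrent counter built on Ref (names are illustrative); update performs an atomic read-modify-write, so increment is safe to call from many fibers at once.

    import cats.effect.kernel.GenConcurrent
    import cats.syntax.all._

    def counter[F[_]](implicit F: GenConcurrent[F, Throwable]): F[(F[Unit], F[Int])] =
      F.ref(0).map { ref =>
        val increment = ref.update(_ + 1) // atomic modification
        val current   = ref.get
        (increment, current)
      }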

  21. trait RefSink[F[_], A] extends AnyRef
  22. trait RefSource[F[_], A] extends AnyRef
  23. sealed abstract class Resource[F[_], +A] extends AnyRef

    Resource is a data structure which encodes the idea of executing an action which has an associated finalizer that needs to be run when the action completes.

    Examples include scarce resources like files, which need to be closed after use, or concurrent abstractions like locks, which need to be released after having been acquired.

    There are several constructors to allocate a resource; the most common is make:

    def open(file: File): Resource[IO, BufferedReader] = {
       val openFile = IO(new BufferedReader(new FileReader(file)))
       Resource.make(acquire = openFile)(release = f => IO(f.close))
    }

    and several methods to consume a resource; the most common is use:

    def readFile(file: BufferedReader): IO[Content]
    
    open(file1).use(readFile)

    Finalisation (in this case file closure) happens when the action passed to use terminates. Therefore, the code above is _not_ equivalent to:

    open(file1).use(IO.pure).flatMap(readFile)

    which will instead result in an error, since the file gets closed after pure, meaning that readFile will then fail.

    Also note that a _new_ resource is allocated every time use is called, so the following code opens and closes the resource twice:

    val file: Resource[IO, File]
    file.use(read) >> file.use(read)

    If you want sharing, pass the result of allocating the resource around, and call use once.

    file.use { file => read(file) >> read(file) }

    The acquire and release actions passed to make are not interruptible, and release will run when the action passed to use succeeds, fails, or is interrupted. You can use makeCase to specify a different release logic depending on each of the three outcomes above.

    It is also possible to specify an interruptible acquire through makeFull, but be warned that this is an advanced concurrency operation, which requires some care.

    Resource usage nests:

    open(file1).use { in1 =>
      open(file2).use { in2 =>
        readFiles(in1, in2)
      }
    }

    However, it is more idiomatic to compose multiple resources together before use, exploiting the fact that Resource forms a Monad, and therefore that resources can be nested through flatMap. Nested resources are released in reverse order of acquisition. Outer resources are released even if an inner use or release fails.

    def mkResource(s: String) = {
      val acquire = IO(println(s"Acquiring $s")) *> IO.pure(s)
      def release(s: String) = IO(println(s"Releasing $s"))
      Resource.make(acquire)(release)
    }
    
    val r = for {
      outer <- mkResource("outer")
    
      inner <- mkResource("inner")
    } yield (outer, inner)
    
    r.use { case (a, b) =>
      IO(println(s"Using $a and $b"))
    }

    On evaluation the above prints:

    Acquiring outer
    Acquiring inner
    Using outer and inner
    Releasing inner
    Releasing outer

    A Resource can also lift arbitrary actions that don't require finalisation through eval. Actions passed to eval preserve their interruptibility.

    Finally, Resource partakes in other abstractions such as MonadError, Parallel, and Monoid, so make sure to explore those instances as well as the other methods not covered here.

    Resource is encoded as a data structure, an ADT, described by a small set of node types.

    Normally users don't need to care about these node types, unless conversions from Resource into something else is needed (e.g. conversion from Resource into a streaming data type), in which case they can be interpreted through pattern matching.

    F: the effect type in which the resource is allocated and released
    A: the type of resource

  24. type Spawn[F[_]] = GenSpawn[F, Throwable]
  25. trait Sync[F[_]] extends MonadCancel[F, Throwable] with Clock[F] with Unique[F] with Defer[F]
  26. type Temporal[F[_]] = GenTemporal[F, Throwable]
  27. trait Unique[F[_]] extends AnyRef

Value Members

  1. val Concurrent: GenConcurrent.type
  2. val ParallelF: kernel.Par.ParallelF.type
  3. val Spawn: GenSpawn.type
  4. val Temporal: GenTemporal.type
  5. object Async extends Serializable
  6. object CancelScope extends Serializable
  7. object Clock
  8. object Deferred
  9. object DeferredSink
  10. object DeferredSource
  11. object GenConcurrent extends Serializable
  12. object GenSpawn extends Serializable
  13. object GenTemporal extends Serializable
  14. object MonadCancel extends Serializable
  15. object Outcome extends LowPriorityImplicits with Serializable
  16. object Par
  17. object Ref
  18. object RefSink
  19. object RefSource
  20. object Resource extends ResourceFOInstances0 with ResourceHOInstances0 with ResourcePlatform
  21. object Sync extends Serializable
  22. object Unique
  23. object implicits extends AllSyntax with AllInstances
