
package cats.effect

Type members

Classlikes

sealed class ExitCode extends Product with Equals with Serializable

Represents the exit code of an application.

code is constrained to a range from 0 to 255, inclusive.
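
For illustration, a hedged sketch (not taken from the library documentation) of how exit codes are typically produced from an IOApp; the object name and the args-based branching are purely illustrative:

 import cats.effect.{ExitCode, IO, IOApp}

 object ExitCodeExample extends IOApp {
   def run(args: List[String]): IO[ExitCode] =
     if (args.isEmpty)
       IO.println("usage: ExitCodeExample <name>").as(ExitCode(2)) // custom code within the 0-255 range
     else
       IO.println("hello, " + args.head).as(ExitCode.Success)      // exit code 0; ExitCode.Error is 1
 }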

Companion:
object
object ExitCode
Companion:
class
sealed abstract class IO[+A]

A pure abstraction representing the intention to perform a side effect, where the result of that side effect may be obtained synchronously (via return) or asynchronously (via callback).

IO values are pure, immutable values and thus preserve referential transparency, being usable in functional programming. An IO is a data structure that represents just a description of a side effectful computation.

IO can describe synchronous or asynchronous computations that:

  1. on evaluation yield exactly one result
  2. can end in either success or failure, and in case of failure flatMap chains get short-circuited (IO implementing the algebra of MonadError); see the sketch following this list
  3. can be canceled, but note this capability relies on the user to provide cancelation logic
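
As a quick, hedged illustration of the second point above (this snippet is not part of the API reference): a failed IO skips the remaining flatMap steps, and the error can then be recovered with handleErrorWith.

import cats.effect.IO

val shortCircuited: IO[Int] =
  IO.raiseError[Int](new RuntimeException("boom"))
    .flatMap(n => IO.println(n).as(n)) // skipped: the chain short-circuits on failure
    .handleErrorWith(e => IO.println("recovered: " + e.getMessage).as(-1))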

Effects described via this abstraction are not evaluated until the "end of the world", which is to say, when one of the "unsafe" methods is used. Effectful results are not memoized, meaning that memory overhead is minimal (and there are no leaks), and also that a single effect may be run multiple times in a referentially-transparent manner. For example:

val ioa = IO.println("hey!")

val program = for {
 _ <- ioa
 _ <- ioa
} yield ()

program.unsafeRunSync()

The above will print "hey!" twice, as the effect will be re-run each time it is sequenced in the monadic chain.
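
If the effect should instead run only once, its result can be cached explicitly. A brief sketch, assuming the standard IO#memoize combinator (the val name is illustrative):

val printedOnce: IO[Unit] =
  IO.println("hey!").memoize.flatMap { cached =>
    cached *> cached // "hey!" is printed once; the second run reuses the cached result
  }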

IO is trampolined in its flatMap evaluation. This means that you can safely call flatMap in a recursive function of arbitrary depth, without fear of blowing the stack.

def fib(n: Int, a: Long = 0, b: Long = 1): IO[Long] =
 IO.pure(a + b) flatMap { b2 =>
   if (n > 0)
     fib(n - 1, b, b2)
   else
     IO.pure(a)
 }
See also:

IOApp for the preferred way of executing whole programs wrapped in IO

Companion:
object
object IO
Companion:
class
trait IOApp

The primary entry point to a Cats Effect application. Extend this trait rather than defining your own main method. This avoids the need to run IO.unsafeRunAsync (or similar) on your own.

IOApp takes care of the messy details of properly setting up (and tearing down) the unsafe.IORuntime needed to run the IO which represents your application. All of the associated thread pools (if relevant) will be configured with the assumption that your application is fully contained within the IO produced by the run method. Note that the exact details of how the runtime will be configured are very platform-specific. Part of the point of IOApp is to insulate users from the details of the underlying runtime (whether JVM or JavaScript).

 object MyApplication extends IOApp {
   def run(args: List[String]) =
     for {
       _ <- IO.print("Enter your name: ")
       name <- IO.readln
       _ <- IO.println("Hello, " + name)
     } yield ExitCode.Success
 }

In the above example, MyApplication will be a runnable class with a main method, visible to sbt, IntelliJ, or plain-old java. When run externally, it will print, read, and print in the obvious way, producing a final process exit code of 0. Any exceptions thrown within the IO will be printed to standard error and the exit code will be set to 1. In the event that the main Fiber (represented by the IO returned by run) is canceled, the runtime will produce an exit code of 1.
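
To make failure handling explicit rather than relying on the default exit code of 1, errors can also be recovered inside run and mapped to an ExitCode directly. A hedged sketch; ErrorReportingExample and doWork are illustrative names, not part of the library:

 import cats.effect.{ExitCode, IO, IOApp}

 object ErrorReportingExample extends IOApp {
   // Illustrative work that fails; swap in real logic as needed.
   private def doWork(args: List[String]): IO[Unit] =
     IO.raiseError(new RuntimeException("something went wrong"))

   def run(args: List[String]): IO[ExitCode] =
     doWork(args)
       .as(ExitCode.Success)
       .handleErrorWith(e => IO.println("error: " + e.getMessage).as(ExitCode.Error))
 }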

Note that exit codes are an implementation-specific feature of the underlying runtime, as are process arguments. Naturally, all JVMs support these functions, as does NodeJS, but some JavaScript execution environments will be unable to replicate these features (or they simply may not make sense). In such cases, exit codes may be ignored and/or argument lists may be empty.

Note that in the case of the MyApplication example above, we would actually be better off using IOApp.Simple rather than IOApp directly, since we are neither using args nor are we explicitly producing a custom ExitCode:

 object MyApplication extends IOApp.Simple {
   val run =
     for {
       _ <- IO.print("Enter your name: ")
       name <- IO.readln
       _ <- IO.println(s"Hello, $name")
     } yield ()
 }

It is valid to define val run rather than def run because IO's evaluation is lazy: it will only run when the main method is invoked by the runtime.

In the event that the process receives an interrupt signal (SIGINT) due to Ctrl-C (or any other mechanism), it will immediately cancel the main fiber. Assuming this fiber is not within an uncancelable region, this will result in interrupting any current activities and immediately invoking any finalizers (see: IO.onCancel and IO.bracket). The process will not shut down until the finalizers have completed. For example:

 object InterruptExample extends IOApp.Simple {
   val run =
     startServer.bracket(
       _ => IO.never)(
       server => IO.println("shutting down") *> server.close)
 }

If we assume the startServer function has type IO[Server] (or similar), this kind of pattern is very common. When this process receives a SIGINT, it will immediately print "shutting down" and run the server.close effect.
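
The same shutdown behavior can be expressed with IO.onCancel (also mentioned above) rather than bracket. A hedged sketch, with Server and startServer stubbed out since the text leaves them abstract:

 import cats.effect.{IO, IOApp}

 object OnCancelExample extends IOApp.Simple {
   // Stand-ins for the hypothetical startServer: IO[Server] described above.
   final case class Server(close: IO[Unit])
   val startServer: IO[Server] = IO.pure(Server(IO.println("server closed")))

   val run =
     startServer.flatMap { server =>
       IO.never.onCancel(IO.println("shutting down") *> server.close)
     }
 }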

One consequence of this design is it is possible to build applications which will ignore process interrupts. For example, if server.close runs forever, the process will ignore interrupts and will need to be cleaned up using SIGKILL (i.e. kill -9). This same phenomenon can be demonstrated by using IO.uncancelable to suppress all interruption within the application itself:

 object Zombie extends IOApp.Simple {
   val run = IO.never.uncancelable
 }

The above process will run forever and ignore all interrupts. The only way it will shut down is if it receives SIGKILL.

It is possible (though not necessary) to override various platform-specific runtime configuration options, such as computeWorkerThreadCount (which only exists on the JVM). Please note that the default configurations have been extensively benchmarked and are optimal (or close to it) in most conventional scenarios.

However, with that said, there really is no substitute for benchmarking your own application. Every application and scenario is unique, and you will always get the absolute best results by performing your own tuning rather than trusting someone else's defaults. IOApp's defaults are very good, but they are not perfect in all cases. One common example of this is applications which maintain network or file I/O worker threads which are under heavy load in steady-state operations. In such a performance profile, it is usually better to reduce the number of compute worker threads to "make room" for the I/O workers, such that they all sum to the number of physical threads exposed by the kernel.
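
As a hedged sketch of that kind of tuning on the JVM (the object name and the sizing are illustrative, not a recommendation), the compute pool can be shrunk by overriding computeWorkerThreadCount to leave headroom for separately managed I/O threads:

 import cats.effect.{IO, IOApp}

 object TunedApp extends IOApp.Simple {
   // JVM-only override: reserve a couple of physical threads for external
   // I/O worker pools by shrinking the compute pool (illustrative sizing).
   override def computeWorkerThreadCount: Int =
     math.max(2, Runtime.getRuntime().availableProcessors() - 2)

   val run = IO.println("running with a reduced compute pool")
 }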

Companion:
object
object IOApp
Companion:
class
sealed trait IOLocal[A]
Companion:
object
object IOLocal
Companion:
class
trait LiftIO[F[_]]
Companion:
object
object LiftIO
Companion:
class

trait ResourceApp

A convenience trait for defining applications which are entirely within Resource. This is implemented as a relatively straightforward wrapper around IOApp and thus inherits most of its functionality and semantics.

This trait should generally be used for any application which would otherwise trivially end with cats.effect.kernel.Resource#use (or one of its variants). For example:

 object HttpExample extends IOApp {
   def run(args: List[String]) = {
     val program = for {
       config <- Resource.eval(loadConfig(args.head))
       postgres <- Postgres[IO](config.jdbcUri)
       endpoints <- ExampleEndpoints[IO](config, postgres)
       _ <- HttpServer[IO](config.host, config.port, endpoints)
     } yield ()

     program.useForever.as(ExitCode.Success)
   }
 }

This example assumes some underlying libraries like Skunk and Http4s, but otherwise it represents a relatively typical example of what the main class for a realistic Cats Effect application might look like. Notably, the whole thing is enclosed in Resource, which is used at the very end. This kind of pattern is so common that ResourceApp defines a special trait which represents it. We can rewrite the above example:

 object HttpExample extends ResourceApp.Forever {
   def run(args: List[String]) =
     for {
       config <- Resource.eval(loadConfig(args.head))
       db <- Postgres[IO](config.jdbcUri)
       endpoints <- ExampleEndpoints[IO](config, db)
       _ <- HttpServer[IO](config.host, config.port, endpoints)
     } yield ()
 }

These two programs are equivalent.

Companion:
object
object ResourceApp
Companion:
class
sealed abstract class SyncIO[+A]

A pure abstraction representing the intention to perform a side effect, where the result of that side effect is obtained synchronously.

SyncIO is similar to IO, but does not support asynchronous computations. Consequently, a SyncIO can be run synchronously on any platform to obtain a result via unsafeRunSync. This is unlike IO#unsafeRunSync, which cannot be safely called in general -- doing so on the JVM blocks the calling thread while the async part of the computation is run and doing so on Scala.js is not supported.
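
A brief sketch of this difference in practice (the printed message and values are illustrative):

import cats.effect.SyncIO

// SyncIO describes only synchronous work, so its result can be obtained
// directly on any platform via unsafeRunSync().
val program: SyncIO[Int] =
  SyncIO(println("computing...")).flatMap(_ => SyncIO.pure(42))

val result: Int = program.unsafeRunSync() // prints "computing...", then returns 42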

Companion:
object
object SyncIO
Companion:
class
final class Trace
object implicits extends AllSyntax with AllInstances

Types

type Async[F[_]] = Async[F]
type Clock[F[_]] = Clock[F]
type Concurrent[F[_]] = Concurrent[F]
type Cont[F[_], K, R] = Cont[F, K, R]
type Deferred[F[_], A] = Deferred[F, A]
type Fiber[F[_], E, A] = Fiber[F, E, A]
type FiberIO[A] = Fiber[IO, Throwable, A]
type GenConcurrent[F[_], E] = GenConcurrent[F, E]
type GenSpawn[F[_], E] = GenSpawn[F, E]
type GenTemporal[F[_], E] = GenTemporal[F, E]
type MonadCancel[F[_], E] = MonadCancel[F, E]
type Outcome[F[_], E, A] = Outcome[F, E, A]
type OutcomeIO[A] = Outcome[IO, Throwable, A]
type ParallelF[F[_], A] = T[F, A]
type Poll[F[_]] = Poll[F]
type Ref[F[_], A] = Ref[F, A]
type Resource[F[_], +A] = Resource[F, A]
type ResourceIO[A] = Resource[IO, A]
type Spawn[F[_]] = Spawn[F]
type Sync[F[_]] = Sync[F]
type Temporal[F[_]] = Temporal[F]
type Unique[F[_]] = Unique[F]

Value members

Concrete fields

val Async: Async.type
val Clock: Clock.type
val Deferred: Deferred.type
val GenSpawn: GenSpawn.type
val Outcome: Outcome.type
val Ref: Ref.type
val Resource: Resource.type
val Spawn: GenSpawn.type
val Sync: Sync.type
val Unique: Unique.type