The primary entry point to a Cats Effect application. Extend this trait rather than defining your own main method; this avoids the need to run IO.unsafeRunAsync (or similar) on your own. IOApp takes care of the messy details of properly setting up (and tearing down) the unsafe.IORuntime needed to run the IO which represents your application. All of the associated thread pools (if relevant) will be configured with the assumption that your application is fully contained within the IO produced by the run method. Note that the exact details of how the runtime will be configured are very platform-specific. Part of the point of IOApp is to insulate users from the details of the underlying runtime (whether JVM or JavaScript).
```scala
object MyApplication extends IOApp {
  def run(args: List[String]) =
    for {
      _ <- IO.print("Enter your name: ")
      name <- IO.readln
      _ <- IO.println("Hello, " + name)
    } yield ExitCode.Success
}
```
In the above example, MyApplication will be a runnable class with a main method, visible to sbt, IntelliJ, or plain-old java. When run externally, it will print, read, and print in the obvious way, producing a final process exit code of 0. Any exceptions thrown within the IO will be printed to standard error and the exit code will be set to 1. In the event that the main fiber (represented by the IO returned by run) is canceled, the runtime will produce an exit code of 1.
Note that exit codes are an implementation-specific feature of the underlying runtime, as are process arguments. Naturally, all JVMs support these functions, as do Node.js and Scala Native, but some JavaScript execution environments will be unable to replicate these features (or they simply may not make sense). In such cases, exit codes may be ignored and/or argument lists may be empty.
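Both facilities can be combined in one application. The sketch below (EchoArgs is a hypothetical name, not part of the library) inspects the argument list and selects a custom exit code, which the runtime will forward to the process where the platform supports it:

```scala
import cats.effect.{ExitCode, IO, IOApp}
import cats.syntax.all._

// Hypothetical example: echo each argument, or fail with a non-zero
// exit code when none are supplied. On platforms without argument
// support, `args` may simply be empty.
object EchoArgs extends IOApp {
  def run(args: List[String]): IO[ExitCode] =
    if (args.isEmpty)
      IO.println("usage: EchoArgs <words...>").as(ExitCode.Error)
    else
      args.traverse_(IO.println).as(ExitCode.Success)
}
```

ExitCode.Success and ExitCode.Error correspond to process exit codes 0 and 1; arbitrary codes can be produced with ExitCode(n).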
Note that in the case of the above example, we would actually be better off using IOApp.Simple rather than IOApp directly, since we are neither using args nor explicitly producing a custom ExitCode:
```scala
object MyApplication extends IOApp.Simple {
  val run =
    for {
      _ <- IO.print("Enter your name: ")
      name <- IO.readln
      _ <- IO.println("Hello, " + name)
    } yield ()
}
```
It is valid to define val run rather than def run because IO's evaluation is lazy: it will only run when the main method is invoked by the runtime.
In the event that the process receives an interrupt signal (SIGINT) due to Ctrl-C (or any other mechanism), it will immediately cancel the main fiber. Assuming this fiber is not within an uncancelable region, this will result in interrupting any current activities and immediately invoking any finalizers (see: IO.onCancel and IO.bracket). The process will not shut down until the finalizers have completed. For example:
```scala
object InterruptExample extends IOApp.Simple {
  val run =
    startServer.bracket(
      _ => IO.never)(
      server => IO.println("shutting down") *> server.close)
}
```
If we assume the startServer function has type IO[Server] (or similar), this kind of pattern is very common. When this process receives a SIGINT, it will immediately print "shutting down" and run the server.close effect.
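The same acquire/release pattern is often expressed with Resource, which separates resource lifecycle from use. A sketch, with a stub Server and startServer standing in for the hypothetical ones above:

```scala
import cats.effect.{IO, IOApp, Resource}

object InterruptExampleResource extends IOApp.Simple {
  // Stubs for illustration; a real application would acquire an
  // actual network server here.
  trait Server { def close: IO[Unit] }
  def startServer: IO[Server] =
    IO(new Server { def close = IO.unit })

  val run =
    Resource
      .make(startServer)(server => IO.println("shutting down") *> server.close)
      .useForever // equivalent to .use(_ => IO.never)
}
```

On SIGINT, cancellation of the main fiber causes Resource to run its release action before the process exits, exactly as in the bracket version.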
One consequence of this design is that it is possible to build applications which will ignore process interrupts. For example, if server.close runs forever, the process will ignore interrupts and will need to be cleaned up using SIGKILL (i.e. kill -9). This same phenomenon can be demonstrated by using IO.uncancelable to suppress all interruption within the application itself:
```scala
object Zombie extends IOApp.Simple {
  val run = IO.never.uncancelable
}
```
The above process will run forever and ignore all interrupts. The only way it will shut down is if it receives SIGKILL.
It is possible (though not necessary) to override various platform-specific runtime configuration options, such as computeWorkerThreadCount (which only exists on the JVM). Please note that the default configurations have been extensively benchmarked and are optimal (or close to it) in most conventional scenarios.

However, with that said, there really is no substitute for benchmarking your own application. Every application and scenario is unique, and you will always get the absolute best results by performing your own tuning rather than trusting someone else's defaults. IOApp's defaults are very good, but they are not perfect in all cases. One common example of this is applications which maintain network or file I/O worker threads which are under heavy load in steady-state operations. In such a performance profile, it is usually better to reduce the number of compute worker threads to "make room" for the I/O workers, such that they all sum to the number of physical threads exposed by the kernel.
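An override of that kind might look like the sketch below. The subtraction of 4 is purely illustrative (standing in for four dedicated I/O worker threads), not a recommendation; any such number should come from benchmarking your own workload:

```scala
import cats.effect.{IO, IOApp}

object TunedApp extends IOApp.Simple {
  // JVM-only tuning knob: shrink the compute pool to "make room"
  // for I/O worker threads. The `- 4` is a hypothetical figure.
  override protected def computeWorkerThreadCount: Int =
    math.max(2, Runtime.getRuntime().availableProcessors() - 4)

  val run = IO.println("running with a smaller compute pool")
}
```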
Attributes
- Companion: object IOApp
- Source: IOApp.scala
- Supertypes: class Object, trait Matchable, class Any
- Known subtypes: trait Simple