Package spray.io

Linear Supertypes
AnyRef, Any

Type Members

  1. trait ClientSSLEngineProvider extends (PipelineContext) ⇒ Option[SSLEngine]
  2. type Command = akka.io.Tcp.Command
  3. case class CommandWrapper(command: AnyRef) extends Command with Product with Serializable
  4. trait ConnectionHandler extends Actor with ActorLogging
  5. trait Droppable extends AnyRef
  6. trait DynamicCommandPipeline extends AnyRef
  7. trait DynamicEventPipeline extends AnyRef
  8. trait DynamicPipelines extends Pipelines
  9. type Event = akka.io.Tcp.Event
  10. trait OptionalPipelineStage[-C <: PipelineContext] extends RawPipelineStage[C]
  11. type Pipeline[-T] = (T) ⇒ Unit
  12. trait PipelineContext extends AnyRef
  13. type PipelineStage = RawPipelineStage[PipelineContext]
  14. trait Pipelines extends AnyRef
  15. trait RawPipelineStage[-C <: PipelineContext] extends AnyRef
  16. trait SSLContextProvider extends (PipelineContext) ⇒ Option[SSLContext]
  17. trait ServerSSLEngineProvider extends (PipelineContext) ⇒ Option[SSLEngine]
  18. class SimpleConnectionHandler extends ConnectionHandler
  19. trait SslTlsContext extends PipelineContext
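The Pipeline and RawPipelineStage members above boil down to one idea: each side of a pipeline is just a function (T) ⇒ Unit, so stages compose by wrapping the downstream pipeline. A minimal sketch in plain Scala (the Stage alias and the two example stages are illustrative simplifications, not spray's actual API):

```scala
object PipelineDemo extends App {
  // spray's alias: a pipeline side is just a message consumer
  type Pipeline[-T] = T => Unit
  // Illustrative simplification of a pipeline stage: it wraps the downstream
  // pipeline and returns a new one (spray's RawPipelineStage additionally
  // distinguishes command and event pipelines and takes a PipelineContext).
  type Stage[T] = Pipeline[T] => Pipeline[T]

  val received = scala.collection.mutable.Buffer.empty[String]
  val sink: Pipeline[String] = msg => received += msg

  val upperCase: Stage[String] = next => msg => next(msg.toUpperCase)
  val dropEmpty: Stage[String] = next => msg => if (msg.nonEmpty) next(msg)

  // Messages flow through dropEmpty, then upperCase, then into the sink.
  val pipeline: Pipeline[String] = dropEmpty(upperCase(sink))
  List("hello", "", "world").foreach(pipeline)
  println(received.mkString(","))  // HELLO,WORLD
}
```

Because stages are just function transformers, composing them is ordinary function application, which is what makes the contravariant `Pipeline[-T]` alias convenient.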

Value Members

  1. object BackPressureHandling

    Automated back-pressure handling is based on the idea that pressure is created by the consumer but experienced at the producer side. For HTTP this means that an excessive number of incoming requests is the ultimate cause of a bottleneck experienced on the response-sending side.

    The principle of applying back-pressure says that pressure is best handled at its root cause, which means throttling the rate at which work requests come in. That is the underlying assumption here: work is generated on the incoming network side. If that is not true, e.g. when the network stream is truly bi-directional (as with websockets), the strategy presented here won't be optimal.

    How it works:

    No pressure:

    • forward all incoming data
    • send out n responses with NoAcks
    • send one response with Ack
    • once that Ack is received we know that all earlier unacknowledged writes have succeeded as well and need no further handling

    Pressure:

    • a Write fails; we now know that all earlier writes were successful and that all later ones, including the failed one, were discarded (though we will still receive CommandFailed messages for them as well)
    • the incoming side is informed to SuspendReading
    • we send ResumeWriting, which is queued after all the Writes that will be discarded as well
    • once we receive WritingResumed we go back to the no-pressure mode and retry all of the buffered writes
    • we schedule a final write probe which will trigger ResumeReading when no lowWatermark is defined
    • once we receive the Ack for that probe, or the buffer size falls below the lowWatermark after an acknowledged Write, we ResumeReading

    Possible improvement (see http://doc.akka.io/docs/akka/2.2.0-RC1/scala/io-tcp.html):

    • go into Ack based mode for a while after WritingResumed
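The NoAck/Ack write scheme from the "no pressure" case above can be sketched in plain Scala. The `ackRate` value and the `Write` model below are illustrative assumptions, not spray's actual configuration or API:

```scala
// Pure-Scala sketch of the ack/nack write scheme: send a run of NoAck
// writes, then one carrying an Ack; receiving that Ack confirms all the
// earlier unacknowledged writes as well.
object BackPressureSketch extends App {
  case class Write(data: String, ack: Boolean)

  val ackRate = 3  // assumption: every 3rd write requests an Ack
  var unackedSinceLastAck = 0
  val out = scala.collection.mutable.Buffer.empty[Write]

  def send(data: String): Unit = {
    unackedSinceLastAck += 1
    val wantsAck = unackedSinceLastAck == ackRate
    if (wantsAck) unackedSinceLastAck = 0
    out += Write(data, wantsAck)
  }

  (1 to 7).foreach(i => send(s"response-$i"))
  println(out.map(w => if (w.ack) "Ack" else "NoAck").mkString(","))
  // NoAck,NoAck,Ack,NoAck,NoAck,Ack,NoAck
}
```

Under no pressure this keeps the acknowledgement traffic low while still providing periodic confirmation points; only on a failed Write does the stage fall back to the buffering-and-retry mode described above.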
  2. object ClientSSLEngineProvider extends SSLEngineProviderCompanion

  3. object ConnectionTimeouts

    A pipeline stage that will abort a connection after an idle timeout has elapsed. The idle timer is not exact; it will abort the connection at the earliest when the timeout has passed after the latest of these events:

    • the last Tcp.Received message was received
    • no Write was pending according to an empty test write sent after the last Write
    • a new timeout was set
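The timer behaviour described above amounts to tracking the timestamp of the most recent of these events and aborting only once the full timeout has elapsed since then. A minimal sketch with a simulated clock (illustrative names, not spray's actual stage):

```scala
// Idle-deadline bookkeeping: every qualifying event resets the deadline;
// the connection is abortable only after timeoutMillis of inactivity.
object IdleTimeoutSketch extends App {
  val timeoutMillis = 5000L
  var lastActivity = 0L  // simulated clock, starts at 0

  // Called for Tcp.Received, a drained write, or a newly set timeout.
  def onEvent(now: Long): Unit = lastActivity = now
  def shouldAbort(now: Long): Boolean = now - lastActivity >= timeoutMillis

  onEvent(1000L)               // e.g. a Tcp.Received message arrives
  println(shouldAbort(4000L))  // false: only 3000 ms idle
  onEvent(4000L)
  println(shouldAbort(9500L))  // true: 5500 ms idle since the last event
}
```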
  4. object EmptyPipelineStage extends PipelineStage

  5. object Pipeline

  6. object PipelineContext

  7. object Pipelines

  8. object PreventHalfClosedConnections

    A pipeline stage that prevents half-closed connections by actively closing this side of the connection when a Tcp.PeerClosed event was received.

    It is only activated when SslTlsSupport is disabled because SslTlsSupport has the same closing semantics as this stage.

  9. object RawPipelineStage

  10. object SSLContextProvider

  11. object ServerSSLEngineProvider extends SSLEngineProviderCompanion

  12. object SslBufferPool

    A ByteBuffer pool reduces the number of ByteBuffer allocations in the SslTlsSupport. SslTlsSupport requires a buffer pool because the current SSLEngine implementation always requires a 17 KiB buffer for every 'wrap' and 'unwrap' operation. In most cases the actual size of the required buffer is much smaller than that, so allocating a 17 KiB buffer for every 'wrap' and 'unwrap' operation wastes a lot of memory bandwidth and degrades application performance.

    This implementation is very loosely based on the one from Netty.
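A free-list pool of the kind described can be sketched in a few lines; the acquire/release names below are illustrative, not SslBufferPool's actual interface:

```scala
// Minimal free-list ByteBuffer pool: reuse returned 17 KiB buffers instead
// of allocating a fresh one for every wrap/unwrap operation.
import java.nio.ByteBuffer

object BufferPoolSketch extends App {
  val BufferSize = 17 * 1024  // the worst-case SSLEngine wrap/unwrap size
  private val free = new java.util.concurrent.ConcurrentLinkedQueue[ByteBuffer]()

  def acquire(): ByteBuffer = {
    val buf = free.poll()
    if (buf ne null) { buf.clear(); buf }  // reuse a pooled buffer
    else ByteBuffer.allocate(BufferSize)   // pool empty: allocate fresh
  }

  def release(buf: ByteBuffer): Unit = free.offer(buf)

  val a = acquire()
  release(a)
  val b = acquire()  // hands back the same instance instead of allocating
  println(a eq b)    // true
}
```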

  13. object SslTlsSupport

    A pipeline stage that provides SSL support.

    One thing to keep in mind is that there is no support for half-closed connections in SSL (but SSL on the other side requires half-closed connections from its transport layer). This means:

    1. keepOpenOnPeerClosed is not supported on top of SSL: once you receive PeerClosed the connection is closed, and further CloseCommands are ignored
    2. keepOpenOnPeerClosed should always be enabled on the transport layer beneath SSL, so that one can wait for the other side's SSL-level close_notify message without sending an RST to the peer because this socket is already gone

  14. object TickGenerator
