Automated back-pressure handling is based on the idea that pressure is created by the consumer but experienced at the producer side. For HTTP this means that an excessive number of incoming requests is the ultimate cause of a bottleneck experienced on the response-sending side.
The principle of applying back-pressure means that pressure is best handled at its root cause, i.e. by throttling the rate at which work requests come in. That is the underlying assumption here: work is generated on the incoming network side. If that is not true, e.g. when the network stream is truly bi-directional (as with WebSockets), the strategy presented here won't be optimal.
How it works:
No pressure:
Pressure:
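The two cases above (no pressure / pressure) can be sketched roughly as follows, assuming Akka's classic akka.io.Tcp API; the ThrottlingHandler and WriteAck names are illustrative and not part of the actual implementation. Reading is suspended while a response write is in flight and resumed once the write has been acknowledged, so the incoming request rate is throttled at its source:

    import akka.actor.{ Actor, ActorRef }
    import akka.io.Tcp
    import akka.util.ByteString

    // Illustrative only: reading is suspended while a response write is in
    // flight ("pressure") and resumed once the write has been acknowledged
    // ("no pressure"), throttling the incoming request rate at its source.
    class ThrottlingHandler(connection: ActorRef) extends Actor {
      import Tcp._

      case object WriteAck extends Event // made-up ack token

      def receive: Receive = idle

      def idle: Receive = {
        case Received(data) =>
          connection ! SuspendReading // pressure: stop taking in new work
          connection ! Write(process(data), WriteAck)
          context.become(waitingForAck)
        case _: ConnectionClosed => context.stop(self)
      }

      def waitingForAck: Receive = {
        case WriteAck =>
          connection ! ResumeReading // pressure gone: accept more work
          context.become(idle)
        case _: ConnectionClosed => context.stop(self)
      }

      // stand-in for the actual request processing
      def process(request: ByteString): ByteString = request
    }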
Possible improvement: (see http://doc.akka.io/docs/akka/2.2.0-RC1/scala/io-tcp.html)
A pipeline stage that will abort a connection after an idle timeout has elapsed. The idle timer is not exact; it will abort the connection at the earliest when the timeout has passed after these events:
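As an illustrative sketch only (not the actual pipeline stage, whose exact timer-reset events are not reproduced here), an equivalent idle-timeout abort can be expressed with Akka's ReceiveTimeout; the IdleTimeoutHandler name is an assumption:

    import scala.concurrent.duration._
    import akka.actor.{ Actor, ActorRef, ReceiveTimeout }
    import akka.io.Tcp

    // Illustrative only: abort the connection when nothing has been received
    // for `idleTimeout`. The real stage resets its timer on more events than
    // just incoming data.
    class IdleTimeoutHandler(connection: ActorRef, idleTimeout: FiniteDuration) extends Actor {
      context.setReceiveTimeout(idleTimeout)

      def receive: Receive = {
        case _: Tcp.Received => // any message resets the ReceiveTimeout
        case ReceiveTimeout =>
          connection ! Tcp.Abort // idle for too long: abort the connection
          context.stop(self)
        case _: Tcp.ConnectionClosed => context.stop(self)
      }
    }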
A pipeline stage that prevents half-closed connections by actively closing this side of the connection when a Tcp.PeerClosed event is received.
It is only activated when SslTlsSupport is disabled because SslTlsSupport has the same closing semantics as this stage.
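A rough sketch of the same closing semantics at the handler level, assuming Akka's classic akka.io.Tcp API and a transport registered with keepOpenOnPeerClosed = true; the PreventHalfClosed name is illustrative:

    import akka.actor.{ Actor, ActorRef }
    import akka.io.Tcp

    // Illustrative only: when the peer closes its writing side we actively
    // close ours instead of leaving the connection half-open (assumes the
    // transport was registered with keepOpenOnPeerClosed = true).
    class PreventHalfClosed(connection: ActorRef) extends Actor {
      def receive: Receive = {
        case Tcp.PeerClosed          => connection ! Tcp.Close // close our side as well
        case _: Tcp.ConnectionClosed => context.stop(self)
      }
    }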
A ByteBuffer pool reduces the number of ByteBuffer allocations in the SslTlsSupport. SslTlsSupport requires a buffer pool because the current SSLEngine implementation always requires a 17KiB buffer for every 'wrap' and 'unwrap' operation. In most cases the actual size of the required buffer is much smaller, so allocating a fresh 17KiB buffer for every 'wrap' and 'unwrap' operation wastes a lot of memory bandwidth and degrades application performance.
This implementation is very loosely based on the one from Netty.
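The following is a minimal sketch of the idea, not the actual implementation described above: buffers large enough for a full SSL record are kept in a free list and reused across 'wrap'/'unwrap' operations instead of being freshly allocated each time. The SimpleBufferPool name and sizing are assumptions:

    import java.nio.ByteBuffer
    import java.util.concurrent.ConcurrentLinkedQueue

    // Illustrative only: buffers large enough for a full SSL record (~17 KiB)
    // are kept in a free list and reused instead of being allocated anew for
    // every 'wrap'/'unwrap' operation.
    object SimpleBufferPool {
      private val BufferSize = 17 * 1024
      private val pool = new ConcurrentLinkedQueue[ByteBuffer]()

      def acquire(): ByteBuffer = {
        val buffer = pool.poll()
        if (buffer ne null) { buffer.clear(); buffer }
        else ByteBuffer.allocate(BufferSize)
      }

      def release(buffer: ByteBuffer): Unit = pool.offer(buffer)
    }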
A pipeline stage that provides SSL support.

One thing to keep in mind is that there is no support for half-closed connections in SSL (SSL itself, on the other hand, requires half-closed connections from its transport layer). This means:

1. keepOpenOnPeerClosed is not supported on top of SSL (once you receive PeerClosed the connection is closed, further CloseCommands are ignored)
2. keepOpenOnPeerClosed should always be enabled on the transport layer beneath SSL so that one can wait for the other side's SSL-level close_notify message without sending an RST to the peer because this socket is already gone
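As a small illustration of point 2, assuming Akka's classic akka.io.Tcp API (the SslTransportRegistration name is made up), the transport beneath SSL is registered with keepOpenOnPeerClosed = true so that the peer's SSL-level close_notify can still be received after it has closed its writing side:

    import akka.actor.{ Actor, ActorRef }
    import akka.io.Tcp

    // Illustrative only: register the connection that carries the SSL traffic
    // with keepOpenOnPeerClosed = true, so the peer's SSL-level close_notify
    // can still arrive after the peer has closed its writing side.
    class SslTransportRegistration(handler: ActorRef) extends Actor {
      def receive: Receive = {
        case Tcp.Connected(_, _) =>
          sender() ! Tcp.Register(handler, keepOpenOnPeerClosed = true)
      }
    }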