Automated back-pressure handling is based on the idea that pressure
is created by the consumer but experienced at the producer side. For
HTTP this means that an excessive number of incoming requests is the
ultimate cause of a bottleneck experienced on the response-sending side.
The principle of applying back-pressure is that pressure is best handled
at its root cause, i.e. by throttling the rate at which work requests are
coming in. That's the underlying assumption here: work is generated on
the incoming network side. If that's not true, e.g. when the network
stream is truly bi-directional (as with websockets), the strategy
presented here won't be optimal.
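The effect of throttling at the root cause can be illustrated with a toy, dependency-free simulation. All names and numbers below are made up for illustration (real code would suspend and resume reading on the TCP connection instead of flipping a flag): with intake throttled, the response buffer stays bounded; without it, the buffer grows without limit.

```scala
object ThrottleSketch {
  // Toy model: 2 requests arrive and 1 response is written per step.
  // With `throttle` enabled, intake pauses while more than 8 responses
  // are buffered (think SuspendReading); the threshold 8 is arbitrary.
  def simulate(throttle: Boolean, steps: Int = 1000): Int = {
    var buffered = 0     // responses waiting to be written
    var maxBuffered = 0
    for (_ <- 0 until steps) {
      val accepting = !throttle || buffered < 8
      if (accepting) buffered += 2             // incoming requests
      buffered = math.max(0, buffered - 1)     // one response written
      maxBuffered = math.max(maxBuffered, buffered)
    }
    maxBuffered
  }
}
```

With throttling the peak buffer size stays near the threshold; without it the backlog grows by one entry per step.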
How it works:

No pressure:
- forward all incoming data
- send out n responses with NoAcks
- send one response with an Ack
- once that Ack was received, we know that all the former unacknowledged
  writes were successful as well and don't need any further handling

Pressure:
- a Write fails; we now know that all former writes were successful and
  that all later ones, including the failed one, were discarded (but we'll
  still receive CommandFailed messages for them as well)
- the incoming side is told to SuspendReading
- we send ResumeWriting, which is queued behind all the Writes that will
  be discarded as well
- once we receive WritingResumed, we go back to the no-pressure mode and
  retry all of the buffered writes
- we schedule a final write probe which will trigger ResumeReading when
  no lowWatermark is defined
- once we receive the Ack for that probe, or the buffer size falls below
  the lowWatermark after an acknowledged Write, we ResumeReading
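The bookkeeping behind these two modes can be sketched without any Akka dependency. The class and member names below are illustrative only; the real handler exchanges akka.io.Tcp messages (Write with NoAck or Ack(offset), CommandFailed, SuspendReading, ResumeWriting, WritingResumed, ResumeReading) rather than method calls:

```scala
// Illustrative sketch only: names are made up, and the real handler
// exchanges akka.io.Tcp messages rather than method calls.
final class BackPressureSketch {
  final case class Pending(offset: Int, payload: String)

  var pending: Vector[Pending] = Vector.empty // writes not yet known to have succeeded
  var reading: Boolean = true                 // false after SuspendReading
  var suspendedWriting: Boolean = false       // true between a failed Write and WritingResumed

  // No-pressure mode: buffer the write. In the real protocol every n-th
  // write requests an Ack(offset); the others go out with NoAck.
  def write(offset: Int, payload: String): Unit =
    pending :+= Pending(offset, payload)

  // Ack(offset) is cumulative: it confirms every buffered write up to and
  // including `offset`, so those need no further handling.
  def onAck(offset: Int): Unit = {
    pending = pending.dropWhile(_.offset <= offset)
    // Resume reading once the final probe is acked (or, with a
    // lowWatermark, once the buffer has drained below it).
    if (!reading && pending.isEmpty) reading = true
  }

  // Pressure mode: a failed Write at `offset` means all earlier writes
  // succeeded and everything from `offset` on was discarded.
  def onCommandFailed(offset: Int): Unit = {
    pending = pending.dropWhile(_.offset < offset) // earlier writes succeeded
    reading = false                                // SuspendReading
    suspendedWriting = true                        // send ResumeWriting
  }

  // WritingResumed: back to no-pressure mode; the returned writes are
  // resent, followed by a final write probe that requests an Ack.
  def onWritingResumed(): Vector[Pending] = {
    suspendedWriting = false
    pending
  }
}
```

Note how a single cumulative Ack(offset) retires a whole batch of NoAck writes, and how a single CommandFailed is enough to identify the exact tail of writes that must be resent.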
Possible improvement (see http://doc.akka.io/docs/akka/2.2.0-RC1/scala/io-tcp.html):
- go into Ack-based mode for a while after WritingResumed
Linear Supertypes
AnyRef, Any
Type Members
case class Ack(offset: Int) extends akka.io.Tcp.Event with Product with Serializable