Package akka.http.impl.engine.http2

package http2

Type Members

  1. final case class ContinuationFrame(streamId: Int, endHeaders: Boolean, payload: ByteString) extends StreamFrameEvent with Product with Serializable

  2. final case class DataFrame(streamId: Int, endStream: Boolean, payload: ByteString) extends StreamFrameEvent with Product with Serializable

  3. sealed trait FrameEvent extends AnyRef

  4. final case class GoAwayFrame(lastStreamId: Int, errorCode: ErrorCode, debug: ByteString = ByteString.empty) extends FrameEvent with Product with Serializable

  5. final case class HeadersFrame(streamId: Int, endStream: Boolean, endHeaders: Boolean, headerBlockFragment: ByteString, priorityInfo: Option[PriorityFrame]) extends StreamFrameEvent with Product with Serializable

  6. class Http2ServerDemux extends GraphStage[BidiShape[Http2SubStream, FrameEvent, FrameEvent, Http2SubStream]]

    This stage contains all control logic for handling frames and (de)muxing data to/from substreams.

    (This is not final documentation; it is more of a brain-dump of how it could work.)

    The BidiStage consumes and produces FrameEvents from/to the network. It outputs one Http2SubStream per substream for incoming frames and likewise accepts a single Http2SubStream per substream for outgoing frames.

    (An alternative API would just push a BidiHttp2SubStream(subStreamFlow: Flow[StreamFrameEvent, StreamFrameEvent]), similarly to IncomingConnection. This would more accurately model the one-to-one relation between the incoming and outgoing directions of an Http2SubStream but wouldn't stack so nicely with other BidiFlows.)

    Backpressure logic:

    * read all incoming frames without applying backpressure
      * this ensures that all "control" frames are read in a timely manner
      * though, make sure limits are not exceeded:
        * the max concurrent stream limit (which limits the number of parallel requests)
        * window sizes for incoming data frames
      * that means we need to buffer incoming substream data until the user handler (consuming the source in the Http2SubStream) reads it
      * per-connection and per-stream window updates should reflect how much data has (not) yet been passed into the user handler; they are therefore the main backpressure mechanism towards the peer
    * for the outgoing frame side, we need to decide which frames to send per incoming demand:
      * control frames (settings, ping, acks, window updates, etc.) -> responses to incoming frames
      * substream frames -> frame data incoming from substreams
      * to be able to make a decision, some data must already be buffered for those two sources of frames
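    The per-stream window bookkeeping described above might be sketched as follows. This is an illustrative stand-in, not the actual implementation: the type `IncomingWindow`, its fields, and the half-window threshold are made up for the example.

    ```scala
    // Sketch: track how many consumed bytes have not yet been advertised back
    // to the peer, and decide when a WINDOW_UPDATE increment should be sent.
    final case class IncomingWindow(windowSize: Int, unadvertised: Int = 0) {
      // The user handler consumed `bytes` of buffered substream data. Once
      // enough has accumulated (here: half the window, an arbitrary choice),
      // return the increment to advertise in a WindowUpdateFrame.
      def consumed(bytes: Int): (Option[Int], IncomingWindow) = {
        val pending = unadvertised + bytes
        if (pending >= windowSize / 2) (Some(pending), copy(unadvertised = 0))
        else (None, copy(unadvertised = pending))
      }
    }
    ```

    Batching updates this way avoids sending a WINDOW_UPDATE frame for every small chunk the user handler reads.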

    Demultiplexing:

    * distribute incoming frames to their respective targets:
      * control frames: handled internally; may generate outgoing control frames directly
      * incoming HEADERS frames: create a new Http2SubStream, including a SubSource that will receive all upcoming data frames
      * incoming data frames: buffered and pushed to the SubSource of the respective substream
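    The routing decision above could be sketched as a pattern match. The frame types here are simplified stand-ins for the ones in this package (the real ones carry more fields), and the returned strings merely describe the action taken:

    ```scala
    // Simplified stand-ins for the package's frame hierarchy.
    sealed trait FrameEvent
    final case class SettingsFrame(settings: Seq[(String, Int)]) extends FrameEvent
    final case class HeadersFrame(streamId: Int, endStream: Boolean) extends FrameEvent
    final case class DataFrame(streamId: Int, payload: String) extends FrameEvent

    // Demultiplexing: send each incoming frame to its respective target.
    def route(frame: FrameEvent): String = frame match {
      case _: SettingsFrame    => "control: handle internally, reply with SettingsAckFrame"
      case HeadersFrame(id, _) => s"open substream $id: materialize a SubSource for its data frames"
      case DataFrame(id, _)    => s"substream $id: buffer, then push to its SubSource"
    }
    ```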

    Multiplexing:

    * schedule frames from multiple sources to be pushed onto the shared medium:
      * control frames generated by the stage itself (should probably be preferred over everything else)
      * the initial frame of each Http2SubStream produced by the user handler: read and push ASAP
      * outgoing data frames for each of the substreams: these comprise the bulk of the data and are where any clever, prioritizing, etc. (i.e. tbd later) sending strategies will apply
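    A minimal sketch of that send-side priority, assuming two hypothetical queues (the names and the plain-FIFO data strategy are invented for illustration): control frames always win, and only when none are pending is a substream data frame chosen.

    ```scala
    // Hypothetical outgoing-frame categories for the scheduling sketch.
    sealed trait Outgoing
    case object ControlFrame extends Outgoing
    final case class SubStreamData(streamId: Int) extends Outgoing

    // Pick the next frame to push: control frames are preferred over
    // everything else; data frames are taken FIFO as a placeholder for a
    // smarter (prioritizing, window-aware) strategy.
    def nextToSend(controlQueue: List[Outgoing], dataQueue: List[Outgoing]): Option[Outgoing] =
      controlQueue.headOption.orElse(dataQueue.headOption)
    ```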

    In the best case we could just flattenMerge the outgoing side (hoping for the best), but this will probably not work, because the sending decision relies on dynamic window-size and settings information that is only available inside this stage.

  7. final case class ParsedHeadersFrame(streamId: Int, endStream: Boolean, keyValuePairs: Seq[(String, String)], priorityInfo: Option[PriorityFrame]) extends StreamFrameEvent with Product with Serializable

  8. case class PingFrame(ack: Boolean, data: ByteString) extends FrameEvent with Product with Serializable

  9. final case class PriorityFrame(streamId: Int, exclusiveFlag: Boolean, streamDependency: Int, weight: Int) extends StreamFrameEvent with Product with Serializable

  10. case class PushPromiseFrame(streamId: Int, endHeaders: Boolean, promisedStreamId: Int, headerBlockFragment: ByteString) extends StreamFrameEvent with Product with Serializable

  11. final case class RstStreamFrame(streamId: Int, errorCode: ErrorCode) extends StreamFrameEvent with Product with Serializable

  12. final case class Setting(identifier: SettingIdentifier, value: Int) extends Product with Serializable

  13. final case class SettingsAckFrame(acked: Seq[Setting]) extends FrameEvent with Product with Serializable

  14. final case class SettingsFrame(settings: Seq[Setting]) extends FrameEvent with Product with Serializable

  15. sealed trait StreamFrameEvent extends FrameEvent

  16. final case class UnknownFrameEvent(tpe: FrameType, flags: ByteFlag, streamId: Int, payload: ByteString) extends StreamFrameEvent with Product with Serializable

    Dummy event for all unknown frames

  17. final case class WindowUpdateFrame(streamId: Int, windowSizeIncrement: Int) extends StreamFrameEvent with Product with Serializable

Value Members

  1. object AlpnSwitch

  2. object Http2Blueprint

  3. object Setting extends Serializable

  4. package hpack
