Streaming Parser Combinators
A `Parser[T]` is an object that consumes a stream of bytes to produce a result of type `T`.
A combinator is a "higher-order" parser that takes one or more parsers and produces a new parser.
The stream parsers are very fast and efficient, but to achieve this they make some tradeoffs: they are mutable and not thread-safe. In general they are designed for network protocols, which tend to have very deterministic grammars.
The Parser Rules:

1. A parser must greedily consume the data stream until it produces a result.
2. When a parser consumes the last byte necessary to produce a result, it must stop consuming the stream and return the new result while resetting its state.
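The two rules can be modeled with a minimal sketch. The real parsers are Scala; this Python stand-in (the `BytesParser` name is hypothetical, not the library's API) shows how a fixed-size byte parser greedily consumes, returns a result exactly when the last needed byte arrives, and resets its state:

```python
class BytesParser:
    """Model of a bytes(n) parser obeying the two parser rules."""

    def __init__(self, size):
        self.size = size
        self.buffer = bytearray()  # mutable internal state, like the Scala parsers

    def parse(self, data: bytearray):
        # Rule 1: greedily consume from the stream until a result is produced.
        needed = self.size - len(self.buffer)
        taken = data[:needed]
        self.buffer.extend(taken)
        del data[:len(taken)]  # consume only what we took; leave the rest
        if len(self.buffer) == self.size:
            # Rule 2: stop at the last necessary byte, return the result,
            # and reset state so the parser is ready for the next object.
            result = bytes(self.buffer)
            self.buffer = bytearray()
            return result
        return None  # not enough data yet; state is kept for the next call


data = bytearray(b"aaaabbbbccc")
p = BytesParser(4)
print(p.parse(data))  # b'aaaa'
print(p.parse(data))  # b'bbbb'
print(p.parse(data))  # None -- only 3 bytes left, held until more data arrives
```

Note how the third call returns `None` but keeps the three leftover bytes buffered: a later call with more data completes the result without re-reading anything.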
Use any parser by itself:

```scala
val parser = bytes(4)
val data = DataBuffer(ByteString("aaaabbbbccc"))
parser.parse(data) // Some(ByteString(97, 97, 97, 97))
parser.parse(data) >> {bytes => bytes.utf8String} // Some("bbbb")
parser.parse(data) // None
```
Combine two parsers:

```scala
val parser = bytes(3) ~ bytes(2) >> {case a ~ b => a.utf8String + ":" + b.utf8String}
parser.parse(DataBuffer(ByteString("abc"))) // None
parser.parse(DataBuffer(ByteString("defgh"))) // Some("abc:de")
```
A ParserSizeTracker can wrap a stream parser to ensure that the object being parsed doesn't exceed a certain size.
The size tracker is not exact: it simply looks at how many bytes are read off the DataBuffer each time the track method is called. Since in most cases DataBuffers are fairly small (currently 128KB for buffers coming out of the event loop), and since the primary purpose of the tracker is to prevent OOM exceptions caused by very large requests, the lack of precision isn't a huge issue.
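The byte-counting idea can be sketched as follows (Python for illustration; `SizeTracker` and `ParseTooLarge` are hypothetical names, not the library's API). The imprecision described above is visible here: the limit is only checked after a whole buffer has been consumed, so the tracker can overshoot by up to one buffer's worth of bytes before failing:

```python
class ParseTooLarge(Exception):
    """Raised when the object being parsed exceeds the configured cap."""


class SizeTracker:
    """Counts bytes consumed across track() calls and enforces a cap."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.consumed = 0

    def track(self, data: bytearray, parse_fn):
        before = len(data)
        result = parse_fn(data)  # the wrapped parser consumes from `data` in place
        self.consumed += before - len(data)
        # The check happens only here, after the parser has already consumed
        # this buffer -- hence the tracker is approximate, not exact.
        if self.consumed > self.max_bytes:
            raise ParseTooLarge(f"object exceeded {self.max_bytes} bytes")
        if result is not None:
            self.consumed = 0  # a complete object was parsed; reset the count
        return result


tracker = SizeTracker(max_bytes=5)

def consume_all(data):
    """Stand-in for a parser mid-object: eats everything, never finishes."""
    del data[:]
    return None

tracker.track(bytearray(b"abc"), consume_all)   # 3 bytes so far, under the cap
# tracker.track(bytearray(b"abcd"), consume_all) would raise ParseTooLarge (7 > 5)
```

Because the count resets only when the wrapped parser produces a result, the cap applies per parsed object rather than to the stream as a whole.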