If the current token is a delimiter consisting of the given character, reads the next token; otherwise raises an error.
Raises an error if the current token is not a delimiter, or consists of a character different from c.
If the current token equals the given token, reads the next token; otherwise raises an error.
Parameter: the given token to compare the current token with.
Raises an error if the two tokens do not match.
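The accept behavior described above can be sketched with a minimal stand-in lexer. `MiniLexer`, its `String` tokens, and the shape of `MalformedInput` are illustrative assumptions here, not the real class:

```scala
// Minimal sketch of the accept pattern, assuming a mutable lexer
// with one-token lookahead. Names are illustrative, not the real API.
object AcceptSketch {
  case class MalformedInput(msg: String) extends Exception(msg)

  class MiniLexer(tokens: List[String]) {
    private var rest = tokens
    def token: String = rest.headOption.getOrElse("EOF")
    def nextToken(): Unit = if (rest.nonEmpty) rest = rest.tail

    // If the current token equals the given token, read the next
    // token; otherwise raise an error.
    def accept(t: String): Unit =
      if (token == t) nextToken()
      else throw MalformedInput(s"expected $t, found $token")
  }

  def main(args: Array[String]): Unit = {
    val lx = new MiniLexer(List("{", "}"))
    lx.accept("{")
    lx.accept("}")
    println(lx.token) // all input consumed, so the EOF token remains
  }
}
```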
If the last-read character equals the given character, reads the next character; otherwise raises an error.
Parameter: the given character to compare with the last-read character.
Raises an error if the character does not match.
The last-read character
Always throws a MalformedInput exception with the given error message.
Parameter: the error message.
Reads a numeric literal, and forms an IntLit or FloatLit token from it. The last-read input character ch must be either - or a digit.
Raises an error if the lexeme is not recognized as a numeric literal.
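A sketch of the classification this method performs, using the JSON number syntax given in the class description. The `IntLit`/`FloatLit` names come from the documentation above, but they are modeled here as plain case classes rather than the lexer's real token types:

```scala
// Sketch: classify a numeric lexeme as IntLit or FloatLit using the
// JSON number grammar. Token types are simplified stand-ins.
object NumberSketch {
  sealed trait Token
  case class IntLit(value: String) extends Token
  case class FloatLit(value: String) extends Token

  private val IntPat   = """-?(0|\d+)""".r
  private val FloatPat = """-?(0|\d+)(\.\d+)?((e|E)(\+|-)?\d+)?""".r

  def number(lexeme: String): Token = lexeme match {
    case IntPat(_)               => IntLit(lexeme)   // no fraction or exponent
    case FloatPat(_, _, _, _, _) => FloatLit(lexeme)
    case _ =>
      throw new IllegalArgumentException(s"not a numeric literal: $lexeme")
  }

  def main(args: Array[String]): Unit = {
    println(number("42"))      // IntLit(42)
    println(number("-3.5e10")) // FloatLit(-3.5e10)
  }
}
```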
Reads a string literal, and forms a StringLit token from it. The last-read input character ch must be the opening "-quote.
Raises an error if the lexeme is not recognized as a string literal.
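The scan this method performs can be sketched as follows; the function, its return shape, and the minimal escape handling are illustrative assumptions, not the real implementation:

```scala
// Sketch: read a string literal starting at the opening "-quote,
// returning its contents and the position past the closing quote.
// Escape handling is minimal: a backslash-escaped character is kept
// verbatim (the real lexer interprets JSON escapes).
object StringSketch {
  def string(input: String, start: Int): (String, Int) = {
    require(input.charAt(start) == '"', "must start at the opening quote")
    val sb = new StringBuilder
    var i = start + 1
    while (i < input.length && input.charAt(i) != '"') {
      if (input.charAt(i) == '\\' && i + 1 < input.length) {
        sb += input.charAt(i + 1); i += 2 // keep escaped char verbatim
      } else {
        sb += input.charAt(i); i += 1
      }
    }
    if (i >= input.length)
      throw new IllegalArgumentException("unterminated string literal")
    (sb.toString, i + 1)
  }

  def main(args: Array[String]): Unit =
    println(string("\"hi\" rest", 0)) // (hi,4)
}
```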
Reads next character into ch
Skips whitespace and reads the next lexeme into token.
Raises an error if the lexeme is not recognized as a valid token.
The number of characters read so far
The last-read token
The number of characters read before the start of the last-read token
A simple lexer for tokens as they are used in JSON, plus parens ( and ).

Tokens understood are: (, ), [, ], {, }, :, ,, true, false, null, strings (syntax as in JSON), integer numbers (syntax as in JSON: -?(0|\d+)), and floating point numbers (syntax as in JSON: -?(0|\d+)(\.\d+)?((e|E)(+|-)?\d+)?).

The end of input is represented as its own token, EOF. Lexers can keep one token lookahead.
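The token set above can be illustrated with a single combined regular expression; this standalone tokenizer is a sketch of the grammar, not the real Lexer:

```scala
// Sketch: recognize the token set described above (delimiters,
// keywords, strings, numbers) with one alternation, skipping
// whitespace between matches, and append EOF as its own token.
object JsonTokens {
  private val tok =
    """[()\[\]{}:,]|true|false|null|"(\\.|[^"\\])*"|-?(0|\d+)(\.\d+)?((e|E)(\+|-)?\d+)?""".r

  def tokenize(input: String): List[String] =
    tok.findAllIn(input).toList :+ "EOF" // end of input as its own token

  def main(args: Array[String]): Unit =
    println(tokenize("""{"a": [1, -2.5e3, true]}"""))
}
```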