---
layout: developer-doc
title: Lexer
category: syntax
tags:
order: 4
---
# Lexer
The lexer is the code generated by the flexer that is actually responsible for lexing Enso source code. It chunks the character stream into a (structured) token stream in order to make later processing faster, and to identify blocks.
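The chunking step can be illustrated with a minimal sketch. The `Token` variants and lexing rules below are purely illustrative assumptions, not the real lexer's; the sketch only shows the idea of consuming characters one at a time and grouping them into tokens.

```rust
// Illustrative sketch only: chunk a character stream into tokens.
#[derive(Debug, PartialEq)]
enum Token {
    Ident(String),
    Number(String),
}

/// Walk the source character by character, grouping runs of digits
/// into `Number` tokens and other runs into `Ident` tokens.
fn chunk(source: &str) -> Vec<Token> {
    let mut tokens = Vec::new();
    let mut chars = source.chars().peekable();
    while let Some(&c) = chars.peek() {
        if c.is_whitespace() {
            chars.next();
        } else if c.is_ascii_digit() {
            let mut text = String::new();
            while let Some(&d) = chars.peek() {
                if d.is_ascii_digit() { text.push(d); chars.next(); } else { break; }
            }
            tokens.push(Token::Number(text));
        } else {
            // Consume at least one character so lexing always makes progress.
            let mut text = String::from(chars.next().unwrap());
            if c.is_alphanumeric() || c == '_' {
                while let Some(&d) = chars.peek() {
                    if d.is_alphanumeric() || d == '_' { text.push(d); chars.next(); } else { break; }
                }
            }
            tokens.push(Token::Ident(text));
        }
    }
    tokens
}

fn main() {
    assert_eq!(
        chunk("foo 12"),
        vec![Token::Ident("foo".into()), Token::Number("12".into())]
    );
}
```

The real lexer additionally tracks offsets and block structure; see the sections below.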
## Lexer Architecture
The structure of the flexer's code generation forces the lexer to be split into
two parts: the definition, and the generation. As the latter is the point from
which the lexer will be used, the second subproject is the one that is graced
with the name `lexer`.
### Libraries in the Lexer Definition
The lexer generation subproject needs to be able to make the assumption that all
imports will be in the same place (relative to the crate root). To this end, the
definition subproject exports public modules `library` and `prelude`. These are
re-imported and used in the generation subproject to ensure that all components
are found at the same paths relative to the crate root.
This does mean, however, that all imports from within the current crate in the
definition subproject must be imported from the `library` module, not directly
from their paths relative to the crate root.
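A minimal sketch of this re-export convention follows. In reality the definition and generation are separate subprojects; here they are inline modules (with hypothetical placeholder contents) so the example is self-contained.

```rust
// Sketch of the `library`/`prelude` re-export convention.
// Module contents are hypothetical placeholders.

mod definition {
    /// Everything the definition exposes lives under `library`.
    pub mod library {
        pub fn token_count() -> usize { 0 } // placeholder item
    }
    /// Common imports, re-exported alongside `library`.
    pub mod prelude {
        pub use std::fmt::Debug;
    }
}

mod generation {
    // Re-import so that components are found at the same paths
    // (`library::...`, `prelude::...`) as in the definition.
    pub use crate::definition::library;
    pub use crate::definition::prelude;

    pub fn uses_library() -> usize {
        library::token_count()
    }
}

fn main() {
    assert_eq!(generation::uses_library(), 0);
}
```

The design choice here is that both subprojects refer to shared components through one stable path, so generated code does not need to know the definition crate's internal layout.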
## Lexer Functionality
The lexer needs to provide the following functionality as part of the parser.

- It consumes the source lazily, character by character, and produces a
  structured token stream consisting of the lexer AST.
- It must succeed on any input; any invalid constructs are represented in the
  token stream by `Invalid` tokens.
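The "must succeed on any input" property can be sketched as follows. The token names mirror the lexer's, but the rules are simplified assumptions for illustration: rather than returning an error, an unknown character becomes an `Unrecognized` token and lexing continues.

```rust
// Sketch of total lexing: unknown input never aborts the lexer.
#[derive(Debug, PartialEq)]
enum Token {
    Variable(String),
    Unrecognized(String),
}

fn lex(source: &str) -> Vec<Token> {
    let mut tokens = Vec::new();
    let mut chars = source.chars().peekable();
    while let Some(&c) = chars.peek() {
        if c.is_whitespace() {
            chars.next();
        } else if c.is_ascii_lowercase() {
            let mut text = String::new();
            while let Some(&d) = chars.peek() {
                if d.is_ascii_lowercase() || d == '_' { text.push(d); chars.next(); } else { break; }
            }
            tokens.push(Token::Variable(text));
        } else {
            // Consume exactly one character so lexing always makes progress,
            // and record it as Unrecognized instead of failing.
            tokens.push(Token::Unrecognized(chars.next().unwrap().to_string()));
        }
    }
    tokens
}

fn main() {
    assert_eq!(
        lex("foo § bar"),
        vec![
            Token::Variable("foo".into()),
            Token::Unrecognized("§".into()),
            Token::Variable("bar".into()),
        ]
    );
}
```

Because every branch consumes at least one character, the loop terminates on any input, which is what makes the lexer total.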
## The Lexer AST
In contrast to the full parser AST, the lexer operates on a simplified AST that we call a 'structured token stream'. While most lexers output a linear token stream, it is very important in Enso that we encode the nature of blocks into the token stream, hence giving it structure.
This encoding of blocks is crucial to the functionality of Enso as it ensures that no later stages of the parser can ignore blocks, and hence maintains them for use by the GUI.
It contains the following constructs:

- `Referent`: Referent identifiers (e.g. `Some_Ref_Ident`).
- `Variable`: Variable identifiers (e.g. `some_var_ident`).
- `External`: External identifiers (e.g. `someJavaName`).
- `Blank`: The blank name `_`.
- `Operator`: Operator identifiers (e.g. `-->>`).
- `Modifier`: Modifier operators (e.g. `+=`).
- `Number`: Numbers (e.g. `16_FFFF`).
- `DanglingBase`: An explicit base without an associated number (e.g. `16_`).
- `Text`: Text (e.g. `"Some text goes here."`).
- `Line`: A line in a block that contains tokens.
- `BlankLine`: A line in a block that contains only whitespace.
- `Block`: Syntactic blocks in the language.
- `InvalidSuffix`: Invalid tokens when in a given state that would otherwise be valid.
- `Unrecognized`: Tokens that the lexer doesn't recognise.
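These constructs could be mirrored as a Rust enum along the following lines. The field types are simplified assumptions (the real lexer also tracks offsets and spans); the point is that `Block` owns its `Line`s, which is what gives the token stream its structure.

```rust
// Hypothetical mirror of the lexer AST constructs; payloads simplified.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum Token {
    Referent(String),
    Variable(String),
    External(String),
    Blank,
    Operator(String),
    Modifier(String),
    Number(String),
    DanglingBase(String),
    Text(String),
    Line(Vec<Token>),
    BlankLine,
    // A block owns its lines, so block structure survives in the stream.
    Block(Vec<Token>),
    InvalidSuffix(String),
    Unrecognized(String),
}

/// A small structured stream: one line followed by an indented block.
fn sample_stream() -> Vec<Token> {
    vec![
        Token::Line(vec![
            Token::Variable("main".into()),
            Token::Operator("=".into()),
        ]),
        Token::Block(vec![
            Token::Line(vec![Token::Number("16_FFFF".into())]),
            Token::BlankLine,
        ]),
    ]
}

fn main() {
    // The block's lines nest inside `Block`, so the stream has two
    // top-level elements rather than a flat run of tokens.
    assert_eq!(sample_stream().len(), 2);
}
```

Because the block's contents nest inside the `Block` variant rather than being bracketed by flat begin/end markers, later parser stages cannot accidentally discard block boundaries.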
The distinction between the various kinds of identifiers is made here both to keep lexing fast and to allow macros to switch on the kind of identifier.
The actionables for this section are:
- Determine if we want to have separate ASTs for the lexer and the parser, or not.