mirror of https://github.com/github/semantic.git synced 2024-11-24 00:42:33 +03:00
semantic/docs/codegen.md
2020-04-24 13:55:53 -04:00


CodeGen Documentation

CodeGen is the process for auto-generating language-specific, strongly-typed ASTs to be used in Semantic. Since it is a critical component of Semantic's language support process, we recommend reading these docs first, as they provide an overview of the pipeline CodeGen supports.

Table of Contents

  • CodeGen Pipeline
  • Generating ASTs
  • Inspecting auto-generated datatypes
  • Tests
  • Additional notes

CodeGen Pipeline

During parser generation, tree-sitter produces a JSON file that captures the structure of a language's grammar. Based on this, we're able to derive datatypes representing surface languages, and then use those datatypes to generically build ASTs. This automates the engineering effort historically required for adding a new language.

The following steps provide a high-level outline of the process:

  1. Deserialize. First, we deserialize the node-types.json file for a given language into the desired shape of datatypes via parsing capabilities afforded by the Aeson library. The node-types.json file takes on four distinct shapes: sums, products, named leaves, and anonymous leaves.
  2. Generate Syntax. We then use Template Haskell to auto-generate language-specific, strongly-typed datatypes that represent various language constructs. This API exports the top-level function astDeclarationsForLanguage to auto-generate datatypes at compile-time, which is invoked by a given language AST module.
  3. Unmarshal. Unmarshaling is the process of iterating over tree-sitter's parse trees using its tree cursor API and producing Haskell ASTs for the relevant nodes. We parse source code from tree-sitter and unmarshal the data we get to build these ASTs generically. This file exports the top-level function parseByteString, which takes source code and a language as arguments and produces an AST.
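
The four node-types.json shapes in step 1 can be pictured with a minimal Haskell sketch. This is illustrative only: the names (Shape, classify) are invented here, and the real deserializer builds richer types with Aeson rather than this base-only classifier.

```haskell
-- Illustrative only: simplified stand-ins for the four shapes a
-- node-types.json entry can take. The real deserializer uses Aeson.
data Shape
  = Sum           { typeName :: String, subtypes :: [String] } -- alternatives, e.g. "_expression"
  | Product       { typeName :: String, fields   :: [String] } -- named children/fields
  | NamedLeaf     { typeName :: String }                       -- e.g. "identifier"
  | AnonymousLeaf { typeName :: String }                       -- e.g. "+"
  deriving (Show, Eq)

-- Classify an entry (name, named flag, subtypes, fields) the way the
-- deserializer distinguishes the four cases.
classify :: String -> Bool -> [String] -> [String] -> Shape
classify name _     subs@(_:_) _          = Sum name subs
classify name _     []         flds@(_:_) = Product name flds
classify name True  []         []         = NamedLeaf name
classify name False []         []         = AnonymousLeaf name

main :: IO ()
main = do
  print (classify "_expression" True ["integer", "identifier"] [])
  print (classify "identifier" True [] [])
  print (classify "+" False [] [])
```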

(Diagram: the CodeGen pipeline.)

The remainder of this document provides more detail on generating ASTs, inspecting datatypes, running tests, and decisions pertaining to the relevant APIs.

Generating ASTs

To parse source code and produce ASTs locally:

  1. Load the REPL for a given language package:
cabal new-repl lib:semantic-python
  2. Set the language extensions OverloadedStrings and TypeApplications, and import the relevant modules AST.Unmarshal, Source.Range, and Source.Span:
:seti -XOverloadedStrings
:seti -XTypeApplications

import Source.Span
import Source.Range
import AST.Unmarshal
  3. You can now call parseByteString, passing in the language you wish to parse (here Python, given by the argument Language.Python.Grammar.tree_sitter_python) and the source code (here the integer 1). Since the function is constrained by (Unmarshal t, UnmarshalAnn a), you can use type applications to provide a top-level node t, an entry point into the tree, in addition to a polymorphic annotation a used to represent range and span. In this case, the top-level root node is Module, and the annotation is given by Span and Range as defined in the semantic-source package:
parseByteString @Language.Python.AST.Module @(Source.Span.Span, Source.Range.Range) Language.Python.Grammar.tree_sitter_python "1"

This generates the following AST:

Right (Module {ann = (Span {start = Pos {line = 0, column = 0}, end = Pos {line = 0, column = 1}},Range {start = 0, end = 1}), extraChildren = [R1 (SimpleStatement {getSimpleStatement = L1 (R1 (R1 (L1 (ExpressionStatement {ann = (Span {start = Pos {line = 0, column = 0}, end = Pos {line = 0, column = 1}},Range {start = 0, end = 1}), extraChildren = L1 (L1 (Expression {getExpression = L1 (L1 (L1 (PrimaryExpression {getPrimaryExpression = R1 (L1 (L1 (L1 (Integer {ann = (Span {start = Pos {line = 0, column = 0}, end = Pos {line = 0, column = 1}},Range {start = 0, end = 1}), text = "1"}))))})))})) :| []}))))})]})
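
The L1/R1 wrappers in this output are injections into GHC.Generics' anonymous sum type (:+:), which the generated datatypes use to encode alternatives. A small self-contained illustration, with invented stand-in node types rather than the real generated ones:

```haskell
{-# LANGUAGE TypeOperators #-}
import GHC.Generics ((:+:) (..))

-- Invented stand-ins for two generated node types.
newtype CompoundStatement a = CompoundStatement a deriving Show
newtype SimpleStatement   a = SimpleStatement   a deriving Show

-- A child that is either a compound or a simple statement, encoded the
-- same way as in the generated Module type above.
child :: (CompoundStatement :+: SimpleStatement) ()
child = R1 (SimpleStatement ()) -- R1 injects the right alternative

-- Pattern matching on a (:+:) sum inspects the injection.
describe :: (CompoundStatement :+: SimpleStatement) a -> String
describe (L1 _) = "compound"
describe (R1 _) = "simple"

main :: IO ()
main = putStrLn (describe child) -- prints "simple"
```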

Inspecting auto-generated datatypes

Datatypes are derived from a language and its node-types.json file using the GenerateSyntax API. These datatypes can be viewed in the REPL just as they would for any other datatype, using :i after loading the language-specific AST.hs module for a given language.

:l semantic-python/src/Language/Python/AST.hs
Ok, six modules loaded.
*Language.Python.AST Source.Span Source.Range> :i Module

This shows us the auto-generated Module datatype:

data Module a
  = Module {Language.Python.AST.ann :: a,
            Language.Python.AST.extraChildren :: [(GHC.Generics.:+:)
                                                    CompoundStatement SimpleStatement a]}
  	-- Defined at /Users/aymannadeem/github/semantic/semantic-python/src/Language/Python/AST.hs:23:1
instance Show a => Show (Module a)
  -- Defined at /Users/aymannadeem/github/semantic/semantic-python/src/Language/Python/AST.hs:23:1
instance Ord a => Ord (Module a)
  -- Defined at /Users/aymannadeem/github/semantic/semantic-python/src/Language/Python/AST.hs:23:1
instance Eq a => Eq (Module a)
  -- Defined at /Users/aymannadeem/github/semantic/semantic-python/src/Language/Python/AST.hs:23:1
instance Traversable Module
  -- Defined at /Users/aymannadeem/github/semantic/semantic-python/src/Language/Python/AST.hs:23:1
instance Functor Module
  -- Defined at /Users/aymannadeem/github/semantic/semantic-python/src/Language/Python/AST.hs:23:1
instance Foldable Module
  -- Defined at /Users/aymannadeem/github/semantic/semantic-python/src/Language/Python/AST.hs:23:1
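
Because the generated datatypes derive Functor, Foldable, and Traversable over the annotation parameter, annotations can be collected or rewritten generically. A toy stand-in (this Module is invented for illustration and is much simpler than the generated one):

```haskell
{-# LANGUAGE DeriveFoldable, DeriveFunctor, DeriveTraversable #-}
import Data.Foldable (toList)

-- Toy stand-in for a generated node type: the annotation is a
-- polymorphic parameter 'a', just as in the real generated Module.
data Module a = Module { ann :: a, extraChildren :: [Module a] }
  deriving (Show, Functor, Foldable, Traversable)

tree :: Module Int
tree = Module { ann = 0, extraChildren = [Module 1 [], Module 2 [Module 3 []]] }

main :: IO ()
main = do
  print (toList tree)           -- Foldable collects every annotation: [0,1,2,3]
  print (ann (fmap (+10) tree)) -- Functor rewrites annotations: 10
```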

Here is an example that describes the relationship between a Python identifier represented in the tree-sitter-generated JSON file and the datatype generated by Template Haskell from that JSON:

Type: Named leaf

JSON:

{
  "type": "identifier",
  "named": true
}

TH-generated code:

data TreeSitter.Python.AST.Identifier a
  = TreeSitter.Python.AST.Identifier {TreeSitter.Python.AST.ann :: a,
                                      TreeSitter.Python.AST.bytes :: text-1.2.3.1:Data.Text.Internal.Text}
instance Show a => Show (TreeSitter.Python.AST.Identifier a)
instance Ord a => Ord (TreeSitter.Python.AST.Identifier a)
instance Eq a => Eq (TreeSitter.Python.AST.Identifier a)
instance Traversable TreeSitter.Python.AST.Identifier
instance Functor TreeSitter.Python.AST.Identifier
instance Foldable TreeSitter.Python.AST.Identifier
instance Unmarshal TreeSitter.Python.AST.Identifier
instance SymbolMatching TreeSitter.Python.AST.Identifier
  -- all defined at TreeSitter/Python/AST.hs:10:1

Tests

As of right now, Hedgehog tests are minimal and only in place for the Python library.

To run tests:

cabal v2-test semantic-python

Additional notes

  • GenerateSyntax provides a way to pre-define certain datatypes for which Template Haskell is not used. Any datatypes among the node types that have already been defined in the module where the splice is run will be skipped, allowing customization of the representation of parts of the tree. While this gives us flexibility, we encourage using it sparingly, as it imposes an extra maintenance burden, particularly when the grammar changes. It may be used, for example, to parse literals into Haskell equivalents (such as parsing the textual contents of integer literals into Integers), and may require defining TS.UnmarshalAnn or TS.SymbolMatching instances for (parts of) the custom datatypes, depending on where and how the datatype occurs in the generated tree, in addition to the usual Foldable, Functor, etc. instances provided for generated datatypes.
  • Annotations are captured by a polymorphic parameter a.
  • Unmarshal defines both generic and non-generic classes. This is because generic behaviors differ from what we get non-generically, and in the case of Maybe and [] we actually prefer doing things non-generically. Since [] is a sum, the generic behavior for :+: would be invoked and would expect repetitions to be represented in the parse tree as right-nested singly-linked lists (e.g., (a (b (c (d …))))) rather than as consecutive sibling nodes (e.g., (a b c … d)), which is what our trees have. We want to match the latter.
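
The sibling-versus-nested distinction can be pictured with a toy tree type (Node, siblings, and collect are invented for this illustration; they are not part of the Unmarshal API):

```haskell
-- Toy node type, invented for illustration.
data Node = Node String [Node] deriving (Show, Eq)

-- What tree-sitter actually produces for a repetition:
-- consecutive sibling nodes under one parent, (a b c).
siblings :: Node
siblings = Node "block" [Node "a" [], Node "b" [], Node "c" []]

-- What a naive generic encoding of [] as nested sums would expect
-- instead: right-nested cons cells, (a (b (c nil))).
nested :: Node
nested = Node "cons" [ Node "a" []
                     , Node "cons" [ Node "b" []
                                   , Node "cons" [Node "c" [], Node "nil" []]]]

-- A non-generic instance simply consumes consecutive siblings in order.
collect :: Node -> [String]
collect (Node _ kids) = [name | Node name _ <- kids]

main :: IO ()
main = print (collect siblings) -- ["a","b","c"]
```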