Leo Test Framework

This directory contains Leo code samples that are parsed and run as tests by the test-framework.

Structure

Currently, the test framework covers two areas: the compiler and the parser; tests for each are placed in the matching folder. A third folder, expectations, contains the results of test execution; these results are checked into git and compared against the test output on subsequent runs.

Test Structure

Tests can be placed either in the compiler/ or in the parser/ directory. Each test is a Leo file with correct (or intentionally incorrect) Leo code. What makes a Leo file a test is a block comment at the top of the file:

/*
namespace: Parse
expectation: Pass
*/

circuit X {
    x: u32,
    y: u32,
}

This comment contains a YAML structure with a set of mandatory and optional fields.

Test Expectations

After an initial run of the tests, test expectations are autogenerated and placed under the expectations/ directory. They record the results of execution in detail (for example, compiler test expectations include the number of constraints and the output registers).
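As an illustration, an autogenerated expectation file is a YAML document mirroring the test's header plus the recorded outputs. The exact keys and output shape below are an assumption for illustration; consult a generated file under expectations/ for the real format:

```yaml
---
namespace: Parse
expectation: Pass
outputs:
  # one entry per test run, e.g. a serialized AST for parser tests
  # (contents here are illustrative, not an actual serialized output)
  - "..."
```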

During subsequent test runs, the result of each test is compared to the stored expectations. If the stored expectations (say, the number of constraints in a Pedersen hash example) don't match the actual results, an error is thrown and the test fails. There are two possible scenarios:

  1. If the test failed because the logic was changed intentionally, the expectations need to be deleted; new ones will be generated on the next run. The PR should include the updated expectations along with the changes to the tests or code.
  2. If the test should pass, then expectations should not be changed or removed.

Test Configuration

Here is the list of all supported configuration options for compiler and parser tests.

namespace

- Mandatory: yes
- Namespace: all
- Values: Compile / Parse

Two main values are supported: Parse and Compile. The former marks a parser test, the latter a full compiler test.

Besides Parse, there are additional possible values for this field: ParseStatement, ParseExpression, and Token. Each of them tests the Leo parser at a different level: individual lexer tokens, expressions, or statements.

Compiler tests always include complete Leo programs.
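For instance, a test exercising only the expression parser might look like the following. The expression itself is a hypothetical sample, chosen only to illustrate the header:

```
/*
namespace: ParseExpression
expectation: Pass
*/

x + y * 2u32
```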

expectation

- Mandatory: yes
- Namespace: all
- Values: Pass / Fail

This setting indicates whether the tested code is supposed to succeed or to fail. If a test marked Pass actually fails (or vice versa), you'll know that something went wrong and the test or the compiler/parser needs fixing.
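A Fail test contains intentionally invalid code and passes only when the tool rejects it. A minimal sketch, assuming the missing comma below is a parse error:

```
/*
namespace: Parse
expectation: Fail
*/

circuit X {
    x: u32
    y: u32
}
```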

input_file (Compile)

- Mandatory: no
- Namespace: Compile
- Values: <input file path>, ...

This setting allows supplying one or more input files to the Leo program. The program is run once with each provided input. For example:

/*
namespace: Compile
expectation: Pass
input_file:
 - inputs/a_0.in
 - inputs/a_1.in
*/

function main(a: u32) {}
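One of the referenced input files, such as inputs/a_0.in, might look roughly like this. The section name and assignment syntax follow the Leo input format as I understand it at the time of writing; treat the exact value as illustrative:

```
[main]
a: u32 = 5;
```

Each listed input file produces a separate run of main, with the expectations recording the result of every run.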