
Leo Test Framework

This directory includes Leo code samples, which are parsed and used as tests by the test-framework crate.

Structure

Currently, the test framework covers only two areas: the compiler and the parser; tests for each are placed in the matching folder. A third folder, expectations/, contains the results of test execution, which are committed to git and compared against the test output on later runs.
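
For orientation, the layout of this directory looks roughly like this (test-framework/ is the Rust harness that actually runs the tests):

  compiler/         Leo programs used as compiler tests
  parser/           Leo snippets used as parser tests
  expectations/     generated test expectations, committed to git
  test-framework/   the Rust test runner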

Test Structure

Tests can be placed in either the compiler/ or the parser/ directory. Each test is a Leo file with correct (or intentionally incorrect) Leo code. What makes a Leo file a test is a block comment at the top of the file:

/*
namespace: Parse
expectation: Pass
*/

circuit X {
    x: u32,
    y: u32,
}

This comment contains a YAML structure with a set of mandatory and optional fields.

Test Expectations

After an initial run of the tests, test expectations will be autogenerated and placed under the expectations/ directory. They contain the results of execution in detail (for example, for compiler tests they include the number of constraints and the output registers).
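
The exact schema of an expectation file depends on the namespace, but as a rough, illustrative sketch it is a YAML document that mirrors the test header and records the output of each test case, along these lines:

---
namespace: Parse
expectation: Pass
outputs:
  - <serialized result of the first case>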

During subsequent test runs, the results of each test are compared to the stored expectations, and if the stored expectations (say, the number of constraints in the Pedersen Hash example) don't match the actual results, an error is thrown and the test does not pass. There are two possible scenarios:

  1. If the test failed because the logic was changed intentionally, then the affected expectation files need to be deleted; new ones will be generated on the next run. A PR should contain the changes to expectations as well as to the tests or code.
  2. If the test is supposed to pass, then the expectations should not be changed or removed; the code or the test itself needs to be fixed instead.

Test Configuration

Here is the list of all possible configuration options for compiler and parser tests.

namespace

- Mandatory: yes
- Namespace: all
- Values: ...

Several values are supported, but they vary depending on the directory you are in.

Parser Directory namespaces:

  • Parse - Test a file to check that it is a valid Leo program.
  • ParseExpression - Test a file line by line to check that each line is a valid Leo expression (a sketch follows this list).
  • ParseStatement - Test a file, consuming lines up to a blank line, to check that each such block is a valid Leo statement.
  • Serialize - Test a file to check that it can be serialized to JSON.
  • Input - Test an input file to check that it is a valid Leo input file.
  • Token - Test a file line by line to check that it contains zero or more valid Leo parser tokens.
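
For illustration, here is a minimal sketch of a ParseExpression test (the expressions themselves are made up for this example); each non-empty line of the body is parsed as a separate expression:

/*
namespace: ParseExpression
expectation: Pass
*/

1u8 + 2u8
x * (y + z)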

Compiler Directory namespaces:

  • Compile - Test a file to check that it is a valid Leo program and that it compiles without errors (a minimal sketch follows).
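
A minimal compiler test follows the same pattern as the parser tests; the program below is only an illustrative sketch, not an existing test:

/*
namespace: Compile
expectation: Pass
*/

function main(a: u32) -> u32 {
    return a + 1u32;
}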

expectation

- Mandatory: yes
- Namespace: all
- Values: Pass / Fail

This setting indicates whether the tested code is supposed to succeed or to fail. If the test was marked as Pass but it actually failed, you'll know that something went wrong and the test or the compiler/parser needs fixing.
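
For example, a parser test that is expected to fail can deliberately contain invalid code; the snippet below is an illustrative sketch, not an existing test:

/*
namespace: Parse
expectation: Fail
*/

// the closing brace is intentionally missing, so parsing should fail
circuit X {
    x: u32,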

input_file (Compile)

- Mandatory: no
- Namespace: Compile
- Values: <input file path>, ...

This setting allows using one or more input files for the Leo program. The program will be run with every provided input. See this example:

/*
namespace: Compile
expectation: Pass
input_file:
 - inputs/a_0.in
 - inputs/a_1.in
*/

function main(a: u32) {}
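
The referenced input files live next to the test; their contents are not shown here, but as an assumption about the Leo input format, a file such as inputs/a_0.in would assign a value to each parameter of main in roughly this form:

[main]
a: u32 = 5u32;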