Space-precedence does not apply to value-level operators (#10597)

In a sequence of value-level operators, whitespace does not affect relative precedence. Functional operators still follow the space-precedence rules.

The "functional" operators are: `>> << |> |>> <| <<| : .`, application, and any operator containing `<-` or `->`. All other operators are considered value-level operators.

Asymmetric whitespace can still be used to form *operator sections* of value-level operators, e.g. `+2 * 3` is still equivalent to `x -> (x+2) * 3`.

Precedence of application is unchanged, so `f x+y` is still equivalent to `f (x + y)` and `f x+y * z` is still equivalent to `(f (x + y)) * z`.

Any attempt to use spacing to override value-level operator precedence is caught by the new Enso linter. Mixed spacing (for clarity) in value-operator expressions is allowed, as long as it is consistent with the operators' precedences.
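For example (illustrative spellings; the exact warning text may differ), the following all parse as `1 + (2 * 3)` because spacing no longer affects value-operator precedence, but only the first two pass the linter:

```
1 + 2*3    # spacing consistent with precedence: accepted
1 + 2 * 3  # uniform spacing: accepted
1+2 * 3    # spacing suggests (1+2) * 3, contradicting precedence: warned
```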

Closes #10366.

# Important Notes
Precedence warnings:
- The parser emits a warning if the whitespace in an expression is inconsistent with its effective precedence.
- A new Enso linter can be run with `./run libraries lint`. It parses all `.enso` files in `distribution/lib` and `test`, and reports any errors or warnings. It can also be run on individual files: `cargo run --release --bin check_syntax -- file1 file2...` (the result may be easier to read than the `./run` output).
- The linter is also run as part of `./run lint`, so it is checked in CI.

Additional language change:
- The exponentiation operator (`^`) now has higher precedence than the multiplication class (`*`, `/`, `%`). This change did not affect any existing Enso files.
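A sketch of the effect, using a hypothetical expression (not taken from the diff); the before/after groupings follow from the precedence table change in the lexer below:

```
2 * 3^2
# before: `^` bound looser than `*`, so this grouped as (2 * 3)^2 = 36
# after:  `^` binds tighter than `*`, so this groups as 2 * (3^2) = 18
```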

Library changes:
- The libraries have been updated. The new warnings were used to identify all affected code; the changes themselves have not been programmatically verified (in many cases their equivalence relies on the associativity of string concatenation).
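The pure-regrouping cases are safe because concatenation is associative: the edits change grouping, never operand order. A minimal sketch:

```
# Regrouping is safe because `+` on Text is associative:
("a " + name) + " b" == "a " + (name + " b")   # same result either way
```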
Commit e5b85bf16e by Kaz Wesley, 2024-07-24 06:55:44 -04:00, committed by GitHub (parent 609d5abbc0).
43 changed files with 958 additions and 382 deletions.


@ -4,9 +4,11 @@
- [Enforce conversion method return type][10468]
- [Renaming launcher executable to ensoup][10535]
- [Space-precedence does not apply to value-level operators][10597]
[10468]: https://github.com/enso-org/enso/pull/10468
[10535]: https://github.com/enso-org/enso/pull/10535
[10597]: https://github.com/enso-org/enso/pull/10597
#### Enso IDE


@ -50,6 +50,7 @@ exports[`Parsing '2
"spanLeftOffsetCodeStartLine": 1,
"spanLeftOffsetCodeStartUtf8": 4,
"type": "Number",
"warnings": [],
"whitespaceLengthInCodeParsed": 1,
"whitespaceStartInCodeParsed": 4,
},
@ -124,6 +125,7 @@ exports[`Parsing '2
"spanLeftOffsetCodeStartLine": 2,
"spanLeftOffsetCodeStartUtf8": 9,
"type": "Number",
"warnings": [],
"whitespaceLengthInCodeParsed": 1,
"whitespaceStartInCodeParsed": 9,
},
@ -197,6 +199,7 @@ exports[`Parsing '2
"spanLeftOffsetCodeStartLine": 0,
"spanLeftOffsetCodeStartUtf8": 0,
"type": "Number",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
},
@ -208,6 +211,7 @@ exports[`Parsing '2
"spanLeftOffsetCodeStartLine": 0,
"spanLeftOffsetCodeStartUtf8": 0,
"type": "OperatorBlockApplication",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
},
@ -231,6 +235,7 @@ exports[`Parsing '2
},
],
"type": "BodyBlock",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
}
@ -282,6 +287,7 @@ exports[`Parsing 'Data.read
"whitespaceStartInCodeBuffer": 0,
},
"type": "Ident",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
},
@ -336,6 +342,7 @@ exports[`Parsing 'Data.read
"whitespaceStartInCodeBuffer": 5,
},
"type": "Ident",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 5,
},
@ -347,6 +354,7 @@ exports[`Parsing 'Data.read
"spanLeftOffsetCodeStartLine": 0,
"spanLeftOffsetCodeStartUtf8": 0,
"type": "OprApp",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
},
@ -401,6 +409,7 @@ exports[`Parsing 'Data.read
"spanLeftOffsetCodeStartLine": 1,
"spanLeftOffsetCodeStartUtf8": 10,
"type": "Number",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 10,
},
@ -454,6 +463,7 @@ exports[`Parsing 'Data.read
"spanLeftOffsetCodeStartLine": 1,
"spanLeftOffsetCodeStartUtf8": 13,
"type": "Number",
"warnings": [],
"whitespaceLengthInCodeParsed": 1,
"whitespaceStartInCodeParsed": 13,
},
@ -465,6 +475,7 @@ exports[`Parsing 'Data.read
"spanLeftOffsetCodeStartLine": 1,
"spanLeftOffsetCodeStartUtf8": 10,
"type": "OprApp",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 10,
},
@ -488,6 +499,7 @@ exports[`Parsing 'Data.read
},
],
"type": "BodyBlock",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
}
@ -574,6 +586,7 @@ exports[`Parsing 'Data.read "File"
"spanLeftOffsetCodeStartLine": 0,
"spanLeftOffsetCodeStartUtf8": 9,
"type": "TextLiteral",
"warnings": [],
"whitespaceLengthInCodeParsed": 1,
"whitespaceStartInCodeParsed": 9,
},
@ -611,6 +624,7 @@ exports[`Parsing 'Data.read "File"
"whitespaceStartInCodeBuffer": 0,
},
"type": "Ident",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
},
@ -665,6 +679,7 @@ exports[`Parsing 'Data.read "File"
"whitespaceStartInCodeBuffer": 5,
},
"type": "Ident",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 5,
},
@ -676,6 +691,7 @@ exports[`Parsing 'Data.read "File"
"spanLeftOffsetCodeStartLine": 0,
"spanLeftOffsetCodeStartUtf8": 0,
"type": "OprApp",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
},
@ -687,6 +703,7 @@ exports[`Parsing 'Data.read "File"
"spanLeftOffsetCodeStartLine": 0,
"spanLeftOffsetCodeStartUtf8": 0,
"type": "App",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
},
@ -741,6 +758,7 @@ exports[`Parsing 'Data.read "File"
"spanLeftOffsetCodeStartLine": 1,
"spanLeftOffsetCodeStartUtf8": 17,
"type": "Number",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 17,
},
@ -794,6 +812,7 @@ exports[`Parsing 'Data.read "File"
"spanLeftOffsetCodeStartLine": 1,
"spanLeftOffsetCodeStartUtf8": 20,
"type": "Number",
"warnings": [],
"whitespaceLengthInCodeParsed": 1,
"whitespaceStartInCodeParsed": 20,
},
@ -805,6 +824,7 @@ exports[`Parsing 'Data.read "File"
"spanLeftOffsetCodeStartLine": 1,
"spanLeftOffsetCodeStartUtf8": 17,
"type": "OprApp",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 17,
},
@ -828,6 +848,7 @@ exports[`Parsing 'Data.read "File"
},
],
"type": "BodyBlock",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
}
@ -878,6 +899,7 @@ exports[`Parsing 'Data.read File
"whitespaceStartInCodeBuffer": 10,
},
"type": "Ident",
"warnings": [],
"whitespaceLengthInCodeParsed": 1,
"whitespaceStartInCodeParsed": 9,
},
@ -915,6 +937,7 @@ exports[`Parsing 'Data.read File
"whitespaceStartInCodeBuffer": 0,
},
"type": "Ident",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
},
@ -969,6 +992,7 @@ exports[`Parsing 'Data.read File
"whitespaceStartInCodeBuffer": 5,
},
"type": "Ident",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 5,
},
@ -980,6 +1004,7 @@ exports[`Parsing 'Data.read File
"spanLeftOffsetCodeStartLine": 0,
"spanLeftOffsetCodeStartUtf8": 0,
"type": "OprApp",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
},
@ -991,6 +1016,7 @@ exports[`Parsing 'Data.read File
"spanLeftOffsetCodeStartLine": 0,
"spanLeftOffsetCodeStartUtf8": 0,
"type": "App",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
},
@ -1045,6 +1071,7 @@ exports[`Parsing 'Data.read File
"spanLeftOffsetCodeStartLine": 1,
"spanLeftOffsetCodeStartUtf8": 15,
"type": "Number",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 15,
},
@ -1098,6 +1125,7 @@ exports[`Parsing 'Data.read File
"spanLeftOffsetCodeStartLine": 1,
"spanLeftOffsetCodeStartUtf8": 18,
"type": "Number",
"warnings": [],
"whitespaceLengthInCodeParsed": 1,
"whitespaceStartInCodeParsed": 18,
},
@ -1109,6 +1137,7 @@ exports[`Parsing 'Data.read File
"spanLeftOffsetCodeStartLine": 1,
"spanLeftOffsetCodeStartUtf8": 15,
"type": "OprApp",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 15,
},
@ -1132,6 +1161,7 @@ exports[`Parsing 'Data.read File
},
],
"type": "BodyBlock",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
}
@ -1182,6 +1212,7 @@ exports[`Parsing 'foo bar
"whitespaceStartInCodeBuffer": 4,
},
"type": "Ident",
"warnings": [],
"whitespaceLengthInCodeParsed": 1,
"whitespaceStartInCodeParsed": 3,
},
@ -1217,6 +1248,7 @@ exports[`Parsing 'foo bar
"whitespaceStartInCodeBuffer": 0,
},
"type": "Ident",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
},
@ -1228,6 +1260,7 @@ exports[`Parsing 'foo bar
"spanLeftOffsetCodeStartLine": 0,
"spanLeftOffsetCodeStartUtf8": 0,
"type": "App",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
},
@ -1271,6 +1304,7 @@ exports[`Parsing 'foo bar
},
],
"type": "BodyBlock",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
}
@ -1320,6 +1354,7 @@ exports[`Parsing 'foo bar=baz' 1`] = `
"whitespaceStartInCodeBuffer": 8,
},
"type": "Ident",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 8,
},
@ -1373,6 +1408,7 @@ exports[`Parsing 'foo bar=baz' 1`] = `
"whitespaceStartInCodeBuffer": 0,
},
"type": "Ident",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
},
@ -1406,6 +1442,7 @@ exports[`Parsing 'foo bar=baz' 1`] = `
"spanLeftOffsetCodeStartLine": 0,
"spanLeftOffsetCodeStartUtf8": 0,
"type": "NamedApp",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
},
@ -1429,6 +1466,7 @@ exports[`Parsing 'foo bar=baz' 1`] = `
},
],
"type": "BodyBlock",
"warnings": [],
"whitespaceLengthInCodeParsed": 0,
"whitespaceStartInCodeParsed": 0,
}


@ -75,6 +75,7 @@
<edition>.yaml:
enso.bundle.template:
launcher-manifest.yaml:
lib:
engine/:
runner/:
src/:


@ -5,4 +5,5 @@ use crate::prelude::*;
// === Export ===
// ==============
pub mod enso_linter;
pub mod parser;


@ -0,0 +1,36 @@
use super::*;
use ide_ci::programs::cargo;
use ide_ci::programs::Cargo;
use crate::paths::generated::RepoRoot;
const LINTER_CRATE_NAME: &str = "enso-parser-debug";
const LINTER_BIN_NAME: &str = "check_syntax";
const ENSO_EXTENSION: &str = ".enso";
pub fn cargo_run_linter_cmd(repo_root: &Path) -> Result<Command> {
let mut ret = Cargo.cmd()?;
ret.current_dir(repo_root)
.apply(&cargo::Command::Run)
.apply(&cargo::Options::Package(LINTER_CRATE_NAME.into()))
.apply(&cargo::RunOption::Bin(LINTER_BIN_NAME.into()))
.apply(&cargo::RunOption::Release);
Ok(ret)
}
fn collect_source_paths(repo_root: &RepoRoot) -> Result<Vec<PathBuf>> {
let glob_pattern = format!("**/*{ENSO_EXTENSION}");
let source_roots = [&repo_root.distribution.lib.path, &repo_root.test.path];
let glob_paths: Vec<_> = source_roots
.into_iter()
.map(|path| glob::glob(path.join(&glob_pattern).as_str()))
.collect::<std::result::Result<_, _>>()?;
Ok(glob_paths.into_iter().flatten().collect::<std::result::Result<_, _>>()?)
}
pub async fn lint_all(repo_root: RepoRoot) -> Result<()> {
let sources = collect_source_paths(&repo_root)?;
cargo_run_linter_cmd(&repo_root)?.arg("--").args(sources).run_ok().await?;
Ok(())
}


@ -124,6 +124,8 @@ impl Manipulator for Options {
pub enum RunOption {
/// Name of the bin target to run.
Bin(String),
/// Build in release mode.
Release,
}
impl Manipulator for RunOption {
@ -135,6 +137,7 @@ impl Manipulator for RunOption {
Bin(binary_name) => {
command.arg(binary_name.as_str());
}
Release => {}
}
}
}


@ -27,6 +27,7 @@ pub mod git_clean;
pub mod gui;
pub mod ide;
pub mod java_gen;
pub mod libraries;
pub mod release;
pub mod runtime;
pub mod wasm;
@ -140,6 +141,8 @@ pub enum Target {
JavaGen(java_gen::Target),
/// Check if the changelog has been updated. Requires CI environment.
ChangelogCheck,
/// Enso-libraries related subcommand.
Libraries(libraries::Target),
}
/// Build, test and package Enso Engine.


@ -0,0 +1,18 @@
use crate::prelude::*;
use clap::Args;
use clap::Subcommand;
#[derive(Subcommand, Clone, Copy, Debug, PartialEq, Eq)]
pub enum Command {
/// Check syntax of all Enso source files.
Lint,
}
#[derive(Args, Clone, Copy, Debug)]
pub struct Target {
#[clap(subcommand)]
pub action: Command,
}


@ -21,6 +21,7 @@ use crate::prelude::*;
use std::future::join;
use crate::arg::java_gen;
use crate::arg::libraries;
use crate::arg::release::Action;
use crate::arg::BuildJob;
use crate::arg::Cli;
@ -772,6 +773,8 @@ pub async fn main_internal(config: Option<Config>) -> Result {
.run_ok()
.await?;
enso_build::rust::enso_linter::lint_all(ctx.repo_root.clone()).await?;
enso_build::web::install(&ctx.repo_root).await?;
enso_build::web::run_script(&ctx.repo_root, enso_build::web::Script::Typecheck).await?;
enso_build::web::run_script(&ctx.repo_root, enso_build::web::Script::Lint).await?;
@ -828,6 +831,11 @@ pub async fn main_internal(config: Option<Config>) -> Result {
let ci_context = ide_ci::actions::context::Context::from_env()?;
enso_build::changelog::check::check(ctx.repo_root.clone(), ci_context).await?;
}
Target::Libraries(command) => match command.action {
libraries::Command::Lint => {
enso_build::rust::enso_linter::lint_all(ctx.repo_root.clone()).await?;
}
},
};
info!("Completed main job.");
global::complete_tasks().await?;


@ -390,7 +390,7 @@ type Accumulator
## PRIVATE
update_moments self value = if self.moments.is_nothing then Nothing else
case value of
_ : Number -> self.moments.map_with_index i->v-> v + (value ^ i+1)
_ : Number -> self.moments.map_with_index i->v-> v + value^(i+1)
_ -> Nothing
## PRIVATE


@ -531,7 +531,7 @@ type Loss_Of_Numeric_Precision
## PRIVATE
to_text : Text
to_text self =
"(Loss_Of_Numeric_Precision.Warning (original_value = "+self.original_value.to_text+") (new_value = " + self.new_value.to_text+"))"
"(Loss_Of_Numeric_Precision.Warning (original_value = "+self.original_value.to_text+") (new_value = "+self.new_value.to_text+"))"
## Indicates that a computed or provided value is outside the allowable range.
type Out_Of_Range


@ -53,7 +53,7 @@ moving_backup file action = recover_io_and_not_found <|
## PRIVATE
write_file_backing_up_old_one file action = recover_io_and_not_found <|
parent = file.parent
bak_file = parent / file.name+".bak"
bak_file = parent / (file.name+".bak")
# The loop is looking for a name for a temporary file that is not yet taken.
go i =
new_name = file.name + ".new" + if i == 0 then "" else "." + i.to_text


@ -244,8 +244,8 @@ type JDBC_Connection
values = columns.map col-> col.at row_id
set_statement_values stmt statement_setter values expected_type_hints=expected_type_hints
stmt.addBatch
if (row_id+1 % batch_size) == 0 then check_rows stmt.executeBatch batch_size
if (num_rows % batch_size) != 0 then check_rows stmt.executeBatch (num_rows % batch_size)
if (row_id+1) % batch_size == 0 then check_rows stmt.executeBatch batch_size
if num_rows % batch_size != 0 then check_rows stmt.executeBatch (num_rows % batch_size)
## PRIVATE


@ -430,7 +430,7 @@ agg_count_distinct_include_null args =
The columns are converted to VARIANT type because of that, which may incur some overhead.
But there seems to be no other reliable way to handle this for columns like numeric where no non-NULL value exists that can be guaranteed to be unused.
replace_null_with_marker expr =
SQL_Builder.code "COALESCE(" ++ expr ++ ", {'enso-null-replacement-marker':'"+Random.uuid+"'}::variant)"
SQL_Builder.code "COALESCE(" ++ expr ++ ", {'enso-null-replacement-marker':'" ++ Random.uuid ++ "'}::variant)"
## PRIVATE
A helper for `lookup_and_replace`, and perhaps other operation.


@ -68,8 +68,8 @@ type Invalid_Column_Names
Nothing -> ""
message -> " "+message
case self.column_names.length == 1 of
True -> "The name " + (self.column_names.at 0).to_display_text.pretty + " is invalid and may be replaced in the resulting table."+suffix
False -> "The names "+self.column_names.short_display_text+" are invalid and may be replaced in the resulting table."+suffix
True -> "The name " + (self.column_names.at 0).to_display_text.pretty + " is invalid and may be replaced in the resulting table." + suffix
False -> "The names "+self.column_names.short_display_text+" are invalid and may be replaced in the resulting table." + suffix
## PRIVATE
Handles the Java counterpart `InvalidColumnNameException` and converts it into a dataflow error.
@ -652,11 +652,11 @@ type Conversion_Failure
case self of
Conversion_Failure.Error _ _ _ _ ->
rows_info + " could not be converted into the target type "+self.target_type.to_display_text+column_suffix+"."
rows_info + " could not be converted into the target type " + self.target_type.to_display_text + column_suffix + "."
Conversion_Failure.Out_Of_Range _ _ _ _ ->
rows_info + " are out of the range of the target type "+self.target_type.to_display_text+column_suffix+"."
rows_info + " are out of the range of the target type " + self.target_type.to_display_text + column_suffix + "."
Conversion_Failure.Text_Too_Long _ _ _ _ ->
rows_info + " have a text representation that does not fit the target type "+self.target_type.to_display_text+column_suffix+"."
rows_info + " have a text representation that does not fit the target type " + self.target_type.to_display_text + column_suffix + "."
type Loss_Of_Integer_Precision
## PRIVATE
@ -714,7 +714,7 @@ type Arithmetic_Overflow
Nothing -> ""
operands ->
" (e.g. operation " + (operands.map .to_display_text . join " ") + ")"
rows_info + op_info + " encountered integer overflow - the results do not fit the type "+self.target_type.to_display_text+". These values have ben replaced with `Nothing`."
rows_info + op_info + " encountered integer overflow - the results do not fit the type " + self.target_type.to_display_text + ". These values have ben replaced with `Nothing`."
## PRIVATE
to_text : Text
@ -834,7 +834,7 @@ type Non_Unique_Key
Pretty print the non-unique primary key error.
to_display_text : Text
to_display_text self =
"The key " + self.key_column_names.to_display_text + " is not unique, for example key "+self.clashing_example_key_values.to_display_text+" corresponds to "+self.clashing_example_row_count.to_text+" rows."
"The key " + self.key_column_names.to_display_text + " is not unique, for example key " + self.clashing_example_key_values.to_display_text + " corresponds to " + self.clashing_example_row_count.to_text + " rows."
type Null_Values_In_Key_Columns
## PRIVATE
@ -847,7 +847,7 @@ type Null_Values_In_Key_Columns
to_display_text : Text
to_display_text self =
suffix = if self.add_sql_suffix.not then "" else " The operation has been rolled back. Due to how NULL equality works in SQL, these rows would not be correctly matched to the target rows. Please use a key that does not contain NULLs."
"The operation encountered input rows that contained Nothing values in key columns (for example, the row " + self.example_row.to_display_text + ")."+suffix
"The operation encountered input rows that contained Nothing values in key columns (for example, the row " + self.example_row.to_display_text + ")." + suffix
## Indicates that the query may not have downloaded all rows that were
available.


@ -492,11 +492,11 @@ Any.should_equal_ignoring_order self that frames_to_skip=0 =
loc = Meta.get_source_location 1+frames_to_skip
that.each element->
if self.contains element . not then
msg = "The collection (" + self.to_text + ") did not contain "+element.to_text+" (at " + loc + ")."
msg = "The collection (" + self.to_text + ") did not contain " + element.to_text + " (at " + loc + ")."
Test.fail msg
self.each element->
if that.contains element . not then
msg = "The collection contained an element ("+element.to_text+") which was not expected (at " + loc + ")."
msg = "The collection contained an element (" + element.to_text + ") which was not expected (at " + loc + ")."
Test.fail msg
Test.with_clue "Duplicate elements in either collection should have the same counts - checked by comparing sorted collections: " <|
## If the vector contains vectors or incomparable values, they may not get sorted correctly.


@ -55,17 +55,17 @@ type Test
is_panic_matching = payload.is_a matcher
if is_panic_matching then payload else
stack_trace = caught_panic.convert_to_dataflow_error.get_stack_trace_text
Test.fail ("Expected a " + matcher.to_text + ", but " + payload.to_text + " was thrown instead (at "+loc+").") details=stack_trace
Test.fail ("Expected a " + matcher.to_text + ", but " + payload.to_text + " was thrown instead (at " + loc + ").") details=stack_trace
Panic.catch Any handler=handle_panic <|
res = action
# If the action did not panic above, we fail the test.
case res.is_error of
True ->
Test.fail ("Expected a Panic " + matcher.to_text + " to be thrown, but the action returned a Dataflow Error " + res.catch.to_display_text + " instead (at "+loc+").")
Test.fail ("Expected a Panic " + matcher.to_text + " to be thrown, but the action returned a Dataflow Error " + res.catch.to_display_text + " instead (at " + loc + ").")
False ->
return_suffix = if res.is_nothing then "" else " and returned ["+res.to_text+"]"
Test.fail ("Expected a Panic " + matcher.to_text + " to be thrown, but the action succeeded" + return_suffix + " (at "+loc+").")
Test.fail ("Expected a Panic " + matcher.to_text + " to be thrown, but the action succeeded" + return_suffix + " (at " + loc + ").")
## Expect a function to fail with the provided panic.


@ -152,8 +152,8 @@ make_json_for_xml_element xml_element max_items type:Text="XML_Element" =
" " + ((c.attributes.to_vector.take 5 . map a-> a.first+'="'+a.second+'"') . join " ") + (if c.attributes.length > 5 then " ..." else "")
render_end = case c.child_count of
0 -> "/>"
1 -> if c.children.first.is_a Text then ">" + c.children.first + "</"+c.name+">" else "> ... 1 child element ... </"+c.name+">"
_ -> ">..." + c.child_count.to_text + " child elements...</"+c.name+">"
1 -> if c.children.first.is_a Text then ">" + c.children.first + "</" + c.name + ">" else "> ... 1 child element ... </" + c.name + ">"
_ -> ">..." + c.child_count.to_text + " child elements...</" + c.name + ">"
[i.to_text, "Element", render_start+render_attribs+render_end]
map_vector = Warning.clear (attribs + children)


@ -44,7 +44,7 @@ class SimpleArithmeticTest extends InterpreterTest {
val code =
"""import Standard.Base.Data.Numbers
|main =
| expr = (10 * 3+2) - 1
| expr = 10 * (3+2) - 1
| -expr
|""".stripMargin


@ -0,0 +1,34 @@
//! Parses Enso sources, measuring time spent in the parser.
// === Non-Standard Linter Configuration ===
#![allow(clippy::option_map_unit_fn)]
#![allow(clippy::precedence)]
#![allow(dead_code)]
#![deny(unconditional_recursion)]
#![warn(missing_docs)]
#![warn(trivial_casts)]
#![warn(unused_qualifications)]
// =============
// === Tests ===
// =============
fn main() {
let args = std::env::args().skip(1);
let parser = enso_parser::Parser::new();
let parse_time: std::time::Duration = args
.map(|path| {
let code = std::fs::read_to_string(path).unwrap();
let mut code = code.as_str();
if let Some((_meta, code_)) = enso_parser::metadata::parse(code) {
code = code_;
}
let start = std::time::Instant::now();
std::hint::black_box(parser.run(code));
start.elapsed()
})
.sum();
println!("Parse time: {} ms", parse_time.as_millis());
}


@ -1,6 +1,7 @@
//! Parses Enso sources and reports any syntax errors, while performing internal consistency checks.
//! Source files may be specified as command line arguments; if none a provided, source code will be
//! read from standard input.
//! Parses Enso sources and reports any syntax errors or warnings.
//!
//! Source files may be specified as command line arguments; if none are provided, source code will
//! be read from standard input.
// === Non-Standard Linter Configuration ===
#![allow(clippy::option_map_unit_fn)]
@ -15,36 +16,121 @@ use enso_parser::prelude::*;
// =============
// === Tests ===
// =============
struct WithSourcePath<T> {
path: String,
value: T,
}
fn main() {
fn main() -> Result<(), Box<dyn std::error::Error>> {
let args = std::env::args().skip(1);
let mut parser = enso_parser::Parser::new();
let mut to_read = vec![];
let mut to_parse = vec![];
if args.len() == 0 {
use std::io::Read;
let mut input = String::new();
std::io::stdin().read_to_string(&mut input).unwrap();
check_file("<stdin>", input.as_str(), &mut parser);
let mut data = String::new();
std::io::stdin().read_to_string(&mut data).unwrap();
to_parse.push(WithSourcePath { path: "<stdin>".into(), value: data });
} else {
args.for_each(|path| {
check_file(&path, &std::fs::read_to_string(&path).unwrap(), &mut parser)
to_read.extend(args);
};
let cores = std::thread::available_parallelism()
.unwrap_or(std::num::NonZeroUsize::new(1).unwrap())
.get();
let io_threads = std::cmp::min(cores, to_read.len());
let cpu_threads = std::cmp::min(cores, to_read.len() + to_parse.len());
let to_read = std::sync::Arc::new(std::sync::Mutex::new(to_read));
let to_parse = std::sync::Arc::new((
std::sync::Mutex::new((to_parse, io_threads)),
std::sync::Condvar::new(),
));
for _ in 0..io_threads {
let to_read = std::sync::Arc::clone(&to_read);
let to_parse = std::sync::Arc::clone(&to_parse);
std::thread::spawn(move || {
let (to_parse, condvar) = &*to_parse;
while let Some(path) = to_read.lock().unwrap().pop() {
let data = std::fs::read_to_string(&path).unwrap();
to_parse.lock().unwrap().0.push(WithSourcePath { path, value: data });
condvar.notify_one();
}
let io_threads = &mut to_parse.lock().unwrap().1;
*io_threads -= 1;
if *io_threads == 0 {
condvar.notify_all();
}
});
}
let to_print = std::sync::Arc::new(std::sync::Mutex::new(vec![]));
let mut parsers = vec![];
for _ in 0..cpu_threads {
let to_parse = std::sync::Arc::clone(&to_parse);
let to_print = std::sync::Arc::clone(&to_print);
parsers.push(std::thread::spawn(move || {
let mut parser = enso_parser::Parser::new();
let (to_parse, condvar) = &*to_parse;
loop {
let source = {
let mut to_parse = to_parse.lock().unwrap();
while to_parse.0.is_empty() && to_parse.1 != 0 {
to_parse = condvar.wait(to_parse).unwrap();
}
match to_parse.0.pop() {
Some(source) => source,
None => break,
}
};
let results = check_file(source, &mut parser);
to_print.lock().unwrap().push(results);
}
}));
}
for parser in parsers {
parser.join().unwrap();
}
let mut to_print = to_print.lock().unwrap();
let mut to_print = mem::take(&mut *to_print);
to_print.sort_unstable_by(|a, b| a.path.cmp(&b.path));
let mut files_with_bugs = 0;
for source in to_print {
if !source.value.is_empty() {
files_with_bugs += 1;
}
for line in source.value {
eprintln!("{}", line);
}
}
if files_with_bugs != 0 {
Err(format!("Errors or warnings found in {files_with_bugs} files.").into())
} else {
Ok(())
}
}
fn check_file(path: &str, mut code: &str, parser: &mut enso_parser::Parser) {
fn check_file(
file: WithSourcePath<String>,
parser: &mut enso_parser::Parser,
) -> WithSourcePath<Vec<String>> {
let mut code = file.value.as_str();
if let Some((_meta, code_)) = enso_parser::metadata::parse(code) {
code = code_;
}
let ast = parser.run(code);
let errors = RefCell::new(vec![]);
let warnings = RefCell::new(vec![]);
ast.visit_trees(|tree| {
if let enso_parser::syntax::tree::Variant::Invalid(err) = &*tree.variant {
match &*tree.variant {
enso_parser::syntax::tree::Variant::Invalid(err) => {
let error = format!("{}: {}", err.error.message, tree.code());
errors.borrow_mut().push((error, tree.span.clone()));
} else if let enso_parser::syntax::tree::Variant::TextLiteral(text) = &*tree.variant {
}
enso_parser::syntax::tree::Variant::OprApp(enso_parser::syntax::tree::OprApp {
opr: Err(e),
..
}) => {
let error = format!("Consecutive operators: {:?}", e.operators);
errors.borrow_mut().push((error, tree.span.clone()));
}
enso_parser::syntax::tree::Variant::TextLiteral(text) =>
for element in &text.elements {
if let enso_parser::syntax::tree::TextElement::Escape { token } = element {
if token.variant.value.is_none() {
@ -53,28 +139,33 @@ fn check_file(path: &str, mut code: &str, parser: &mut enso_parser::Parser) {
errors.borrow_mut().push((error, tree.span.clone()));
}
}
},
_ => {}
}
for warning in tree.warnings.iter() {
warnings.borrow_mut().push((warning.clone(), tree.span.clone()));
}
});
for (error, span) in &*errors.borrow() {
let whitespace = &span.left_offset.code.repr;
let start = whitespace.as_ptr() as usize + whitespace.len() - code.as_ptr() as usize;
let mut line = 1;
let mut char = 0;
for (i, c) in code.char_indices() {
if i >= start {
break;
let sort_key = |span: &enso_parser::source::Span| {
(
span.left_offset.code.start,
span.left_offset.code.position_after().start + span.code_length,
)
};
errors.borrow_mut().sort_unstable_by_key(|(_, span)| sort_key(span));
warnings.borrow_mut().sort_unstable_by_key(|(_, span)| sort_key(span));
let mut messages = vec![];
for (message, span) in &*errors.borrow() {
messages.push(format!("E {}: {}", fmt_location(&file.path, span), &message));
}
if c == '\n' {
line += 1;
char = 0;
} else {
char += 1;
}
}
eprintln!("{path}:{line}:{char}: {}", &error);
}
for (parsed, original) in ast.code().lines().zip(code.lines()) {
assert_eq!(parsed, original, "Bug: dropped tokens, while parsing: {path}");
for (warning, span) in &*warnings.borrow() {
messages.push(format!("W {}: {}", fmt_location(&file.path, span), warning.message()));
}
WithSourcePath { path: file.path, value: messages }
}
fn fmt_location(path: &str, span: &enso_parser::source::Span) -> String {
let start = span.left_offset.code.position_after().start;
let end = start + span.code_length;
format!("{path} {}:{}-{}:{}", start.line + 1, start.col16, end.line + 1, end.col16)
}


@ -126,6 +126,7 @@ fn strip_hidden_fields(tree: Value) -> Value {
":spanCodeLengthUtf16",
":spanCodeLengthNewlines",
":spanCodeLengthLineChars16",
":warnings",
];
let hidden_tree_fields: HashSet<_> = hidden_tree_fields.into_iter().collect();
Value::list(tree.to_vec().unwrap().into_iter().filter(|val| match val {


@ -826,6 +826,16 @@ fn operator_sections() {
(OprApp (OprApp (Number () "1" ()) (Ok "+") ())
(Ok "<<")
(OprSectionBoundary 1 (OprApp (Number () "2" ()) (Ok "*") ())))));
test!("1+1+ << 2*2*",
(OprSectionBoundary 1
(OprApp (OprApp (OprApp (Number () "1" ())
(Ok "+")
(Number () "1" ()))
(Ok "+") ())
(Ok "<<")
(OprSectionBoundary 1
(OprApp (OprApp (Number () "2" ()) (Ok "*") (Number () "2" ()))
(Ok "*") ())))));
test!("+1 << *2",
(OprSectionBoundary 1
(OprApp (OprApp () (Ok "+") (Number () "1" ()))
@ -841,13 +851,13 @@ fn operator_sections() {
#[test]
fn template_functions() {
#[rustfmt::skip]
test("_.map (_+2 * 3) _*7", block![
test("_.map (_ + 2*3) _*7", block![
(TemplateFunction 1
(App (App (OprApp (Wildcard 0) (Ok ".") (Ident map))
(Group
(TemplateFunction 1
(OprApp (OprApp (Wildcard 0) (Ok "+") (Number () "2" ()))
(Ok "*") (Number () "3" ())))))
(Group (TemplateFunction 1
(OprApp (Wildcard 0)
(Ok "+")
(OprApp (Number () "2" ()) (Ok "*") (Number () "3" ()))))))
(TemplateFunction 1 (OprApp (Wildcard 0) (Ok "*") (Number () "7" ())))))]);
#[rustfmt::skip]
test("_.sum 1", block![


@ -742,7 +742,9 @@ impl<'s> Lexer<'s> {
fn analyze_operator(token: &str) -> token::OperatorProperties {
let mut operator = token::OperatorProperties::new();
if token.ends_with("->") && !token.starts_with("<-") {
let has_right_arrow = token.ends_with("->");
let has_left_arrow = token.starts_with("<-");
if has_right_arrow && !has_left_arrow {
operator = operator.as_right_associative();
}
if token.ends_with('=') && !token.bytes().all(|c| c == b'=') {
@ -776,6 +778,7 @@ fn analyze_operator(token: &str) -> token::OperatorProperties {
.as_annotation(),
"-" =>
return operator
.as_value_operation()
.with_unary_prefix_mode(token::Precedence::unary_minus())
.with_binary_infix_precedence(15),
// "There are a few operators with the lowest precedence possible."
@ -800,14 +803,16 @@ fn analyze_operator(token: &str) -> token::OperatorProperties {
.with_lhs_section_termination(operator::SectionTermination::Unwrap)
.as_compile_time_operation()
.as_arrow(),
"!" => return operator.with_binary_infix_precedence(3),
"||" | "\\\\" | "&&" => return operator.with_binary_infix_precedence(4),
"!" => return operator.with_binary_infix_precedence(3).as_value_operation(),
"||" | "\\\\" | "&&" =>
return operator.with_binary_infix_precedence(4).as_value_operation(),
">>" | "<<" => return operator.with_binary_infix_precedence(5),
"|>" | "|>>" => return operator.with_binary_infix_precedence(6),
"<|" | "<<|" => return operator.with_binary_infix_precedence(6).as_right_associative(),
// Other special operators.
"<=" | ">=" => return operator.with_binary_infix_precedence(14),
"==" | "!=" => return operator.with_binary_infix_precedence(5),
"<=" | ">=" => return operator.with_binary_infix_precedence(14).as_value_operation(),
"==" | "!=" => return operator.with_binary_infix_precedence(5).as_value_operation(),
"," =>
return operator
.with_binary_infix_precedence(1)
@ -833,14 +838,19 @@ fn analyze_operator(token: &str) -> token::OperatorProperties {
let binary = match precedence_char.unwrap() {
'!' => 10,
'|' => 11,
'^' => 12,
'&' => 13,
'<' | '>' => 14,
'+' | '-' => 15,
'*' | '/' | '%' => 16,
_ => 17,
'^' => 17,
_ => 18,
};
operator.with_binary_infix_precedence(binary)
let operator = operator.with_binary_infix_precedence(binary);
if !has_right_arrow && !has_left_arrow {
operator.as_value_operation()
} else {
operator
}
}
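The revised character-based precedence table above (where `^` moves from 12 to 17, above the multiplication class) can be modeled standalone; the `binary_precedence` helper below is a hypothetical mirror of the `match` in `analyze_operator`, not the actual lexer code:

```rust
// Hypothetical model of the final character-based precedence table in
// `analyze_operator`: higher numbers bind tighter.
fn binary_precedence(precedence_char: char) -> u32 {
    match precedence_char {
        '!' => 10,
        '|' => 11,
        '&' => 13,
        '<' | '>' => 14,
        '+' | '-' => 15,
        '*' | '/' | '%' => 16,
        '^' => 17,
        _ => 18,
    }
}
```

With this table, `2 * 3 ^ 4` parses as `2 * (3 ^ 4)`, since exponentiation now binds tighter than the multiplication class.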


@ -254,6 +254,7 @@ fn if_body<'s>(
arguments,
}),
span,
..
}) => {
let mut block = block::body_from_lines(arguments);
block.span.left_offset += span.left_offset;
@ -347,6 +348,7 @@ fn to_body_statement(mut line_expression: syntax::Tree<'_>) -> syntax::Tree<'_>
..
}),
span,
..
} = line_expression
{
inner.span = span;
@ -385,6 +387,7 @@ fn to_body_statement(mut line_expression: syntax::Tree<'_>) -> syntax::Tree<'_>
Tree {
variant: box Variant::OprApp(OprApp { lhs: Some(lhs), opr: Ok(opr), rhs: Some(rhs) }),
span,
..
} if opr.properties.is_assignment() => {
left_offset = span.left_offset.clone();
last_argument_default = Some((opr.clone(), rhs.clone()));
@ -393,10 +396,11 @@ fn to_body_statement(mut line_expression: syntax::Tree<'_>) -> syntax::Tree<'_>
Tree {
variant:
box Variant::ArgumentBlockApplication(ArgumentBlockApplication {
lhs: Some(Tree { variant: box Variant::Ident(ident), span: span_ }),
lhs: Some(Tree { variant: box Variant::Ident(ident), span: span_, .. }),
arguments,
}),
span,
..
} => {
let mut constructor = ident.token.clone();
constructor.left_offset += &span.left_offset;
@ -415,7 +419,7 @@ fn to_body_statement(mut line_expression: syntax::Tree<'_>) -> syntax::Tree<'_>
_ => &line_expression,
};
let (constructor, mut arguments) = crate::collect_arguments(lhs.clone());
if let Tree { variant: box Variant::Ident(Ident { token }), span } = constructor
if let Tree { variant: box Variant::Ident(Ident { token }), span, .. } = constructor
&& token.is_type
{
let mut constructor = token;
@ -572,6 +576,7 @@ impl<'s> CaseBuilder<'s> {
mut documentation,
expression: None,
}),
..
}) if self.documentation.is_none() => {
documentation.open.left_offset += span.left_offset;
if self.case_lines.is_empty() {
@ -587,6 +592,7 @@ impl<'s> CaseBuilder<'s> {
box syntax::tree::Variant::ArgumentBlockApplication(
syntax::tree::ArgumentBlockApplication { lhs: None, arguments },
),
..
}) => {
let mut block = syntax::tree::block::body_from_lines(arguments);
block.span.left_offset += span.left_offset;


@ -75,7 +75,7 @@ pub struct Code<'s> {
#[deref]
pub repr: StrRef<'s>,
#[reflect(flatten)]
start: Location,
pub start: Location,
/// The length of the source code.
#[reflect(flatten)]
pub len: Length,


@ -18,3 +18,4 @@ mod treebuilding;
pub use item::Item;
pub use token::Token;
pub use tree::Tree;
pub use tree::WARNINGS;


@ -3,204 +3,18 @@
mod application;
mod apply;
mod arity;
mod operand;
mod precedence_resolver;
mod reducer;
mod types;
use crate::prelude::*;
use crate::syntax;
use crate::syntax::operator::application::InsertApps;
use crate::syntax::operator::arity::ClassifyArity;
use crate::syntax::operator::operand::Operand;
use crate::syntax::operator::reducer::Reduce;
use crate::syntax::operator::types::Arity;
use crate::syntax::operator::types::BinaryOperand;
use crate::syntax::operator::types::ModifiedPrecedence;
use crate::syntax::operator::types::Operator;
use crate::syntax::token;
use crate::syntax::treebuilding;
use crate::syntax::treebuilding::Finish;
use crate::syntax::treebuilding::ItemConsumer;
use crate::syntax::Tree;
// ===============
// === Exports ===
// ===============
// ==================
// === Precedence ===
// ==================
/// Operator precedence resolver.
#[derive(Debug, Default)]
pub struct Precedence<'s> {
#[rustfmt::skip]
resolver:
// Items -> Tokens/Trees
treebuilding::FlattenBlockTrees<'s,
// Tokens/Trees -> Tokens/Trees (proper tokens only)
treebuilding::AssembleCompoundTokens<'s,
// Tokens/Trees -> Tokens/Trees + Spacing-lookahead
treebuilding::PeekSpacing<'s,
// Tokens/Trees + Spacing-lookahead -> Operators/Operands
ClassifyArity<'s,
// Operators/Operands -> Operators/Operands (balanced)
InsertApps<
// Operators/Operands -> Tree
Reduce<'s>>>>>>,
}
impl<'s> Precedence<'s> {
/// Return a new operator precedence resolver.
pub fn new() -> Self {
Self::default()
}
/// Resolve precedence in a context where the result cannot be an operator section or template
/// function.
pub fn resolve_non_section(
&mut self,
items: impl IntoIterator<Item = syntax::Item<'s>>,
) -> Option<Tree<'s>> {
items.into_iter().for_each(|i| self.push(i));
self.resolver.finish().map(|op| op.value)
}
/// Resolve precedence.
pub fn resolve(
&mut self,
items: impl IntoIterator<Item = syntax::Item<'s>>,
) -> Option<Tree<'s>> {
self.extend(items);
self.finish()
}
/// Extend the expression with a token.
pub fn push(&mut self, item: syntax::Item<'s>) {
self.resolver.push_item(item);
}
/// Return the result.
pub fn finish(&mut self) -> Option<Tree<'s>> {
self.resolver.finish().map(Tree::from)
}
}
impl<'s> Extend<syntax::Item<'s>> for Precedence<'s> {
fn extend<T: IntoIterator<Item = syntax::Item<'s>>>(&mut self, iter: T) {
for token in iter {
self.push(token);
}
}
}
// === Operator or Operand ===
#[derive(Debug)]
enum OperatorOrOperand<'s> {
Operand(Operand<Tree<'s>>),
Operator(Operator<'s>),
}
impl<'s> From<Operand<Tree<'s>>> for OperatorOrOperand<'s> {
fn from(operand: Operand<Tree<'s>>) -> Self {
OperatorOrOperand::Operand(operand)
}
}
impl<'s> From<Operator<'s>> for OperatorOrOperand<'s> {
fn from(operator: Operator<'s>) -> Self {
OperatorOrOperand::Operator(operator)
}
}
// === Applying operators ===
fn apply_operator<'s>(
tokens: Vec<token::Operator<'s>>,
lhs_section_termination: Option<SectionTermination>,
reify_rhs_section: bool,
lhs: Option<Operand<Tree<'s>>>,
rhs_: Option<Operand<Tree<'s>>>,
) -> Operand<Tree<'s>> {
if let Some(lhs_termination) = lhs_section_termination {
let lhs = match lhs_termination {
SectionTermination::Reify => lhs.map(Tree::from),
SectionTermination::Unwrap => lhs.map(|op| op.value),
};
let rhs = rhs_.map(Tree::from);
let ast = syntax::tree::apply_operator(lhs, tokens, rhs);
Operand::from(ast)
} else if tokens.len() < 2
&& let Some(opr) = tokens.first()
&& opr.properties.can_form_section()
{
let mut rhs = None;
let mut elided = 0;
let mut wildcards = 0;
if let Some(rhs_) = rhs_ {
if reify_rhs_section {
rhs = Some(Tree::from(rhs_));
} else {
rhs = Some(rhs_.value);
elided += rhs_.elided;
wildcards += rhs_.wildcards;
}
}
elided += lhs.is_none() as u32 + rhs.is_none() as u32;
let mut operand =
Operand::from(lhs).map(|lhs| syntax::tree::apply_operator(lhs, tokens, rhs));
operand.elided += elided;
operand.wildcards += wildcards;
operand
} else {
let rhs = rhs_.map(Tree::from);
let mut elided = 0;
if tokens.len() != 1 || tokens[0].properties.can_form_section() {
elided += lhs.is_none() as u32 + rhs.is_none() as u32;
}
let mut operand =
Operand::from(lhs).map(|lhs| syntax::tree::apply_operator(lhs, tokens, rhs));
operand.elided += elided;
operand
}
}
fn apply_unary_operator<'s>(
token: token::Operator<'s>,
rhs: Option<Operand<Tree<'s>>>,
error: Option<Cow<'static, str>>,
) -> Operand<Tree<'s>> {
match error {
None => Operand::new(rhs).map(|item| syntax::tree::apply_unary_operator(token, item)),
Some(error) => Operand::from(rhs)
.map(|item| syntax::tree::apply_unary_operator(token, item).with_error(error)),
}
}
// === Operator and Operand Consumers ===
trait OperandConsumer<'s> {
fn push_operand(&mut self, operand: Operand<Tree<'s>>);
}
trait OperatorConsumer<'s> {
fn push_operator(&mut self, operator: Operator<'s>);
}
// === SectionTermination ===
/// Operator-section/template-function termination behavior of an operator with regard to an
/// operand.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Default)]
pub enum SectionTermination {
/// If the operand is an operator-section/template-function, indicate it by wrapping it in a
/// suitable node.
#[default]
Reify,
/// Discard any operator-section/template-function properties associated with the operand.
Unwrap,
}
pub use precedence_resolver::Precedence;
pub use types::SectionTermination;
pub use types::Warnings;


@ -1,12 +1,7 @@
use crate::syntax::operator::types::*;
use enso_prelude::*;
use crate::syntax::operator::operand::Operand;
use crate::syntax::operator::types::Arity;
use crate::syntax::operator::types::BinaryOperand;
use crate::syntax::operator::types::ModifiedPrecedence;
use crate::syntax::operator::types::Operator;
use crate::syntax::operator::OperandConsumer;
use crate::syntax::operator::OperatorConsumer;
use crate::syntax::token;
use crate::syntax::treebuilding::Finish;
use crate::syntax::treebuilding::Spacing;
@ -64,14 +59,17 @@ impl<Inner: Finish> Finish for InsertApps<Inner> {
}
fn application<'s>(spacing: Spacing) -> Operator<'s> {
let precedence = ModifiedPrecedence { spacing, precedence: token::Precedence::application() };
let precedence = ModifiedPrecedence {
spacing,
precedence: token::Precedence::application(),
is_value_operation: false,
};
Operator {
left_precedence: Some(precedence),
right_precedence: precedence,
associativity: token::Associativity::Left,
arity: Arity::Binary {
tokens: default(),
lhs_section_termination: default(),
missing: None,
reify_rhs_section: true,
},


@ -0,0 +1,147 @@
use crate::prelude::*;
use crate::syntax::operator::types::*;
use crate::syntax;
use crate::syntax::operator::operand::Operand;
use crate::syntax::token;
use crate::syntax::Tree;
// ==========================
// === Applying operators ===
// ==========================
// === Binary operators ===
#[derive(Debug)]
pub struct ApplyOperator<'s> {
tokens: Vec<token::Operator<'s>>,
lhs: Option<Operand<Tree<'s>>>,
rhs: Option<Operand<Tree<'s>>>,
reify_rhs_section: bool,
warnings: Option<Warnings>,
}
impl<'s> ApplyOperator<'s> {
pub fn tokens(tokens: Vec<token::Operator<'s>>) -> Self {
Self {
tokens,
lhs: default(),
rhs: default(),
reify_rhs_section: default(),
warnings: default(),
}
}
pub fn token(token: token::Operator<'s>) -> Self {
Self::tokens(vec![token])
}
pub fn with_lhs(self, lhs: Option<Operand<Tree<'s>>>) -> Self {
Self { lhs, ..self }
}
pub fn with_rhs(self, rhs: Option<Operand<Tree<'s>>>, reify_rhs_section: bool) -> Self {
Self { rhs, reify_rhs_section, ..self }
}
pub fn with_warnings(self, warnings: Warnings) -> Self {
Self { warnings: Some(warnings), ..self }
}
pub fn finish(self) -> Operand<Tree<'s>> {
let Self { tokens, lhs, rhs: rhs_, reify_rhs_section, warnings } = self;
let mut operand = if let Some(lhs_termination) =
tokens.first().and_then(|token| token.properties.lhs_section_termination())
{
let lhs = match lhs_termination {
SectionTermination::Reify => lhs.map(Tree::from),
SectionTermination::Unwrap => lhs.map(|op| op.value),
};
let rhs = rhs_.map(Tree::from);
let ast = syntax::tree::apply_operator(lhs, tokens, rhs);
Operand::from(ast)
} else if tokens.len() < 2
&& let Some(opr) = tokens.first()
&& opr.properties.can_form_section()
{
let mut rhs = None;
let mut elided = 0;
let mut wildcards = 0;
if let Some(rhs_) = rhs_ {
if reify_rhs_section {
rhs = Some(Tree::from(rhs_));
} else {
rhs = Some(rhs_.value);
elided += rhs_.elided;
wildcards += rhs_.wildcards;
}
}
elided += lhs.is_none() as u32 + rhs.is_none() as u32;
let mut operand =
Operand::from(lhs).map(|lhs| syntax::tree::apply_operator(lhs, tokens, rhs));
operand.elided += elided;
operand.wildcards += wildcards;
operand
} else {
let rhs = rhs_.map(Tree::from);
let mut elided = 0;
if tokens.len() != 1 || tokens[0].properties.can_form_section() {
elided += lhs.is_none() as u32 + rhs.is_none() as u32;
}
let mut operand =
Operand::from(lhs).map(|lhs| syntax::tree::apply_operator(lhs, tokens, rhs));
operand.elided += elided;
operand
};
if let Some(warnings) = warnings {
warnings.apply(&mut operand.value);
}
operand
}
}
// === Unary operators ===
#[derive(Debug)]
pub struct ApplyUnaryOperator<'s> {
token: token::Operator<'s>,
rhs: Option<Operand<Tree<'s>>>,
error: Option<Cow<'static, str>>,
warnings: Option<Warnings>,
}
impl<'s> ApplyUnaryOperator<'s> {
pub fn token(token: token::Operator<'s>) -> Self {
Self { token, rhs: default(), error: default(), warnings: default() }
}
pub fn with_rhs(self, rhs: Option<Operand<Tree<'s>>>) -> Self {
Self { rhs, ..self }
}
pub fn with_error(self, error: Option<Cow<'static, str>>) -> Self {
Self { error, ..self }
}
pub fn with_warnings(self, warnings: Warnings) -> Self {
Self { warnings: Some(warnings), ..self }
}
pub fn finish(self) -> Operand<Tree<'s>> {
let Self { token, rhs, error, warnings } = self;
Operand::new(rhs).map(|rhs| {
let mut tree = syntax::tree::apply_unary_operator(token, rhs);
if let Some(warnings) = warnings {
warnings.apply(&mut tree);
}
match error {
None => tree,
Some(error) => tree.with_error(error),
}
})
}
}


@ -1,15 +1,8 @@
use crate::syntax::operator::apply::*;
use crate::syntax::operator::types::*;
use enso_prelude::*;
use crate::syntax::operator::apply_operator;
use crate::syntax::operator::apply_unary_operator;
use crate::syntax::operator::operand::Operand;
use crate::syntax::operator::types::Arity;
use crate::syntax::operator::types::BinaryOperand;
use crate::syntax::operator::types::ModifiedPrecedence;
use crate::syntax::operator::types::Operator;
use crate::syntax::operator::OperandConsumer;
use crate::syntax::operator::OperatorConsumer;
use crate::syntax::operator::OperatorOrOperand;
use crate::syntax::token;
use crate::syntax::tree;
use crate::syntax::treebuilding::Finish;
@ -123,9 +116,14 @@ impl<'s, Inner: OperandConsumer<'s> + OperatorConsumer<'s>> ClassifyArity<'s, In
Some("Space required between term and unary-operator expression.".into()),
_ => None,
};
let is_value_operation = token.properties.is_value_operation();
self.emit(Operator {
left_precedence: None,
right_precedence: ModifiedPrecedence { spacing: Spacing::Unspaced, precedence },
right_precedence: ModifiedPrecedence {
spacing: Spacing::Unspaced,
precedence,
is_value_operation,
},
associativity,
arity: Arity::Unary { token, error },
});
@ -139,7 +137,7 @@ impl<'s, Inner: OperandConsumer<'s> + OperatorConsumer<'s>> ClassifyArity<'s, In
})) if !(tokens.first().unwrap().left_offset.visible.width_in_spaces == 0
&& token.left_offset.visible.width_in_spaces == 0) =>
self.multiple_operator_error(token, rhs),
_ => self.emit(apply_unary_operator(token, None, None)),
_ => self.emit(ApplyUnaryOperator::token(token).finish()),
}
}
@ -160,10 +158,9 @@ impl<'s, Inner: OperandConsumer<'s> + OperatorConsumer<'s>> ClassifyArity<'s, In
self.multiple_operator_error(token, rhs);
return;
}
let lhs_section_termination = token.properties.lhs_section_termination();
let missing = match (lhs, rhs) {
(None, None) => {
self.emit(apply_operator(vec![token], lhs_section_termination, false, None, None));
self.emit(ApplyOperator::token(token).finish());
return;
}
(Some(_), None) => Some(BinaryOperand::Right),
@ -172,23 +169,27 @@ impl<'s, Inner: OperandConsumer<'s> + OperatorConsumer<'s>> ClassifyArity<'s, In
};
let reify_rhs_section = token.properties.can_form_section()
&& (lhs == Some(Spacing::Spaced) || rhs == Some(Spacing::Spaced));
let is_value_operation = missing.is_none() && token.properties.is_value_operation();
self.emit(Operator {
left_precedence: lhs.map(|spacing| ModifiedPrecedence { spacing, precedence }),
right_precedence: ModifiedPrecedence { spacing: rhs.or(lhs).unwrap(), precedence },
associativity,
arity: Arity::Binary {
tokens: vec![token],
lhs_section_termination,
missing,
reify_rhs_section,
left_precedence: lhs.map(|spacing| ModifiedPrecedence {
spacing,
precedence,
is_value_operation,
}),
right_precedence: ModifiedPrecedence {
spacing: rhs.or(lhs).unwrap(),
precedence,
is_value_operation,
},
associativity,
arity: Arity::Binary { tokens: vec![token], missing, reify_rhs_section },
});
}
fn multiple_operator_error(&mut self, token: token::Operator<'s>, rhs: Option<Spacing>) {
match &mut self.lhs_item {
Some(OperatorOrOperand::Operator(Operator {
arity: Arity::Binary { tokens, lhs_section_termination, missing, reify_rhs_section },
arity: Arity::Binary { tokens, missing, .. },
..
})) => {
tokens.push(token);
@ -196,13 +197,9 @@ impl<'s, Inner: OperandConsumer<'s> + OperatorConsumer<'s>> ClassifyArity<'s, In
match missing {
None => *missing = Some(BinaryOperand::Right),
Some(BinaryOperand::Left) =>
self.lhs_item = Some(OperatorOrOperand::Operand(apply_operator(
mem::take(tokens),
*lhs_section_termination,
*reify_rhs_section,
None,
None,
))),
self.lhs_item = Some(OperatorOrOperand::Operand(
ApplyOperator::tokens(mem::take(tokens)).finish(),
)),
Some(BinaryOperand::Right) => unreachable!(),
}
}


@ -0,0 +1,79 @@
use crate::prelude::*;
use crate::syntax;
use crate::syntax::operator::application::InsertApps;
use crate::syntax::operator::arity::ClassifyArity;
use crate::syntax::operator::reducer::Reduce;
use crate::syntax::treebuilding;
use crate::syntax::treebuilding::Finish;
use crate::syntax::treebuilding::ItemConsumer;
use crate::syntax::Tree;
// ==================
// === Precedence ===
// ==================
/// Operator precedence resolver.
#[derive(Debug, Default)]
pub struct Precedence<'s> {
#[rustfmt::skip]
resolver:
// Items -> Tokens/Trees
treebuilding::FlattenBlockTrees<'s,
// Tokens/Trees -> Tokens/Trees (proper tokens only)
treebuilding::AssembleCompoundTokens<'s,
// Tokens/Trees -> Tokens/Trees + Spacing-lookahead
treebuilding::PeekSpacing<'s,
// Tokens/Trees + Spacing-lookahead -> Operators/Operands
ClassifyArity<'s,
// Operators/Operands -> Operators/Operands (balanced)
InsertApps<
// Operators/Operands -> Tree
Reduce<'s>>>>>>,
}
impl<'s> Precedence<'s> {
/// Return a new operator precedence resolver.
pub fn new() -> Self {
Self::default()
}
/// Resolve precedence in a context where the result cannot be an operator section or template
/// function.
pub fn resolve_non_section(
&mut self,
items: impl IntoIterator<Item = syntax::Item<'s>>,
) -> Option<Tree<'s>> {
items.into_iter().for_each(|i| self.push(i));
self.resolver.finish().map(|op| op.value)
}
/// Resolve precedence.
pub fn resolve(
&mut self,
items: impl IntoIterator<Item = syntax::Item<'s>>,
) -> Option<Tree<'s>> {
self.extend(items);
self.finish()
}
/// Extend the expression with a token.
pub fn push(&mut self, item: syntax::Item<'s>) {
self.resolver.push_item(item);
}
/// Return the result.
pub fn finish(&mut self) -> Option<Tree<'s>> {
self.resolver.finish().map(Tree::from)
}
}
impl<'s> Extend<syntax::Item<'s>> for Precedence<'s> {
fn extend<T: IntoIterator<Item = syntax::Item<'s>>>(&mut self, iter: T) {
for token in iter {
self.push(token);
}
}
}


@ -1,12 +1,7 @@
use crate::syntax::operator::apply_operator;
use crate::syntax::operator::apply_unary_operator;
use crate::syntax::operator::Arity;
use crate::syntax::operator::BinaryOperand;
use crate::syntax::operator::ModifiedPrecedence;
use crate::syntax::operator::Operand;
use crate::syntax::operator::OperandConsumer;
use crate::syntax::operator::Operator;
use crate::syntax::operator::OperatorConsumer;
use crate::syntax::operator::apply::*;
use crate::syntax::operator::types::*;
use crate::syntax::operator::operand::Operand;
use crate::syntax::token;
use crate::syntax::treebuilding::Finish;
use crate::syntax::treebuilding::Spacing;
@ -31,7 +26,7 @@ use enso_prelude::VecOps;
#[derive(Default, Debug)]
pub struct Reduce<'s> {
output: Vec<Operand<Tree<'s>>>,
operator_stack: Vec<Operator<'s>>,
operator_stack: Vec<StackOperator<'s>>,
}
impl<'s> OperandConsumer<'s> for Reduce<'s> {
@ -42,10 +37,18 @@ impl<'s> OperandConsumer<'s> for Reduce<'s> {
impl<'s> OperatorConsumer<'s> for Reduce<'s> {
fn push_operator(&mut self, operator: Operator<'s>) {
if let Some(precedence) = operator.left_precedence {
self.reduce(precedence);
}
self.operator_stack.push(operator);
let Operator { left_precedence, right_precedence, associativity, arity } = operator;
let warnings = if let Some(precedence) = left_precedence {
self.reduce(precedence)
} else {
Default::default()
};
self.operator_stack.push(StackOperator {
right_precedence,
associativity,
arity,
warnings,
});
}
}
@ -56,6 +59,7 @@ impl<'s> Finish for Reduce<'s> {
self.reduce(ModifiedPrecedence {
spacing: Spacing::Spaced,
precedence: token::Precedence::min(),
is_value_operation: false,
});
let out = self.output.pop();
debug_assert!(self.operator_stack.is_empty());
@ -72,19 +76,32 @@ impl<'s> Reduce<'s> {
/// Given a starting value, replace it with the result of successively applying to it all
/// operators in the `operator_stack` that have precedence greater than or equal to the
/// specified value, consuming LHS values from the `output` stack as needed.
fn reduce(&mut self, prec: ModifiedPrecedence) {
fn reduce(&mut self, right_op_precedence: ModifiedPrecedence) -> Warnings {
let mut rhs = self.output.pop();
while let Some(opr) = self.operator_stack.pop_if(|opr| {
opr.right_precedence > prec
|| (opr.right_precedence == prec && opr.associativity == token::Associativity::Left)
let mut right_op_warnings = Warnings::default();
while let Some(opr) = self.operator_stack.pop_if_mut(|opr| {
let ModifiedPrecedenceComparisonResult { is_greater, inconsistent_spacing } = opr
.right_precedence
.compare(&right_op_precedence, opr.associativity == token::Associativity::Left);
if inconsistent_spacing {
if is_greater { &mut right_op_warnings } else { &mut opr.warnings }
.set_inconsistent_spacing();
}
is_greater
}) {
match opr.arity {
let StackOperator { right_precedence: _, associativity: _, arity, warnings } = opr;
match arity {
Arity::Unary { token, error } => {
let rhs_ = rhs.take();
debug_assert_ne!(rhs_, None);
rhs = Some(apply_unary_operator(token, rhs_, error));
rhs = ApplyUnaryOperator::token(token)
.with_rhs(rhs_)
.with_error(error)
.with_warnings(warnings)
.finish()
.into();
}
Arity::Binary { tokens, lhs_section_termination, missing, reify_rhs_section } => {
Arity::Binary { tokens, missing, reify_rhs_section } => {
let operand = rhs.take();
debug_assert_ne!(operand, None);
let (lhs, rhs_) = match missing {
@ -96,18 +113,29 @@ impl<'s> Reduce<'s> {
(lhs, operand)
}
};
rhs = Some(apply_operator(
tokens,
lhs_section_termination,
reify_rhs_section,
lhs,
rhs_,
));
rhs = ApplyOperator::tokens(tokens)
.with_lhs(lhs)
.with_rhs(rhs_, reify_rhs_section)
.with_warnings(warnings)
.finish()
.into();
}
};
}
if let Some(rhs) = rhs {
self.output.push(rhs);
}
right_op_warnings
}
}
// === Operator on-stack information ===
#[derive(Debug)]
struct StackOperator<'s> {
right_precedence: ModifiedPrecedence,
associativity: token::Associativity,
arity: Arity<'s>,
warnings: Warnings,
}


@ -1,6 +1,8 @@
use crate::syntax::operator::SectionTermination;
use crate::syntax::operator::operand::Operand;
use crate::syntax::token;
use crate::syntax::tree;
use crate::syntax::treebuilding::Spacing;
use crate::syntax::Tree;
use std::borrow::Cow;
use std::cmp::Ordering;
@ -32,7 +34,6 @@ pub enum Arity<'s> {
},
Binary {
tokens: Vec<token::Operator<'s>>,
lhs_section_termination: Option<SectionTermination>,
missing: Option<BinaryOperand>,
reify_rhs_section: bool,
},
@ -56,18 +57,121 @@ pub enum BinaryOperand {
// === Modified precedence ===
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
#[derive(Debug, Copy, Clone)]
pub struct ModifiedPrecedence {
pub spacing: Spacing,
pub precedence: token::Precedence,
pub is_value_operation: bool,
}
impl PartialOrd for ModifiedPrecedence {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
match (self.spacing, other.spacing) {
pub struct ModifiedPrecedenceComparisonResult {
pub is_greater: bool,
pub inconsistent_spacing: bool,
}
impl ModifiedPrecedence {
pub fn compare(&self, other: &Self, include_eq: bool) -> ModifiedPrecedenceComparisonResult {
let spacing_ordering = match (self.spacing, other.spacing) {
(Spacing::Spaced, Spacing::Unspaced) => Some(Ordering::Less),
(Spacing::Unspaced, Spacing::Spaced) => Some(Ordering::Greater),
_ => self.precedence.partial_cmp(&other.precedence),
_ => None,
};
let use_spacing = !(self.is_value_operation && other.is_value_operation);
let natural_ordering = self.precedence.cmp(&other.precedence);
let natural_is_greater = natural_ordering == Ordering::Greater
|| (include_eq && natural_ordering == Ordering::Equal);
let (is_greater, inconsistent_spacing) = match spacing_ordering {
Some(spacing_ordering) => {
let spacing_is_greater = spacing_ordering == Ordering::Greater
|| (include_eq && spacing_ordering == Ordering::Equal);
if use_spacing {
(spacing_is_greater, false)
} else {
(natural_is_greater, natural_is_greater != spacing_is_greater)
}
}
None => (natural_is_greater, false),
};
ModifiedPrecedenceComparisonResult { is_greater, inconsistent_spacing }
}
}
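The comparison above is the heart of the change: value-level operators use natural precedence and only *warn* when spacing disagrees, while functional operators still defer to spacing. A simplified standalone model (with `spaced: bool` standing in for `Spacing` and `u32` for `token::Precedence`; the `ModPrec` struct is an assumption of this sketch, not the parser's type):

```rust
use std::cmp::Ordering;

// Simplified stand-in for `ModifiedPrecedence`.
struct ModPrec {
    spaced: bool,
    precedence: u32,
    is_value_operation: bool,
}

// Returns (is_greater, inconsistent_spacing), mirroring
// `ModifiedPrecedenceComparisonResult`.
fn compare(this: &ModPrec, other: &ModPrec, include_eq: bool) -> (bool, bool) {
    let spacing_ordering = match (this.spaced, other.spaced) {
        (true, false) => Some(Ordering::Less),
        (false, true) => Some(Ordering::Greater),
        _ => None,
    };
    // Spacing only decides precedence when at least one side is functional.
    let use_spacing = !(this.is_value_operation && other.is_value_operation);
    let natural = this.precedence.cmp(&other.precedence);
    let natural_is_greater =
        natural == Ordering::Greater || (include_eq && natural == Ordering::Equal);
    match spacing_ordering {
        Some(ord) => {
            let spacing_is_greater =
                ord == Ordering::Greater || (include_eq && ord == Ordering::Equal);
            if use_spacing {
                (spacing_is_greater, false)
            } else {
                // Value-level operators ignore spacing, but a disagreement
                // between spacing and natural precedence yields a warning.
                (natural_is_greater, natural_is_greater != spacing_is_greater)
            }
        }
        None => (natural_is_greater, false),
    }
}
```

For example, in `1+2 * 3` the unspaced `+` no longer out-binds the spaced `*`; the natural precedence wins and the spacing mismatch is flagged.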
// ================
// === Warnings ===
// ================
/// Warnings applicable to an operator.
#[derive(Debug, Default)]
#[allow(missing_copy_implementations)] // Future warnings may have attached information.
pub struct Warnings {
inconsistent_spacing: bool,
}
impl Warnings {
/// Mark the operator as having spacing inconsistent with effective precedence.
pub fn set_inconsistent_spacing(&mut self) {
self.inconsistent_spacing = true;
}
/// Attach the warnings to the tree.
pub fn apply(self, tree: &mut Tree) {
let Self { inconsistent_spacing } = self;
if inconsistent_spacing {
tree.warnings.push(tree::Warning::inconsistent_spacing());
}
}
}
// ======================================
// === Operator and Operand Consumers ===
// ======================================
pub trait OperandConsumer<'s> {
fn push_operand(&mut self, operand: Operand<Tree<'s>>);
}
pub trait OperatorConsumer<'s> {
fn push_operator(&mut self, operator: Operator<'s>);
}
// ===========================
// === Operator or Operand ===
// ===========================
#[derive(Debug)]
pub enum OperatorOrOperand<'s> {
Operand(Operand<Tree<'s>>),
Operator(Operator<'s>),
}
impl<'s> From<Operand<Tree<'s>>> for OperatorOrOperand<'s> {
fn from(operand: Operand<Tree<'s>>) -> Self {
OperatorOrOperand::Operand(operand)
}
}
impl<'s> From<Operator<'s>> for OperatorOrOperand<'s> {
fn from(operator: Operator<'s>) -> Self {
OperatorOrOperand::Operator(operator)
}
}
// ==========================
// === SectionTermination ===
// ==========================
/// Operator-section/template-function termination behavior of an operator with regard to an
/// operand.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Default)]
pub enum SectionTermination {
/// If the operand is an operator-section/template-function, indicate it by wrapping it in a
/// suitable node.
#[default]
Reify,
/// Discard any operator-section/template-function properties associated with the operand.
Unwrap,
}

View File

@ -327,6 +327,7 @@ pub struct OperatorProperties {
// Precedence
binary_infix_precedence: Option<Precedence>,
unary_prefix_precedence: Option<Precedence>,
is_value_operation: bool,
// Operator section behavior
lhs_section_termination: Option<crate::syntax::operator::SectionTermination>,
// Special properties
@ -376,6 +377,16 @@ impl OperatorProperties {
self.is_compile_time_operation
}
/// Mark the operator as a value-level operation, as opposed to functional.
pub fn as_value_operation(self) -> Self {
Self { is_value_operation: true, ..self }
}
/// Return whether the operator is a value-level operation, as opposed to functional.
pub fn is_value_operation(&self) -> bool {
self.is_value_operation
}
/// Return a copy of this operator, modified to be flagged as right associative.
pub fn as_right_associative(self) -> Self {
Self { is_right_associative: true, ..self }


@ -28,6 +28,7 @@ pub mod block;
pub struct Tree<'s> {
#[reflect(flatten, hide)]
pub span: Span<'s>,
pub warnings: Warnings,
#[deref]
#[deref_mut]
#[reflect(subtype)]
@ -38,7 +39,7 @@ pub struct Tree<'s> {
#[allow(non_snake_case)]
pub fn Tree<'s>(span: Span<'s>, variant: impl Into<Variant<'s>>) -> Tree<'s> {
let variant = Box::new(variant.into());
Tree { variant, span }
Tree { variant, span, warnings: default() }
}
impl<'s> AsRef<Span<'s>> for Tree<'s> {
@ -52,6 +53,7 @@ impl<'s> Default for Tree<'s> {
Self {
variant: Box::new(Variant::Ident(Ident { token: Default::default() })),
span: Span::empty_without_offset(),
warnings: default(),
}
}
}
@ -755,6 +757,50 @@ impl<'s> span::Builder<'s> for OperatorDelimitedTree<'s> {
// ================
// === Warnings ===
// ================
/// Warnings applicable to a [`Tree`].
pub type Warnings = ColdVec<Warning>;
// === Warning ===
/// A warning associated with a [`Tree`].
#[derive(Clone, Debug, Eq, PartialEq, Serialize, Reflect, Deserialize)]
#[allow(missing_copy_implementations)] // Future warnings may have attached information.
pub struct Warning {
id: u32,
}
impl Warning {
/// Spacing is inconsistent with effective operator precedence.
pub fn inconsistent_spacing() -> Self {
Self { id: WarningId::InconsistentSpacing as u32 }
}
/// Return a description of the warning.
pub fn message(&self) -> Cow<'static, str> {
WARNINGS[self.id as usize].into()
}
}
#[repr(u32)]
#[derive(Debug)]
enum WarningId {
InconsistentSpacing,
#[allow(non_camel_case_types)]
NUM_WARNINGS,
}
/// Template strings for printing warnings.
// These must be defined in the same order as the [`WarningId`] variants.
pub const WARNINGS: [&str; WarningId::NUM_WARNINGS as usize] =
["Spacing is inconsistent with operator precedence"];
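The id-to-message lookup can be sketched standalone; this is a simplified mirror of `Warning::message` with the one-element table inlined, not the parser's actual API:

```rust
// Simplified mirror of the `WARNINGS` table and `Warning::message`.
// Entries must stay in `WarningId` variant order.
const WARNINGS: [&str; 1] = ["Spacing is inconsistent with operator precedence"];

fn message(id: u32) -> &'static str {
    WARNINGS[id as usize]
}
```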
// ====================================
// === Tree-construction operations ===
// ====================================


@ -181,11 +181,14 @@ impl<'s> From<Prefix<'s>> for Tree<'s> {
fn from(prefix: Prefix<'s>) -> Self {
match prefix {
Prefix::Annotation { node, span } =>
Tree { variant: Box::new(Variant::Annotated(node)), span },
Prefix::BuiltinAnnotation { node, span } =>
Tree { variant: Box::new(Variant::AnnotatedBuiltin(node)), span },
Tree { variant: Box::new(Variant::Annotated(node)), span, warnings: default() },
Prefix::BuiltinAnnotation { node, span } => Tree {
variant: Box::new(Variant::AnnotatedBuiltin(node)),
span,
warnings: default(),
},
Prefix::Documentation { node, span } =>
Tree { variant: Box::new(Variant::Documented(node)), span },
Tree { variant: Box::new(Variant::Documented(node)), span, warnings: default() },
}
}
}


@ -1,5 +1,12 @@
//! This module defines utilities for working with the [`std::vec::Vec`] type.
use enso_reflect::prelude::*;
use serde::Deserialize;
use serde::Deserializer;
use serde::Serialize;
use serde::Serializer;
// ==============
@@ -19,6 +26,20 @@ pub trait VecOps<T>: AsMut<Vec<T>> + Sized {
}
None
}
/// Pop and return the last element, if the vector is non-empty and the given predicate returns
/// true when applied to the last element. The predicate may mutate the element, whether or not
/// it is popped.
fn pop_if_mut<F>(&mut self, f: F) -> Option<T>
where F: FnOnce(&mut T) -> bool {
let vec = self.as_mut();
if let Some(last) = vec.last_mut() {
if f(last) {
return vec.pop();
}
}
None
}
}
impl<T> VecOps<T> for Vec<T> {}
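The documented subtlety of the new `pop_if_mut` — the predicate may mutate the last element even when it declines to pop it — can be seen in this standalone sketch (the trait is re-declared here so the snippet compiles on its own; values are illustrative):

```rust
// Standalone re-implementation of the `pop_if_mut` method above, for illustration.
trait VecOps<T>: AsMut<Vec<T>> + Sized {
    /// Pop and return the last element if the predicate returns true for it.
    /// The predicate may mutate the element, whether or not it is popped.
    fn pop_if_mut<F>(&mut self, f: F) -> Option<T>
    where F: FnOnce(&mut T) -> bool {
        let vec = self.as_mut();
        if let Some(last) = vec.last_mut() {
            if f(last) {
                return vec.pop();
            }
        }
        None
    }
}
impl<T> VecOps<T> for Vec<T> {}

fn main() {
    let mut v = vec![1, 2, 3];
    // Predicate mutates the last element, then declines to pop it.
    assert_eq!(v.pop_if_mut(|x| { *x += 10; false }), None);
    assert_eq!(v, vec![1, 2, 13]);
    // Predicate accepts: the (already mutated) element is popped.
    assert_eq!(v.pop_if_mut(|x| *x > 10), Some(13));
    assert_eq!(v, vec![1, 2]);
    // Empty vector: the predicate is never called.
    let mut empty: Vec<i32> = vec![];
    assert_eq!(empty.pop_if_mut(|_| true), None);
}
```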
@@ -103,3 +124,68 @@ impl<T> VecAllocation<T> {
std::mem::take(&mut self.data)
}
}
// ================
// === Cold Vec ===
// ================
/// A vector optimized to be unused.
///
/// If it has never contained any elements, it will be stored more efficiently than a [`Vec`].
#[derive(Clone, Debug, Eq, Reflect)]
#[reflect(transparent)]
pub struct ColdVec<T> {
#[allow(clippy::box_collection)]
#[reflect(as = "Vec<T>")]
elements: Option<Box<Vec<T>>>,
}
impl<T> ColdVec<T> {
/// Append an element, allocating the backing storage if this is the first element ever pushed.
pub fn push(&mut self, element: T) {
if self.elements.is_none() {
self.elements = Some(Default::default());
}
self.elements.as_mut().unwrap().push(element);
}
/// Return an iterator over the elements, if any.
pub fn iter(&self) -> std::slice::Iter<T> {
match self.elements.as_ref() {
Some(elements) => elements.iter(),
None => [].iter(),
}
}
}
impl<T: PartialEq<T>> PartialEq<ColdVec<T>> for ColdVec<T> {
fn eq(&self, other: &ColdVec<T>) -> bool {
match (&self.elements, &other.elements) {
(Some(a), Some(b)) => a.eq(b),
(Some(x), None) | (None, Some(x)) => x.is_empty(),
(None, None) => true,
}
}
}
impl<T> Default for ColdVec<T> {
fn default() -> Self {
Self { elements: None }
}
}
impl<T: Serialize> Serialize for ColdVec<T> {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where S: Serializer {
match &self.elements {
Some(elements) => elements.serialize(serializer),
None => Vec::<T>::new().serialize(serializer),
}
}
}
impl<'de, T: Deserialize<'de>> Deserialize<'de> for ColdVec<T> {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where D: Deserializer<'de> {
let elements = Vec::deserialize(deserializer)?;
Ok(Self { elements: (!elements.is_empty()).then(|| Box::new(elements)) })
}
}
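The point of `ColdVec` is that the common case — a tree with no warnings — costs no heap allocation, and that equality treats a never-pushed vector the same as a merely empty one. A minimal sketch of those semantics (reflect/serde support omitted; this is a simplified stand-in, not the crate's type):

```rust
// Simplified stand-in for `ColdVec`: Default allocates nothing, and
// equality treats `None` and an empty `Some` as equal.
#[derive(Clone, Debug)]
struct ColdVec<T> {
    elements: Option<Box<Vec<T>>>,
}

impl<T> Default for ColdVec<T> {
    fn default() -> Self { Self { elements: None } }
}

impl<T> ColdVec<T> {
    fn push(&mut self, element: T) {
        // First push allocates the backing storage.
        self.elements.get_or_insert_with(Default::default).push(element);
    }
    fn iter(&self) -> std::slice::Iter<T> {
        match self.elements.as_ref() {
            Some(elements) => elements.iter(),
            None => [].iter(),
        }
    }
}

impl<T: PartialEq> PartialEq for ColdVec<T> {
    fn eq(&self, other: &Self) -> bool {
        match (&self.elements, &other.elements) {
            (Some(a), Some(b)) => a == b,
            (Some(x), None) | (None, Some(x)) => x.is_empty(),
            (None, None) => true,
        }
    }
}

fn main() {
    let never_used: ColdVec<u32> = ColdVec::default(); // no allocation
    assert_eq!(never_used.iter().count(), 0);
    assert!(never_used == ColdVec::default());

    let mut warnings = ColdVec::default();
    warnings.push(1u32);
    assert_eq!(warnings.iter().copied().collect::<Vec<_>>(), vec![1]);
    assert!(never_used != warnings);
}
```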


@@ -308,7 +308,7 @@ add_specs suite_builder =
# AWS S3 does not record creation time, only last modified time.
hello_txt.creation_time . should_fail_with S3_Error
my_writable_dir = writable_root / "test-run-"+(Date_Time.now.format "yyyy-MM-dd_HHmmss.fV" . replace "/" "|")+"/"
my_writable_dir = writable_root / ("test-run-"+(Date_Time.now.format "yyyy-MM-dd_HHmmss.fV" . replace "/" "|")+"/")
suite_builder.group "S3_File writing" pending=api_pending group_builder->
assert my_writable_dir.is_directory
@@ -565,7 +565,7 @@ add_specs suite_builder =
table = Table.new [["X", [1, 2, 3]], ["Y", ["a", "b", "c"]]]
test_write_table_to_datalink extension check_read_back = group_builder.specify "should be able to write a Table to an S3 data link ("+extension+")" <| with_default_credentials <|
link_target = my_writable_dir / ("linked-sheet."+extension)
s3_link = my_writable_dir / "my-simple-write-"+extension+".datalink"
s3_link = my_writable_dir / ("my-simple-write-"+extension+".datalink")
Problems.assume_no_problems <| Data_Link.write_config s3_link <|
JS_Object.from_pairs <|
[["type", "S3"]]


@@ -76,7 +76,7 @@ add_specs suite_builder =
group_builder.specify "should be of unbound size when taking remainder" <|
expected = 3191479909175673432
((1.up_to 101 . fold 1 (*)) % 3*almost_max_long).should_equal expected
((1.up_to 101 . fold 1 (*)) % (3*almost_max_long)).should_equal expected
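The added parentheses change the result, not just the style: `%` and `*` are in the same precedence class and (assuming the usual left associativity) `x % 3 * m` parses as `(x % 3) * m`, not the `x % (3 * m)` that the asymmetric spacing used to express. A rough Rust sketch with small stand-in numbers (illustrative, not the test's actual values) shows the two parses diverge:

```rust
// `%` and `*` share a precedence level and associate left-to-right,
// so without parentheses the remainder is taken first.
fn main() {
    let x: i64 = 1000;
    let m: i64 = 7;
    let left_assoc = (x % 3) * m; // how `x % 3 * m` parses
    let intended = x % (3 * m);   // what the tight spacing used to mean
    assert_eq!(left_assoc, 7);    // 1000 % 3 == 1, times 7
    assert_eq!(intended, 13);     // 1000 % 21 == 13
    assert_ne!(left_assoc, intended);
}
```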
group_builder.specify "should allow defining extension methods through the Integer type for any number size" <|
876543.is_even.should_be_false


@@ -57,7 +57,7 @@ add_specs suite_builder setup:Cloud_Tests_Setup = suite_builder.group "Enso Clou
# If nothing set, defaults to root:
Enso_File.current_working_directory . should_equal Enso_File.root
subdir = (Enso_File.root / "my_test_CWD-"+(Random.uuid.take 5)).create_directory
subdir = (Enso_File.root / ("my_test_CWD-" + Random.uuid.take 5)).create_directory
subdir.should_succeed
cleanup =
Enso_User.flush_caches


@@ -21,7 +21,7 @@ add_specs suite_builder =
group_builder.specify "writing Excel" <|
t = Table.new [["X", [1, 2, 3]], ["Y", ["a", "b", "c"]]]
f = Enso_File.root / "write-test-"+Temporary_Directory.timestamp_text+".xlsx"
f = Enso_File.root / ("write-test-"+Temporary_Directory.timestamp_text+".xlsx")
t.write f . should_equal f
Panic.with_finalizer f.delete_if_exists <|
workbook = f.read


@@ -159,7 +159,7 @@ spec_write suite_builder suffix test_sheet_name =
written.close
group_builder.specify 'should be able to round trip all the data types' <|
alltypes = enso_project.data / "transient" / "alltypes."+suffix
alltypes = enso_project.data / "transient" / ("alltypes."+suffix)
alltypes.delete_if_exists . should_succeed
t1 = enso_project.data/'all_data_types.csv' . read
t1.write alltypes (..Sheet "AllTypes") . should_succeed
@@ -577,7 +577,7 @@ spec_write suite_builder suffix test_sheet_name =
out.last_modified_time.should_equal lmd
group_builder.specify "should fail if the target file is read-only" <|
f = enso_project.data / "transient" / "permission."+suffix
f = enso_project.data / "transient" / ("permission."+suffix)
if f.exists then Util.set_writable f True
f.delete_if_exists
@@ -640,7 +640,7 @@ spec_write suite_builder suffix test_sheet_name =
parent = enso_project.data / "transient" / "nonexistent"
parent.exists.should_be_false
f = parent / "foo."+suffix
f = parent / ("foo."+suffix)
t1 = Table.new [["X", [1, 2, 3]]]
r1 = t1.write f (..Sheet "Another")
Test.with_clue "("+r1.catch.to_display_text+") " <|
@@ -648,7 +648,7 @@ spec_write suite_builder suffix test_sheet_name =
r1.catch.should_be_a File_Error.Not_Found
group_builder.specify "should allow to write and read-back Unicode characters" <|
encodings = enso_project.data / "transient" / "encodings."+suffix
encodings = enso_project.data / "transient" / ("encodings."+suffix)
encodings.delete_if_exists . should_succeed
t1 = Table.new [["A", ["A", "B", "😊", "D"]], ["B", [1, 2, 3, 4]]]
@@ -658,7 +658,7 @@ spec_write suite_builder suffix test_sheet_name =
encodings.delete
group_builder.specify "should allow to writing custom types" <|
custom_types = enso_project.data / "transient" / "custom_types."+suffix
custom_types = enso_project.data / "transient" / ("custom_types."+suffix)
custom_types.delete_if_exists . should_succeed
t1 = Table.new [["A", [Complex.Value 19 89, Complex.Value -1 -42]], ["B", [1, 2]]]
@@ -668,7 +668,7 @@ spec_write suite_builder suffix test_sheet_name =
custom_types.delete
group_builder.specify "should allow to writing custom types that have defined to_string" <|
custom_types2 = enso_project.data / "transient" / "custom_types2."+suffix
custom_types2 = enso_project.data / "transient" / ("custom_types2."+suffix)
custom_types2.delete_if_exists . should_succeed
t1 = Table.new [["A", [Complex_With_To_String.Value 19 89, Complex_With_To_String.Value -1 -42]], ["B", [1, 2]]]
@@ -678,7 +678,7 @@ spec_write suite_builder suffix test_sheet_name =
custom_types2.delete
group_builder.specify "should be able to overwrite a pre-existing empty file" <|
empty = enso_project.data / "transient" / "empty."+suffix
empty = enso_project.data / "transient" / ("empty."+suffix)
[Existing_File_Behavior.Backup, Existing_File_Behavior.Overwrite, Existing_File_Behavior.Append].each behavior-> Test.with_clue behavior.to_text+": " <|
empty.delete_if_exists . should_succeed
"".write empty