Closes#8836.
Atom constructors can be declared as private (project-private). project-private constructors can be called only from the same project. See the encapsulation.md docs for more info.
---------
Co-authored-by: Jaroslav Tulach <jaroslav.tulach@enso.org>
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
Co-authored-by: Hubert Plociniczak <hubert.plociniczak@gmail.com>
Co-authored-by: Kaz Wesley <kaz@lambdaverse.org>
Resolves#9607 by computing `Number.hash` by converting given number to `Float` first and then computing the hash. Also the conversion from `Float.to Decimal` is exact - done via `new BigDecimal(double)`. There is `Decimal.new` that handles the user-friendly conversion. However as a result `Decimal.from 2.1 != Decimal.new 2.1` - that's the only way to ensure consistency between hash code and conversions.
While investigating #9749 a JavaScript call to `Polyglot.eval("enso", ....).eval_expression("id")` was made. It crashed as JavaScript isn't using `String` but `TruffleString` to represent strings.
This change replaces an sqllite-backed suggestions' repo with a simple, in-memory, one.
As `completion` functionality has been implemented completely in GUI, there is no need to support it in backend, which simplifies a lot of functionality.
Closes#9650 and #9471.
# Important Notes
Loading suggestions and sending them to GUI on startup is almost instantaneous. Previously it would take ~10s just for `Standard.Base`.
1. Experimenting with invalidating modules' indexes without requiring full write-context locks. That should significantly improve the execution.
2. Improving performance by making background job executor run in a larger threadpool than 1.
This PR bumps the FlatBuffers version used by the backend to `24.3.25` (the latest version as of now).
Since the newer FlatBuffers releases come with prebuilt binaries for all platforms we target, we can simplify the build process by simply downloading the required `flatc` binary from the official FlatBuffers GitHub release page. This allows us to remove the dependency on `conda`, which was the only reliable way to get the outdated `flatc`.
The `conda` setup has been removed from the CI steps and the relevant code has been removed from the build script.
The FlatBuffers version is no longer hard-coded in the Rust build script, it is inferred from the `build.sbt` definition (similar to GraalVM).
# Important Notes
This does not affect the GUI binary protocol implementation.
While I initially wanted to update it, it turned out farly non-trivial.
As there are multiple issues with the generated TS code, it was significantly refactored by hand and it is impossible to automatically update it. Work to address this problem is left as [a future task](https://github.com/enso-org/enso/issues/9658).
As the Flatbuffers binary protocol is guaranteed to be compatible between versions (unlike the generated sources), there should be no adverse effects from bumping `flatc` only on the backend side.
`42 == (Error.throw "foo")` now correctly returns an `Error` rather than False
# Important Notes
The error was in the wrong usage of the `org.enso.interpreter.dsl.AcceptsError` DSL annotation.
As benchmarks show, a significant amount of time is spent traversing `Set` of `Graph.Link`s. That's unfortunate and unnecessary. We can equally keep helper maps that make search constant time.
Fixed inline compilation benchmarks by properly cleaning up scopes after runs.
Closes#9237.
# Important Notes
Things like
![Screenshot from 2024-03-22 11-23-01](https://github.com/enso-org/enso/assets/292128/7c1e220a-6e33-4396-a9b2-0e788f615323)
![Screenshot from 2024-03-22 11-13-19](https://github.com/enso-org/enso/assets/292128/0272b1cb-252e-4662-b539-174844941c8e)
are all gone. There is plenty of it those are just samples.
Benchmarks are back in order:
```
[info] # Warmup Iteration 1: 2.702 ms/op
[info] # Warmup Iteration 2: 3.080 ms/op
[info] # Warmup Iteration 3: 2.818 ms/op
[info] # Warmup Iteration 4: 3.334 ms/op
[info] # Warmup Iteration 5: 2.448 ms/op
[info] # Warmup Iteration 6: 2.583 ms/op
[info] Iteration 1: 2.908 ms/op
[info] Iteration 2: 2.915 ms/op
[info] Iteration 3: 2.774 ms/op
[info] Iteration 4: 2.601 ms/op
[info] Result "org.enso.compiler.benchmarks.inline.InlineCompilerBenchmark.longExpression":
[info] 2.799 ±(99.9%) 0.953 ms/op [Average]
[info] (min, avg, max) = (2.601, 2.799, 2.915), stdev = 0.148
[info] CI (99.9%): [1.846, 3.753] (assumes normal distribution)
```
Move the types from `Standard.Table.Data` to `Standard.Table`.
Exceptions:
- `Standard.Table.Data.Report_Unmatched` => `Standard.Table.Constants`.
- `Standard.Table.Data.Join_Kind_Cross` => `Standard.Table.Internal.Join_Kind_Cross`.
Also removed constructor as an atom type.
- `Standard.Table.Extensions.Table_Ref` => `Standard.Table.Internal.Table_Ref`.
- `Standard.Table.Data.Type.Value_Type_Helpers` => `Standard.Table.Internal.Value_Type_Helpers`.
- `Standard.Table.Data.Type.Enso_Types` => `Standard.Table.Internal.Value_Type_Helpers`.
- `Standard.Table.Data.Type.Storage` => `Standard.Table.Internal.Storage`.
Changed all `Standard.Table` imports inside project to be project.
Favoured importing from `Standard.Table.Main` in `Standard.Database`.
Also fixed some linting in Enso_File.
I forgot to add the `--disable-private-check` cmdline option in https://github.com/enso-org/enso/pull/8202. This PR fixes this:
```
> enso -h | grep -A2 private
--disable-private-check Disables private module
checking at runtime. Useful for
tests.
```
close#9351
Changelog:
- update: deprecate the `reexport` suggestion field
- add: `reexports` suggestion field containing the list of modules re-exporting this symbol
- update: exports logic to gather all the symbols exported from a given module
Fixes#9313
[Screencast from 2024-03-22 09-09-07.webm](https://github.com/enso-org/enso/assets/3919101/6ad86145-6882-4bde-993d-b1270f1ec06c)
# Important Notes
* This is PoC, so I didn't spend time on polishing the visuals; the design will likely change.
* I modified the shortcut handler a bit, allowing making multiple actions for same binding - the action's handler will be called in unspecified order, until one of them handle the event (i.e. not return false).
* To make it working regardless of imports, I needed to export AI module in Standard.Visualization. Moreover, needed to remove build_ai_prompt for Any, because it was causing issues - expect a bug report soon.
`ExecuteJob` can now be interrupted.
We now have a separate threadpool for visualization-related jobs.
# Important Notes
In a lock step situation, a job or command could have been interrupted while waiting for one of the locks. As locks ensured only that they were released once all of them have been acquired this could leave engine in a broken state.
Once `ExecuteJob` could be interrupted this became a blocker as it prevented project startup almost in every case.
The change also makes it careful to avoid constant `ExecuteJob` restarts.
Addresses #9278. There will be follow up work.
- Closes#9300
- Now the Enso libraries are themselves capable of refreshing the access token, thus there is no more problems if the token expires during a long running workflow.
- Adds `get_optional_field` sibling to `get_required_field` for more unified parsing of JSON responses from the Cloud.
- Adds `expected_type` that checks the type of extracted fields. This way, if the response is malformed we get a nice Enso Cloud error telling us what is wrong with the payload instead of a `Type_Error` later down the line.
- Fixes `Test.expect_panic_with` to actually catch only panics. Before it used to also handle dataflow errors - but these have `.should_fail_with` instead. We should distinguish these scenarios.
The `null` check creates a new Array but always assumed a non-empty one which may lead to
```
java.lang.ArrayIndexOutOfBoundsException: Index 0 out of bounds for length 0
at org.enso.runtime/org.enso.interpreter.service.ExecutionService$FunctionPointer.collectNotAppliedArguments(ExecutionService.java:778)
at org.enso.runtime/org.enso.interpreter.instrument.job.ProgramExecutionSupport$.sendExpressionUpdate(ProgramExecutionSupport.scala:430)
at org.enso.runtime/org.enso.interpreter.instrument.job.ProgramExecutionSupport$.$anonfun$executeProgram$3(ProgramExecutionSupport.scala:81)
at org.enso.runtime/org.enso.interpreter.service.ExecutionCallbacks.callOnComputedCallback(ExecutionCallbacks.java:146)
at
org.enso.runtime/org.enso.interpreter.service.ExecutionCallbacks.updateCachedResult(ExecutionCallbacks.java:117
...
```
Added a guard to prevent the exception. The flag will be useless anyway as we won't enter the for-loop in this case.
Appears to be introduced via #8743. Discovered while debugging #9389.
If some benchmark fails in dry-run (compileOnly) mode, the whole process exits with non-zero return code. Also fixes failing engine compiler benchmarks.
# Important Notes
Manually added failure:
```diff
diff --git a/engine/runtime-benchmarks/src/main/java/org/enso/interpreter/bench/benchmarks/semantic/ArrayProxyBenchmarks.java b/engine/runtime-benchmarks/src/main/java/org/enso/interpreter/bench/benchmarks/semantic/ArrayProxyBenchmarks.java
index c8d86cecc..f9f4d7cbc 100644
--- a/engine/runtime-benchmarks/src/main/java/org/enso/interpreter/bench/benchmarks/semantic/ArrayProxyBenchmarks.java
+++ b/engine/runtime-benchmarks/src/main/java/org/enso/interpreter/bench/benchmarks/semantic/ArrayProxyBenchmarks.java
@@ -95,7 +95,8 @@ public class ArrayProxyBenchmarks {
@Benchmark
public void sumOverComputingProxy(Blackhole matter) {
- performBenchmark(matter);
+ //performBenchmark(matter);
+ throw new AssertionError("My error");
}
@Benchmark
```
Run with `sbt "-Dbench.compileOnly=true runtime-benchmarks/benchOnly org.enso.interpreter.bench.benchmarks.semantic.ArrayProxyBenchmarks.sumOverComputingProxy"` fails with:
```
[info] Running benchmarks [org.enso.interpreter.bench.benchmarks.semantic.ArrayProxyBenchmarks.sumOverComputingProxy] in compileOnly mode
[info] # JMH version: 1.36
[info] # VM version: JDK 21.0.2, Java HotSpot(TM) 64-Bit Server VM, 21.0.2+13-LTS-jvmci-23.1-b30
[info] # VM invoker: /home/pavel/.sdkman/candidates/java/21.0.2-graal/bin/java
[info] # VM options: -XX:ThreadPriorityPolicy=1 -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCIProduct -XX:-UnlockExperimentalVMOptions -Dslf4j.provider=org.slf4j.nop.NOPServiceProvider -Dbench.compileOnly=true --module-path=/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/sdk/nativeimage/23.1.2/nativeimage-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/sdk/word/23.1.2/word-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/sdk/jniutils/23.1.2/jniutils-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/sdk/collections/23.1.2/collections-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/polyglot/polyglot/23.1.2/polyglot-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/truffle/truffle-api/23.1.2/truffle-api-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/truffle/truffle-runtime/23.1.2/truffle-runtime-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/truffle/truffle-compiler/23.1.2/truffle-compiler-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/js/js-language/23.1.2/js-language-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/regex/regex/23.1.2/regex-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/shadowed/icu4j/23.1.2/icu4j-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/python/python-language/23.1.2/python-language-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/python/python-resources/23.1.2/python-resources-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/bouncycastle/bcutil-jdk18on/1.76/bcutil-jdk18on-1.76.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/bouncycastle/bcpkix-jdk18on/1.76/bcpkix-jdk18on-1.76.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/bouncycastle/bcprov-jdk18on/1.76/bcprov-jdk18on-1.76.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/llvm/llvm-api/23.1.2/llvm-api-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/truffle/truffle-nfi/23.1.2/truffle-nfi-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/truffle/truffle-nfi-libffi/23.1.2/truffle-nfi-libffi-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/tools/profiler-tool/23.1.2/profiler-tool-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/shadowed/json/23.1.2/json-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/tukaani/xz/1.9/xz-1.9.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/slf4j/slf4j-api/2.0.9/slf4j-api-2.0.9.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/slf4j/slf4j-nop/2.0.9/slf4j-nop-2.0.9.jar:/home/pavel/dev/enso/runtime.jar --add-modules=org.enso.runtime --add-exports=org.slf4j.nop/org.slf4j.nop=org.slf4j
[info] # Blackhole mode: compiler (auto-detected, use -Djmh.blackhole.autoDetect=false to disable)
[info] # Warmup: <none>
[info] # Measurement: 1 iterations, 1 s each
[info] # Timeout: 10 min per iteration
[info] # Threads: 1 thread, will synchronize iterations
[info] # Benchmark mode: Average time, time/op
[info] # Benchmark: org.enso.interpreter.bench.benchmarks.semantic.ArrayProxyBenchmarks.sumOverComputingProxy
[info] # Run progress: 0.00% complete, ETA 00:00:01
[info] # Fork: N/A, test runs in the host VM
[info] # *** WARNING: Non-forked runs may silently omit JVM options, mess up profilers, disable compiler hints, etc. ***
[info] # *** WARNING: Use non-forked runs only for debugging purposes, not for actual performance runs. ***
[error] SLF4J: Attempting to load provider "org.slf4j.nop.NOPServiceProvider" specified via "slf4j.provider" system property
[info] Iteration 1: <failure>
[info] java.lang.AssertionError: My error
[info] at org.enso.interpreter.bench.benchmarks.semantic.ArrayProxyBenchmarks.sumOverComputingProxy(ArrayProxyBenchmarks.java:99)
[info] at org.enso.interpreter.bench.benchmarks.semantic.jmh_generated.ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.sumOverComputingProxy_avgt_jmhStub(ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.java:232)
[info] at org.enso.interpreter.bench.benchmarks.semantic.jmh_generated.ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.sumOverComputingProxy_AverageTime(ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.java:173)
[info] at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
[info] at java.base/java.lang.reflect.Method.invoke(Method.java:580)
[info] at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:475)
[info] at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:458)
[info] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
[info] at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
[info] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
[info] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
[info] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
[error] Benchmark run failed: Benchmark caught the exception
[info] at java.base/java.lang.Thread.run(Thread.java:1583)
[error] org.openjdk.jmh.runner.RunnerException: Benchmark caught the exception
[error] at org.openjdk.jmh.runner.Runner.runBenchmarks(Runner.java:575)
[error] at org.openjdk.jmh.runner.Runner.internalRun(Runner.java:310)
[error] at org.openjdk.jmh.runner.Runner.run(Runner.java:209)
[error] at org.enso.interpreter.bench.BenchmarksRunner.runCompileOnly(BenchmarksRunner.java:93)
[error] at org.enso.interpreter.bench.BenchmarksRunner.run(BenchmarksRunner.java:36)
[error] at org.enso.interpreter.bench.benchmarks.RuntimeBenchmarksRunner.main(RuntimeBenchmarksRunner.java:8)
[error] Caused by: org.openjdk.jmh.runner.BenchmarkException: Benchmark error during the run
[error] at org.openjdk.jmh.runner.BenchmarkHandler.runIteration(BenchmarkHandler.java:424)
[error] at org.openjdk.jmh.runner.BaseRunner.runBenchmark(BaseRunner.java:281)
[error] at org.openjdk.jmh.runner.BaseRunner.runBenchmark(BaseRunner.java:233)
[error] at org.openjdk.jmh.runner.BaseRunner.doSingle(BaseRunner.java:138)
[error] at org.openjdk.jmh.runner.BaseRunner.runBenchmarksEmbedded(BaseRunner.java:110)
[error] at org.openjdk.jmh.runner.Runner.runBenchmarks(Runner.java:555)
[error] ... 5 more
[error] Suppressed: java.lang.AssertionError: My error
[error] at org.enso.interpreter.bench.benchmarks.semantic.ArrayProxyBenchmarks.sumOverComputingProxy(ArrayProxyBenchmarks.java:99)
[error] at org.enso.interpreter.bench.benchmarks.semantic.jmh_generated.ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.sumOverComputingProxy_avgt_jmhStub(ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.java:232)
[error] at org.enso.interpreter.bench.benchmarks.semantic.jmh_generated.ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.sumOverComputingProxy_AverageTime(ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.java:173)
[error] at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
[error] at java.base/java.lang.reflect.Method.invoke(Method.java:580)
[error] at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:475)
[error] at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:458)
[error] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
[error] at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
[error] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
[error] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
[error] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
[error] at java.base/java.lang.Thread.run(Thread.java:1583)
[error] Nonzero exit code returned from runner: 1
[error] (Compile / run) Nonzero exit code returned from runner: 1
[error] Total time: 5 s, completed Mar 13, 2024, 12:49:59 PM
```
Follow up on #9150 - making sure that Arrow builder is not accidentally treated as an Array by disallowing reading elements.
# Important Notes
Also making sure that the length of the resulting Arrow Array is consistent with what user requested.
Including arrow language in the distribution by default. Added a basic example for creating an Arrow array.
Making sure that memory layout agrees with Arrow specification (padding, continuous allocation of memory chunks).
Related to #9118.
This should unblock work on allowing serialization/deserialization to/from Parquet but I'd like to delay it to a follow up ticket as it is going to be a significant amount of specialized work.
Fixes the regression introduced by #9070 in `org.enso.benchmarks.generated.Collections.list_meta_fold` benchmark.
# Important Notes
As can be seen on the graph in IGV:
![image](https://github.com/enso-org/enso/assets/14013887/31b6ceca-4909-4a8f-987f-b456b3fb0a1b)
For some reason, `EqualsSimpleNode` is POLYMORPHIC. That seems to be the most visible performance problem.
First, I tried to introduce `ConditionProfile` with:
```diff
diff --git a/engine/runtime/src/main/java/org/enso/interpreter/node/expression/builtin/meta/EqualsNode.java b/engine/runtime/src/main/java/org/enso/interpreter/node/expression/builtin/meta/EqualsNode.java
index b368fb7fe..57274b37e 100644
--- a/engine/runtime/src/main/java/org/enso/interpreter/node/expression/builtin/meta/EqualsNode.java
+++ b/engine/runtime/src/main/java/org/enso/interpreter/node/expression/builtin/meta/EqualsNode.java
@@ -9,6 +9,7 @@ import com.oracle.truffle.api.dsl.Specialization;
import com.oracle.truffle.api.frame.VirtualFrame;
import com.oracle.truffle.api.interop.ArityException;
import com.oracle.truffle.api.nodes.Node;
+import com.oracle.truffle.api.profiles.ConditionProfile;
import org.enso.interpreter.dsl.AcceptsError;
import org.enso.interpreter.dsl.BuiltinMethod;
import org.enso.interpreter.node.EnsoRootNode;
@@ -46,6 +47,7 @@ public final class EqualsNode extends Node {
@Child private EqualsSimpleNode node;
@Child private TypeOfNode types;
@Child private WithConversionNode convert;
+ private final ConditionProfile equalsProfile = ConditionProfile.create();
private static final EqualsNode UNCACHED =
new EqualsNode(EqualsSimpleNodeGen.getUncached(), TypeOfNode.getUncached(), true);
@@ -85,7 +87,7 @@ public final class EqualsNode extends Node {
public boolean execute(
VirtualFrame frame, @AcceptsError Object self, @AcceptsError Object other) {
var areEqual = node.execute(frame, self, other);
- if (!areEqual) {
+ if (!equalsProfile.profile(areEqual)) {
var selfType = types.execute(self);
var otherType = types.execute(other);
if (selfType != otherType) {
```
But that did not resolve the issue.
My second attempt was to enable splitting for `EqualsSimpleNode` with `@com.oracle.truffle.api.dsl.ReportPolymorphism` annotation, which seems to resolve the issue. The benchmark is back to its original score, and `EqualsSimpleNode` is no longer POLYMORPHIC.
Fixes the issue with attaching generic annotations in complex types.
Annotations in the type body could be lost during the compilation if its constructor was defined at the end of the type definition.
Add compiler benchmarks to `engine/runtime-benchmarks`. All the benchmarks generate source code on the fly into `engine/runtime-benchmarks/target/bench-data` directory. Random data generators are set with the same seed. For the convenience of reviewers, I am attaching the benchmark sources in [bench-data.zip](https://github.com/enso-org/enso/files/14423372/bench-data.zip).
I have created benchmarks that measure the performance of a whole module compilation, and benchmarks that measure the performance of inline compilation. They directly call `run` and `runInline` methods on `org.enso.compiler.Compiler`.
# Important Notes
- The results will be available in https://enso-org.github.io/engine-benchmark-results/engine-benchs.html in a few days after the merge of this PR.
- The benchmark parameters are tweaked so that an average iteration takes less than 400 ms and more than 30 ms.
- Ensured that the benchmarks measure performance of the compiler, for example:
![image](https://github.com/enso-org/enso/assets/14013887/c870f4ad-1418-4812-85f2-ca9664711163)
close#9109
Fixes the issue when the runner displays unexpected log messages
```
> .\built-distribution\enso-engine-0.0.0-dev-windows-amd64\enso-0.0.0-dev\bin\enso --run .\test.enso
[WARN] [2024-02-20T12:04:21+01:00] [enso.org.enso.interpreter.runtime.SerializationPool] Serialization of module `test` failed: Unable to write cache data for test.`
42
```
Simplify the `Test.Suite.run_with_filter` to accept a single filter parameter that searches for all the groups and specs that matches that filter. This filter can be a simple text provided from the command line.
# Important Notes
- Pending groups are now printed at the end of the run
- `Test.Suite.run_with_filter` is simplified to accept a single filter parameter that is either `Text` or `Nothing`. See the docs.
- Passing a filter from the command line is therefore straightforward, it is treated as a regex.
- For convenience, I have left all the `main` methods in all the test sources. I have just refactored them to accept the `filter` argument from the command line.
- For example, to run only a single spec from `Vector_Spec.enso`, invoke `enso --run test/Base_Tests/src/Data/Vector_Spec.enso "should allow vector creation with a programmatic constructor"`
- **Majority of the PR is a regex replace** of `^main =` for `main filter=Nothing =` and of `suite.run_with_filter` for `suite.run_with_filter filter`.
- **Fixed some internal engine bugs:**
- `AtomWithHole` allows to specify only one hole - https://github.com/enso-org/enso/pull/9065/files#diff-0f7bb7e85cf86a965de133aa7e6b5958ceb889bd1921c01e00d3a9ceb19626ef
- NaN keys in hash maps are handled in polyglot maps as well - c5257f6c2b78f893214ff67300893b593ea05e21..db4b3c0e9828ee79208d52e02586b24bb845b0d6
`Bump` library uses parser combinators behind the scenes which are known to be good at expressing grammars but are not performance-oriented.
This change ditches the dependency in favour of an existing Java implementation. `jsemver` implements the full specification, which is probably an overkill in our case, but proved to be an almost drop-in replacement for the previous library.
Closes#8692
# Important Notes
Peformance improvements:
- roughly 50ms compared to the previous approach (from 80ms to 20-40ms)
I don't see any time spent in the new implementation during startup so it could be potentially aggressively inlined.
Further more, we could use a facade and offer our own strip down version of semver.
There are two projects transitively required by `runtime`, that have akka dependencies:
- `downloader`
- `connected-lock-manager`
This PR replaces the `akka-http` dependency in `downloader` by HttpClient from JDK, and splits `connected-lock-manager` into two projects such that there are no akka classes in `runtime.jar`.
# Important Notes
- Simplify the `downloader` project - remove akka.
- Add HTTP tests to the `downloader` project that uses our `http-test-helper` that is normally used for stdlib tests.
- It required few tweaks so that we can embed that server in a unit test.
- Split `connected-lock-manager` project into two projects - remove akka from `runtime`.
- **Native image build fixes and quality of life improvements:**
- Output of `native-image` is captured 743e167aa4
- The output will no longer be intertwined with the output from other commands on the CI.
- Arguments to the `native-image` are passed via an argument file, not via command line - ba0a69de6e
- This resolves an issue on Windows with "Command line too long", for example in https://github.com/enso-org/enso/actions/runs/7934447148/job/21665456738?pr=8953#step:8:2269
close#8965
Changelog:
- update: keep a single `ExecuteExpressionJob` in the queue
- update: make `ExecuteExpressionCommand` synchronous to preserve the order of commands
- refactor: separate data structures for `Visualization` and `OneshotExpression` to simplify the logic
Missing ID's in IR meant that instrumentation wouldn't be applied for loaded modules. This is the reason why after a restart engine wouldn't send **any** expression updates.
Closes#8689.
# Important Notes
After the change
[Kazam_screencast_00038.webm](https://github.com/enso-org/enso/assets/292128/4249287b-6c41-4c9d-b138-e7af59512566)
The video somehow doesn't show that all nodes are loaded after the restart, but once I moved the screen they are there. This appears to be a bug in the recording somehow.
related #8689
Fixes a race between the language server SQL updating logic and the engine `DeserializeLibrarySuggestionsJob`s when the library suggestions may start loading before the database is properly cleaned up after the reconnect.
Fixes#8896 by logging `IOException` only with `WARNING` and not `SEVERE`. As such the stacktrace of the exception isn't included in the console and failures to store cache are reported as simple messages, not exceptions with stack trace.
related #8689
Clean up the client's execution contexts when it disconnects from the language server. Dangling execution contexts may slow down the execution when the user reconnects to the language server.
related #8689
Fixes the NPE during the serialization of update messages.
```
java.lang.NullPointerException: Cannot invoke "java.util.UUID.toString()" because "a" is null
```
Follow-up of #8890
Refactor the rest of the tests to the builder API (`Test_New`):
- `Image_Tests`
- `Geo_Tests`
- `Google_Api_Test`
- `Examples_Test`
- `AWS_Tests`
- `Meta_Test_Suite_Tests`
- `Visualization_Tests`
# Important Notes
- Unrelated: Fix NPE in `File.new "/" . name`
Uniqueness check of `UpsertVisualizationJob` only involved expressionId. Apparently now GUI sends mutliple visualizations for the same expressions and expects all of them to exist. Since previously we would cancel duplicate jobs, this was problematic.
This change makes sure that uniqueness also takes into account visualization id. Fixed a few logs that were not passing arguments properly.
Closes#8801
# Important Notes
I have not noticed any more problems with loading visualizations so the issue appears to be resolved with this change.
Added a unit test case that would previously fail due to cancellation of a job that upserts visualization.
This is a quick fix to a long standing problem of
`org.enso.interpreter.service.error.FailedToApplyEditsException` which would prevent backend from processing any more changes, rendering GUI (and backend) virtually useless.
Edits are submitted for (background) processing in the order they are handled. However the order of execution of such tasks is not guaranteed. Most of the time edits are processed in the same order as their requests but when they don't, files get quickly out of sync.
Related to #8770.
# Important Notes
I'm not a fan of this change because it essentially blocks all open/file requests until all edits are processed and we already have logic to deal with that appropriately. Moreover those tasks can and should be processed independently. Since we already had the single thread executor present to ensure correct synchronization of open/file/push commands, we are simply adding edit commands to the list.
Ideally we want to have a specialized executor that executes tasks within the same group sequentially but groups of tasks can be executed in parallel, thus ensuring sufficient throughput. The latter will take much longer and will require significant rewrite of the command execution.
Added tests that would previously fail due to non-deterministic execution.
Fixes#8710 by making sure suspended atom fields support works also for "normal" `Atom` instances without any special `Layout`. Moves all _atom related_ classes into single package and hides as much of classes as possible by making them _package private_.
Implements `Warnings.get_all wrap_errors=True` which wraps warnings attached to values inside vectors with `Map_Error`, which includes the position of the value within the vector. See [the documentation](https://github.com/enso-org/enso/blob/develop/docs/semantics/wrapped-errors.md) for more details.
`get_all wrap_errors=True` does not change the warnings that are attached to values -- it wraps them before returning them to the caller, but does not change the original warnings attached to the values.
Wrapped warnings only appear attached to the vector itself. The values inside the vector do not have their warnings wrapped.
Warning propagation is not changed at all; `Warnings.get_all` (with default `wrap_errors=False`) behaves as before. `get_all wrap_errors=True` is meant to be used primarily by the IDE, although it can be used anywhere this wrapping is desired.
close#8663
Changelog:
- update: use `MethodRootNode` for the atom constructor function to preserve the call info in runtime
- fix: return function schema for atom constructors
Initial implementation of the Arrow language. Closes#7755.
Currently supported logical types are
- Date (days and milliseconds)
- Int (8, 16, 32, 64)
One can currently
- allocate a new fixed-length, nullable Arrow vector - `new[<name-of-the-type>]`
- cast an already existing fixed-length Arrow vector from a memory address - `cast[<name-of-the-type>]`
Closes#7755.
The change adds a convenient trait `ReportLogsOnFailure` that, when merged with the test class, will keep logs in memory and only delegate to the underlying appender on failure. For now we only support forwarding to the console which is sufficient.
A corresponding entry in `application-test.conf` has to point to the new `memory` appender. The additional complexity in the implementation ensures that if someone forgets to mixin `ReportLogsOnFailure` logs appear as before i.e. they respect the log level.
As a bonus fixed arguments passed to ScalaTest in build.sbt so that we are now, again, showing timings of individual tests.
Closes#8603.
# Important Notes
Before:
```
[info] VcsManagerTest:
[info] Initializing project
[ERROR] [2024-01-04 17:27:03,366] [org.enso.languageserver.search.SuggestionsHandler] Cannot read the package definition from [/tmp/3607843843826594318].
[info] - must create a repository (3 seconds, 538 milliseconds)
[info] - must fail to create a repository for an already existing project (141 milliseconds)
[info] Save project
[ERROR] [2024-01-04 17:27:08,346] [org.enso.languageserver.search.SuggestionsHandler] Cannot read the package definition from [/tmp/3607843843826594318].
[info] - must create a commit with a timestamp (198 milliseconds)
[ERROR] [2024-01-04 17:27:08,570] [org.enso.languageserver.search.SuggestionsHandler] Cannot read the package definition from [/tmp/3607843843826594318].
[info] - must create a commit with a name (148 milliseconds)
[ERROR] [2024-01-04 17:27:08,741] [org.enso.languageserver.search.SuggestionsHandler] Cannot read the package definition from [/tmp/3607843843826594318].
[info] - must force all pending saves (149 milliseconds)
[info] Status project
[ERROR] [2024-01-04 17:27:08,910] [org.enso.languageserver.search.SuggestionsHandler] Cannot read the package definition from [/tmp/3607843843826594318].
[info] - must report changed files since last commit (148 milliseconds)
[info] Restore project
[ERROR] [2024-01-04 17:27:09,076] [org.enso.languageserver.search.SuggestionsHandler] Cannot read the package definition from [/tmp/3607843843826594318].
[info] - must reset to the last state with committed changes (236 milliseconds)
[ERROR] [2024-01-04 17:27:09,328] [org.enso.languageserver.search.SuggestionsHandler] Cannot read the package definition from [/tmp/3607843843826594318].
[info] - must reset to a named save (pending)
[ERROR] [2024-01-04 17:27:09,520] [org.enso.languageserver.search.SuggestionsHandler] Cannot read the package definition from [/tmp/3607843843826594318].
[info] - must reset to a named save and notify about removed files *** FAILED *** (185 milliseconds)
[info] Right({
[info] "jsonrpc" : "2.0",
[info] "method" : "file/event",
[info] "params" : {
[info] "path" : {
[info] "rootId" : "cd84a4a3-fa50-4ead-8d80-04f6d0d124a3",
[info] "segments" : [
[info] "src",
[info] "Bar.enso"
[info] ]
[info] },
[info] "kind" : "Removed"
[info] }
[info] }) did not equal Right({
[info] "jsonrpc" : "1.0",
[info] "method" : "file/event",
[info] "params" : {
[info] "path" : {
[info] "rootId" : "cd84a4a3-fa50-4ead-8d80-04f6d0d124a3",
[info] "segments" : [
[info] "src",
[info] "Bar.enso"
[info] ]
[info] },
[info] "kind" : "Removed"
[info] }
[info] }) (VcsManagerTest.scala:1343)
[info] Analysis:
[info] Right(value: Json$JObject(value: object[jsonrpc -> "2.0",method -> "file/event",params -> {
[info] "path" : {
[info] "rootId" : "cd84a4a3-fa50-4ead-8d80-04f6d0d124a3",
[info] "segments" : [
[info] "src",
[info] "Bar.enso"
[info] ]
[info] },
[info] "kind" : "Removed"
[info] }] -> object[jsonrpc -> "1.0",method -> "file/event",params -> {
[info] "path" : {
[info] "rootId" : "cd84a4a3-fa50-4ead-8d80-04f6d0d124a3",
[info] "segments" : [
[info] "src",
[info] "Bar.enso"
[info] ]
[info] },
[info] "kind" : "Removed"
[info] }]))
[ERROR] [2024-01-04 17:27:09,734] [org.enso.languageserver.search.SuggestionsHandler] Cannot read the package definition from [/tmp/3607843843826594318].
[info] List project saves
[info] - must return all explicit commits (146 milliseconds)
[info] Run completed in 9 seconds, 270 milliseconds.
[info] Total number of tests run: 9
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 8, failed 1, canceled 0, ignored 0, pending 1
[info] *** 1 TEST FAILED ***
```
After:
```
[info] VcsManagerTest:
[info] Initializing project
[info] - must create a repository (3 seconds, 554 milliseconds)
[info] - must fail to create a repository for an already existing project (164 milliseconds)
[info] Save project
[info] - must create a commit with a timestamp (212 milliseconds)
[info] - must create a commit with a name (142 milliseconds)
[info] - must force all pending saves (185 milliseconds)
[info] Status project
[info] - must report changed files since last commit (142 milliseconds)
[info] Restore project
[info] - must reset to the last state with committed changes (202 milliseconds)
[info] - must reset to a named save (pending)
[ERROR] [2024-01-04 17:24:55,738] [org.enso.languageserver.search.SuggestionsHandler] Cannot read the package definition from [/tmp/8456553964637757156].
[info] - must reset to a named save and notify about removed files *** FAILED *** (186 milliseconds)
[info] Right({
[info] "jsonrpc" : "2.0",
[info] "method" : "file/event",
[info] "params" : {
[info] "path" : {
[info] "rootId" : "965ed5c8-1760-4284-91f2-1376406fde0d",
[info] "segments" : [
[info] "src",
[info] "Bar.enso"
[info] ]
[info] },
[info] "kind" : "Removed"
[info] }
[info] }) did not equal Right({
[info] "jsonrpc" : "1.0",
[info] "method" : "file/event",
[info] "params" : {
[info] "path" : {
[info] "rootId" : "965ed5c8-1760-4284-91f2-1376406fde0d",
[info] "segments" : [
[info] "src",
[info] "Bar.enso"
[info] ]
[info] },
[info] "kind" : "Removed"
[info] }
[info] }) (VcsManagerTest.scala:1343)
[info] Analysis:
[info] Right(value: Json$JObject(value: object[jsonrpc -> "2.0",method -> "file/event",params -> {
[info] "path" : {
[info] "rootId" : "965ed5c8-1760-4284-91f2-1376406fde0d",
[info] "segments" : [
[info] "src",
[info] "Bar.enso"
[info] ]
[info] },
[info] "kind" : "Removed"
[info] }] -> object[jsonrpc -> "1.0",method -> "file/event",params -> {
[info] "path" : {
[info] "rootId" : "965ed5c8-1760-4284-91f2-1376406fde0d",
[info] "segments" : [
[info] "src",
[info] "Bar.enso"
[info] ]
[info] },
[info] "kind" : "Removed"
[info] }]))
[info] List project saves
[info] - must return all explicit commits (131 milliseconds)
[info] Run completed in 9 seconds, 400 milliseconds.
[info] Total number of tests run: 9
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 8, failed 1, canceled 0, ignored 0, pending 1
[info] *** 1 TEST FAILED ***
```
close#7184
The constructor value was not accessible because during the re-compilation a new instance of the type was registered in runtime. Then during the execution, an old cached instance of the type was used in method resolution.
Changelog:
- update: the registration of types in runtime
- update: invalidate cached nodes that became a resolution error after applying the edit
After #8620, there is a noticeable slowdown in `EqualsBenchmarks.equalsTrees` as suggested in https://github.com/enso-org/enso/pull/8620#issuecomment-1870776609. After some digging, I realized that the number of warmup iterations is most probably insufficient. Let's increase the warmup for this benchmark and see if we can get its score down again.
I noticed that sources in `runtime/bench` are not formatted at all. Turns out that the `JavaFormatterPlugin` does not override `javafmt` task for the `Benchmark` configuration. After some failed attempts, I have just redefined the `Benchmark/javafmt` task in the `runtime` project. After all, the `runtime` project is almost the only project where we have any Java benchmarks.
# Important Notes
`javafmtAll` now also formats sources in `runtime/bench/src/java`.
After #8467, Engine benchmarks are broken, they cannot compile - https://github.com/enso-org/enso/actions/runs/7268987483/job/19805862815#logs
This PR fixes the benchmark build
# Important Notes
Apart from fixing the build of `engine/bench`:
- Don't assemble any fat jars in `runtime/bench`.
- Use our `TestLogProvider` in the benches instead of NOOP provider.
- So that we can at least see warnings and errors in benchmarks.
Make sure that the correct test logging provider is loaded in `project-manager/Test`, so that only WARN and ERROR log messages are displayed. Also, make sure that the test log provider parses the correct configuration file - Rename all the `application.conf` files in the test resources to `application-test.conf`.
The problem was introduced in #8467
We infer old IDs from module's IRs location, instead. This workarounds ill-constructed IdMaps at the start of the project.
If IDE or backend ends up with invalid ID map, I suppose application of edits might still fail.
# Important Notes
Closes#8500 by not doing the parsing at all. Since I couldn't figure why we had invalid metadata in the first place, this might still bite us at some point.
close#7555
Compiler passes after `GenerateMethodBodies` expect the method body to be a function.
After fixing the pass, the compilation returns a proper compiler error:
```
built-distribution/enso-engine-0.0.0-dev-linux-amd64/enso-0.0.0-dev/lib/Standard/Table/0.0.0-dev/src/Data/Column.enso:869:22: error: Methods must have only one definition of the `this` argument, and it must be the first.
869 | round self round self (decimal_places:Integer = 0) (use_bankers:Boolean = False) = Value_Type.expect_numeric self <|
| ^~~~
Aborting due to 1 errors and 0 warnings.
```
Add a local clone of javaFormatter plugin. The upstream is not maintained anymore. And we need to update it to use the newest Google java formatter because the old one, that we use, cannot format sources with Java 8+ syntax.
# Important Notes
Update to Google java formatter 1.18.1 - https://github.com/google/google-java-format/releases/tag/v1.18.1
close#8431
Fixes the scenario:
- user sends `executionContext/executeExpression`
- program execution is scheduled
- during the compilation the already compiled `IR` is loaded from the cache (reading invalid alias analysis graph)
- during the codegen the local scope with that aliasing graph is propagated to the runtime
- `EvalNode` compiles the expression to execute with the local scope containing an invalid aliasing graph
- compilation fails in the `AliasAnalysis` pass because of the clashing IDs in the graph
- Makes sure that the compiler will print the diagnostics even if non-strict mode is used
- To prevent printing unexpected stuff to stdout in interactive mode, the diagnostics are printed as 'warning' level log messages in that mode. In strict mode, the messages are printed like before, without changes.
* tests
* wip
* wip
* additional warnings
* wip
* wip
* cleanup
* nested wrapping
* multiple nestings
* wraps_error uses looks_for, test for should_fail_with
* wip
* stack trace line fix
* use catch_primitive internally
* fix warning mapping, dtf spec
* just one wrapper checker, vector spec
* missing ctor, back to non-primitive catch
* back to c_p
* put old map back
* wip
* unnest tests
* Array.map on_problems
* wip
* Revert "wip"
This reverts commit c30d171457.
* better test names
* warning logging
* wip
* wip
* move logic into ALH
* doc
* constant
* My_Error.Error
* nested
* doc
* map_primtiive in warning mapper
* composition
* ref spec
* Remove warnings prior to matching on the value
If an expression has warnings and is matched we:
1) extract the warnings
2) execute the branch of a pattern that matches the value
3) attach extracted warnings to the result
This caused warnings to reappear when doing the custom warnings
manipulation.
This is also consistent with how `CaseNode`'s `doWarning` specialization
is defined.
* fix 1
* do not auto unwrap in test error checkers
* nested error matcher
* in problems too
* dtf
* v
* statistics
* wip
* Table_Spec, map_with_index_primitive
* Column_Operations_Spec
* disable warning wrapping and Report_Warning
* unimpl test
* Warnings_Spec
* DCS
* ACG JP
* zip_primitive
* join_helpers
* Lookup_Helpers
* Table
* Data_Formatter
* Value_Type_Helpers
* revert check types changes
* table_helpers
* table tests
* remove st
* do not remove warnings from value
* vec docs, tests for zip, mwi, flat_map
* docs, fixes
* remove nested_error_matcher
* cleanup
* benchmark
* one error
* alter
* add bench to main
* review
* review
* review
* tail call
* changelog
* tail call was not a tail call
* ws
* bad import
* Added missing import
* Update distribution/lib/Standard/Base/0.0.0-dev/src/Data/Array.enso
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
* review, ref example
* lazy benchmark data
* extra paren
* check outside of catch
* review
* vector too
* actually lazy
* disambiguate Map_Error
* finish rename
* move to extensions
* combine Additional_Warnings error
* rename to map_no_wrap
* do not catch and rethrow
* review
* wip
* remove _primitives entirely
* remove unused should_fail_with function options
* remove expected_warning as function in Problems
---------
Co-authored-by: Hubert Plociniczak <hubert.plociniczak@gmail.com>
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
Add `jline` module to the distribution so that our REPL is usable again.
# Important Notes
- No more: "WARNING: Unable to create a system terminal, creating a dumb terminal " warning when starting REPL
- Arrow keys works as expected in REPL
- Back search (the default shortcut `CTRL + R`) works as expected.
Implements #6166.
# Important Notes
- More consistent handling of `default` arguments. `default` is a valid identifier, and only has special meaning when it isn't bound in scope. Since distinguishing the builtin `default` from an identifier called `default` cannot be done until alias analysis has been performed, `default` is now represented in the AST as a regular identifier.
- `TreeToIr`: Remove `insideTypeAscription`. It was only used for bug-for-bug compatibility with the old parser during the transition.
We've been experiencing consistently failures on MacOS due to timeouts.
Doubling the timeout, hoping this will be sufficient to eliminate such
false failures. Will seek alternative solutions if that does not rememdy
the problem on CI.
* Test illustrating problems with FQNs
Inline execution fails with `Compile error: The name `Standard` could
not be found.`.
* Ensure InlineContext carries Package Repos info
Previously, there was no requirement that inline execution should allow
for FQNs. This meant that the omission of Package Repository info went
unnoticed.
In order to be able to refer to `Standard.Visualization.Preprocessor` it
has to be exported as well.
Adds these JAR modules to the `component` directory inside Engine distribution:
- `graal-language-23.1.0`
- `org.bouncycastle.*` - these need to be added for graalpy language
# Important Notes
- Remove `org.bouncycastle.*` packages from `runtime.jar` fat jar.
- Make sure that the `./run` script preinstalls GraalPy standalone distribution before starting engine tests
- Note that using `python -m venv` is only possible from standalone distribution, we cannot distribute `graalpython-launcher`.
- Make sure that installation of `numpy` and its polyglot execution example works.
- Convert `Text` to `TruffleString` before passing to GraalPy - 8ee9a2816f
Fixes#5233 by removing `EconomicMap` & co. and using plain old good _linear hashing_. Fixes#8090 by introducing `StorageEntry.removed()` rather than copying the builder on each removal.
Evaluating visualization expression may trigger a full compilation. A change in #7042 went a bit too far and led to a situation when there could be compilations running at the same time leading to a rather obscure `RedefinedMethodException` when the compilation on one thread already finished. This will make the logic correct again at the price of potentially slowing the processing of visualization.
Closes#8296.
# Important Notes
Should make visualizations a bit more stable as well.
Encountered a random NPE when playing with bookclubs. Test case demonstrating the problem is attached.
Threw in a bunch of minor tweaks to logs to make life of the person debugging code more pleasant.
close#8329
Changelog:
- add: `cmd`+`shift`+`,` and `cmd`+`shift`+`.` shortcuts to start and stop the backend profiling. Profiling data is stored on disk.
Changelog:
- update: always create an event log next to the profiling file when the engine is started with the `--profiling-path` flag
- remove: `--profiling-events-log-path` flag
close#8249
Changelog:
- add: `profiling/snapshot` request that takes a heap dump of the language server and puts it in the `ENSO_DATA_DIRECTORY/profiling` direcotry
Attaching or modifying a visualizations returns early on, to avoid a situation when a background job is stalled (by other jobs) and eventually the request timeouts.
This has an unfortunate consequence that any error reported in the `UpsertVisualizationJob` cannot be reported as a directly reply to a request because the sender has already been removed from the list.
Added more logs to discover why we get errors in the first place.
Modified the API a bit so that we carry `VisualizationContext` instead of three parameters all over the place.
Bonus:
Modified `JsonRpcServerTestKit` to implicitly require a position so that we get better error reporting on failures.
Upgrade to GraalVM JDK 21.
```
> java -version
openjdk version "21" 2023-09-19
OpenJDK Runtime Environment GraalVM CE 21+35.1 (build 21+35-jvmci-23.1-b15)
OpenJDK 64-Bit Server VM GraalVM CE 21+35.1 (build 21+35-jvmci-23.1-b15, mixed mode, sharing)
```
With SDKMan, download with `sdk install java 21-graalce`.
# Important Notes
- After this PR, one can theoretically run enso with any JRE with version at least 21.
- Removed `sbt bootstrap` hack and all the other build time related hacks related to the handling of GraalVM distribution.
- `project-manager` remains backward compatible - it can open older engines with runtimes. New engines now do no longer require a separate runtime to be downloaded.
- sbt does not support compilation of `module-info.java` files in mixed projects - https://github.com/sbt/sbt/issues/3368
- Which means that we can have `module-info.java` files only for Java-only projects.
- Anyway, we need just a single `module-info.class` in the resulting `runtime.jar` fat jar.
- `runtime.jar` is assembled in `runtime-with-instruments` with a custom merge strategy (`sbt-assembly` plugin). Caching is disabled for custom merge strategies, which means that re-assembly of `runtime.jar` will be more frequent.
- Engine distribution contains multiple JAR archives (modules) in `component` directory, along with `runner/runner.jar` that is hidden inside a nested directory.
- The new entry point to the engine runner is [EngineRunnerBootLoader](https://github.com/enso-org/enso/pull/7991/files#diff-9ab172d0566c18456472aeb95c4345f47e2db3965e77e29c11694d3a9333a2aa) that contains a custom ClassLoader - to make sure that everything that does not have to be loaded from a module is loaded from `runner.jar`, which is not a module.
- The new command line for launching the engine runner is in [distribution/bin/enso](https://github.com/enso-org/enso/pull/7991/files#diff-0b66983403b2c329febc7381cd23d45871d4d555ce98dd040d4d1e879c8f3725)
- [Newest version of Frgaal](https://repo1.maven.org/maven2/org/frgaal/compiler/20.0.1/) (20.0.1) does not recognize `--source 21` option, only `--source 20`.
close#8248
Changelog:
- add: `profiling/start` request starts the sampler and starts collecting runtime events to the log file
- add: `profiling/stop` request stop the sampler and write the profiling data to the `$ENSO_DATA_DIR/profiling` directory
- refactor: rewrite the profiling logic into Java
Fixes a random crash (*) during instrumentation. Notice how `onTailCallReturn` calls `onReturnValue` with `null` frame.
Bonus: noticed that for some reason we weren't getting logs for `ExecutionService`. This turned out to be the problem with the logger name which by default was `[enso]` not
`[enso.org.enso.interpreter.service.ExecutionService`] and there is some logic there that normalizes the name and assumed a dot after `enso`. This change fixes the logic.
(*)
```
[enso.org.enso.interpreter.service.ExecutionService] Execution of function main failed (Cannot invoke "com.oracle.truffle.api.frame.VirtualFrame.materialize()" because "frame" is null).
java.lang.NullPointerException: Cannot invoke "com.oracle.truffle.api.frame.VirtualFrame.materialize()" because "frame" is null
at org.enso.interpreter.instrument.IdExecutionInstrument$IdEventNodeFactory$IdExecutionEventNode.onReturnValue(IdExecutionInstrument.java:246)
at org.enso.interpreter.instrument.IdExecutionInstrument$IdEventNodeFactory$IdExecutionEventNode.onTailCallReturn(IdExecutionInstrument.java:274)
at org.enso.interpreter.instrument.IdExecutionInstrument$IdEventNodeFactory$IdExecutionEventNode.onReturnExceptional(IdExecutionInstrument.java:258)
at org.graalvm.truffle/com.oracle.truffle.api.instrumentation.ProbeNode$EventProviderChainNode.innerOnReturnExceptional(ProbeNode.java:1395)
at org.graalvm.truffle/com.oracle.truffle.api.instrumentation.ProbeNode$EventChainNode.onReturnExceptional(ProbeNode.java:1031)
at org.graalvm.truffle/com.oracle.truffle.api.instrumentation.ProbeNode.onReturnExceptionalOrUnwind(ProbeNode.java:296)
at org.enso.interpreter.node.ExpressionNodeWrapper.executeGeneric(ExpressionNodeWrapper.java:119)
at org.enso.interpreter.node.ClosureRootNode.execute(ClosureRootNode.java:85)
at jdk.internal.vm.compiler/org.graalvm.compiler.truffle.runtime.OptimizedCallTarget.executeRootNode(OptimizedCallTarget.java:718)
at jdk.internal.vm.compiler/org.graalvm.compiler.truffle.runtime.OptimizedCallTarget.profiledPERoot(OptimizedCallTarget.java:641)
at jdk.internal.vm.compiler/org.graalvm.compiler.truffle.runtime.OptimizedCallTarget.callBoundary(OptimizedCallTarget.java:574)
at jdk.internal.vm.compiler/org.graalvm.compiler.truffle.runtime.OptimizedCallTarget.doInvoke(OptimizedCallTarget.java:558)
at jdk.internal.vm.compiler/org.graalvm.compiler.truffle.runtime.OptimizedCallTarget.callDirect(OptimizedCallTarget.java:504)
at jdk.internal.vm.compiler/org.graalvm.compiler.truffle.runtime.OptimizedDirectCallNode.call(OptimizedDirectCallNode.java:69)
at org.enso.interpreter.node.callable.thunk.ThunkExecutorNode.doCached(ThunkExecutorNode.java:69)
at org.enso.interpreter.node.callable.thunk.ThunkExecutorNodeGen.executeAndSpecialize(ThunkExecutorNodeGen.java:207)
at org.enso.interpreter.node.callable.thunk.ThunkExecutorNodeGen.executeThunk(ThunkExecutorNodeGen.java:167)
...
```
# Important Notes
Fixes regressions introduced in #8148 and #8162
This change fixes a regression introduced in #7918, which prevented the execution from setting the right log level either via env var or parameter.
Now passing either of the options returns logs of the expected level in the log file:
- `ENSO_LOG_TO_FILE_LOG_LEVEL = trace`
- ... `-vv` ...
Fixes#8274
Previously custom log levels applied only to non-Truffle loggers. To allow it, filtering has to be applied appropriately at two places - first at Java's Handler and then essentially re-confirmed at SLF4J's logger to which the former forwards to.
Filters compose in an `AND` condition, therefore default log level check had to be merged into our custom filters.
`TruffleLogger` has a builtin functionality to perform the filtering when context is configured appropriately. This should be much more efficient than adding a `Filter` to the JUL Handler explicitly.
# Important Notes
```
JAVA_OPTS="-org.enso.compiler.SerializationManager.Logger.level=debug" ./built-distribution/enso-engine-0.0.0-dev-linux-amd64/enso-0.0.0-dev/bin/enso --run
```
will now assign a custom log level to `SerializationManager` Logger.
@radeusgd pointed out that tests are checking that the engine does not send updates when the dataflow error changes. In the end, it turned out that those tests were not checking what they said, and the engine sent the proper updates.
A long running initialization of the component blocks the execution significantly. Removed the `BlockingInitialization` and replaced it with a more fine grained locking.
# Important Notes
Added a simple workaround for potential slow initialization of backend - more retries. We should have a better UX in that case anyway, but due to absence of work on that in old GUI, this will have to do.
The main should be problem should be addressed already by other backend changes. Changes in `app` should only be treated as _just in case something bad happens_.
* OpenFileCmd sends a reply when finished
The lack of a reply, and therefore non-determinism about when the OpenFile handler can finish, led to some sporadic instability. Once the caches format got changed, things were taking such a long time that I wasn't able to start even a basic project (requests would start to time out).
The change also removes PushContextCmd from synchronous cmds (as introduced in #798 to remove initialization problems in tests as well as in real scenarios); the change did the job but was also a bit controversial.
The change can also help with randomly failing applies (#8174), as the IDE kept closing and opening the project, which might have exploited the race condition.
* Adapt tests
* Make tests more resilient to out of order messages
* Drop retries that lead to confusing errors
* less random failures
* s/OpenFileNotification/OpenFileRequest
Debugging the issue reported by @PabloBuchu when the language server initialization hangs in the cloud. I'm still not sure what is happening in the cloud because I was not able to reproduce it when trying to connect two clients simultaneously.
Another potential source of the issue may be the Scala Future -> Java CompletableFuture conversion, but I didn't find anything suspicious there.
The change upgrades the `directory-watcher` library, hoping that it will fix the problem reported in #7695 (there have been a number of bug fixes in the MacOS listener since then).
Once upgraded, tests in `WatcherAdapterSpec` had to be fixed, because the logic that attempted to ensure the proper initialization order in the test using a semaphore was wrong. The watcher is now started using `watchAsync`, which only returns the future once the watcher has successfully registered for the paths. Ideally the authors of the library would make the registration bit public
(3218d68a84/core/src/main/java/io/methvin/watcher/DirectoryWatcher.java (L229C7-L229C20)) but it is the best we can do so far.
Had to adapt to the new API in PathWatcher as well, ensuring the right order of initialization.
Should fix#7695.
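The initialization-order concern can be illustrated with the plain JDK `WatchService` (not the `directory-watcher` library itself); the point is that the "ready" signal must only be produced after path registration has completed:
```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchService;
import java.util.concurrent.CompletableFuture;

// Illustrative only, using the JDK WatchService rather than directory-watcher:
// the "ready" future completes only after the paths are actually registered,
// otherwise consumers (and tests) may race the watcher.
final class WatcherStartup {
  static CompletableFuture<WatchService> start(Path root) {
    return CompletableFuture.supplyAsync(() -> {
      try {
        WatchService service = FileSystems.getDefault().newWatchService();
        root.register(
            service,
            StandardWatchEventKinds.ENTRY_CREATE,
            StandardWatchEventKinds.ENTRY_MODIFY,
            StandardWatchEventKinds.ENTRY_DELETE);
        // Only now is it safe to tell callers the watcher is operational.
        return service;
      } catch (java.io.IOException e) {
        throw new java.util.concurrent.CompletionException(e);
      }
    });
  }
}
```
Completing the future only after `register` returns is the ordering guarantee the old semaphore-based test logic tried (and failed) to provide.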
Fixes#8186 by turning the `IllegalStateException` into a log message. Re-assigning `BindingsMap` can happen in the IDE, where evaluation of modules is repeated again and again. In addition, avoid dropping errors in the compiler without them being noticed.
Using a `TruffleLogger` in `SerializationManager` that is bound to the engine rather than the context prevents reaching an illegal state when using thread pools.
Also cleaned up some tests for consistency.
To verify the fix
```diff
--- a/engine/runtime/src/main/scala/org/enso/compiler/SerializationManager.scala
+++ b/engine/runtime/src/main/scala/org/enso/compiler/SerializationManager.scala
@@ -31,7 +31,7 @@ final class SerializationManager(compiler: Compiler) {
import SerializationManager._
/** The debug logging level. */
- private val debugLogLevel = Level.FINE
+ private val debugLogLevel = Level.INFO
```
and run
`sbt:enso> runtime/test`
Closes#8147.
The change eliminates a race condition that can appear between the `PathWatcher` deregistering the last client (and shutting down) and `ReceivesTreeUpdatesHandler` receiving the termination message of that event. In between, a message could still arrive at the mentioned `PathWatcher`, resulting in a timeout.
The scenario has become rather common in CI tests, resulting in spurious failures. The problem could also be simulated by adding an artificial `Thread.sleep` between sending the reply in `PathWatcher` and stopping the actor.
Fixes#8151.
# Important Notes
Example failure https://github.com/enso-org/enso/actions/runs/6642894609/job/18048711561?pr=8145
To simulate the scenario
```diff
diff --git a/engine/language-server/src/main/scala/org/enso/languageserver/filemanager/PathWatcher.scala b/engine/language-server/src/main/scala/org/enso/languageserver/filemanager/PathWatcher.scala
index 713cfe1182..88afac95cb 100644
--- a/engine/language-server/src/main/scala/org/enso/languageserver/filemanager/PathWatcher.scala
+++ b/engine/language-server/src/main/scala/org/enso/languageserver/filemanager/PathWatcher.scala
@@ -110,6 +110,7 @@ final class PathWatcher(
case UnwatchPath(client) =>
sender() ! CapabilityReleased
+ Thread.sleep(2000)
unregisterClient(root, base, clients - client)
```
Towards reduced reliance on Scala semantics.
Translated IR.scala to IR.java and extracted implicits that now need to be imported explicitly.
# Important Notes
1:1 translation. For now `@Identifier` and `@ExternalID` represent the old type aliases but are not verified at compile time.
This is because, in a mixed Scala/Java world, it seems impossible to employ frameworks such as Checker.
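A minimal sketch of the idea, with hypothetical names; the real `@Identifier`/`@ExternalID` annotations live in the IR code and may differ in retention and targets:
```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.UUID;

// Illustrative only: a documentation-style annotation standing in for the old
// Scala type alias. Nothing verifies its usage at compile time.
@Retention(RetentionPolicy.SOURCE)
@Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
@interface Identifier {}

final class IrNodeSketch {
  // The annotation documents intent; the underlying type is still just a UUID.
  @Identifier
  private final UUID id;

  IrNodeSketch(@Identifier UUID id) {
    this.id = id;
  }
}
```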
`capability/acquire` with an invalid method `executionContext/canModify` would previously time out because the capability router simply didn't support that kind of capability request.
Now, rather than timing out (which would indicate a failure on the LS side), we report an error.
Closes#8038.
In order to execute `runtime-parser` in the browser, we need to slightly simplify the dependencies of our code.
# Important Notes
The actual PR explaining why these changes are desirable is #8094
- Fixes#8049
- Adds tests for handling of Date_Time upload/download in Postgres.
- Adds tests for edge cases of handling of Decimal and Binary types in Postgres.
close#8033
Changelog:
- update: run language server initialization once
- fix: issues with async `getSuggestionDatabase` message handling in new IDE
- update: implement unique background jobs
- refactor: initialization logic to Java
- refactor: `UniqueJob` to a marker interface
Improved warning reporting by not including IR in user-directed warning messages. Made sure that fresh names retain some knowledge of the name from which they were created.
Closes#7963.
Unmasked position info for the failed edits so that it is possible to understand what edits are being attempted at all. So far I have been unable to locate the problem based solely on the logs and the stack trace. Added a test confirming that making edits at the end of the file continues to be fine (that's where the problem usually manifests itself).
# Important Notes
References #7978
- Validate spans during existing lexer and parser unit tests, and in `enso_parser_debug`.
- Fix lost span info causing failures of updated tests.
# Important Notes
- [x] Output of `parse_all_enso_files.sh` is unchanged since before #7881 (modulo libs changes since then).
- When the parser encounters an input with the first line indented, it now creates a sub-block for lines at that indent level, and emits a syntax error (every indented block must have a parent).
- When the parser encounters a number with a base but no digits (e.g. `0x`), it now emits a `Number` with `None` in the digits field rather than a 0-length digits token.
The `AmbiguousImport` error and the `DuplicatedImport` warning must not have `Source` as one of their fields, or the compiler will crash during a serialization attempt.
Changed the implementation to provide it as a parameter to the `message` method instead.
Fixes#8089
Having modest-size files in a project would lead to a timeout when the project was first initialized. This became apparent when testing delivered `.enso-project` files with some data files. After some digging it turned out there was a bug in JGit
(https://bugs.eclipse.org/bugs/show_bug.cgi?id=494323) which meant that adding such files was really slow. The fix implemented there is not on by default, but even with `--renormalization` turned off I did not see an improvement.
In the end it didn't make sense to add the `data` directory to our version control, or any files other than those in `src` or some meta files in `.enso`. Not including such files eliminates the first-use initialization problems.
# Important Notes
To test, pick an existing Enso project with some data files in it (> 100MB) and remove the `.enso/.vcs` directory. Previously it would time out on the first try (and work in successive runs). Now it works even on the first try.
The crash:
```
[org.enso.languageserver.requesthandler.vcs.InitVcsHandler] Initialize project request [Number(2)] for [f9a7cd0d-529c-4e1d-a4fa-9dfe2ed79008] failed with: null.
java.util.concurrent.TimeoutException: null
at org.enso.languageserver.effect.ZioExec$.<clinit>(Exec.scala:134)
at org.enso.languageserver.effect.ZioExec.$anonfun$exec$3(Exec.scala:60)
at org.enso.languageserver.effect.ZioExec.$anonfun$exec$3$adapted(Exec.scala:60)
at zio.ZIO.$anonfun$foldCause$4(ZIO.scala:683)
at zio.internal.FiberRuntime.runLoop(FiberRuntime.scala:904)
at zio.internal.FiberRuntime.evaluateEffect(FiberRuntime.scala:381)
at zio.internal.FiberRuntime.evaluateMessageWhileSuspended(FiberRuntime.scala:504)
at zio.internal.FiberRuntime.drainQueueOnCurrentThread(FiberRuntime.scala:220)
at zio.internal.FiberRuntime.run(FiberRuntime.scala:139)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1395)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
```
Prevents [this NullPointerException](https://github.com/enso-org/enso/pull/8044#issuecomment-1763736570) by making sure `ConstantNode` rejects `null` during construction.
# Important Notes
```ruby
from Standard.Base import all
polyglot java import java.util.Random
main =
    operator1 = Random.new
```
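A minimal sketch of the construction-time guard, not the actual `ConstantNode` code:
```java
import java.util.Objects;

// Illustrative only: reject a null constant eagerly at construction time so the
// failure surfaces where the node is built, not later during execution.
final class ConstantNodeSketch {
  private final Object value;

  ConstantNodeSketch(Object value) {
    this.value = Objects.requireNonNull(value, "constant value must not be null");
  }

  Object execute() {
    return value;
  }
}
```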
* Reduce extra output in compilation and tests
I couldn't stand the amount of extra output that we got when compiling
a clean project and when executing regular tests. We should strive to
keep output clean and not print anything additional to stdout/stderr.
* Getting rid of explicit setup by service loading
In order for SLF4J to use service loading correctly, we had to upgrade to the latest slf4j. Unfortunately `TestLogProvider`, which essentially delegates to the `logback` provider, will lead to spurious warnings about multiple ambiguous providers. In order to dictate which one to use, and therefore eliminate the warnings, we can use the `slf4j.provider` property, which is only available in slf4j 2.x.
Now, there is no need to explicitly call `LoggerSetup.get().setup()` as
that is being called during service setup.
* legal review
* linter
* Ensure ConsoleHandler uses the default level
ConsoleHandler's constructor uses `Level.INFO` which is unnecessary for
tests.
* report warnings
close#7819
Warnings
```
Execution of visualization [...] on value [...] of [Panic] failed.
```
are produced when you are typing an incomplete expression in the component browser and can be misleading.
It seems that the Runtime Connector wasn't respecting the protocol it defined itself. The connector should wait for the `Api.InitializedNotification` message and only then start forwarding messages. So far this hasn't been a problem, or at least wasn't reported as such, because initialization was fast enough.
Modified `Handler` so that we are certain that its fields hold initialized values when they are accessed by different threads.
Should fix problems mentioned in #7898.
* Add support for https and wss
Preliminary support for https and wss. During language server startup we will read the application config and search for the `https` config with the necessary env vars set.
The configuration supports two modes of creating the SSL context - via a PKCS12 bundle or a certificate + private key (a sketch of the PKCS12 variant is shown below).
Fixes#7839.
* Added tests, improved documentation
Generic improvements along with actual tests.
* lint
* more docs + wss support
* changelog
* Apply suggestions from code review
Co-authored-by: Dmitry Bushev <bushevdv@gmail.com>
* PR comment
* typo
* lint
* make windows line endings happy
---------
Co-authored-by: Dmitry Bushev <bushevdv@gmail.com>
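A minimal sketch of the PKCS12 variant, assuming a plain JSSE setup; the class and parameter names below are illustrative and do not correspond to the actual language-server configuration keys:
```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;

// Illustrative only: build an SSLContext from a PKCS12 keystore.
public final class Pkcs12SslContext {
  public static SSLContext create(String keystorePath, char[] password) throws Exception {
    KeyStore keyStore = KeyStore.getInstance("PKCS12");
    try (FileInputStream in = new FileInputStream(keystorePath)) {
      keyStore.load(in, password);
    }
    KeyManagerFactory kmf =
        KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
    kmf.init(keyStore, password);
    SSLContext context = SSLContext.getInstance("TLS");
    context.init(kmf.getKeyManagers(), null, null);
    return context;
  }
}
```
The certificate + private key mode would instead assemble the `KeyStore` programmatically from the parsed certificate and key before handing it to the `KeyManagerFactory`.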
It looks like visualization commands required the context lock unnecessarily. Context manager methods are synchronized and therefore should not need the lock before submitting a corresponding background job.
Additionally, the presence of a context lock leads to a deadlock:
1. Consider a long-running execution of a program that does not finish within the 5 seconds
2. In the meantime there comes either an `AttachVisualization` or `DetachVisualization` request from the client
The latter will get stuck and eventually time out because it cannot acquire the lock within the required time limits. It is still possible that a single long-running `ExecuteJob` might block other ones (including visualization ones), but that's separate work.
Fixes some issues present in #7941.
# Important Notes
We still need to investigate `ExecuteJob`'s need for the context lock, which might delay the execution of other background jobs that require it as well (mostly concerned about visualization).
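A minimal sketch of the timeout scenario described above, using a plain `ReentrantLock` rather than the actual runtime locks:
```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative only: a long-running job holding the context lock makes a timed
// tryLock from a visualization command fail, which surfaces as a request timeout.
final class ContextLockDemo {
  private final ReentrantLock contextLock = new ReentrantLock();

  void longRunningExecution() throws InterruptedException {
    contextLock.lock();
    try {
      Thread.sleep(10_000); // stands in for a long program execution
    } finally {
      contextLock.unlock();
    }
  }

  boolean attachVisualization() throws InterruptedException {
    // With the old behaviour this waits for the lock and eventually times out.
    if (!contextLock.tryLock(5, TimeUnit.SECONDS)) {
      return false;
    }
    try {
      return true; // submit the corresponding background job here
    } finally {
      contextLock.unlock();
    }
  }
}
```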
* Q: Is it normal for `--inspect` mode to print two debug urls?
* A: No, it should print just one.
* Q: Might putting a Java breakpoint there, to find out why the chromeinspector gets initialized twice, reveal the culprit?
* A: The additional listener is happening [here](https://github.com/enso-org/enso/blob/develop/engine/runner/src/main/scala/org/enso/runner/ContextFactory.scala#L117).
# Important Notes
There is no easy check for a language being present without creating an `Engine`. It was thought that creating an `Engine` is a cheap operation, but it seems to have some downsides. Let's use the `ENSO_JAVA` environment variable to decide whether _experimental Espresso_ support shall be enabled.
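A minimal sketch of such a check, assuming the GraalVM polyglot API; the language ids and the exact value the `ENSO_JAVA` variable is compared against are assumptions here, not the actual runner logic:
```java
import org.graalvm.polyglot.Context;

// Illustrative only: gate the experimental Espresso "java" language on an
// environment variable instead of probing for it by building a throw-away Engine.
final class EspressoSupport {
  static Context createContext() {
    // The compared value ("espresso") is an assumption for this sketch.
    boolean enableJava = "espresso".equals(System.getenv("ENSO_JAVA"));
    String[] languages = enableJava
        ? new String[] {"enso", "java"}
        : new String[] {"enso"};
    return Context.newBuilder(languages).allowAllAccess(true).build();
  }
}
```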
Adds the ability to declare a module as *private*. Modifies the parser to add the `private` keyword as a reserved keyword. All the checks for private modules are implemented as an independent *Compiler pass*. No checks are done at runtime.
# Important Notes
- Introduces new keyword - `private` - a reserved keyword.
- Modules that have the `private` keyword as the first statement are declared as *private* (project-private)
- A public module cannot have private submodules and vice versa.
  - This would require runtime access checks
- See #7088 for the specification.
* Enable log-to-file configuration
PR #7825 enabled parallel logging to a file with a much more
fine-grained log level by default.
However, logging at the `TRACE` level on Windows appears to still be problematic.
This PR reduces the default log level for the file from `TRACE` to `DEBUG` and allows controlling it via an environment variable if one wishes to change the verbosity without making code changes.
* PR comments
close#7750
close#7834
Changelog:
- update: project manager uses the packaged language server to open projects
- fix: remove stack traces from connection errors on initial ping handler request (when the language server is booting)
- update: add engine and edition versions to the `initProtocolConnection` response for easier debug
- update: do not resolve project ensoVersion in the `project/list` to eliminate unnecessary network calls
* Always log verbose to a file
The change adds an option by default to always log to a file with
verbose log level.
The implementation is a bit tricky because in the most common use-case
we have to always log in verbose mode to a socket and only later apply
the desired log levels. Previously, the socket appender would respect the desired log level even before forwarding the log.
If by default we log to a file, verbose mode is simply ignored and does
not override user settings.
To test, run `project-manager` with the `ENSO_LOGSERVER_APPENDER=console` env variable. That will output to the console with the default `INFO` level and `TRACE` log level for the file.
* add docs
* changelog
* Address some PR requests
1. Log INFO level to CONSOLE by default
2. Change runner's default log level from ERROR to WARN
It took a while to figure out why the correct log level wasn't being passed to the language server, which therefore ignored the (desired) verbose logging to the log file.
* linter
* 3rd party uses log4j for logging
Getting rid of the warning by adding a log4j over slf4j bridge:
```
ERROR StatusLogger Log4j2 could not find a logging implementation. Please add log4j-core to the classpath. Using SimpleLogger to log to the console...
```
* legal review update
* Make sure tests use test resources
Having `application.conf` in both `src/main/resources` and `test/resources` does not guarantee that tests will pick up the latter. Instead, by default it seems to do some kind of merge of the different configurations, which is far from desired.
* Ensure native launcher tests log to console only
Logging to the console and (temporary) files is problematic on Windows.
The CI also revealed a problem with the native configuration because it
was not possible to modify the launcher via env variables as everything
was initialized during build time.
* Adapt to method changes
* Potentially deal with Windows failures
- Closes#7461 by introducing a `Date_Time_Formatter` type and making parsing date time formats more robust and safer.
- The default ('simple') set of patterns is slightly simplified and made case-insensitive (except for `M/m` and `H/h`) to avoid the `YYYY` vs `yyyy` issues and make it less error-prone.
- `YYYY` now has the same meaning as `yyyy` in simple mode. The old meaning (week-based year) is moved to a _separate mode_, triggered by `Date_Time_Formatter.from_iso_week_date_pattern` (the underlying Java pitfall is sketched after this list).
- Full Java syntax, as well as custom-built Java `DateTimeFormatter` can also be used by `Date_Time_Formatter.from_java`.
- Text-based constants (e.g. `ISO_ZONED_DATE_TIME`) have now become methods on `Date_Time_Formatter` (e.g. `Date_Time_Formatter.iso_zoned_date_time`).
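For reference, the Java pitfall that motivates treating `YYYY` like `yyyy` in simple mode (plain `java.time`, not Enso code):
```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Illustrative only: 'yyyy' is the calendar year, while 'YYYY' is the
// week-based year, which differs around New Year.
public final class YearPatternPitfall {
  public static void main(String[] args) {
    LocalDate date = LocalDate.of(2019, 12, 30); // ISO week 1 of 2020
    System.out.println(DateTimeFormatter.ofPattern("yyyy-MM-dd").format(date)); // 2019-12-30
    // Prints 2020-12-30 under ISO/most locale week rules.
    System.out.println(DateTimeFormatter.ofPattern("YYYY-MM-dd").format(date));
  }
}
```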
* Improve shutdown logic of language server
This PR addresses problems mentioned in #7470 and #7729:
- shutting down a language server explicitly will not lead to a soft shutdown
- the `project/status` endpoint returns the state of the language server
`LanguageServerController` is now also signed up for `ClientConnect` messages. For this to be unambiguous, we need to carry around the port number of the language server as a way of identifying the right one.
One can now use `project/status` to additionally determine the state of
the language server.
Also relies on a proper fix for #7765.
* changelog
* PR comments
close#7320
Changelog:
- update: enable conversion suggestions
- fix: conversion suggestion building
- fix: conversion suggestion types
- fix: conversion JSON-RPC representation
# Important Notes
For example, the [`Day_Of_Week_From`](5150c14afd/distribution/lib/Standard/Base/0.0.0-dev/src/Data/Time/Day_Of_Week_From.enso) conversion is sent as
```json
{
  "type":"Add",
  "id":32,
  "suggestion":{
    "type":"method",
    "module":"Standard.Base.Data.Time.Day_Of_Week_From",
    "name":"from",
    "arguments":[
      {
        "name":"that",
        "reprType":"Standard.Base.Data.Numbers.Integer",
        "isSuspended":false,
        "hasDefault":false,
        "defaultValue":null,
        "tagValues":null
      },
      {
        "name":"first_day",
        "reprType":"Standard.Base.Data.Time.Day_Of_Week.Day_Of_Week",
        "isSuspended":false,
        "hasDefault":true,
        "defaultValue":"Day_Of_Week.Sunday",
        "tagValues":[
          "Standard.Base.Data.Time.Day_Of_Week.Day_Of_Week.Sunday",
          "Standard.Base.Data.Time.Day_Of_Week.Day_Of_Week.Monday",
          "Standard.Base.Data.Time.Day_Of_Week.Day_Of_Week.Tuesday",
          "Standard.Base.Data.Time.Day_Of_Week.Day_Of_Week.Wednesday",
          "Standard.Base.Data.Time.Day_Of_Week.Day_Of_Week.Thursday",
          "Standard.Base.Data.Time.Day_Of_Week.Day_Of_Week.Friday",
          "Standard.Base.Data.Time.Day_Of_Week.Day_Of_Week.Saturday"
        ]
      },
      {
        "name":"start_at_zero",
        "reprType":"Standard.Base.Data.Boolean.Boolean",
        "isSuspended":false,
        "hasDefault":true,
        "defaultValue":"False",
        "tagValues":[
          "Standard.Base.Data.Boolean.Boolean.True",
          "Standard.Base.Data.Boolean.Boolean.False"
        ]
      }
    ],
    "selfType":"Standard.Base.Data.Time.Day_Of_Week.Day_Of_Week",
    "returnType":"Standard.Base.Data.Time.Day_Of_Week.Day_Of_Week",
    "isStatic":false,
    "documentation":" Convert from an integer to a Day_Of_Week\n\nArguments:\n- `that`: The first day of the week.\n- `first_day`: The first day of the week.\n- `start_at_zero`: If True, first day of the week is 0 otherwise is 1.",
    "annotations":[]
  }
}
```
- Added a `FileSystemSPI` allowing protocol resolution to a target type.
- Separated `Input_Stream` and `Output_Stream` from `File` to allow use in other spaces.
- `File_Format` types `read_web` changed to be `read_stream` working with `InputStream`.
- Added directory listing to `Auto_Detect` allowing for `Data.read` to list a folder.
- Adjusted HTTP to return an `InputStream` not a `byte[]`:
- `Response_Body` adjusted to wrap an `InputStream`.
- Added ability to materialize to either an in-memory vector (<4KB) or a temporary file (see the sketch after this list).
- `Data.fetch` will materialize if not a recognized mime-type.
- Added `HTTP_Error` to handle IO exceptions from the stream.
- `Excel_Format` now supports mime-type and reading a stream.
- `Excel_Workbook` can now get a `Excel_Section` using `read_section`.
- Added S3 APIs:
- `parse_uri`: splits an S3 URI into bucket and key.
- `list_objects`: list the items in a S3 bucket with specified prefix.
- `read_bucket`: list prefixes and keys with a delimiter in a S3 bucket with specified prefix.
- `head`: either the head_bucket API (tests existence) or the head_object API (reads object metadata).
- `get_object`: gets an object from S3 returning as a `Response_Body`.
- Added `S3_File` type acting like a `File`:
- No support for writing in this PR.
- **ToDo:** recursive listing, glob filtering, exists, size.
- Fixed a few invalid type signature lines.
- Moved `create` methods for `Postgres_Connection` and `SQLite_Connection` into type instead of module.
- Renamed `Column_Fetcher.Builder` to `Column_Fetcher_Builder`.
- Fixed bug with `select_into` in Dry Run mode creating permanent tables.
**ToDo:** Unit tests.
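A minimal sketch of the "materialize small responses in memory, spill large ones to a temporary file" idea mentioned in the list above; the names and threshold handling are illustrative, not the actual `Response_Body` implementation:
```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative only: keep a response stream fully in memory if it is small,
// otherwise spill it to a temporary file. The 4 KB threshold mirrors the
// description above.
final class StreamMaterializer {
  private static final int THRESHOLD = 4 * 1024;

  static InputStream materialize(InputStream input) throws Exception {
    // Read one byte more than the threshold to detect "too large" payloads.
    byte[] head = input.readNBytes(THRESHOLD + 1);
    if (head.length <= THRESHOLD) {
      // Small payload: keep it fully in memory.
      return new ByteArrayInputStream(head);
    }
    // Large payload: write what was already read plus the rest to a temp file.
    Path tmp = Files.createTempFile("materialized-body", ".tmp");
    try (var out = Files.newOutputStream(tmp)) {
      out.write(head);
      input.transferTo(out);
    }
    return Files.newInputStream(tmp);
  }
}
```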
Closes#7677 by eliminating the _stack overflow exception_. In general it seems _too adventurous_ to walk members of random foreign objects. There can be anything there, including cycles. Rather than trying to be too smart in these cases, let's just rely on the `InteropLibrary.isIdentical` message.
# Important Notes
Calling `sort` on the `numpy` array no longer yields an error, but the array isn't sorted - that needs a fix on the Python side: https://github.com/oracle/graalpython/issues/354 - once it is in, the elements will be treated as numbers and the sorting happens automatically (without any changes in Enso code).
close#7520
Changelog:
- update: SectionsToBinOp compiler pass produces function application for left sections
- refactor: simplify the registration of builtin methods
close#7765
Changelog:
- update: instead of relying on the connection closed events, the `sendSuggestionsDatabase` request initiates the suggestions database re-indexing
close#7608
Changelog:
- update: log separately the evaluation of visualization expression and its arguments
- update: add visualization expression, its arguments, and the value type to the log
# Important Notes
Example
```
[TRACE] [2023-09-06T20:41:45+03:00] [enso] Executing visualization [VisualizationConfiguration(d195fdd8-d4e8-400f-a0d4-e50417eddd0a,ModuleMethod(MethodPointer(Standard.Visualization.Table.Visualization,Standard.Visualization.Table.Visualization,prepare_visualization),Vector(1000)),local.New_Project_1.Main)] on expression [ddc060df-9b59-48e5-bc61-aca849347343] of [class org.enso.interpreter.runtime.data.vector.Vector$Generic]...
```
Fixes the _RefactoringTests - rename project_ part of #7775
Changelog:
- update: send the ok response before the notification to fix the order of events in tests
This PR addresses two problems mentioned in #7766:
1. A random integer overflow, likely caused by a bug in Rust parser
2. A concurrent access to a methods' map
Re 1: Unable to reproduce, but that doesn't mean it won't happen again. Added a try/catch to get the offending source code into the logs **and** not crash hard when it occurs.
Re 2: Changing the methods map from `HashMap` to `ConcurrentHashMap`. Due to a poor design we leaked the underlying structure in a number of places, unnecessarily. `ConcurrentHashMap` does not accept `null` keys, therefore, due to the leaking implementation, we had to ensure that `methods` of `ModuleScope` never escapes as-is.
Both workarounds should ensure that we don't crash hard when these issues appear.
Closes#7766
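A minimal sketch of the two concerns around the `ConcurrentHashMap` switch (rejecting null keys and not leaking the mutable map); the names are illustrative, not the actual `ModuleScope` code:
```java
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: ConcurrentHashMap throws NullPointerException on null keys
// or values, and the underlying map should not escape as-is.
final class MethodRegistry {
  private final Map<String, Object> methods = new ConcurrentHashMap<>();

  void register(String name, Object method) {
    // Guard explicitly so callers get a clear error instead of an NPE deep inside the map.
    if (name == null || method == null) {
      throw new IllegalArgumentException("method name and body must not be null");
    }
    methods.put(name, method);
  }

  // Expose a read-only view instead of leaking the mutable map.
  Map<String, Object> view() {
    return Collections.unmodifiableMap(methods);
  }
}
```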
- Closes#7238
- Aligns `update_database_table` to a more consistent and clearer API - `update_rows`.
- Adds a `truncate_table` helper function, to pair up with `drop_table`. Both are `PRIVATE` for now.
- Adds tests for NULLs in keys in `update_rows` and `delete_rows`.
- The behaviour is sometimes unexpected, so instead these fail with `Null_Values_In_Key_Columns`.
- Adds a workaround for https://github.com/oracle/graal/issues/7359
- Adds a workaround for a related bug where a stack frame has no name (its `rootNode.getName() == null`).
- I could not track down this bug to provide a neat repro.
This change replaces Enso's custom logger with an existing, mostly off-the-shelf logging implementation. The change attempts to provide a 1:1 replacement for the existing solution while requiring only minimal logic for the initialization.
Loggers are configured completely via the `logging-server` section in the `application.conf` HOCON file; all initial logback configuration has been removed. This opens up a lot of interesting opportunities because we can benefit from all the well-maintained slf4j implementations without being tied to any of them in terms of functionality.
The most important differences have been outlined in `docs/infrastructure/logging.md`.
# Important Notes
Addresses:
- #7253
- #6739
- Closes#7633
- Moves `Round_Spec.enso` from published `Standard.Test` into our `test/Tests` project; the `Table_Tests` that depend on it, simply `import enso_dev.Tests`.
- Changes the layout of the local libraries directory:
- It used to be `root/<namespace>/<name>`.
- Now it is `root/<dir>` - the namespace and name are now read from `package.yaml` instead.
- Adds the parent directory of the current project to the default `ENSO_LIBRARY_PATH`.
- It is treated as a secondary path, so the default `ENSO_HOME/lib` still takes precedence.
- This allows projects to reference and load 'sibling' projects easily - the only requirement is for the project to enable `prefer-local-libraries: true` or add the other local project to its edition. The edition resolution logic is **not changed**.
Refactoring deeply nested IR classes to shallow nesting.
Fixes#7017
# Important Notes
It's a big PR but fairly consistent. Split multiple hierarchy levels into subpackages.
- Update list of groups to agreed list.
- Lower case `ALIAS` names to be consistent with function names.
- Add `GROUP` to methods.
- All constructors and functions have doc comments.
- Correct a few typos (e.g. `PRVIATE`).
- Mark some more things as `PRIVATE`.
- Use `ToDo:` and `Note:` consistently.
- Order tags in doc comment.
# Important Notes
We don't have all the doc comments on types yet and will want to add them in the future.
close#7604
After moving the rename action to the dashboard, IDE is unaware of the new project name. PR implements a new `refactoring/projectRenamed` notification that is sent from the server to clients and informs them about the changed project name.
# Important Notes
https://github.com/enso-org/enso/assets/357683/7c62726d-217e-4e69-8e48-568e0b7b8c34
While investigating behavior of
```
sbt:std-benchmarks> withDebug benchOnly --dumpGraphs -- Vector_Operations.Max_Stat
```
in IGV I realized there is a deep chain of nodes, related to work with warnings, when reading an element of `Vector`. There is an invocation of `WarningsLibrary` on `this` - that's probably unnecessary, as we know how it is going to resolve. This PR skips one such level of indirection by directly delegating to `this.storage`.
However, I haven't seen any effect of this change on peak performance - the library overhead seems to disappear anyway. I wanted to bring this finding to your attention and perform an independent measurement on our CI server.
MethodProcessor generates code for builtin method invocation that is wrapped in `try-catch` and handles a predefined subset of `RuntimeException`. So far, only `com.oracle.truffle.api.dsl.UnsupportedSpecializationException`.
Seeing plenty of
```
java.lang.NullPointerException: Some(Null receiver values are not supported by libraries.)
at org.graalvm.truffle/com.oracle.truffle.api.library.LibraryFactory.dispatch(LibraryFactory.java:528)
at org.graalvm.truffle/com.oracle.truffle.api.library.LibraryFactory.getUncached(LibraryFactory.java:396)
at org.enso.interpreter.runtime.error.WarningsLibraryGen$UncachedDispatch.hasWarnings(WarningsLibraryGen.java:440)
at org.enso.interpreter.instrument.job.ProgramExecutionSupport$.sendExpressionUpdate(ProgramExecutionSupport.scala:366)
at org.enso.interpreter.instrument.job.ProgramExecutionSupport$.$anonfun$executeProgram$1(ProgramExecutionSupport.scala:62)
at org.enso.interpreter.instrument.job.ProgramExecutionSupport$.$anonfun$executeProgram$10(ProgramExecutionSupport.scala:151)
at java.base/java.lang.Iterable.forEach(Iterable.java:75)
at org.enso.interpreter.instrument.job.ProgramExecutionSupport$.executeProgram(ProgramExecutionSupport.scala:139)
at org.enso.interpreter.instrument.job.ProgramExecutionSupport$.$anonfun$runProgram$3(ProgramExecutionSupport.scala:217)
```
during execution of simple programs.
Added a guard to prevent us from sending expression updates when dealing with nulls.
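A minimal sketch of the guard, not the actual `ProgramExecutionSupport` code:
```java
// Illustrative only: skip sending an expression update when the computed value
// is null, instead of dispatching a Truffle library message on it.
final class ExpressionUpdateGuard {
  static boolean shouldSendUpdate(Object computedValue) {
    if (computedValue == null) {
      // Dispatching a library message (such as hasWarnings) on null triggers the
      // NullPointerException seen in the trace above.
      return false;
    }
    return true;
  }
}
```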