This change replaces an SQLite-backed suggestions repository with a simple, in-memory one.
As the `completion` functionality has been implemented entirely in the GUI, there is no need to support it in the backend, which simplifies a lot of functionality.
Closes #9650 and #9471.
# Important Notes
Loading suggestions and sending them to the GUI on startup is almost instantaneous. Previously it would take ~10s just for `Standard.Base`.
- Closes #9363
- Cleans up the Cloud mock as it got a bit messy. It still implements the bare minimum to be able to test basic secret and auth handling logic 'offline' (added very simple path resolution, only handling the minimum set of cases for the tests to work).
- Adds first implementation of caching Cloud replies.
- Currently only caching the `Enso_User.current`. This is a simple one to cache because we do not expect it to ever change, so it can be safely cached for a long period of time (I chose 2h to make it still refresh from time to time while not being noticeable).
- We may try using this for caching other values in future PRs.
This PR bumps the FlatBuffers version used by the backend to `24.3.25` (the latest version as of now).
Since the newer FlatBuffers releases come with prebuilt binaries for all platforms we target, we can simplify the build process by simply downloading the required `flatc` binary from the official FlatBuffers GitHub release page. This allows us to remove the dependency on `conda`, which was the only reliable way to get the outdated `flatc`.
The `conda` setup has been removed from the CI steps and the relevant code has been removed from the build script.
The FlatBuffers version is no longer hard-coded in the Rust build script; it is inferred from the `build.sbt` definition (similar to GraalVM).
# Important Notes
This does not affect the GUI binary protocol implementation.
While I initially wanted to update it, that turned out to be fairly non-trivial.
Because there are multiple issues with the generated TS code, it was significantly refactored by hand, so it cannot be updated automatically. Work to address this problem is left as [a future task](https://github.com/enso-org/enso/issues/9658).
As the FlatBuffers binary protocol is guaranteed to be compatible between versions (unlike the generated sources), there should be no adverse effects from bumping `flatc` only on the backend side.
* min and max
* changelog
* wip
* scale approach mostly works
* Revert "scale approach mostly works"
This reverts commit 88e6073f7a.
* review
* review
* return value type
- Closes #9289
- Ensures that we can refer through `Enso_File` to files that do not _yet_ exist - preparing us for implementing the Write functionalities for `Enso_File` (#9291).
- As asked for by @hubertp, who was encountering flaky test failures on CI in `Http_Spec` and related suites, I'm adding retry logic to make such cases much less likely.
- I've made the test server randomly fail 50% of requests, and with the retry logic the tests still pass, so this should be much more robust. In practice the failure rate is much lower (likely <1%, as most of the time these tests were working and we do a ton of requests in a single CI run).
- I moved the `with_retries` method to now be `Test.with_retries`, which can be used anywhere in our tests for retry logic (see the sketch below).
- It sleeps for 0.1s between retries. Not all kinds of tests need it; this was mostly for propagation delays in the Cloud in our tests. I was wondering if the delay should be configurable, but I think the 0.1s delay is not problematic, and if our tests are sometimes failing due to high machine load, the delay could also help.
- This _does not_ add retry logic to raw HTTP operations or `Data.fetch`. We may add that later, but that needs some further design. In such a case we may remove some retries from tests if they become unnecessary.
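For illustration, a minimal sketch of how a flaky assertion might be wrapped - the exact signature of `Test.with_retries` and the fetched URL are assumptions, not taken from this PR:
```
# Hypothetical sketch: the block is re-run on failure, sleeping 0.1s between attempts.
Test.with_retries <|
    response = Data.fetch "http://localhost:8080/get"
    response . should_be_a JS_Object
```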
This change makes sure to close the Google Analytics client after usage. This will a) ensure that resources are released properly, and b) potentially fix the exception that is causing problems on some platforms.
# Important Notes
After this change I no longer see in **my** logs:
```
io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
SEVERE: *~*~*~ Previous channel ManagedChannelImpl{logId=1, target=analyticsdata.googleapis.com:443} was not shutdown properly!!! ~*~*~*
Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
java.lang.RuntimeException: ManagedChannel allocation site
at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:102)
at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:60)
at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:51)
at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:668)
at io.grpc.ForwardingChannelBuilder2.build(ForwardingChannelBuilder2.java:260)
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:436)
at com.google.api.gax.grpc.ChannelPool.<init>(ChannelPool.java:107)
at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:85)
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:243)
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:237)
at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:226)
at com.google.analytics.data.v1beta.stub.GrpcBetaAnalyticsDataStub.create(GrpcBetaAnalyticsDataStub.java:217)
at com.google.analytics.data.v1beta.stub.BetaAnalyticsDataStubSettings.createStub(BetaAnalyticsDataStubSettings.java:288)
at com.google.analytics.data.v1beta.BetaAnalyticsDataClient.<init>(BetaAnalyticsDataClient.java:376)
at com.google.analytics.data.v1beta.BetaAnalyticsDataClient.create(BetaAnalyticsDataClient.java:358)
at org.graalvm.truffle/com.oracle.truffle.host.HostMethodDesc$SingleMethod$MHBase.invokeHandle(HostMethodDesc.java:371)
```
It's important because apparently that's where it would get stuck when trying to log that message.
`Jackson_Object` supported parsing JSON from text but not creating it. With this change, `Jackson_Object` is on par with the `JS_Object` API and replaces the latter.
The most visible differences come from more detailed parsing exception messages. Some special handling had to be added for corner cases like `NaN` or infinity.
Closes #9473.
`42 == (Error.throw "foo")` now correctly returns an `Error` rather than `False`.
# Important Notes
The error was in the wrong usage of the `org.enso.interpreter.dsl.AcceptsError` DSL annotation.
Move the types from `Standard.Table.Data` to `Standard.Table`.
Exceptions:
- `Standard.Table.Data.Report_Unmatched` => `Standard.Table.Constants`.
- `Standard.Table.Data.Join_Kind_Cross` => `Standard.Table.Internal.Join_Kind_Cross`.
Also removed constructor as an atom type.
- `Standard.Table.Extensions.Table_Ref` => `Standard.Table.Internal.Table_Ref`.
- `Standard.Table.Data.Type.Value_Type_Helpers` => `Standard.Table.Internal.Value_Type_Helpers`.
- `Standard.Table.Data.Type.Enso_Types` => `Standard.Table.Internal.Value_Type_Helpers`.
- `Standard.Table.Data.Type.Storage` => `Standard.Table.Internal.Storage`.
Changed all `Standard.Table` imports inside the project to `project` imports.
Favoured importing from `Standard.Table.Main` in `Standard.Database`.
Also fixed some linting in `Enso_File`.
- Closes #9284
- Now our tests run without the default `AWS_` config, thus ensuring that the tested setups work in a clean environment.
- After all, more complicated logic was needed for bucket access - apparently the AWS SDK only allows some operations on buckets if the client is connected to the correct region, so detection of bucket regions had to be implemented.
- Added `AWS_Region` widget based on autoscoping.
- Fixed `AWS_Credential.profile_names` crashing if no AWS config was found. Now it returns no profiles if not found. Added a regression test.
Makes `expand_to_rows` work for Table and Column.
Makes `expand_column` work for Column and Row.
This makes solving the book club exercise easier (see the sketch below).
![image](https://github.com/enso-org/enso/assets/1720119/4532fc38-8765-43d9-b6a0-61254f52239f)
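A hypothetical sketch of the new behaviour - the table contents are made up and the exact output naming is assumed:
```
# Hypothetical sketch: expanding a nested column into one row per element.
table = Table.new [["book", ["A", "B"]], ["readers", [["Tom", "Ann"], ["Joe"]]]]
table.expand_to_rows "readers"
# book | readers
# A    | Tom
# A    | Ann
# B    | Joe
```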
# Important Notes
We decided `expand_column` did not make sense for Table, as the resulting Table of Columns would rarely be what was wanted.
Fixes #9313
[Screencast from 2024-03-22 09-09-07.webm](https://github.com/enso-org/enso/assets/3919101/6ad86145-6882-4bde-993d-b1270f1ec06c)
# Important Notes
* This is a PoC, so I didn't spend time on polishing the visuals; the design will likely change.
* I modified the shortcut handler a bit, allowing multiple actions for the same binding - the actions' handlers will be called in unspecified order until one of them handles the event (i.e. does not return false).
* To make it work regardless of imports, I needed to export the AI module in Standard.Visualization. Moreover, I needed to remove `build_ai_prompt` for `Any`, because it was causing issues - expect a bug report soon.
- Fixed the `write` format dropdown.
- Added `Text_Input` widget to path in various places.
- ICONs and Widgets in various places.
- Secret name dropdown.
* Initial connection to Snowflake via an account, username and password.
* Fix databases and schemas in Snowflake.
Add warehouses.
* Add warehouse.
Update schema dropdowns.
* Add ability to set warehouse and pass at connect.
* Fix for NPE in license review
* scalafmt
* Separate Snowflake from Database.
* Scala fmt.
* Legal Review
* Avoid using ARROW for snowflake.
* Tidy up Entity_Naming_Properties.
* Fix for separating Entity_Naming_Properties.
* Allow some tweaking of the Postgres dialect to allow Snowflake to use it as well.
* Working on reading Date, Time and Date Times.
* Changelog.
* Java format.
* Make Snowflake Time and TimeStamp stuff work.
Move some responsibilities to Type_Mapping.
* Make Snowflake Time and TimeStamp stuff work.
Move some responsibilities to Type_Mapping.
* fix
* Update distribution/lib/Standard/Database/0.0.0-dev/src/Connection/Connection.enso
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
* PR comments.
* Last refactor for PR.
* Fix.
---------
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
- Closes #9300
- Now the Enso libraries are themselves capable of refreshing the access token, so there are no more problems if the token expires during a long-running workflow.
- Adds `get_optional_field` sibling to `get_required_field` for more unified parsing of JSON responses from the Cloud.
- Adds `expected_type` that checks the type of extracted fields. This way, if the response is malformed we get a nice Enso Cloud error telling us what is wrong with the payload instead of a `Type_Error` later down the line.
- Fixes `Test.expect_panic_with` to actually catch only panics. Before it used to also handle dataflow errors - but these have `.should_fail_with` instead. We should distinguish these scenarios.
`Table.from_union` creates a new table when passed a vector of tables. This is especially helpful when a grouped method is run multiple times, as it can create a unified result set.
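A hypothetical sketch (table contents made up) of unifying several results:
```
# Hypothetical sketch: combine per-group results into a single table.
t1 = Table.new [["group", ["a"]], ["count", [3]]]
t2 = Table.new [["group", ["b"]], ["count", [5]]]
Table.from_union [t1, t2]
# group | count
# a     | 3
# b     | 5
```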
- Fix `Excel_Workbook.sheet` and add a test.
- Add icon for `Table.row_count` and `DB_Table.row_count`.
- Make `join_kind` widget `Display.Always`.
- Add expression as an option to `Aggregate_Column`.
- Add `Simple_Calculation.Copy` to create a copy.
- Add defaults to `Simple_Expression` so it is less error-prone.
- Set period to default to day for `date_diff` allowing use in expressions.
- Add `Text_Left`, `Text_Right`, `Text_Length` and `Format` to `Simple_Expression`.
This is a first naïve implementation of `Table.running` that only supports count. Adding it as a scaffold for the rest of the functionality and to give us a place to agree on the API (which I changed slightly from the design).
![image](https://github.com/enso-org/enso/assets/1720119/a62a83ed-f864-4295-98ea-1007f62381b1)
# Important Notes
Only supports Statistic.Count. Other functionality to follow.
* Pass additional metadata to ~/.enso/credentials
* support old and new credentials format
* fix some issues after the review
* use domain instead of iss field
* small fixes
* rename Uri -> Url
* rename cognito -> Cognito
* remove unneeded file
---------
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
- Adds the Excel format as one of the formats supported when creating a data link.
- The data link can choose to read the file as a workbook, or read a sheet or range from it as a table, like `Excel_Format`.
- Also updated Delimited format dialog to allow customizing the quote style.
Includes the Arrow language in the distribution by default. Added a basic example for creating an Arrow array.
Making sure that the memory layout agrees with the Arrow specification (padding, contiguous allocation of memory chunks).
Related to #9118.
This should unblock work on allowing serialization/deserialization to/from Parquet but I'd like to delay it to a follow up ticket as it is going to be a significant amount of specialized work.
- Adjusted `AWS_Credential` to have a `Default` and removed support for `Nothing` from functions.
- Renamed `Response_Body.to_file` to `Response_Body.write`.
- Add `write` to `Response`.
- Add `Table.get_value` and `DB_Table.get_value` allowing getting a single value from a table.
- Added `Data.download` allowing downloading from a URL to a file.
- Added text widget as input widget for `Data` methods.
![image](https://github.com/enso-org/enso/assets/4699705/fcfc8b1e-1197-4106-b8a7-43b1435327c0)
- Implements the core parts of #9048
- Currently the path resolution is done by resolving each segment, one by one - requiring as many API calls as there are segments in the path.
- This should be replaced in a followup PR, once https://github.com/enso-org/cloud-v2/issues/899 is implemented.
- Added `to_table` extensions on some core types.
- Added ICON to `Any.to`.
- Added ICON to `Column.info`, `Table.info`, `DB_Column.info` and `DB_Table.info`.
- Added defaults to `Table.cross_tab` and `DB_Table.cross_tab`.
- Added `name`, `get`, `at`, `inner_xml` and `outer_xml` to `XML_Document`.
- Added constants into left hand side of simple expressions.
- Added widget to `get` and `at` on `XML_Document` and `XML_Element`. (Some bug in annotation code with Dmitry)
- Altered `get` and `at` to not allow XPath and just get direct child/attribute values.
- Added `get_xpath` to `XML_Document`.
- Renamed `get_elements_by_tag_name` to `get_descendants_by_tag_name` and added new `get_children_by_tag_name`.
- Added `child_names` and `attribute_names` to `XML_Document` and `XML_Element`.
- Closes #9120
- Reorders CI steps to run the license check last (so that it does not prevent tests, which are more important, from running)
- Tries to reword the warnings to be clearer
- Adds some CSS to the report to more clearly indicate which elements can be clicked.
- Four weeks ago we discussed with @jdunkerley that we may want to move this module.
- `Data` is a submodule mostly for various data structures etc.; the Cloud-related code did not fit that well in there.
Simplify `Test.Suite.run_with_filter` to accept a single filter parameter that searches for all the groups and specs that match that filter. This filter can be a simple text provided from the command line.
# Important Notes
- Pending groups are now printed at the end of the run
- `Test.Suite.run_with_filter` is simplified to accept a single filter parameter that is either `Text` or `Nothing`. See the docs.
- Passing a filter from the command line is therefore straightforward, it is treated as a regex.
- For convenience, I have left all the `main` methods in all the test sources. I have just refactored them to accept the `filter` argument from the command line.
- For example, to run only a single spec from `Vector_Spec.enso`, invoke `enso --run test/Base_Tests/src/Data/Vector_Spec.enso "should allow vector creation with a programmatic constructor"`
- **The majority of the PR is a regex replace** of `^main =` with `main filter=Nothing =` and of `suite.run_with_filter` with `suite.run_with_filter filter` (see the sketch after this list).
- **Fixed some internal engine bugs:**
- `AtomWithHole` allows specifying only one hole - https://github.com/enso-org/enso/pull/9065/files#diff-0f7bb7e85cf86a965de133aa7e6b5958ceb889bd1921c01e00d3a9ceb19626ef
- NaN keys in hash maps are handled in polyglot maps as well - c5257f6c2b78f893214ff67300893b593ea05e21..db4b3c0e9828ee79208d52e02586b24bb845b0d6
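For illustration, a sketch of what the refactored test entry points look like after that replace - `add_specs` is a placeholder for the module's spec-registering function:
```
# Hypothetical sketch of a refactored test entry point: the command-line
# argument (treated as a regex, or Nothing) is forwarded to the suite.
main filter=Nothing =
    suite = Test.build suite_builder->
        add_specs suite_builder
    suite.run_with_filter filter
```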
The `Bump` library uses parser combinators behind the scenes, which are known to be good at expressing grammars but are not performance-oriented.
This change ditches that dependency in favour of an existing Java implementation. `jsemver` implements the full specification, which is probably overkill in our case, but proved to be an almost drop-in replacement for the previous library.
Closes #8692
# Important Notes
Performance improvements:
- roughly 50ms compared to the previous approach (from 80ms to 20-40ms)
I don't see any time spent in the new implementation during startup, so it could potentially be aggressively inlined.
Furthermore, we could use a facade and offer our own stripped-down version of semver.
- Added a `regex` function to `Standard.Base` as a way to easily and cleanly make regular expressions.
- Added `expr` and `Expression.Value` to distinguish expressions from text values (see the sketch after this list).
- Fixed issues with `Table.join` widget so dropdown exists.
- Needed fully qualified name.
- Added default empty text values for right column to provided text input boxes.
- Deprecate `Table.filter_by_expression` and allow `Table.filter` to take an `Expression`.
- Added `Simple_Expression` and deprecated `Column_Operation`. Changed the order so it takes a column then a calculation.
- Rename `column` to `value` and `new_name` to `as` on `Table.set`.
- Rename `name_column` to `names` on `Table.cross_tab`.
- Removed `Column_Ref.Expression` in favour of using `Expression.Value`.
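A hypothetical usage sketch of the new helpers - the table, column names and exact argument names are assumptions:
```
# Hypothetical sketch of the additions above.
pattern = regex "[0-9]+"                 # build a regular expression value directly
e = expr "[price] * [quantity]"          # an Expression.Value, distinct from plain Text
with_total = table.set e as="total"      # `set` now uses `value` and `as` (renamed above)
with_total.filter "total" (Filter_Condition.Greater 100)
```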
In order to allow clever masking, slicing, filtering and arrow backing stores...
- Adding a `ColumnStorage` interface with the base API a storage will need.
- Refactored each of the unary operations to a new `UnaryOperation` interface which makes them responsible for deciding if they can be executed.
There are two projects transitively required by `runtime` that have Akka dependencies:
- `downloader`
- `connected-lock-manager`
This PR replaces the `akka-http` dependency in `downloader` with the JDK's HttpClient, and splits `connected-lock-manager` into two projects such that there are no Akka classes in `runtime.jar`.
# Important Notes
- Simplify the `downloader` project - remove Akka.
- Add HTTP tests to the `downloader` project that use our `http-test-helper`, which is normally used for stdlib tests.
- It required a few tweaks so that we can embed that server in a unit test.
- Split `connected-lock-manager` project into two projects - remove akka from `runtime`.
- **Native image build fixes and quality of life improvements:**
- Output of `native-image` is captured 743e167aa4
- The output will no longer be intertwined with the output from other commands on the CI.
- Arguments to the `native-image` are passed via an argument file, not via command line - ba0a69de6e
- This resolves an issue on Windows with "Command line too long", for example in https://github.com/enso-org/enso/actions/runs/7934447148/job/21665456738?pr=8953#step:8:2269
Cleaning up some of the structures in Storage before working on UnaryOperations.
- Removed some legacy code: `countMask`, `Index` and `DefaultIndex`.
- Renamed `mask` to `applyFilter` on `Column` and `Storage`.
- Renamed `Table.mask` to `Table.filter`.
Updates the Google_Api version for authentication and adds the Google Analytics reporting API and a `run_google_report` method.
This is an initial method for proof of concept, with further design changes to follow.
# Important Notes
Updates google-api-client from v1.35.2 to v2.2.0
Adds google-analytics-data v0.44.0
Follow-up of #8890
Refactor the rest of the tests to the builder API (`Test_New`):
- `Image_Tests`
- `Geo_Tests`
- `Google_Api_Test`
- `Examples_Test`
- `AWS_Tests`
- `Meta_Test_Suite_Tests`
- `Visualization_Tests`
# Important Notes
- Unrelated: Fix NPE in `File.new "/" . name`
- ✅Linting fixes and groups.
- ✅Add `File.from that:Text` and use `File` conversions instead of taking both `File` and `Text` and calling `File.new`.
- ✅Align Unix Epoch with the UTC timezone and add converting from a long value to `Date_Time` using it.
- ❌Add simple first logging API allowing writing to log messages from Enso.
- ✅Fix minor style issue where a test type had an empty constructor.
- ❌Added a `long` based array builder.
- Added `File_By_Line` to read a file line by line.
- Added "fast" JSON parser based off Jackson.
- ✅Altered range `to_vector` to be a proxy Vector.
- ✅Added `at` and `get` to `Database.Column`.
- ✅Added `get` to `Table.Column`.
- ✅Added ability to expand `Vector`, `Array` `Range`, `Date_Range` to columns.
- ✅Altered so `expand_to_column` default column name will be the same as the input column (i.e. no `Value` suffix).
- ✅Added ability to expand `Map`, `JS_Object` and `Jackson_Object` to rows with two columns coming out (and extra key column).
- ✅ Fixed a bug where an integer index couldn't be used to expand to rows.
Refactor `Base_Tests` to `Test_New` testing framework. Mostly automatic text replacements.
# Important Notes
List of changes that were not done automatically (not via automatic text replacement):
- Fix indexes in Instrumentor_Spec - f590c4a398
- If group or spec is pending, its block is not evaluated - 8d797f1a4a
- Spec_Result is not private - 5767535af2
Tests marked as *pending*:
- #8913
- #8910
- Closes #8808 - adds tests for various scenarios.
- Implements `size` using HEAD.
- Updates existing functions to changes in Cloud API.
- Adds stubs for `*_time` methods, `parent`, `path`.
- [x] TODO: resolve the `Enso_File.current_working_directory` from an environment variable.
- ~~TODO: recursive directory deletion?~~ left for later
# Important Notes
- Currently, the Cloud API does not offer an easy way to extract metadata for a file, in particular to get the parent folder from the file `id`.
- We should be able to get the parent, and stuff like creation/modified time.
- We need a way to resolve paths to asset ids, for `path` to work as well as `current_working_directory`.
- What is the environment variable that will be used to feed the `current_working_directory` property?
Refactor `test/Table_Test` to the builder API. The builder API is in a new library called `Test_New` that is alongside the old `Test` library. There will be follow-up PRs that will migrate the rest of the tests. Meanwhile, let's keep these two libraries, and merge them after the last PR.
# Important Notes
- For a brief introduction into the new API, see **Prototype 1** section in https://github.com/enso-org/enso/pull/8622#issuecomment-1889706168
- When executing all the tests, the behavior should be the same as with the old library. With the only exception that if `ENSO_TEST_ANSI_COLORS` env var is set, the output is more colorful than it used to be.
The goal of this PR is to refactor the design of `OrderMask` and avoid copying arrays or lists wherever possible.
We have removed a few legacy functions which were not being used.
On a poor man's benchmark it seems to be quicker (13s vs 16s) and memory usage should be lower.
- Closes #8723
- Adds some missing features that were needed to make this work:
- `Enso_File.create_directory` and `Enso_File.delete`, and basic tests for it
- Changes how `Enso_Secret.list` is obtained - using a different Cloud endpoint allows us to implement the desired logic, the default endpoint was giving us _all_ secrets which was not what we wanted here.
- Implements `Enso_Secret.update` and tests for it
# Important Notes
Notes describing any problems with the current Cloud API:
https://docs.google.com/document/d/1x8RUt3KkwyhlxGux7XUGfOdtFSAZV3fI9lSSqQ3XsXk/edit
Apparently, everything that was needed to make this feature work has already been implemented, although a few features needed workarounds on the Enso side to work properly.
- `up_to` should have the `step` as an optional argument.
- The `Date_Range` conversion can do a clever auto-rename, so if a period is used, it takes the name of that period.
- Add `to Table` for `Range`, `Date_Range`.
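A hypothetical sketch of the range tweaks above - the exact conversion syntax is assumed:
```
# Hypothetical sketch of the new range conveniences.
evens = 0.up_to 10 step=2                               # `step` as an optional argument
evens.to_vector                                         # [0, 2, 4, 6, 8]
january = (Date.new 2024 1 1).up_to (Date.new 2024 2 1)
january.to Table                                        # the new `to Table` conversion
```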
`Random.Seed` doesn't work in the GUI and the `TEXT_ONLY` tag doesn't do anything, so it was incorrectly showing up in the component browser.
This PR makes `Random.Seed` private to hide it from the GUI and completely removes the `TEXT_ONLY` tag, which is unused and unimplemented.
Implements `Warnings.get_all wrap_errors=True` which wraps warnings attached to values inside vectors with `Map_Error`, which includes the position of the value within the vector. See [the documentation](https://github.com/enso-org/enso/blob/develop/docs/semantics/wrapped-errors.md) for more details.
`get_all wrap_errors=True` does not change the warnings that are attached to values -- it wraps them before returning them to the caller, but does not change the original warnings attached to the values.
Wrapped warnings only appear attached to the vector itself. The values inside the vector do not have their warnings wrapped.
Warning propagation is not changed at all; `Warnings.get_all` (with default `wrap_errors=False`) behaves as before. `get_all wrap_errors=True` is meant to be used primarily by the IDE, although it can be used anywhere this wrapping is desired.
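For illustration, a hypothetical sketch using the names from the description above (`Warning.attach` is the standard way to attach a warning; the exact returned shapes are assumed):
```
# Hypothetical sketch of the wrapping behaviour.
vec = [10, Warning.attach "suspicious value" 20, 30]
Warnings.get_all vec                    # the warning, unwrapped - the old behaviour
Warnings.get_all vec wrap_errors=True   # the warning wrapped in `Map_Error`,
                                        # carrying the index (1) of the element it belongs to
```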
This is the follow-up PR addressing the last couple of points from https://github.com/enso-org/enso/pull/8643 around what the return type of fill_nothing should be.
# Important Notes
The biggest change is to what size we need for an empty string. This change uses a variable-length string of length 1 and does it at a low enough level that it will affect the whole language. But I think that is correct.
Gives file read its own helper widget for delimiters: removes the newline option and adds a 'none' option. The file read delimiter is similar to, but different from, the split one and so should have its own set of options.
- Closes #8555
- Refactors the file format detection logic, compacting lots of repetitive logic for HTTP handling into helper functions.
- Some updates to CODEOWNERS.
- After a [suggestion](https://github.com/enso-org/enso/pull/8497#discussion_r1429543815) from @JaroslavTulach, I have tried reimplementing the URL encoding using just the `URLEncode` builtin util. I will see whether this complicates other follow-up improvements, but most likely all should work, so we should be able to get rid of the unnecessary bloat.
- Closes #8354
- Extends `simple-httpbin` with a simple mock of the Cloud API (currently it checks the token and serves the `/users` endpoint).
- Renames `simple-httpbin` to `http-test-helper`.
- Closes #8352
- ~~Proposed fix for #8493~~
- The temporary fix is deemed not viable. I will try to figure out a workaround and leave fixing #8493 to the engine team.
- Closes #8111 by making sure that all Excel workbooks are read using a backing file (which should be more memory efficient).
- If the workbook is being opened from an input stream, that stream is materialized to a `Temporary_File`.
- Adds tests fetching Table formats from HTTP.
- Extends `simple-httpbin` with ability to serve files for our tests.
- Ensures that the `Infer` option on `Excel` format also works with streams, if content-type metadata is available (e.g. from HTTP headers).
- Implements a `Temporary_File` facility that can be used to create a temporary file that is deleted once all references to the `Temporary_File` instance are GCed.
* tests
* wip
* wip
* additional warnings
* wip
* wip
* cleanup
* nested wrapping
* multiple nestings
* wraps_error uses looks_for, test for should_fail_with
* wip
* stack trace line fix
* use catch_primitive internally
* fix warning mapping, dtf spec
* just one wrapper checker, vector spec
* missing ctor, back to non-primitive catch
* back to c_p
* put old map back
* wip
* unnest tests
* Array.map on_problems
* wip
* Revert "wip"
This reverts commit c30d171457.
* better test names
* warning logging
* wip
* wip
* move logic into ALH
* doc
* constant
* My_Error.Error
* nested
* doc
* map_primitive in warning mapper
* composition
* ref spec
* Remove warnings prior to matching on the value
If an expression has warnings and is matched we:
1) extract the warnings
2) execute the branch of a pattern that matches the value
3) attach extracted warnings to the result
This caused warnings to reappear when doing the custom warnings
manipulation.
This is also consistent with how `CaseNode`'s `doWarning` specialization
is defined.
* fix 1
* do not auto unwrap in test error checkers
* nested error matcher
* in problems too
* dtf
* v
* statistics
* wip
* Table_Spec, map_with_index_primitive
* Column_Operations_Spec
* disable warning wrapping and Report_Warning
* unimpl test
* Warnings_Spec
* DCS
* ACG JP
* zip_primitive
* join_helpers
* Lookup_Helpers
* Table
* Data_Formatter
* Value_Type_Helpers
* revert check types changes
* table_helpers
* table tests
* remove st
* do not remove warnings from value
* vec docs, tests for zip, mwi, flat_map
* docs, fixes
* remove nested_error_matcher
* cleanup
* benchmark
* one error
* alter
* add bench to main
* review
* review
* review
* tail call
* changelog
* tail call was not a tail call
* ws
* bad import
* Added missing import
* Update distribution/lib/Standard/Base/0.0.0-dev/src/Data/Array.enso
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
* review, ref example
* lazy benchmark data
* extra paren
* check outside of catch
* review
* vector too
* actually lazy
* disambiguate Map_Error
* finish rename
* move to extensions
* combine Additional_Warnings error
* rename to map_no_wrap
* do not catch and rethrow
* review
* wip
* remove _primitives entirely
* remove unused should_fail_with function options
* remove expected_warning as function in Problems
---------
Co-authored-by: Hubert Plociniczak <hubert.plociniczak@gmail.com>
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
* Test illustrating problems with FQNs
Inline execution fails with `Compile error: The name `Standard` could
not be found.`.
* Ensure InlineContext carries Package Repos info
Previously, there was no requirement that inline execution should allow
for FQNs. This meant that the omission of Package Repository info went
unnoticed.
In order to be able to refer to `Standard.Visualization.Preprocessor` it
has to be exported as well.
Adds these JAR modules to the `component` directory inside Engine distribution:
- `graal-language-23.1.0`
- `org.bouncycastle.*` - these need to be added for graalpy language
# Important Notes
- Remove `org.bouncycastle.*` packages from `runtime.jar` fat jar.
- Make sure that the `./run` script preinstalls GraalPy standalone distribution before starting engine tests
- Note that using `python -m venv` is only possible from the standalone distribution; we cannot distribute `graalpython-launcher`.
- Make sure that installation of `numpy` and its polyglot execution example works.
- Convert `Text` to `TruffleString` before passing to GraalPy - 8ee9a2816f
- Add a `File_For_Read` type. Used for `File_Format` to read files.
- Added `Enso_User` representing the current user in `Enso_Cloud`.
- *Will be later able to list known users.*
- Added `Enso_Secret` representing a value defined in `Enso_Cloud`.
- The value is not used within Enso, only accessed within polyglot Java.
- Integrated into `Username_And_Password` and can be used within JDBC connections.
- Integrated into HTTP Headers so a secret can be used as a value.
- New `URI_With_Query` with the same API as `URI`. Supporting secrets in the value.
- *Will be integrated with AWS credentials.*
- Added `Enso_File` representing a file or a folder in the cloud.
- Support the same API as `File` (like the `S3_File`).
- *Will support `enso://` URI style access.*
Upgrade to GraalVM JDK 21.
```
> java -version
openjdk version "21" 2023-09-19
OpenJDK Runtime Environment GraalVM CE 21+35.1 (build 21+35-jvmci-23.1-b15)
OpenJDK 64-Bit Server VM GraalVM CE 21+35.1 (build 21+35-jvmci-23.1-b15, mixed mode, sharing)
```
With SDKMan, download with `sdk install java 21-graalce`.
# Important Notes
- After this PR, one can theoretically run Enso with any JRE of version at least 21.
- Removed the `sbt bootstrap` hack and all the other build-time hacks related to the handling of the GraalVM distribution.
- `project-manager` remains backward compatible - it can open older engines with runtimes. New engines no longer require a separate runtime to be downloaded.
- sbt does not support compilation of `module-info.java` files in mixed projects - https://github.com/sbt/sbt/issues/3368
- Which means that we can have `module-info.java` files only for Java-only projects.
- Anyway, we need just a single `module-info.class` in the resulting `runtime.jar` fat jar.
- `runtime.jar` is assembled in `runtime-with-instruments` with a custom merge strategy (`sbt-assembly` plugin). Caching is disabled for custom merge strategies, which means that re-assembly of `runtime.jar` will be more frequent.
- Engine distribution contains multiple JAR archives (modules) in `component` directory, along with `runner/runner.jar` that is hidden inside a nested directory.
- The new entry point to the engine runner is [EngineRunnerBootLoader](https://github.com/enso-org/enso/pull/7991/files#diff-9ab172d0566c18456472aeb95c4345f47e2db3965e77e29c11694d3a9333a2aa) that contains a custom ClassLoader - to make sure that everything that does not have to be loaded from a module is loaded from `runner.jar`, which is not a module.
- The new command line for launching the engine runner is in [distribution/bin/enso](https://github.com/enso-org/enso/pull/7991/files#diff-0b66983403b2c329febc7381cd23d45871d4d555ce98dd040d4d1e879c8f3725)
- [Newest version of Frgaal](https://repo1.maven.org/maven2/org/frgaal/compiler/20.0.1/) (20.0.1) does not recognize `--source 21` option, only `--source 20`.
- Closes #5303
- Refactors `JoinStrategy` allowing us to 'stack' join strategies on top of each other (to some extent) - currently a `HashJoin` can be followed by another join strategy (currently `SortJoin`)
- Adds benchmarks for join
- Due to limitations of the sorting approach this will still not be as fast as possible for cases where there is more than 1 `Between` condition in a single query - trying to demonstrate that in benchmarks.
- We can replace sorting by d-dimensional [RangeTrees](https://en.wikipedia.org/wiki/Range_tree) to get `O((n + m) log^d n + k)` performance (where `n` and `m` are sizes of joined tables, `d` is the amount of `Between` conditions used in the query and `k` is the result set size).
- Follow up ticket for consideration later:
#8216
- Closes #8215
- After all, it turned out that `TreeSet` was problematic (because of not enough flexibility with duplicate key handling), so the simplest solution was to immediately implement this sub-task.
- Closes #8204
- Unrelated, but I ran into this here: adds type checks to other arguments of `set`.
- Before, putting in a Column as `new_name` (i.e. mistakenly messing up the order of arguments) led to a hard-to-understand error: ``Method `if_then_else` of type Column could not be found.``; now it fails with a type error: `expected Text got Column`.
- Sets the default limit for `Table.read` in Database to be max 1000 rows.
- The limit for in-memory compatible API still defaults to `Nothing`.
- Adds a warning if there are more rows than the limit.
- Enables a few unrelated asserts.
* doc
* one test
* date tests
* empty and nothing
* ints floats
* bools
* all columns
* regex and index
* locales
* bad formats
* all with one format
* docs
* examples, not impl db
* docs, more errors
* cleanup
* changelog
* check list
* reorder
* clue
* review
* review
* review
* review
* review
* review
* specify time zone
The change upgrades `directory-watcher` library, hoping that it will fix the problem reported in #7695 (there has been a number of bug fixes in MacOS listener since then).
Once upgraded, tests in `WatcherAdapterSpec` started failing, because the logic that attempted to ensure the proper initialization order in the test using a semaphore was wrong. The watcher is now started using `watchAsync`, which only returns the future once the watcher has successfully registered for the paths. Ideally the authors of the library would make the registration bit public
(3218d68a84/core/src/main/java/io/methvin/watcher/DirectoryWatcher.java (L229C7-L229C20)) but it is the best we can do so far.
Had to adapt to the new API in PathWatcher as well, ensuring the right order of initialization.
Should fix #7695.
Added a `Blank_Selector` constructor and applied it to `remove_blank_columns`, `select_blank_columns`, `filter_blank_rows` for #7931. Changed `when_any` to `when` for readability.
Previously, constant columns were given generated names with UUIDs in them, which are long and provide no information. Instead, we now use the constant value itself to form the name.
Since these new generated names are less unique, we must explicitly make them unique, in cases where the caller did not explicitly set a name.
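For illustration, a hypothetical sketch of the naming change (the exact generated names are assumptions):
```
# Hypothetical sketch: a constant column now gets a readable, value-based name.
t = Table.new [["x", [1, 2, 3]]]
t.set 100
# before: a generated name containing a UUID; now the new column is named after
# the constant (e.g. "100"), and only made unique if that name is already taken
```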
- Closes #7981
- Adds a `RUNTIME_ERROR` operation into the DB dialect, that may be used to 'crash' a query if a condition is met - used to validate if `lookup_and_replace` invariants are still satisfied when the query is materialized.
- Removes old `Table_Helpers.is_table` and `same_backend` checks, in favour of the new way of checking this that relies on `Table.from` conversions, and is much simpler to use and also more robust.
- Fixes #8049
- Adds tests for handling of Date_Time upload/download in Postgres.
- Adds tests for edge cases of handling of Decimal and Binary types in Postgres.
- Validate spans during existing lexer and parser unit tests, and in `enso_parser_debug`.
- Fix lost span info causing failures of updated tests.
# Important Notes
- [x] Output of `parse_all_enso_files.sh` is unchanged since before #7881 (modulo libs changes since then).
- When the parser encounters an input with the first line indented, it now creates a sub-block for lines at that indent level, and emits a syntax error (every indented block must have a parent).
- When the parser encounters a number with a base but no digits (e.g. `0x`), it now emits a `Number` with `None` in the digits field rather than a 0-length digits token.
- Fixes an issue with `to_display_text` on `Date_Time`. It is now a reversible format.
- Adjusted the auto width code on table so it works.
- Manually sized columns should remain fixed size.
- Enable Excel range style selection in table.
- Removed page support as it is not implemented yet.
# Important Notes
Will need to apply to Vue based viz at some point as well.
- Follow-up of #8055
- Adds a benchmark comparing performance of Enso Map and Java HashMap in two scenarios - _only incremental_ updates (like `Vector.distinct`) and _replacing_ updates (like keeping a counter for each key). These benchmarks can be used as a metric for #8090
Having modest-size files in a project would lead to a timeout when the project was first initialized. This became apparent when testing delivered `.enso-project` files with some data files. After some digging, there was a bug in JGit
(https://bugs.eclipse.org/bugs/show_bug.cgi?id=494323) which meant that adding such files was really slow. The implemented fix is not on by default, but even with `--renormalization` turned off I did not see an improvement.
In the end it didn't make sense to add the `data` directory to our version control, or any files other than those in `src` or some meta files in `.enso`. Not including such files eliminates first-use initialization problems.
# Important Notes
To test, pick an existing Enso project with some data files in it (> 100MB) and remove `.enso/.vcs` directory. Previously it would timeout on first try (and work in successive runs). Now it works even on the first try.
The crash:
```
[org.enso.languageserver.requesthandler.vcs.InitVcsHandler] Initialize project request [Number(2)] for [f9a7cd0d-529c-4e1d-a4fa-9dfe2ed79008] failed with: null.
java.util.concurrent.TimeoutException: null
at org.enso.languageserver.effect.ZioExec$.<clinit>(Exec.scala:134)
at org.enso.languageserver.effect.ZioExec.$anonfun$exec$3(Exec.scala:60)
at org.enso.languageserver.effect.ZioExec.$anonfun$exec$3$adapted(Exec.scala:60)
at zio.ZIO.$anonfun$foldCause$4(ZIO.scala:683)
at zio.internal.FiberRuntime.runLoop(FiberRuntime.scala:904)
at zio.internal.FiberRuntime.evaluateEffect(FiberRuntime.scala:381)
at zio.internal.FiberRuntime.evaluateMessageWhileSuspended(FiberRuntime.scala:504)
at zio.internal.FiberRuntime.drainQueueOnCurrentThread(FiberRuntime.scala:220)
at zio.internal.FiberRuntime.run(FiberRuntime.scala:139)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1395)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
```
* Reduce extra output in compilation and tests
I couldn't stand the amount of extra output that we got when compiling
a clean project and when executing regular tests. We should strive to
keep output clean and not print anything additional to stdout/stderr.
* Getting rid of explicit setup by service loading
In order for SLF4J to use service loading correctly, I had to upgrade to
the latest slf4j. Unfortunately `TestLogProvider`, which essentially
delegates to the `logback` provider, will lead to spurious warnings
about multiple providers. In order to dictate which one to use and
therefore eliminate the warnings we can use the `slf4j.provider` env
var, which is only available in slf4j 2.x.
Now, there is no need to explicitly call `LoggerSetup.get().setup()` as
that is being called during service setup.
* legal review
* linter
* Ensure ConsoleHandler uses the default level
ConsoleHandler's constructor uses `Level.INFO` which is unnecessary for
tests.
* report warnings
- Adds the ability to use numbers, date/time and Boolean values as constants in `set`.
- `Table.set` can take a `Column_Operation`, allowing for deriving of a new column based on other columns.
- Added `Column_Ref` type to refer to a column in `filter`.
- Added some ALIASes.
- Added `sheet` to `Excel_Workbook` to give familiar API to read sheet.
- Added conversion from range to vector allowing easy use with Zip.
- Add `Map.from_keys_and_values`.
- Fixes a bug where creating a temporary table could accidentally issue a `DROP` statement of the table name that the user provided, risking destruction of user data.
- Fortunately, the bad scenario was almost impossible, because the `DROP` statement was only issued _if_ we previously checked that the mentioned table _does not exist_ - dropping a nonexistent table does not do any harm.
- It could have been dangerous in a very unlikely scenario that a table was created just between the _existence check_ and the _drop_.
- After the fix the existence check and any modifications are done within a transaction to avoid interference from concurrent modifications, and the DROP is correctly applied to a temporary Enso table instead of the original one.
- Replaced a temporary log with proper simple logging of SQL statements into a file, if an environment variable is set.
- Used that feature to test that no unexpected statements occur.
- Closes #7749 implementing the in-memory logic.
- Additional complications have surfaced regarding the Database logic, so it has been split off into a separate ticket: #7981
We currently scan the entire child list for each call to children, num_children, and get_child_element.
Instead, we should cache the filtered child list on first access.
This PR includes
* Reading XML from a file, stream, or string
* Reading XML via Data.fetch
* Accessing the root element, element children, and attributes
* Accessing tag text contents
* Get tags by name
* Inner / Outer XML string
- Fixes #7352 by remembering original value types in type inference mode, to be able to reconstruct them for Mixed.
- Added more benchmarks for comparing performance of constructing columns.
- Fixes missing implementations that caused `Table.union` crashing on some type pairs.
- Ensures that `Loss_Of_Integer_Precision` warning is not swallowed when numeric columns are unioned to create a `Float` column.
- Adds test for all of the above cases.
- Allows outputting benchmark results to a CSV by setting an environment variable - useful for quickly comparing benchmarks, e.g. in Enso.
- Shuffle a few `from`s into correct places:
- `Day_Of_Week.from` removing `Day_Of_Week_From` module.
- Added a shortcut for `http` and `https` in `Data.read` so it calls on to `Data.fetch`, giving a single entry point.
- Moved `URI` extensions from `Standard.Base.Data` module into `Standard.Base.Network.Extensions`.
- Added `post` extension for `URI`.
- Added `contains_key` to `JS_Object`.
- Restored `into` in `JS_Object`:
- Follows old logic populating a constructor.
- Will use conversion from `JS_Object` if present.
- Added automatic deserialization of `Date`, `Time_Of_Day` and `Date_Time` from JSON.
- Uses conversion from `JS_Object`.
- Added conversion from `Text` to a `HTTP_Method` and type checking where `HTTP_Method` used in public APIs.
- Added support for `Date`, `Time_Of_Day` and `Date_Time` in `Table.from_objects`.
- Added `expand_column` to `Table` to expand `JS_Object` to values.
- Add type checking for `Table` in `right` arguments (allowing `Column`s to be used).
- Use type checking in `Table.set` to allow for conversion to a `Column`.
- Remove some unused imports.
- Fix for bug in S3 edge case.
* Add Text.substring function and get_position helper function
For #7876 adds a Text.substring function which supports negative indexes and returns a part of a string from 0-based index 'start' and continuing for 'length'
* added substring and simplified get function
For #7876 adds a Text.substring function which supports negative indexes and returns a part of a string from 0-based index 'start' and continuing for 'length'.
Also simplified get function as it looped unnecessarily.
* Update distribution/lib/Standard/Base/0.0.0-dev/src/Data/Text/Extensions.enso
punctuation corrections
Co-authored-by: GregoryTravis <greg.m.travis@gmail.com>
* Update Text_Spec.enso
Added test for start index larger than string
* Update distribution/lib/Standard/Base/0.0.0-dev/src/Data/Text/Extensions.enso
updated Arguments: section to use consistent style
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
* Update distribution/lib/Standard/Base/0.0.0-dev/src/Data/Text/Extensions.enso
updated Index_Out_Of_Bounds error to reference cached length
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
* Removed Slice label and added changelog entry
* Re-added slice tag to substring
Per conversation with James, added slice back to substring
* Update CHANGELOG.md add link
---------
Co-authored-by: GregoryTravis <greg.m.travis@gmail.com>
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
* Always log verbose to a file
The change adds an option by default to always log to a file with
verbose log level.
The implementation is a bit tricky because in the most common use-case
we have to always log in verbose mode to a socket and only later apply
the desired log levels. Previously socket appender would respect the
desired log level already before forwarding the log.
If by default we log to a file, verbose mode is simply ignored and does
not override user settings.
To test run `project-manager` with `ENSO_LOGSERVER_APPENDER=console` env
variable. That will output to the console with the default `INFO` level
and `TRACE` log level for the file.
* add docs
* changelog
* Address some PR requests
1. Log INFO level to CONSOLE by default
2. Change runner's default log level from ERROR to WARN
Took a while to figure out why the correct log level wasn't being passed
to the language server, therefore ignoring the (desired) verbose logs
from the log file.
* linter
* 3rd party uses log4j for logging
Getting rid of the warning by adding a log4j over slf4j bridge:
```
ERROR StatusLogger Log4j2 could not find a logging implementation. Please add log4j-core to the classpath. Using SimpleLogger to log to the console...
```
* legal review update
* Make sure tests use test resources
Having `application.conf` in `src/main/resources` and `test/resources`
does not guarantee that in Tests we will pick up the latter. Instead, by
default it seems to do some kind of merge of different configurations,
which is far from desired.
* Ensure native launcher test log to console only
Logging to console and (temporary) files is problematic for Windows.
The CI also revealed a problem with the native configuration because it
was not possible to modify the launcher via env variables as everything
was initialized during build time.
* Adapt to method changes
* Potentially deal with Windows failures
- Closes #7461 by introducing a `Date_Time_Formatter` type and making parsing of date time formats more robust and safer (see the sketch after this list).
- The default ('simple') set of patterns is slightly simplified and made case-insensitive (except for `M/m` and `H/h`) to avoid the `YYYY` vs `yyyy` issues and make it less error-prone.
- The `YYYY` now has the same meaning as `yyyy` in simple mode. The old meaning (week-based year) is moved to a _separate mode_, triggered by `Date_Time_Formatter.from_iso_week_date_pattern`.
- Full Java syntax, as well as custom-built Java `DateTimeFormatter` can also be used by `Date_Time_Formatter.from_java`.
- Text-based constants (e.g. `ISO_ZONED_DATE_TIME`) have now become methods on `Date_Time_Formatter`, e.g. `Date_Time_Formatter.iso_zoned_date_time`).
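For illustration, a hypothetical sketch using the constructors named above (the concrete patterns are made up):
```
# Hypothetical sketch of the new formatter entry points.
Date.parse "25.12.2023" "dd.MM.yyyy"                        # simple, mostly case-insensitive patterns
Date_Time_Formatter.from_java "yyyy-MM-dd'T'HH:mm:ssXXX"    # full Java DateTimeFormatter syntax
Date_Time_Formatter.from_iso_week_date_pattern "YYYY-ww-dd" # week-based years live in a separate mode
Date_Time_Formatter.iso_zoned_date_time                     # former text constants are now methods
```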
- Added a `FileSystemSPI` allowing protocol resolution to a target type.
- Separated `Input_Stream` and `Output_Stream` from `File` to allow use in other spaces.
- `File_Format` types `read_web` changed to be `read_stream` working with `InputStream`.
- Added directory listing to `Auto_Detect` allowing for `Data.read` to list a folder.
- Adjusted HTTP to return an `InputStream` not a `byte[]`:
- `Response_Body` adjusted to wrap an `InputStream`.
- Added ability to materialize to either an in-memory vector (<4KB) or a temporary file.
- `Data.fetch` will materialize if not a recognized mime-type.
- Added `HTTP_Error` to handle IO exceptions from the stream.
- `Excel_Format` now supports mime-type and reading a stream.
- `Excel_Workbook` can now get a `Excel_Section` using `read_section`.
- Added S3 APIs:
- `parse_uri`: splits an S3 URI into bucket and key.
- `list_objects`: list the items in a S3 bucket with specified prefix.
- `read_bucket`: list prefixes and keys with a delimiter in a S3 bucket with specified prefix.
- `head`: either the head_bucket (tests existence) or head_object API (reads object metadata).
- `get_object`: gets an object from S3 returning as a `Response_Body`.
- Added `S3_File` type acting like a `File`:
- No support for writing in this PR.
- **ToDo:** recursive listing, glob filtering, exists, size.
- Fixed a few invalid type signature lines.
- Moved `create` methods for `Postgres_Connection` and `SQLite_Connection` into type instead of module.
- Renamed `Column_Fetcher.Builder` to `Column_Fetcher_Builder`.
- Fixed bug with `select_into` in Dry Run mode creating permanent tables.
**ToDo:** Unit tests.
- Fixes #7354
- And also closes #7712
- Refactors how we handle numeric ops - ensuring that the 'kernels' are placed all in one place and selected based on storage types.
- Closes #7238
- Aligns `update_database_table` to a more consistent and clearer API - `update_rows`.
- Adds a `truncate_table` helper function, to pair up with `drop_table`. Both are `PRIVATE` for now.
- Adds tests for NULLs in keys in `update_rows` and `delete_rows`.
- The behaviour is sometimes unexpected, so instead these fail with `Null_Values_In_Key_Columns`.
- Adds a workaround for https://github.com/oracle/graal/issues/7359
- Adds a workaround for a related bug where a stack frame has no name (its `rootNode.getName() == null`).
- I could not track down this bug to provide a neat repro.
- Closes #7633
- Moves `Round_Spec.enso` from published `Standard.Test` into our `test/Tests` project; the `Table_Tests` that depend on it, simply `import enso_dev.Tests`.
- Changes the layout of the local libraries directory:
- It used to be `root/<namespace>/<name>`.
- Now it is `root/<dir>` - the namespace and name are now read from `package.yaml` instead.
- Adds the parent directory of the current project to the default `ENSO_LIBRARY_PATH`.
- It is treated as a secondary path, so the default `ENSO_HOME/lib` still takes precedence.
- This allows projects to reference and load 'sibling' projects easily - the only requirement is for the project to enable `prefer-local-libraries: true` or add the other local project to its edition. The edition resolution logic is **not changed**.
- Closes #6111
- Aligns semantics of handling Mixed columns.
- Now, if an operation like `iif` or `fill_nothing` is given a `Mixed` column, the result will also be `Mixed` regardless of the `inferred_precise_value_type`.
- Enables a few old tests that were pending but could be enabled since the types work is advanced enough.
- Update list of groups to agreed list.
- Lower case `ALIAS` names to be consistent with function names.
- Add `GROUP` to methods.
- All constructors and functions have doc comments.
- Correct a few typos (e.g. `PRVIATE`).
- Mark some more things as `PRIVATE`.
- Use `ToDo:` and `Note:` consistently.
- Order tags in doc comment.
# Important Notes
We don't have all the doc comments on types and will want to add them in the future.
- Closes #5159
- Now data downloaded from the database can keep the type much closer to the original type (like string length limits or smaller integer types).
- Cast also exposes these types.
- The integers are still all stored as 64-bit Java `long`s, we just check their bounds. Changing underlying storage for memory efficiency may come in the future: #6109
- Fixes #7565
- Fixes #7529 by checking for arithmetic overflow in in-memory integer arithmetic operations that could overflow. Adds a documentation note saying that the behaviour for Database backends is unspecified and depends on the particular database.
Closes #7353
I introduce a new type `WithAggregatedProblems`, because `WithProblems` was too simple - it could only hold a `List<Problem>`, but `AggregatedProblems` is more than that. Ideally we shouldn't multiply entities like this too much. We should probably unify everything to use `WithAggregatedProblems` - but after starting this, I realised it will likely just take too much effort for this little PR. So instead, I created a follow-up task for it: #7514
- Fixes #7412
- Also adds tests and fixes some more edge cases:
- Ensures correct handling of existing Database tables whose column names may be invalid from the Enso perspective, or clashing from the Enso perspective (e.g. for most DBs `ś` and `s\u0301` are different names, but for Enso they are basically the same, so this would cause issues). Thus Enso now renames such columns when accessed (still using the correct column reference in the generated SQL under the hood).
- Closes #5951
- Ensures any SQL warnings reported by the database through the JDBC driver are processed and forwarded to the user.
- These warnings show issues like the implicit name truncation that this PR is also solving. It's good to make sure they are visible as they can help avoid and understand unexpected problems. They should not show up in most standard workflows.
- Adds simple history to our REPL.
- Fixes #7231
- Cleans up vectorized operations to distinguish unary and binary operations.
- Introduces MixedStorage which may pretend to be a more specialized storage on demand.
- Ensures that operations request a more specialized storage on right-hand side to ensure compatibility with reported inferred storage type.
- Ensures that a dataflow error returned by an Enso callback in Java is propagated as a polyglot exception and can be caught back in Enso
- Tests for comparison of Mixed storages with each other and other types
- Started using `Set` for `Filter_Condition.Is_In` for better performance.
- ~~Migrated `Column.map` and `Column.zip` to use the Java-to-Enso callbacks.~~
- This does not forward warnings. IMO we should not be losing them. We can switch and add a ticket to fix the warnings, but that would be a regression (current implementation handles them correctly). Instead, we should first gain some ability to work with warnings in polyglot. I created a ticket to get this figured out #7371
- ~~Trying to avoid conversions when calling Enso functions from Java.~~
- Needs extra care as dataflow errors may not be handled right then. So only works for simple functions that should not error.
- Not sure how much it really helps. [Benchmarks](https://github.com/enso-org/enso/pull/7270#issuecomment-1635618393) suggested it could improve the performance quite significantly, but the practical solution is not exactly the same as the one measured, so we may have to measure and tune it to get the best results.
- Created #7378 to track this.
Fixes #7336 in a quick way.
In addition to the old way of defining groups, a library can just add a `GROUP` tag to some entities, and they will be added to the group specified in the tag's description.
The group name may be qualified (with the project name, like `Standard.Base.Input/Output`) or just a name - in the latter case, the IDE will assume a group defined in the same library as the entity.
Also moved some entities from the "export" list in package.yaml to the GROUP tag to give an example. I didn't move all of them, as I assume the library team will reorganize those groups anyway.
### Important Notes
@jdunkerley @radeusgd @GregoryTravis When you will start specifying groups in tags, remember that:
* The groups still belong to a concrete project; if some entity outside a project wants to be added to its group, the "qualified" name should be specified. See the `Table.new` example in this PR.
* If the group name does not reflect any group in package.yaml **the tag is ignored**.
* A single entity may be only in a single group. If it's specified in both package.yaml and in tag, the tag takes precedence.
---------
Co-authored-by: Ilya Bogdanov <fumlead@gmail.com>
The added benchmark is a basis for a performance investigation.
We compare the performance of the same operation run in Java vs Enso to see what is the overhead and try to get the Enso operations closer to the pure-Java performance.
- Previous GraalVM update: https://github.com/enso-org/enso/pull/6750
Removed warnings:
- Remove deprecated `ConditionProfile.createCountingProfile()`.
- Add `@Shared` to some `@Cached` parameters (Truffle now emits warnings about potential `@Share` usage).
- Specialization method names should not start with execute
- Add limit attribute to some specialization methods
- Add `@NeverDefault` for some cached initializer expressions
- Add `@Idempotent` or `@NonIdempotent` where appropriate
BigInteger and potential Node inlining are tracked in follow-up issues.
# Important Notes
For `SDKMan` users:
```
sdk install java 17.0.7-graalce
sdk use java 17.0.7-graalce
```
For other users - download link can be found at https://github.com/graalvm/graalvm-ce-builds/releases/tag/jdk-17.0.7
Release notes: https://www.graalvm.org/release-notes/JDK_17/
The R component was dropped from release 23.0.0; only `python` is available to install via `gu install python`.
Designing a new `Bench` API to _collect benchmarks_ first and only then execute them. This is a minimal change to allow the implementation of #7323 - e.g. the ability to invoke a _single benchmark_ via the JMH harness.
# Important Notes
This is just the basic API skeleton. It can be enhanced, if the basic properties (allowing integration with JMH) are kept. It is not the intent of this PR to make the API 100% perfect and usable. Nor is it the goal of this PR to update existing benchmarks to use it (74ac8d7 changes only one of them to demonstrate that _it all works_ somehow). It is however expected that once this PR is integrated, newly written benchmarks (like the ones from #7270) are going to use (or even enhance) the new API.
This PR does three related things:
- Fails more gracefully when a non-string is passed to compile_regex
- Don't pass a non-string to compile_regex
- Allow a Regex param to parse_to_table
The Regex change introduced some issues.
Added a test for a missed case in `rename_columns` when using a vector of pairs.
Reverted parameter order change for `select_columns`.
This PR modifies the builtin method processor such that it forbids arrays of non-primitive and non-guest objects in builtin methods, and provides a proper implementation for the builtin methods in `EnsoFile`.
- Remove last `to_array` calls from `File.enso`
- Add dropdowns for `replace` functions.
- Retire `Column_Selector` type.
- Add `select_blank_columns` and `remove_blank_columns` functions to table types.
- Allow Regex to be used to pick columns.
In #7148 I improved the error message shown when a `Filter_Condition` constructor without arguments is provided to `Vector.filter` and its friends. This PR applies the same check to `Table.filter`.
This is useful because, when we select a Filter_Condition from a widget, initially it does not have all its arguments applied. This used to lead to confusing errors being reported to the user; now a much clearer error is shown:
![image](https://github.com/enso-org/enso/assets/1436948/19140a7b-d6fc-4292-81d3-dc6d61135cb9)
- Adds `Column.date_diff` for computing date/time difference as an integer multiple of some unit (see the sketch after this list).
- Adds `Column.date_add` for shifting date/time by a unit.
- Adds `Column.date_part` for extracting various parts of the date/time value as integer.
- Adds widgets for the 3 methods above whose content depends on the column value type.
- Adds shorthands: `Column.hour`, `Column.minute` and `Column.second` to extract these date parts.
- Extends `Time_Period` with support for milli-, micro- and nano- seconds; and adapts functions taking `Time_Period` to support these wherever possible.
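A hypothetical sketch of the new helpers - the column names and argument order are assumptions:
```
# Hypothetical sketch of the new date/time column helpers.
opened = table.at "opened_at"
closed = table.at "closed_at"
opened.date_diff closed Date_Period.Day     # whole days between the two columns
opened.date_add 1 Date_Period.Day           # shift each value by one day
opened.hour                                 # shorthand for extracting the hour part
```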
- Removed Array methods: `new`, `copy` and `new_[1234]`.
- New builtins for `Vector.insert`, `Vector.remove` and `Vector.flatten` (sketched below).
- Replaced `Vector_Builder`'s use of `Array.copy` with a `Vector.Builder` approach.
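A hypothetical sketch of the new builtins (argument order is assumed):
```
# Hypothetical sketch of the new Vector builtins.
[1, 2, 4].insert 2 3        # [1, 2, 3, 4] - insert 3 at index 2
[1, 2, 3].remove 0          # [2, 3]       - remove the element at index 0
[[1, 2], [3]].flatten       # [1, 2, 3]
```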
Mostly stuff to tidy up the static methods in the CB.
- Remove default pattern from `parse_to_table` (caused IDE to freeze).
- Rename any `_` arguments to what they are.
- Merge `Date.now` into `Date.today`
- Merge the Interval constructors into a single constructor.
- Hide various methods.
Fixes #6955 by:
- using `visualisationModule` to specify the module where the visualization is to be used
- referring to method in `Meta.get_annotation` with `.method_name` - e.g. unresolved symbol notation
- evaluating arguments to `Meta.get_annotation` in the context of the user module (which can access the extension functions)