* treat scale nothing as unspecified
* cast to decimal
* float int biginteger
* conversion failure ints
* loss of decimal precision
* precision loss for mixed column to float
* mixed columns
* loss of precision on inexact float conversion
* cleanup, reuse
* changelog
* review
* no fits bd
* no warning on 0.1 conversion
* fmt
* big_decimal_fetcher
* default fetcher and statement setting
* round-trip d
* fix warning
* expr +10
* double builder retype to bigdecimal
* Use BD fetcher for underspecified postgres numeric column, not inferred builder, and do not use biginteger builder for integral bigdecimal values
* fix tests
* fix test
* cast_op_type
* no-ops for other dialects
* Types
* sum + avg
* avg + sum test
* fix test
* update agg type inference test
* wip
* is_int8, stddev
* more doc, overflow check
* fmt
* finish round-trip test
* wip
- Part of #9486
- Fixing our tests to not rely on deterministic ordering of created Tables in Database backends
- Before, SQLite and Postgres would mostly return rows in the order they were inserted, but Snowflake does not.
- Fixing various parts of Snowflake dialect.
- Removed `second_row` and `second_column` from the `Table` and `DB_Table`.
- Added `first_value` and `last_value` to the `Table` and `DB_Table`.
- Fixed bug where negative index access wasn't allowed on `Column`.
- Added an error if negative index access is used on `DB_Column`, telling the user they have to materialize.
- Fixed the argument order for `Table.text_cleanse` and made a couple of typo corrections.
- Rename `auto_value_type` to `auto_cast` on table and columns.
* Update existing behaviour to match new
* Add signatures
* Red test
* First test green
* sbt javafmtAll
* In-Memory working
* Not implemented for In-Db
* Docs
* Disable tests for in-db
* Changelog
* Code review changes
* Fix
* Fix
* Fix tests
- Includes the HTTP method in the error message
- Does not do special handling for the `403` status code - this was wrong and led to an `Unauthorized` error when the real cause was lack of permissions in the Cloud. The errors should be more understandable now.
- Adds `projectSessionId` to audit log metadata.
- Fixes a test (`Secrets_Spec`) that did not have unique names and would fail if cleanup of previous runs failed (or if run in parallel).
- remove legacy matrix and object types from the Vue component and any further code relating to those
- remove the index and the index header being sent in the JSON for tables
- add a `has_index_col` flag to the JSON that previously sent 'indicies_header' and 'indicies', so that the index/# column is still rendered where required
- Part of #9486
- Building on top of initial work by @jdunkerley and finishing it
- Reverted the changes to the Postgres_Dialect from last Snowflake work and split the Snowflake_Dialect into a separate module.
- Moved from `rounding_decimal_places_not_allowed_for_floats` to `supports_float_round_decimal_places` (the old name was too confusing).
- Added Snowflake_Dialect type.
- Extracted `Snowflake_Spec` into a separate `Snowflake_Tests` project
- It imports the common tests from `Table_Tests`.
- Some initial adaptations to make the Snowflake dialect not crash.
- Adding `Internals_Access` proxy to allow external implementations to access our internal data structures without directly exposing them to users. Users should not use these.
- Adding profiling of SQL to check performance.
Fixes `Standard.Base.Meta.Enso_Project.enso_project` to return a project descriptor for the *main* project, i.e., the one configured as a *root* for the engine.
# Important Notes
`enso_project` builtin no longer iterates the stack frames to infer the project descriptor. It derives it from the default package repository.
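For illustration, a minimal sketch of how the builtin is typically used (the printed values depend on the project it runs in):
```
from Standard.Base import all

main =
    # Now derived from the default package repository, i.e. the engine root project:
    IO.println enso_project.name
    IO.println enso_project.root
```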
The standard library vector sampling test happened to fail when, by random chance, two consecutive samplings turned out equal.
![image](https://github.com/enso-org/enso/assets/919491/3c9c73a8-51da-468c-a42d-88a99d30ecbf)
To prevent that from happening again, the sampled vector and the number of samples were increased. Also, the non-determinism test was for some reason actually performed 3 times, giving 3 opportunities for samplings to accidentally match and fail the test. Removed that outer loop, since one non-equality is plenty to pass the test.
* hack
* make a column
* add
* no scale=0 on BD type
* a test
* wip
* 3 arithmetic ops
* /
* wip
* BigDecimalPowerOp
* wip
* mod test
* NumericBinaryOpReturningBigDecimal
* with scalar
* misc arithmetic tests
* fix integralBigDecimalToInteger
* mixed columns
* bigdecimal pow via double
* cleanup
* j2e on get
* arithmetic exception
* mod 0
* cleanup
* fmt
* changelog
* check type first
* merge
* mc error message
* add BD case to Builder.java
* fmt
* changelog
* add BD case to StorageConverter.java
* fmt
* fix test
- Improve BOM handling: detect and skip the BOM character, add a Default encoding that detects the encoding based on the BOM if present, and warn if an unexpected BOM is encountered.
- Closes #9849
- Windows-1252 fallback will be done as a separate PR as it has additional complexity. Tracked in ticket #10148.
Add support for private methods. Most of the changes are in the parser and the compiler. Runtime checking of private functions has been present since #9692.
# Important Notes
- Only top-level methods can be declared `private`.
- A private method cannot be called from a different project.
- A private method cannot be accessed from polyglot code (private methods do not exist for polyglot code).
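A minimal sketch of the new syntax (the module and names are hypothetical):
```
# In src/Helpers.enso of some project:
private secret_helper x = x + 1

# Elsewhere in the same project the call resolves as usual:
use_it = secret_helper 41

# From another project - or from polyglot code - `secret_helper` is not visible at all.
```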
- Add ranged number widget to `at` and `get`.
- Add defaults to `at` and `get` picking the first item.
- Marked various `Excel_Workbook` methods as PRIVATE. It still works like a connection, but these methods are not shown in the component browser.
- Supersedes #9966 as I wanted to test these changes in one go.
- Fixes #10037, caused by a lack of CI checks and my oversight (forgot to run the full tests after a minor change).
- Fixes a regression after [file metadata fields were renamed](c09d856ac8 (diff-9f59b6a0ee3155efecdc70c1ea0c90ab5cde00b5623d84363118b1793f941c46R2037)).
- Fixes handling of creating new datalinks and using them after the cache was cleared (e.g. workflow restart).
- This was caused by troubles with the path resolver.
- The fix addresses the most common issue and adds a test for it (the test flushes the caches to ensure the path resolver is used instead of the cached value).
- Some related issues were discovered on the cloud side, tracked by https://github.com/enso-org/cloud-v2/issues/1252
* Signature
* More API work
* Red
* Still red
* Green
* Red
* Green
* Red
* Green
* Refactor
* Refactor
* Refactor
* Red
* Green
* Red
* Green
* Red
* Green
* More tests
* non-ascii
* Ordering tests
* Remove tabs
* Numbers and Letters
* Changelog.md
* Add documentation
* Tests for non-text columns
* Add punctuation
* Add symbols
* Fix
* Refactor
* Refactor
* Move to base
* Fix
* Start of in-db tests. Not working
* DB versions
* Update widget
* Fix widgets
* Move tests to base
* Code review changes
- Closes #9673
- Adds the ability to save an existing Postgres connection as a datalink into the Enso Cloud, automatically promoting plain-text passwords into a Secret.
- Fixes dataflow error propagation in `JS_Object.from_pairs`.
- Prepares for the Cloud API change, together with a fallback for the old API to avoid problems during migration.
- This PR should be merged before the https://github.com/enso-org/cloud-v2/pull/1236 PR is _deployed_.
This PR fixes several issues that were appearing when running CI jobs on PRs created against the repository forks:
* electron-builder on Windows and macOS will properly recognize that the secrets are missing and will not attempt to sign the artifacts;
* similarly, fixed the S3 library tests;
* test reporter step will be now skipped, as it does not support forks.
Following an example from Steve, adjusted `rename_column` to allow a `Table` to define the mapping.
- A table with a single Text column gives a list of new names.
- A table with two Text columns gives a mapping from old name to new name.
- If a `DB_Table` is passed, it errors and tells the user to materialize the table.
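A sketch of the two mapping shapes, using the method name as given above (the table names are hypothetical):
```
# A single Text column: a list of new names, applied positionally.
names_only = Table.new [["New Name", ["key", "value"]]]
renamed_1 = my_table.rename_column names_only

# Two Text columns: a mapping from old name to new name.
mapping = Table.new [["Old", ["a", "b"]], ["New", ["x", "y"]]]
renamed_2 = my_table.rename_column mapping
```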
![image](https://github.com/enso-org/enso/assets/4699705/3ed98330-1fce-465e-bf96-4a86e0872dd3)
- Closes #9599
- Implemented API for sending audit logs to the cloud on a background thread.
- If the Postgres connection is opened through a datalink, its internal JDBC connection is replaced by a wrapper that reports executed queries to the audit log.
- Also introduces `EnsoMeta` - a helper Java class that can be used in our helper libraries to access Enso types.
- I have replaced the common pattern scattered throughout the codebase with calls to this 'library' to avoid repetitive code.
- Refactored `Table.display` to share code between in-memory and DB - it was needed as the function stopped working for `DB_Table` after making the `Table` constructor `private`.
- Clearer error when reading a SQLite database from a remote file (tells the user to download it first).
- Follow-up: correlate the asset id of the data link (#9869).
- Follow-up: include the project name, once the bug is fixed (#9875).
- Some known problems/improvements of the audit log:
  - The audit log system is not yet ready for a high throughput of logs (#9870).
  - The logs may be lost if `System.exit` is used (#9871).
* New Test
* Improve DateTime recognition
* Re-enable slow test
* If there is a time take it regardless of format
* If there is a time take it regardless of format
* Code Review Changes
In certain cases, when the `action` of `Panic.catch` is tail-call-optimized (via the `@Tail_Call` annotation), the panic is not caught. Fixed by ensuring that the `action` of `Panic.catch` is executed as `NOT_TAIL` rather than `TAIL_DIRECT`.
# Important Notes
The `handler` parameter of `Panic.catch` is executed as `NOT_TAIL` as well, just to be sure.
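A sketch of the pattern that used to escape the handler (`risky_action` is a hypothetical function that panics):
```
caught = Panic.catch Any handler=(caught_panic-> caught_panic.payload) <|
    # Before the fix, the `@Tail_Call` below could replace the NOT_TAIL
    # execution of the action, letting the panic bypass the surrounding
    # `Panic.catch`.
    @Tail_Call risky_action
```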
* Make Column.should_equal detect columns of different types and treat nan==nan as equal
* Refactor Table.should_equal
* More Column tests
* Adjust spacing
* Tests Green
* Check same number of columns
* Refactor
* Extra test
* Code Review Changes
* Fix
* Fix
* Fix tests
* Fix Tests
* Fix Test
* Fix test
* Code review change
Closes #8836.
Atom constructors can be declared as private (project-private). Project-private constructors can be called only from the same project. See the encapsulation.md docs for more info.
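A minimal sketch of the feature (the type and its members are hypothetical):
```
type Access_Token
    private Value raw_text

    # A public smart constructor usable from anywhere:
    new text = Access_Token.Value text

# Within the same project both calls work; from another project only
# `Access_Token.new "..."` does - `Access_Token.Value "..."` is rejected.
```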
---------
Co-authored-by: Jaroslav Tulach <jaroslav.tulach@enso.org>
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
Co-authored-by: Hubert Plociniczak <hubert.plociniczak@gmail.com>
Co-authored-by: Kaz Wesley <kaz@lambdaverse.org>
As reported by our users, when using AWS SSO, our code was failing with:
```
Execution finished with an error: To use Sso related properties in the 'xyz' profile, the 'sso' service module must be on the class path.
```
This PR adds the missing JARs to fix that.
Additionally, it improves the license review tool UX a bit (parts of #9122):
- sorting the report by amount of problems, so that dependencies with unresolved problems appear at the top,
- semi-automatic helper button to rename package configurations after a version bump,
- button to remove stale entries from config (files or copyrights that disappeared after update),
- button to add custom copyright notice text straight from the report UI,
- button to set a file as the license for the project (creating the `custom-license` file automatically),
- ability to filter processed projects - e.g. `openLegalReviewReport AWS` will only run on the AWS subproject - saving time processing unchanged dependencies,
- updated the license search heuristic, fixing a problem with duplicates:
- if we had dependencies `netty-http` and `netty-http2`, then because of the prefix-check logic the notices for `netty-http` would also appear again for `netty-http2`, which is not valid. I have improved the heuristic to avoid these false positives and removed them from the current report.
- WIP: button to mark a license type as reviewed (not finished in this PR).
Resolves #9607 by computing `Number.hash` by first converting the given number to `Float` and then computing the hash. Also, the conversion from `Float` to `Decimal` is exact - done via `new BigDecimal(double)`. There is `Decimal.new` that handles the user-friendly conversion. However, as a result, `Decimal.from 2.1 != Decimal.new 2.1` - that is the only way to ensure consistency between hash codes and conversions.
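A sketch of the resulting distinction:
```
example =
    exact = Decimal.from 2.1    # exact binary value of the float, via `new BigDecimal(double)`
    friendly = Decimal.new 2.1  # user-friendly conversion
    exact == friendly           # False, by design, as explained above
```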
- Closes #9363
- Cleans up the Cloud mock as it got a bit messy. It still implements the bare minimum to be able to test basic secret and auth handling logic 'offline' (added very simple path resolution, only handling the minimum set of cases for the tests to work).
- Adds first implementation of caching Cloud replies.
- Currently only caching the `Enso_User.current`. This is a simple one to cache because we do not expect it to ever change, so it can be safely cached for a long period of time (I chose 2h to make it still refresh from time to time while not being noticeable).
- We may try using this for caching other values in future PRs.
* min and max
* changelog
* wip
* scale approach mostly works
* Revert "scale approach mostly works"
This reverts commit 88e6073f7a.
* review
* review
* return value type
- Closes #9289
- Ensures that we can refer through `Enso_File` to files that do not _yet_ exist - preparing us for implementing the Write functionalities for `Enso_File` (#9291).
- As asked for by @hubertp, who was encountering flaky test failures on CI in `Http_Spec` and related tests, I'm adding retry logic to make such cases much less likely.
- I've made the test server randomly fail 50% of requests, and with the retry logic the tests still pass, so this should be much more robust. In practice the failure rate is much, much lower (I imagine <1%, as most of the time these tests were working and we make a ton of requests in a single CI run).
- I moved the `with_retries` method to `Test.with_retries`, which can be used anywhere in our tests for retry logic.
- It sleeps for 0.1s between retries. Not all kinds of tests need it; this was mostly for propagation delays in the Cloud in our tests. I was wondering if the delay should be configurable, but the 0.1s delay is not problematic, and if our tests are sometimes failing due to high machine load, the delay could also help.
- This _does not_ add retry logic to raw HTTP operations or `Data.fetch`. We may add that later, but that needs some further design. In such a case we may remove some retries from tests if they become unnecessary.
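A sketch of how the helper can be used in a spec (the file name and assertion target are hypothetical):
```
group_builder.specify "should eventually see the uploaded file" <|
    Test.with_retries <|
        names = Enso_File.root.list . map .name
        names.contains "my_upload.txt" . should_be_true
```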
`Jackson_Object` supported parsing but not creating JSON from text. With this change, `Jackson_Object` is on par with the `JS_Object` API and replaces the latter.
The most visible differences come from more detailed parsing exception messages. Some special cases had to be added for corner cases like `NaN` or infinity.
Closes #9473.
`42 == (Error.throw "foo")` now correctly returns an `Error` rather than `False`
# Important Notes
The error was in the wrong usage of the `org.enso.interpreter.dsl.AcceptsError` DSL annotation.
Move the types from `Standard.Table.Data` to `Standard.Table`.
Exceptions:
- `Standard.Table.Data.Report_Unmatched` => `Standard.Table.Constants`.
- `Standard.Table.Data.Join_Kind_Cross` => `Standard.Table.Internal.Join_Kind_Cross`.
Also removed its constructor as an atom type.
- `Standard.Table.Extensions.Table_Ref` => `Standard.Table.Internal.Table_Ref`.
- `Standard.Table.Data.Type.Value_Type_Helpers` => `Standard.Table.Internal.Value_Type_Helpers`.
- `Standard.Table.Data.Type.Enso_Types` => `Standard.Table.Internal.Value_Type_Helpers`.
- `Standard.Table.Data.Type.Storage` => `Standard.Table.Internal.Storage`.
Changed all `Standard.Table` imports inside the project to `project` imports.
Favoured importing from `Standard.Table.Main` in `Standard.Database`.
Also fixed some linting in `Enso_File`.
- Closes #9284
- Now our tests run without the default `AWS_` config, thus ensuring that the tested setups work in a clean environment.
- After all, more complicated logic was needed for bucket access - apparently the AWS SDK only allows some operations on buckets if the client is connected to the correct region, so detection of bucket regions had to be implemented.
- Added `AWS_Region` widget based on autoscoping.
- Fixed `AWS_Credential.profile_names` crashing if no AWS config was found. Now it returns no profiles if not found. Added a regression test.
Make `expand_to_rows` work for `Table` and `Column`.
Make `expand_column` work for `Column` and `Row`.
This makes solving the book club exercise easier.
![image](https://github.com/enso-org/enso/assets/1720119/4532fc38-8765-43d9-b6a0-61254f52239f)
# Important Notes
We decided `expand_column` did not make sense for `Table`, as the resulting Table of Columns would rarely be what was wanted.
* Initial connection to Snowflake via an account, username and password.
* Fix databases and schemas in Snowflake.
Add warehouses.
* Add warehouse.
Update schema dropdowns.
* Add ability to set warehouse and pass at connect.
* Fix for NPE in license review
* scalafmt
* Separate Snowflake from Database.
* Scala fmt.
* Legal Review
* Avoid using ARROW for snowflake.
* Tidy up Entity_Naming_Properties.
* Fix for separating Entity_Naming_Properties.
* Allow some tweaking of the Postgres dialect so that Snowflake can use it as well.
* Working on reading Date, Time and Date Times.
* Changelog.
* Java format.
* Make Snowflake Time and TimeStamp stuff work.
Move some responsibilities to Type_Mapping.
* Make Snowflake Time and TimeStamp stuff work.
Move some responsibilities to Type_Mapping.
* fix
* Update distribution/lib/Standard/Database/0.0.0-dev/src/Connection/Connection.enso
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
* PR comments.
* Last refactor for PR.
* Fix.
---------
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
- Closes #9300
- Now the Enso libraries are themselves capable of refreshing the access token, so there are no more problems if the token expires during a long-running workflow.
- Adds `get_optional_field` sibling to `get_required_field` for more unified parsing of JSON responses from the Cloud.
- Adds `expected_type` that checks the type of extracted fields. This way, if the response is malformed, we get a nice Enso Cloud error telling us what is wrong with the payload instead of a `Type_Error` later down the line.
- Fixes `Test.expect_panic_with` to actually catch only panics. Before it used to also handle dataflow errors - but these have `.should_fail_with` instead. We should distinguish these scenarios.
`Table.from_union` creates a new table when passed a vector of tables. This is especially helpful when a grouped method is run multiple times, as it can create a unified result set.
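For example, a sketch of unifying two result sets:
```
t1 = Table.new [["id", [1, 2]], ["score", [0.5, 0.75]]]
t2 = Table.new [["id", [3]], ["score", [0.9]]]

# One table containing all three rows:
combined = Table.from_union [t1, t2]
```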
- Fix `Excel_Workbook.sheet` and add a test.
- Add icon for `Table.row_count` and `DB_Table.row_count`.
- Make `join_kind` widget `Display.Always`.
- Add expression as an option to `Aggregate_Column`.
- Add `Simple_Calculation.Copy` to create a copy.
- Add defaults to `Simple_Expression` so it is less error-prone.
- Set the period for `date_diff` to default to day, allowing use in expressions.
- Add `Text_Left`, `Text_Right`, `Text_Length` and `Format` to `Simple_Expression`.
Follow up on #9150 - making sure that Arrow builder is not accidentally treated as an Array by disallowing reading elements.
# Important Notes
Also making sure that the length of the resulting Arrow Array is consistent with what the user requested.
This is a first naïve implementation of `Table.running` that only supports count. Adding it as a scaffold for the rest of the functionality and to give us a place to agree on the API (which I changed slightly from the design).
![image](https://github.com/enso-org/enso/assets/1720119/a62a83ed-f864-4295-98ea-1007f62381b1)
# Important Notes
Only supports `Statistic.Count`. Other functionality to follow.
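A sketch of the current capability (the table name is hypothetical; the exact argument shape may still evolve, as noted above):
```
# Adds a running count over the rows of `my_table`:
with_counts = my_table.running Statistic.Count
```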
- Adds the Excel format as one of the formats supported when creating a data link.
- The data link can choose to read the file as a workbook, or read a sheet or range from it as a table, like `Excel_Format`.
- Also updated Delimited format dialog to allow customizing the quote style.
Including the Arrow language in the distribution by default. Added a basic example for creating an Arrow array.
Making sure that memory layout agrees with Arrow specification (padding, continuous allocation of memory chunks).
Related to #9118.
This should unblock work on allowing serialization/deserialization to/from Parquet but I'd like to delay it to a follow up ticket as it is going to be a significant amount of specialized work.
- Implements the core parts of #9048
- Currently the path resolution is done by resolving each segment, one by one - requiring as many API calls as there are segments in the path.
- This should be replaced in a followup PR, once https://github.com/enso-org/cloud-v2/issues/899 is implemented.
- Added `to_table` extensions on some core types.
- Added ICON to `Any.to`.
- Added ICON to `Column.info`, `Table.info`, `DB_Column.info` and `DB_Table.info`.
- Added defaults to `Table.cross_tab` and `DB_Table.cross_tab`.
- Added `name`, `get`, `at`, `inner_xml` and `outer_xml` to `XML_Document`.
- Added constants to the left-hand side of simple expressions.
- Added widget to `get` and `at` on `XML_Document` and `XML_Element`. (Chasing some bug in the annotation code with Dmitry.)
- Altered `get` and `at` to not allow XPath and to just get direct child/attribute values.
- Added `get_xpath` to `XML_Document`.
- Renamed `get_elements_by_tag_name` to `get_descendants_by_tag_name` and added new `get_children_by_tag_name`.
- Added `child_names` and `attribute_names` to `XML_Document` and `XML_Element`.
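A sketch of the reworked accessors (the document content and the `root_element` accessor are assumptions):
```
doc = XML_Document.from_text "<root><item id='1'>A</item><sub><item id='2'>B</item></sub></root>"
root = doc.root_element

direct = root.get_children_by_tag_name "item"       # only the direct <item> child (id='1')
nested = root.get_descendants_by_tag_name "item"    # both items, at any depth
matched = doc.get_xpath "/root//item"               # explicit XPath lookup
```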
- Four weeks ago we discussed with @jdunkerley that we may want to move this module.
- `Data` is a submodule mostly for various data structures etc.; the cloud stuff did not fit that well in there.
* Test describing the current behavior of chained if then else application
* Chained block should behave just like Group around if_then_else
* Finishing line on BlockStart fixes if_then_else_chained_block
* Only finish the line when there was not start of a macro segment
* Fix tests
* Refine else-body with macro patterns.
* Update test syntax to maintain original semantics
* Few additional tests
---------
Co-authored-by: Kaz Wesley <kaz@lambdaverse.org>
Simplify `Test.Suite.run_with_filter` to accept a single filter parameter that searches for all the groups and specs that match that filter. This filter can be a simple text provided from the command line.
# Important Notes
- Pending groups are now printed at the end of the run
- `Test.Suite.run_with_filter` is simplified to accept a single filter parameter that is either `Text` or `Nothing`. See the docs.
- Passing a filter from the command line is therefore straightforward, it is treated as a regex.
- For convenience, I have left all the `main` methods in all the test sources. I have just refactored them to accept the `filter` argument from the command line.
- For example, to run only a single spec from `Vector_Spec.enso`, invoke `enso --run test/Base_Tests/src/Data/Vector_Spec.enso "should allow vector creation with a programmatic constructor"`
- **Majority of the PR is a regex replace** of `^main =` for `main filter=Nothing =` and of `suite.run_with_filter` for `suite.run_with_filter filter`.
- **Fixed some internal engine bugs:**
- `AtomWithHole` allows specifying only one hole - https://github.com/enso-org/enso/pull/9065/files#diff-0f7bb7e85cf86a965de133aa7e6b5958ceb889bd1921c01e00d3a9ceb19626ef
- NaN keys in hash maps are handled in polyglot maps as well - c5257f6c2b78f893214ff67300893b593ea05e21..db4b3c0e9828ee79208d52e02586b24bb845b0d6