Move the types from `Standard.Table.Data` to `Standard.Table`.
Exceptions:
- `Standard.Table.Data.Report_Unmatched` => `Standard.Table.Constants`.
- `Standard.Table.Data.Join_Kind_Cross` => `Standard.Table.Internal.Join_Kind_Cross`.
Also removed its constructor as an atom type.
- `Standard.Table.Extensions.Table_Ref` => `Standard.Table.Internal.Table_Ref`.
- `Standard.Table.Data.Type.Value_Type_Helpers` => `Standard.Table.Internal.Value_Type_Helpers`.
- `Standard.Table.Data.Type.Enso_Types` => `Standard.Table.Internal.Value_Type_Helpers`.
- `Standard.Table.Data.Type.Storage` => `Standard.Table.Internal.Storage`.
Changed all `Standard.Table` imports inside the project to be `project` imports.
Favoured importing from `Standard.Table.Main` in `Standard.Database`.
Also fixed some linting in `Enso_File`.
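A minimal sketch of how imports change for code that previously pulled a type from `Standard.Table.Data`:

```
# Before the move (no longer works):
# from Standard.Table.Data.Table import Table

# After the move, types come straight from `Standard.Table`:
from Standard.Table import Table
```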
- Closes #9284
- Now our tests run without the default `AWS_` config, thus ensuring that the tested setups work in a clean environment.
- After all, more complicated logic was needed for bucket access: apparently the AWS SDK only allows some operations on buckets to happen if the client is connected to the correct region, so detection of bucket regions had to be implemented.
- Added `AWS_Region` widget based on autoscoping.
- Fixed `AWS_Credential.profile_names` crashing if no AWS config was found; now it returns no profiles instead. Added a regression test.
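A minimal sketch of the fixed behaviour (assuming `AWS_Credential` is exported from `Standard.AWS`):

```
from Standard.AWS import AWS_Credential

main =
    # With no AWS config file present, this now returns an empty vector
    # of profile names instead of crashing.
    AWS_Credential.profile_names
```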
Make `expand_to_rows` work for `Table` and `Column`.
Make `expand_column` work for `Column` and `Row`.
Makes solving the book club exercise easier (see the sketch at the end of this entry).
![image](https://github.com/enso-org/enso/assets/1720119/4532fc38-8765-43d9-b6a0-61254f52239f)
# Important Notes
We decided `expand_column` did not make sense for `Table`, as the resulting table of columns would rarely be what was wanted.
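For instance, a minimal sketch of the `Table` version of `expand_to_rows` (column names and data are illustrative):

```
from Standard.Base import all
from Standard.Table import all

main =
    t = Table.new [["id", [1, 2]], ["items", [[10, 20], [30]]]]
    # Each element of the `items` vectors becomes its own row, with the
    # corresponding `id` value repeated alongside it.
    t.expand_to_rows "items"
```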
* Initial connection to Snowflake via an account, username and password.
* Fix databases and schemas in Snowflake.
Add warehouses.
* Add warehouse.
Update schema dropdowns.
* Add ability to set warehouse and pass at connect.
* Fix for NPE in license review
* scalafmt
* Separate Snowflake from Database.
* Scala fmt.
* Legal Review
* Avoid using ARROW for Snowflake.
* Tidy up Entity_Naming_Properties.
* Fix for separating Entity_Naming_Properties.
* Allow some tweaking of the Postgres dialect so that Snowflake can use it as well.
* Working on reading Date, Time and Date Times.
* Changelog.
* Java format.
* Make Snowflake Time and TimeStamp stuff work.
Move some responsibilities to Type_Mapping.
* fix
* Update distribution/lib/Standard/Database/0.0.0-dev/src/Connection/Connection.enso
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
* PR comments.
* Last refactor for PR.
* Fix.
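Put together, the connection flow these commits build up looks roughly like this; the `Snowflake_Details` constructor and its field names are assumptions based on the commit messages, not a confirmed API:

```
from Standard.Base import all
from Standard.Database import all
from Standard.Snowflake import all

main =
    # Hypothetical details: account, username/password credentials and a
    # warehouse chosen at connect time, per the commits above.
    details = Snowflake_Details.Snowflake account="my_account" credentials=(Credentials.Username_And_Password "user" "secret") warehouse="my_wh"
    connection = Database.connect details
    connection.tables
```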
---------
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
- Closes #9300
- Now the Enso libraries are themselves capable of refreshing the access token, so there are no more problems if the token expires during a long-running workflow.
- Adds `get_optional_field`, a sibling to `get_required_field`, for more unified parsing of JSON responses from the Cloud.
- Adds `expected_type`, which checks the type of extracted fields. This way, if the response is malformed, we get a nice Enso Cloud error telling us what is wrong with the payload instead of a `Type_Error` later down the line.
- Fixes `Test.expect_panic_with` to actually catch only panics. Before, it also used to handle dataflow errors, but those have `.should_fail_with` instead; we should distinguish these scenarios.
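A sketch of the distinction (assuming the usual test imports):

```
from Standard.Base import all
import Standard.Base.Errors.Illegal_Argument.Illegal_Argument
from Standard.Test import all

example_distinction =
    # Panics are caught by `expect_panic_with`...
    Test.expect_panic_with (Panic.throw (Illegal_Argument.Error "boom")) Illegal_Argument
    # ...while dataflow errors are asserted with `should_fail_with`.
    (Error.throw (Illegal_Argument.Error "boom")) . should_fail_with Illegal_Argument
```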
`Table.from_union` creates a new table when passed a vector of tables. This is especially helpful when a grouped method is run multiple times, as it can create a unified result set.
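A minimal sketch (table contents illustrative):

```
from Standard.Base import all
from Standard.Table import all

main =
    t1 = Table.new [["x", [1, 2]]]
    t2 = Table.new [["x", [3, 4]]]
    # Rows of all the tables in the vector are combined into one table.
    Table.from_union [t1, t2]
```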
- Fix `Excel_Workbook.sheet` and add a test.
- Add icon for `Table.row_count` and `DB_Table.row_count`.
- Make `join_kind` widget `Display.Always`.
- Add expression as an option to `Aggregate_Column`.
- Add `Simple_Calculation.Copy` to create a copy.
- Add defaults to `Simple_Expression` so it is less error-prone (see the sketch after this list).
- Set the period to default to day for `date_diff`, allowing its use in expressions.
- Add `Text_Left`, `Text_Right`, `Text_Length` and `Format` to `Simple_Expression`.
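A sketch of the reordered API; the `Simple_Expr` constructor shape is an assumption based on this entry (column first, then a calculation):

```
from Standard.Base import all
from Standard.Table import all

main =
    t = Table.new [["a", [1, 2, 3]]]
    # Column reference first, then the calculation - here the new
    # `Simple_Calculation.Copy`, which simply copies the column.
    derived = Simple_Expression.Simple_Expr (Column_Ref.Name "a") Simple_Calculation.Copy
    t.set derived as="a_copy"
```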
Follow-up on #9150 - making sure that the Arrow builder is not accidentally treated as an Array, by disallowing reading its elements.
# Important Notes
Also making sure that the length of the resulting Arrow Array is consistent with what the user requested.
This is a first naïve implementation of `Table.running` that only supports count. Adding it as a scaffold for the rest of the functionality and to give us a place to agree on the API (which I changed slightly from the design).
![image](https://github.com/enso-org/enso/assets/1720119/a62a83ed-f864-4295-98ea-1007f62381b1)
# Important Notes
Only supports `Statistic.Count`. Other functionality to follow.
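A minimal sketch of the scaffolded API (data illustrative):

```
from Standard.Base import all
from Standard.Table import all

main =
    t = Table.new [["group", ["a", "a", "b"]], ["value", [10, 20, 30]]]
    # Adds a running count; `Statistic.Count` is the only statistic
    # supported in this first version.
    t.running Statistic.Count
```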
- Adds the Excel format as one of the formats supported when creating a data link.
- The data link can choose to read the file as a workbook, or to read a sheet or range from it as a table, like `Excel_Format` does.
- Also updated the Delimited format dialog to allow customizing the quote style.
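For reference, the same workbook-vs-sheet choice expressed through `Excel_Format` when reading directly; the constructor argument shapes here are assumptions:

```
from Standard.Base import all
from Standard.Table import all

main =
    # Read the whole file as a workbook object...
    workbook = Data.read "data.xlsx" Excel_Format.Workbook
    # ...or read one sheet from it as a table.
    sheet = Data.read "data.xlsx" (Excel_Format.Sheet "Sheet1")
    [workbook.sheet_names, sheet.row_count]
```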
Including the Arrow language in the distribution by default. Added a basic example of creating an Arrow array.
Making sure that the memory layout agrees with the Arrow specification (padding, contiguous allocation of memory chunks).
Related to #9118.
This should unblock work on allowing serialization/deserialization to/from Parquet, but I'd like to delay it to a follow-up ticket as it is going to be a significant amount of specialized work.
- Implements the core parts of #9048
- Currently the path resolution is done by resolving each segment, one by one - requiring as many API calls as there are segments in the path.
- This should be replaced in a followup PR, once https://github.com/enso-org/cloud-v2/issues/899 is implemented.
- Added `to_table` extensions on some core types.
- Added ICON to `Any.to`.
- Added ICON to `Column.info`, `Table.info`, `DB_Column.info` and `DB_Table.info`.
- Added defaults to `Table.cross_tab` and `DB_Table.cross_tab`.
- Added `name`, `get`, `at`, `inner_xml` and `outer_xml` to `XML_Document`.
- Added constants to the left-hand side of simple expressions.
- Added widgets to `get` and `at` on `XML_Document` and `XML_Element`. (There is some bug in the annotation code, being looked into with Dmitry.)
- Altered `get` and `at` to not allow XPath, just getting direct child/attribute values.
- Added `get_xpath` to `XML_Document`.
- Renamed `get_elements_by_tag_name` to `get_descendants_by_tag_name` and added new `get_children_by_tag_name`.
- Added `child_names` and `attribute_names` to `XML_Document` and `XML_Element`.
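A sketch of the new navigation API (assuming `XML_Document` is in scope via `Standard.Base`; the document contents are illustrative):

```
from Standard.Base import all

main =
    doc = XML_Document.from_text "<library><book title='A'/><book title='B'/></library>"
    # `get` / `at` now address direct children and attributes only...
    books = doc.root_element.get_children_by_tag_name "book"
    # ...while XPath queries go through the dedicated `get_xpath`:
    titles = doc.get_xpath "/library/book/@title"
    [books.length, titles]
```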
- Four weeks ago we discussed with @jdunkerley that we may want to move this module.
- `Data` is a submodule mostly for various data structures etc.; the cloud stuff did not fit that well in there.
* Test describing the current behavior of chained if then else application
* Chained block should behave just like Group around if_then_else
* Finishing line on BlockStart fixes if_then_else_chained_block
* Only finish the line when there was no start of a macro segment
* Fix tests
* Refine else-body with macro patterns.
* Update test syntax to maintain original semantics
* Few additional tests
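The shape of code these commits are about, roughly: a chained `if … then … else` whose else-branch continues as a block (illustrative snippet):

```
describe x =
    # The else-branch continues as an indented block; these fixes make the
    # chained form parse just like the explicitly grouped one.
    if x > 0 then "positive" else
        "non-positive"
```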
---------
Co-authored-by: Kaz Wesley <kaz@lambdaverse.org>
Simplify `Test.Suite.run_with_filter` to accept a single filter parameter that searches for all the groups and specs that match that filter. This filter can be a simple text provided from the command line.
# Important Notes
- Pending groups are now printed at the end of the run
- `Test.Suite.run_with_filter` is simplified to accept a single filter parameter that is either `Text` or `Nothing`. See the docs.
- Passing a filter from the command line is therefore straightforward; it is treated as a regex.
- For convenience, I have left all the `main` methods in all the test sources. I have just refactored them to accept the `filter` argument from the command line.
- For example, to run only a single spec from `Vector_Spec.enso`, invoke `enso --run test/Base_Tests/src/Data/Vector_Spec.enso "should allow vector creation with a programmatic constructor"`
- **The majority of the PR is a regex replace** of `^main =` with `main filter=Nothing =` and of `suite.run_with_filter` with `suite.run_with_filter filter` (see the sketch after this list).
- **Fixed some internal engine bugs:**
- `AtomWithHole` now allows specifying only one hole - https://github.com/enso-org/enso/pull/9065/files#diff-0f7bb7e85cf86a965de133aa7e6b5958ceb889bd1921c01e00d3a9ceb19626ef
- NaN keys in hash maps are handled in polyglot maps as well - c5257f6c2b78f893214ff67300893b593ea05e21..db4b3c0e9828ee79208d52e02586b24bb845b0d6
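Following that regex replace, a refactored test file roughly looks like this (group and spec names illustrative):

```
from Standard.Base import all
from Standard.Test import all

add_specs suite_builder =
    suite_builder.group "My group" group_builder->
        group_builder.specify "should add numbers" <|
            (1 + 1) . should_equal 2

main filter=Nothing =
    suite = Test.build suite_builder->
        add_specs suite_builder
    suite.run_with_filter filter
```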
- Added a `regex` function to `Standard.Base` as a way to easily and cleanly make regular expressions.
- Added `expr` and `Expression.Value` to distinguish expressions from text values (see the sketch after this list).
- Fixed issues with the `Table.join` widget so the dropdown exists.
- It needed a fully qualified name.
- Added default empty text values for the right column to the provided text input boxes.
- Deprecate `Table.filter_by_expression` and allow `Table.filter` to take an `Expression`.
- Added `Simple_Expression` and deprecated `Column_Operation`. Changed the order so it takes a column, then a calculation.
- Rename `column` to `value` and `new_name` to `as` on `Table.set`.
- Rename `name_column` to `names` on `Table.cross_tab`.
- Removed `Column_Ref.Expression` in favour of using `Expression.Value`.
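A combined sketch of `regex`, `expr` and the `Table.filter` change (data illustrative; assumes `regex` and `expr` are in scope via the standard imports):

```
from Standard.Base import all
from Standard.Table import all

main =
    t = Table.new [["name", ["Alice", "Bob"]], ["age", [30, 41]]]
    # `expr` marks the text as an expression rather than a literal value,
    # and `Table.filter` now accepts it directly:
    adults = t.filter (expr "[age] > 35")
    # `regex` cleanly builds a regular expression value:
    pattern = regex "A.*"
    [adults.row_count, pattern]
```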
In order to allow clever masking, slicing, filtering and Arrow backing stores:
- Added a `ColumnStorage` interface with the base API a storage will need.
- Refactored each of the unary operations into a new `UnaryOperation` interface, which makes them responsible for deciding whether they can be executed.
Cleaning up some of the structures in `Storage` before working on `UnaryOperation`s.
- Removed some legacy code: `countMask`, `Index` and `DefaultIndex`.
- Renamed `mask` to `applyFilter` on `Column` and `Storage`.
- Renamed `Table.mask` to `Table.filter`.