Closes #5113
Fixes a bug where read-only files would be overwritten if `File.write` was used in backup mode, and adds tests to guard against such a regression. To implement this, introduces an `is_writable` property on `File`.
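A minimal sketch of guarding a write with the new property; the path and messages are illustrative only:
```
from Standard.Base import all

main =
    f = File.new "/tmp/example.txt"
    # `is_writable` is the new property introduced by this PR.
    msg = if f.is_writable then "Safe to write." else "Read-only: skipping write."
    IO.println msg
```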
- Adjust Excel Workbook write behaviour.
- Support Nothing / Null constants.
- Deduce the type of arithmetic operations and `iif`.
- Allow `Date_Time` constants, treating them as being in the local time zone.
- Removed the `to_column_name` and `ensure_sane_name` code.
- Handle `WithWarnings` in `IndirectInvokeCallableNode`.
- Handle no RootNode in `ErrorResolver`.
- Allow table visualisation to cope if no `data` is passed.
- Add `Warning.has_warnings` to check if warnings present.
- Adjust `set_value` for `JS_Object` so that it creates a new object each time.
- Updates the `rename_columns` API.
- Add `first_row`, `second_row` and `last_row` to the Table types.
- New option for reading only last row of ResultSet.
Rename `is_match` and `match` to `match` and `find` (respectively), and remove all non-regexp functionality.
Regexp flags and `Match_Mode` are also no longer supported by these methods.
Critical performance improvements after #4067
# Important Notes
- Replace if-then-else expressions in `Any.==` with case expressions.
- Fix caching in `EqualsNode`.
- This includes fixing the specializations, along with the fallback guard.
Remove regex support from `.locate` and `.locate_all`; regex functionality is moved to `.match` and `.match_all` where appropriate. This is in preparation for simplifying regex support across the board.
Also change the `Matching_Mode` types to a single type with two variants.
Note: the `matcher` parameter to `.locate` and `.locate_all` has been replaced by a `case_sensitivity` parameter of type `Case_Sensitivity`, which differs in that it also has a `Default` option; `Default` is treated as `Sensitive`.
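A hedged sketch of the reworked call, assuming `locate` keeps its current shape apart from the new parameter:
```
from Standard.Base import all

main =
    # Case-insensitive search via the new parameter; `Case_Sensitivity.Default`
    # would behave the same as `Case_Sensitivity.Sensitive`.
    "Hello World" . locate "world" case_sensitivity=Case_Sensitivity.Insensitive
```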
- Updated `Widget.Vector_Editor` so it is ready for use by the IDE team.
- Added `get` to `Row` to make the API more aligned.
- Added `first_column`, `second_column` and `last_column` to `Table` APIs.
- Adjusted `Column_Selector` and associated methods to have a simpler API.
- Removed `Column` from `Aggregate_Column` constructors.
- Added new `Excel_Workbook` type and added to `Excel_Section`.
- Added new `SQLiteFormatSPI` and `SQLite_Format`.
- Added new `ImageFormatSPI` and `Image_Format`.
Closes #5109
# Important Notes
- Currently the tests pass for the in-memory parts of Common_Table_Operations, but some functionality is still not working on the DB backends; that work is in progress.
Add `Comparator` type class emulation for all types. Migrate all the types in stdlib to this new `Comparator` API. The main documentation is in `Ordering.enso`.
Fixes these Pivotal stories:
- https://www.pivotaltracker.com/story/show/183945328
- https://www.pivotaltracker.com/story/show/183958734
- https://www.pivotaltracker.com/story/show/184380208
# Important Notes
- The new Comparator API forces users to specify both `equals` and `hash` methods on their custom comparators.
- All the `compare_to` overrides were replaced by definition of a custom _ordered_ comparator.
- All the call sites of `x.compare_to y` method were replaced with `Ordering.compare x y`.
- `Ordering.compare` is essentially a shortcut for `Comparable.from x . compare x y`.
- The default comparator for `Any` is `Default_Unordered_Comparator`, which just forwards to the builtin `EqualsNode` and `HashCodeNode` nodes.
- For `x`, one can get its hash with `Comparable.from x . hash x`.
- This makes `hash` as _hidden_ as possible. There are no other public methods to get a hash code of an object.
- Comparing `x` and `y` can be done either by `Ordering.compare x y` or `Comparable.from x . compare x y` instead of `x.compare_to y`.
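A hedged sketch of defining a custom ordered comparator under the new API; the `Book` type and its fields are made up for illustration:
```
from Standard.Base import all

type Book
    Value title price

# The new API requires a comparator to define both `compare` and `hash`.
type Book_Comparator
    compare x y = Ordering.compare x.price y.price
    hash x = Comparable.from x.price . hash x.price

# Attach the comparator to the type via a `Comparable.from` conversion.
Comparable.from (_:Book) = Book_Comparator

main =
    a = Book.Value "Dune" 10
    b = Book.Value "Emma" 12
    Ordering.compare a b    # Ordering.Less
```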
- Fixes the display of `Date`, `Time_Of_Day` and `Date_Time` so they don't wrap.
- Adjusts serialization of large integer values for JS and their display within the table.
- Workaround for an issue with using `.lines` in the Table (new bug filed).
- Disabled the warning when no `separator` is specified on `Concatenate`.
Does not include a fix for aggregation on integer values outside of the `long` range.
Closes #5038
- Use the proper widget structure.
- Provide new method `get_widget_json` with whole structure, but keep `get_full_annotations_json` in old form.
- Started to extract some reusable functions.
- Added widget to JS_Object field selections.
- New `set` function design - takes a `Column` and works with that more easily and supports control of `Set_Mode`.
- New simple `parse` API on `Column`.
- Separated expression support for `filter` into a new `filter_by_expression` on `Table`.
- New `compute` function allowing creation of a column from an expression (see the sketch after this list).
- Added a case sensitivity argument to the `Column`-based `starts_with`, `ends_with` and `contains`.
- Added case sensitivity argument to `Filter_Condition` for `Starts_With`, `Ends_With`, `Contains` and `Not_Contains`.
- Fixed the issue in JS Table visualisation where JavaScript date was incorrectly set.
- Added some dynamic dropdown expressions, experimenting with ways to use them.
- Fixed issue with `.pretty` that wasn't escaping `\`.
- Changed default Postgres DB to `postgres`.
- Fixed SQLite support for `starts_with`, `ends_with` and `contains` to be consistent (using `GLOB` not `LIKE`).
- Updated `Text.starts_with`, `Text.ends_with` and `Text.contains` to new simpler API.
- Added a `Case_Sensitivity.Default` and adjusted `Table.distinct` to use it by default.
- Fixed a bug with `Data.fetch` on an HTTP error.
- Improved SQLite Case Sensitivity control in distinct to use collations.
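A hedged sketch of the new expression-based APIs; the table contents and the exact `compute` signature are assumptions:
```
from Standard.Base import all
from Standard.Table import all

main =
    table = Table.new [["price", [120, 80]], ["quantity", [1, 2]]]
    # Filter rows with the new `filter_by_expression`.
    cheap = table.filter_by_expression "[price] < 100"
    # Build a new column from an expression with the new `compute`.
    total = table.compute "[price] * [quantity]"
    [cheap, total]
```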
When the function is visualized with the `default_preprocessor` method, it returns a dataflow error that shows up in the logs as
```
Cannot encode class org.enso.interpreter.runtime.error.DataflowError to byte array.
```
The PR handles dataflow errors in the preprocessor and falls back to the `to_display_text` value. As a bonus, we get a function visualization.
Implements [#183453466](https://www.pivotaltracker.com/story/show/183453466).
https://user-images.githubusercontent.com/1428930/203870063-dd9c3941-ce79-4ce9-a772-a2014e900b20.mp4
# Important Notes
* the best laziness is used for the `Text` type, which makes use of its internal representation to send data
* any other type will first compute its default string representation and then send the content of that lazy value to the IDE
* special handling of files and their content will be implemented in the future
* the size of the displayed text can be updated dynamically based on best-effort information: if the backend does not yet know the full width/height of the text, it can update the IDE at any time, and this will be handled gracefully by updating the scrollbar position and sizes.
- add: `GeneralAnnotation` IR node for `@name expression` annotations
- update: compilation pipeline to process the annotation expressions
- update: rewrite `OverloadsResolution` compiler pass so that it keeps the order of module definitions
- add: `Meta.get_annotation` builtin function that returns the result of the annotation expression
- misc: improvements (private methods, lazy arguments, build.sbt cleanup)
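A hedged sketch of the feature; the `Greeter` type is made up, and both the annotation placement and the argument order of `Meta.get_annotation` (target, method name, annotation name) are assumptions based on the description above:
```
from Standard.Base import all
import Standard.Base.Meta

type Greeter
    # A general `@name expression` annotation; the expression is compiled
    # and its result is what `Meta.get_annotation` returns.
    @name ("Wor" + "ld")
    greet self name = "Hello, " + name

main =
    # Assumed call shape: target, method name, annotation name.
    Meta.get_annotation Greeter "greet" "name"
```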
* Hash codes prototype
* Remove Any.hash_code
* Improve caching of hashcode in atoms
* [WIP] Add Hash_Map type
* Implement Any.hash_code builtin for primitives and vectors
* Add some values to ValuesGenerator
* Fix example docs on Time_Zone.new
* [WIP] QuickFix for HashCodeTest before PR #3956 is merged
* Fix hash code contract in HashCodeTest
* Add times and dates values to HashCodeTest
* Fix docs
* Remove hashCodeForMetaInterop specialization
* Introduce snapshotting of HashMapBuilder
* Add unit tests for EnsoHashMap
* Remove duplicate test in Map_Spec.enso
* Hash_Map.to_vector caches result
* Hash_Map_Spec is a copy of Map_Spec
* Implement some methods in Hash_Map
* Add equalsHashMaps specialization to EqualsAnyNode
* get and insert operations are able to work with polyglot values
* Implement rest of Hash_Map API
* Add test that inserts elements with keys with same hash code
* EnsoHashMap.toDisplayString use builder storage directly
* Add separate specialization for host objects in EqualsAnyNode
* Fix specialization for host objects in EqualsAnyNode
* Add polyglot hash map tests
* EconomicMap keeps a reference to EqualsNode and HashCodeNode, rather than passing these nodes to `get` and `insert` methods.
* HashMapTest run in polyglot context
* Fix containsKey index handling in snapshots
* Remove snapshots field from EnsoHashMapBuilder
* Prepare polyglot hash map handling.
- Hash_Map builtin methods are separate nodes
* Some bug fixes
* Remove ForeignMapWrapper.
We would have to wrap foreign maps in assignments for this to be efficient.
* Improve performance of Hash_Map.get_builtin
Also, the `if_nothing` parameter is suspended
* Remove to_flat_vector.
The Interop API requires a nested vector (our previous to_vector implementation). It seems that I misunderstood the docs the first time I read them.
- to_vector does not sort the vector by keys by default
* Fix polyglot hash maps method dispatch
* Add tests that effectively test hash code implementation.
Via hash map that behaves like a hash set.
* Remove Hashcode_Spec
* Add some polyglot tests
* Add Text.== tests for NFD normalization
* Fix NFD normalization bug in Text.java
* Improve performance of EqualsAnyNode.equalsTexts specialization
* Properly compute hash code for Atom and cache it
* Fix Text specialization in HashCodeAnyNode
* Add Hash_Map_Spec as part of all tests
* Remove HashMapTest.java
Providing all the infrastructure for all the needed Truffle nodes is no longer manageable.
* Remove rest of identityHashCode message implementations
* Replace old Map with Hash_Map
* Add some docs
* Add TruffleBoundaries
* Formatting
* Fix some tests to accept unsorted vector from Map.to_vector
* Delete Map.first and Map.last methods
* Add specialization for big integer hash
* Introduce proper HashCodeTest and EqualsTest.
- Use jUnit theories.
- Call nodes directly
* Fix some specializations for primitives in HashCodeAnyNode
* Fix host object specialization
* Remove Any.hash_code
* Fix import in Map.enso
* Update changelog
* Reformat
* Add truffle boundary to BigInteger.hashCode
* Fix performance of HashCodeTest - initialize DataPoints just once
* Fix MetaIsATest
* Fix ValuesGenerator.textual - Java's char is not Text
* Fix indent in Map_Spec.enso
* Add maps to datapoints in HashCodeTest
* Add specialization for maps in HashCodeAnyNode
* Add multiLevelAtoms to ValuesGenerator
* Provide a workaround for non-linear key inserts
* Fix specializations for double and BigInteger
* Cosmetics
* Add truffle boundaries
* Add allowInlining=true to some truffle boundaries.
Increases performance a lot.
* Increase the size of vectors, and warmup time for Vector.Distinct benchmark
* Various small performance fixes.
* Fix Geo_Spec tests to accept unsorted Map.to_vector
* Implement Map.remove
* Fix Visualization tests to accept unsorted Map.to_vector
* Treat java.util.Properties as Map
* Add truffle boundaries
* Invoke polyglot methods on java.util.Properties
* Ignore python tests if python lang is missing
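A hedged sketch of the resulting `Map` (hash map) behaviour; the keys and values are illustrative:
```
from Standard.Base import all

main =
    m = Map.empty . insert "a" 1 . insert "b" 2
    IO.println (m.get "a")            # 1
    IO.println (m.contains_key "b")   # True
    IO.println (m.remove "a" . size)  # 1
    # Note: `to_vector` no longer guarantees key-sorted order.
    IO.println m.to_vector
```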
- Add `get` to Table.
- Correct `Count Nothing` examples.
- Add `join` to File.
- Add `File_Format.all` listing all installed formats.
- Add some more ALIAS entries.
Fixes https://www.pivotaltracker.com/story/show/184073099
# Important Notes
- Since the only division operator on columns, `/`, now returns floats, it may be worth creating an additional `div` operator exposing integer division. But that will be done as a separate task aligning column operator APIs.
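For scalars the distinction already exists, which is what a column-level `div` would mirror; a small sketch:
```
from Standard.Base import all

main =
    IO.println (10 / 4)    # 2.5 : `/` returns a float
    IO.println (10.div 4)  # 2   : `div` is integer division
```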
Use `InteropLibrary.isString` and `asString` to convert any string value to `byte[]`.
# Important Notes
Also contains support for `Metadata.assertInCode` to help locate the right place in code snippets.
**Vector**
- Adjusted `Vector.sort` to be `Vector.sort order on by`.
- Adjusted the other sort methods to use `order` for the direction argument.
- Added `insert`, `remove`, `index_of` and `last_index_of` to `Vector`.
- Added `start` and `if_missing` arguments to `find` on `Vector`, and adjusted the default to be a `Not_Found` error.
- Added type checking to `+` on `Vector`.
- Altered `first`, `second` and `last` to error with `Index_Out_Of_Bounds` on `Vector`.
- Removed `sum`, `exists`, `head`, `init`, `tail`, `rest`, `append`, `prepend` from `Vector`.
**Pair**
- Added `last`, `any`, `all`, `contains`, `find`, `index_of`, `last_index_of`, `reverse`, `each`, `fold` and `reduce` to `Pair`.
- Added `get` to `Pair`.
**Range**
- Added `first`, `second`, `index_of`, `last_index_of`, `reverse` and `reduce` to `Range`.
- Added `at` and `get` to `Range`.
- Added `start` and `if_missing` arguments to `find` on `Range`.
- Simplified `last` and `length` of `Range`.
- Removed `exists` from `Range`.
**List**
- Added `second`, `find`, `index_of`, `last_index_of`, `reverse` and `reduce` to `List`.
- Added `at` and `get` to `List`.
- Removed `exists` from `List`.
- Made `all` short-circuit if any entry fails on `List`.
- Altered `is_empty` so it does not compute the length of `List`.
- Altered `first`, `tail`, `head`, `init` and `last` to error with `Index_Out_Of_Bounds` on `List`.
**Others**
- Added `first`, `second`, `last`, `get` to `Text`.
- Added wrapper methods to `Random_Number_Generator` so you can get random values more easily.
- Adjusted `Aggregate_Column` to operate on the first column by default.
- Added `contains_key` to `Map`.
- Added ALIAS to `row_count` and `order_by`.
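A hedged sketch of a few of the additions above; the argument names (e.g. `if_missing`) follow the list:
```
from Standard.Base import all

main =
    v = [10, 20, 30]
    IO.println (v.index_of 20)                          # 1
    IO.println (v.find (x -> x > 25))                   # 30
    IO.println (v.find (x -> x > 99) if_missing=(-1))   # -1 instead of a Not_Found error
    IO.println (Pair.new 1 2 . last)                    # 2
    IO.println ((0.up_to 5).reduce (+))                 # 10
```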
Don't propagate errors from `toDisplayString` - construct an error message with `Atom.toString`.
# Important Notes
> currently a failure in to_text is swallowed by `toString` and we cannot detect that something went wrong during the serialization
Not sure how satisfying the solution is, but the error swallowing happens in Truffle and there is little we can do about it. We can just catch the error ourselves and produce a meaningful string.
Most of the problems with accessing `ArrayOverBuffer` have been resolved by using `CoerceArrayNode` (https://github.com/enso-org/enso/pull/3817). In `Array.sort`, however, we still specialized on `Array`, which wasn't compatible with `ArrayOverBuffer`; similarly, sorting JS or Python arrays wouldn't work.
Added a specialization to `Array.sort` to deal with that case. A generic specialization (with `hasArrayElements`) handles not only `ArrayOverBuffer` but also polyglot arrays coming from JS or Python. We could have an additional specialization for `ArrayOverBuffer` only (removed in the last commit) that returns `ArrayOverBuffer` rather than `Array`, although that adds additional complexity which so far is unnecessary.
Also fixed an example in `Array.enso` by providing a default argument.
The compiler performed name resolution of literals in type signatures but would silently fail to report any problems. This meant that wrong names or forgotten imports could sneak into the stdlib.
This PR introduces two main changes:
1) failed name resolutions are appended in the `TypeNames` pass
2) the `GatherDiagnostics` pass also collects and reports failures from type-signature IR
Updated the stdlib so that it passes given the correct gatekeepers in place.
Implements https://www.pivotaltracker.com/story/show/184032869
# Important Notes
- Currently we get failures in Full joins on Postgres, which show a more serious problem: amending equality to ensure that `[NULL = NULL] == True` breaks hash/merge-based indexing, so such joins will be extremely inefficient. All our joins currently rely on this notion of equality, which means all of our DB joins will be extremely inefficient.
- We need to find a solution that supports nulls and still works well with indices. After exploring a few approaches (`COALESCE(a = b, a IS NULL AND b IS NULL)`, `a IS NOT DISTINCT FROM b`, `(a = b) OR (a IS NULL AND b IS NULL)`), none of which worked (they all result in `ERROR: FULL JOIN is only supported with merge-joinable or hash-joinable join conditions`), I'm less certain that it is possible. Alternatively, we may need to change the NULL semantics to align them with SQL - this seems like the simpler solution, allowing us to generate simple, reliable SQL; the NULL=NULL solution would corner us into nasty workarounds very dependent on the particular backend.
The PR adds a flag to the `Text` implementation tracking whether it is in FCD normal form. This information can then be used in the `Normalizer.compare` method.
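For context, `Text` equality in Enso is normalization-based, so canonically equivalent strings compare equal; a small sketch of the kind of comparison the flag speeds up (assuming the `\u{...}` escape syntax):
```
from Standard.Base import all

main =
    nfc = 'é'           # precomposed: U+00E9
    nfd = 'e\u{301}'    # 'e' plus a combining acute accent
    IO.println (nfc == nfd)  # True: canonically equivalent texts are equal
```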
| Benchmark name | Old (ms) | With flag (ms) |
| --- | --- | --- |
| Unicode very short | 40.29 | 40.04 |
| Unicode medium | 9.07 | 1.99 |
| Unicode big - random | 115.39 | 0.35 |
| Unicode big - early difference | 107.02 | 0.54 |
| Unicode big - late difference | 749.81 | 94.73 |
| ASCII very short | 28.13 | 31.13 |
| ASCII medium | 4.58 | 2.26 |
| ASCII big - random | 42.68 | 0.26 |
| ASCII big - early difference | 30.91 | 0.32 |
| ASCII big - late difference | 66.29 | 42.72 |
Full benchmark output:
[bench_old.txt](https://github.com/enso-org/enso/files/10325202/bench_old.txt)
[bench_new.txt](https://github.com/enso-org/enso/files/10325201/bench_new.txt)