The main culprit of the `Vector` slowdown (compared to `Array`) was the normalization of the index when accessing elements. It turns out that Graal was very persistent about **not** inlining that particular fragment, and that was degrading the benchmark results.
Being unable to force it to inline (the blocker looks like a combination of thunk execution and another layer of indirection), we resorted to moving the normalization into the builtin method. That makes `Array` and `Vector` perform roughly the same.
Moved all handling of invalid indices into the builtin as well, simplifying the Enso implementation. This also means that `Vector.unsafe_at` is now obsolete.
Additionally, added support for negative indices in `Array`, so that it behaves the same way as `Vector`.
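For example (a minimal sketch; `Vector.to_array` and the values are illustrative):
```
vec = [10, 20, 30]
arr = vec.to_array

vec.at -1    # returns 30
arr.at -1    # now also returns 30, matching the Vector behaviour
```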
# Important Notes
Note that this workaround only addresses this particular performance issue. I'm pretty sure we will encounter more such scenarios.
Before the change, the `averageOverVector` benchmark averaged around `0.033 ms/op`; now it consistently achieves `0.016 ms/op`, similar to `averageOverArray`.
Improve the `Unsupported_Argument_Types` error so that it includes the message from the original exception. The `arguments` field is retained, but is not included in the `to_display_text` method.
- Removed `Dubious constructor export` from Examples, Geo, Google_Api, Image and Test.
- Updated Google_Api project to meet newer code standards.
- Restructured `Standard.Test`:
- `Main.enso` now exports `Bench`, `Faker`, `Problems`, `Test`, `Test_Suite`
- `Test.Suite` methods moved into a `Test_Suite` type.
- Moved `Bench.measure` into `Bench` type.
- Separated the reporting to a `Test_Reporter` module.
- Moved `Faker` methods into `Faker` type.
- Removed `Verbs` and `.should` method.
- Added `should_start_with` and `should_contain` extensions to `Any` (see the sketch after this list).
- Restructured `Standard.Image`:
- Merged Codecs methods into `Image`.
- Export `Image`, `Read_Flag`, `Write_Flag` and `Matrix` as types from `Main.enso`.
- Merged the internal methods into `Matrix` and `Image`.
- Fixed `Day_Of_Week` so that it is exported as a type, and sorted out the `from` method.
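A minimal sketch of the restructured `Standard.Test` API and the new `Any` extensions mentioned above (the `Test_Suite.run_main` entry point is assumed; group/spec names and values are illustrative):
```
from Standard.Test import Test, Test_Suite

spec = Test.group "New extensions" <|
    Test.specify "should support the new matchers" <|
        "Hello World" . should_start_with "Hello"
        [1, 2, 3] . should_contain 2

main = Test_Suite.run_main spec
```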
- Reimplemented the `Duration` type as a built-in type.
- `Duration` is an interop type.
- Allow Enso method dispatch on `Duration` interop coming from different languages.
# Important Notes
- The older `Duration` type is now being split into the new `Duration` builtin type and a `Period` type.
- This PR does not implement `Period` type, so all the `Period`-related functionality is currently not working, e.g., `Date - Period`.
- This PR removes `Integer.milliseconds`, `Integer.seconds`, ..., `Integer.years` extension methods.
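A hedged sketch of what a replacement for the removed extensions might look like (the `Duration.new` constructor with named arguments is an assumption, not confirmed by this PR):
```
# Before this PR one could write: 90.seconds
# An assumed replacement using the builtin type:
d = Duration.new minutes=1 seconds=30
```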
- Moved `Standard.Database.connect` into `Standard.Database.Database.connect`, so one can now just write `from Standard.Database import ...` (see the sketch after these lists).
- Removed all `Dubious constructor export`s.
- Switched to using `project` for internal imports.
- Moved to using `Value` for private constructors and not re-exporting.
- Export types not modules from `Standard.Database`.
- Broke up `IR` into separate files (Context, Expression, From_Spec, Internal_Column, Join_Kind, Query).
- No longer use the `IR.` prefix; the specific types are referenced directly instead.
- Broke up `SQL` into separate files (SQL_Type and SQL_Statement).
Additionally:
- Standard.Table: Moved `storage_types` into `Storage`.
- Standard.Table: Switched to using `project` for internal imports.
- Standard.Table.Excel: Renamed modules `Range` to `Excel_Range` and `Section` to `Excel_Section`.
- `Standard.Visualisation`: Switched to using `project` for internal imports.
- `Standard.Visualisation`: Moved to using `Value` for private constructors and not re-exporting.
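A minimal sketch of the new import style (the `connection_details` argument is a placeholder):
```
from Standard.Database import Database

# `connect` now lives on the Database type:
connection = Database.connect connection_details
```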
# Important Notes
- Have not cleared up the `Errors` yet.
- Have not switched to type pattern matching.
- Generally, export types rather than modules from `Standard.Table`.
- Moved `new` and `from_rows` from the `Standard.Table` library into the `Table` type.
- Renamed `Standard.Table.Data.Storage.Type` to `Standard.Table.Data.Storage.Storage`.
- Removed the internal `from_columns` method.
- Removed `join` and `concat`, merging them into instance methods.
- Removed `Table` and `Column` from the `Standard.Database` exports.
- Removed `Standard.Table.Data.Column.Aggregate_Column` as it is no longer used.
- Adds a `details` field to `Failure` for additional contextual information.
- Stack traces are moved from the main message (which should generally be short and fit on one line) into `details`.
- Ensuring that the attribute does not contain multiple lines fixes the CI viewer, which seems to have been breaking on multiline attributes.
- Additionally, test execution time is now measured and printed in the CLI as well as included in the JUnit report for the CI - we can use this to catch tests running unexpectedly slowly.
Changelog
- fix reporting of runtime type for values annotated with warning
- fix visualizations of values annotated with warnings
- fix `Runtime.get_stack_trace` failure in interactive mode
Allows using `Vector ColumnName` as shorthand for the various table functions (see the sketch after this list).
- `select_columns`, `remove_columns`, `reorder_columns` and `distinct` all map to an exact `By_Name` match.
- `rename_columns` does a positional rename on the Vector passed.
- `order_by` sorts ascending on each column passed in order.
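For example (a hypothetical table `t`; the column names are illustrative):
```
t.select_columns ["Name", "Age"]       # exact By_Name match
t.rename_columns ["First", "Second"]   # positional rename
t.order_by ["Name", "Age"]             # sort ascending on each column
```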
# Important Notes
This may be reversed once widgets are available and working, but it makes the APIs much more usable in the current UI.
This change brings by-type pattern matching to Enso.
One can pattern match on Enso types as well as on polyglot types.
For example,
```
case x of
    _ : Integer -> ...
    _ : Text -> ...
    _ -> ...
```
as well as on Java's types:
```
case y of
    _ : ArrayList -> ...
    _ : List -> ...
    _ : AbstractList -> ...
    _ -> ...
```
It is no longer possible to match a value with a corresponding type constructor.
For example
```
case Date.now of
    Date -> ...
```
will no longer match and one should match on the type (`_ : Date`) instead.
```
case Date of
    Date -> ...
```
is fine though, as requested in the ticket.
The change required further changes to the `type_of` logic, which wasn't dealing well with polyglot values.
Implements https://www.pivotaltracker.com/story/show/183188846
# Important Notes
~I discovered late in the game that nested patterns involving type patterns, such as `Const (f : Foo) tail -> ...` are not possible due to the old parser logic.
I would prefer to add it in a separate PR because this one is already getting quite large.~ This is now supported!
Implements https://www.pivotaltracker.com/story/show/183402892
# Important Notes
- Fixes inconsistent `compare_to` vs `==` behaviour in date/time types and adds test for that.
- Adds test for `Table.order_by` on dates and custom types.
- Fixes an issue with `Table.order_by` for custom types.
- Unifies how incomparable objects are reported by `Table.order_by` and `Vector.sort`.
- Adds benchmarks comparing `Table.order_by` and `Vector.sort` performance.
Makes statics static. A type and its instances have different methods defined on them, as it should be. Constructors are now scoped in types, and can be imported/exported.
# Important Notes
The method of fixing the stdlib chosen here is to just not fix it. All the constructors are exported so that all old code keeps working. All such instances are marked with `TODO Dubious constructor export` so that they can be found and fixed.
This change implements a simple `type_of` method that returns a type of a given value, including for polyglot objects.
The change also allows for pattern matching on various time-related instances. It is a nice-to-have on its own, but it was primarily needed here to write some tests. For equality checks on types we currently can't use `==` due to a known _feature_ which essentially does wrong dispatching. This will be improved in the upcoming statics PR so we agreed that there is no point in duplicating that work and we can replace it later.
Also, note that this PR changes `Meta.is_same_object`. Comparing types revealed that it was wrong when comparing polyglot wrappers over the same value.
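A minimal sketch of the new functionality (assuming it is exposed as `Meta.type_of`; the values are illustrative):
```
import Standard.Base.Meta

Meta.type_of 42        # Integer
Meta.type_of "hello"   # Text
```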
Uses an `ArraySlice` to slice `Vector`, avoiding memory copying in the slice function.
# Important Notes
| Test | Ref | New |
| --- | --- | --- |
| New Vector | 71.9 | 71.0 |
| Append Single | 26.0 | 27.7 |
| Append Large | 15.1 | 14.9 |
| Sum | 156.4 | 165.8 |
| Drop First 20 and Sum | 171.2 | 165.3 |
| Drop Last 20 and Sum | 170.7 | 163.0 |
| Filter | 76.9 | 76.9 |
| Filter With Index | 166.3 | 168.3 |
| Partition | 278.5 | 273.8 |
| Partition With Index | 392.0 | 393.7 |
| Each | 101.9 | 102.7 |
- Note: the performance of New and Append has become slower than in previous tests.
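A sketch of the pattern this change speeds up (assuming the `drop (First n)` API suggested by the benchmark names; no copy of the underlying storage should occur):
```
v = Vector.new 1000000 i->i
s = v.drop (First 20)   # now backed by an ArraySlice instead of a copied array
s.fold 0 (+)
```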
Implements https://www.pivotaltracker.com/story/show/183082087
# Important Notes
- Removed unnecessary invocations of `Error.throw`, improving the performance of `Vector.distinct`. The runtime of the `add_work_days and work_days_until should be consistent with each other` test suite came down from 15s to 3s after the changes.
Repairs the constructor names following the types work, with some general tidying up as well.
- Remove `Standard.Database.Data.Column.Aggregate_Column_Builder`.
- Remove `Standard.Database.Data.Dialect.Dialect.Dialect_Data`.
- Remove unused imports and update some type definitions.
- Rename `Postgres.Postgres_Data` => `Postgres_Options.Postgres`.
- Rename `Redshift.Redshift_Data` => `Redshift_Options.Redshift`.
- Rename `SQLite.SQLite_Data` => `SQLite_Options.SQLite`.
- Rename `Credentials.Credentials_Data` => `Credentials.Username_And_Password`.
- Rename `Sql` to `SQL` across the board.
- Merge `Standard.Database.Data.Internal` into `Standard.Database.Internal`.
- Move dialects into `Internal` and merge the function in `Helpers` into `Base_Generator`.
It turns out that for a two-part import we had special code that would a) add the `Main` submodule and b) add an explicit rename.
b) is problematic because sometimes we only want to import specific names.
E.g.,
```
from Bar.Foo import Bar, Baz
```
would be translated to
```
from Bar.Foo.Main as Foo import Bar, Baz
```
and it should only be translated to
```
from Bar.Foo.Main import Bar, Baz
```
This change detects this scenario and does not add renames in that case.
Fixes [183276486](https://www.pivotaltracker.com/story/show/183276486).
Changes following Marcin's work. The public API should be back to something very similar to what it was before.
- Add an "interface" type: `Standard.Base.System.File_Format.File_Format`.
- All `File_Format` types now have a `can_read` method to decide if they can read a file.
- Move `Standard.Table.IO.File_Format.Text.Text_Data` to `Standard.Base.System.File_Format.Plain_Text_Format.Plain_Text`.
- Move `Standard.Table.IO.File_Format.Bytes` to `Standard.Base.System.File_Format.Bytes`.
- Move `Standard.Table.IO.File_Format.Infer` to `Standard.Base.System.File_Format.Infer`. **(doesn't belong here...)**
- Move `Standard.Table.IO.File_Format.Unsupported_File_Type` to `Standard.Base.Error.Common.Unsupported_File_Type`.
- Add `Infer`, `File_Format`, `Bytes`, `Plain_Text`, `Plain_Text_Format` to `Standard.Base` exports.
- Fold extension methods of `Standard.Base.Meta.Unresolved_Symbol` into type.
- Move `Standard.Table.IO.File_Format.Auto` to `Standard.Table.IO.Auto_Detect.Auto_Detect`.
- Added a `types` Vector of all the built-in formats.
- `Auto_Detect` asks each type whether it `can_read` a file (see the sketch after this list).
- Broke up and moved `Standard.Table.IO.Excel` into `Standard.Table.Excel`:
- Moved `Standard.Table.IO.File_Format.Excel.Excel_Data` to `Standard.Table.Excel.Excel_Format.Excel_Format.Excel`.
- Renamed `Sheet` to `Worksheet`.
- Internal types `Reader` and `Writer` providing the actual read and write methods.
- Created `Standard.Table.Delimited` with similar structure to `Standard.Table.Excel`:
- Moved `Standard.Table.IO.File_Format.Delimited.Delimited_Data` to `Standard.Table.Delimited.Delimited_Format.Delimited_Format.Delimited`.
- Moved `Standard.Table.IO.Quote_Style` to `Standard.Table.Delimited.Quote_Style`.
- Moved the `Reader` and `Writer` internal types into here. Renamed methods to have unique names.
- Add `Aggregate_Column`, `Auto_Detect`, `Delimited`, `Delimited_Format`, `Excel`, `Excel_Format`, `Sheet_Names`, `Range_Names`, `Worksheet` and `Cell_Range` to `Standard.Table` exports.
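A hedged sketch of the `Auto_Detect` dispatch described above (`types` and `can_read` come from this PR; the helper and its name are illustrative):
```
# Illustrative only: pick the first registered format that can read the file.
detect_format file =
    Auto_Detect.types.find (fmt -> fmt.can_read file)
```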
`Vector` type is now a builtin type. This requires a bunch of additional builtin methods for its creation:
- Use `Vector.from_array` to convert any array-like structure into a `Vector` [by copy](f628b28f5f)
- Use (already existing) `Vector.from_polyglot_array` to convert any array-like structure into a `Vector` **without** copying
- Use (already existing) `Vector.fill 1 item` to create a singleton `Vector`
Additionally, for pattern matching purposes, we had to implement a `VectorBranchNode`. Use the following to match on `x` being an instance of the `Vector` type:
```
import Standard.Base.Data.Vector
size = case x of
    Vector.Vector -> x.length
    _ -> 0
```
Finally, the `VectorLiterals` pass that transforms `[1,2,3]` into (roughly)
```
a1 = 1
a2 = 2
a3 = 3
Vector (Array (a1, a2, a3))
```
had to be modified to generate
```
a1 = 1
a2 = 2
a3 = 3
Vector.from_array (Array (a1, a2, a3))
```
instead, to accommodate the API changes. As of 025acaa676 all the known CI checks pass. Let's start the review.
# Important Notes
Matching in a `case` statement is currently done via `Vector_Data`. Use:
```
case x of
    Vector.Vector_Data -> True
```
until a better alternative is found.
Small clean-up PR.
- Aligns a few type signatures with their functions.
- Some formatting fixes.
- Remove a few unused types.
- Make error extension functions standard methods.
- Added `databases`, `database`, `set_database`.
- Added `schemas`, `schema`, `set_schema`.
- Added `table_types`.
- Added `tables`.
- Moved the vast majority of the connection work into a lower-level `JDBC_Connection` object.
- `Connection` represents the standard API for database connections and provides a base JDBC implementation.
- `SQLite_Connection` has the `Connection` API but with custom `databases` and `schemas` methods for SQLite.
- `Postgres_Connection` has the `Connection` API but with custom `set_database`, `databases`, `set_schema` and `schemas` methods for Postgres.
- Updated `Redshift` - no public API change.
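A minimal sketch of the resulting API surface (the method names come from the list above; the behaviour shown is illustrative):
```
connection.databases              # names of the available databases
connection.set_database "sales"   # switch database (supported on Postgres)
connection.schemas                # names of the available schemas
connection.tables                 # tables visible in the current context
```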
Implements https://www.pivotaltracker.com/story/show/182307143
# Important Notes
- Modified the standard library Java helper dependencies so that the `std-table` module depends on `std-base` as a provided dependency. This is allowed because `std-table` is used by the `Standard.Table` Enso module, which depends on `Standard.Base`, which in turn ensures that `std-base` is loaded onto the classpath; thus whenever `std-table` is loaded by `Standard.Table`, so is `std-base`. We can therefore rely on classes from `std-base` and its dependencies being _provided_ on the classpath, which lets us use utilities like `Text_Utils` in `std-table` as well, avoiding code duplication. An additional advantage is that we don't need to specify ICU4J as a separate dependency for `std-table`, since it is 'taken' from `std-base` already - so we avoid including it in our build packages twice.
This is a step towards the new language spec. The `type` keyword now means something. So we now have
```
type Maybe a
    Some (from_some : a)
    None
```
as a thing one may write. Also `Some` and `None` are not standalone types now – only `Maybe` is.
This is halfway to static methods – we still allow things like `Number + Number` for backwards compatibility. That will disappear in the next PR.
The concept of a type is now used for method dispatch – with great impact on interpreter code density.
Some APIs in the stdlib may require re-thinking. I take it this is going to be up to the libraries team – some choices are not as good in a semantically different language. I've strived to update the stdlib with minimal changes – to make sure it still works as it did.
It is worth mentioning the conflicting constructor name convention I've used: if `Foo` only has one constructor, previously named `Foo`, we now have:
```
type Foo
    Foo_Data f1 f2 f3
```
This is now necessary, because we still don't have proper statics. When they arrive, this can be changed (quite easily, with SED) to use them, and figure out the actual convention then.
I have also reworked large parts of the builtins system, because it did not work at all with the new concepts.
It also exposes the type variants in `SuggestionBuilder`, which was the original tiny PR this was based on.
PS I'm so sorry for the size of this. No idea how this could have been smaller. It's a breaking language change after all.
- Added `Zone`, `Date_Time` and `Time_Of_Day` to `Standard.Base`.
- Renamed `Zone` to `Time_Zone`.
- Added `century`.
- Added `is_leap_year`.
- Added `length_of_year`.
- Added `length_of_month`.
- Added `quarter`.
- Added `day_of_year`.
- Added `Day_Of_Week` type and `day_of_week` function.
- Updated `week_of_year` to support ISO.
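For example (a minimal sketch; the concrete date is illustrative):
```
from Standard.Base import Date

d = Date.new 2022 9 19
d.is_leap_year    # False
d.quarter         # 3
d.day_of_week     # Day_Of_Week.Monday
```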
# Important Notes
- Had to pass the locale to the formatter for the date/time tests to work on my PC.
- Changed default of `week_of_year` to use ISO.
Implements https://www.pivotaltracker.com/story/show/182879865
# Important Notes
Note that removing `set_at` still does not make our arrays fully immutable - `Array.copy` can still be used to mutate them.
* Builtin Date_Time, Time_Of_Day, Zone
Improved polyglot support for Date_Time (formerly Time), Time_Of_Day and
Zone. This follows the pattern introduced for Enso Date.
Minor caveat - in the tests for Date, I had to bend a lot for JS Date to pass.
This is because JS Date is not really only a Date, but also a Time and a
Timezone; previously we just didn't consider the latter.
Also, JS Date does not deal well with setting timezones, so the trick I
used is to first call a foreign function returning a polyglot JS Date,
which is converted to a ZonedDateTime, and only then set the correct
timezone. That way none of the existing tests had to be changed or
special-cased.
Additionally, JS deals with milliseconds rather than nanoseconds, so
there is a loss of precision, as noted in Time_Spec.
* Add tests for Java's LocalTime
* changelog
* Make date formatters in table happy
* PR review, add more tests for zone
* More tests and fixed a bug in column reader
The column reader didn't take the timezone into account, but that was a
mistake, since then it wouldn't map to Enso's Date_Time.
Added tests that check it now.
* remove redundant conversion
* Update distribution/lib/Standard/Base/0.0.0-dev/src/Data/Time.enso
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
* First round of addressing PR review
* don't leak java exceptions in Zone
* Move Date_Time to top-level module
* PR review
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
Co-authored-by: Jaroslav Tulach <jaroslav.tulach@enso.org>
Uses `Proxy_Polyglot_Array` as a proxy for polyglot arrays, thus unifying
the way the underlying array is accessed in `Vector`.
Used the opportunity to clean up builtin lookup, which now actually
respects what is defined in the body of the `@Builtin_Method` annotation.
Also discovered that polyglot null values (in JS, Python and R) were leaking into Enso.
Fixed that by doing an explicit translation to `Nothing`.
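A sketch of the kind of leak this fixes (Enso's foreign-function syntax; illustrative):
```
foreign js make_null = """
    return null;

# The JS null now arrives in Enso as Nothing:
check = (make_null == Nothing)   # True
```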
https://www.pivotaltracker.com/story/show/181123986
First of all, this PR demonstrates how to implement _lazy visualization_:
- one needs to write/enhance the Enso visualization libraries - this PR adds two optional parameters (`bounds` and `limit`) to the `process_to_json_text` function.
- `process_to_json_text` can be tested by the standard Enso test harness, which this PR also does.
- then one has to modify the JavaScript on the IDE side to construct the `setPreprocessor` expression using the optional parameters.
The idea of _scatter plot lazy visualization_ is to limit the number of points the IDE requests. Initially the limit is set to `limit=1024`. `Scatter_Plot.enso` then processes the data and selects/generates a subset of size `limit`. Right now it includes the `min` and `max` on both the `x` and `y` axes, plus randomly chosen points up to the `limit`.
![Zooming In](https://user-images.githubusercontent.com/26887752/185336126-f4fbd914-7fd8-4f0b-8377-178095401f46.png)
The D3 visualization widget is capable of _zooming in_. When that happens, the JavaScript widget composes a new expression with `bounds` set to the newly visible area. By calling `setPreprocessor`, the engine recomputes the visualization data, filters out any data outside of the `bounds`, and selects another `limit` points from the new data. The IDE visualization then updates itself to display this more detailed data. Users can zoom in to see the smallest details where the number of points gets below `limit`, or they can select _Fit all_ to see all the data without any `bounds`.
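A hedged sketch of the preprocessor expression the IDE composes (the parameter names `bounds` and `limit` come from this PR; the literal values and the exact shape of `bounds` are illustrative):
```
# Initial view: no bounds, at most 1024 points.
process_to_json_text data limit=1024

# After zooming in: filter to the visible area, then sample again.
process_to_json_text data bounds=[0, 0, 10, 10] limit=1024
```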
# Important Notes
Randomly selecting `limit` samples from the dataset may be misleading. Implementing _k-means clustering_ (where `k=limit`) would probably generate a more representative approximation.
- Removed various unnecessary `Standard.Base` imports still left behind.
- Added `Regex` to default `Standard.Base`.
- Removed aliasing from the examples as it is no longer needed (case coercion no longer occurs).
- Removed `import Standard.Table` from within the Table library (directly importing types).
- Reviewed what was in `Standard.Database` - a few tweaks and removals.
- Removed various un-needed aliasing following Hubert's import work.