Turns out that if you import using a two-part qualified name, we had special code that would a) add the `Main` submodule and b) add an explicit rename.
Part b) is problematic because sometimes we only want to import specific names.
E.g.,
```
from Bar.Foo import Bar, Baz
```
would be translated to
```
from Bar.Foo.Main as Foo import Bar, Baz
```
and it should only be translated to
```
from Bar.Foo.Main import Bar, Baz
```
This change detects this scenario and does not add renames in that case.
Fixes [183276486](https://www.pivotaltracker.com/story/show/183276486).
Changes following Marcin's work. The public API should be back to a shape very similar to before.
- Add an "interface" type: `Standard.Base.System.File_Format.File_Format`.
- All `File_Format` types now have a `can_read` method to decide if they can read a file.
- Move `Standard.Table.IO.File_Format.Text.Text_Data` to `Standard.Base.System.File_Format.Plain_Text_Format.Plain_Text`.
- Move `Standard.Table.IO.File_Format.Bytes` to `Standard.Base.System.File_Format.Bytes`.
- Move `Standard.Table.IO.File_Format.Infer` to `Standard.Base.System.File_Format.Infer`. **(doesn't belong here...)**
- Move `Standard.Table.IO.File_Format.Unsupported_File_Type` to `Standard.Base.Error.Common.Unsupported_File_Type`.
- Add `Infer`, `File_Format`, `Bytes`, `Plain_Text`, `Plain_Text_Format` to `Standard.Base` exports.
- Fold extension methods of `Standard.Base.Meta.Unresolved_Symbol` into type.
- Move `Standard.Table.IO.File_Format.Auto` to `Standard.Table.IO.Auto_Detect.Auto_Detect`:
  - Added a `types` Vector of all the built-in formats.
  - `Auto_Detect` asks each type whether it `can_read` a file (see the sketch after this list).
- Broke up and moved `Standard.Table.IO.Excel` into `Standard.Table.Excel`:
  - Moved `Standard.Table.IO.File_Format.Excel.Excel_Data` to `Standard.Table.Excel.Excel_Format.Excel_Format.Excel`.
  - Renamed `Sheet` to `Worksheet`.
  - Added internal `Reader` and `Writer` types providing the actual read and write methods.
- Created `Standard.Table.Delimited` with a similar structure to `Standard.Table.Excel`:
  - Moved `Standard.Table.IO.File_Format.Delimited.Delimited_Data` to `Standard.Table.Delimited.Delimited_Format.Delimited_Format.Delimited`.
  - Moved `Standard.Table.IO.Quote_Style` to `Standard.Table.Delimited.Quote_Style`.
  - Moved the internal `Reader` and `Writer` types here and renamed their methods to have unique names.
- Add `Aggregate_Column`, `Auto_Detect`, `Delimited`, `Delimited_Format`, `Excel`, `Excel_Format`, `Sheet_Names`, `Range_Names`, `Worksheet` and `Cell_Range` to `Standard.Table` exports.
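A minimal sketch of how the pieces above could fit together (only `can_read` and the `types` Vector come from this PR; the `detect_format` helper, its placement on `Auto_Detect`, and the use of `find` are illustrative assumptions):
```
## Hypothetical helper: pick the first built-in format whose `can_read`
## accepts the given file.
detect_format file =
    Auto_Detect.types.find (format-> format.can_read file)
```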
`Vector` type is now a builtin type. This requires a bunch of additional builtin methods for its creation:
- Use `Vector.from_array` to convert any array-like structure into a `Vector` [by copy](f628b28f5f)
- Use (already existing) `Vector.from_polyglot_array` to convert any array-like structure into a `Vector` **without** copying
- Use (already existing) `Vector.fill 1 item` to create a singleton `Vector`
Additionally, for pattern-matching purposes, we had to implement a `VectorBranchNode`. Use the following to match on `x` being an instance of the `Vector` type:
```
import Standard.Base.Data.Vector

size = case x of
    Vector.Vector -> x.length
    _ -> 0
```
Finally, the `VectorLiterals` pass that transforms `[1,2,3]` to (roughly)
```
a1 = 1
a2 = 2
a3 = 3
Vector (Array (a1, a2, a3))
```
had to be modified to generate
```
a1 = 1
a2 = 2
a3 = 3
Vector.from_array (Array (a1, a2, a3))
```
instead, to accommodate the API changes. As of 025acaa676 all the known CI checks pass. Let's start the review.
# Important Notes
Matching in a `case` statement is currently done via `Vector_Data`. Use:
```
case x of
    Vector.Vector_Data -> True
```
until a better alternative is found.
Small clean-up PR.
- Aligns a few type signatures with their functions.
- Fixes some formatting.
- Removes a few unused types.
- Makes error extension functions standard methods.
- Added `databases`, `database`, `set_database`.
- Added `schemas`, `schema`, `set_schema`.
- Added `table_types`.
- Added `tables` (a usage sketch follows this list).
- Moved the vast majority of the connection work into a lower level `JDBC_Connection` object.
- `Connection` represents the standard API for database connections and provides a base JDBC implementation.
- `SQLite_Connection` has the `Connection` API but with custom `databases` and `schemas` methods for SQLite.
- `Postgres_Connection` has the `Connection` API but with custom `set_database`, `databases`, `set_schema` and `schemas` methods for Postgres.
- Updated `Redshift` - no public API change.
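A hedged usage sketch of the reorganised connection surface (the connection options below are assumptions; only the method names come from the list above):
```
conn = Database.connect (SQLite In_Memory)    # connection details assumed
conn.databases    # names of databases visible to this connection
conn.schemas      # names of schemas
conn.tables       # tables in the current schema
```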
Implements https://www.pivotaltracker.com/story/show/182307143
# Important Notes
- Modified the standard library Java helpers' dependencies so that the `std-table` module depends on `std-base` as a provided dependency. This is allowed because `std-table` is used by the `Standard.Table` Enso module, which depends on `Standard.Base`, which ensures that `std-base` is loaded onto the classpath; thus whenever `std-table` is loaded by `Standard.Table`, so is `std-base`. We can therefore rely on classes from `std-base` and its dependencies being _provided_ on the classpath. Thanks to that we can use utilities like `Text_Utils` also in `std-table`, avoiding code duplication. An additional advantage is that we don't need to specify ICU4J as a separate dependency for `std-table`, since it is 'taken' from `std-base` already - so we avoid including it in our build packages twice.
This is a step towards the new language spec. The `type` keyword now means something. So we now have
```
type Maybe a
Some (from_some : a)
None
```
as a thing one may write. Also `Some` and `None` are not standalone types now – only `Maybe` is.
This is halfway to static methods – we still allow things like `Number + Number` for backwards compatibility. That will disappear in the next PR.
The concept of a type is now used for method dispatch – with great impact on interpreter code density.
Some APIs in the stdlib may require re-thinking. I take it this is going to be up to the libraries team – some choices are not as good in a semantically different language. I've strived to update the stdlib with minimal changes – to make sure it still works as it did.
It is worth mentioning the conflicting constructor name convention I've used: if `Foo` only has one constructor, previously named `Foo`, we now have:
```
type Foo
Foo_Data f1 f2 f3
```
This is now necessary because we still don't have proper statics. When they arrive, this can be changed (quite easily, with sed) to use them, and the actual convention can be figured out then.
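For illustration, matching on such a type then follows the same pattern as shown for `Vector` above (sketch only):
```
is_foo x = case x of
    Foo.Foo_Data _ _ _ -> True
    _ -> False
```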
I have also reworked large parts of the builtins system, because it did not work at all with the new concepts.
It also exposes the type variants in `SuggestionBuilder`, which was the original tiny PR this was based on.
PS I'm so sorry for the size of this. No idea how this could have been smaller. It's a breaking language change after all.
- Added `Zone`, `Date_Time` and `Time_Of_Day` to `Standard.Base`.
- Renamed `Zone` to `Time_Zone`.
- Added `century`.
- Added `is_leap_year`.
- Added `length_of_year`.
- Added `length_of_month`.
- Added `quarter`.
- Added `day_of_year`.
- Added `Day_Of_Week` type and `day_of_week` function.
- Updated `week_of_year` to support ISO (see the example after this list).
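A hedged example of the new accessors on a `Date` (the `Date.new` construction is an assumption; the method names come from the list above):
```
example =
    d = Date.new 2022 7 26
    d.quarter         # 3
    d.day_of_year     # 207
    d.is_leap_year    # False
    d.day_of_week     # a Day_Of_Week value
    d.week_of_year    # ISO week numbering by default
```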
# Important Notes
- Had to pass locale to formatter for date/time tests to work on my PC.
- Changed default of `week_of_year` to use ISO.
Implements https://www.pivotaltracker.com/story/show/182879865
# Important Notes
Note that removing `set_at` still does not make our arrays fully immutable - `Array.copy` can still be used to mutate them.
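For illustration, the kind of mutation that remains possible (the `Array.copy` argument order is assumed to mirror Java's `System.arraycopy`):
```
main =
    arr = [1, 2, 3].to_array
    # Overwrites all three elements of `arr` in place with zeros.
    Array.copy [0, 0, 0].to_array 0 arr 0 3
    arr.at 0    # now 0
```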
* Builtin Date_Time, Time_Of_Day, Zone
Improved polyglot support for Date_Time (formerly Time), Time_Of_Day and
Zone. This follows the pattern introduced for Enso Date.
Minor caveat - in the tests for Date, we had to bend a lot for JS Date to pass.
This is because a JS Date is not really only a Date, but also a Time and a
Timezone; previously we just didn't consider the latter.
Also, JS Date does not deal well with setting timezones, so the trick I
used is to first call a foreign function returning a polyglot JS Date,
convert it to a ZonedDateTime, and only then set the correct timezone.
That way none of the existing tests had to be changed or special-cased.
Additionally, JS deals with milliseconds rather than nanoseconds, so
there is a loss of precision, as noted in Time_Spec.
* Add tests for Java's LocalTime
* changelog
* Make date formatters in table happy
* PR review, add more tests for zone
* More tests and fixed a bug in the column reader
The column reader didn't take the timezone into account, which was a mistake
since then it wouldn't map to Enso's Date_Time.
Added tests that check it now.
* remove redundant conversion
* Update distribution/lib/Standard/Base/0.0.0-dev/src/Data/Time.enso
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
* First round of addressing PR review
* don't leak java exceptions in Zone
* Move Date_Time to top-level module
* PR review
Co-authored-by: Radosław Waśko <radoslaw.wasko@enso.org>
Co-authored-by: Jaroslav Tulach <jaroslav.tulach@enso.org>
Use `Proxy_Polyglot_Array` as a proxy for polyglot arrays, thus unifying
the way the underlying array is accessed in `Vector`.
Used the opportunity to clean up builtin lookup, which now actually
respects what is defined in the body of the `@Builtin_Method` annotation.
Also discovered that polyglot null values (in JS, Python and R) were leaking to Enso.
Fixed that by doing explicit translation to `Nothing`.
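A small sketch of the effect of this translation (illustrative only):
```
foreign js js_null = """
    return null

main =
    # The polyglot null no longer leaks through; it arrives as `Nothing`.
    IO.println (js_null == Nothing)
```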
https://www.pivotaltracker.com/story/show/181123986
First of all this PR demonstrates how to implement _lazy visualization_:
- one needs to write/enhance Enso visualization libraries - this PR adds two optional parameters (`bounds` and `limit`) to the `process_to_json_text` function.
- `process_to_json_text` can be tested by the standard Enso test harness, which this PR also does.
- then one has to modify the JavaScript on the IDE side to construct the `setPreprocessor` expression using the optional parameters.
The idea of _scatter plot lazy visualization_ is to limit the number of points the IDE requests. Initially the limit is set to `limit=1024`. `Scatter_Plot.enso` then processes the data and selects/generates the `limit` subset. Right now it includes `min` and `max` on both the `x` and `y` axes plus randomly chosen points up to the `limit`.
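A hedged sketch of what the preprocessor invocation might look like (only the `bounds` and `limit` parameter names come from this PR; the receiver, the data argument and the shape of `bounds` are assumptions):
```
json_text = Scatter_Plot.process_to_json_text data limit=1024 bounds=[0, 0, 100, 100]
```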
![Zooming In](https://user-images.githubusercontent.com/26887752/185336126-f4fbd914-7fd8-4f0b-8377-178095401f46.png)
The D3 visualization widget is capable of _zooming in_. When that happens, the JavaScript widget composes a new expression with `bounds` set to the newly visible area. By calling `setPreprocessor` the engine recomputes the visualization data, filters out any data outside of the `bounds`, and selects another `limit` points from the new data. The IDE visualization then updates itself to display this more detailed data. Users can zoom in to see the smallest detail, where the number of points gets below `limit`, or they can select _Fit all_ to see all the data without any `bounds`.
# Important Notes
Randomly selecting `limit` samples from the dataset may be misleading. Probably implementing _k-means clustering_ (where `k=limit`) would generate more representative approximation.
- Removed various unnecessary `Standard.Base` imports still left behind.
- Added `Regex` to default `Standard.Base`.
- Removed aliasing from the examples as it is no longer needed (case coercion no longer occurs).
- Remove `import Standard.Table` from within the Table library (directly importing types).
- Reviewed what was in `Standard.Database` - a few tweaks and removals.
- Removed various unneeded aliasing following Hubert's import work.
Show custom icons in the Component Browser for entries that have a non-empty `Icon` section in their docs, with the section's body containing the name of a predefined icon.
https://www.pivotaltracker.com/story/show/182584336
#### Visuals
A screenshot of a couple custom icons in the Component Browser:
<img width="346" alt="Screenshot 2022-07-27 at 15 55 33" src="https://user-images.githubusercontent.com/273837/181265249-d57f861f-8095-4933-9ef6-e62644e11da3.png">
# Important Notes
- The PR assigns icon names to four items in the standard library, but only three of them are shown in the Component Browser because of [a parsing bug in the Engine](https://www.pivotaltracker.com/story/show/182781673).
- Icon names are assigned only to four items in the standard library because only two currently predefined icons match entries in the currently defined Virtual Component Groups. Adjusting the definitions of icons and Virtual Component Groups is covered by [a different task](https://www.pivotaltracker.com/story/show/182584311).
- A bug in the documentation of the Enso protocol message `DocSection` is fixed. A `text` field in the `Tag` interface is renamed to `body` (this is the field name used in Engine).
This change allows for importing modules using a qualified name and deals with any conflicts on the way.
Given a module C defined at `A/B/C.enso` with
```
type C
    type C a
```
it is now possible to import it as
```
import project.A
...
x = A.B.C 10
```
Given a module located at `A/B/C/D.enso`, we will generate intermediate, synthetic modules that only import and export the successor module along the path.
For example, the contents of the synthetic module B will look like:
```
import <namespace>.<pkg-name>.A.B.C
export <namespace>.<pkg-name>.A.B.C
```
If module B is already defined by the developer, the compiler will _inject_ the above statements into the IR.
Also removed the last remnants of lowercase name resolution that managed to survive recent changes (`Meta.Enso_Project` would now be ambiguous with the `enso_project` method).
Finally, added a pass that detects shadowing of a synthetic module by a type defined along the path; we print a warning in such a situation.
Related to https://www.pivotaltracker.com/n/projects/2539304
# Important Notes
There was an additional request to fix the annoying problem with `from` imports that would always bring the module into scope. The changes in the stdlib demonstrate how it is now possible to avoid the workaround of
```
from X.Y.Z as Z_Module import A, B
```
(i.e. the `as Z_Module` part is almost always unnecessary).
This change modifies the current language by requiring an explicit `self` parameter declaration for methods. Methods without a `self` parameter in the first position should be treated as statics, although that is not yet part of this PR; for now we add an implicit `self` to all methods.
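A minimal sketch of the new form, using the pre-rework `type` syntax (names are illustrative):
```
type Counter
    type Counter value

    ## Instance method: `self` is now declared explicitly.
    increment self = Counter (self.value + 1)

    ## No `self` in the first position - intended to eventually be a static,
    ## though in this PR an implicit `self` is still added.
    make n = Counter n
```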
This obviously required updating the whole stdlib and its components, tests, etc., but the change is pretty straightforward in the diff.
Notice that this change **does not** change method dispatch, which was removed in the last changes.
This was done on purpose to simplify the implementation for now. We will likely still remove all
those implicit selfs to bring true statics.
Minor caveat - since `main` doesn't actually need `self`, it was already removed, which simplified a lot of code.
Adds least squares regression APIs. Covers the 4 basic trend line types from Excel (doesn't cover Polynomial or Moving Average).
Removes the old `Model` from the `Standard.Table`.
Significantly improves the polyglot Date support (as introduced by #3374). It enhances the `Date_Spec` to run it in four flavors:
- with Enso Date (as of now)
- with JavaScript Date
- with JavaScript Date wrapped in (JavaScript) array
- with Java LocalDate allocated directly
The code is then improved by necessary modifications to make the `Date_Spec` pass.
# Important Notes
James has requested in [#181755990](https://www.pivotaltracker.com/n/projects/2539304/stories/181755990) (_Review and improve InMemory Table support for Dates, Times, DateTimes, BigIntegers_) that the following program work:
```
foreign js dateArr = """
    return [1, new Date(), 7]

main =
    IO.println <| (dateArr.at 1).week_of_year
```
The program works with the changes provided here and prints `27` as of today.
@jdunkerley has provided tests for proper behavior of date in `Table` and `Column`. Those tests are working as of [f16d07e](f16d07e640). One just needs to accept `List<Value>` and then query `Value` for `isDate()` when needed.
The last round of changes is related to **exception handling**. 8b686b12bd makes sure `makePolyglotError` accepts only polyglot values. Plain Java exceptions are then wrapped into `WrapPlainException` with a `has_type` method - 60da5e70ed - and the remaining changes in the PR are only about getting all tests working in the new setup.
The support for `Time` isn't part of this PR yet.
Updates `write_bytes` API to be part of `Vector` and to conform to `write` APIs.
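A hedged usage sketch (the path handling below is an assumption; only the placement of `write_bytes` on `Vector` comes from this PR):
```
main =
    # Write the bytes for "Hi" to a file; an invalid byte value is expected
    # to leave the file untouched (see the note below).
    [72, 105].write_bytes (File.new "hi.bin")
```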
# Important Notes
Ensures the file isn't touched if the byte array is invalid.
Add some additional scenarios to Excel append tests:
- Non-A1 start
- Name duplication
- Hitting another range
# Important Notes
Also fixed a warning in the Image library.
Updated the SQLite, PostgreSQL and Redshift drivers.
# Important Notes
Updated the API for Redshift and proved able to connect without the ini file workaround.
More and more often I need a way to recover only a specific type of dataflow error (in a similar manner as with panics). So the API for `Error.catch` has been amended to more closely resemble `Panic.catch`, allowing handling of only specific types of dataflow errors while passing others through unchanged. The default is `Any`, meaning all errors are caught by default, and the behaviour of `x.catch` remains unchanged.
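A hedged sketch of the narrowed handling (the error type and its `message` field are illustrative):
```
example =
    failing = Error.throw (Illegal_Argument_Error "boom")
    # Handle only Illegal_Argument_Error; any other dataflow error would
    # pass through unchanged.
    failing.catch Illegal_Argument_Error (err-> "handled: " + err.message)
```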
Modified UppercaseNames so that methods without an explicit `here` now resolve to the current module.
`here` was also often used instead of `self`, which was allowed by the compiler.
Therefore the UppercaseNames pass is now GlobalNames and does some extra work -
it translates method calls without an explicit target into proper applications.
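An illustration of the resolution change (sketch):
```
foo = 42

main =
    # Previously this had to be written as `here.foo`; the pass now resolves
    # the bare name against the current module.
    IO.println foo
```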
# Important Notes
There was a long-standing bug in scopes usage when compiling standalone expressions.
This resulted in AliasAnalysis generating incorrect graphs and manifested itself only in unit tests
and when running `eval`, thus being a bit hard to locate.
See `runExpression` for details.
Additionally, method name resolution is now case-sensitive.
Obsolete passes like UndefinedVariables and ModuleThisToHere were removed. All tests have been adapted.
Adds support for appending to an existing Excel table.
# Important Notes
- Renamed `Column_Mapping` to `Column_Name_Mapping`
- Changed new type name to `Map_Column`
- Added last modified time and creation time to `File`.
Initial work restructuring the `Database.connect` API
- New SQLite API with support for InMemory.
- Updated PostgreSQL API with SSL and Client Certificate Support.
- Updated Redshift API.
# Important Notes
Follow up tasks:
- PostgreSQL SSL additional testing.
- Driver version updating.
- `.pgpass` support.
- Remove `from_xls` and `from_xlsx`.
- Add `headers` support to `File_Format.Excel`.
- Altered default read for Excel to be the first sheet.
- Altered behavior so that single cells grow down and right when reading a sheet.
- Altered `Excel_Range` so it knows whether the address is a single cell or a 1x1 range.
# Important Notes
- Renamed `Range` to `Cell_Range` to avoid name clash.