Get rid of free-floating atoms. Everything has a type now! (#3671)

This is a step towards the new language spec. The `type` keyword now means something. So we now have
```
type Maybe a
Some (from_some : a)
None
```
as a thing one may write. Also, `Some` and `None` are no longer standalone types – only `Maybe` is.
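
For illustration, consuming the new `Maybe` might look like this (a sketch, not code from this PR; the exact name-resolution rules are whatever the new spec prescribes, and within the defining module the constructors appear unqualified):
```
## A hypothetical helper: the constructors are reached through the
## type, rather than existing as standalone names.
describe maybe = case maybe of
    Maybe.Some x -> "Some: " + x.to_text
    Maybe.None -> "None"
```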
This is halfway to static methods – we still allow things like `Number + Number` for backwards compatibility. That will disappear in the next PR.

The concept of a type is now used for method dispatch – with a noticeable simplification of the interpreter code.
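
As a sketch of what this means in practice (`Shape` is a hypothetical type, not something in this PR): a single method defined on the type now serves all of its atoms, and dispatch happens through the type rather than per-atom.
```
type Shape
    Circle (radius : Number)
    Square (side : Number)

    ## One method on the type; both atoms share it.
    area self = case self of
        Shape.Circle r -> 3.141592 * r * r
        Shape.Square s -> s * s
```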

Some APIs in the stdlib may require re-thinking. I take it this is going to be up to the libraries team – some choices no longer fit as well now that the language semantics have changed. I've strived to update the stdlib with minimal changes, to make sure it still works as it did.

It is worth mentioning the convention I've used for conflicting constructor names: if `Foo` only has one constructor, previously named `Foo`, we now have:
```
type Foo
Foo_Data f1 f2 f3
```

This is necessary for now, because we still don't have proper statics. When they arrive, this can be changed (quite easily, with `sed`) to use them, and the actual convention can be figured out then.
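
Concretely, call sites change only in the constructor name (a sketch using the hypothetical fields from the example above):
```
# Before this PR: foo = Foo 1 2 3
foo = Foo.Foo_Data 1 2 3
# Field accessors are unaffected by the renaming.
foo.f1
```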

I have also reworked large parts of the builtins system, because it did not work at all with the new concepts.

It also exposes the type variants in `SuggestionBuilder`, which was the original tiny PR this was based on.

PS I'm so sorry for the size of this. No idea how this could have been smaller. It's a breaking language change after all.
Marcin Kostrzewa 2022-08-31 00:54:53 +02:00 committed by GitHub
parent f00da24d0a
commit 4fc6dcced0
513 changed files with 7081 additions and 7829 deletions

@ -338,6 +338,8 @@
- [Support pattern matching on constants][3641]
- [Builtin Date_Time, Time_Of_Day and Zone types for better polyglot
support][3658]
- [Implement new specification of data types: `type` has a runtime
representation, every atom has a type][3671]
[3227]: https://github.com/enso-org/enso/pull/3227
[3248]: https://github.com/enso-org/enso/pull/3248
@ -381,6 +383,7 @@
[3637]: https://github.com/enso-org/enso/pull/3637
[3641]: https://github.com/enso-org/enso/pull/3641
[3658]: https://github.com/enso-org/enso/pull/3658
[3671]: https://github.com/enso-org/enso/pull/3671
# Enso 2.0.0-alpha.18 (2021-10-12)

@ -2,16 +2,13 @@ from Standard.Base import all
from Standard.Base.Error.Common import dataflow_error_handler
# The type that subsumes all types.
## Any is the universal top-type, with all other types being subsumed by it.
If a value of type Any is expected in a given location, _any value_ can
be used in that position.
@Builtin_Type
type Any
## Any is the universal top-type, with all other types being subsumed by it.
If a value of type Any is expected in a given location, _any value_ can
be used in that position.
@Builtin_Type
type Any
## PRIVATE
Executes the provided handler on a dataflow error, or executes as
@ -78,19 +75,19 @@ type Any
== self that = if Meta.is_same_object self that then True else
self_meta = Meta.meta self
that_meta = Meta.meta that
case Cons self_meta that_meta of
Cons (Meta.Atom _) (Meta.Atom _) ->
case Pair_Data self_meta that_meta of
Pair_Data (Meta.Atom_Data _) (Meta.Atom_Data _) ->
c_1 = self_meta.constructor
c_2 = that_meta.constructor
if Meta.is_same_object c_1 c_2 . not then False else
f_1 = self_meta.fields
f_2 = that_meta.fields
0.up_to f_1.length . all i-> (f_1.at i) == (f_2.at i)
Cons (Meta.Error _) (Meta.Error _) -> self_meta.payload == that_meta.payload
Cons (Meta.Polyglot o_1) (Meta.Polyglot o_2) ->
Pair_Data (Meta.Error_Data _) (Meta.Error_Data _) -> self_meta.payload == that_meta.payload
Pair_Data (Meta.Polyglot_Data o_1) (Meta.Polyglot_Data o_2) ->
langs_match = (self_meta.get_language == Meta.Java) && (that_meta.get_language == Meta.Java)
if langs_match.not then False else o_1.equals o_2
Cons (Meta.Unresolved_Symbol _) (Meta.Unresolved_Symbol _) ->
Pair_Data (Meta.Unresolved_Symbol_Data _) (Meta.Unresolved_Symbol_Data _) ->
(self_meta.name == that_meta.name) && (self_meta.scope == that_meta.scope)
## Constructor comparison is covered by the identity equality.
Primitive objects should define their own equality.

@ -1,12 +1,8 @@
import Standard.Base.Data.Vector
## Utilities for working with primitive arrays.
## The type of primitive mutable arrays.
@Builtin_Type
type Array
## The type of primitive mutable arrays.
@Builtin_Type
type Array
## Gets the element at index in the array this.
Arguments:
@ -68,7 +64,7 @@ type Array
[1, 2, 3, 4].to_array.to_default_visualization_data
to_default_visualization_data : Text
to_default_visualization_data self =
Vector.Vector self . to_default_visualization_data
Vector.Vector_Data self . to_default_visualization_data
## Creates an array with length 0.

@ -1,12 +1,11 @@
## Booleans.
## A type with only two possible values.
The boolean type represents the two truth values of boolean logic. It is
primarily used for control-flow.
@Builtin_Type
type Boolean
## A type with only two possible values.
The boolean type represents the two truth values of boolean logic. It is
primarily used for control-flow.
@Builtin_Type
type Boolean
True
False
## Compares two booleans for equality.
@ -106,11 +105,3 @@ type Boolean
if (27 % 3) == 0 then IO.println "Fizz"
if_then : Any -> Any | Nothing
if_then self ~on_true = @Builtin_Method "Boolean.if_then"
## The constructor for the value True.
@Builtin_Type
type True
## The constructor for the value False.
@Builtin_Type
type False

@ -8,16 +8,16 @@ type Index_Sub_Range
Selects no items if `count` is less than or equal to 0.
Selects all items if `count` is greater than the length of the input.
type First (count : Integer = 1)
First (count : Integer = 1)
## Select the last `count` characters.
Selects no items if `count` is less than or equal to 0.
Selects all items if `count` is greater than the length of the input.
type Last (count : Integer = 1)
Last (count : Integer = 1)
## Select elements from the start while the predicate returns `True`.
type While (predicate : (Any -> Boolean))
While (predicate : (Any -> Boolean))
## Selects specific indexes (starting from 0) either as an `Integer` or a
`Range`.
@ -29,13 +29,13 @@ type Index_Sub_Range
Only ranges with positive step and positive indices are supported.
Individual integer indices can be negative which allows for indexing
from the end of the collection.
type By_Index (indexes : (Integer | Range | Vector (Integer | Range)) = [0])
By_Index (indexes : (Integer | Range | Vector (Integer | Range)) = [0])
## Gets a random sample of entries, without repetitions.
If `count` is greater than the length of the input, a random permutation
of all elements from the input is selected.
type Sample (count:Integer) (seed:Integer=Random.get_default_seed)
Sample (count:Integer) (seed:Integer=Random.get_default_seed)
## Gets every Nth entry.
@ -43,7 +43,7 @@ type Index_Sub_Range
- step: The step between consecutive entries that are included.
- first: The first entry to include. If it is outside of bounds of the
input, an error is raised.
type Every (step:Integer) (first:Integer=0)
Every (step:Integer) (first:Integer=0)
## PRIVATE
Resolves a vector of ranges or indices into a vector of ranges that fit
@ -55,15 +55,15 @@ resolve_ranges ranges length =
trim descriptor = case descriptor of
Integer ->
actual_index = if descriptor < 0 then length + descriptor else descriptor
if (actual_index < 0) || (actual_index >= length) then Panic.throw (Index_Out_Of_Bounds_Error descriptor length) else
if (actual_index < 0) || (actual_index >= length) then Panic.throw (Index_Out_Of_Bounds_Error_Data descriptor length) else
actual_index
Range start end step ->
if step <= 0 then Panic.throw (Illegal_Argument_Error "Range step must be positive.") else
if (start < 0) || (end < 0) then Panic.throw (Illegal_Argument_Error "Range start and end must not be negative.") else
if start >= length then Panic.throw (Index_Out_Of_Bounds_Error start length) else
Range_Data start end step ->
if step <= 0 then Panic.throw (Illegal_Argument_Error_Data "Range step must be positive.") else
if (start < 0) || (end < 0) then Panic.throw (Illegal_Argument_Error_Data "Range start and end must not be negative.") else
if start >= length then Panic.throw (Index_Out_Of_Bounds_Error_Data start length) else
actual_end = Math.min end length
if actual_end < start then Range start start step else
Range start actual_end step
if actual_end < start then Range_Data start start step else
Range_Data start actual_end step
ranges.map trim
## PRIVATE
@ -73,11 +73,11 @@ resolve_ranges ranges length =
single-element ranges.
normalize_ranges descriptors =
normalize descriptor = case descriptor of
Integer -> [Range descriptor descriptor+1]
Range _ _ _ ->
Integer -> [Range_Data descriptor descriptor+1]
Range_Data _ _ _ ->
if descriptor.step == 1 then [descriptor] else
descriptor.to_vector.map ix->
Range ix ix+1
Range_Data ix ix+1
descriptors.flat_map normalize
## PRIVATE
@ -96,9 +96,9 @@ normalize_ranges descriptors =
invert_range_selection : Vector Range -> Integer -> Boolean -> Vector Range
invert_range_selection ranges length needs_sorting =
sorted = if needs_sorting then sort_and_merge_ranges ranges else ranges
ranges_with_sentinels = [Range 0 0] + sorted + [Range length length]
ranges_with_sentinels = [Range_Data 0 0] + sorted + [Range_Data length length]
ranges_with_sentinels.zip ranges_with_sentinels.tail prev-> next->
Range prev.end next.start
Range_Data prev.end next.start
## PRIVATE
Returns a new sorted list of ranges where intersecting ranges have been
@ -113,7 +113,7 @@ sort_and_merge_ranges ranges =
sorted.tail.each range->
current = current_ref.get
case range.start <= current.end of
True -> current_ref.put (Range current.start (Math.max current.end range.end))
True -> current_ref.put (Range_Data current.start (Math.max current.end range.end))
False ->
builder.append current
current_ref.put range
@ -147,16 +147,16 @@ sort_and_merge_ranges ranges =
- range: The `Index_Sub_Range` to take from the collection.
take_helper : Integer -> (Integer -> Any) -> (Integer -> Integer -> Any) -> (Vector (Integer | Range) -> Vector Any) -> Index_Sub_Range -> Any
take_helper length at single_slice slice_ranges index_sub_range = case index_sub_range of
Range _ _ _ -> take_helper length at single_slice slice_ranges (By_Index index_sub_range)
Range_Data _ _ _ -> take_helper length at single_slice slice_ranges (By_Index index_sub_range)
First count -> single_slice 0 (Math.min length count)
Last count -> single_slice length-count length
While predicate ->
end = 0.up_to length . find i-> (predicate (at i)).not
true_end = if end.is_nothing then length else end
single_slice 0 true_end
By_Index one_or_many_descriptors -> Panic.recover [Index_Out_Of_Bounds_Error, Illegal_Argument_Error] <|
By_Index one_or_many_descriptors -> Panic.recover [Index_Out_Of_Bounds_Error_Data, Illegal_Argument_Error_Data] <|
indices = case one_or_many_descriptors of
Vector.Vector _ -> one_or_many_descriptors
Vector.Vector_Data _ -> one_or_many_descriptors
_ -> [one_or_many_descriptors]
trimmed = resolve_ranges indices length
slice_ranges trimmed
@ -165,9 +165,9 @@ take_helper length at single_slice slice_ranges index_sub_range = case index_sub
indices_to_take = Random.random_indices length count rng
take_helper length at single_slice slice_ranges (By_Index indices_to_take)
Every step start ->
if step <= 0 then Error.throw (Illegal_Argument_Error "Step within Every must be positive.") else
if step <= 0 then Error.throw (Illegal_Argument_Error_Data "Step within Every must be positive.") else
if start >= length then single_slice 0 0 else
range = Range start length step
range = Range_Data start length step
take_helper length at single_slice slice_ranges (By_Index range)
## PRIVATE
@ -196,16 +196,16 @@ take_helper length at single_slice slice_ranges index_sub_range = case index_sub
- range: The `Index_Sub_Range` to drop from the collection.
drop_helper : Integer -> (Integer -> Any) -> (Integer -> Integer -> Any) -> (Vector (Integer | Range) -> Vector Any) -> Index_Sub_Range -> Any
drop_helper length at single_slice slice_ranges index_sub_range = case index_sub_range of
Range _ _ _ -> drop_helper length at single_slice slice_ranges (By_Index index_sub_range)
Range_Data _ _ _ -> drop_helper length at single_slice slice_ranges (By_Index index_sub_range)
First count -> single_slice count length
Last count -> single_slice 0 length-count
While predicate ->
end = 0.up_to length . find i-> (predicate (at i)).not
true_end = if end.is_nothing then length else end
single_slice true_end length
By_Index one_or_many_descriptors -> Panic.recover [Index_Out_Of_Bounds_Error, Illegal_Argument_Error] <|
By_Index one_or_many_descriptors -> Panic.recover [Index_Out_Of_Bounds_Error_Data, Illegal_Argument_Error_Data] <|
indices = case one_or_many_descriptors of
Vector.Vector _ -> one_or_many_descriptors
Vector.Vector_Data _ -> one_or_many_descriptors
_ -> [one_or_many_descriptors]
trimmed = resolve_ranges indices length
normalized = normalize_ranges trimmed
@ -216,7 +216,7 @@ drop_helper length at single_slice slice_ranges index_sub_range = case index_sub
indices_to_drop = Random.random_indices length count rng
drop_helper length at single_slice slice_ranges (By_Index indices_to_drop)
Every step start ->
if step <= 0 then Error.throw (Illegal_Argument_Error "Step within Every must be positive.") else
if step <= 0 then Error.throw (Illegal_Argument_Error_Data "Step within Every must be positive.") else
if start >= length then single_slice 0 length else
range = Range start length step
range = Range_Data start length step
drop_helper length at single_slice slice_ranges (By_Index range)

@ -13,7 +13,7 @@ export Standard.Base.Data.Interval.Bound
example_exclusive = Interval.exclusive 0.1 0.5
exclusive : Number -> Number -> Interval
exclusive start end = Interval (Bound.Exclusive start) (Bound.Exclusive end)
exclusive start end = Interval_Data (Bound.Exclusive start) (Bound.Exclusive end)
## Creates an interval that excludes its lower bound.
@ -24,7 +24,7 @@ exclusive start end = Interval (Bound.Exclusive start) (Bound.Exclusive end)
example_start_exclusive = Interval.start_exclusive 1 5
start_exclusive : Number -> Number -> Interval
start_exclusive start end = Interval (Bound.Exclusive start) (Bound.Inclusive end)
start_exclusive start end = Interval_Data (Bound.Exclusive start) (Bound.Inclusive end)
## Creates an interval that excludes its upper bound.
@ -35,7 +35,7 @@ start_exclusive start end = Interval (Bound.Exclusive start) (Bound.Inclusive en
example_end_exclusive = Interval.end_exclusive 1 5
end_exclusive : Number -> Number -> Interval
end_exclusive start end = Interval (Bound.Inclusive start) (Bound.Exclusive end)
end_exclusive start end = Interval_Data (Bound.Inclusive start) (Bound.Exclusive end)
## Creates an interval that includes both of its bounds.
@ -46,7 +46,7 @@ end_exclusive start end = Interval (Bound.Inclusive start) (Bound.Exclusive end)
example_inclusive = Interval.inclusive 0 0
inclusive : Number -> Number -> Interval
inclusive start end = Interval (Bound.Inclusive start) (Bound.Inclusive end)
inclusive start end = Interval_Data (Bound.Inclusive start) (Bound.Inclusive end)
## A type representing an interval over real numbers.
type Interval
@ -58,7 +58,7 @@ type Interval
Arguments:
- start: The start of the interval.
- end: The end of the interval.
type Interval (start : Number) (end : Number)
Interval_Data (start : Bound.Bound) (end : Bound.Bound)
## Checks if the interval contains `that`.

@ -16,7 +16,7 @@ type Bound
import Standard.Base.Data.Interval.Bound
example_bound_inclusive = Bound.Inclusive 2
type Inclusive n
Inclusive n
## A bound that excludes the value `n`.
@ -29,4 +29,4 @@ type Bound
import Standard.Base.Data.Interval.Bound
example_bound_exclusive = Bound.Exclusive 2.
type Exclusive n
Exclusive n

@ -39,7 +39,7 @@ from_pairs contents =
parse : Text -> Json ! Parse_Error
parse json_text =
Panic.catch_java Any (Internal.parse_helper json_text) java_exception->
Error.throw (Parse_Error java_exception.getMessage)
Error.throw (Parse_Error_Data java_exception.getMessage)
## Represents a JSON structure.
type Json
@ -48,34 +48,34 @@ type Json
Arguments:
- fields: The fields of the JSON object.
type Object fields
Object fields
## A representation of a JSON array.
Arguments:
- items: The items in the JSON array.
type Array items
Array items
## A representation of a JSON string.
Arguments:
- value: The text contained in the JSON string.
type String value
String value
## A representation of a JSON number.
Arguments:
- value: The number contained in the JSON number.
type Number value
Number value
## A representation of a JSON boolean.
Arguments:
- value: The boolean contained in a JSON boolean.
type Boolean value
Boolean value
## A representation of a JSON null.
type Null
Null
## Marshalls this JSON into an arbitrary value described by
`type_descriptor`.
@ -133,19 +133,41 @@ type Json
example_unwrap = Json.Number 3 . unwrap
unwrap : Any
unwrap self = case self of
Json.Array its -> its.map .unwrap
Json.Boolean b -> b
Json.Number n -> n
Json.String t -> t
Json.Null -> Nothing
Json.Object f -> f.map .unwrap
Array its -> its.map .unwrap
Boolean b -> b
Number n -> n
String t -> t
Null -> Nothing
Object f -> f.map .unwrap
## Gets the value associated with the given key in this object.
Arguments:
- field: The name of the field from which to get the value.
Throws `Nothing` if the associated key is not defined.
> Example
Get the "title" field from this JSON representing a book.
import Standard.Base.Data.Json
import Standard.Examples
example_get = Examples.json_object.get "title"
get : Text -> Json ! No_Such_Field_Error
get self field = case self of
Object _ -> self.fields.get field . map_error case _ of
Map.No_Value_For_Key_Error_Data _ -> No_Such_Field_Error_Data field
x -> x
_ -> Error.throw (Illegal_Argument_Error_Data "Json.get: self must be an Object")
## UNSTABLE
A failure indicating malformed text input into the JSON parser.
Check the `message` field for detailed information on the specific failure.
type Parse_Error message
type Parse_Error
Parse_Error_Data message
## UNSTABLE
@ -154,29 +176,11 @@ Parse_Error.to_display_text : Text
Parse_Error.to_display_text self =
"Parse error in parsing JSON: " + self.message.to_text + "."
## Gets the value associated with the given key in this object.
Arguments:
- field: The name of the field from which to get the value.
Throws `Nothing` if the associated key is not defined.
> Example
Get the "title" field from this JSON representing a book.
import Standard.Base.Data.Json
import Standard.Examples
example_get = Examples.json_object.get "title"
Object.get : Text -> Json ! No_Such_Field_Error
Object.get self field = self.fields.get field . map_error case _ of
Map.No_Value_For_Key_Error _ -> No_Such_Field_Error field
x -> x
## UNSTABLE
An error indicating that there is no such field in the JSON object.
type No_Such_Field_Error field_name
type No_Such_Field_Error
No_Such_Field_Error_Data field_name
## UNSTABLE
@ -201,7 +205,7 @@ type Marshalling_Error
- format: The type format that did not match.
This can occur e.g. when trying to reinterpret a number as a `Text`, etc.
type Type_Mismatch_Error json format
Type_Mismatch_Error json format
## UNSTABLE
@ -215,7 +219,7 @@ type Marshalling_Error
This can occur when trying to reinterpret a JSON object into an atom,
when the JSON does not contain all the fields required by the atom.
type Missing_Field_Error json field format
Missing_Field_Error json field format
## UNSTABLE
@ -243,21 +247,21 @@ type Marshalling_Error
Any.to_json self =
m = Meta.meta self
case m of
Meta.Atom _ ->
cons = Meta.Constructor m.constructor
Meta.Atom_Data _ ->
cons = Meta.Constructor_Data m.constructor
fs = m.fields
fnames = cons.fields
json_fs = 0.up_to fnames.length . fold Map.empty m-> i->
m.insert (fnames.at i) (fs.at i . to_json)
with_tp = json_fs . insert "type" (String cons.name)
Object with_tp
Meta.Constructor _ ->
Meta.Constructor_Data _ ->
Object (Map.empty . insert "type" (String m.name))
## The following two cases cannot be handled generically and should
instead define their own `to_json` implementations.
Meta.Polyglot _ -> Null
Meta.Primitive _ -> Null
Meta.Polyglot_Data _ -> Null
Meta.Primitive_Data _ -> Null
## Method used by object builders to convert a value into a valid JSON key.

@ -23,7 +23,7 @@ type Consumer
- value: The value being consumed.
Conforms to the `org.enso.base.json.Parser.JsonConsumer` Java interface.
type Consumer child_consumer value
Consumer_Data child_consumer value
## PRIVATE
@ -144,7 +144,7 @@ type Array_Consumer
Arguments:
- builder: The builder for array values.
- parent: The parent consumer.
type Array_Consumer builder parent
Array_Consumer_Data builder parent
## PRIVATE
@ -175,7 +175,7 @@ type Object_Consumer
- last_key: The last object key that has been seen.
- map: The map representing the object.
- parent: The parent consumer.
type Object_Consumer last_key map parent
Object_Consumer_Data last_key map parent
## PRIVATE
@ -217,7 +217,7 @@ mk_object_consumer : Any -> Object_Consumer
mk_object_consumer parent =
k = Ref.new ""
m = Ref.new Map.empty
Object_Consumer k m parent
Object_Consumer_Data k m parent
## PRIVATE
@ -228,7 +228,7 @@ mk_object_consumer parent =
mk_array_consumer : Any -> Array_Consumer
mk_array_consumer parent =
bldr = Vector.new_builder
Array_Consumer bldr parent
Array_Consumer_Data bldr parent
## PRIVATE
@ -237,7 +237,7 @@ mk_consumer : Consumer
mk_consumer =
child = Ref.new Nil
val = Ref.new Nothing
Consumer child val
Consumer_Data child val
## PRIVATE
@ -287,7 +287,7 @@ render_helper builder json = case json of
See `Json.into` for semantics documentation.
into_helper : Any -> Json -> Any
into_helper fmt json = case fmt of
Base.Vector.Vector field -> case json of
Base.Vector.Vector_Data field -> case json of
Array items -> items.map (into_helper field)
_ -> Panic.throw (Type_Mismatch_Error json fmt)
Base.Boolean -> case json of
@ -302,9 +302,9 @@ into_helper fmt json = case fmt of
_ ->
m = Meta.meta fmt
case m of
Meta.Atom _ -> case json of
Meta.Atom_Data _ -> case json of
Object json_fields ->
cons = Meta.Constructor m.constructor
cons = Meta.Constructor_Data m.constructor
fnames = cons.fields
ffmts = m.fields
field_values = fnames.zip ffmts n-> inner_fmt->

@ -18,14 +18,14 @@ import Standard.Base.Runtime.Unsafe
type List
## The type that indicates the end of a cons list.
type Nil
Nil
## A cons cell for a cons list.
Arguments:
- x: The element at this position in the list.
- xs: The rest of the list.
type Cons x xs
Cons x xs
## Computes the number of elements in the list.

@ -303,7 +303,7 @@ type Locale
Arguments:
- java_locale: The Java locale representation used internally.
type Locale java_locale
Locale_Data java_locale
## Gets the language from the locale.
@ -417,7 +417,7 @@ type Locale
## Compares two locales for equality.
== : Any -> Boolean
== self other = case other of
Locale other_java_locale -> self.java_locale.equals other_java_locale
Locale_Data other_java_locale -> self.java_locale.equals other_java_locale
_ -> False
## PRIVATE
@ -427,7 +427,7 @@ type Locale
Arguments:
- java: The java locale value.
from_java : JavaLocale -> Locale
from_java java = Locale java
from_java java = Locale_Data java
## PRIVATE
javaLocaleBuilder = JavaLocale.Builder

@ -40,7 +40,7 @@ singleton key value = Bin 1 key value Tip Tip
example_from_vector = Map.from_vector [[1, 2], [3, 4]]
from_vector : Vector.Vector Any -> Map
from_vector vec = vec.fold Map.empty (m -> el -> m.insert (el.at 0) (el.at 1))
from_vector vec = vec.fold empty (m -> el -> m.insert (el.at 0) (el.at 1))
## A key-value store. This type assumes all keys are pairwise comparable,
using the `<`, `>` and `==` operators.
@ -49,7 +49,7 @@ type Map
## PRIVATE
A key-value store. This type assumes all keys are pairwise comparable,
using the `<`, `>` and `==` operators.
type Tip
Tip
## PRIVATE
A key-value store. This type assumes all keys are pairwise comparable,
@ -61,7 +61,7 @@ type Map
- value: The value stored at this node.
- left: The left subtree.
- right: The right subtree.
type Bin s key value left right
Bin s key value left right
## Checks if the map is empty.
@ -193,7 +193,7 @@ type Map
get : Any -> Any ! No_Value_For_Key_Error
get self key =
go map = case map of
Tip -> Error.throw (No_Value_For_Key_Error key)
Tip -> Error.throw (No_Value_For_Key_Error_Data key)
Bin _ k v l r ->
if k == key then v else
if k > key then @Tail_Call go l else @Tail_Call go r
@ -217,7 +217,7 @@ type Map
example_get_or_else = Examples.map.get_or_else 2 "zero"
get_or_else : Any -> Any -> Any
get_or_else self key ~other =
self.get key . catch No_Value_For_Key_Error (_ -> other)
self.get key . catch No_Value_For_Key_Error_Data (_ -> other)
## Transforms the map's keys and values to create a new map.
@ -445,7 +445,7 @@ type Map
first : Pair
first self =
first p m = case m of
Bin _ k v l _ -> @Tail_Call first (Pair k v) l
Bin _ k v l _ -> @Tail_Call first (Pair_Data k v) l
Tip -> p
first Nothing self
@ -454,7 +454,7 @@ type Map
last : Pair
last self =
last p m = case m of
Bin _ k v _ r -> @Tail_Call last (Pair k v) r
Bin _ k v _ r -> @Tail_Call last (Pair_Data k v) r
Tip -> p
last Nothing self
@ -464,7 +464,8 @@ type Map
Arguments:
- key: The key that was asked for.
type No_Value_For_Key_Error key
type No_Value_For_Key_Error
No_Value_For_Key_Error_Data key
## UNSTABLE

@ -4,7 +4,7 @@ from Standard.Base import all
type Maybe
## No contained value.
Nothing
None
## A value.
@ -17,13 +17,13 @@ type Maybe
import Standard.Base.Data.Maybe
example_some = Maybe.Some "yes!"
type Some value
Some value
## Applies the provided function to the contained value if it exists,
otherwise returning the provided default value.
Arguments:
- default: The value to return if `self` is Nothing. This value is lazy
- default: The value to return if `self` is None. This value is lazy
and hence will not execute any provided computation unless it is used.
- function: The function to execute on the value inside the `Some`, if it
is a `Some`.
@ -36,16 +36,21 @@ type Maybe
example_maybe = Maybe.Some 2 . maybe 0 *2
maybe : Any -> (Any -> Any) -> Any
maybe self ~default function = case self of
Nothing -> default
None -> default
Some val -> function val
## Check if the maybe value is `Some`.
> Example
Check if `Nothing` is `Some`.
Check if `None` is `Some`.
import Standard.Base.Data.Maybe
example_is_some = Maybe.Some "yes!" . is_some
is_some : Boolean
is_some self = self.is_nothing.not
is_some self = case self of
None -> False
Some _ -> True
is_none : Boolean
is_none self = self.is_some.not

@ -12,12 +12,6 @@ polyglot java import java.util.Random
To be a valid generator, it must provide the `step` method as described
below.
type Generator
## PRIVATE
The basic generator type.
type Generator
## PRIVATE
Step the generator to produce the next value.
@ -38,14 +32,6 @@ type Generator
It produces what is commonly termed "white" noise, where any value in the
range has an equal chance of occurring.
type Deterministic_Random
## A deterministic random noise generator that performs a perturbation of the
input
It produces what is commonly termed as "white" noise, where any value in
the range has an equal chance of occurring.
type Deterministic_Random
## Step the generator to produce the next value.
Arguments:

@ -13,15 +13,8 @@ from Standard.Base.Error.Common import Panic,Error,Illegal_Argument_Error
If a Number is expected, then the program can provide either a Decimal or
an Integer in its place.
@Builtin_Type
type Number
## The root type of the Enso numeric hierarchy.
If a Number is expected, then the program can provide either a Decimal or
an Integer in its place.
@Builtin_Type
type Number
## ALIAS Add
Adds two arbitrary numbers.
@ -364,17 +357,14 @@ type Number
if self < 0 then -1 else 0
## Decimal numbers.
## Decimal is the type of decimal numbers in Enso.
? Representation
Enso's decimal numbers are represented as IEEE754 double-precision
floating point numbers.
@Builtin_Type
type Decimal
## Decimal is the type of decimal numbers in Enso.
? Representation
Enso's decimal numbers are represented as IEEE754 double-precision
floating point numbers.
@Builtin_Type
type Decimal
## Adds a decimal and an arbitrary number.
Arguments:
@ -607,27 +597,24 @@ type Decimal
parse : Text -> Decimal ! Parse_Error
parse text =
Panic.catch NumberFormatException (Double.parseDouble text) _->
Error.throw (Parse_Error text)
Error.throw (Parse_Error_Data text)
## Integral numbers.
## Integer is the type of integral numbers in Enso. They are of unbounded
size and can grow as large as necessary.
? Representation
For certain operations (such as bitwise logic), the underlying
representation of the number matters. Enso Integers are represented as
signed 2's complement numbers.
? Performance
Integers that fit into 64 bits are represented in memory as 64 bits.
This means that operations on them achieve excellent performance. Once
the integer grows beyond being able to fit in 64 bits, performance will
degrade.
@Builtin_Type
type Integer
## Integer is the type of integral numbers in Enso. They are of unbounded
size and can grow as large as necessary.
? Representation
For certain operations (such as bitwise logic), the underlying
representation of the number matters. Enso Integers are represented as
signed 2's complement numbers.
? Performance
Integers that fit into 64 bits are represented in memory as 64 bits.
This means that operations on them achieve excellent performance. Once
the integer grows beyond being able to fit in 64 bits, performance will
degrade.
@Builtin_Type
type Integer
## Adds an integer and an arbitrary number.
Arguments:
@ -988,12 +975,13 @@ type Integer
parse : Text -> Text -> Integer ! Parse_Error
parse text (radix=10) =
Panic.catch NumberFormatException (Long.parseLong text radix) _->
Error.throw (Parse_Error text)
Error.throw (Parse_Error_Data text)
## UNSTABLE
A syntax error when parsing a double.
type Parse_Error text
type Parse_Error
Parse_Error_Data text
## UNSTABLE

@ -18,19 +18,17 @@ from_sign sign = if sign == 0 then Equal else
The result should be returned in terms of how `self` orders in comparison to
`that`. So, if `self` is greater than `that`, you should return `Greater.`
@Builtin_Type
type Ordering
## A representation that the first value orders as less than the second.
@Builtin_Type
type Less
Less
## A representation that the first value orders as equal to the second.
@Builtin_Type
type Equal
Equal
## A representation that the first value orders as greater than the second.
@Builtin_Type
type Greater
Greater
## Converts the ordering to the signed notion of ordering based on integers.

@ -30,5 +30,5 @@ for_text_ordering text_ordering =
txt_cmp a b = Natural_Order.compare a b text_ordering.case_sensitive . to_sign
new.withCustomTextComparator txt_cmp
False -> case text_ordering.case_sensitive of
Case_Insensitive locale -> new.withCaseInsensitivity locale.java_locale
Case_Insensitive_Data locale -> new.withCaseInsensitivity locale.java_locale
_ -> new

@ -17,7 +17,7 @@ polyglot java import com.ibm.icu.text.BreakIterator
compare : Text -> Text -> (True|Case_Insensitive) Ordering
compare text1 text2 case_sensitive=True =
compare_text = case case_sensitive of
Case_Insensitive locale -> a -> b -> a.compare_to_ignore_case b locale
Case_Insensitive_Data locale -> a -> b -> a.compare_to_ignore_case b locale
_ -> _.compare_to _
iter1 = BreakIterator.getCharacterInstance
@ -38,10 +38,10 @@ compare text1 text2 case_sensitive=True =
## Find end of number and return pair of index and flag if reached end
loop text next iter =
new_next = iter.next
if (new_next == -1) then (Pair next True) else
if (new_next == -1) then (Pair_Data next True) else
substring = Text_Utils.substring text next new_next
character = Text_Utils.get_chars substring . at 0
if (is_digit character).not then (Pair next False) else
if (is_digit character).not then (Pair_Data next False) else
@Tail_Call loop text new_next iter
pair = loop text next iter
@ -60,18 +60,18 @@ compare text1 text2 case_sensitive=True =
prev2 - index to start of current character in text2.
next2 - index to start of next character (or -1 if finished) in text2.
order prev1 next1 prev2 next2 =
case (Pair (next1 == -1) (next2 == -1)) of
Pair True True -> Ordering.Equal
Pair True False -> Ordering.Less
Pair False True -> Ordering.Greater
Pair False False ->
case (Pair_Data (next1 == -1) (next2 == -1)) of
Pair_Data True True -> Ordering.Equal
Pair_Data True False -> Ordering.Less
Pair_Data False True -> Ordering.Greater
Pair_Data False False ->
substring1 = Text_Utils.substring text1 prev1 next1
first_char_1 = Text_Utils.get_chars substring1 . at 0
substring2 = Text_Utils.substring text2 prev2 next2
first_char_2 = Text_Utils.get_chars substring2 . at 0
tmp = Pair (is_digit first_char_1) (is_digit first_char_2)
tmp = Pair_Data (is_digit first_char_1) (is_digit first_char_2)
## ToDo: Move to case on second block
Appears to be an issue using a nested case statement on a pair
https://www.pivotaltracker.com/story/show/181280737
View File
@ -9,7 +9,7 @@ type Sort_Direction
Create an ascending order.
Sort_Direction.Ascending
type Ascending
Ascending
## Elements should be sorted in descending order.
@ -17,7 +17,7 @@ type Sort_Direction
Create a descending order.
Sort_Direction.Descending
type Descending
Descending
## Convert into the sign of the direction
to_sign : Integer
View File
@ -10,7 +10,7 @@ type Pair
Arguments:
- first: The first element.
- second: The second element.
type Pair first second
Pair_Data first second
## UNSTABLE
@ -22,4 +22,4 @@ type Pair
(Pair 1 2).map (+1) == (Pair 2 3)
map : (Any -> Any) -> Pair
map self fun =
Pair (fun self.first) (fun self.second)
Pair_Data (fun self.first) (fun self.second)
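Since `Pair` has only one constructor, it follows the same `_Data` convention; construction and pattern matching both move to `Pair_Data`, e.g. (illustrative sketch, not from the diff):

```
pair = Pair_Data 1 2
case pair.map (+1) of
    Pair_Data first second -> first + second
```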
View File
@ -11,7 +11,7 @@ type Range
- end: The right boundary of the range. Its value is excluded.
- step: The step between consecutive elements of the range. It must be
non-zero. Defaults to 1.
type Range (start : Integer) (end : Integer) (step : Integer = 1)
Range_Data (start : Integer) (end : Integer) (step : Integer = 1)
## Creates a copy of this range with a changed step.
@ -28,10 +28,10 @@ type Range
with_step self new_step = case new_step of
Integer ->
if new_step == 0 then throw_zero_step_error else
if new_step < 0 then Error.throw (Illegal_Argument_Error "The step should be positive. A decreasing sequence will remain decreasing after updating it with positive step, as this operation only sets the magnitude without changing the sign.") else
Range self.start self.end self.step.signum*new_step
if new_step < 0 then Error.throw (Illegal_Argument_Error_Data "The step should be positive. A decreasing sequence will remain decreasing after updating it with positive step, as this operation only sets the magnitude without changing the sign.") else
Range_Data self.start self.end self.step.signum*new_step
_ ->
Error.throw (Illegal_Argument_Error "Range step should be an integer.")
Error.throw (Illegal_Argument_Error_Data "Range step should be an integer.")
## Returns the last element that is included within the range or `Nothing`
if the range is empty.
@ -270,7 +270,7 @@ type Range
`Range 0 10 . contains 3.0 == False` and get a type error for
decimals instead.
_ ->
Error.throw (Illegal_Argument_Error "`Range.contains` only accepts Integers.")
Error.throw (Illegal_Argument_Error_Data "`Range.contains` only accepts Integers.")
## ALIAS Range
@ -285,8 +285,8 @@ type Range
0.up_to 5
Integer.up_to : Integer -> Range
Integer.up_to self n = case n of
Integer -> Range self n
_ -> Error.throw (Illegal_Argument_Error "Expected range end to be an Integer.")
Integer -> Range_Data self n
_ -> Error.throw (Illegal_Argument_Error_Data "Expected range end to be an Integer.")
## ALIAS Range
@ -301,8 +301,8 @@ Integer.up_to self n = case n of
5.down_to 0
Integer.down_to : Integer -> Range
Integer.down_to self n = case n of
Integer -> Range self n -1
_ -> Error.throw (Illegal_Argument_Error "Expected range end to be an Integer.")
Integer -> Range_Data self n -1
_ -> Error.throw (Illegal_Argument_Error_Data "Expected range end to be an Integer.")
## PRIVATE
throw_zero_step_error = Error.throw (Illegal_State_Error "A range with step = 0 is ill-formed.")
throw_zero_step_error = Error.throw (Illegal_State_Error_Data "A range with step = 0 is ill-formed.")
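For `Range`, only direct construction changes to `Range_Data`; the user-facing builders `up_to` and `down_to` shown in the hunk keep working unchanged. A sketch of both spellings (illustrative):

```
r1 = 0.up_to 5          # unchanged public API
r2 = Range_Data 0 5 1   # direct use of the renamed constructor
```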
View File
@ -5,21 +5,21 @@ polyglot java import org.enso.base.statistics.FitError
type Model
## Fit a line (y = A x + B) to the data with an optional fixed intercept.
type Linear_Model (intercept:Number|Nothing=Nothing)
Linear_Model (intercept:Number|Nothing=Nothing)
## Fit an exponential line (y = A exp(B x)) to the data with an optional fixed intercept.
type Exponential_Model (intercept:Number|Nothing=Nothing)
Exponential_Model (intercept:Number|Nothing=Nothing)
## Fit a logarithmic line (y = A log x + B) to the data.
type Logarithmic_Model
Logarithmic_Model
## Fit a power series (y = A x ^ B) to the data.
type Power_Model
Power_Model
## Use Least Squares to fit a line to the data.
fit_least_squares : Vector -> Vector -> Model -> Fitted_Model ! Illegal_Argument_Error | Fit_Error
fit_least_squares known_xs known_ys model=Linear_Model =
Illegal_Argument_Error.handle_java_exception <| Fit_Error.handle_java_exception <| case model of
Illegal_Argument_Error.handle_java_exception <| handle_fit_java_exception <| case model of
Linear_Model intercept ->
fitted = if intercept.is_nothing then Regression.fit_linear known_xs.to_array known_ys.to_array else
Regression.fit_linear known_xs.to_array known_ys.to_array intercept
@ -38,20 +38,20 @@ fit_least_squares known_xs known_ys model=Linear_Model =
log_ys = ln_series known_ys "Y-values"
fitted = Regression.fit_linear log_xs.to_array log_ys.to_array
fitted_model_with_r_squared Fitted_Power_Model fitted.intercept.exp fitted.slope known_xs known_ys
_ -> Error.throw (Illegal_Argument_Error "Unsupported model.")
_ -> Error.throw (Illegal_Argument_Error_Data "Unsupported model.")
type Fitted_Model
## Fitted line (y = slope x + intercept).
type Fitted_Linear_Model slope:Number intercept:Number r_squared:Number=0.0
Fitted_Linear_Model slope:Number intercept:Number r_squared:Number=0.0
## Fitted exponential line (y = a exp(b x)).
type Fitted_Exponential_Model a:Number b:Number r_squared:Number=0.0
Fitted_Exponential_Model a:Number b:Number r_squared:Number=0.0
## Fitted logarithmic line (y = a log x + b).
type Fitted_Logarithmic_Model a:Number b:Number r_squared:Number=0.0
Fitted_Logarithmic_Model a:Number b:Number r_squared:Number=0.0
## Fitted power series (y = a x ^ b).
type Fitted_Power_Model a:Number b:Number r_squared:Number=0.0
Fitted_Power_Model a:Number b:Number r_squared:Number=0.0
## Display the fitted line.
to_text : Text
@ -70,7 +70,7 @@ type Fitted_Model
Fitted_Exponential_Model a b _ -> a * (b * x).exp
Fitted_Logarithmic_Model a b _ -> a * x.ln + b
Fitted_Power_Model a b _ -> a * (x ^ b)
_ -> Error.throw (Illegal_Argument_Error "Unsupported model.")
_ -> Error.throw (Illegal_Argument_Error_Data "Unsupported model.")
## PRIVATE
Computes the R Squared value for a model and returns a new instance.
@ -86,8 +86,8 @@ fitted_model_with_r_squared constructor a b known_xs known_ys =
ln_series : Vector -> Vector ! Illegal_Argument_Error
ln_series xs series_name="Values" =
ln_with_panic x = if x.is_nothing then Nothing else
if x <= 0 then Panic.throw (Illegal_Argument_Error (series_name + " must be positive.")) else x.ln
Panic.recover Illegal_Argument_Error <| xs.map ln_with_panic
if x <= 0 then Panic.throw (Illegal_Argument_Error_Data (series_name + " must be positive.")) else x.ln
Panic.recover Illegal_Argument_Error_Data <| xs.map ln_with_panic
## PRIVATE
@ -95,7 +95,8 @@ ln_series xs series_name="Values" =
Arguments:
- message: The error message.
type Fit_Error message
type Fit_Error
Fit_Error_Data message
## PRIVATE
@ -104,5 +105,5 @@ Fit_Error.to_display_text : Text
Fit_Error.to_display_text self = "Could not fit the model: " + self.message.to_text
## PRIVATE
Fit_Error.handle_java_exception =
Panic.catch_java FitError handler=(java_exception-> Error.throw (Fit_Error java_exception.getMessage))
handle_fit_java_exception =
Panic.catch_java FitError handler=(java_exception-> Error.throw (Fit_Error_Data java_exception.getMessage))
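`Model` and `Fitted_Model` show the multi-constructor case from the commit message: the variants stop being standalone types and become plain constructors, so a single `type` covers all of them. A hedged usage sketch (argument values are made up):

```
model = Linear_Model intercept=Nothing
fitted = fit_least_squares [1, 2, 3] [2, 4, 6] model
case fitted of
    Fitted_Linear_Model slope intercept _ -> [slope, intercept]
    _ -> Error.throw (Illegal_Argument_Error_Data "Unexpected model.")
```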
View File
@ -1,4 +1,4 @@
from Standard.Base import Boolean, True, False, Nothing, Vector, Number, Any, Error, Array, Panic, Illegal_Argument_Error, Unsupported_Argument_Types
from Standard.Base import Boolean, True, False, Nothing, Vector, Number, Any, Error, Array, Panic, Illegal_Argument_Error_Data, Illegal_Argument_Error, Unsupported_Argument_Types, Unsupported_Argument_Types_Data
from Standard.Base.Data.Vector import Empty_Error
@ -28,62 +28,62 @@ type Statistic
_ -> Nothing
## Count the number of non-Nothing and non-NaN values.
type Count
Count
## The minimum value.
type Minimum
Minimum
## The maximum value.
type Maximum
Maximum
## Sum the non-Nothing and non-NaN values.
type Sum
Sum
## The sample mean of the values.
type Mean
Mean
## The variance of the values.
Arguments:
- population: specifies if data is a sample or the population.
type Variance (population:Boolean=False)
Variance (population:Boolean=False)
## The standard deviation of the values.
Arguments:
- population: specifies if data is a sample or the population.
type Standard_Deviation (population:Boolean=False)
Standard_Deviation (population:Boolean=False)
## The skewness of the values.
Arguments:
- population: specifies if data is a sample or the population.
type Skew (population:Boolean=False)
Skew (population:Boolean=False)
## The sample kurtosis of the values.
type Kurtosis
Kurtosis
## Calculate the Covariance between data and series.
Arguments:
- series: the series to compute the covariance with.
type Covariance (series:Vector)
Covariance (series:Vector)
## Calculate the Pearson Correlation between data and series.
Arguments:
- series: the series to compute the correlation with.
type Pearson (series:Vector)
Pearson (series:Vector)
## Calculate the Spearman Rank Correlation between data and series.
Arguments:
- series: the series to compute the correlation with.
type Spearman (series:Vector)
Spearman (series:Vector)
## Calculate the coefficient of determination between data and predicted
series.
Arguments:
- predicted: the series to compute the r_squared with.
type R_Squared (predicted:Vector)
R_Squared (predicted:Vector)
## Compute a single statistic on a vector like object.
@ -111,8 +111,8 @@ compute_bulk data statistics=[Count, Sum] =
report_invalid _ =
statistics.map_with_index i->v->
if java_stats.at i . is_nothing then Nothing else
Error.throw (Illegal_Argument_Error ("Can only compute " + v.to_text + " on numerical data sets."))
handle_unsupported = Panic.catch Unsupported_Argument_Types handler=report_invalid
Error.throw (Illegal_Argument_Error_Data ("Can only compute " + v.to_text + " on numerical data sets."))
handle_unsupported = Panic.catch Unsupported_Argument_Types_Data handler=report_invalid
empty_map s = if (s == Count) || (s == Sum) then 0 else
if (s == Minimum) || (s == Maximum) then Error.throw Empty_Error else
@ -181,8 +181,8 @@ spearman_correlation data =
## PRIVATE
wrap_java_call : Any -> Any
wrap_java_call ~function =
report_unsupported _ = Error.throw (Illegal_Argument_Error ("Can only compute correlations on numerical data sets."))
handle_unsupported = Panic.catch Unsupported_Argument_Types handler=report_unsupported
report_unsupported _ = Error.throw (Illegal_Argument_Error_Data ("Can only compute correlations on numerical data sets."))
handle_unsupported = Panic.catch Unsupported_Argument_Types_Data handler=report_unsupported
handle_unsupported <| Illegal_Argument_Error.handle_java_exception <| function
@ -242,7 +242,7 @@ rank_data input method=Rank_Method.Average =
Rank_Method.Ordinal -> Rank.Method.ORDINAL
Rank_Method.Dense -> Rank.Method.DENSE
report_nullpointer caught_panic = Error.throw (Illegal_Argument_Error caught_panic.payload.cause.getMessage)
report_nullpointer caught_panic = Error.throw (Illegal_Argument_Error_Data caught_panic.payload.cause.getMessage)
handle_nullpointer = Panic.catch NullPointerException handler=report_nullpointer
handle_classcast = Panic.catch ClassCastException handler=(Error.throw Vector.Incomparable_Values_Error)
View File
@ -2,17 +2,17 @@
## Specifies how to handle ranking of equal values.
type Rank_Method
## Use the mean of all ranks for equal values.
type Average
Average
## Use the lowest of all ranks for equal values.
type Minimum
Minimum
## Use the highest of all ranks for equal values.
type Maximum
Maximum
## Use the same rank value for equal values; the next group takes the
immediately following rank number.
type Dense
Dense
## Equal values are assigned the next rank in the order in which they occur.
type Ordinal
Ordinal
View File
@ -3,20 +3,18 @@ import Standard.Base.Meta
polyglot java import org.enso.base.Text_Utils
## Enso's text type.
Enso's text type is natively unicode aware, and will handle arbitrary
textual data.
? Concatenation
Enso's text type uses a rope-based structure under the hood to provide
users with efficient concatenation operations.
@Builtin_Type
type Text
## Enso's text type.
Enso's text type is natively unicode aware, and will handle arbitrary
textual data.
? Concatenation
Enso's text type uses a rope-based structure under the hood to provide
users with efficient concatenation operations.
@Builtin_Type
type Text
## Concatenates the text that to the right side of this.
Arguments:
View File
@ -1,10 +1,10 @@
## Specifies the casing options for text conversion.
type Case
## All letters in lower case.
type Lower
Lower
## All letters in upper case.
type Upper
Upper
## First letter of each word in upper case, rest in lower case.
type Title
Title
View File
@ -14,7 +14,7 @@ all_character_sets =
## Get all available Encodings.
all_encodings : Vector Encoding
all_encodings =
all_character_sets.map Encoding
all_character_sets.map Encoding_Data
## Represents a character encoding.
type Encoding
@ -22,81 +22,82 @@ type Encoding
Arguments:
- character_set: java.nio.charset name.
type Encoding (character_set:Text)
Encoding_Data (character_set:Text)
## PRIVATE
Convert an Encoding to its corresponding Java Charset
to_java_charset : Charset
to_java_charset self =
Panic.catch UnsupportedCharsetException (Charset.forName self.character_set) _->
Error.throw (Illegal_Argument_Error ("Unknown Character Set: " + self.character_set))
Error.throw (Illegal_Argument_Error_Data ("Unknown Character Set: " + self.character_set))
## Encoding for ASCII.
ascii : Encoding
ascii = Encoding "US-ASCII"
ascii = Encoding_Data "US-ASCII"
## Encoding for Unicode UTF-8.
utf_8 : Encoding
utf_8 = Encoding "UTF-8"
utf_8 = Encoding_Data "UTF-8"
## Encoding for Unicode UTF-16 Little Endian.
utf_16_le : Encoding
utf_16_le = Encoding "UTF-16LE"
utf_16_le = Encoding_Data "UTF-16LE"
## Encoding for Unicode UTF-16 Big Endian.
utf_16_be : Encoding
utf_16_be = Encoding "UTF-16BE"
utf_16_be = Encoding_Data "UTF-16BE"
## Encoding for Unicode UTF-32 Little Endian.
utf_32_le : Encoding
utf_32_le = Encoding "UTF-32LE"
utf_32_le = Encoding_Data "UTF-32LE"
## Encoding for Unicode UTF-32 Big Endian.
utf_32_be : Encoding
utf_32_be = Encoding "UTF-32BE"
utf_32_be = Encoding_Data "UTF-32BE"
## Encoding for Central European (Windows).
windows_1250 : Encoding
windows_1250 = Encoding "windows-1250"
windows_1250 = Encoding_Data "windows-1250"
## Encoding for Cyrillic (Windows).
windows_1251 : Encoding
windows_1251 = Encoding "windows-1251"
windows_1251 = Encoding_Data "windows-1251"
## ALIAS ISO-8859-1
Encoding for Western European (Windows).
windows_1252 : Encoding
windows_1252 = Encoding "windows-1252"
windows_1252 = Encoding_Data "windows-1252"
## Encoding for Greek (Windows).
windows_1253 : Encoding
windows_1253 = Encoding "windows-1253"
windows_1253 = Encoding_Data "windows-1253"
## ALIAS ISO-8859-9
Encoding for Turkish (Windows).
windows_1254 : Encoding
windows_1254 = Encoding "windows-1254"
windows_1254 = Encoding_Data "windows-1254"
## Encoding for Hebrew (Windows).
windows_1255 : Encoding
windows_1255 = Encoding "windows-1255"
windows_1255 = Encoding_Data "windows-1255"
## Encoding for Arabic (Windows).
windows_1256 : Encoding
windows_1256 = Encoding "windows-1256"
windows_1256 = Encoding_Data "windows-1256"
## Encoding for Baltic (Windows).
windows_1257 : Encoding
windows_1257 = Encoding "windows-1257"
windows_1257 = Encoding_Data "windows-1257"
## Encoding for Vietnamese (Windows).
windows_1258 : Encoding
windows_1258 = Encoding "windows-1258"
windows_1258 = Encoding_Data "windows-1258"
## One or more byte sequences were not decodable using the Encoding.
type Encoding_Error (message:Text)
type Encoding_Error
Encoding_Error_Data (message:Text)
Encoding_Error.to_display_text : Text
Encoding_Error.to_display_text self = "Encoding_Error: " + self.message
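Because `Encoding` now has the single constructor `Encoding_Data`, the named helpers (`utf_8`, `windows_1252`, and so on) remain the convenient entry points, while direct construction uses the new name. A sketch (illustrative):

```
enc = Encoding_Data "UTF-8"   # equivalent to the `utf_8` helper above
```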
View File
@ -120,14 +120,14 @@ Text.at self index =
True ->
length = self.length
new_index = index + length
if new_index < 0 then Error.throw (Index_Out_Of_Bounds_Error index length) else
if new_index < 0 then Error.throw (Index_Out_Of_Bounds_Error_Data index length) else
self.at new_index
False ->
iterator = BreakIterator.getCharacterInstance
iterator.setText self
first = iterator.next index
next = if first == -1 then -1 else iterator.next
if (next == -1) then (Error.throw (Index_Out_Of_Bounds_Error index self.length)) else
if (next == -1) then (Error.throw (Index_Out_Of_Bounds_Error_Data index self.length)) else
Text_Utils.substring self first next
## ALIAS Get Characters
@ -362,13 +362,13 @@ Text.find self pattern mode=Mode.All match_ascii=Nothing case_insensitive=Nothin
'abc def\tghi'.split '\\s+' Regex_Matcher == ["abc", "def", "ghi"]
Text.split : Text -> (Text_Matcher | Regex_Matcher) -> Vector.Vector Text
Text.split self delimiter="," matcher=Text_Matcher = if delimiter.is_empty then Error.throw (Illegal_Argument_Error "The delimiter cannot be empty.") else
Text.split self delimiter="," matcher=Text_Matcher_Data = if delimiter.is_empty then Error.throw (Illegal_Argument_Error_Data "The delimiter cannot be empty.") else
case matcher of
Text_Matcher case_sensitivity ->
Text_Matcher_Data case_sensitivity ->
delimiters = Vector.from_polyglot_array <| case case_sensitivity of
True ->
Text_Utils.span_of_all self delimiter
Case_Insensitive locale ->
Case_Insensitive_Data locale ->
Text_Utils.span_of_all_case_insensitive self delimiter locale.java_locale
Vector.new delimiters.length+1 i->
start = if i == 0 then 0 else
@ -376,7 +376,7 @@ Text.split self delimiter="," matcher=Text_Matcher = if delimiter.is_empty then
end = if i == delimiters.length then (Text_Utils.char_length self) else
delimiters.at i . codeunit_start
Text_Utils.substring self start end
Regex_Matcher _ _ _ _ _ ->
Regex_Matcher_Data _ _ _ _ _ ->
compiled_pattern = matcher.compile delimiter
compiled_pattern.split self mode=Mode.All
@ -457,9 +457,9 @@ Text.split self delimiter="," matcher=Text_Matcher = if delimiter.is_empty then
"aaa aaa".replace "aa" "c" mode=Matching_Mode.First matcher=Regex_Matcher . should_equal "ca aaa"
"aaa aaa".replace "aa" "c" mode=Matching_Mode.Last matcher=Regex_Matcher . should_equal "aaa ca"
Text.replace : Text -> Text -> (Matching_Mode.First | Matching_Mode.Last | Mode.All) -> (Text_Matcher | Regex_Matcher) -> Text
Text.replace self term="" new_text="" mode=Mode.All matcher=Text_Matcher = if term.is_empty then self else
Text.replace self term="" new_text="" mode=Mode.All matcher=Text_Matcher_Data = if term.is_empty then self else
case matcher of
Text_Matcher case_sensitivity ->
Text_Matcher_Data case_sensitivity ->
array_from_single_result result = case result of
Nothing -> Array.empty
_ -> Array.new_1 result
@ -471,7 +471,7 @@ Text.replace self term="" new_text="" mode=Mode.All matcher=Text_Matcher = if te
array_from_single_result <| Text_Utils.span_of self term
Matching_Mode.Last ->
array_from_single_result <| Text_Utils.last_span_of self term
Case_Insensitive locale -> case mode of
Case_Insensitive_Data locale -> case mode of
Mode.All ->
Text_Utils.span_of_all_case_insensitive self term locale.java_locale
Matching_Mode.First ->
@ -481,7 +481,7 @@ Text.replace self term="" new_text="" mode=Mode.All matcher=Text_Matcher = if te
array_from_single_result <|
Text_Utils.span_of_case_insensitive self term locale.java_locale True
Text_Utils.replace_spans self spans_array new_text
Regex_Matcher _ _ _ _ _ ->
Regex_Matcher_Data _ _ _ _ _ ->
compiled_pattern = matcher.compile term
compiled_pattern.replace self new_text mode=mode
@ -675,11 +675,11 @@ Text.insert : Integer -> Text -> Text ! Index_Out_Of_Bounds_Error
Text.insert self index that =
len = self.length
idx = if index < 0 then len + index + 1 else index
if (idx < 0) || (idx > len) then Error.throw (Index_Out_Of_Bounds_Error index len) else
if (idx < 0) || (idx > len) then Error.throw (Index_Out_Of_Bounds_Error_Data index len) else
if idx == 0 then that + self else
if idx == len then self + that else
pre = self.take (Range 0 idx)
post = self.take (Range idx len)
pre = self.take (Range_Data 0 idx)
post = self.take (Range_Data idx len)
pre + that + post
## Returns whether a character from the text at the specified index (0-based) is a
@ -742,7 +742,7 @@ Text.bytes self encoding on_problems=Report_Warning =
result = Encoding_Utils.get_bytes self (encoding . to_java_charset)
vector = Vector.from_polyglot_array result.result
if result.warnings.is_nothing then vector else
on_problems.attach_problems_after vector [Encoding_Error result.warnings]
on_problems.attach_problems_after vector [Encoding_Error_Data result.warnings]
## Takes a vector of bytes and returns Text resulting from decoding it using the
specified encoding.
@ -764,7 +764,7 @@ Text.from_bytes : Vector.Vector Byte -> Encoding -> Text
Text.from_bytes bytes encoding on_problems=Report_Warning =
result = Encoding_Utils.from_bytes bytes.to_array (encoding . to_java_charset)
if result.warnings.is_nothing then result.result else
on_problems.attach_problems_after result.result [Encoding_Error result.warnings]
on_problems.attach_problems_after result.result [Encoding_Error_Data result.warnings]
## Returns a vector containing bytes representing the UTF-8 encoding of the
input text.
@ -894,13 +894,13 @@ Text.from_codepoints codepoints = Text_Utils.from_codepoints codepoints.to_array
"Hello!".starts_with "[a-z]" Regex_Matcher == False
"Hello!".starts_with "[A-Z]" Regex_Matcher == True
Text.starts_with : Text -> Matcher -> Boolean
Text.starts_with self prefix matcher=Text_Matcher = case matcher of
Text_Matcher case_sensitivity -> case case_sensitivity of
Text.starts_with self prefix matcher=Text_Matcher_Data = case matcher of
Text_Matcher_Data case_sensitivity -> case case_sensitivity of
True ->
self.take (Text_Sub_Range.First prefix.length) == prefix
Case_Insensitive locale ->
Case_Insensitive_Data locale ->
self.take (Text_Sub_Range.First prefix.length) . equals_ignore_case prefix locale=locale
Regex_Matcher _ _ _ _ _ ->
Regex_Matcher_Data _ _ _ _ _ ->
preprocessed_pattern = "\A(?:" + prefix + ")"
compiled_pattern = matcher.compile preprocessed_pattern
match = compiled_pattern.match self Mode.First
@ -931,13 +931,13 @@ Text.starts_with self prefix matcher=Text_Matcher = case matcher of
"Hello World".ends_with "world" (Text_Matcher Case_Insensitive) == True
"Hello World".ends_with "[A-Z][a-z]{4}" Regex_Matcher == True
Text.ends_with : Text -> Matcher -> Boolean
Text.ends_with self suffix matcher=Text_Matcher = case matcher of
Text_Matcher case_sensitivity -> case case_sensitivity of
Text.ends_with self suffix matcher=Text_Matcher_Data = case matcher of
Text_Matcher_Data case_sensitivity -> case case_sensitivity of
True ->
self.take (Text_Sub_Range.Last suffix.length) == suffix
Case_Insensitive locale ->
Case_Insensitive_Data locale ->
self.take (Text_Sub_Range.Last suffix.length) . equals_ignore_case suffix locale=locale
Regex_Matcher _ _ _ _ _ ->
Regex_Matcher_Data _ _ _ _ _ ->
preprocessed_pattern = "(?:" + suffix + ")\z"
compiled_pattern = matcher.compile preprocessed_pattern
match = compiled_pattern.match self Mode.First
@ -995,12 +995,12 @@ Text.ends_with self suffix matcher=Text_Matcher = case matcher of
"Hello!".contains "[a-z]" Regex_Matcher
Text.contains : Text -> Matcher -> Boolean
Text.contains self term="" matcher=Text_Matcher = case matcher of
Text_Matcher case_sensitivity -> case case_sensitivity of
Text.contains self term="" matcher=Text_Matcher_Data = case matcher of
Text_Matcher_Data case_sensitivity -> case case_sensitivity of
True -> Text_Utils.contains self term
Case_Insensitive locale ->
Case_Insensitive_Data locale ->
Text_Utils.contains_case_insensitive self term locale.java_locale
Regex_Matcher _ _ _ _ _ ->
Regex_Matcher_Data _ _ _ _ _ ->
compiled_pattern = matcher.compile term
match = compiled_pattern.match self Mode.First
match.is_nothing.not
@ -1097,9 +1097,9 @@ Text.take : (Text_Sub_Range | Index_Sub_Range | Range) -> Text ! Index_Out_Of_Bo
Text.take self range=(First 1) =
ranges = Text_Sub_Range.find_codepoint_ranges self range
case ranges of
Range start end _ ->
Range_Data start end _ ->
Text_Utils.substring self start end
Text_Sub_Range.Codepoint_Ranges char_ranges _ ->
Text_Sub_Range.Codepoint_Ranges_Data char_ranges _ ->
slice_text self char_ranges
## ALIAS skip, remove
@ -1143,12 +1143,12 @@ Text.drop : (Text_Sub_Range | Index_Sub_Range | Range) -> Text ! Index_Out_Of_Bo
Text.drop self range=(First 1) =
ranges = Text_Sub_Range.find_codepoint_ranges self range
case ranges of
Range start end _ ->
Range_Data start end _ ->
if start == 0 then Text_Utils.drop_first self end else
prefix = Text_Utils.substring self 0 start
if end == (Text_Utils.char_length self) then prefix else
prefix + Text_Utils.drop_first self end
Text_Sub_Range.Codepoint_Ranges _ _ ->
Text_Sub_Range.Codepoint_Ranges_Data _ _ ->
sorted_char_ranges_to_remove = ranges.sorted_and_distinct_ranges
char_length = Text_Utils.char_length self
inverted = Index_Sub_Range.invert_range_selection sorted_char_ranges_to_remove char_length needs_sorting=False
@ -1218,7 +1218,7 @@ Text.to_case self case_option=Case.Lower locale=Locale.default = case case_optio
Text.pad : Integer -> Text -> (Location.Start | Location.End) -> Text
Text.pad self length=0 with_pad=' ' at=Location.End =
with_pad_length = with_pad.length
if with_pad_length == 0 then Error.throw (Illegal_Argument_Error "`with_pad` must not be an empty string.") else
if with_pad_length == 0 then Error.throw (Illegal_Argument_Error_Data "`with_pad` must not be an empty string.") else
pad_size = length - self.length
if pad_size <= 0 then self else
full_repetitions = pad_size.div with_pad_length
@ -1376,8 +1376,8 @@ Text.trim self where=Location.Both what=_.is_whitespace =
"aaa aaa".location_of "aa" mode=Matching_Mode.Last matcher=Text_Matcher == Span (Range 5 7) "aaa aaa"
"aaa aaa".location_of "aa" mode=Matching_Mode.Last matcher=Regex_Matcher == Span (Range 4 6) "aaa aaa"
Text.location_of : Text -> (Matching_Mode.First | Matching_Mode.Last) -> Matcher -> Span | Nothing
Text.location_of self term="" mode=Matching_Mode.First matcher=Text_Matcher = case matcher of
Text_Matcher case_sensitive -> case case_sensitive of
Text.location_of self term="" mode=Matching_Mode.First matcher=Text_Matcher_Data = case matcher of
Text_Matcher_Data case_sensitive -> case case_sensitive of
True ->
codepoint_span = case mode of
Matching_Mode.First -> Text_Utils.span_of self term
@ -1388,13 +1388,13 @@ Text.location_of self term="" mode=Matching_Mode.First matcher=Text_Matcher = ca
from our term, the `length` counted in grapheme clusters is
guaranteed to be the same.
end = start + term.length
Span (Range start end) self
Case_Insensitive locale -> case term.is_empty of
Span_Data (Range_Data start end) self
Case_Insensitive_Data locale -> case term.is_empty of
True -> case mode of
Matching_Mode.First -> Span (Range 0 0) self
Matching_Mode.First -> Span_Data (Range_Data 0 0) self
Matching_Mode.Last ->
end = self.length
Span (Range end end) self
Span_Data (Range_Data end end) self
False ->
search_for_last = case mode of
Matching_Mode.First -> False
@ -1402,8 +1402,8 @@ Text.location_of self term="" mode=Matching_Mode.First matcher=Text_Matcher = ca
case Text_Utils.span_of_case_insensitive self term locale.java_locale search_for_last of
Nothing -> Nothing
grapheme_span ->
Span (Range grapheme_span.grapheme_start grapheme_span.grapheme_end) self
Regex_Matcher _ _ _ _ _ -> case mode of
Span_Data (Range_Data grapheme_span.grapheme_start grapheme_span.grapheme_end) self
Regex_Matcher_Data _ _ _ _ _ -> case mode of
Matching_Mode.First ->
case matcher.compile term . match self Mode.First of
Nothing -> Nothing
@ -1479,8 +1479,8 @@ Text.location_of self term="" mode=Matching_Mode.First matcher=Text_Matcher = ca
match_2 = ligatures . location_of_all "ffiff" matcher=(Text_Matcher Case_Insensitive)
match_2 . map .length == [2, 5]
Text.location_of_all : Text -> Matcher -> [Span]
Text.location_of_all self term="" matcher=Text_Matcher = case matcher of
Text_Matcher case_sensitive -> if term.is_empty then Vector.new (self.length + 1) (ix -> Span (Range ix ix) self) else case case_sensitive of
Text.location_of_all self term="" matcher=Text_Matcher_Data = case matcher of
Text_Matcher_Data case_sensitive -> if term.is_empty then Vector.new (self.length + 1) (ix -> Span_Data (Range_Data ix ix) self) else case case_sensitive of
True ->
codepoint_spans = Vector.from_polyglot_array <| Text_Utils.span_of_all self term
grahpeme_ixes = Vector.from_polyglot_array <| Text_Utils.utf16_indices_to_grapheme_indices self (codepoint_spans.map .codeunit_start).to_array
@ -1490,12 +1490,12 @@ Text.location_of_all self term="" matcher=Text_Matcher = case matcher of
offset = term.length
grahpeme_ixes . map start->
end = start+offset
Span (Range start end) self
Case_Insensitive locale ->
Span_Data (Range_Data start end) self
Case_Insensitive_Data locale ->
grapheme_spans = Vector.from_polyglot_array <| Text_Utils.span_of_all_case_insensitive self term locale.java_locale
grapheme_spans.map grapheme_span->
Span (Range grapheme_span.grapheme_start grapheme_span.grapheme_end) self
Regex_Matcher _ _ _ _ _ ->
Span_Data (Range_Data grapheme_span.grapheme_start grapheme_span.grapheme_end) self
Regex_Matcher_Data _ _ _ _ _ ->
case matcher.compile term . match self Mode.All of
Nothing -> []
matches -> matches.map m-> m.span 0 . to_grapheme_span
View File
@ -4,14 +4,14 @@ from Standard.Base import Text
type Line_Ending_Style
## Unix-style endings. Used, among others, on Linux and modern MacOS.
The text equivalent is `'\n'`.
type Unix
Unix
## Windows-style endings. The text equivalent is `'\r\n'`.
type Windows
Windows
## Legacy MacOS endings. Only used on very old Mac systems.
The text equivalent is `'\r'`.
type Mac_Legacy
Mac_Legacy
## Returns the text equivalent of the line ending.
to_text : Text
View File
@ -1,11 +1,12 @@
from Standard.Base import all
from Standard.Base.Error.Problem_Behavior import Report_Warning
from Standard.Base.Error.Common import Wrapped_Dataflow_Error
from Standard.Base.Error.Common import Wrapped_Dataflow_Error_Data
## UNSTABLE
An error indicating that some criteria did not match any names in the input.
type No_Matches_Found (criteria : Vector Text)
type No_Matches_Found
No_Matches_Found_Data (criteria : Vector Text)
No_Matches_Found.to_display_text : Text
No_Matches_Found.to_display_text self =
@ -16,14 +17,16 @@ No_Matches_Found.to_display_text self =
Arguments:
- locale: The locale used for the comparison.
type Case_Insensitive locale=Locale.default
type Case_Insensitive
Case_Insensitive_Data locale=Locale.default
## Represents exact text matching mode.
Arguments:
- case_sensitive: Case Sensitive if True. Otherwise, the comparison is case
insensitive using the specified locale.
type Text_Matcher (case_sensitive : (True | Case_Insensitive) = True)
type Text_Matcher
Text_Matcher_Data (case_sensitive : (True | Case_Insensitive) = True)
## Represents regex matching mode.
@ -50,7 +53,8 @@ type Text_Matcher (case_sensitive : (True | Case_Insensitive) = True)
preceded by an unescaped backslash, all characters from the leftmost such
`#` to the end of the line are ignored. That is to say, they act as
'comments' in the regex.
type Regex_Matcher (case_sensitive : (True | Case_Insensitive) = True) (multiline : Boolean = False) (match_ascii : Boolean = False) (dot_matches_newline : Boolean = False) (comments : Boolean = False)
type Regex_Matcher
Regex_Matcher_Data (case_sensitive : (True | Case_Insensitive) = True) (multiline : Boolean = False) (match_ascii : Boolean = False) (dot_matches_newline : Boolean = False) (comments : Boolean = False)
## UNSTABLE
Compiles a provided pattern according to the rules defined in this
@ -62,7 +66,7 @@ Regex_Matcher.compile self pattern =
## TODO [RW] Currently locale is not supported in case-insensitive
Regex matching. There are plans to revisit it:
https://www.pivotaltracker.com/story/show/181313576
Case_Insensitive _ -> True
Case_Insensitive_Data _ -> True
compiled_pattern = Regex.compile pattern case_insensitive=case_insensitive match_ascii=self.match_ascii dot_matches_newline=self.dot_matches_newline multiline=self.multiline comments=self.comments
compiled_pattern
@ -82,7 +86,7 @@ Text_Matcher.match_single_criterion : Text -> Text -> Boolean
Text_Matcher.match_single_criterion self name criterion =
case self.case_sensitive of
True -> name == criterion
Case_Insensitive locale -> name.equals_ignore_case criterion locale=locale
Case_Insensitive_Data locale -> name.equals_ignore_case criterion locale=locale
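The renames above follow the conflicting-constructor convention from the PR description: a single-constructor `type Foo` now declares a `Foo_Data …` constructor, and pattern matches name the constructor rather than the type. A hedged usage sketch (API assumed from the surrounding hunks):

```
# Constructing a case-insensitive matcher now goes through the `_Data` constructors:
matcher = Text_Matcher_Data (Case_Insensitive_Data Locale.default)
matcher.match_single_criterion "FOO" "foo"
```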
## UNSTABLE
Checks if a name matches the provided criterion according to the specified
@ -138,7 +142,7 @@ Regex_Matcher.match_single_criterion self name criterion =
Selects pairs matching their first element with the provided criteria and
ordering the result according to the order of criteria that matched them.
Text_Matcher.match_criteria [Pair "foo" 42, Pair "bar" 33, Pair "baz" 10, Pair "foo" 0, Pair 10 10] ["bar", "foo"] reorder=True name_mapper=_.name == [Pair "bar" 33, Pair "foo" 42, Pair "foo" 0]
Text_Matcher.match_criteria [Pair_Data "foo" 42, Pair_Data "bar" 33, Pair_Data "baz" 10, Pair_Data "foo" 0, Pair_Data 10 10] ["bar", "foo"] reorder=True name_mapper=_.name == [Pair_Data "bar" 33, Pair_Data "foo" 42, Pair_Data "foo" 0]
Text_Matcher.match_criteria : Vector Any -> Vector Text -> Boolean -> (Any -> Text) -> Problem_Behavior -> Vector Any ! No_Matches_Found
Text_Matcher.match_criteria self = match_criteria_implementation self
@ -179,21 +183,16 @@ Text_Matcher.match_criteria self = match_criteria_implementation self
Selects pairs matching their first element with the provided criteria and
ordering the result according to the order of criteria that matched them.
Text_Matcher.match_criteria [Pair "foo" 42, Pair "bar" 33, Pair "baz" 10, Pair "foo" 0, Pair 10 10] ["bar", "foo"] reorder=True name_mapper=_.name == [Pair "bar" 33, Pair "foo" 42, Pair "foo" 0]
Text_Matcher.match_criteria [Pair_Data "foo" 42, Pair_Data "bar" 33, Pair_Data "baz" 10, Pair_Data "foo" 0, Pair_Data 10 10] ["bar", "foo"] reorder=True name_mapper=_.name == [Pair_Data "bar" 33, Pair_Data "foo" 42, Pair_Data "foo" 0]
Regex_Matcher.match_criteria : Vector Any -> Vector Text -> Boolean -> (Any -> Text) -> Problem_Behavior -> Vector Any ! No_Matches_Found
Regex_Matcher.match_criteria self = match_criteria_implementation self
## A common supertype representing a matching strategy.
type Matcher
Text_Matcher
Regex_Matcher
## PRIVATE
match_criteria_implementation matcher objects criteria reorder=False name_mapper=(x->x) on_problems=Report_Warning =
result = internal_match_criteria_implementation matcher objects criteria reorder name_mapper
unmatched_criteria = result.second
problems = if unmatched_criteria.is_empty then [] else
[No_Matches_Found unmatched_criteria]
[No_Matches_Found_Data unmatched_criteria]
on_problems.attach_problems_after result.first problems
## PRIVATE
@ -206,7 +205,7 @@ match_criteria_callback matcher objects criteria problem_callback reorder=False
type Match_Matrix
## PRIVATE
A helper type holding a matrix of matches.
type Match_Matrix matrix criteria objects
Match_Matrix_Data matrix criteria objects
# Checks if the ith object is matched by any criterion.
is_object_matched_by_anything : Integer -> Boolean
@ -223,7 +222,7 @@ type Match_Matrix
unmatched_criteria self =
checked_criteria = self.criteria.map_with_index j-> criterion->
has_matches = self.does_criterion_match_anything j
Pair has_matches criterion
Pair_Data has_matches criterion
checked_criteria.filter (p -> p.first.not) . map .second
## PRIVATE
@ -250,13 +249,13 @@ make_match_matrix matcher objects criteria object_name_mapper=(x->x) criterion_m
matrix = objects.map obj->
criteria.map criterion->
matcher.match_single_criterion (object_name_mapper obj) (criterion_mapper criterion)
Match_Matrix matrix criteria objects
Match_Matrix_Data matrix criteria objects
## PRIVATE
internal_match_criteria_implementation matcher objects criteria reorder=False name_mapper=(x->x) = Panic.catch Wrapped_Dataflow_Error (handler = x-> x.payload.unwrap) <|
internal_match_criteria_implementation matcher objects criteria reorder=False name_mapper=(x->x) = Panic.catch Wrapped_Dataflow_Error_Data (handler = x-> x.payload.unwrap) <|
## TODO [RW] discuss: this line of code also shows an issue we had with ensuring input dataflow-errors are correctly propagated, later on we stopped doing that and testing for that as it was too cumbersome. Maybe it could be helped with an @Accepts_Error annotation similar to the one from the interpreter???
[matcher, objects, criteria, reorder, name_mapper] . each v->
Panic.rethrow (v.map_error Wrapped_Dataflow_Error)
Panic.rethrow (v.map_error Wrapped_Dataflow_Error_Data)
match_matrix = make_match_matrix matcher objects criteria name_mapper
unmatched_criteria = match_matrix.unmatched_criteria
@ -277,4 +276,4 @@ internal_match_criteria_implementation matcher objects criteria reorder=False na
select_matching_indices match_matrix.is_object_matched_by_anything
result = selected_indices.map objects.at
Pair result unmatched_criteria
Pair_Data result unmatched_criteria
View File
@ -1,4 +1,5 @@
## This module contains the basic interface to the more advanced functionality
of Enso's regular expression engine.
TODO Examples
@ -119,7 +120,8 @@ from_flags match_ascii case_insensitive dot_matches_newline multiline comments e
Arguments:
- id: The identifier of the group that was asked for but does not exist.
type No_Such_Group_Error (id : Text | Integer)
type No_Such_Group_Error
No_Such_Group_Error_Data (id : Text | Integer)
## PRIVATE
View File
@ -24,13 +24,6 @@ from Standard.Base.Data.Text.Regex.Engine.Default as Default_Engine export Defau
## The `Data.Text.Regex.Engine.Engine` interface.
type Engine
## PRIVATE
A type to represent the regular expression engine.
This type may have whichever fields are required to implement the engine.
type Engine
## PRIVATE
Compile the provided `expression` into a regex pattern that can be used
@ -57,13 +50,6 @@ type Engine
## The `Data.Text.Regex.Engine.Pattern` interface.
type Pattern
## PRIVATE
A type to represent the pattern that results from compilation.
The type may contain any fields necessary for its implementation.
type Pattern
## PRIVATE
Tries to match the provided `input` against the pattern `self`.
@ -144,13 +130,6 @@ type Pattern
## The `Data.Text.Regex.Engine.Match` interface.
type Match
## PRIVATE
A type to represent the match.
This type may contain any fields necessary.
type Match
## PRIVATE
Gets the text matched by the group with the provided identifier, or
View File
@ -66,7 +66,7 @@ polyglot java import org.enso.base.Text_Utils
engine_opts = [Default_Engine.Literal_Pattern]
Default_Engine.new engine_opts
new : Vector.Vector Option -> Engine
new opts=[] = Engine opts
new opts=[] = Engine_Data opts
## The default implementation of the `Data.Text.Regex.Engine.Engine` interface.
type Engine
@ -78,7 +78,7 @@ type Engine
Arguments:
- engine_opts: Options for regex matching that are specific to this
engine.
type Engine (engine_opts : Vector.Vector Option)
Engine_Data (engine_opts : Vector.Vector Option)
## ADVANCED
@ -119,12 +119,12 @@ type Engine
Java_Pattern.compile (unicode_regex.transform expression) options_bitmask
internal_pattern = maybe_java_pattern.map_error case _ of
Polyglot_Error err ->
Polyglot_Error_Data err ->
if err.is_a PatternSyntaxException . not then err else
Syntax_Error ("The regex could not be compiled: " + err.getMessage)
Syntax_Error_Data ("The regex could not be compiled: " + err.getMessage)
other -> other
Pattern internal_pattern all_options self
Pattern_Data internal_pattern all_options self
## ADVANCED
@ -158,7 +158,7 @@ type Pattern
- internal_pattern: The internal representation of the compiled pattern.
- options: The vector of options with which this pattern was built.
- engine: A handle to the engine that built this pattern.
type Pattern (internal_pattern : Java_Pattern) (options : Vector.Vector (Global_Option.Option | Option)) (engine : Engine)
Pattern_Data (internal_pattern : Java_Pattern) (options : Vector.Vector (Global_Option.Option | Option)) (engine : Engine)
## PRIVATE
@ -262,10 +262,10 @@ type Pattern
internal_matcher = self.build_matcher input start end
if internal_matcher . find start . not then Nothing else
Match internal_matcher start end input
Match_Data internal_matcher start end input
Integer ->
if mode < 0 then Panic.throw <|
Mode_Error "Cannot match a negative number of times."
Mode_Error_Data "Cannot match a negative number of times."
builder = Vector.new_builder
@ -277,7 +277,7 @@ type Pattern
found = internal_matcher.find offset
if found.not then Nothing else
builder.append (Match internal_matcher start end input)
builder.append (Match_Data internal_matcher start end input)
match_end = internal_matcher.end 0
# Ensure progress even if the match is an empty string.
new_offset = if match_end > offset then match_end else offset+1
@ -297,7 +297,7 @@ type Pattern
found = internal_matcher.find offset
if found.not then Nothing else
builder.append (Match internal_matcher start end input)
builder.append (Match_Data internal_matcher start end input)
match_end = internal_matcher.end 0
# Ensure progress even if the match is an empty string.
new_offset = if match_end > offset then match_end else offset+1
@ -310,9 +310,9 @@ type Pattern
Mode.Full ->
internal_matcher = self.build_matcher input start end
if internal_matcher.matches.not then Nothing else
Match internal_matcher start end input
Match_Data internal_matcher start end input
Mode.Bounded _ _ _ -> Panic.throw <|
Mode_Error "Modes cannot be recursive."
Mode_Error_Data "Modes cannot be recursive."
case mode of
Mode.Bounded start end sub_mode ->
@ -340,8 +340,8 @@ type Pattern
pattern.matches input
matches : Text -> Boolean
matches self input = case self.match input mode=Mode.Full of
Match _ _ _ _ -> True
Vector.Vector _ -> True
Match_Data _ _ _ _ -> True
Vector.Vector_Data _ -> True
_ -> False
## ADVANCED
@ -411,8 +411,8 @@ type Pattern
find self input mode=Mode.All =
matches = self.match input mode
case matches of
Match _ _ _ _ -> matches.group 0
Vector.Vector _ -> matches.map (_.group 0)
Match_Data _ _ _ _ -> matches.group 0
Vector.Vector_Data _ -> matches.map (_.group 0)
_ -> matches
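Since `Match` and `Vector.Vector` are now types rather than constructors, callers that destructure a result from `match` must switch to the `_Data` constructors, mirroring the dispatch above. An illustrative sketch:

```
# Distinguishing the possible shapes of a match result (hypothetical helper):
describe result = case result of
    Match_Data _ _ _ _ -> "a single match"
    Vector.Vector_Data _ -> "a vector of matches"
    Nothing -> "no match"
```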
## ADVANCED
@ -467,14 +467,14 @@ type Pattern
Mode.First -> 2
Integer ->
if mode < 0 then Panic.throw <|
Mode_Error "Cannot match a negative number of times."
Mode_Error_Data "Cannot match a negative number of times."
mode + 1
Mode.All -> -1
Mode.Full -> Panic.throw <|
Mode_Error "Splitting on a full match yields an empty text."
Mode_Error_Data "Splitting on a full match yields an empty text."
Mode.Bounded _ _ _ -> Panic.throw <|
Mode_Error "Splitting on a bounded region is not well-defined."
Mode_Error_Data "Splitting on a bounded region is not well-defined."
splits = self.internal_pattern.split input limit
Vector.from_polyglot_array splits
@ -536,7 +536,7 @@ type Pattern
internal_matcher.replaceFirst replacement
Integer ->
if mode < 0 then Panic.throw <|
Mode_Error "Cannot replace a negative number of times."
Mode_Error_Data "Cannot replace a negative number of times."
internal_matcher = self.build_matcher input start end
buffer = StringBuffer.new
@ -554,7 +554,7 @@ type Pattern
internal_matcher.replaceAll replacement
Mode.Full ->
case self.match input mode=Mode.Full of
Match _ _ _ _ -> self.replace input replacement Mode.First
Match_Data _ _ _ _ -> self.replace input replacement Mode.First
Nothing -> input
Matching_Mode.Last ->
all_matches = self.match input
@ -575,11 +575,11 @@ type Pattern
internal_matcher.appendTail buffer
buffer.to_text
Mode.Bounded _ _ _ -> Panic.throw <|
Mode_Error "Modes cannot be recursive."
Mode_Error_Data "Modes cannot be recursive."
case mode of
Mode.Bounded _ _ _ -> Panic.throw <|
Mode_Error "Bounded replacements are not well-formed."
Mode_Error_Data "Bounded replacements are not well-formed."
_ -> do_replace_mode mode 0 (Text_Utils.char_length input)
## The default implementation of the `Data.Text.Regex.Engine.Match` interface.
@ -595,7 +595,7 @@ type Match
- region_start: The start of the region over which the match was made.
- region_end: The end of the region over which the match was made.
- input: The input text that was being matched.
type Match (internal_match : Java_Matcher) (region_start : Integer) (region_end : Integer) (input : Text)
Match_Data (internal_match : Java_Matcher) (region_start : Integer) (region_end : Integer) (input : Text)
## Gets the text matched by the group with the provided identifier, or
`Nothing` if the group did not participate in the match. If no such group
@ -771,7 +771,7 @@ type Match
span : Integer | Text -> Utf_16_Span | Nothing ! Regex.No_Such_Group_Error
span self id = case self.group id of
Nothing -> Nothing
_ -> Utf_16_Span (Range (self.start id) (self.end id)) self.input
_ -> Utf_16_Span_Data (Range_Data (self.start id) (self.end id)) self.input
## Returns the start character index of the match's region.
@ -820,13 +820,13 @@ type Match
- id: The group identifier with which the error is associated.
handle_error : Any -> (Text | Integer) -> Any
handle_error error id = case error of
Polyglot_Error err ->
Polyglot_Error_Data err ->
is_ioob = err.is_a IndexOutOfBoundsException
is_iae = err.is_a IllegalArgumentException
maps_to_no_such_group = is_ioob || is_iae
if maps_to_no_such_group.not then err else
Regex.No_Such_Group_Error id
Regex.No_Such_Group_Error_Data id
other -> other
## Options specific to the `Default` regular expression engine.
@ -835,14 +835,14 @@ type Option
## Specifies that the input expression to the pattern be treated as a
sequence of literal characters. Metacharacters and escape sequences have
no special meaning in this mode.
type Literal_Pattern
Literal_Pattern
## Disables anchoring to the region's boundaries.
By default, the regex engine will allow `^` and `$` to match the
boundaries of a restricted region. With this option specified, they will
only match the start and end of the input.
type No_Anchoring_Bounds
No_Anchoring_Bounds
## Enables transparent bounds.
@ -852,11 +852,11 @@ type Option
Without this flag, the region boundaries are treated as opaque, meaning
that the above constructs will fail to match anything outside the region.
type Transparent_Bounds
Transparent_Bounds
## Specifies that only the Unix line ending `'\n'` be considered in the
behaviour of the `^` and `$` special characters.
type Unix_Lines
Unix_Lines
## PRIVATE
@ -877,7 +877,7 @@ from_enso_options opts =
Global_Option.Ascii_Matching -> []
No_Anchoring_Bounds -> []
Transparent_Bounds -> []
other -> Panic.throw (Invalid_Option_Error other)
other -> Panic.throw (Invalid_Option_Error_Data other)
options_bitmask = java_flags.fold 0 .bit_or
@ -904,7 +904,8 @@ Invalid_Bounds_Error.to_display_text =
Arguments:
- message: The text of the message to display to users.
type Mode_Error (message : Text)
type Mode_Error
Mode_Error_Data (message : Text)
## PRIVATE
@ -918,7 +919,8 @@ Mode_Error.to_display_text self = self.message.to_text
Arguments:
- opt: The option that was not valid for this regex engine.
type Invalid_Option_Error (opt : Any)
type Invalid_Option_Error
Invalid_Option_Error_Data (opt : Any)
## PRIVATE
View File
@ -4,22 +4,15 @@
to matching on the `Full` content of the input text.
from Standard.Base import all
from Standard.Base.Data.Text.Matching_Mode import First
from Standard.Base.Data.Text.Matching_Mode export First
from Standard.Base.Data.Text.Matching_Mode import First, Last
from Standard.Base.Data.Text.Matching_Mode export First, Last
type Mode
## The regex will only match the first instance it finds.
First
## The regex will match up to some `Integer` number of instances.
Integer
## The regex will make all possible matches.
type All
All
## The regex will only match if the _entire_ text matches.
type Full
Full
## The regex will only match within the region defined by start..end.
@ -32,5 +25,5 @@ type Mode
The `start` and `end` indices range over _characters_ in the text. The
precise definition of `character` is, for the moment, defined by the
regular expression engine itself.
type Bounded (start : Integer) (end : Integer) (mode : (First | Integer | All | Full) = All)
Bounded (start : Integer) (end : Integer) (mode : (First | Integer | All | Full) = All)
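With the `type` keyword dropped from variant declarations, `All`, `Full`, and `Bounded` become constructors scoped under `Mode`. A hedged sketch of selecting modes (constructor references assumed to resolve as in the patterns elsewhere in this diff):

```
# Match everything, or only a bounded region of the input:
everything = Mode.All
region = Mode.Bounded 0 10 mode=Mode.All
```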
View File
@ -15,10 +15,10 @@ type Option
the ASCII character set, you may be able to obtain a performance boost
by specifying this flag. This may not be the case on all engines or all
regexes.
type Ascii_Matching
Ascii_Matching
## Specifies that matching should be performed in a case-insensitive manner.
type Case_Insensitive
Case_Insensitive
## Specifies that the regular expression should be interpreted in comments
mode.
@ -31,16 +31,16 @@ type Option
preceded by an unescaped backslash, all characters from the leftmost
such `#` to the end of the line are ignored. That is to say, they act
as _comments_ in the regex.
type Comments
Comments
## Specifies that the `.` special character should match everything
_including_ newline characters. Without this flag, it will match all
characters _except_ newlines.
type Dot_Matches_Newline
Dot_Matches_Newline
## Specifies that the pattern character `^` matches at both the beginning of
the string and at the beginning of each line (immediately following a
newline), and that the pattern character `$` matches at the end of each
line _and_ at the end of the string.
type Multiline
Multiline
View File
@ -7,7 +7,7 @@
example_span =
text = "Hello!"
Span 0 3 text
Span_Data 0 3 text
from Standard.Base import all
@ -38,8 +38,8 @@ type Span
example_span =
text = "Hello!"
range = 0.up_to 3
Span.Span range text
type Span (range : Range.Range) (text : Text)
Span.Span_Data range text
Span_Data (range : Range.Range) (text : Text)
## The index of the first character included in the span.
@ -76,10 +76,10 @@ type Span
Find the span of code units corresponding to the span of extended grapheme clusters.
text = 'ae\u{301}fz'
(Span (Range 1 3) text).to_utf_16_span == (Utf_16_Span (Range 1 4) text)
(Span_Data (Range 1 3) text).to_utf_16_span == (Utf_16_Span_Data (Range 1 4) text)
to_utf_16_span : Utf_16_Span
to_utf_16_span self =
Utf_16_Span (range_to_char_indices self.text self.range) self.text
Utf_16_Span_Data (range_to_char_indices self.text self.range) self.text
type Utf_16_Span
@ -97,8 +97,8 @@ type Utf_16_Span
example_span =
text = 'a\u{301}bc'
Span.Utf_16_Span (Range 0 3) text
type Utf_16_Span (range : Range.Range) (text : Text)
Span.Utf_16_Span_Data (Range 0 3) text
Utf_16_Span_Data (range : Range.Range) (text : Text)
## The index of the first code unit included in the span.
start : Integer
@ -126,17 +126,17 @@ type Utf_16_Span
Convert a codepoint span to graphemes and back.
text = 'a\u{301}e\u{302}o\u{303}'
span = Utf_16_Span (Range 1 5) text # The span contains the units [\u{301}, e, \u{302}, o].
span = Utf_16_Span_Data (Range 1 5) text # The span contains the units [\u{301}, e, \u{302}, o].
extended = span.to_grapheme_span
extended == Span (Range 0 3) text # The span is extended to the whole string since it contained code units from every grapheme cluster.
extended.to_utf_16_span == Utf_16_Span (Range 0 6) text
extended == Span_Data (Range 0 3) text # The span is extended to the whole string since it contained code units from every grapheme cluster.
extended.to_utf_16_span == Utf_16_Span_Data (Range 0 6) text
to_grapheme_span : Span
to_grapheme_span self = if (self.start < 0) || (self.end > Text_Utils.char_length self.text) then Error.throw (Illegal_State_Error "Utf_16_Span indices are out of range of the associated text.") else
if self.end < self.start then Error.throw (Illegal_State_Error "Utf_16_Span invariant violation: start <= end") else
case self.start == self.end of
True ->
grapheme_ix = Text_Utils.utf16_index_to_grapheme_index self.text self.start
Span (Range grapheme_ix grapheme_ix) self.text
Span_Data (Range_Data grapheme_ix grapheme_ix) self.text
False ->
grapheme_ixes = Text_Utils.utf16_indices_to_grapheme_indices self.text [self.start, self.end - 1].to_array
grapheme_first = grapheme_ixes.at 0
@ -146,7 +146,7 @@ type Utf_16_Span
only a part of a grapheme were contained in our original span, the resulting span will be
extended to contain this whole grapheme.
grapheme_end = grapheme_last + 1
Span (Range grapheme_first grapheme_end) self.text
Span_Data (Range_Data grapheme_first grapheme_end) self.text
## PRIVATE
Utility function taking a range pointing at grapheme clusters and converting
@ -156,19 +156,19 @@ range_to_char_indices text range = if range.step != 1 then Error.throw (Illegal_
len = text.length
start = if range.start < 0 then range.start + len else range.start
end = if range.end == Nothing then len else (if range.end < 0 then range.end + len else range.end)
is_valid = (Range 0 len+1).contains
is_valid = (Range_Data 0 len+1).contains
case (Pair (is_valid start) (is_valid end)) of
Pair False _ -> Error.throw (Index_Out_Of_Bounds_Error range.start len)
Pair True False -> Error.throw (Index_Out_Of_Bounds_Error range.end len)
Pair True True ->
if start>=end then (Range 0 0) else
case (Pair_Data (is_valid start) (is_valid end)) of
Pair_Data False _ -> Error.throw (Index_Out_Of_Bounds_Error_Data range.start len)
Pair_Data True False -> Error.throw (Index_Out_Of_Bounds_Error_Data range.end len)
Pair_Data True True ->
if start>=end then (Range_Data 0 0) else
iterator = BreakIterator.getCharacterInstance
iterator.setText text
start_index = iterator.next start
end_index = iterator.next (end - start)
Range start_index end_index
Range_Data start_index end_index
Span.from (that:Utf_16_Span) = that.to_grapheme_span
Utf_16_Span.from (that:Span) = that.to_utf_16_span
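The two `from` conversions above make the span types interconvertible. An illustrative round-trip, reusing the grapheme example from the doc comments (constructor names per this change):

```
text = 'a\u{301}bc'
# First two grapheme clusters, 'a\u{301}' and 'b', span UTF-16 code units 0..3:
span = Span_Data (Range_Data 0 2) text
span.to_utf_16_span == Utf_16_Span_Data (Range_Data 0 3) text
```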
View File
@ -9,4 +9,5 @@ from Standard.Base import all
set to `Nothing` (the default), it chooses the default ordering for a given
backend. For the In-memory backend, the default ordering is case sensitive.
In databases, the default ordering depends on the database configuration.
type Text_Ordering (sort_digits_as_numbers:Boolean=False) (case_sensitive:(Nothing|True|Case_Insensitive)=Nothing)
type Text_Ordering
Text_Ordering_Data (sort_digits_as_numbers:Boolean=False) (case_sensitive:(Nothing|True|Case_Insensitive)=Nothing)
View File
@ -15,20 +15,20 @@ type Text_Sub_Range
## Select characters until the first instance of `delimiter`.
Select an empty string if `delimiter` is empty.
Select the entire string if the input does not contain `delimiter`.
type Before (delimiter : Text)
Before (delimiter : Text)
## Select characters until the last instance of `delimiter`.
Select an empty string if `delimiter` is empty.
Select the entire string if the input does not contain `delimiter`.
type Before_Last (delimiter : Text)
Before_Last (delimiter : Text)
## Select characters after the first instance of `delimiter`.
Select an empty string if the input does not contain `delimiter`.
type After (delimiter : Text)
After (delimiter : Text)
## Select characters after the last instance of `delimiter`.
Select an empty string if the input does not contain `delimiter`.
type After_Last (delimiter : Text)
After_Last (delimiter : Text)
## PRIVATE
Finds code-point indices corresponding to the part of the input matching the
@ -40,50 +40,50 @@ type Text_Sub_Range
While the input ranges may have varying steps, they are processed and split
in such a way that the ranges returned by this method always have a step
equal to 1.
find_codepoint_ranges : Text -> (Text_Sub_Range | Index_Sub_Range | Range) -> (Range | Codepoint_Ranges)
find_codepoint_ranges : Text -> (Text_Sub_Range | Index_Sub_Range | Range) -> (Range_Data | Codepoint_Ranges)
find_codepoint_ranges text subrange =
case subrange of
Before delimiter ->
if delimiter.is_empty then (Range 0 0) else
if delimiter.is_empty then (Range_Data 0 0) else
span = Text_Utils.span_of text delimiter
if span.is_nothing then (Range 0 (Text_Utils.char_length text)) else
(Range 0 span.codeunit_start)
if span.is_nothing then (Range_Data 0 (Text_Utils.char_length text)) else
(Range_Data 0 span.codeunit_start)
Before_Last delimiter ->
if delimiter.is_empty then (Range 0 (Text_Utils.char_length text)) else
if delimiter.is_empty then (Range_Data 0 (Text_Utils.char_length text)) else
span = Text_Utils.last_span_of text delimiter
if span.is_nothing then (Range 0 (Text_Utils.char_length text)) else
(Range 0 span.codeunit_start)
if span.is_nothing then (Range_Data 0 (Text_Utils.char_length text)) else
(Range_Data 0 span.codeunit_start)
After delimiter ->
if delimiter.is_empty then (Range 0 (Text_Utils.char_length text)) else
if delimiter.is_empty then (Range_Data 0 (Text_Utils.char_length text)) else
span = Text_Utils.span_of text delimiter
if span.is_nothing then (Range 0 0) else
(Range span.codeunit_end (Text_Utils.char_length text))
if span.is_nothing then (Range_Data 0 0) else
(Range_Data span.codeunit_end (Text_Utils.char_length text))
After_Last delimiter ->
if delimiter.is_empty then (Range 0 0) else
if delimiter.is_empty then (Range_Data 0 0) else
span = Text_Utils.last_span_of text delimiter
if span.is_nothing then (Range 0 0) else
(Range span.codeunit_end (Text_Utils.char_length text))
if span.is_nothing then (Range_Data 0 0) else
(Range_Data span.codeunit_end (Text_Utils.char_length text))
First count ->
if count <= 0 then (Range 0 0) else
if count <= 0 then (Range_Data 0 0) else
iterator = BreakIterator.getCharacterInstance
iterator.setText text
start_index = iterator.next count
Range 0 (if start_index == -1 then (Text_Utils.char_length text) else start_index)
Range_Data 0 (if start_index == -1 then (Text_Utils.char_length text) else start_index)
Last count ->
if count <= 0 then (Range 0 0) else
if count <= 0 then (Range_Data 0 0) else
iterator = BreakIterator.getCharacterInstance
iterator.setText text
iterator.last
start_index = iterator.next -count
Range (if start_index == -1 then 0 else start_index) (Text_Utils.char_length text)
Range_Data (if start_index == -1 then 0 else start_index) (Text_Utils.char_length text)
While predicate ->
indices = find_sub_range_end text _-> start-> end->
predicate (Text_Utils.substring text start end) . not
if indices.first.is_nothing then (Range 0 indices.second) else
Range 0 indices.first
if indices.first.is_nothing then (Range_Data 0 indices.second) else
Range_Data 0 indices.first
By_Index indices ->
case indices of
Vector.Vector _ ->
Vector.Vector_Data _ ->
if indices.length == 1 then resolve_index_or_range text indices.first else
batch_resolve_indices_or_ranges text indices
_ -> resolve_index_or_range text indices
@ -92,12 +92,12 @@ find_codepoint_ranges text subrange =
indices = Random.random_indices text.length count rng
find_codepoint_ranges text (By_Index indices)
Every step start ->
if step <= 0 then Error.throw (Illegal_Argument_Error "Step within Every must be positive.") else
if step <= 0 then Error.throw (Illegal_Argument_Error_Data "Step within Every must be positive.") else
len = text.length
if start >= len then Range 0 0 else
range = Range start text.length step
if start >= len then Range_Data 0 0 else
range = Range_Data start text.length step
find_codepoint_ranges text (By_Index range)
Range _ _ _ ->
Range_Data _ _ _ ->
find_codepoint_ranges text (By_Index subrange)
type Codepoint_Ranges
@ -109,7 +109,7 @@ type Codepoint_Ranges
- ranges: the list of ranges. Each `Range` has `step` equal to 1.
- is_sorted_and_distinct: A helper value specifying if the ranges are
already sorted and non-intersecting.
type Codepoint_Ranges (ranges : Vector Range) (is_sorted_and_distinct : Boolean)
Codepoint_Ranges_Data (ranges : Vector Range) (is_sorted_and_distinct : Boolean)
## PRIVATE
Returns a new sorted list of ranges where intersecting ranges have been
@ -136,14 +136,14 @@ find_sub_range_end = text->predicate->
iterator.setText text
loop index start end =
if end == -1 then (Pair Nothing start) else
if predicate index start end then (Pair start end) else
if end == -1 then (Pair_Data Nothing start) else
if predicate index start end then (Pair_Data start end) else
@Tail_Call loop (index + 1) end iterator.next
loop 0 0 iterator.next
## PRIVATE
resolve_index_or_range text descriptor = Panic.recover [Index_Out_Of_Bounds_Error, Illegal_Argument_Error] <|
resolve_index_or_range text descriptor = Panic.recover [Index_Out_Of_Bounds_Error_Data, Illegal_Argument_Error_Data] <|
iterator = BreakIterator.getCharacterInstance
iterator.setText text
case descriptor of
@ -152,12 +152,12 @@ resolve_index_or_range text descriptor = Panic.recover [Index_Out_Of_Bounds_Erro
iterator.last
start = iterator.next descriptor
end = iterator.next
if (start == -1) || (end == -1) then Error.throw (Index_Out_Of_Bounds_Error descriptor text.length) else
Range start end
Range _ _ _ ->
if (start == -1) || (end == -1) then Error.throw (Index_Out_Of_Bounds_Error_Data descriptor text.length) else
Range_Data start end
Range_Data _ _ _ ->
len = text.length
true_range = normalize_range descriptor len
if descriptor.is_empty then Range 0 0 else
if descriptor.is_empty then Range_Data 0 0 else
case true_range.step == 1 of
True -> range_to_char_indices text true_range
False ->
@ -166,14 +166,14 @@ resolve_index_or_range text descriptor = Panic.recover [Index_Out_Of_Bounds_Erro
go start_index current_grapheme =
end_index = iterator.next
if (start_index == -1) || (end_index == -1) || (current_grapheme >= true_range.end) then Nothing else
ranges.append (Range start_index end_index)
ranges.append (Range_Data start_index end_index)
## We advance by step-1, because we already advanced by
one grapheme when looking for the end of the previous
one.
@Tail_Call go (iterator.next true_range.step-1) current_grapheme+true_range.step
go (iterator.next true_range.start) true_range.start
Codepoint_Ranges ranges.to_vector is_sorted_and_distinct=True
Codepoint_Ranges_Data ranges.to_vector is_sorted_and_distinct=True
## PRIVATE
Returns an array of UTF-16 code-unit indices corresponding to the beginning
@ -185,13 +185,13 @@ character_ranges text =
iterator.setText text
ranges = Vector.new_builder
go prev nxt = if nxt == -1 then Nothing else
ranges.append (Range prev nxt)
ranges.append (Range_Data prev nxt)
@Tail_Call go nxt iterator.next
go iterator.first iterator.next
ranges.to_vector
## PRIVATE
batch_resolve_indices_or_ranges text descriptors = Panic.recover [Index_Out_Of_Bounds_Error, Illegal_Argument_Error] <|
batch_resolve_indices_or_ranges text descriptors = Panic.recover [Index_Out_Of_Bounds_Error_Data, Illegal_Argument_Error_Data] <|
## This is pre-computing the ranges for all characters in the string, which
may be much more than necessary, for example if all ranges reference only
the beginning of the string. In the future we may want to replace this
@ -204,24 +204,24 @@ batch_resolve_indices_or_ranges text descriptors = Panic.recover [Index_Out_Of_B
case descriptor of
Integer ->
ranges.append (Panic.rethrow <| characters.at descriptor)
Range _ _ _ ->
if descriptor.is_empty then Range 0 0 else
Range_Data _ _ _ ->
if descriptor.is_empty then Range_Data 0 0 else
true_range = normalize_range descriptor characters.length
case true_range.step == 1 of
True ->
first_grapheme = Panic.rethrow <| characters.at true_range.start
last_grapheme = Panic.rethrow <| characters.at true_range.end-1
ranges.append (Range first_grapheme.start last_grapheme.end)
ranges.append (Range_Data first_grapheme.start last_grapheme.end)
False ->
if true_range.start >= characters.length then
Panic.throw (Index_Out_Of_Bounds_Error true_range.start characters.length)
Panic.throw (Index_Out_Of_Bounds_Error_Data true_range.start characters.length)
true_range.to_vector.each ix->
ranges.append (Panic.rethrow <| characters.at ix)
Codepoint_Ranges ranges.to_vector is_sorted_and_distinct=False
Codepoint_Ranges_Data ranges.to_vector is_sorted_and_distinct=False
## PRIVATE
panic_on_non_positive_step =
Panic.throw (Illegal_Argument_Error "Range step must be positive.")
Panic.throw (Illegal_Argument_Error_Data "Range step must be positive.")
## PRIVATE
Ensures that the range is valid and trims it to the length of the collection.
@ -229,8 +229,8 @@ normalize_range range length =
if range.step <= 0 then panic_on_non_positive_step
# We may add support for negative indices in the future.
if (range.start < 0) || (range.end < 0) then
Panic.throw (Illegal_Argument_Error "Ranges with negative indices are not supported for indexing.")
Panic.throw (Illegal_Argument_Error_Data "Ranges with negative indices are not supported for indexing.")
if (range.start >= length) then
Panic.throw (Index_Out_Of_Bounds_Error range.start length)
if range.end >= length then Range range.start length range.step else
Panic.throw (Index_Out_Of_Bounds_Error_Data range.start length)
if range.end >= length then Range_Data range.start length range.step else
range
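The `Range_Data` and `Codepoint_Ranges_Data` renames above follow the single-constructor convention introduced by this change: `type` now introduces only the type, and a lone constructor gets a `_Data` suffix until proper statics arrive. A minimal sketch of the pattern, using a hypothetical `Foo`:

```
type Foo
    Foo_Data f1 f2 f3

make = Foo_Data 1 2 3
```

Pattern matches then name the constructor, as in `Foo_Data a _ _ -> a`, while `Foo` itself is used only at the type level, e.g. in signatures.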
@ -3,7 +3,7 @@ from Standard.Base import all
import Standard.Base.Data.Time.Duration
import Standard.Base.Polyglot
from Standard.Base.Error.Common import Time_Error
from Standard.Base.Error.Common import Time_Error_Data
polyglot java import org.enso.base.Time_Utils
polyglot java import java.time.temporal.ChronoField
@ -60,7 +60,7 @@ new year (month = 1) (day = 1) =
https://github.com/enso-org/enso/pull/3559
Then this should be switched to use `Panic.catch_java`.
Panic.recover Any (Date.internal_new year month day) . catch Any e-> case e of
Polyglot_Error err -> Error.throw (Time_Error err.getMessage)
Polyglot_Error_Data err -> Error.throw (Time_Error_Data err.getMessage)
ex -> ex
## ALIAS Date from Text
@ -130,27 +130,26 @@ parse text pattern=Nothing =
result = Panic.recover Any <| case pattern of
Nothing -> Date.internal_parse text 0
Text -> Date.internal_parse text pattern
_ -> Panic.throw (Time_Error "An invalid pattern was provided.")
_ -> Panic.throw (Time_Error_Data "An invalid pattern was provided.")
result . map_error <| case _ of
Polyglot_Error err -> Time_Error err.getMessage
Polyglot_Error_Data err -> Time_Error_Data err.getMessage
ex -> ex
## This type represents a date, often viewed as year-month-day.
Arguments:
- internal_local_date: The internal date representation.
For example, the value "2nd October 2007" can be stored in a `Date`.
This class does not store or represent a time or timezone. Instead, it
is a description of the date, as used for birthdays. It cannot represent
an instant on the time-line without additional information such as an
offset or timezone.
@Builtin_Type
type Date
## This type represents a date, often viewed as year-month-day.
Arguments:
- internal_local_date: The internal date representation.
For example, the value "2nd October 2007" can be stored in a `Date`.
This class does not store or represent a time or timezone. Instead, it
is a description of the date, as used for birthdays. It cannot represent
an instant on the time-line without additional information such as an
offset or timezone.
@Builtin_Type
type Date
## Get the year field.
> Example
@ -260,7 +259,7 @@ type Date
example_add = Date.new 2020 + 6.months
+ : Duration -> Date
+ self amount = if amount.is_time then Error.throw (Time_Error "Date does not support time intervals") else
+ self amount = if amount.is_time then Error.throw (Time_Error_Data "Date does not support time intervals") else
(Time_Utils.date_adjust self Time_Utils.AdjustOp.PLUS amount.internal_period) . internal_local_date
## Subtract the specified amount of time from this instant to get another
@ -277,7 +276,7 @@ type Date
example_subtract = Date.new 2020 - 7.days
- : Duration -> Date
- self amount = if amount.is_time then Error.throw (Time_Error "Date does not support time intervals") else
- self amount = if amount.is_time then Error.throw (Time_Error_Data "Date does not support time intervals") else
(Time_Utils.date_adjust self Time_Utils.AdjustOp.MINUS amount.internal_period) . internal_local_date
@ -53,7 +53,7 @@ now = @Builtin_Method "Date_Time.now"
new : Integer -> Integer -> Integer -> Integer -> Integer -> Integer -> Integer -> Time_Zone -> Date_Time ! Time_Error
new year (month = 1) (day = 1) (hour = 0) (minute = 0) (second = 0) (nanosecond = 0) (zone = Time_Zone.system) =
Panic.catch_java Any (Date_Time.new_builtin year month day hour minute second nanosecond zone) java_exception->
Error.throw (Time_Error java_exception.getMessage)
Error.throw (Time_Error_Data java_exception.getMessage)
## ALIAS Time from Text
@ -135,28 +135,27 @@ new year (month = 1) (day = 1) (hour = 0) (minute = 0) (second = 0) (nanosecond
Date_Time.parse "06 of May 2020 at 04:30AM" "dd 'of' MMMM yyyy 'at' hh:mma"
parse : Text -> Text | Nothing -> Locale -> Date_Time ! Time_Error
parse text pattern=Nothing locale=Locale.default =
Panic.catch_java Any handler=(java_exception -> Error.throw (Time_Error java_exception.getMessage)) <|
Panic.catch_java Any handler=(java_exception -> Error.throw (Time_Error_Data java_exception.getMessage)) <|
case pattern of
Nothing -> Date_Time.parse_builtin text
Text -> Time_Utils.parse_datetime_format text pattern locale.java_locale
## PRIVATE
A date-time with a timezone in the ISO-8601 calendar system, such as
"2007-12-03T10:15:30+01:00 Europe/Paris".
Time is a representation of a date-time with a timezone. This class
stores all date and time fields, to a precision of nanoseconds, and a
timezone, with a zone offset used to handle ambiguous local
date-times.
For example, the value "2nd October 2007 at 13:45.30.123456789 +02:00 in
the Europe/Paris timezone" can be stored as `Time`.
@Builtin_Type
type Date_Time
## PRIVATE
A date-time with a timezone in the ISO-8601 calendar system, such as
"2007-12-03T10:15:30+01:00 Europe/Paris".
Time is a representation of a date-time with a timezone. This class
stores all date and time fields, to a precision of nanoseconds, and a
timezone, with a zone offset used to handle ambiguous local
date-times.
For example, the value "2nd October 2007 at 13:45.30.123456789 +02:00 in
the Europe/Paris timezone" can be stored as `Time`.
@Builtin_Type
type Date_Time
## Get the year portion of the time.
> Example
@ -1,19 +1,19 @@
from Standard.Base import all
type Day_Of_Week
type Sunday
Sunday
type Monday
Monday
type Tuesday
Tuesday
type Wednesday
Wednesday
type Thursday
Thursday
type Friday
Friday
type Saturday
Saturday
## Convert the Day_Of_Week to an Integer
@ -31,7 +31,7 @@ type Day_Of_Week
Friday -> 5
Saturday -> 6
shifted = if first_day == Day_Of_Week.Sunday then day_number else
shifted = if first_day == Sunday then day_number else
(day_number + 7 - (first_day.to_integer start_at_zero=True)) % 7
shifted + if start_at_zero then 0 else 1
@ -42,7 +42,7 @@ type Day_Of_Week
- `that`: The first day of the week.
- `first_day`: The first day of the week.
- `start_at_zero`: If True, first day of the week is 0 otherwise is 1.
Day_Of_Week.from (that : Integer) (first_day:Day_Of_Week=Sunday) (start_at_zero:Boolean=False) =
from (that : Integer) (first_day:Day_Of_Week=Sunday) (start_at_zero:Boolean=False) =
shifted = if start_at_zero then that else that - 1
case (shifted < 0) || (shifted > 6) of
@ -51,7 +51,7 @@ Day_Of_Week.from (that : Integer) (first_day:Day_Of_Week=Sunday) (start_at_zero:
message = "Invalid day of week (must be " + valid_range + ")."
Error.throw (Illegal_Argument_Error message)
False ->
day_number = if first_day == Day_Of_Week.Sunday then shifted else
day_number = if first_day == Sunday then shifted else
(shifted + (first_day.to_integer start_at_zero=True)) % 7
[Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday].at day_number
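The hunk above reflects that variants such as `Sunday` are no longer standalone types, only constructors of `Day_Of_Week`. A usage sketch based on the signatures visible here (outputs inferred from the shown logic, so treat them as a best guess):

```
day = Day_Of_Week.from 1    # defaults: first_day=Sunday, start_at_zero=False
day == Sunday               # constructors compare directly now
day.to_integer              # should round-trip back to 1
```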
@ -23,7 +23,7 @@ between : Date_Time -> Date_Time -> Duration
between start_inclusive end_exclusive timezone_aware=True =
period = Java_Period.ofDays 0 . normalized
duration = Time_Utils.duration_between start_inclusive end_exclusive timezone_aware
Duration period duration
Duration_Data period duration
## ADVANCED
@ -37,8 +37,8 @@ time_execution ~function =
start = System.nano_time
result = Runtime.no_inline function
end = System.nano_time
duration = Duration (Java_Period.ofDays 0) (Java_Duration.ofNanos (end - start))
Pair duration result
duration = Duration_Data (Java_Period.ofDays 0) (Java_Duration.ofNanos (end - start))
Pair_Data duration result
type Duration
@ -50,7 +50,7 @@ type Duration
- internal_period: The internal representation of the time as a period.
- internal_duration: The internal representation of the time as a
duration.
type Duration internal_period internal_duration
Duration_Data internal_period internal_duration
## Add the specified amount of time to this duration.
@ -74,7 +74,7 @@ type Duration
+ self that =
period = self.internal_period . plus that.internal_period . normalized
duration = self.internal_duration . plus that.internal_duration
Duration period duration
Duration_Data period duration
## Subtract the specified amount of time from this duration.
@ -98,7 +98,7 @@ type Duration
- self that =
period = self.internal_period . minus that.internal_period . normalized
duration = self.internal_duration . minus that.internal_duration
Duration period duration
Duration_Data period duration
## Get the portion of the duration expressed in nanoseconds.
@ -311,7 +311,7 @@ type Duration
example_nano = 1.nanosecond
Integer.nanosecond : Duration
Integer.nanosecond self = Duration (Java_Period.ofDays 0) (Java_Duration.ofNanos self)
Integer.nanosecond self = Duration_Data (Java_Period.ofDays 0) (Java_Duration.ofNanos self)
## Create a duration of `self` nanoseconds.
@ -333,7 +333,7 @@ Integer.nanoseconds self = self.nanosecond
example_milli = 1.millisecond
Integer.millisecond : Duration
Integer.millisecond self = Duration (Java_Period.ofDays 0) (Java_Duration.ofMillis self)
Integer.millisecond self = Duration_Data (Java_Period.ofDays 0) (Java_Duration.ofMillis self)
## Create a duration of `self` milliseconds.
@ -355,7 +355,7 @@ Integer.milliseconds self = self.millisecond
example_second = 1.second
Integer.second : Duration
Integer.second self = Duration (Java_Period.ofDays 0) (Java_Duration.ofSeconds self)
Integer.second self = Duration_Data (Java_Period.ofDays 0) (Java_Duration.ofSeconds self)
## Create a duration of `self` seconds.
@ -377,7 +377,7 @@ Integer.seconds self = self.second
example_min = 1.minute
Integer.minute : Duration
Integer.minute self = Duration (Java_Period.ofDays 0) (Java_Duration.ofMinutes self)
Integer.minute self = Duration_Data (Java_Period.ofDays 0) (Java_Duration.ofMinutes self)
## Create a duration of `self` minutes.
@ -399,7 +399,7 @@ Integer.minutes self = self.minute
example_hour = 1.hour
Integer.hour : Duration
Integer.hour self = Duration (Java_Period.ofDays 0) (Java_Duration.ofHours self)
Integer.hour self = Duration_Data (Java_Period.ofDays 0) (Java_Duration.ofHours self)
## Create a duration of `self` hours.
@ -421,7 +421,7 @@ Integer.hours self = self.hour
example_day = 1.day
Integer.day : Duration
Integer.day self = Duration (Java_Period.ofDays self . normalized) (Java_Duration.ofSeconds 0)
Integer.day self = Duration_Data (Java_Period.ofDays self . normalized) (Java_Duration.ofSeconds 0)
## Create a duration of `self` days.
@ -443,7 +443,7 @@ Integer.days self = self.day
example_month = 1.month
Integer.month : Duration
Integer.month self = Duration (Java_Period.ofMonths self . normalized) (Java_Duration.ofSeconds 0)
Integer.month self = Duration_Data (Java_Period.ofMonths self . normalized) (Java_Duration.ofSeconds 0)
## Create a duration of `self` months.
@ -465,7 +465,7 @@ Integer.months self = self.month
example_year = 1.year
Integer.year : Duration
Integer.year self = Duration (Java_Period.ofYears self . normalized) (Java_Duration.ofSeconds 0)
Integer.year self = Duration_Data (Java_Period.ofYears self . normalized) (Java_Duration.ofSeconds 0)
## Create a duration of `self` years.
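All the `Integer.*` helpers in this file now delegate to the renamed `Duration_Data` constructor, so composing them reads the same at call sites. A sketch assuming the `+` defined earlier in this file:

```
total = 1.hour + 30.minutes    # a Duration_Data combining both internal parts
total.internal_duration        # backed by the java.time duration underneath
```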
@ -46,7 +46,7 @@ now = @Builtin_Method "Time_Of_Day.now"
new : Integer -> Integer -> Integer -> Integer -> Time_Of_Day ! Time_Error
new (hour = 0) (minute = 0) (second = 0) (nanosecond = 0) =
Panic.catch_java Any (Time_Of_Day.new_builtin hour minute second nanosecond) java_exception->
Error.throw (Time_Error java_exception.getMessage)
Error.throw (Time_Error_Data java_exception.getMessage)
## Obtains an instance of `Time_Of_Day` from a text such as "10:15".
@ -111,23 +111,21 @@ new (hour = 0) (minute = 0) (second = 0) (nanosecond = 0) =
example_parse = Time_Of_Day.parse "4:30AM" "h:mma"
parse : Text -> Text | Nothing -> Locale -> Time_Of_Day ! Time_Error
parse text pattern=Nothing locale=Locale.default =
Panic.catch_java Any handler=(java_exception -> Error.throw (Time_Error java_exception.getMessage)) <|
Panic.catch_java Any handler=(java_exception -> Error.throw (Time_Error_Data java_exception.getMessage)) <|
case pattern of
Nothing -> Time_Of_Day.parse_builtin text
Text -> Time_Utils.parse_time text pattern locale.java_locale
## PRIVATE
This type is a date-time object that represents a time, often viewed
as hour-minute-second.
Time is represented to nanosecond precision. For example, the value
"13:45.30.123456789" can be stored in a `Time_Of_Day`.
@Builtin_Type
type Time_Of_Day
## PRIVATE
This type is a date-time object that represents a time, often viewed
as hour-minute-second.
Time is represented to nanosecond precision. For example, the value
"13:45.30.123456789" can be stored in a `Time_Of_Day`.
@Builtin_Type
type Time_Of_Day
## Get the hour portion of the time of day.
> Example
@ -210,7 +208,7 @@ type Time_Of_Day
example_plus = Time_Of_Day.new + 3.seconds
+ : Duration -> Time_Of_Day
+ self amount = if amount.is_date then Error.throw (Time_Error "Time_Of_Day does not support date intervals") else
+ self amount = if amount.is_date then Error.throw (Time_Error_Data "Time_Of_Day does not support date intervals") else
Time_Utils.time_adjust self Time_Utils.AdjustOp.PLUS amount.internal_duration
## Subtract the specified amount of time from this instant to get a new
@ -227,7 +225,7 @@ type Time_Of_Day
example_minus = Time_Of_Day.now - 12.hours
- : Duration -> Time_Of_Day
- self amount = if amount.is_date then Error.throw (Time_Error "Time_Of_Day does not support date intervals") else
- self amount = if amount.is_date then Error.throw (Time_Error_Data "Time_Of_Day does not support date intervals") else
Time_Utils.time_adjust self Time_Utils.AdjustOp.MINUS amount.internal_duration
## Format this time of day as text using the default formatter.
@ -98,23 +98,23 @@ new (hours = 0) (minutes = 0) (seconds = 0) =
example_parse = Time_Zone.parse "+03:02:01"
parse : Text -> Time_Zone
parse text =
Panic.catch_java Any handler=(java_exception -> Error.throw (Time_Error java_exception.getMessage)) <|
Panic.catch_java Any handler=(java_exception -> Error.throw (Time_Error_Data java_exception.getMessage)) <|
Time_Zone.parse_builtin text
## PRIVATE
A type representing a time zone.
Arguments:
- internal_zone_id: The identifier for the internal zone of the
representation.
A time zone can be either offset-based like "-06:00" or id-based like
"Europe/Paris".
@Builtin_Type
type Time_Zone
## PRIVATE
A type representing a time zone.
Arguments:
- internal_zone_id: The identifier for the internal zone of the
representation.
A time zone can be either offset-based like "-06:00" or id-based like
"Europe/Paris".
@Builtin_Type
type Time_Zone
## Get the unique timezone ID.
@ -91,10 +91,10 @@ new_builder (capacity=10) = Builder.new capacity
A vector allows to store an arbitrary number of elements in linear memory. It
is the recommended data structure for most applications.
from_polyglot_array : Any -> Vector Any
from_polyglot_array arr = Vector (Proxy_Polyglot_Array.Proxy_Polyglot_Array arr)
from_polyglot_array arr = Vector_Data (Proxy_Polyglot_Array.Proxy_Polyglot_Array_Data arr)
## The basic, immutable, vector type.
type Vector
type Vector a
## ADVANCED
@ -114,13 +114,13 @@ type Vector
A vector containing 50 elements, each being the number `42`, can be
created by:
Vector.fill length=50 item=42
type Vector storage
Vector_Data storage
## PRIVATE
to_array self =
arr = self.storage.to_array
case Meta.meta arr of
Meta.Primitive _ -> arr
Meta.Primitive_Data _ -> arr
_ ->
len = self.storage.length
a = Array.new len
@ -156,8 +156,8 @@ type Vector
at : Integer -> Any ! Index_Out_Of_Bounds_Error
at self index =
actual_index = if index < 0 then self.length + index else index
Panic.catch Invalid_Array_Index_Error (self.unsafe_at actual_index) _->
Error.throw (Index_Out_Of_Bounds_Error index self.length)
Panic.catch Invalid_Array_Index_Error_Data (self.unsafe_at actual_index) _->
Error.throw (Index_Out_Of_Bounds_Error_Data index self.length)
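`at` now panics internally with `Invalid_Array_Index_Error_Data` and surfaces `Index_Out_Of_Bounds_Error_Data` as a dataflow error. A sketch of the indexing behaviour implied by the code above:

```
[1, 2, 3].at 0      # first element
[1, 2, 3].at -1     # negative indices count from the end
[1, 2, 3].at 10     # dataflow error: Index_Out_Of_Bounds_Error_Data 10 3
```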
## ADVANCED
UNSTABLE
@ -240,7 +240,7 @@ type Vector
sum self =
result = Panic.recover Any <| self.reduce (+)
result.map_error x->case x of
No_Such_Method_Error _ _ -> x
No_Such_Method_Error_Data _ _ -> x
Empty_Error -> x
_ -> Panic.throw x
@ -393,12 +393,12 @@ type Vector
[1, 2, 3, 4, 5].partition (x -> x % 2 == 0) == (Pair [2, 4] [1, 3, 5])
partition : (Any -> Boolean) -> Pair (Vector Any) (Vector Any)
partition self predicate =
pair = self.fold (Pair new_builder new_builder) acc-> elem->
pair = self.fold (Pair_Data new_builder new_builder) acc-> elem->
case predicate elem of
True ->
Pair (acc.first.append elem) acc.second
Pair_Data (acc.first.append elem) acc.second
False ->
Pair acc.first (acc.second.append elem)
Pair_Data acc.first (acc.second.append elem)
pair.map .to_vector
## Partitions the vector into vectors of elements which satisfy a given
@ -421,10 +421,10 @@ type Vector
["a", "b", "c", "d"].partition_with_index (ix -> _ -> ix % 2 == 0) == (Pair ["a", "c"] ["b", "d"])
partition_with_index : (Integer -> Any -> Boolean) -> Pair (Vector Any) (Vector Any)
partition_with_index self predicate =
pair = self.fold_with_index (Pair new_builder new_builder) acc-> ix-> elem->
pair = self.fold_with_index (Pair_Data new_builder new_builder) acc-> ix-> elem->
case predicate ix elem of
True -> Pair (acc.first.append elem) acc.second
False -> Pair acc.first (acc.second.append elem)
True -> Pair_Data (acc.first.append elem) acc.second
False -> Pair_Data acc.first (acc.second.append elem)
pair.map .to_vector
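Although the accumulator now uses `Pair_Data` internally, the caller-facing behaviour is unchanged; per the docstring example earlier in this file:

```
[1, 2, 3, 4, 5].partition (x -> x % 2 == 0) == (Pair [2, 4] [1, 3, 5])
```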
## Applies a function to each element of the vector, returning the vector of
@ -471,7 +471,7 @@ type Vector
self.fold 0 i-> vec->
Array.copy vec.to_array 0 arr i vec.length
i + vec.length
Vector arr
Vector_Data arr
## Applies a function to each element of the vector, returning the vector
of results.
@ -561,7 +561,7 @@ type Vector
(0.up_to 100).to_vector.short_display_text max_entries=2 == "[0, 1 and 98 more elements]"
short_display_text : Integer -> Text
short_display_text self max_entries=10 =
if max_entries < 1 then Error.throw <| Illegal_Argument_Error "The `max_entries` parameter must be positive." else
if max_entries < 1 then Error.throw <| Illegal_Argument_Error_Data "The `max_entries` parameter must be positive." else
prefix = self.take (First max_entries)
if prefix.length == self.length then self.to_text else
remaining_count = self.length - prefix.length
@ -602,7 +602,7 @@ type Vector
arr = Array.new (self_len + that.length)
Array.copy self.to_array 0 arr 0 self_len
Array.copy that.to_array 0 arr self_len that.length
Vector arr
Vector_Data arr
## Add `element` to the beginning of `self` vector.
@ -667,7 +667,7 @@ type Vector
len = slice_end - slice_start
arr = Array.new len
Array.copy self.to_array slice_start arr 0 len
Vector arr
Vector_Data arr
## Creates a new vector with only the specified range of elements from the
input, removing any elements outside the range.
@ -820,7 +820,7 @@ type Vector
[1, 2, 3, 4].second
second : Vector ! Singleton_Error
second self = if self.length >= 2 then self.unsafe_at 1 else
Error.throw (Singleton_Error self)
Error.throw (Singleton_Error_Data self)
## Get all elements in the vector except the first.
@ -889,7 +889,7 @@ type Vector
new_vec_arr.sort compare
Vector new_vec_arr
Vector_Data new_vec_arr
## UNSTABLE
Keeps only unique elements within the Vector, removing any duplicates.
@ -982,7 +982,7 @@ type Builder
and get wrong error propagation. Instead we may want to have a `Ref`
inside of the Builder. Any error detected during `append` could set
that `Ref` and then `to_vector` could propagate that error.
type Builder java_builder
Builder_Data java_builder
## Creates a new builder.
@ -994,7 +994,7 @@ type Builder
Vector.new_builder
new : Integer->Builder
new (capacity=10) = Builder (ArrayList.new capacity)
new (capacity=10) = Builder_Data (ArrayList.new capacity)
## Checks if this builder is empty.
is_empty : Boolean
@ -1073,7 +1073,7 @@ type Builder
at self index =
actual_index = if index < 0 then self.length + index else index
Panic.catch IndexOutOfBoundsException (self.java_builder.get actual_index) _->
Error.throw (Index_Out_Of_Bounds_Error index self.length)
Error.throw (Index_Out_Of_Bounds_Error_Data index self.length)
## Checks whether a predicate holds for at least one element of this builder.
@ -1120,7 +1120,8 @@ Empty_Error.to_display_text self = "The vector is empty."
Arguments:
- vec: The vector that only has one element.
type Singleton_Error vec
type Singleton_Error
Singleton_Error_Data vec
## UNSTABLE
@ -1130,7 +1131,8 @@ Singleton_Error.to_display_text self =
"The vector " + self.vec.to_text + " has only one element."
## PRIVATE
type Partition_Accumulator true_builder false_builder ix
type Partition_Accumulator
Partition_Accumulator_Data true_builder false_builder ix
## UNSTABLE
@ -1143,7 +1145,7 @@ type Incomparable_Values_Error
Incomparable_Values_Error if any occur.
handle_incomparable_value ~function =
handle t = Panic.catch t handler=(Error.throw Incomparable_Values_Error)
handle No_Such_Method_Error <| handle Type_Error <| handle Unsupported_Argument_Types <| function
handle No_Such_Method_Error_Data <| handle Type_Error_Data <| handle Unsupported_Argument_Types_Data <| function
## PRIVATE
Creates a new vector where for each range, a corresponding section of the
@ -1156,7 +1158,7 @@ slice_ranges vector ranges =
if ranges.length != 1 then slice_many_ranges vector ranges else
case ranges.first of
Integer -> [vector.unsafe_at ranges.first]
Range start end step -> case step == 1 of
Range_Data start end step -> case step == 1 of
True -> vector.slice start end
False -> slice_many_ranges vector ranges
@ -1165,12 +1167,12 @@ slice_ranges vector ranges =
slice_many_ranges vector ranges =
new_length = ranges.fold 0 acc-> descriptor-> case descriptor of
Integer -> acc+1
Range _ _ _ -> acc+descriptor.length
Range_Data _ _ _ -> acc+descriptor.length
builder = new_builder new_length
ranges.each descriptor-> case descriptor of
Integer ->
builder.append (vector.unsafe_at descriptor)
Range start end step -> case step == 1 of
Range_Data start end step -> case step == 1 of
True ->
builder.append_vector_range vector start end
False ->
@ -3,23 +3,19 @@ import Standard.Base.Runtime
polyglot java import java.lang.IllegalArgumentException
## Dataflow errors.
## A type representing dataflow errors.
A dataflow error in Enso is one that behaves like a standard value, and
hence represents erroneous states in a way that exists _within_ standard
control flow.
? Dataflow Errors or Panics
Whilst a Panic is useful for unrecoverable situations, most Enso APIs
are designed to use dataflow errors instead. As they exist within the
normal program control flow, they are able to be represented on the
Enso graph.
@Builtin_Type
type Error
## A type representing dataflow errors.
A dataflow error in Enso is one that behaves like a standard value, and
hence represents erroneous states in a way that exists _within_ standard
control flow.
? Dataflow Errors or Panics
Whilst a Panic is useful for unrecoverable situations, most Enso APIs
are designed to use dataflow errors instead. As they exist within the
normal program control flow, they are able to be represented on the
Enso graph.
@Builtin_Type
type Error
## Creates a new dataflow error containing the provided payload.
Arguments:
@ -166,6 +162,13 @@ type Error
is_error : Boolean
is_error self = True
## PRIVATE
TODO this is a kludge until we have proper eigentypes and statics.
Allows to check equality of the `Error` type with itself.
== self that = if Meta.is_error self then self else
if Meta.is_error that then that else
Meta.is_same_object self that
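This kludge makes the bare `Error` type usable on either side of `==` while still propagating dataflow errors first. Roughly:

```
Error == Error              # True, via Meta.is_same_object
(Error.throw "e") == Error  # propagates the dataflow error instead of comparing
```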
type Illegal_State_Error
@ -178,7 +181,7 @@ type Illegal_State_Error
- message: the error message explaining why the operation cannot be
performed.
- cause: (optional) another error that is the cause of this one.
type Illegal_State_Error message cause=Nothing
Illegal_State_Error_Data message cause=Nothing
type Illegal_Argument_Error
@ -190,12 +193,12 @@ type Illegal_Argument_Error
Arguments:
- message: the error message explaining why the argument is illegal.
- cause: (optional) another error that is the cause of this one.
type Illegal_Argument_Error message cause=Nothing
Illegal_Argument_Error_Data message cause=Nothing
## PRIVATE
Capture a Java IllegalArgumentException and rethrow
handle_java_exception =
Panic.catch_java IllegalArgumentException handler=(cause-> Error.throw (Illegal_Argument_Error cause.getMessage cause))
Panic.catch_java IllegalArgumentException handler=(cause-> Error.throw (Illegal_Argument_Error_Data cause.getMessage cause))
## UNSTABLE
@ -204,7 +207,8 @@ type Illegal_Argument_Error
Arguments:
- index: The requested index.
- length: The length of the collection.
type Index_Out_Of_Bounds_Error index length
type Index_Out_Of_Bounds_Error
Index_Out_Of_Bounds_Error_Data index length
## UNSTABLE
@ -216,12 +220,14 @@ Index_Out_Of_Bounds_Error.to_display_text self =
## PRIVATE
Wraps a dataflow error lifted to a panic, making possible to distinguish it
from other panics.
type Wrapped_Dataflow_Error payload
type Wrapped_Dataflow_Error
Wrapped_Dataflow_Error_Data payload
## PRIVATE
Throws the original error.
Wrapped_Dataflow_Error.unwrap self = Error.throw self.payload
@Builtin_Type
type Caught_Panic
## A wrapper for a caught panic.
@ -231,8 +237,7 @@ type Caught_Panic
the source of this panic. Only for internal use. To get the Java exception
from polyglot exceptions, match the `payload` on `Polyglot_Error` and
extract the Java object from there.
@Builtin_Type
type Caught_Panic payload internal_original_exception
Caught_Panic_Data payload internal_original_exception
## Converts this caught panic into a dataflow error containing the same
payload and stack trace.
@ -244,22 +249,19 @@ type Caught_Panic
stack_trace self =
Panic.get_attached_stack_trace self
## Panics.
## A panic is an error condition that is based _outside_ of the normal
program control flow.
Panics "bubble up" through the program until they reach either an
invocation of Panic.recover Any or the program's main method. An unhandled
panic in main will terminate the program.
? Dataflow Errors or Panics
Panics are designed to be used for unrecoverable situations that need
to be handled through non-linear control flow mechanisms.
@Builtin_Type
type Panic
## A panic is an error condition that is based _outside_ of the normal
program control flow.
Panics "bubble up" through the program until they reach either an
invocation of Panic.recover Any or the program's main method. An unhandled
panic in main will terminate the program.
? Dataflow Errors or Panics
Panics are designed to be used for unrecoverable situations that need
to be handled through non-linear control flow mechanisms.
@Builtin_Type
type Panic
## Throws a new panic with the provided payload.
Arguments:
@ -316,10 +318,10 @@ type Panic
get_attached_stack_trace : Caught_Panic | Throwable -> Vector.Vector Runtime.Stack_Trace_Element
get_attached_stack_trace error =
throwable = case error of
Caught_Panic _ internal_original_exception -> internal_original_exception
Caught_Panic_Data _ internal_original_exception -> internal_original_exception
throwable -> throwable
prim_stack = Panic.primitive_get_attached_stack_trace throwable
stack_with_prims = Vector.Vector prim_stack
stack_with_prims = Vector.Vector_Data prim_stack
stack_with_prims.map Runtime.wrap_primitive_stack_trace_element
## Takes any value, and if it is a dataflow error, throws it as a Panic,
@ -383,7 +385,7 @@ type Panic
True -> handler caught_panic
False -> Panic.throw caught_panic
True -> case caught_panic.payload of
Polyglot_Error java_exception ->
Polyglot_Error_Data java_exception ->
case java_exception.is_a panic_type of
True -> handler caught_panic
False -> Panic.throw caught_panic
@ -413,7 +415,7 @@ type Panic
catch_java : Any -> Any -> (Throwable -> Any) -> Any
catch_java panic_type ~action handler =
Panic.catch_primitive action caught_panic-> case caught_panic.payload of
Polyglot_Error java_exception ->
Polyglot_Error_Data java_exception ->
case (panic_type == Any) || (java_exception.is_a panic_type) of
True -> handler java_exception
False -> Panic.throw caught_panic
@ -445,7 +447,7 @@ type Panic
recover : (Vector.Vector Any | Any) -> Any -> Any
recover expected_types ~action =
types_to_check = case expected_types of
Vector.Vector _ -> expected_types
Vector.Vector_Data _ -> expected_types
_ -> [expected_types]
Panic.catch Any action caught_panic->
is_matched = types_to_check.exists typ->
@ -460,7 +462,7 @@ type Panic
- value: value to return if not an error, or rethrow as a Panic.
throw_wrapped_if_error : Any -> Any
throw_wrapped_if_error ~value =
if value.is_error then Panic.throw (Wrapped_Dataflow_Error value.catch) else value
if value.is_error then Panic.throw (Wrapped_Dataflow_Error_Data value.catch) else value
## Catch any `Wrapped_Dataflow_Error` Panic and rethrow it as a dataflow error.
@ -468,7 +470,7 @@ type Panic
- action: The code to execute that potentially raised a Wrapped_Dataflow_Error.
handle_wrapped_dataflow_error : Any -> Any
handle_wrapped_dataflow_error ~action =
Panic.catch Wrapped_Dataflow_Error action caught_panic->
Panic.catch Wrapped_Dataflow_Error_Data action caught_panic->
Error.throw caught_panic.payload.payload
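`throw_wrapped_if_error` and `handle_wrapped_dataflow_error` are a matched pair: the first lifts a dataflow error into a `Wrapped_Dataflow_Error_Data` panic, the second unwraps it back into a dataflow error. A round-trip sketch, assuming both are invoked as `Panic.` methods as their placement here suggests:

```
result = Panic.handle_wrapped_dataflow_error <|
    v = Error.throw "boom"
    Panic.throw_wrapped_if_error v
# result is again a dataflow error whose payload is "boom"
```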
## The runtime representation of a syntax error.
@ -476,7 +478,8 @@ type Panic
Arguments:
- message: A description of the erroneous syntax.
@Builtin_Type
type Syntax_Error message
type Syntax_Error
Syntax_Error_Data message
## The runtime representation of a type error.
@ -485,21 +488,24 @@ type Syntax_Error message
- actual: The actual type at the error location.
- name: The name of the argument whose type is mismatched.
@Builtin_Type
type Type_Error expected actual name
type Type_Error
Type_Error_Data expected actual name
## The runtime representation of a compilation error.
Arguments:
- message: A description of the erroneous state.
@Builtin_Type
type Compile_Error message
type Compile_Error
Compile_Error_Data message
## The error thrown when a there is no pattern to match on the scrutinee.
Arguments:
- scrutinee: The scrutinee that failed to match.
@Builtin_Type
type Inexhaustive_Pattern_Match_Error scrutinee
type Inexhaustive_Pattern_Match_Error
Inexhaustive_Pattern_Match_Error_Data scrutinee
## The error thrown when the number of arguments provided to an operation
does not match the expected number of arguments.
@ -509,7 +515,8 @@ type Inexhaustive_Pattern_Match_Error scrutinee
- expected_max: the maximum expected number of arguments.
- actual: the actual number of arguments passed.
@Builtin_Type
type Arity_Error expected_min expected_max actual
type Arity_Error
Arity_Error_Data expected_min expected_max actual
## The error thrown when the program attempts to read from a state slot that has
not yet been initialized.
@ -517,7 +524,8 @@ type Arity_Error expected_min expected_max actual
Arguments:
- key: The key for the state slot that was not initialized.
@Builtin_Type
type Uninitialized_State key
type Uninitialized_State
Uninitialized_State_Data key
## The error thrown when the specified symbol does not exist as a method on
the target.
@ -526,7 +534,8 @@ type Uninitialized_State key
- target: The target on which the attempted method call was performed.
- symbol: The symbol that was attempted to be called on target.
@Builtin_Type
type No_Such_Method_Error target symbol
type No_Such_Method_Error
No_Such_Method_Error_Data target symbol
## ADVANCED
UNSTABLE
@ -551,7 +560,8 @@ No_Such_Method_Error.method_name self =
Arguments:
- cause: A polyglot object corresponding to the original error.
@Builtin_Type
type Polyglot_Error cause
type Polyglot_Error
Polyglot_Error_Data cause
## An error that occurs when the enso_project function is called in a file
that is not part of a project.
@ -563,7 +573,8 @@ type Module_Not_In_Package_Error
Arguments:
- message: A description of the error condition.
@Builtin_Type
type Arithmetic_Error message
type Arithmetic_Error
Arithmetic_Error_Data message
## An error that occurs when a program requests a read from an array index
that is out of bounds in the array.
@ -572,7 +583,8 @@ type Arithmetic_Error message
- array: The array in which the index was requested.
- index: The index that was out of bounds.
@Builtin_Type
type Invalid_Array_Index_Error array index
type Invalid_Array_Index_Error
Invalid_Array_Index_Error_Data array index
## An error that occurs when an object is used as a function in a function
call, but it cannot be called.
@ -580,7 +592,8 @@ type Invalid_Array_Index_Error array index
Arguments:
- target: The called object.
@Builtin_Type
type Not_Invokable_Error target
type Not_Invokable_Error
Not_Invokable_Error_Data target
## An error that occurs when arguments used in a function call are invalid
types for the function.
@ -588,14 +601,16 @@ type Not_Invokable_Error target
Arguments:
- arguments: The passed arguments.
@Builtin_Type
type Unsupported_Argument_Types arguments
type Unsupported_Argument_Types
Unsupported_Argument_Types_Data arguments
## An error that occurs when the specified module cannot be found.
Arguments:
- name: The module searched for.
@Builtin_Type
type Module_Does_Not_Exist name
type Module_Does_Not_Exist
Module_Does_Not_Exist_Data name
## An error that occurs when the specified value cannot be converted to a given type
## FIXME: please check
@ -603,7 +618,8 @@ type Module_Does_Not_Exist name
Arguments:
- target: ...
@Builtin_Type
type Invalid_Conversion_Target_Error target
type Invalid_Conversion_Target_Error
Invalid_Conversion_Target_Error_Data target
## An error that occurs when the conversion from one type to another does not exist
## FIXME: please check
@ -614,6 +630,7 @@ type Invalid_Conversion_Target_Error target
- conversion: ...
@Builtin_Type
type No_Such_Conversion_Error
No_Such_Conversion_Error_Data target that conversion
## UNSTABLE
@ -621,7 +638,8 @@ type No_Such_Conversion_Error
Arguments:
- message: The message describing what implementation is missing.
type Unimplemented_Error message
type Unimplemented_Error
Unimplemented_Error_Data message
## UNSTABLE
@ -644,7 +662,7 @@ Unimplemented_Error.to_display_text self = "An implementation is missing: " + se
example_unimplemented = Errors.unimplemented
unimplemented : Text -> Void
unimplemented message="" = Panic.throw (Unimplemented_Error message)
unimplemented message="" = Panic.throw (Unimplemented_Error_Data message)
type Time_Error
@ -654,4 +672,4 @@ type Time_Error
Arguments:
- error_message: The message for the error.
type Time_Error error_message
Time_Error_Data error_message

View File

@ -6,15 +6,15 @@ import Standard.Base.Warning
type Problem_Behavior
## UNSTABLE
Ignore the problem and attempt to complete the operation
type Ignore
Ignore
## UNSTABLE
Report the problem as a warning and attempt to complete the operation
type Report_Warning
Report_Warning
## UNSTABLE
Report the problem as a dataflow error and abort the operation
type Report_Error
Report_Error
## ADVANCED
UNSTABLE

View File

@ -1,15 +1,12 @@
import Standard.Base.Data.Vector
# Function types.
## A function is any type that represents a not-yet evaluated computation.
Methods are represented as functions with dynamic dispatch semantics on
the this argument.
@Builtin_Type
type Function
## A function is any type that represents a not-yet evaluated computation.
Methods are represented as functions with dynamic dispatch semantics on
the this argument.
@Builtin_Type
type Function
## An identity function which returns the provided argument.

View File

@ -87,7 +87,7 @@ from project.Data.Boolean export all
from project.Data.List export Nil, Cons, List
from project.Data.Numbers export all hiding Math, String, Double, Parse_Error
from project.Data.Noise export all hiding Noise
from project.Data.Pair export Pair
from project.Data.Pair export Pair, Pair_Data
from project.Data.Range export all
## TODO [RW] Once autoscoping is implemented or automatic imports for ADTs are
fixed in the IDE, we should revisit if we want to export ADTs like `Case` by
@ -97,9 +97,9 @@ from project.Data.Range export all
https://www.pivotaltracker.com/story/show/181403340
https://www.pivotaltracker.com/story/show/181309938
from project.Data.Text.Extensions export Text, Line_Ending_Style, Case, Location, Matching_Mode
from project.Data.Text.Matching export Case_Insensitive, Text_Matcher, Regex_Matcher, No_Matches_Found
from project.Data.Text.Matching export Case_Insensitive_Data, Text_Matcher_Data, Regex_Matcher_Data, No_Matches_Found_Data
from project.Data.Text export all hiding Encoding, Span, Text_Ordering
from project.Data.Text.Encoding export Encoding, Encoding_Error
from project.Data.Text.Encoding export Encoding, Encoding_Error, Encoding_Error_Data
from project.Data.Text.Text_Ordering export all
from project.Data.Text.Span export all
from project.Error.Common export all

View File

@ -3,67 +3,62 @@ from Standard.Base import all
## UNSTABLE
ADVANCED
A meta-representation of a runtime value.
An Atom meta-representation.
! Warning
The functionality contained in this module exposes certain implementation
details of the language. As such, the API has no stability guarantees and
is subject to change as the Enso interpreter evolves.
type Meta
Arguments:
- value: The value of the atom in the meta representation.
type Atom
Atom_Data value
## UNSTABLE
ADVANCED
## UNSTABLE
ADVANCED
An Atom meta-representation.
A constructor meta-representation.
Arguments:
- value: The value of the atom in the meta representation.
type Atom value
Arguments:
- value: The value of the constructor in the meta representation.
type Constructor
Constructor_Data value
## UNSTABLE
ADVANCED
## UNSTABLE
ADVANCED
A constructor meta-representation.
A primitive value meta-representation.
Arguments:
- value: The value of the constructor in the meta representation.
type Constructor value
Arguments:
- value: The value of the primitive object in the meta representation.
type Primitive
Primitive_Data value
## UNSTABLE
ADVANCED
## UNSTABLE
ADVANCED
A primitive value meta-representation.
An unresolved symbol meta-representation.
Arguments:
- value: The value of the primitive object in the meta representation.
type Primitive value
Arguments:
- value: The value of the unresolved symbol in the meta representation.
type Unresolved_Symbol
Unresolved_Symbol_Data value
## UNSTABLE
ADVANCED
## UNSTABLE
ADVANCED
An unresolved symbol meta-representation.
An error meta-representation, containing the payload of a dataflow error.
Arguments:
- value: The value of the unresolved symbol in the meta representation.
type Unresolved_Symbol value
Arguments:
- value: The payload of the error.
type Error
Error_Data value
## UNSTABLE
ADVANCED
## UNSTABLE
ADVANCED
An error meta-representation, containing the payload of a dataflow error.
A polyglot value meta-representation.
Arguments:
- value: The payload of the error.
type Error value
## UNSTABLE
ADVANCED
A polyglot value meta-representation.
Arguments:
- value: The polyglot value contained in the meta representation.
type Polyglot value
Arguments:
- value: The polyglot value contained in the meta representation.
type Polyglot
Polyglot_Data value
## Atom methods
@ -90,7 +85,7 @@ get_atom_fields atom = @Builtin_Method "Meta.get_atom_fields"
Returns a vector of field values of the given atom.
Atom.fields : Vector.Vector
Atom.fields self = Vector.Vector (get_atom_fields self.value)
Atom.fields self = Vector.Vector_Data (get_atom_fields self.value)
## UNSTABLE
ADVANCED
@ -208,7 +203,7 @@ new_atom constructor fields = @Builtin_Method "Meta.new_atom"
Returns a vector of field names defined by a constructor.
Constructor.fields : Vector.Vector
Constructor.fields self = Vector.Vector (get_constructor_fields self.value)
Constructor.fields self = Vector.Vector_Data (get_constructor_fields self.value)
## UNSTABLE
ADVANCED
@ -237,12 +232,12 @@ Constructor.new self fields = new_atom self.value fields.to_array
Arguments:
- value: The runtime entity to get the meta representation of.
meta : Any -> Meta
meta value = if is_atom value then Atom value else
if is_atom_constructor value then Constructor value else
if is_polyglot value then Polyglot value else
if is_unresolved_symbol value then Unresolved_Symbol value else
if is_error value then Error value.catch else
Primitive value
meta value = if is_atom value then Atom_Data value else
if is_atom_constructor value then Constructor_Data value else
if is_polyglot value then Polyglot_Data value else
if is_unresolved_symbol value then Unresolved_Symbol_Data value else
if is_error value then Error_Data value.catch else
Primitive_Data value
## UNSTABLE
ADVANCED
@ -304,30 +299,38 @@ Base.Error.is_an self typ = typ==Any || typ==Base.Error
- value: The value to check for being an instance of `typ`.
- typ: The type to check `self` against.
is_a : Any -> Any -> Boolean
is_a value typ = if typ == Any then True else
if is_error value then typ == Base.Error else
case value of
Array -> typ == Array
Boolean -> if typ == Boolean then True else value == typ
Text -> typ == Text
Number -> if typ == Number then True else case value of
Integer -> typ == Integer
Decimal -> typ == Decimal
Base.Polyglot -> typ == Base.Polyglot
_ ->
meta_val = meta value
case meta_val of
Atom _ -> if is_atom typ then typ == value else
meta_val.constructor == typ
Constructor _ ->
meta_typ = meta typ
case meta_typ of
Atom _ -> meta_val == meta_typ.constructor
Constructor _ -> meta_val == meta_typ
_ -> False
Error _ -> typ == Error
Unresolved_Symbol _ -> typ == Unresolved_Symbol
_ -> False
is_a value typ = if is_same_object value typ then True else
if typ == Any then True else
if is_error value then typ == Base.Error else
case value of
Array -> typ == Array
Boolean -> if typ == Boolean then True else value == typ
Text -> typ == Text
Number -> if typ == Number then True else case value of
Integer -> typ == Integer
Decimal -> typ == Decimal
Base.Polyglot ->
typ==Base.Polyglot || java_instance_check value typ
_ ->
meta_val = meta value
case meta_val of
Atom_Data _ -> if is_atom typ then typ == value else
meta_val.constructor == typ
Constructor_Data _ ->
meta_typ = meta typ
case meta_typ of
Atom_Data _ -> meta_val == meta_typ.constructor
Constructor_Data _ -> meta_val == meta_typ
_ -> False
Error_Data _ -> typ == Error
Unresolved_Symbol_Data _ -> typ == Unresolved_Symbol
_ -> False
## PRIVATE
java_instance_check value typ =
val_java = get_polyglot_language value == "java"
typ_java = get_polyglot_language typ == "java"
val_java && typ_java && Base.Java.is_instance value typ
## UNSTABLE
ADVANCED
@ -347,13 +350,13 @@ type Language
ADVANCED
The Java language.
type Java
Java
## UNSTABLE
ADVANCED
An unknown language.
type Unknown
Unknown
## PRIVATE

View File

@ -1,6 +1,7 @@
import Standard.Base.System.File
## Functionality for inspecting the current project.
@Builtin_Type
type Project_Description
## A representation of an Enso project.
@ -8,8 +9,7 @@ type Project_Description
Arguments:
- prim_root_file: The primitive root file of the project.
- prim_config: The primitive config of the project.
@Builtin_Type
type Project_Description prim_root_file prim_config
Project_Description_Data prim_root_file prim_config
## Returns the root directory of the project.

View File

@ -49,7 +49,7 @@ polyglot java import org.enso.base.Http_Utils
Http.new (timeout = 30.seconds) (proxy = Proxy.new "example.com" 8080)
new : Duration -> Boolean -> Proxy -> Http
new (timeout = 10.seconds) (follow_redirects = True) (proxy = Proxy.System) (version = Version.Http_1_1) =
Http timeout follow_redirects proxy version
Http_Data timeout follow_redirects proxy version
## Send an Options request.
@ -291,7 +291,7 @@ type Http
- follow_redirects: Whether or not the client should follow redirects.
- proxy: The proxy that the client should use, if any.
- version: The HTTP version supported by the client.
type Http timeout follow_redirects proxy version
Http_Data timeout follow_redirects proxy version
## Send an Options request.
@ -596,7 +596,7 @@ type Http
request : Request -> Response ! Request_Error
request self req =
handle_request_error =
Panic.catch_java Any handler=(err-> Error.throw (Request_Error 'IllegalArgumentException' err.getMessage))
Panic.catch_java Any handler=(err-> Error.throw (Request_Error_Data 'IllegalArgumentException' err.getMessage))
Panic.recover Any <| handle_request_error <|
body_publishers = HttpRequest.BodyPublishers
builder = HttpRequest.newBuilder
@ -605,14 +605,14 @@ type Http
# prepare headers and body
req_with_body = case req.body of
Request_Body.Empty ->
Pair req body_publishers.noBody
Pair_Data req body_publishers.noBody
Request_Body.Text text ->
builder.header Header.text_plain.name Header.text_plain.value
Pair req (body_publishers.ofString text)
Pair_Data req (body_publishers.ofString text)
Request_Body.Json json ->
builder.header Header.application_json.name Header.application_json.value
json_body = if json.is_a Text then json else json.to_text
Pair req (body_publishers.ofString json_body)
Pair_Data req (body_publishers.ofString json_body)
Request_Body.Form form ->
add_multipart form =
body_builder = Http_Utils.multipart_body_builder
@ -620,18 +620,18 @@ type Http
Form.Part_Text text -> body_builder.add_part_text part.key text
Form.Part_File file -> body_builder.add_part_file part.key file.path
boundary = body_builder.get_boundary
Pair (req.with_headers [Header.multipart_form_data boundary]) body_builder.build
Pair_Data (req.with_headers [Header.multipart_form_data boundary]) body_builder.build
add_urlencoded form =
body_builder = Http_Utils.urlencoded_body_builder
form.parts.map part-> case part.value of
Form.Part_Text text -> body_builder.add_part_text part.key text
Form.Part_File file -> body_builder.add_part_file part.key file.path
Pair req body_builder.build
Pair_Data req body_builder.build
if req.headers.contains Header.multipart_form_data then add_multipart form else
add_urlencoded form
Request_Body.Bytes bytes ->
builder.header Header.application_octet_stream.name Header.application_octet_stream.value
Pair req (body_publishers.ofByteArray bytes.to_array)
Pair_Data req (body_publishers.ofByteArray bytes.to_array)
# method
req_http_method = case req.method of
Method.Options -> "OPTIONS"
@ -643,14 +643,14 @@ type Http
Method.Trace -> "TRACE"
Method.Connect -> "CONNECT"
case req_with_body of
Pair req body ->
Pair_Data req body ->
# set method and body
builder.method req_http_method body
# set headers
req.headers.map h-> builder.header h.name h.value
http_request = builder.build
body_handler = HttpResponse.BodyHandlers . ofByteArray
Response.Response (self.internal_http_client.send http_request body_handler)
Response.Response_Data (self.internal_http_client.send http_request body_handler)
## PRIVATE
@ -659,7 +659,7 @@ type Http
internal_http_client self =
builder = HttpClient.newBuilder
# timeout
if self.timeout.is_date then Panic.throw (Time_Error "Connection timeout does not support date intervals") else
if self.timeout.is_date then Panic.throw (Time_Error_Data "Connection timeout does not support date intervals") else
builder.connectTimeout self.timeout.internal_duration
# redirect
redirect = HttpClient.Redirect
@ -693,7 +693,8 @@ type Http
Arguments:
- error_type: The type of the error.
- message: The message for the error.
type Request_Error error_type message
type Request_Error
Request_Error_Data error_type message
## UNSTABLE

View File

@ -12,7 +12,7 @@ from Standard.Base import all
example_form_new = Form.new (Form.text_field "foo" "bar")
new : Vector.Vector -> Form
new parts = Form parts
new parts = Form_Data parts
# Helpers for creating different parts of the form.
@ -29,7 +29,7 @@ new parts = Form parts
example_text_field = Form.text_field "Foo" "bar"
text_field : Text -> Text -> Part
text_field key val = Part key (Part_Text val)
text_field key val = Part_Data key (Part_Text val)
## Create a file field of a Form.
@ -44,7 +44,7 @@ text_field key val = Part key (Part_Text val)
example_text_field = Form.file_field "Foo" "My file contents"
file_field : Text -> Text -> Part
file_field key file = Part key (Part_File file)
file_field key file = Part_Data key (Part_File file)
## The HTTP form containing a vector of parts.
type Form
@ -55,7 +55,7 @@ type Form
Arguments:
- parts: A vector of form segments.
type Form parts
Form_Data parts
## Convert this to a Form.
@ -79,7 +79,7 @@ type Form
part_1 = Form.text_field "Foo" "bar"
part_2 = Form.text_field "Baz" "quux"
[part_1, part_2].to_form
Vector.Vector.to_form self = Form self
Vector.Vector.to_form self = Form_Data self
## The key-value element of the form.
type Part
@ -89,7 +89,7 @@ type Part
Arguments:
- key: The key for the form section.
- value: The value of the form section.
type Part key value
Part_Data key value
## The value of the form element.
type Part_Value
@ -98,10 +98,10 @@ type Part_Value
Arguments:
- part_text: The text for the form part.
type Part_Text part_text
Part_Text part_text
## A file value for a form part.
Arguments:
- part_file: The file for the form part.
type Part_File part_file
Part_File part_file

View File

@ -17,7 +17,7 @@ polyglot java import org.enso.base.Http_Utils
example_new = Header.new "My_Header" "my header's value"
new : Text -> Text -> Header
new name value = Header name value
new name value = Header_Data name value
# Accept
@ -33,7 +33,7 @@ new name value = Header name value
example_accept = Header.accept "my_field"
accept : Text -> Header
accept value = Header "Accept" value
accept value = Header_Data "Accept" value
## Create a header that accepts all (`"*/*"`).
@ -60,7 +60,7 @@ accept_all = accept "*/*"
example_auth = Header.authorization "foo"
authorization : Text -> Header
authorization value = Header "Authorization" value
authorization value = Header_Data "Authorization" value
## Create HTTP basic auth header.
@ -92,7 +92,7 @@ authorization_basic user pass =
example_content_type = Header.content_type "my_type"
content_type : Text -> Header
content_type value = Header "Content-Type" value
content_type value = Header_Data "Content-Type" value
## Header "Content-Type: application/json".
@ -163,7 +163,7 @@ type Header
Arguments:
- name: The header name.
- value: The header value.
type Header name value
Header_Data name value
## Header equality.

View File

@ -1,25 +1,25 @@
type Method
## The HTTP method "OPTIONS".
type Options
Options
## The HTTP method "GET".
type Get
Get
## The HTTP method "HEAD".
type Head
Head
## The HTTP method "POST".
type Post
Post
## The HTTP method "PUT".
type Put
Put
## The HTTP method "DELETE".
type Delete
Delete
## The HTTP method "TRACE".
type Trace
Trace
## The HTTP method "CONNECT".
type Connect
Connect

View File

@ -24,7 +24,7 @@ import Standard.Base.Network.URI
example_new = Request.new Method.Post (URI.parse "http://example.com")
new : Method -> (Text | URI) -> Vector.Vector -> Request_Body -> Request
new method addr (headers = []) (body = Request_Body.Empty) =
Panic.recover Any (Request method (Panic.rethrow (addr.to_uri)) headers body)
Panic.recover Any (Request_Data method (Panic.rethrow (addr.to_uri)) headers body)
## Create an Options request.
@ -135,7 +135,7 @@ type Request
- uri: The URI for the request.
- headers: A vector containing headers for the request.
- body: The body of the request.
type Request method uri headers body
Request_Data method uri headers body
## Sets the header for the request.
@ -153,13 +153,13 @@ type Request
with_header self key val =
new_header = Header.new key val
update_header p h = case p of
Pair acc True -> Pair (acc + [h]) True
Pair acc False ->
if h.name . equals_ignore_case key then Pair (acc + [new_header]) True else Pair (acc + [h]) False
new_headers = case self.headers.fold (Pair [] False) update_header of
Pair acc True -> acc
Pair acc False -> acc + [new_header]
Request self.method self.uri new_headers self.body
Pair_Data acc True -> Pair_Data (acc + [h]) True
Pair_Data acc False ->
if h.name . equals_ignore_case key then Pair_Data (acc + [new_header]) True else Pair_Data (acc + [h]) False
new_headers = case self.headers.fold (Pair_Data [] False) update_header of
Pair_Data acc True -> acc
Pair_Data acc False -> acc + [new_header]
Request_Data self.method self.uri new_headers self.body
## Sets the headers in the request.
@ -193,7 +193,7 @@ type Request
example_with_body =
Request.post (URI.parse "http://example.com") Request_Body.Empty |> _.with_body Request_Body.Empty
with_body : Request_Body -> Request
with_body self new_body = Request self.method self.uri self.headers new_body
with_body self new_body = Request_Data self.method self.uri self.headers new_body
## Set the body text in the request encoded as "application/json".
@ -213,7 +213,7 @@ type Request
with_json : (Text | Json) -> Request
with_json self json_body =
new_body = Request_Body.Json json_body
Request self.method self.uri self.headers new_body . with_headers [Header.application_json]
Request_Data self.method self.uri self.headers new_body . with_headers [Header.application_json]
## Set body as vector of parts encoded as "application/x-www-form-urlencoded".
@ -231,4 +231,4 @@ type Request
with_form : (Vector | Form) -> Request
with_form self parts =
new_body = Request_Body.Form parts.to_form
Request self.method self.uri self.headers new_body . with_headers [Header.application_x_www_form_urlencoded]
Request_Data self.method self.uri self.headers new_body . with_headers [Header.application_x_www_form_urlencoded]

View File

@ -4,34 +4,34 @@ from Standard.Base import all
type Body
## Empty request body.
type Empty
Empty
## Request body with text.
Arguments:
- text: The plain text in the request body.
type Text text
Text text
## Request body with JSON.
Arguments:
- json: The JSON in the request body.
type Json json
Json json
## Request body with form data.
Arguments:
- form: The form data in the request body.
type Form form
Form form
## Request body with file data.
Arguments:
- file: The file data in the request body.
type File file
File file
## Request body with binary.
Arguments:
- bytes: The binary data in the request body.
type Bytes bytes
Bytes bytes

View File

@ -15,7 +15,7 @@ type Response
Arguments:
- internal_http_response: The internal representation of the HTTP
response.
type Response internal_http_response
Response_Data internal_http_response
## Get the response headers.
@ -41,7 +41,7 @@ type Response
example_body = Examples.get_response.body
body : Response_Body
body self = Response_Body.Body (Vector.from_polyglot_array self.internal_http_response.body)
body self = Response_Body.Body_Data (Vector.from_polyglot_array self.internal_http_response.body)
## Get the response status code.
@ -53,7 +53,7 @@ type Response
example_code = Examples.get_response.code
code : Status_Code
code self = Status_Code.Status_Code self.internal_http_response.statusCode
code self = Status_Code.Status_Code_Data self.internal_http_response.statusCode
## Convert the response to JSON.

View File

@ -6,7 +6,7 @@ type Body
Arguments:
- bytes: The body of the response as binary data.
type Body bytes
Body_Data bytes
## Convert response body to Text.

View File

@ -6,164 +6,164 @@ type Status_Code
Arguments:
- code: The numeric representation of the code.
type Status_Code code
Status_Code_Data code
## 100 Continue.
continue : Status_Code
continue = Status_Code 100
continue = Status_Code_Data 100
## 101 Switching Protocols.
switching_protocols : Status_Code
switching_protocols = Status_Code 101
switching_protocols = Status_Code_Data 101
## 200 OK.
ok : Status_Code
ok = Status_Code 200
ok = Status_Code_Data 200
## 201 Created.
created : Status_Code
created = Status_Code 201
created = Status_Code_Data 201
## 202 Accepted.
accepted : Status_Code
accepted = Status_Code 202
accepted = Status_Code_Data 202
## 203 Non-Authoritative Information.
non_authoritative_information : Status_Code
non_authoritative_information = Status_Code 203
non_authoritative_information = Status_Code_Data 203
## 204 No Content.
no_content : Status_Code
no_content = Status_Code 204
no_content = Status_Code_Data 204
## 205 Reset Content.
reset_content : Status_Code
reset_content = Status_Code 205
reset_content = Status_Code_Data 205
## 206 Partial Content.
partial_content : Status_Code
partial_content = Status_Code 206
partial_content = Status_Code_Data 206
## 300 Multiple Choices.
multiple_choices : Status_Code
multiple_choices = Status_Code 300
multiple_choices = Status_Code_Data 300
## 301 Moved Permanently.
moved_permanently : Status_Code
moved_permanently = Status_Code 301
moved_permanently = Status_Code_Data 301
## 302 Found.
found : Status_Code
found = Status_Code 302
found = Status_Code_Data 302
## 303 See Other.
see_other : Status_Code
see_other = Status_Code 303
see_other = Status_Code_Data 303
## 304 Not Modified.
not_modified : Status_Code
not_modified = Status_Code 304
not_modified = Status_Code_Data 304
## 305 Use Proxy.
use_proxy : Status_Code
use_proxy = Status_Code 305
use_proxy = Status_Code_Data 305
## 307 Temporary Redirect.
temporary_redirect : Status_Code
temporary_redirect = Status_Code 307
temporary_redirect = Status_Code_Data 307
## 400 Bad Request.
bad_request : Status_Code
bad_request = Status_Code 400
bad_request = Status_Code_Data 400
## 401 Unauthorized.
unauthorized : Status_Code
unauthorized = Status_Code 401
unauthorized = Status_Code_Data 401
## 402 Payment Required.
payment_required : Status_Code
payment_required = Status_Code 402
payment_required = Status_Code_Data 402
## 403 Forbidden.
forbidden : Status_Code
forbidden = Status_Code 403
forbidden = Status_Code_Data 403
## 404 Not Found.
not_found : Status_Code
not_found = Status_Code 404
not_found = Status_Code_Data 404
## 405 Method Not Allowed.
method_not_allowed : Status_Code
method_not_allowed = Status_Code 405
method_not_allowed = Status_Code_Data 405
## 406 Not Acceptable.
not_acceptable : Status_Code
not_acceptable = Status_Code 406
not_acceptable = Status_Code_Data 406
## 407 Proxy Authentication Required.
proxy_authentication_required : Status_Code
proxy_authentication_required = Status_Code 407
proxy_authentication_required = Status_Code_Data 407
## 408 Request Timeout.
request_timeout : Status_Code
request_timeout = Status_Code 408
request_timeout = Status_Code_Data 408
## 409 Conflict.
conflict : Status_Code
conflict = Status_Code 409
conflict = Status_Code_Data 409
## 410 Gone.
gone : Status_Code
gone = Status_Code 410
gone = Status_Code_Data 410
## 411 Length Required.
length_required : Status_Code
length_required = Status_Code 411
length_required = Status_Code_Data 411
## 412 Precondition Failed.
precondition_failed : Status_Code
precondition_failed = Status_Code 412
precondition_failed = Status_Code_Data 412
## 413 Request Entity Too Large.
request_entity_too_large : Status_Code
request_entity_too_large = Status_Code 413
request_entity_too_large = Status_Code_Data 413
## 414 Request-URI Too Long.
request_uri_too_long : Status_Code
request_uri_too_long = Status_Code 414
request_uri_too_long = Status_Code_Data 414
## 415 Unsupported Media Type.
unsupported_media_type : Status_Code
unsupported_media_type = Status_Code 415
unsupported_media_type = Status_Code_Data 415
## 416 Requested Range Not Satisfiable.
requested_range_not_satisfiable : Status_Code
requested_range_not_satisfiable = Status_Code 416
requested_range_not_satisfiable = Status_Code_Data 416
## 417 Expectation Failed.
expectation_failed : Status_Code
expectation_failed = Status_Code 417
expectation_failed = Status_Code_Data 417
## 500 Internal Server Error.
internal_server_error : Status_Code
internal_server_error = Status_Code 500
internal_server_error = Status_Code_Data 500
## 501 Not Implemented.
not_implemented : Status_Code
not_implemented = Status_Code 501
not_implemented = Status_Code_Data 501
## 502 Bad Gateway.
bad_gateway : Status_Code
bad_gateway = Status_Code 502
bad_gateway = Status_Code_Data 502
## 503 Service Unavailable.
service_unavailable : Status_Code
service_unavailable = Status_Code 503
service_unavailable = Status_Code_Data 503
## 504 Gateway Timeout
gateway_timeout : Status_Code
gateway_timeout = Status_Code 504
gateway_timeout = Status_Code_Data 504
## 505 HTTP Version Not Supported.
http_version_not_supported : Status_Code
http_version_not_supported = Status_Code 505
http_version_not_supported = Status_Code_Data 505

View File

@ -1,7 +1,7 @@
type Version
## HTTP version 1.1.
type Http_1_1
Http_1_1
## HTTP version 2.
type Http_2
Http_2

View File

@ -4,13 +4,13 @@ from Standard.Base import all
type Proxy
## The proxy is disabled.
type None
None
## Use the system proxy settings.
type System
System
## Use the provided proxy server.
type Proxy_Addr proxy_host proxy_port
Proxy_Addr proxy_host proxy_port
## Create new proxy settings from a host and port.

View File

@ -22,8 +22,8 @@ polyglot java import java.util.Optional
example_parse = URI.parse "http://example.com"
parse : Text -> URI ! Syntax_Error
parse text =
Panic.catch_java Any (URI (Java_URI.create text)) java_exception->
Error.throw (Syntax_Error ("URI syntax error: " + java_exception.getMessage))
Panic.catch_java Any (URI_Data (Java_URI.create text)) java_exception->
Error.throw (Syntax_Error_Data ("URI syntax error: " + java_exception.getMessage))
## Convert Text to a URI.
@ -46,7 +46,7 @@ type URI
Arguments:
- internal_uri: The internal representation of the URI.
type URI internal_uri
URI_Data internal_uri
## Convert this to URI.

View File

@ -1,15 +1,12 @@
from Standard.Base import Boolean, True
## The type that has only a singleton value. Nothing in Enso is used as a
universal value to indicate the absence of a value.
It is often used alongside a value of type a to provide a Maybe or
Option abstraction.
@Builtin_Type
type Nothing
## The type that has only a singleton value. Nothing in Enso is used as a
universal value to indicate the absence of a value.
It is often used alongside a value of type a to provide a Maybe or
Option abstraction. The type a | Nothing is semantically equivalent to
Maybe a.
@Builtin_Type
type Nothing
## Checks if the type is an instance of `Nothing`.
> Example


@ -1,13 +1,10 @@
## Generic utilities for interacting with other languages.
## A type representing interactions with polyglot languages.
Polyglot is a term that refers to other languages (such as Java) that are
running on the same JVM.
@Builtin_Type
type Polyglot
## A type representing interactions with polyglot languages.
Polyglot is a term that refers to other languages (such as Java) that are
running on the same JVM.
@Builtin_Type
type Polyglot
## Reads the number of elements in a given polyglot array object.
Arguments:


@ -1,9 +1,5 @@
## Utilities for working with Java polyglot objects.
type Java
## A type for operations specific to Java polyglot objects.
type Java
## Adds the provided entry to the host class path.
Arguments:


@ -5,7 +5,7 @@ from Standard.Base import Polyglot, Array
Wrapper for Polyglot Arrays
type Proxy_Polyglot_Array
type Proxy_Polyglot_Array arr
Proxy_Polyglot_Array_Data arr
## Returns the number of elements stored in this Polyglot Array.


@ -14,11 +14,11 @@ get_default_seed = System.nano_time
## Constructs a new random number generator.
new : Integer -> Random_Number_Generator
new seed=get_default_seed =
Random_Number_Generator (Java_Random.new seed)
Random_Number_Generator_Data (Java_Random.new seed)
type Random_Number_Generator
## A random number generator.
type Random_Number_Generator java_random
Random_Number_Generator_Data java_random
## Returns a new vector containing a random sample of the input vector, without
replacement.
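The `_Data` suffix introduced here is the interim convention from the commit message for types whose single constructor previously shared the type's name. A hypothetical sketch (names illustrative, not from the stdlib):

```
type Point
    ## The sole constructor carries the `_Data` suffix until proper
       statics arrive; its fields become accessor methods on the value.
    Point_Data x y

origin = Point_Data 0 0
sum_of_coords = origin.x + origin.y
```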


@ -2,7 +2,7 @@ import Standard.Base.Data.Vector
from Standard.Base.Data.Index_Sub_Range import First
import Standard.Base.Polyglot
import Standard.Base.Nothing
from Standard.Base.Runtime.Extensions import Source_Location
from Standard.Base.Runtime.Extensions import Source_Location, Source_Location_Data
## Utilities for interacting with the runtime.
@ -21,7 +21,7 @@ primitive_get_stack_trace = @Builtin_Method "Runtime.primitive_get_stack_trace"
get_stack_trace : Vector.Vector Stack_Trace_Element
get_stack_trace =
prim_stack = primitive_get_stack_trace
stack_with_prims = Vector.Vector prim_stack
stack_with_prims = Vector.Vector_Data prim_stack
stack = stack_with_prims.map wrap_primitive_stack_trace_element
# drop this frame and the one from `Runtime.primitive_get_stack_trace`
stack.drop (First 2)
@ -80,9 +80,9 @@ no_inline_with_arg function arg = @Builtin_Method "Runtime.no_inline_with_arg"
## PRIVATE
Converts a primitive stack trace element into the regular one.
wrap_primitive_stack_trace_element el =
loc = if Polyglot.has_source_location el then (Source_Location (Polyglot.get_source_location el)) else Nothing
loc = if Polyglot.has_source_location el then Source_Location_Data (Polyglot.get_source_location el) else Nothing
name = Polyglot.get_executable_name el
Stack_Trace_Element name loc
Stack_Trace_Element_Data name loc
## ADVANCED
UNSTABLE
@ -90,4 +90,4 @@ wrap_primitive_stack_trace_element el =
Represents a single stack frame in an Enso stack trace.
type Stack_Trace_Element
## PRIVATE
type Stack_Trace_Element name source_location
Stack_Trace_Element_Data name source_location


@ -6,7 +6,7 @@ from Standard.Base import all
source file and code position within it.
type Source_Location
## PRIVATE
type Source_Location prim_location
Source_Location_Data prim_location
## UNSTABLE
Pretty prints the location.


@ -1,10 +1,6 @@
## Utilities for working with mutable references.
## A mutable reference type.
@Builtin_Type
type Ref
## A mutable reference type.
@Builtin_Type
type Ref
## Gets the contents of this mutable reference ref.
> Example


@ -19,17 +19,14 @@
bracket : Any -> (Any -> Nothing) -> (Any -> Any) -> Any
bracket ~constructor ~destructor ~action = @Builtin_Method "Resource.bracket"
## An API for automatic resource management.
## A managed resource is a special type of resource that is subject to
automated cleanup when it is no longer in use.
This API is intended for use by developers to provide easy-to-use
abstractions, and is not expected to be used by end-users.
@Builtin_Type
type Managed_Resource
## A managed resource is a special type of resource that is subject to
automated cleanup when it is no longer in use.
This API is intended for use by developers to provide easy-to-use
abstractions, and is not expected to be used by end-users.
@Builtin_Type
type Managed_Resource
## ADVANCED
Registers a resource with the resource manager to be cleaned up using


@ -65,4 +65,5 @@ default_line_separator = Java_System.lineSeparator
- stdout: Any values printed to standard out by the child process.
- stderr: Any values printed to standard error by the child process.
@Builtin_Type
type System_Process_Result exit_code stdout stderr
type System_Process_Result
System_Process_Result_Data exit_code stdout stderr


@ -40,7 +40,7 @@ new path =
case path of
Text -> get_file path
File -> path
_ -> Error.throw (Illegal_Argument_Error "new file should be either a File or a Text")
_ -> Error.throw (Illegal_Argument_Error_Data "new file should be either a File or a Text")
## Opens and reads all bytes in the file at the provided `path` into a byte vector.
@ -169,14 +169,9 @@ list : (File | Text) -> Text -> Boolean -> Vector.Vector File
list directory name_filter=Nothing recursive=False =
new directory . list name_filter=name_filter recursive=recursive
@Builtin_Type
type File
## PRIVATE
A type representing a file.
@Builtin_Type
type File
## Creates a new output stream for this file and runs the specified action
on it.
@ -616,7 +611,7 @@ type File
opts = open_options . map (_.to_java) . to_array
stream = handle_java_exceptions self (self.input_stream opts)
resource = Managed_Resource.register stream close_stream
Input_Stream self resource
Input_Stream_Data self resource
## ADVANCED
@ -634,7 +629,7 @@ type File
stream = handle_java_exceptions self <|
self.output_stream opts
resource = Managed_Resource.register stream close_stream
Output_Stream self resource
Output_Stream_Data self resource
## PRIVATE
@ -733,7 +728,7 @@ type File
Utility function that lists immediate children of a directory.
list_immediate_children : Vector.Vector File
list_immediate_children self = Vector.Vector (self.list_immediate_children_array)
list_immediate_children self = Vector.Vector_Data (self.list_immediate_children_array)
## PRIVATE
@ -760,7 +755,7 @@ type Output_Stream
- file: The file which the output stream will write into.
- stream_resource: The internal resource that represents the underlying
stream.
type Output_Stream file stream_resource
Output_Stream_Data file stream_resource
## ADVANCED
@ -834,7 +829,7 @@ type Output_Stream
replacement_sequence = Encoding_Utils.INVALID_CHARACTER.bytes encoding on_problems=Problem_Behavior.Ignore
java_charset = encoding.to_java_charset
results = Encoding_Utils.with_stream_encoder java_stream java_charset replacement_sequence.to_array action
problems = Vector.from_polyglot_array results.problems . map Encoding_Error
problems = Vector.from_polyglot_array results.problems . map Encoding_Error_Data
on_problems.attach_problems_after results.result problems
## An input stream, allowing for interactive reading of contents from an open
@ -850,7 +845,7 @@ type Input_Stream
- file: The file from which the stream will read.
- stream_resource: The internal resource that represents the underlying
stream.
type Input_Stream file stream_resource
Input_Stream_Data file stream_resource
## ADVANCED
@ -902,7 +897,7 @@ type Input_Stream
read_n_bytes self n = self.stream_resource . with java_stream->
handle_java_exceptions self.file <|
bytes = java_stream.readNBytes n
Vector.Vector bytes
Vector.Vector_Data bytes
## ADVANCED
@ -968,7 +963,7 @@ type Input_Stream
with_stream_decoder self encoding on_problems action = self.stream_resource . with java_stream->
java_charset = encoding.to_java_charset
results = Encoding_Utils.with_stream_decoder java_stream java_charset action
problems = Vector.Vector results.problems . map Encoding_Error
problems = Vector.Vector_Data results.problems . map Encoding_Error_Data
on_problems.attach_problems_after results.result problems
## PRIVATE
@ -1002,17 +997,17 @@ type File_Error
Arguments:
- file: The file that doesn't exist.
type File_Not_Found file
File_Not_Found file
## Indicates that a destination file already exists.
type File_Already_Exists_Error file
File_Already_Exists_Error file
## A generic IO error.
Arguments:
- file: The file that couldn't be read.
- message: The message for the error.
type IO_Error file message
IO_Error file message
## UNSTABLE
@ -1113,7 +1108,7 @@ Text.write self path encoding=Encoding.utf_8 on_existing_file=Existing_File_Beha
[36, -62, -93, -62, -89, -30, -126, -84, -62, -94].write_bytes Examples.scratch_file Existing_File_Behavior.Append
Vector.Vector.write_bytes : (File|Text) -> Existing_File_Behavior -> Nothing ! Illegal_Argument_Error | File_Not_Found | IO_Error | File_Already_Exists_Error
Vector.Vector.write_bytes self path on_existing_file=Existing_File_Behavior.Backup =
Panic.catch Unsupported_Argument_Types handler=(Error.throw (Illegal_Argument_Error "Only Vectors consisting of bytes (integers in the range from -128 to 127) are supported by the `write_bytes` method.")) <|
Panic.catch Unsupported_Argument_Types_Data handler=(Error.throw (Illegal_Argument_Error_Data "Only Vectors consisting of bytes (integers in the range from -128 to 127) are supported by the `write_bytes` method.")) <|
## Convert to a byte array before writing - and fail early if there is any problem.
byte_array = Array_Utils.ensureByteArray self.to_array


@ -11,21 +11,21 @@ type Existing_File_Behavior
Note: There is a risk of data loss if a failure occurs during the write
operation.
type Overwrite
Overwrite
## Creates a backup of the existing file (by appending a `.bak` suffix to
the name) before replacing it with the new contents.
Note: This requires sufficient storage to have two copies of the file.
If an existing `.bak` file exists, it will be replaced.
type Backup
Backup
## Appends data to the existing file.
type Append
Append
## If the file already exists, a `File_Already_Exists_Error` error is
raised.
type Error
Error
## PRIVATE
Runs the `action` which is given a file output stream and should write
@ -50,7 +50,7 @@ type Existing_File_Behavior
handle_write_failure_dataflow caught_panic =
Common.Error.throw caught_panic.payload.cause
handle_file_already_exists = Panic.catch File_Already_Exists_Error handler=handle_existing_file
handle_internal_dataflow = Panic.catch Internal_Write_Operation_Errored handler=handle_write_failure_dataflow
handle_internal_dataflow = Panic.catch Internal_Write_Operation_Errored_Data handler=handle_write_failure_dataflow
## We first attempt to write the file to the original
destination, but if that fails due to the file already
existing, we will run the alternative algorithm which uses a
@ -58,7 +58,7 @@ type Existing_File_Behavior
handle_file_already_exists <| handle_internal_dataflow <|
Panic.rethrow <| file.with_output_stream [Option.Write, Option.Create_New] output_stream->
action output_stream . catch Any dataflow_error->
Panic.throw (Internal_Write_Operation_Errored dataflow_error)
Panic.throw (Internal_Write_Operation_Errored_Data dataflow_error)
## PRIVATE
write_file_backing_up_old_one : File -> (Output_Stream -> Nothing) -> Nothing ! File_Not_Found | IO_Error | File_Already_Exists_Error
@ -80,15 +80,15 @@ write_file_backing_up_old_one file action = Panic.recover [IO_Error, File_Not_Fo
new_file.delete
Common.Error.throw caught_panic.payload.cause
handle_file_already_exists = Panic.catch File_Already_Exists_Error handler=handle_existing_file
handle_internal_dataflow = Panic.catch Internal_Write_Operation_Errored handler=handle_write_failure_dataflow
handle_internal_panic = Panic.catch Internal_Write_Operation_Panicked handler=handle_write_failure_panic
handle_internal_dataflow = Panic.catch Internal_Write_Operation_Errored_Data handler=handle_write_failure_dataflow
handle_internal_panic = Panic.catch Internal_Write_Operation_Panicked_Data handler=handle_write_failure_panic
handle_file_already_exists <| handle_internal_dataflow <| handle_internal_panic <|
Panic.rethrow <|
new_file.with_output_stream [Option.Write, Option.Create_New] output_stream->
result = Panic.catch Any (action output_stream) caught_panic->
Panic.throw (Internal_Write_Operation_Panicked caught_panic)
Panic.throw (Internal_Write_Operation_Panicked_Data caught_panic)
result.catch Any dataflow_error->
Panic.throw (Internal_Write_Operation_Errored dataflow_error)
Panic.throw (Internal_Write_Operation_Errored_Data dataflow_error)
## We ignore the file not found error, because it means that there
is no file to back up. This may also be caused by someone
removing the original file during the time when we have been
@ -102,7 +102,9 @@ write_file_backing_up_old_one file action = Panic.recover [IO_Error, File_Not_Fo
## PRIVATE
type Internal_Write_Operation_Panicked (cause : Caught_Panic)
type Internal_Write_Operation_Panicked
Internal_Write_Operation_Panicked_Data (cause : Caught_Panic)
## PRIVATE
type Internal_Write_Operation_Errored (cause : Any)
type Internal_Write_Operation_Errored
Internal_Write_Operation_Errored_Data (cause : Any)


@ -4,17 +4,17 @@ polyglot java import java.nio.file.attribute.PosixFilePermission
type Permission
## Permission for read access for a given entity.
type Read
Read
## Permission for write access for a given entity.
type Write
Write
## Permission for execute access for a given entity.
type Execute
Execute
type File_Permissions
## Access permissions for a file.
type File_Permissions (owner : Vector Permission) (group : Vector Permission) (others : Vector Permission)
File_Permissions_Data (owner : Vector Permission) (group : Vector Permission) (others : Vector Permission)
## Converts the Enso atom to its Java enum counterpart.
to_java : Vector PosixFilePermission
@ -101,4 +101,4 @@ from_java_set java_set =
if java_set.contains PosixFilePermission.OTHERS_EXECUTE then
others.append Execute
File_Permissions owner.to_vector group.to_vector others.to_vector
File_Permissions_Data owner.to_vector group.to_vector others.to_vector


@ -11,37 +11,37 @@ type Option
## If the file is opened for `Write` access then bytes will be written to
the end of the file rather than the beginning.
type Append
Append
## Create a new file if it does not exist.
type Create
Create
## Create a new file, failing if the file already exists.
type Create_New
Create_New
## Delete the underlying file on closing the stream.
type Delete_On_Close
Delete_On_Close
## Requires that every update to the file's content be written
synchronously to the underlying storage device.
type Dsync
Dsync
## Open for read access.
type Read
Read
## Sparse file.
type Sparse
Sparse
## Requires that every update to the file's content or metadata be written
synchronously to the underlying storage device.
type Sync
Sync
## If the file already exists and is opened for `Write` access,
the original contents will be removed.
type Truncate_Existing
Truncate_Existing
## Open file for write access.
type Write
Write
## PRIVATE


@ -4,16 +4,16 @@ import Standard.Base.System
type Os
## The Linux operating system.
type Linux
Linux
## The macOS operating system.
type Mac_OS
Mac_OS
## The Windows operating system.
type Windows
Windows
## An unknown operating system.
type Unknown
Unknown
## Return the type of operating system.


@ -40,7 +40,7 @@ run command arguments=[] =
example_new_builder = Process.new_builder "echo"
new_builder : Text -> Vector Text -> Text -> Builder
new_builder command arguments=[] stdin="" = Builder command arguments stdin
new_builder command arguments=[] stdin="" = Builder_Data command arguments stdin
## UNSTABLE
@ -60,7 +60,7 @@ type Builder
We recommend that you use this type with its builder interface. Start
by creating a `Builder "command"` and then call functions on it to
set arguments and standard output. It results in much clearer code.
type Builder command arguments stdin
Builder_Data command arguments stdin
## UNSTABLE
@ -78,7 +78,7 @@ type Builder
builder = Process.new_builder "echo"
builder.set_arguments ["hello, world!"]
set_arguments : Vector.Vector Text -> Builder
set_arguments self arguments = Builder self.command arguments self.stdin
set_arguments self arguments = Builder_Data self.command arguments self.stdin
## UNSTABLE
@ -97,7 +97,7 @@ type Builder
builder = Process.new_builder "echo"
builder.set_stdin "hello, world!"
set_stdin : Text -> Builder
set_stdin self stdin = Builder self.command self.arguments stdin
set_stdin self stdin = Builder_Data self.command self.arguments stdin
## UNSTABLE
@ -115,7 +115,7 @@ type Builder
create : Result
create self =
result = System.create_process self.command self.arguments.to_array self.stdin redirect_in=False redirect_out=False redirect_err=False
Result (Exit_Code.from_number result.exit_code) result.stdout result.stderr
Result_Data (Exit_Code.from_number result.exit_code) result.stdout result.stderr
## UNSTABLE
@ -125,4 +125,5 @@ type Builder
- exit_code: The exit code for the process.
- stdout: The contents of the process' standard output.
- stderr: The contents of the process' standard error.
type Result exit_code stdout stderr
type Result
Result_Data exit_code stdout stderr
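As the doc comment above recommends, the builder is meant to be used fluently; a hypothetical usage sketch of the renamed API (the command and arguments are illustrative):

```
## Each setter returns an updated copy built with the `Builder_Data`
   constructor, so calls can be chained before spawning the process.
result = Process.new_builder "echo" . set_arguments ["hello, world!"] . create
exit_code = result.exit_code
output = result.stdout
```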


@ -4,13 +4,13 @@ from Standard.Base import all
type Exit_Code
## The process exited with a success.
type Exit_Success
Exit_Success
## The process exited with a failure.
Arguments:
- code: The exit code for the failure.
type Exit_Failure code
Exit_Failure code
## Convert exit code to a number.


@ -1,15 +1,10 @@
from Standard.Base import all
from Standard.Base.Data.Index_Sub_Range import While
from Standard.Base.Runtime import Stack_Trace_Element
from Standard.Base.Runtime import Stack_Trace_Element_Data
## A representation of a dataflow warning attached to a value.
@Builtin_Type
type Warning
## PRIVATE
The constructor to wrap primitive warnings.
@Builtin_Type
type Warning
## UNSTABLE
Returns the warning value usually its explanation or other contents.
@ -41,11 +36,11 @@ type Warning
nature is preserved.
reassignments : Vector.Vector Stack_Trace_Element
reassignments self =
Vector.Vector self.get_reassignments . map r->
Vector.Vector_Data self.get_reassignments . map r->
loc = case Polyglot.has_source_location r of
False -> Nothing
True -> Source_Location (Polyglot.get_source_location r)
Stack_Trace_Element (Polyglot.get_executable_name r) loc
True -> Source_Location_Data (Polyglot.get_source_location r)
Stack_Trace_Element_Data (Polyglot.get_executable_name r) loc
## PRIVATE
@ -83,7 +78,7 @@ attach_with_stacktrace value warning origin = @Builtin_Method "Warning.attach_wi
Gets all the warnings attached to the given value. Warnings are returned in the
reverse-chronological order with respect to their attachment time.
get_all : Any -> Vector.Vector Warning
get_all value = Vector.Vector (get_all_array value)
get_all value = Vector.Vector_Data (get_all_array value)
## PRIVATE
@ -206,7 +201,7 @@ detach_selected_warnings value predicate =
result = warnings.partition w-> predicate w.value
matched = result.first
remaining = result.second
Pair (set remaining value) matched
Pair_Data (set remaining value) matched
## UNSTABLE
A helper function which gathers warnings matching some predicate and passes


@ -7,7 +7,7 @@ type Client_Certificate
- cert_file: path to the client certificate file.
- key_file: path to the client key file.
- key_password: password for the client key file.
type Client_Certificate cert_file:(File|Text) key_file:(File|Text) (key_password:Text='')
Client_Certificate_Data cert_file:(File|Text) key_file:(File|Text) (key_password:Text='')
## PRIVATE
Creates the JDBC properties for the client certificate.
@ -18,5 +18,5 @@ type Client_Certificate
- sslpass: password for the client key file.
properties : Vector
properties self =
base = [Pair 'sslcert' (File.new self.cert_file).absolute.path, Pair 'sslkey' (File.new self.key_file).absolute.path]
if self.key_password == "" then base else base + [Pair 'sslpassword' self.key_password]
base = [Pair_Data 'sslcert' (File.new self.cert_file).absolute.path, Pair_Data 'sslkey' (File.new self.key_file).absolute.path]
if self.key_password == "" then base else base + [Pair_Data 'sslpassword' self.key_password]


@ -11,7 +11,7 @@ import Standard.Table.Data.Table as Materialized_Table
import Standard.Table.Data.Storage
import Standard.Table.Internal.Java_Exports
import Standard.Database.Data.Internal.Base_Generator
from Standard.Database.Data.Sql import Sql_Type
from Standard.Database.Data.Sql import Sql_Type, Sql_Type_Data
polyglot java import java.lang.UnsupportedOperationException
polyglot java import java.util.ArrayList
@ -35,7 +35,7 @@ type Connection
- dialect: the dialect associated with the database we are connected to.
Allows accessing tables from a database.
type Connection connection_resource dialect
Connection_Data connection_resource dialect
## UNSTABLE
@ -95,7 +95,7 @@ type Connection
Vector.new ncols ix->
typeid = metadata.getColumnType ix+1
name = metadata.getColumnTypeName ix+1
Sql_Type typeid name
Sql_Type_Data typeid name
column_builders = column_types.map typ->
create_builder typ
go has_next = if has_next.not then Nothing else
@ -141,7 +141,7 @@ type Connection
case query of
Text -> go query []
Sql.Statement _ ->
Sql.Statement_Data _ ->
compiled = query.prepare
go compiled.first compiled.second
@ -165,7 +165,7 @@ type Connection
name = metadata.getColumnName ix+1
typeid = metadata.getColumnType ix+1
typename = metadata.getColumnTypeName ix+1
[name, Sql_Type typeid typename]
[name, Sql_Type_Data typeid typename]
Vector.new ncols resolve_column
## PRIVATE
@ -186,7 +186,7 @@ type Connection
- batch_size: Specifies how many rows should be uploaded in a single
batch.
upload_table : Text -> Materialized_Table -> Boolean -> Integer -> Database_Table
upload_table self name table temporary=True batch_size=1000 = Panic.recover Illegal_State_Error <| handle_sql_errors <|
upload_table self name table temporary=True batch_size=1000 = Panic.recover Illegal_State_Error_Data <| handle_sql_errors <|
column_types = table.columns.map col-> default_storage_type col.storage_type
column_names = table.columns.map .name
col_makers = column_names.zip column_types name-> typ->
@ -210,10 +210,10 @@ type Connection
columns = table.columns
check_rows updates_polyglot_array expected_size =
updates = Vector.from_polyglot_array updates_polyglot_array
if updates.length != expected_size then Panic.throw <| Illegal_State_Error "The batch update unexpectedly affected "+updates.length.to_text+" rows instead of "+expected_size.to_text+"." else
if updates.length != expected_size then Panic.throw <| Illegal_State_Error_Data "The batch update unexpectedly affected "+updates.length.to_text+" rows instead of "+expected_size.to_text+"." else
updates.each affected_rows->
if affected_rows != 1 then
Panic.throw <| Illegal_State_Error "A single update within the batch unexpectedly affected "+affected_rows.to_text+" rows."
Panic.throw <| Illegal_State_Error_Data "A single update within the batch unexpectedly affected "+affected_rows.to_text+" rows."
0.up_to num_rows . each row_id->
values = columns.map col-> col.at row_id
holes = values.zip db_types
@ -248,7 +248,7 @@ type Builder
Arguments:
- java_builder: The underlying builder object.
type Builder_Inferred java_builder
Builder_Inferred java_builder
## PRIVATE
@ -256,7 +256,7 @@ type Builder
Arguments:
- java_builder: The underlying builder object.
type Builder_Double java_builder
Builder_Double java_builder
## PRIVATE
@ -264,7 +264,7 @@ type Builder
Arguments:
- java_builder: The underlying builder object.
type Builder_Long java_builder
Builder_Long java_builder
## PRIVATE
@ -272,7 +272,7 @@ type Builder
Arguments:
- java_builder: The underlying builder object.
type Builder_Boolean java_builder
Builder_Boolean java_builder
## PRIVATE
@ -321,7 +321,8 @@ type Builder
Argument:
- url: The URL for which the dialect could not be deduced.
type Unsupported_Dialect url
type Unsupported_Dialect
Unsupported_Dialect_Data url
## Pretty print the error about unsupported SQL dialects.
Unsupported_Dialect.to_display_text : Text
@ -344,7 +345,7 @@ create_jdbc_connection url properties dialect = handle_sql_errors <|
java_props.setProperty pair.first pair.second
java_connection = JDBCProxy.getConnection url java_props
resource = Managed_Resource.register java_connection close_connection
Connection resource dialect
Connection_Data resource dialect
## PRIVATE
@ -367,7 +368,7 @@ type Sql_Error
- java_exception: The underlying exception.
- related_query (optional): A string representation of a query that this
error is related to.
type Sql_Error java_exception related_query=Nothing
Sql_Error_Data java_exception related_query=Nothing
## UNSTABLE
@ -393,7 +394,7 @@ type Sql_Timeout_Error
- java_exception: The underlying exception.
- related_query (optional): A string representation of a query that this
error is related to.
type Sql_Timeout_Error java_exception related_query=Nothing
Sql_Timeout_Error_Data java_exception related_query=Nothing
## UNSTABLE
@ -419,7 +420,7 @@ type Sql_Timeout_Error
- action: The computation to execute. This computation may throw SQL errors.
handle_sql_errors : Any -> (Text | Nothing) -> Any ! (Sql_Error | Sql_Timeout_Error)
handle_sql_errors ~action related_query=Nothing =
Panic.recover [Sql_Error, Sql_Timeout_Error] <|
Panic.recover [Sql_Error_Data, Sql_Timeout_Error_Data] <|
wrap_sql_errors action related_query
## PRIVATE
@ -436,8 +437,8 @@ wrap_sql_errors ~action related_query=Nothing =
Panic.catch SQLException action caught_panic->
exc = caught_panic.payload.cause
case Java.is_instance exc SQLTimeoutException of
True -> Panic.throw (Sql_Timeout_Error exc related_query)
False -> Panic.throw (Sql_Error exc related_query)
True -> Panic.throw (Sql_Timeout_Error_Data exc related_query)
False -> Panic.throw (Sql_Error_Data exc related_query)
## PRIVATE
Returns the default database type corresponding to an in-memory storage


@ -2,7 +2,7 @@ from Standard.Base import all
type Connection_Options
## Hold a set of key value pairs used to configure the connection.
type Connection_Options options:Vector=[]
Connection_Options_Data options:Vector=[]
## Merge the base set of options with the overrides in this object.
merge : Vector -> Vector


@ -2,7 +2,7 @@ from Standard.Base import all
type Credentials
## Simple username and password type.
type Credentials username:Text password:Text
Credentials_Data username:Text password:Text
## Override `to_text` to mask the password field.
to_text : Text


@ -1,6 +1,6 @@
from Standard.Base import all
from Standard.Database.Connection.Connection_Options import Connection_Options
from Standard.Database.Connection.Connection_Options import Connection_Options, Connection_Options_Data
import Standard.Database.Connection.Postgres
import Standard.Database.Connection.SQLite
@ -16,5 +16,5 @@ from Standard.Database.Connection.Connection import Connection, Sql_Error
- details: Connection_Details to use to connect.
- options: Any overriding options to use.
connect : (Postgres|SQLite|Redshift) -> Connection_Options -> Connection ! Sql_Error
connect details options=Connection_Options =
connect details options=Connection_Options_Data =
details.connect options


@ -1,10 +1,10 @@
from Standard.Base import all
from Standard.Base.Data.Numbers import Parse_Error
from Standard.Base.Data.Numbers import Parse_Error_Data
import Standard.Database.Data.Dialect
import Standard.Database.Connection.Connection
from Standard.Database.Connection.Credentials import Credentials
from Standard.Database.Connection.Credentials import Credentials_Data, Credentials
import Standard.Database.Connection.Connection_Options
import Standard.Database.Connection.SSL_Mode
from Standard.Database.Connection.SSL_Mode import all
@ -23,7 +23,7 @@ type Postgres
- credentials: The credentials to use for the connection (defaults to PGPass or No Authentication).
- use_ssl: Whether to use SSL (defaults to `Prefer`).
- client_cert: The client certificate to use or `Nothing` if not needed.
type Postgres (host:Text=default_postgres_host) (port:Integer=default_postgres_port) (database:Text=default_postgres_database) (credentials:(Credentials|Nothing)=Nothing) (use_ssl:SSL_Mode=Prefer) (client_cert:(Client_Certificate|Nothing)=Nothing)
Postgres_Data (host:Text=default_postgres_host) (port:Integer=default_postgres_port) (database:Text=default_postgres_database) (credentials:(Credentials|Nothing)=Nothing) (use_ssl:SSL_Mode=Prefer) (client_cert:(Client_Certificate|Nothing)=Nothing)
## Build the Connection resource.
@ -48,17 +48,17 @@ type Postgres
Nothing ->
env_user = Environment.get "PGUSER"
env_password = Environment.get "PGPASSWORD"
case Pair env_user env_password of
Pair Nothing Nothing ->
case Pair_Data env_user env_password of
Pair_Data Nothing Nothing ->
Pgpass.read self.host self.port self.database
Pair Nothing _ ->
Error.throw (Illegal_State_Error "PGPASSWORD is set, but PGUSER is not.")
Pair username Nothing ->
Pair_Data Nothing _ ->
Error.throw (Illegal_State_Error_Data "PGPASSWORD is set, but PGUSER is not.")
Pair_Data username Nothing ->
Pgpass.read self.host self.port self.database username
Pair username password ->
[Pair 'user' username, Pair 'password' password]
Credentials username password ->
[Pair 'user' username, Pair 'password' password]
Pair_Data username password ->
[Pair_Data 'user' username, Pair_Data 'password' password]
Credentials_Data username password ->
[Pair_Data 'user' username, Pair_Data 'password' password]
ssl_properties = ssl_mode_to_jdbc_properties self.use_ssl
@ -77,14 +77,14 @@ type Postgres
ssl_mode_to_jdbc_properties : SSL_Mode -> [Pair Text Text]
ssl_mode_to_jdbc_properties use_ssl = case use_ssl of
Disable -> []
Prefer -> [Pair 'sslmode' 'prefer']
Require -> [Pair 'sslmode' 'require']
Prefer -> [Pair_Data 'sslmode' 'prefer']
Require -> [Pair_Data 'sslmode' 'require']
Verify_CA cert_file ->
if cert_file.is_nothing then [Pair 'sslmode' 'verify-ca'] else
[Pair 'sslmode' 'verify-ca', Pair 'sslrootcert' (File.new cert_file).absolute.path]
if cert_file.is_nothing then [Pair_Data 'sslmode' 'verify-ca'] else
[Pair_Data 'sslmode' 'verify-ca', Pair_Data 'sslrootcert' (File.new cert_file).absolute.path]
Full_Verification cert_file ->
if cert_file.is_nothing then [Pair 'sslmode' 'verify-full'] else
[Pair 'sslmode' 'verify-full', Pair 'sslrootcert' (File.new cert_file).absolute.path]
if cert_file.is_nothing then [Pair_Data 'sslmode' 'verify-full'] else
[Pair_Data 'sslmode' 'verify-full', Pair_Data 'sslrootcert' (File.new cert_file).absolute.path]
## PRIVATE
default_postgres_host = Environment.get_or_else "PGHOST" "localhost"
@ -94,7 +94,7 @@ default_postgres_port =
hardcoded_port = 5432
case Environment.get "PGPORT" of
Nothing -> hardcoded_port
port -> Integer.parse port . catch Parse_Error (_->hardcoded_port)
port -> Integer.parse port . catch Parse_Error_Data (_->hardcoded_port)
## PRIVATE
default_postgres_database = Environment.get_or_else "PGDATABASE" ""


@ -2,7 +2,7 @@ from Standard.Base import all
import Standard.Database.Data.Dialect
import Standard.Database.Connection.Connection
from Standard.Database.Connection.Credentials import Credentials
from Standard.Database.Connection.Credentials import Credentials, Credentials_Data
import Standard.Database.Connection.Connection_Options
import Standard.Database.Connection.SSL_Mode
from Standard.Database.Connection.SSL_Mode import all
@ -23,7 +23,7 @@ type Redshift
- credentials: The credentials to use for the connection (defaults to PGPass or No Authentication).
- use_ssl: Whether to use SSL (defaults to `Require`).
- client_cert: The client certificate to use or `Nothing` if not needed.
type Redshift (host:Text) (port:Integer=5439) (schema:Text='') (credentials:Credentials|AWS_Profile|AWS_Key|Nothing=Nothing) (use_ssl:(Disable|Require|Verify_CA|Full_Verification)=Require) (client_cert:Client_Certificate|Nothing=Nothing)
Redshift_Data (host:Text) (port:Integer=5439) (schema:Text='') (credentials:Credentials|AWS_Credential|Nothing=Nothing) (use_ssl:(Disable|Require|Verify_CA|Full_Verification)=Require) (client_cert:Client_Certificate|Nothing=Nothing)
## Build the Connection resource.
@@ -57,7 +57,7 @@ type Redshift
[Pair 'user' db_user] + (if profile == '' then [] else [Pair 'profile' profile])
AWS_Key db_user access_key secret_access_key ->
[Pair 'user' db_user, Pair 'AccessKeyID' access_key, Pair 'SecretAccessKey' secret_access_key]
Credentials username password ->
Credentials_Data username password ->
[Pair 'user' username, Pair 'password' password]
## Disabled as Redshift SSL settings are different to PostgreSQL.
@@ -72,13 +72,13 @@ type Redshift
dialect : Dialect
dialect self = Dialect.redshift
type AWS_Profile
type AWS_Credential
## Access Redshift using IAM via an AWS profile.
Arguments:
- db_user: Redshift username to connect as.
- profile: AWS profile name (if empty uses default).
type AWS_Profile db_user:Text profile:Text=''
AWS_Profile db_user:Text profile:Text=''
## Access Redshift using IAM via an AWS access key ID and secret access key.
@@ -87,4 +87,4 @@ type AWS_Profile
- db_user: Redshift username to connect as.
- access_key: AWS access key ID.
- secret_access_key: AWS secret access key.
type AWS_Key db_user:Text access_key:Text secret_access_key:Text
AWS_Key db_user:Text access_key:Text secret_access_key:Text


@@ -6,7 +6,7 @@ import Standard.Database.Connection.Connection_Options
## Connect to a SQLite DB File or InMemory DB.
type SQLite
type SQLite (location:(In_Memory|File|Text))
SQLite_Data (location:(In_Memory|File|Text))
## Build the Connection resource.
connect : Connection_Options
@@ -30,4 +30,3 @@ type SQLite
## Connect to an in-memory SQLite database.
type In_Memory
type In_Memory


@@ -2,18 +2,18 @@ from Standard.Base import all
type SSL_Mode
## Do not use SSL for the connection.
type Disable
Disable
## Prefer SSL for the connection, but does not verify the server certificate.
type Prefer
Prefer
## Will use SSL but does not verify the server certificate.
type Require
Require
## Will use SSL, validating the certificate but not verifying the hostname.
If `ca_file` is `Nothing`, the default CA certificate store will be used.
type Verify_CA ca_file:Nothing|File|Text=Nothing
Verify_CA ca_file:Nothing|File|Text=Nothing
## Will use SSL, validating the certificate and checking the hostname matches.
If `ca_file` is `Nothing`, the default CA certificate store will be used.
type Full_Verification ca_file:Nothing|File|Text=Nothing
Full_Verification ca_file:Nothing|File|Text=Nothing


@@ -33,7 +33,7 @@ type Column
# type Column (name : Text) (connection : Connection)
# (sql_type : Sql_Type) (expression : IR.Expression)
# (context : IR.Context)
type Column name connection sql_type expression context
Column_Data name connection sql_type expression context
## UNSTABLE
@@ -68,7 +68,7 @@ type Column
Converts this column into a single-column table.
to_table : Table.Table
to_table self =
Table.Table self.name self.connection [self.as_internal] self.context
Table.Table_Data self.name self.connection [self.as_internal] self.context
## UNSTABLE
@@ -118,18 +118,18 @@ type Column
make_binary_op self op_kind operand new_type=Nothing operand_type=Nothing =
actual_new_type = new_type.if_nothing self.sql_type
case operand of
Column _ _ _ other_expr _ ->
Column_Data _ _ _ other_expr _ ->
case Helpers.check_integrity self operand of
False ->
Error.throw <| Unsupported_Database_Operation_Error "Cannot compare columns coming from different contexts. Only columns of a single table can be compared."
True ->
new_expr = IR.Operation op_kind [self.expression, other_expr]
Column self.name self.connection actual_new_type new_expr self.context
Column_Data self.name self.connection actual_new_type new_expr self.context
_ ->
actual_operand_type = operand_type.if_nothing self.sql_type
other = IR.make_constant actual_operand_type operand
new_expr = IR.Operation op_kind [self.expression, other]
Column self.name self.connection actual_new_type new_expr self.context
Column_Data self.name self.connection actual_new_type new_expr self.context
## PRIVATE
@@ -143,7 +143,7 @@ type Column
make_unary_op self op_kind new_type=Nothing =
actual_new_type = new_type.if_nothing self.sql_type
new_expr = IR.Operation op_kind [self.expression]
Column self.name self.connection actual_new_type new_expr self.context
Column_Data self.name self.connection actual_new_type new_expr self.context
## UNSTABLE
@@ -421,7 +421,7 @@ type Column
True ->
new_filters = self.context.where_filters + [filter.expression]
new_ctx = self.context.set_where_filters new_filters
Column self.name self.connection self.sql_type self.expression new_ctx
Column_Data self.name self.connection self.sql_type self.expression new_ctx
## UNSTABLE
@@ -440,9 +440,9 @@ type Column
True ->
is_used_in_index = self.context.meta_index.exists i-> i.name == new_name
case is_used_in_index of
True -> Error.throw <| Illegal_State_Error "Cannot rename the column to "+new_name+", because it has an index with the same name."
True -> Error.throw <| Illegal_State_Error_Data "Cannot rename the column to "+new_name+", because it has an index with the same name."
False ->
Column new_name self.connection self.sql_type self.expression self.context
Column_Data new_name self.connection self.sql_type self.expression self.context
## UNSTABLE
@@ -534,7 +534,7 @@ type Column
## PRIVATE
as_internal : IR.Internal_Column
as_internal self = IR.Internal_Column self.name self.sql_type self.expression
as_internal self = IR.Internal_Column_Data self.name self.sql_type self.expression
type Aggregate_Column_Builder
@@ -553,7 +553,7 @@ type Aggregate_Column_Builder
# type Aggregate_Column_Builder (name : Text) (connection : Connection)
# (sql_type : Sql_Type) (expression : IR.Expression)
# (context : IR.Context)
type Aggregate_Column_Builder name connection sql_type expression context
Aggregate_Column_Builder_Data name connection sql_type expression context
## UNSTABLE
@@ -616,7 +616,7 @@ type Aggregate_Column_Builder
ungrouped : Column
ungrouped self =
new_ctx = self.context.set_groups []
Column self.name self.connection self.sql_type self.expression new_ctx
Column_Data self.name self.connection self.sql_type self.expression new_ctx
## PRIVATE
@@ -654,12 +654,12 @@ lift_aggregate new_name connection expected_type expr context =
# aggregate into a subquery, thus making it safe to use it everywhere. A
# more complex solution may be adopted at some point.
ixes = Table.freshen_columns [new_name] context.meta_index
col = IR.Internal_Column new_name expected_type expr
col = IR.Internal_Column_Data new_name expected_type expr
setup = context.as_subquery new_name+"_sub" [[col], ixes]
subquery = setup.first
cols = setup.second
new_col = cols.first.first
new_ixes = cols.second
new_ctx = IR.subquery_as_ctx subquery . set_index new_ixes
Column new_name connection new_col.sql_type new_col.expression new_ctx
Column_Data new_name connection new_col.sql_type new_col.expression new_ctx


@@ -19,7 +19,7 @@ type Dialect
This is a fake constructor to make the compiler accept this type
definition. It can and should be removed once interface definitions are
allowed.
type Dialect
Dialect_Data
## PRIVATE
Name of the dialect.
name : Text


@@ -26,7 +26,7 @@ import Standard.Database.Data.Sql
is escaped by doubling each occurrence.
make_concat make_raw_concat_expr make_contains_expr has_quote args =
expected_args = if has_quote then 5 else 4
if args.length != expected_args then Error.throw (Illegal_State_Error "Unexpected number of arguments for the concat operation.") else
if args.length != expected_args then Error.throw (Illegal_State_Error_Data "Unexpected number of arguments for the concat operation.") else
expr = args.at 0
separator = args.at 1
prefix = args.at 2


@@ -15,7 +15,7 @@ from Standard.Database.Error import Unsupported_Database_Operation_Error
The dialect of PostgreSQL databases.
postgres : Dialect
postgres =
Postgres_Dialect make_internal_generator_dialect
Postgres_Dialect_Data make_internal_generator_dialect
## PRIVATE
@@ -25,7 +25,7 @@ type Postgres_Dialect
## PRIVATE
The dialect of PostgreSQL databases.
type Postgres_Dialect internal_generator_dialect
Postgres_Dialect_Data internal_generator_dialect
## PRIVATE
Name of the dialect.
@@ -150,7 +150,7 @@ first_last_aggregators =
[["FIRST", first], ["FIRST_NOT_NULL", first_not_null], ["LAST", last], ["LAST_NOT_NULL", last_not_null]]
make_first_aggregator reverse ignore_null args =
if args.length < 2 then Error.throw (Illegal_State_Error "Insufficient number of arguments for the operation.") else
if args.length < 2 then Error.throw (Illegal_State_Error_Data "Insufficient number of arguments for the operation.") else
result_expr = args.head
order_bys = args.tail
@@ -184,7 +184,7 @@ concat_ops =
## PRIVATE
agg_count_distinct args = if args.is_empty then (Error.throw (Illegal_Argument_Error "COUNT_DISTINCT requires at least one argument.")) else
agg_count_distinct args = if args.is_empty then (Error.throw (Illegal_Argument_Error_Data "COUNT_DISTINCT requires at least one argument.")) else
case args.length == 1 of
True ->
## A single null value will be skipped.
@@ -231,15 +231,15 @@ make_order_descriptor internal_column sort_direction text_ordering =
if text_ordering.sort_digits_as_numbers then Error.throw (Unsupported_Database_Operation_Error "Natural ordering is currently not supported. You may need to materialize the Table to perform this operation.") else
case text_ordering.case_sensitive of
Nothing ->
IR.Order_Descriptor internal_column.expression sort_direction nulls_order=nulls collation=Nothing
IR.Order_Descriptor_Data internal_column.expression sort_direction nulls_order=nulls collation=Nothing
True ->
IR.Order_Descriptor internal_column.expression sort_direction nulls_order=nulls collation="ucs_basic"
Case_Insensitive locale -> case locale == Locale.default of
IR.Order_Descriptor_Data internal_column.expression sort_direction nulls_order=nulls collation="ucs_basic"
Case_Insensitive_Data locale -> case locale == Locale.default of
False ->
Error.throw (Unsupported_Database_Operation_Error "Case insensitive ordering with custom locale is currently not supported. You may need to materialize the Table to perform this operation.")
True ->
upper = IR.Operation "UPPER" [internal_column.expression]
folded_expression = IR.Operation "LOWER" [upper]
IR.Order_Descriptor folded_expression sort_direction nulls_order=nulls collation=Nothing
IR.Order_Descriptor_Data folded_expression sort_direction nulls_order=nulls collation=Nothing
False ->
IR.Order_Descriptor internal_column.expression sort_direction nulls_order=nulls collation=Nothing
IR.Order_Descriptor_Data internal_column.expression sort_direction nulls_order=nulls collation=Nothing


@@ -12,7 +12,7 @@ import Standard.Database.Data.Internal.Base_Generator
The dialect for Redshift connections.
redshift : Dialect
redshift =
Redshift_Dialect Postgres.make_internal_generator_dialect
Redshift_Dialect_Data Postgres.make_internal_generator_dialect
## PRIVATE
@@ -21,7 +21,7 @@ type Redshift_Dialect
## PRIVATE
The dialect for Redshift connections.
type Redshift_Dialect internal_generator_dialect
Redshift_Dialect_Data internal_generator_dialect
## PRIVATE
Name of the dialect.


@@ -7,14 +7,14 @@ import Standard.Database.Data.Dialect
import Standard.Database.Data.Dialect.Helpers
import Standard.Database.Data.Internal.Base_Generator
import Standard.Database.Data.Internal.IR
from Standard.Database.Error import Unsupported_Database_Operation_Error
from Standard.Database.Error import Unsupported_Database_Operation_Error_Data
## PRIVATE
The dialect of SQLite databases.
sqlite : Dialect
sqlite =
SQLite_Dialect make_internal_generator_dialect
SQLite_Dialect_Data make_internal_generator_dialect
## PRIVATE
@@ -23,7 +23,7 @@ type SQLite_Dialect
## PRIVATE
The dialect of SQLite databases.
type SQLite_Dialect internal_generator_dialect
SQLite_Dialect_Data internal_generator_dialect
## PRIVATE
Name of the dialect.
@@ -53,19 +53,19 @@ type SQLite_Dialect
prepare_order_descriptor : IR.Internal_Column -> Sort_Direction -> Text_Ordering -> IR.Order_Descriptor
prepare_order_descriptor self internal_column sort_direction text_ordering = case internal_column.sql_type.is_likely_text of
True ->
if text_ordering.sort_digits_as_numbers then Error.throw (Unsupported_Database_Operation_Error "Natural ordering is not supported by the SQLite backend. You may need to materialize the Table to perform this operation.") else
if text_ordering.sort_digits_as_numbers then Error.throw (Unsupported_Database_Operation_Error_Data "Natural ordering is not supported by the SQLite backend. You may need to materialize the Table to perform this operation.") else
case text_ordering.case_sensitive of
Nothing ->
IR.Order_Descriptor internal_column.expression sort_direction collation=Nothing
IR.Order_Descriptor_Data internal_column.expression sort_direction collation=Nothing
True ->
IR.Order_Descriptor internal_column.expression sort_direction collation="BINARY"
Case_Insensitive locale -> case locale == Locale.default of
IR.Order_Descriptor_Data internal_column.expression sort_direction collation="BINARY"
Case_Insensitive_Data locale -> case locale == Locale.default of
False ->
Error.throw (Unsupported_Database_Operation_Error "Case insensitive ordering with custom locale is not supported by the SQLite backend. You may need to materialize the Table to perform this operation.")
Error.throw (Unsupported_Database_Operation_Error_Data "Case insensitive ordering with custom locale is not supported by the SQLite backend. You may need to materialize the Table to perform this operation.")
True ->
IR.Order_Descriptor internal_column.expression sort_direction collation="NOCASE"
IR.Order_Descriptor_Data internal_column.expression sort_direction collation="NOCASE"
False ->
IR.Order_Descriptor internal_column.expression sort_direction collation=Nothing
IR.Order_Descriptor_Data internal_column.expression sort_direction collation=Nothing
## PRIVATE
make_internal_generator_dialect =
@@ -104,7 +104,7 @@ resolve_target_sql_type aggregate = case aggregate of
## PRIVATE
unsupported name =
Error.throw (Unsupported_Database_Operation_Error name+" is not supported by SQLite backend. You may need to materialize the table and perform the operation in-memory.")
Error.throw (Unsupported_Database_Operation_Error_Data name+" is not supported by SQLite backend. You may need to materialize the table and perform the operation in-memory.")
## PRIVATE
agg_count_is_null = Base_Generator.lift_unary_op "COUNT_IS_NULL" arg->
@@ -148,7 +148,7 @@ first_last_aggregators =
## PRIVATE
window_aggregate window_type ignore_null args =
if args.length < 2 then Error.throw (Illegal_State_Error "Insufficient number of arguments for the operation.") else
if args.length < 2 then Error.throw (Illegal_State_Error_Data "Insufficient number of arguments for the operation.") else
result_expr = args.head
order_exprs = args.tail
@@ -168,7 +168,7 @@ concat_ops =
## PRIVATE
agg_count_distinct args = case args.length == 1 of
True -> Sql.code "COUNT(DISTINCT (" ++ args.first ++ Sql.code "))"
False -> Error.throw (Illegal_Argument_Error "COUNT_DISTINCT supports only single arguments in SQLite.")
False -> Error.throw (Illegal_Argument_Error_Data "COUNT_DISTINCT supports only single arguments in SQLite.")
## PRIVATE
agg_count_distinct_include_null args = case args.length == 1 of
@@ -177,7 +177,7 @@ agg_count_distinct_include_null args = case args.length == 1 of
count = Sql.code "COUNT(DISTINCT " ++ arg ++ Sql.code ")"
all_nulls_case = Sql.code "CASE WHEN COUNT(CASE WHEN " ++ arg ++ Sql.code "IS NULL THEN 1 END) > 0 THEN 1 ELSE 0 END"
count ++ Sql.code " + " ++ all_nulls_case
False -> Error.throw (Illegal_Argument_Error "COUNT_DISTINCT supports only single arguments in SQLite.")
False -> Error.throw (Illegal_Argument_Error_Data "COUNT_DISTINCT supports only single arguments in SQLite.")
## PRIVATE
starts_with = Base_Generator.lift_binary_op "starts_with" str-> sub->


@@ -1,9 +1,10 @@
from Standard.Base import all hiding First, Last
from Standard.Base.Data.Text.Text_Ordering import Text_Ordering_Data
from Standard.Table.Data.Aggregate_Column import all
import Standard.Database.Data.Internal.IR
from Standard.Database.Data.Sql import Sql_Type
from Standard.Database.Error import Unsupported_Database_Operation_Error
from Standard.Database.Error import Unsupported_Database_Operation_Error_Data
## PRIVATE
Creates an `Internal_Column` that computes the specified statistic.
@@ -15,7 +16,7 @@ make_aggregate_column : Table -> Aggregate_Column -> Text -> IR.Internal_Column
make_aggregate_column table aggregate new_name =
sql_type = table.connection.dialect.resolve_target_sql_type aggregate
expression = make_expression aggregate table.connection.dialect
IR.Internal_Column new_name sql_type expression
IR.Internal_Column_Data new_name sql_type expression
## PRIVATE
Creates an Internal Representation of the expression that computes a
@@ -37,16 +38,16 @@ make_expression aggregate dialect =
Percentile p c _ -> IR.Operation "PERCENTILE" [IR.Constant Sql_Type.double p, c.expression]
Mode c _ -> IR.Operation "MODE" [c.expression]
First c _ ignore_nothing order_by -> case is_non_empty_selector order_by of
False -> Error.throw (Unsupported_Database_Operation_Error "`First` aggregation requires at least one `order_by` column.")
False -> Error.throw (Unsupported_Database_Operation_Error_Data "`First` aggregation requires at least one `order_by` column.")
True ->
order_bys = order_by.columns.map c-> dialect.prepare_order_descriptor c.column.as_internal c.direction Text_Ordering
order_bys = order_by.columns.map c-> dialect.prepare_order_descriptor c.column.as_internal c.direction Text_Ordering_Data
case ignore_nothing of
False -> IR.Operation "FIRST" [c.expression]+order_bys
True -> IR.Operation "FIRST_NOT_NULL" [c.expression]+order_bys
Last c _ ignore_nothing order_by -> case is_non_empty_selector order_by of
False -> Error.throw (Unsupported_Database_Operation_Error "`Last` aggregation requires at least one `order_by` column.")
False -> Error.throw (Unsupported_Database_Operation_Error_Data "`Last` aggregation requires at least one `order_by` column.")
True ->
order_bys = order_by.columns.map c-> dialect.prepare_order_descriptor c.column.as_internal c.direction Text_Ordering
order_bys = order_by.columns.map c-> dialect.prepare_order_descriptor c.column.as_internal c.direction Text_Ordering_Data
case ignore_nothing of
False -> IR.Operation "LAST" [c.expression]+order_bys
True -> IR.Operation "LAST_NOT_NULL" [c.expression]+order_bys


@@ -24,7 +24,7 @@ type Internal_Dialect
# type Internal_Dialect (operation_map : Map Text (Vector Sql.Builder -> Sql.Builder))
# (identifier_wrapper : Text -> Sql.Builder)
type Internal_Dialect operation_map wrap_identifier
Internal_Dialect_Data operation_map wrap_identifier
## PRIVATE
@@ -35,7 +35,7 @@ type Internal_Dialect
extend_with : Vector Any -> Internal_Dialect
extend_with self mappings =
new_map = mappings.fold self.operation_map (m -> el -> m.insert (el.at 0) (el.at 1))
Internal_Dialect new_map self.wrap_identifier
Internal_Dialect_Data new_map self.wrap_identifier
## PRIVATE
@@ -51,7 +51,7 @@ make_binary_op name =
op = Sql.code " "+name+" "
(arguments.at 0)++op++(arguments.at 1) . paren
False ->
Error.throw <| Illegal_State_Error ("Invalid amount of arguments for operation " + name)
Error.throw <| Illegal_State_Error_Data ("Invalid amount of arguments for operation " + name)
## PRIVATE
@@ -66,7 +66,7 @@ make_unary_op name =
True ->
(Sql.code name+" ")++(arguments.at 0) . paren
False ->
Error.throw <| Illegal_State_Error ("Invalid amount of arguments for operation " + name)
Error.throw <| Illegal_State_Error_Data ("Invalid amount of arguments for operation " + name)
## PRIVATE
@@ -80,7 +80,7 @@ make_unary_op name =
lift_unary_op : Text -> (Sql.Builder -> Sql.Builder) -> [Text, (Vector Sql.Builder -> Sql.Builder)]
lift_unary_op name function =
generator = arguments -> case arguments.length == 1 of
False -> Error.throw <| Illegal_State_Error ("Invalid amount of arguments for operation " + name + ".")
False -> Error.throw <| Illegal_State_Error_Data ("Invalid amount of arguments for operation " + name + ".")
True -> function (arguments.at 0)
[name, generator]
@@ -96,7 +96,7 @@ lift_unary_op name function =
lift_binary_op : Text -> (Sql.Builder -> Sql.Builder -> Sql.Builder) -> [Text, (Vector Sql.Builder -> Sql.Builder)]
lift_binary_op name function =
generator = arguments -> case arguments.length == 2 of
False -> Error.throw <| Illegal_State_Error ("Invalid amount of arguments for operation " + name + ".")
False -> Error.throw <| Illegal_State_Error_Data ("Invalid amount of arguments for operation " + name + ".")
True -> function (arguments.at 0) (arguments.at 1)
[name, generator]
@@ -136,7 +136,7 @@ make_function name =
make_constant : Text -> Vector Sql.Builder -> Sql.Builder
make_constant code =
arguments ->
if arguments.not_empty then Error.throw <| Illegal_State_Error "No arguments were expected" else
if arguments.not_empty then Error.throw <| Illegal_State_Error_Data "No arguments were expected" else
Sql.code code
## PRIVATE
@@ -171,7 +171,7 @@ base_dialect =
counts = [fun "COUNT", ["COUNT_ROWS", make_constant "COUNT(*)"]]
nulls = [["ISNULL", make_right_unary_op "IS NULL"], ["FILLNULL", make_function "COALESCE"]]
base_map = Map.from_vector (arith + logic + compare + agg + nulls + counts)
Internal_Dialect base_map wrap_in_quotes
Internal_Dialect_Data base_map wrap_in_quotes
## PRIVATE
@@ -190,7 +190,7 @@ generate_expression dialect expr = case expr of
op = dialect.operation_map.get_or_else kind (Error.throw <| Unsupported_Database_Operation_Error kind)
parsed_args = arguments.map (generate_expression dialect)
op parsed_args
IR.Order_Descriptor _ _ _ _ -> generate_order dialect expr
IR.Order_Descriptor_Data _ _ _ _ -> generate_order dialect expr
## PRIVATE


@@ -33,7 +33,7 @@ check_integrity entity1 entity2 =
as-is, otherwise it is wrapped in a singleton vector.
unify_vector_singleton : (Any | Vector.Vector Any) -> Vector.Vector Any
unify_vector_singleton x = case x of
Vector.Vector _ -> x
Vector.Vector_Data _ -> x
_ -> [x]
## UNSTABLE


@@ -17,7 +17,7 @@ type Expression
originates from, it corresponds to the `alias` field in `from_spec`.
- name: the name of the column directly in the table or its alias in a
sub-query.
type Column (origin : Text) (name : Text)
Column (origin : Text) (name : Text)
## PRIVATE
@@ -29,7 +29,7 @@ type Expression
It is usually inferred from the expression's context.
- value: the value to be interpolated; it should be a simple Number, Text
or other types that are serializable for JDBC.
type Constant (sql_type : Sql.Sql_Type) (value : Any)
Constant (sql_type : Sql.Sql_Type) (value : Any)
## PRIVATE
@@ -42,7 +42,7 @@ type Expression
dialect.
- expressions: a list of expressions which are arguments to the operation
different operations support different amounts of arguments.
type Operation (kind : Text) (expressions : Vector Expression)
Operation (kind : Text) (expressions : Vector Expression)
type Internal_Column
## PRIVATE
@@ -53,7 +53,7 @@ type Internal_Column
- name: The column name.
- sql_type: The SQL type of the column.
- expression: An expression for applying to the column.
type Internal_Column name sql_type expression
Internal_Column_Data name sql_type expression
## PRIVATE
@@ -62,7 +62,7 @@ type Internal_Column
Arguments:
- new_name: The new name for the column.
rename : Text -> Internal_Column
rename self new_name = Internal_Column new_name self.sql_type self.expression
rename self new_name = Internal_Column_Data new_name self.sql_type self.expression
## PRIVATE
@@ -91,7 +91,7 @@ type Context
- meta_index: a list of internal columns to use for joining or grouping.
- limit: an optional maximum number of elements that the query should
return.
type Context (from_spec : From_Spec) (where_filters : Vector Expression) (orders : Vector Order_Descriptor) (groups : Vector Expression) (meta_index : Vector Internal_Column) (limit : Nothing | Integer)
Context_Data (from_spec : From_Spec) (where_filters : Vector Expression) (orders : Vector Order_Descriptor) (groups : Vector Expression) (meta_index : Vector Internal_Column) (limit : Nothing | Integer)
## PRIVATE
@@ -101,7 +101,7 @@ type Context
- new_index: The new index to set in the query.
set_index : Vector Internal_Column -> Context
set_index self new_index =
Context self.from_spec self.where_filters self.orders self.groups new_index self.limit
Context_Data self.from_spec self.where_filters self.orders self.groups new_index self.limit
## PRIVATE
@@ -111,7 +111,7 @@ type Context
- new_filters: The new filters to set in the query.
set_where_filters : Vector Expression -> Context
set_where_filters self new_filters =
Context self.from_spec new_filters self.orders self.groups self.meta_index self.limit
Context_Data self.from_spec new_filters self.orders self.groups self.meta_index self.limit
## PRIVATE
@@ -121,7 +121,7 @@ type Context
- new_orders: The new ordering clauses to set in the query.
set_orders : Vector Order_Descriptor -> Context
set_orders self new_orders =
Context self.from_spec self.where_filters new_orders self.groups self.meta_index self.limit
Context_Data self.from_spec self.where_filters new_orders self.groups self.meta_index self.limit
## PRIVATE
@@ -138,7 +138,7 @@ type Context
- new_orders: The new ordering clauses to add to the query.
add_orders : Vector Order_Descriptor -> Context
add_orders self new_orders =
Context self.from_spec self.where_filters new_orders+self.orders self.groups self.meta_index self.limit
Context_Data self.from_spec self.where_filters new_orders+self.orders self.groups self.meta_index self.limit
## PRIVATE
@@ -148,7 +148,7 @@ type Context
- new_groups: The new grouping clauses to set in the query.
set_groups : Vector Expression -> Context
set_groups self new_groups =
Context self.from_spec self.where_filters self.orders new_groups self.meta_index self.limit
Context_Data self.from_spec self.where_filters self.orders new_groups self.meta_index self.limit
## PRIVATE
@@ -158,7 +158,7 @@ type Context
- new_limit: The new limit clauses to set in the query.
set_limit : (Nothing | Integer) -> Context
set_limit self new_limit =
Context self.from_spec self.where_filters self.orders self.groups self.meta_index new_limit
Context_Data self.from_spec self.where_filters self.orders self.groups self.meta_index new_limit
## PRIVATE
@@ -179,7 +179,7 @@ type Context
as_subquery self alias column_lists =
rewrite_internal_column : Internal_Column -> Internal_Column
rewrite_internal_column column =
Internal_Column column.name column.sql_type (IR.Column alias column.name)
Internal_Column_Data column.name column.sql_type (IR.Column alias column.name)
new_columns = column_lists.map columns->
columns.map rewrite_internal_column
@@ -206,7 +206,7 @@ type From_Spec
parts of the query, this is especially useful for example in
self-joins, allowing to differentiate between different instances of
the same table.
type From_Table (table_name : Text) (alias : Text)
From_Table (table_name : Text) (alias : Text)
## PRIVATE
@@ -219,7 +219,7 @@ type From_Spec
- on: a list of expressions that will be used as join conditions, these
are usually equalities between expressions from the left and right
sources.
type Join (kind : Join_Kind) (left_spec : From_Spec) (right_spec : From_Spec) (on : Vector Expression)
Join (kind : Join_Kind) (left_spec : From_Spec) (right_spec : From_Spec) (on : Vector Expression)
## PRIVATE
@@ -232,7 +232,7 @@ type From_Spec
- context: the context for the sub-query.
- alias: the name upon which the results of this sub-query can be
referred to in other parts of the query.
type Sub_Query (columns : Vector (Pair Text Expression)) (context : Context) (alias : Text)
Sub_Query (columns : Vector (Pair Text Expression)) (context : Context) (alias : Text)
## PRIVATE
@@ -244,7 +244,7 @@ type Join_Kind
The result will contain only rows that had a match in both left and right
source.
type Join_Inner
Join_Inner
## PRIVATE
@@ -255,7 +255,7 @@ type Join_Kind
the left source has no match on the right, it will be present exactly
once in the result and the fields corresponding to the right source will
be set to NULL.
type Join_Left
Join_Left
## PRIVATE
@@ -266,7 +266,7 @@ type Join_Kind
the right source has no match on the left, it will be present exactly
once in the result and the fields corresponding to the left source will
be set to NULL.
type Join_Right
Join_Right
## PRIVATE
@@ -275,10 +275,11 @@ type Join_Kind
The result will contain a cross product of rows from the left source with
the right source. Its `on` list should be empty, instead `where_filters`
in the query can be used to filter the results.
type Join_Cross
Join_Cross
## PRIVATE
type Order_Descriptor (expression : Expression) (direction : Sort_Direction) (nulls_order : Nothing | Nulls_Order = Nothing) (collation : Nothing | Text = Nothing)
type Order_Descriptor
Order_Descriptor_Data (expression : Expression) (direction : Sort_Direction) (nulls_order : Nothing | Nulls_Order = Nothing) (collation : Nothing | Text = Nothing)
## PRIVATE
@@ -288,12 +289,12 @@ type Nulls_Order
## PRIVATE
Null values are included before any other values in the ordering.
type Nulls_First
Nulls_First
## PRIVATE
Null values are included after all other values in the ordering.
type Nulls_Last
Nulls_Last
## PRIVATE
@@ -309,7 +310,7 @@ type Query
is a pair whose first element is the name of the materialized column
and the second element is the expression to compute.
- context: The query context, see `Context` for more detail.
type Select (expressions : Vector (Pair Text Expression)) (context : Context)
Select (expressions : Vector (Pair Text Expression)) (context : Context)
## PRIVATE
@@ -317,7 +318,7 @@ type Query
Arguments:
- context: The query context, see `Context` for more detail.
type Select_All context
Select_All context
## PRIVATE
@@ -326,7 +327,7 @@ type Query
Arguments:
- table_name: The name of the table to insert to.
- pairs: A list of pairs consisting of a column name and an expression.
type Insert table_name pairs
Insert table_name pairs
## PRIVATE
@@ -337,7 +338,7 @@ type Query
- table_name: The name of the table for which the context is being created.
make_ctx_from : Text -> Context
make_ctx_from table_name =
Context (From_Table table_name table_name) [] [] [] [] Nothing
Context_Data (From_Table table_name table_name) [] [] [] [] Nothing
## PRIVATE
@@ -347,7 +348,7 @@ make_ctx_from table_name =
- subquery: The subquery to lift into a context.
subquery_as_ctx : Sub_Query -> Context
subquery_as_ctx subquery =
Context subquery [] [] [] [] Nothing
Context_Data subquery [] [] [] [] Nothing
## PRIVATE
@@ -389,5 +390,5 @@ substitute_origin old_origin new_origin expr = case expr of
- col: The column over which to apply `f`.
lift_expression_map : (Expression -> Expression) -> Internal_Column -> Internal_Column
lift_expression_map f col =
Internal_Column col.name col.sql_type (f col.expression)
Internal_Column_Data col.name col.sql_type (f col.expression)
