Mirror of https://github.com/enso-org/enso.git, synced 2025-01-03 21:12:11 +03:00
Proper implementation of Value Types in Table (#6073)
This is the first part of the #5158 umbrella task. It closes #5158; follow-up tasks are listed as a comment in the issue.

- Updates all prototype methods dealing with `Value_Type` with a proper implementation.
- Adds a more precise mapping from in-memory storage to `Value_Type`.
- Adds a dialect-dependent mapping between `SQL_Type` and `Value_Type`.
- Removes obsolete methods and constants on `SQL_Type` that were not portable.
- Ensures that in the Database backend, operation results are computed based on what the Database intends to return (by asking the Database about the expected type of each operation).
- But also ensures that the result types are sane:
  - While SQLite does not officially support a BOOLEAN affinity, we add a set of type overrides to our operations to ensure that Boolean operations will return Boolean values and will not be changed to integers as SQLite would suggest.
  - Some methods in SQLite fall back to a NUMERIC affinity unnecessarily, so expressions like `max(text, text)` will keep the `text` type instead of falling back to numeric as SQLite would suggest.
- Adds the ability to use custom fetch / builder logic for various types, so that we can support vendor-specific types (for example, Postgres dates).

# Important Notes
- There are some TODOs left in the code. I'm still aligning follow-up tasks - once done I will try to add references to the relevant tasks in them.
parent 5a100ea79b
commit 6f86115498
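The dialect-dependent `SQL_Type` to `Value_Type` mapping and the SQLite type overrides described in the commit message can be sketched as follows. This is an illustrative model only, under assumed names: the functions and tables below are hypothetical and are not the actual Enso Table API.

```python
# Illustrative sketch of a dialect-dependent type mapping with per-operation
# overrides. All names here are hypothetical, not the real Enso API.

# Base mapping from reported SQL types to logical value types.
BASE_MAPPING = {
    "INTEGER": "Integer",
    "TEXT": "Char",
    "REAL": "Float",
    "BOOLEAN": "Boolean",
}

# Operations whose result type SQLite would report as INTEGER or NUMERIC,
# but which we force to a known logical type (as the commit message explains
# for Boolean operations and for max over text columns).
SQLITE_OPERATION_OVERRIDES = {
    "==": "Boolean",
    "<=": "Boolean",
    "max": "keep-first-argument",
}

def infer_result_type(dialect, operation, argument_types, reported_sql_type):
    """Prefer the dialect's override; otherwise trust what the database reports."""
    if dialect == "sqlite":
        override = SQLITE_OPERATION_OVERRIDES.get(operation)
        if override == "keep-first-argument":
            return argument_types[0]
        if override is not None:
            return override
    return BASE_MAPPING.get(reported_sql_type, f"Unsupported ({reported_sql_type})")

print(infer_result_type("sqlite", "==", ["Integer", "Integer"], "INTEGER"))   # Boolean
print(infer_result_type("sqlite", "max", ["Char", "Char"], "NUMERIC"))        # Char
print(infer_result_type("postgres", "==", ["Integer", "Integer"], "BOOLEAN")) # Boolean
```

The override table is consulted before the reported type, which is what lets the SQLite dialect ignore the misleading INTEGER/NUMERIC affinities.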
@@ -370,6 +370,7 @@
   `use_regex` flag.][5959]
 - [Removed many regex compile flags from `split`; added `only_first` and
   `use_regex` flag.][6116]
+- [Implemented proper support for Value Types in the Table library.][6073]
 
 [debug-shortcuts]:
   https://github.com/enso-org/enso/blob/develop/app/gui/docs/product/shortcuts.md#debug
@@ -559,6 +560,7 @@
 [5705]: https://github.com/enso-org/enso/pull/5705
 [5959]: https://github.com/enso-org/enso/pull/5959
 [6116]: https://github.com/enso-org/enso/pull/6116
+[6073]: https://github.com/enso-org/enso/pull/6073
 
 #### Enso Compiler
 
@@ -126,10 +126,6 @@ class SqlVisualization extends Visualization {
         this.dom.appendChild(parentContainer)
 
         const tooltip = new Tooltip(parentContainer)
-        const baseMismatches = this.dom.getElementsByClassName('mismatch')
-        const extendedMismatchAreas = this.dom.getElementsByClassName('mismatch-mouse-area')
-        setupMouseInteractionForMismatches(tooltip, baseMismatches)
-        setupMouseInteractionForMismatches(tooltip, extendedMismatchAreas)
     }
 
     /**
@@ -260,7 +256,7 @@ class QualifiedTypeName {
  * Renders HTML for displaying an Enso parameter that is interpolated into the SQL code.
  */
 function renderInterpolationParameter(theme, param) {
-    const actualType = param.actual_type
+    const actualType = param.enso_type
     let value = param.value
 
     if (actualType === textType) {
@@ -270,37 +266,8 @@ function renderInterpolationParameter(theme, param) {
     const actualTypeColor = theme.getColorForType(actualType)
     const fgColor = actualTypeColor
     let bgColor = replaceAlpha(fgColor, interpolationBacgroundOpacity)
-    const expectedEnsoType = param.expected_enso_type
-
-    if (actualType === expectedEnsoType) {
-        return renderRegularInterpolation(value, fgColor, bgColor)
-    } else {
-        let expectedType = expectedEnsoType
-        if (expectedType === null) {
-            expectedType = customSqlTypePrefix + param.expected_sql_type
-        }
-
-        const expectedTypeColor = theme.getColorForType(expectedType)
-        const hoverBgColor = expectedTypeColor
-        bgColor = replaceAlpha(hoverBgColor, interpolationBacgroundOpacity)
-        const hoverFgColor = theme.getForegroundColorForType(expectedType)
-
-        const message = renderTypeHintMessage(
-            actualType,
-            expectedType,
-            actualTypeColor,
-            expectedTypeColor
-        )
-
-        return renderMismatchedInterpolation(
-            value,
-            message,
-            fgColor,
-            bgColor,
-            hoverFgColor,
-            hoverBgColor
-        )
-    }
+    return renderRegularInterpolation(value, fgColor, bgColor)
 }
 
 /**
@@ -318,38 +285,6 @@ function renderRegularInterpolation(value, fgColor, bgColor) {
     return html
 }
 
-/**
- * A helper that renders the HTML representation of a type-mismatched SQL interpolation.
- *
- * This only prepares the HTML code, to setup the interactions, `setupMouseInteractionForMismatches`
- * must be called after these HTML elements are added to the DOM.
- */
-function renderMismatchedInterpolation(
-    value,
-    message,
-    fgColor,
-    bgColor,
-    hoverFgColor,
-    hoverBgColor
-) {
-    let html = '<div class="mismatch-parent">'
-    html += '<div class="mismatch-mouse-area"></div>'
-    html += '<div class="interpolation mismatch"'
-    let style = 'color:' + convertColorToRgba(fgColor) + ';'
-    style += 'background-color:' + convertColorToRgba(bgColor) + ';'
-    html += ' style="' + style + '"'
-    html += ' data-fgColor="' + convertColorToRgba(fgColor) + '"'
-    html += ' data-bgColor="' + convertColorToRgba(bgColor) + '"'
-    html += ' data-fgColorHover="' + convertColorToRgba(hoverFgColor) + '"'
-    html += ' data-bgColorHover="' + convertColorToRgba(hoverBgColor) + '"'
-    html += ' data-message="' + encodeURIComponent(message) + '"'
-    html += '>'
-    html += value
-    html += '</div>'
-    html += '</div>'
-    return html
-}
-
 // === Tooltip ===
 
 /**
@@ -408,32 +343,4 @@ class Tooltip {
     }
 }
 
-/**
- * Sets up mouse events for the interpolated parameters that have a type mismatch.
- */
-function setupMouseInteractionForMismatches(tooltip, elements) {
-    function interpolationMouseEnter(event) {
-        const target = this.parentElement.getElementsByClassName('mismatch')[0]
-        const fg = target.getAttribute('data-fgColorHover')
-        const bg = target.getAttribute('data-bgColorHover')
-        const message = decodeURIComponent(target.getAttribute('data-message'))
-        tooltip.show(target, message)
-        target.style.color = fg
-        target.style.backgroundColor = bg
-    }
-    function interpolationMouseLeave(event) {
-        const target = this.parentElement.getElementsByClassName('mismatch')[0]
-        const fg = target.getAttribute('data-fgColor')
-        const bg = target.getAttribute('data-bgColor')
-        target.style.color = fg
-        target.style.backgroundColor = bg
-        tooltip.hide(target)
-    }
-
-    for (let i = 0; i < elements.length; ++i) {
-        elements[i].addEventListener('mouseenter', interpolationMouseEnter)
-        elements[i].addEventListener('mouseleave', interpolationMouseLeave)
-    }
-}
-
 return SqlVisualization
@@ -1144,7 +1144,7 @@ val distributionEnvironmentOverrides = {
   )
 }
 
-val frgaalSourceLevel = "19"
+val frgaalSourceLevel = FrgaalJavaCompiler.sourceLevel
 
 /** A setting to replace javac with Frgaal compiler, allowing to use latest Java features in the code
   * and still compile down to JDK 11
@@ -96,6 +96,14 @@ type Stack_Trace_Element
     ## PRIVATE
     Value name source_location
 
+    ## PRIVATE
+    to_display_text : Text
+    to_display_text self =
+        loc = case self.source_location of
+            Nothing -> "Unknown location"
+            loc -> loc.formatted_coordinates
+        "at "+self.name+" ("+loc+")"
+
 ## ADVANCED
 
    Types indicating allowed IO operations
@@ -0,0 +1,75 @@
+import project.Any.Any
+import project.Error.Error
+import project.Nothing.Nothing
+import project.Panic.Caught_Panic
+import project.Panic.Panic
+import project.Runtime.Ref.Ref
+
+## Holds a value that is computed on first access.
+type Lazy
+    ## PRIVATE
+    Lazy (cached_ref : Ref) (builder : Nothing -> Any)
+
+    ## PRIVATE
+    Eager (value : Any)
+
+    ## Creates a new lazy value.
+    new : Any -> Lazy
+    new ~lazy_computation =
+        builder _ = lazy_computation
+        cached_ref = Ref.new Lazy_Not_Computed_Mark
+        Lazy.Lazy cached_ref builder
+
+    ## Creates a pre-computed lazy value.
+       This can be useful if a value needs to admit the Lazy type API, but is
+       known beforehand.
+    new_eager value = Lazy.Eager value
+
+    ## Returns the stored value.
+
+       The value will be computed on first access and cached.
+    get : Any
+    get self = case self of
+        Lazy.Lazy cached_ref builder -> case cached_ref.get of
+            Lazy_Not_Computed_Mark ->
+                cached_value = Cached_Value.freeze builder
+                cached_ref.put cached_value
+                cached_value.get
+            cached_value -> cached_value.get
+        Lazy.Eager value -> value
+
+## PRIVATE
+   This is a special value that should never be returned from a lazy computation
+   as it will prevent the lazy value from being cached.
+type Lazy_Not_Computed_Mark
+
+## PRIVATE
+type Cached_Value
+    ## PRIVATE
+    Value value
+
+    ## PRIVATE
+    Error error
+
+    ## PRIVATE
+    Panic (caught_panic : Caught_Panic)
+
+    ## PRIVATE
+       Accesses the cached value as if it was just computed - any stored errors
+       or panics will be propagated.
+    get : Any
+    get self = case self of
+        Cached_Value.Value value -> value
+        Cached_Value.Error error -> Error.throw error
+        Cached_Value.Panic caught_panic -> Panic.throw caught_panic
+
+    ## PRIVATE
+       Runs the provided `builder` with a `Nothing` argument, handling any
+       errors or panics and saving them as a `Cached_Value`.
+    freeze : (Nothing -> Any) -> Cached_Value
+    freeze builder =
+        save_panic caught_panic = Cached_Value.Panic caught_panic
+        Panic.catch Any handler=save_panic <|
+            result = Cached_Value.Value (builder Nothing)
+            result.catch Any dataflow_error->
+                Cached_Value.Error dataflow_error
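The new `Lazy` type caches the first computed outcome, including errors and panics, so that later accesses replay it without recomputing. A rough transcription of that pattern to Python (not the Enso API; plain exceptions stand in for dataflow errors and panics):

```python
# Sketch of the compute-once-and-cache-the-outcome pattern, including cached
# failures. A sentinel marks "not yet computed", mirroring Lazy_Not_Computed_Mark.
_NOT_COMPUTED = object()

class Lazy:
    def __init__(self, builder):
        self._builder = builder
        self._cached = _NOT_COMPUTED

    @classmethod
    def new_eager(cls, value):
        # Pre-computed value that still exposes the Lazy API.
        lazy = cls(lambda: value)
        lazy._cached = ("value", value)
        return lazy

    def get(self):
        if self._cached is _NOT_COMPUTED:
            # Freeze the outcome, whether it is a value or an exception,
            # analogous to Cached_Value.freeze.
            try:
                self._cached = ("value", self._builder())
            except Exception as e:
                self._cached = ("error", e)
        kind, payload = self._cached
        if kind == "error":
            raise payload  # replay the stored failure on every access
        return payload

calls = []
lazy = Lazy(lambda: calls.append(1) or 42)
print(lazy.get())   # computed on first access
print(lazy.get())   # served from the cache
print(len(calls))   # the builder ran exactly once
```

Note that failures are cached too: a builder that raises will raise the same exception on every subsequent `get`, just as `Cached_Value.Error` and `Cached_Value.Panic` re-propagate stored outcomes.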
@@ -1,5 +1,6 @@
 from Standard.Base import all
 import Standard.Base.Errors.Illegal_State.Illegal_State
+import Standard.Base.Runtime.Managed_Resource.Managed_Resource
 
 import Standard.Table.Data.Table.Table as Materialized_Table
 
@@ -7,11 +8,12 @@ import project.Data.SQL_Query.SQL_Query
 import project.Data.SQL_Statement.SQL_Statement
 import project.Data.SQL_Type.SQL_Type
 import project.Data.Table.Table
+import project.Data.Table as Database_Table_Module
 import project.Internal.IR.Context.Context
 import project.Internal.IR.SQL_Expression.SQL_Expression
 import project.Internal.IR.Query.Query
-import project.Data.Table as Database_Table_Module
+import project.Internal.SQL_Type_Reference.SQL_Type_Reference
+import project.Internal.Statement_Setter.Statement_Setter
 
 from project.Internal.Result_Set import read_column, result_set_to_table
 from project.Internal.JDBC_Connection import create_table_statement, handle_sql_errors
@@ -99,9 +101,8 @@ type Connection
         types_array = if types.is_nothing then Nothing else types.to_array
         name_map = Map.from_vector [["TABLE_CAT", "Database"], ["TABLE_SCHEM", "Schema"], ["TABLE_NAME", "Name"], ["TABLE_TYPE", "Type"], ["REMARKS", "Description"], ["TYPE_CAT", "Type Database"], ["TYPE_SCHEM", "Type Schema"], ["TYPE_NAME", "Type Name"]]
         self.jdbc_connection.with_metadata metadata->
-            table = result_set_to_table <|
-                metadata.getTables database schema name_like types_array
+            table = Managed_Resource.bracket (metadata.getTables database schema name_like types_array) .close result_set->
+                result_set_to_table result_set self.dialect.make_column_fetcher_for_type
             renamed = table.rename_columns name_map
             if all_fields then renamed else
                 renamed.select_columns ["Database", "Schema", "Name", "Type", "Description"]
@@ -134,14 +135,17 @@ type Connection
                 False ->
                     Error.throw (Table_Not_Found.Error query sql_error treated_as_query=True)
         SQL_Query.Raw_SQL raw_sql -> handle_sql_errors <|
-            columns = self.jdbc_connection.fetch_columns raw_sql
-            name = if alias == "" then (UUID.randomUUID.to_text) else alias
-            ctx = Context.for_query raw_sql name
-            Database_Table_Module.make_table self name columns ctx
+            self.jdbc_connection.ensure_query_has_no_holes raw_sql . if_not_error <|
+                columns = self.jdbc_connection.fetch_columns raw_sql Statement_Setter.null
+                name = if alias == "" then (UUID.randomUUID.to_text) else alias
+                ctx = Context.for_query raw_sql name
+                Database_Table_Module.make_table self name columns ctx
         SQL_Query.Table_Name name ->
             result = handle_sql_errors <|
                 ctx = Context.for_table name (if alias == "" then name else alias)
-                columns = self.jdbc_connection.fetch_columns (self.dialect.generate_sql (Query.Select Nothing ctx))
+                statement = self.dialect.generate_sql (Query.Select Nothing ctx)
+                statement_setter = self.dialect.get_statement_setter
+                columns = self.jdbc_connection.fetch_columns statement statement_setter
                 Database_Table_Module.make_table self name columns ctx
             result.catch SQL_Error sql_error->
                 Error.throw (Table_Not_Found.Error name sql_error treated_as_query=False)
@@ -161,22 +165,18 @@ type Connection
 
        Arguments:
        - statement: SQL_Statement to execute.
-       - expected_types: Optional vector of expected types for each column.
-    read_statement : SQL_Statement -> (Nothing | Vector SQL_Type) -> Materialized_Table
-    read_statement self statement expected_types=Nothing =
-        self.jdbc_connection.with_prepared_statement statement stmt->
-            result_set_to_table stmt.executeQuery expected_types
-
-    ## PRIVATE
-       Internal read function for a statement with optional types returning just last row.
-
-       Arguments:
-       - statement: SQL_Statement to execute.
-       - expected_types: Optional vector of expected types for each column.
-    read_last_row : SQL_Statement -> (Nothing | Vector SQL_Type) -> Materialized_Table
-    read_last_row self statement expected_types=Nothing =
-        self.jdbc_connection.with_prepared_statement statement stmt->
-            result_set_to_table stmt.executeQuery expected_types True
+       - column_type_suggestions: A vector of SQL type references that can act
+         as suggested column types. By default, the overrides are respected and
+         types that should be computed by the database are passed as `Nothing`
+         to ensure that default `ResultSet` metadata is used for these columns.
+       - last_row_only: If set true, only the last row of the query is fetched.
+         Defaults to false.
+    read_statement : SQL_Statement -> (Nothing | Vector SQL_Type_Reference) -> Materialized_Table
+    read_statement self statement column_type_suggestions=Nothing last_row_only=False =
+        type_overrides = self.dialect.get_type_mapping.prepare_type_overrides column_type_suggestions
+        statement_setter = self.dialect.get_statement_setter
+        self.jdbc_connection.with_prepared_statement statement statement_setter stmt->
+            result_set_to_table stmt.executeQuery self.dialect.make_column_fetcher_for_type type_overrides last_row_only
 
     ## ADVANCED
 
@@ -189,7 +189,8 @@ type Connection
        representing the query to execute.
     execute_update : Text | SQL_Statement -> Integer
     execute_update self query =
-        self.jdbc_connection.with_prepared_statement query stmt->
+        statement_setter = self.dialect.get_statement_setter
+        self.jdbc_connection.with_prepared_statement query statement_setter stmt->
             Panic.catch UnsupportedOperationException stmt.executeLargeUpdate _->
                 stmt.executeUpdate
 
@@ -218,14 +219,19 @@ type Connection
        batch.
     upload_table : Text -> Materialized_Table -> Boolean -> Integer -> Table
     upload_table self name table temporary=True batch_size=1000 = Panic.recover Illegal_State <|
-        create_sql = create_table_statement name table temporary
+        type_mapping = self.dialect.get_type_mapping
+        ## TODO [RW] problem handling! probably want to add on_problems to this method?
+           This is just a prototype, so ignoring this. To be revisited as part of #5161.
+        type_mapper value_type = type_mapping.value_type_to_sql value_type Problem_Behavior.Report_Error
+        create_sql = create_table_statement type_mapper name table temporary
         create_table = self.execute_update create_sql
 
         db_table = if create_table.is_error then create_table else self.query (SQL_Query.Table_Name name)
         if db_table.is_error.not then
-            pairs = db_table.internal_columns.map col->[col.name, SQL_Expression.Constant col.sql_type Nothing]
+            pairs = db_table.internal_columns.map col->[col.name, SQL_Expression.Constant Nothing]
             insert_query = self.dialect.generate_sql <| Query.Insert name pairs
             insert_template = insert_query.prepare.first
-            self.jdbc_connection.load_table insert_template db_table table batch_size
+            statement_setter = self.dialect.get_statement_setter
+            self.jdbc_connection.load_table insert_template statement_setter table batch_size
 
         db_table
@@ -2,14 +2,13 @@ from Standard.Base import all
 import Standard.Base.Errors.Illegal_Argument.Illegal_Argument
 import Standard.Base.Errors.Illegal_State.Illegal_State
 
-from Standard.Table import Sort_Column, Data_Formatter
-import Standard.Table.Data.Value_Type.Value_Type
+import Standard.Table.Data.Type.Enso_Types
 import Standard.Table.Data.Column.Column as Materialized_Column
 import Standard.Table.Internal.Java_Problems
 import Standard.Table.Internal.Problem_Builder.Problem_Builder
 import Standard.Table.Internal.Widget_Helpers
-from Standard.Table.Data.Value_Type import Value_Type, Auto
-from Standard.Table.Errors import Floating_Point_Equality
+from Standard.Table import Sort_Column, Data_Formatter, Value_Type, Auto
+from Standard.Table.Errors import Floating_Point_Equality, Inexact_Type_Coercion
 
 import project.Data.SQL_Statement.SQL_Statement
 import project.Data.SQL_Type.SQL_Type
@@ -18,6 +17,7 @@ import project.Internal.IR.Context.Context
 import project.Internal.IR.SQL_Expression.SQL_Expression
 import project.Internal.IR.Internal_Column.Internal_Column
 import project.Internal.IR.Query.Query
+import project.Internal.SQL_Type_Reference.SQL_Type_Reference
 
 from project.Data.Table import Table, freshen_columns
 
@@ -35,6 +35,7 @@ type Column
        Arguments:
        - name: The name of the column.
        - connection: The connection with which the column is associated.
+       - sql_type_reference: Lazily computed SQL type of the column.
        - expression: The expressions to apply to the column.
        - context: The SQl context in which the column exists.
 
@@ -44,7 +45,7 @@ type Column
        which they come. Combined expressions must come from the same context -
        they must both have the same filtering, grouping etc. rules applied to be
        able to be combined.
-    Value name:Text connection:Connection sql_type:SQL_Type expression:SQL_Expression context:Context
+    Value name:Text connection:Connection sql_type_reference:SQL_Type_Reference expression:SQL_Expression context:Context
 
     ## UNSTABLE
 
@@ -106,12 +107,9 @@ type Column
        implemented.
     value_type : Value_Type
     value_type self =
-        ## TODO This is a temporary implementation that should be revised once
-           types are implemented properly.
-        if self.sql_type.is_definitely_boolean then Value_Type.Boolean else
-            if self.sql_type.is_definitely_text then Value_Type.Char else
-                if self.sql_type.is_definitely_double then Value_Type.Float else
-                    Value_Type.Unsupported_Data_Type self.sql_type.name
+        mapping = self.connection.dialect.get_type_mapping
+        mapping.sql_type_to_value_type self.sql_type
 
     ## UNSTABLE
 
       Returns an SQL statement that will be used for materializing this column.
@@ -126,27 +124,22 @@ type Column
        - operands: A vector of additional operation arguments (the column itself
          is always passed as the first argument).
        - new_name: The name of the resulting column.
-       - new_type: The type of the SQL column that results from applying the
-         operator. If not specified, the type of this column is used.
-       - operand_types: The SQL types of the additional arguments. They are used
-         if additional arguments are constants (and if not provided, the type of
-         this column is used). If the other argument is a column, its type is
-         used.
-    make_op self op_kind operands new_name new_type=Nothing operand_types=Nothing =
-        prepare_operand operand operand_type = case operand of
+    make_op self op_kind operands new_name =
+        type_mapping = self.connection.dialect.get_type_mapping
+        prepare_operand operand = case operand of
             other_column : Column ->
                 if Helpers.check_integrity self other_column then other_column.expression else
                     Error.throw <| Unsupported_Database_Operation.Error "Cannot use columns coming from different contexts in one expression without a join."
             constant ->
-                actual_operand_type = operand_type.if_nothing self.sql_type
-                SQL_Expression.Constant actual_operand_type constant
-
-        actual_operand_types = operand_types.if_nothing (Vector.fill operands.length Nothing)
-        expressions = operands.zip actual_operand_types prepare_operand
+                SQL_Expression.Constant constant
 
-        actual_new_type = new_type.if_nothing self.sql_type
+        expressions = operands.map prepare_operand
+
         new_expr = SQL_Expression.Operation op_kind ([self.expression] + expressions)
-        Column.Value new_name self.connection actual_new_type new_expr self.context
+        infer_from_database_callback expression =
+            SQL_Type_Reference.new self.connection self.context expression
+        new_type_ref = type_mapping.infer_return_type infer_from_database_callback op_kind [self]+operands new_expr
+        Column.Value new_name self.connection new_type_ref new_expr self.context
 
     ## PRIVATE
 
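In the `make_op` rewrite above, the dialect's type mapping can either supply a known return type for an operation or fall back to asking the database via a callback (`SQL_Type_Reference.new` prepares a lazily-evaluated query for the expression's type). A minimal sketch of that dispatch, with hypothetical names standing in for the real API:

```python
# Sketch of override-first return-type inference: consult the dialect's own
# override table for the operation, and only fall back to the database
# callback when no override exists. Names here are illustrative.
def infer_return_type(infer_from_database_callback, op_kind, expression, overrides):
    override = overrides.get(op_kind)
    if override is not None:
        return override
    # No override: defer to the database's reported type for the expression.
    return infer_from_database_callback(expression)

# Stand-in for the database round-trip performed by SQL_Type_Reference.new.
def fake_database_callback(expression):
    return {"max(a, b)": "INTEGER"}[expression]

overrides = {"==": "BOOLEAN"}
print(infer_return_type(fake_database_callback, "==", "a = b", overrides))      # BOOLEAN
print(infer_return_type(fake_database_callback, "max", "max(a, b)", overrides)) # INTEGER
```

Deferring to a callback keeps the database query lazy: it only runs for operations the dialect cannot type on its own.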
@@ -155,18 +148,12 @@ type Column
        Arguments:
        - op_kind: The kind of binary operator.
        - operand: The right operand to the binary operator.
-       - new_type: The type of the SQL column that results from applying the
-         operator.
-       - operand_type: The SQL type of the operand.
-
-       If not specified, the `new_type` is the same as the current one.
-       `operand_type` is only relevant if the operand is not a column, it
-       defaults to the current type if not provided.
-    make_binary_op : Text -> Text -> (Text | Nothing) -> (SQL_Type | Nothing) -> (SQL_Type | Nothing) -> Column
-    make_binary_op self op_kind operand new_name=Nothing new_type=Nothing operand_type=Nothing =
+       - new_name: The name of the resulting column.
+    make_binary_op : Text -> Text -> (Text | Nothing) -> Column
+    make_binary_op self op_kind operand new_name=Nothing =
         effective_new_name = new_name.if_nothing <|
             self.naming_helpers.binary_operation_name op_kind self operand
-        self.make_op op_kind [operand] effective_new_name new_type [operand_type]
+        self.make_op op_kind [operand] effective_new_name
 
     ## PRIVATE
 
@@ -174,10 +161,9 @@ type Column
 
        Arguments:
        - op_kind: The kind of the unary operator.
-       - new_type: The type of the SQL column that results from applying the
-         operator.
-    make_unary_op : Text -> Text -> (SQL_Type | Nothing) -> Column
-    make_unary_op self op_kind new_name new_type=Nothing = self.make_op op_kind [] new_name new_type
+       - new_name: The name of the resulting column.
+    make_unary_op : Text -> Text -> Column
+    make_unary_op self op_kind new_name = self.make_op op_kind [] new_name
 
     ## UNSTABLE
 
@@ -245,7 +231,7 @@ type Column
     equals_ignore_case self other locale=Locale.default =
         Helpers.assume_default_locale locale <|
             new_name = self.naming_helpers.function_name "equals_ignore_case" [self, other]
-            self.make_binary_op "equals_ignore_case" other new_name new_type=SQL_Type.boolean
+            self.make_binary_op "equals_ignore_case" other new_name
 
     ## Element-wise non-equality comparison.
 
@@ -291,7 +277,7 @@ type Column
        `other`. If `other` is a column, the comparison is performed pairwise
        between corresponding elements of `self` and `other`.
     >= : Column | Any -> Column
-    >= self other = self.make_binary_op ">=" other new_type=SQL_Type.boolean
+    >= self other = self.make_binary_op ">=" other
 
     ## UNSTABLE
 
@@ -304,7 +290,7 @@ type Column
        `other`. If `other` is a column, the comparison is performed pairwise
        between corresponding elements of `self` and `other`.
     <= : Column | Any -> Column
-    <= self other = self.make_binary_op "<=" other new_type=SQL_Type.boolean
+    <= self other = self.make_binary_op "<=" other
 
     ## UNSTABLE
 
@@ -317,7 +303,7 @@ type Column
        `other`. If `other` is a column, the comparison is performed pairwise
        between corresponding elements of `self` and `other`.
     > : Column | Any -> Column
-    > self other = self.make_binary_op ">" other new_type=SQL_Type.boolean
+    > self other = self.make_binary_op ">" other
 
     ## UNSTABLE
 
@@ -330,7 +316,7 @@ type Column
        `other`. If `other` is a column, the comparison is performed pairwise
        between corresponding elements of `self` and `other`.
     < : Column | Any -> Column
-    < self other = self.make_binary_op "<" other new_type=SQL_Type.boolean
+    < self other = self.make_binary_op "<" other
 
     ## Element-wise inclusive bounds check.
 
@@ -347,7 +333,7 @@ type Column
     between : (Column | Any) -> (Column | Any) -> Column
     between self lower upper =
         new_name = self.naming_helpers.to_expression_text self + " between " + self.naming_helpers.to_expression_text lower + " and " + self.naming_helpers.to_expression_text upper
-        self.make_op "BETWEEN" [lower, upper] new_name new_type=SQL_Type.boolean
+        self.make_op "BETWEEN" [lower, upper] new_name
 
     ## UNSTABLE
 
@@ -361,14 +347,14 @@ type Column
        between corresponding elements of `self` and `other`.
     + : Column | Any -> Column
     + self other =
-        ## TODO: Revisit this as part of the column value type work.
-        op = case other of
-            _ : Column -> if self.sql_type.is_definitely_numeric || other.sql_type.is_definitely_numeric then 'ADD_NUMBER' else 'ADD_TEXT'
-            _ -> if self.sql_type.is_definitely_numeric then 'ADD_NUMBER' else 'ADD_TEXT'
-        new_type = if op == 'ADD_TEXT' then self.sql_type else
-            SQL_Type.merge_type self.sql_type (SQL_Type.approximate_type other)
+        self_type = self.value_type
+        other_type = find_argument_type other
+        op = if self_type.is_numeric && (other_type.is_nothing || other_type.is_numeric) then 'ADD_NUMBER' else
+            if self_type.is_text && (other_type.is_nothing || other_type.is_text) then 'ADD_TEXT' else
+                Error.throw <| Illegal_Argument.Error <|
+                    "Cannot perform addition on a pair of values of types " + self_type.to_text + " and " + other_type.to_text + ". Addition can only be performed if both columns are of some numeric type or are both text."
         new_name = self.naming_helpers.binary_operation_name "+" self other
-        self.make_binary_op op other new_name new_type=new_type
+        self.make_binary_op op other new_name
 
     ## UNSTABLE
 
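The new `+` implementation above dispatches on the columns' `Value_Type` instead of guessing from raw SQL types. A brief usage sketch of the resulting behavior (the table and column names are illustrative, not part of this PR):

```
# Assuming `table` has a numeric "price" column and a text "name" column.
example_add table =
    # Numeric addition: both operands have numeric value types.
    sums = table.at "price" + 5

    # Text concatenation: both operands are text.
    labels = table.at "name" + "!"

    # Mixing text and numbers now raises `Illegal_Argument` instead of
    # silently choosing an operation based on SQL type affinities.
    bad = table.at "price" + "abc"
    [sums, labels, bad]
```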
@@ -381,9 +367,7 @@ type Column
        element of `self`. If `other` is a column, the operation is performed
        pairwise between corresponding elements of `self` and `other`.
     - : Column | Any -> Column
-    - self other =
-        new_type = SQL_Type.merge_type self.sql_type (SQL_Type.approximate_type other)
-        self.make_binary_op "-" other new_type=new_type
+    - self other = self.make_binary_op "-" other
 
     ## UNSTABLE
 
@@ -396,9 +380,7 @@ type Column
        element of `self`. If `other` is a column, the operation is performed
        pairwise between corresponding elements of `self` and `other`.
     * : Column | Any -> Column
-    * self other =
-        new_type = SQL_Type.merge_type self.sql_type (SQL_Type.approximate_type other)
-        self.make_binary_op "*" other new_type=new_type
+    * self other = self.make_binary_op "*" other
 
     ## ALIAS Divide Columns
 
@@ -432,7 +414,7 @@ type Column
         example_div = Examples.integer_column / 10
     / : Column | Any -> Column
     / self other =
-        self.make_binary_op "/" other new_type=SQL_Type.double
+        self.make_binary_op "/" other
 
     ## Element-wise modulus.
 
@@ -464,10 +446,10 @@ type Column
         example_mod = Examples.integer_column % 3
     % : Column | Any -> Column
     % self other =
-        new_type = SQL_Type.merge_type self.sql_type (SQL_Type.approximate_type other)
-        op = if new_type == SQL_Type.integer then "%" else "mod"
+        other_type = find_argument_type other
+        op = if self.value_type.is_integer && (other_type.is_nothing || other_type.is_integer) then "%" else "mod"
         new_name = self.naming_helpers.binary_operation_name "%" self other
-        self.make_binary_op op other new_name new_type=new_type
+        self.make_binary_op op other new_name
 
     ## ALIAS Power
 
@@ -496,7 +478,7 @@ type Column
         example_div = Examples.decimal_column ^ Examples.integer_column
     ^ : Column | Any -> Column
     ^ self other =
-        self.make_binary_op '^' other new_type=SQL_Type.double
+        self.make_binary_op '^' other
 
     ## UNSTABLE
 
@@ -512,7 +494,7 @@ type Column
     && : Column | Any -> Column
     && self other =
         new_name = self.naming_helpers.binary_operation_name "&&" self other
-        self.make_binary_op "AND" other new_name new_type=SQL_Type.boolean
+        self.make_binary_op "AND" other new_name
 
     ## UNSTABLE
 
@@ -528,7 +510,7 @@ type Column
     || : Column | Any -> Column
     || self other =
         new_name = self.naming_helpers.binary_operation_name "||" self other
-        self.make_binary_op "OR" other new_name new_type=SQL_Type.boolean
+        self.make_binary_op "OR" other new_name
 
     ## UNSTABLE
 
@@ -548,22 +530,8 @@ type Column
        - when_false: value or column when `self` is `False`.
     iif : Any -> Any -> Column
     iif self when_true when_false =
-        ## TODO we should adjust new_type based on types when_true and
-           when_false, but this relies on the Value Types design which is still
-           in progress. This function has status of an internal prototype for
-           now, so we just rely on a simplified handling. Once Value Types are
-           properly implemented, this should be accordingly extended for the
-           full implementation of IIF. We will need to handle when_true and
-           when_false being either columns or regular values and rely on a
-           mapping of Enso base types to SQL types, and a rule for extracting a
-           common type.
-        left_type = get_approximate_type when_true self.sql_type
-        right_type = get_approximate_type when_false self.sql_type
-        new_type = SQL_Type.merge_type left_type right_type
         new_name = "if " + self.naming_helpers.to_expression_text self + " then " + self.naming_helpers.to_expression_text when_true + " else " + self.naming_helpers.to_expression_text when_false
-        if new_type.is_error then Error.throw (Illegal_Argument.Error "when_true and when_false types do not match") else
-            self.make_op "IIF" [when_true, when_false] new_name new_type=new_type
+        self.make_op "IIF" [when_true, when_false] new_name
 
     ## Returns a column of first non-`Nothing` value on each row of `self` and
        `values` list.
@@ -580,10 +548,8 @@ type Column
     coalesce : (Any | Vector Any) -> Column
     coalesce self values = case values of
         _ : Vector ->
-            fold_type = values.fold self.sql_type c->v-> SQL_Type.merge_type c (get_approximate_type v self.sql_type)
-            if fold_type.is_error then Error.throw (Illegal_Argument.Error "self and values types do not all match") else
-                new_name = self.naming_helpers.function_name "coalesce" [self]+values
-                self.make_op "COALESCE" values new_name new_type=fold_type
+            new_name = self.naming_helpers.function_name "coalesce" [self]+values
+            self.make_op "COALESCE" values new_name
         _ : Array -> self.coalesce (Vector.from_polyglot_array values)
         _ -> self.coalesce [values]
 
@@ -601,10 +567,8 @@ type Column
     min : (Any | Vector Any) -> Column
     min self values = case values of
         _ : Vector ->
-            fold_type = values.fold self.sql_type c->v-> SQL_Type.merge_type c (get_approximate_type v self.sql_type)
-            if fold_type.is_error then Error.throw (Illegal_Argument.Error "self and values types do not all match") else
-                new_name = self.naming_helpers.function_name "min" [self]+values
-                self.make_op "ROW_MIN" values new_name new_type=fold_type
+            new_name = self.naming_helpers.function_name "min" [self]+values
+            self.make_op "ROW_MIN" values new_name
         _ : Array -> self.min (Vector.from_polyglot_array values)
         _ -> self.min [values]
 
@@ -622,10 +586,8 @@ type Column
     max : (Any | Vector Any) -> Column
     max self values = case values of
         _ : Vector ->
-            fold_type = values.fold self.sql_type c->v-> SQL_Type.merge_type c (get_approximate_type v self.sql_type)
-            if fold_type.is_error then Error.throw (Illegal_Argument.Error "self and values types do not all match") else
-                new_name = self.naming_helpers.function_name "max" [self]+values
-                self.make_op "ROW_MAX" values new_name new_type=fold_type
+            new_name = self.naming_helpers.function_name "max" [self]+values
+            self.make_op "ROW_MAX" values new_name
         _ : Array -> self.max (Vector.from_polyglot_array values)
         _ -> self.max [values]
 
@@ -636,7 +598,7 @@ type Column
     is_nothing : Column
     is_nothing self =
        new_name = self.naming_helpers.to_expression_text self + " is null"
-        self.make_unary_op "IS_NULL" new_name new_type=SQL_Type.boolean
+        self.make_unary_op "IS_NULL" new_name
 
     ## UNSTABLE
        Returns a column of booleans, with `True` items at the positions where
@@ -644,7 +606,7 @@ type Column
     is_nan : Column
     is_nan self =
         new_name = self.naming_helpers.function_name "is_nan" [self]
-        self.make_unary_op "IS_NAN" new_name new_type=SQL_Type.boolean
+        self.make_unary_op "IS_NAN" new_name
 
     ## PRIVATE
        Returns a column of booleans, with `True` items at the positions where
@@ -652,7 +614,7 @@ type Column
     is_empty : Column
     is_empty self =
         new_name = self.naming_helpers.to_expression_text self + " is empty"
-        self.make_unary_op "IS_EMPTY" new_name new_type=SQL_Type.boolean
+        self.make_unary_op "IS_EMPTY" new_name
 
     ## Returns a column of booleans, with `True` items at the positions where
        this column does not contain a `Nothing`.
@@ -681,10 +643,11 @@ type Column
     is_blank : Boolean -> Column
     is_blank self treat_nans_as_blank=False =
         new_name = self.naming_helpers.function_name "is_blank" [self]
-        is_blank = case self.sql_type.is_definitely_text of
+        self_type = self.value_type
+        is_blank = case self_type.is_text of
             True -> self.is_empty
             False -> self.is_nothing
-        result = case treat_nans_as_blank && self.sql_type.is_definitely_double of
+        result = case treat_nans_as_blank && self_type.is_floating_point of
             True -> is_blank || self.is_nan
             False -> is_blank
         result.rename new_name
@@ -727,7 +690,7 @@ type Column
         example_rename = Examples.integer_column.rename "My Numbers"
     rename : Text -> Column ! Illegal_Argument
     rename self name = self.naming_helpers.ensure_name_is_valid name <|
-        Column.Value name self.connection self.sql_type self.expression self.context
+        Column.Value name self.connection self.sql_type_reference self.expression self.context
 
     ## UNSTABLE
 
@@ -847,7 +810,7 @@ type Column
     like : Column | Text -> Column
     like self pattern =
         new_name = self.naming_helpers.binary_operation_name "like" self pattern
-        self.make_binary_op "LIKE" pattern new_name new_type=SQL_Type.boolean
+        self.make_binary_op "LIKE" pattern new_name
 
     ## Checks for each element of the column if it is contained within the
        provided vector or column.
@@ -888,7 +851,7 @@ type Column
            begin with). The implementation also ensures that even
            `NULL IN (...)` is coalesced to False, so that negation works as
            expected.
-        is_in_not_null = self.make_op "IS_IN" operands=non_nulls new_name=new_name new_type=SQL_Type.boolean
+        is_in_not_null = self.make_op "IS_IN" operands=non_nulls new_name=new_name
         result = case nulls.not_empty of
             True -> is_in_not_null || self.is_nothing
             False -> is_in_not_null
@@ -907,7 +870,11 @@ type Column
         has_nulls_expression = SQL_Expression.Operation "BOOL_OR" [column.is_nothing.expression]
         has_nulls_subquery = Query.Select [Pair.new "has_nulls" has_nulls_expression] column.context
         new_expr = SQL_Expression.Operation "IS_IN_COLUMN" [self.expression, in_subquery, has_nulls_subquery]
-        Column.Value new_name self.connection SQL_Type.boolean new_expr self.context
+        # This mapping should never be imprecise, if there are errors we need to amend the implementation.
+        sql_type = self.connection.dialect.get_type_mapping.value_type_to_sql Value_Type.Boolean Problem_Behavior.Report_Error
+        new_type_ref = SQL_Type_Reference.from_constant sql_type . catch Inexact_Type_Coercion _->
+            Error.throw (Illegal_State.Error "The dialect "+self.connection.dialect.name+" does not support a boolean type. The implementation of `is_in` should be revised to account for this. This is an internal issue with the Database library.")
+        Column.Value new_name self.connection new_type_ref new_expr self.context
 
     ## Parsing values is not supported in database columns.
     @type Widget_Helpers.parse_type_selector
@@ -966,12 +933,16 @@ type Column
 
     ## PRIVATE
     as_internal : Internal_Column
-    as_internal self = Internal_Column.Value self.name self.sql_type self.expression
+    as_internal self = Internal_Column.Value self.name self.sql_type_reference self.expression
 
     ## Provides a simplified text representation for display in the REPL and errors.
     to_text : Text
     to_text self = "(Database Column "+self.name.to_text+")"
 
+    ## PRIVATE
+    sql_type : SQL_Type
+    sql_type self = self.sql_type_reference.get
+
     ## PRIVATE
     naming_helpers self = self.connection.dialect.get_naming_helpers
 
@@ -981,27 +952,40 @@ type Column
 var_args_functions = ['is_in', 'coalesce', 'min', 'max']
 
 ## PRIVATE
-   TODO: Revisit this as part of the column value type work.
-get_approximate_type value default = case value of
-    _ : Column -> value.sql_type
-    Nothing -> default
-    _ -> SQL_Type.approximate_type value
+   Finds the type of an argument to a column operation.
+
+   If the argument is a column, the type of that column is returned. If it is an
+   Enso value, the smallest `Value_Type` that can fit that value will be
+   returned (but the Database is free to widen it to the closest type that it
+   supports without warning).
+
+   Since there is no special type for `Nothing` and `Nothing` technically can
+   fit any nullable type, it usually needs to be handled specially. This method
+   returns `Nothing` if the value is `Nothing` - so the caller can try to treat
+   this value as fitting any type, or accordingly to specific semantics of each
+   method.
+find_argument_type : Any -> Value_Type | Nothing
+find_argument_type value = case value of
+    _ : Column -> value.value_type
+    _ : Internal_Column -> Panic.throw (Illegal_State.Error "This path is not implemented. If this is ever reached, that is a bug in the Database library.")
+    Nothing -> Nothing
+    _ -> Enso_Types.most_specific_value_type value use_smallest=True
 
 ## PRIVATE
    Helper for case case_sensitivity based text operations
 make_text_case_op left op other case_sensitivity new_name =
     result = Value_Type.expect_text left.value_type <| case case_sensitivity of
-        Case_Sensitivity.Default -> left.make_binary_op op other new_type=SQL_Type.boolean
+        Case_Sensitivity.Default -> left.make_binary_op op other
         Case_Sensitivity.Sensitive ->
             make_sensitive column =
                 column.make_unary_op "MAKE_CASE_SENSITIVE" "MAKE_CASE_SENSITIVE"
             cs_other = if other.is_a Column then make_sensitive other else other
-            (make_sensitive left) . make_binary_op op cs_other new_type=SQL_Type.boolean
+            (make_sensitive left) . make_binary_op op cs_other
         Case_Sensitivity.Insensitive locale -> Helpers.assume_default_locale locale <|
             fold_case column =
                 column.make_unary_op "FOLD_CASE" "FOLD_CASE"
             ci_other = if other.is_a Column then fold_case other else other.to_case Case.Lower
-            (fold_case left) . make_binary_op op ci_other new_type=SQL_Type.boolean
+            (fold_case left) . make_binary_op op ci_other
     result.rename new_name
 
 ## PRIVATE
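The new `find_argument_type` helper above replaces the old `get_approximate_type`. A sketch of its resolution rules, assuming a `table` with a column `"x"` (the calls and expected results paraphrase the docstring rather than quote test output):

```
# For a column argument, the column's own `Value_Type` is returned.
t1 = find_argument_type (table.at "x")

# `Nothing` resolves to `Nothing`, signalling "fits any nullable type";
# callers decide how to treat it per operation.
t2 = find_argument_type Nothing

# For a plain Enso value, the smallest fitting `Value_Type` is chosen
# (`use_smallest=True`); the database may still widen it silently.
t3 = find_argument_type 42
```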
@@ -1020,6 +1004,6 @@ make_equality_check_with_floating_point_handling column other op =
         _ : Decimal ->
             problem_builder.reportFloatingPointEquality -1
         _ -> Nothing
-    result = column.make_binary_op op other new_type=SQL_Type.boolean
+    result = column.make_binary_op op other
     Problem_Behavior.Report_Warning.attach_problems_after result <|
         Java_Problems.parse_aggregated_problems problem_builder.getProblems
 
@@ -9,6 +9,7 @@ import project.Connection.Connection.Connection
 import project.Data.SQL_Statement.SQL_Statement
 import project.Data.SQL_Type.SQL_Type
 import project.Data.Table.Table
+import project.Internal.Column_Fetcher.Column_Fetcher
 import project.Internal.IR.From_Spec.From_Spec
 import project.Internal.IR.Internal_Column.Internal_Column
 import project.Internal.IR.Order_Descriptor.Order_Descriptor
@@ -16,6 +17,9 @@ import project.Internal.IR.Query.Query
 import project.Internal.Postgres.Postgres_Dialect
 import project.Internal.Redshift.Redshift_Dialect
 import project.Internal.SQLite.SQLite_Dialect
+import project.Internal.SQL_Type_Mapping.SQL_Type_Mapping
+import project.Internal.Statement_Setter.Statement_Setter
+from project.Errors import Unsupported_Database_Operation
 
 ## PRIVATE
 
@@ -37,22 +41,19 @@ type Dialect
         _ = [query]
         Unimplemented.throw "This is an interface only."
 
-    ## PRIVATE
-       Deduces the result type for an aggregation operation.
-
-       The provided aggregate is assumed to contain only already resolved columns.
-       You may need to transform it with `resolve_aggregate` first.
-    resolve_target_sql_type : Aggregate_Column -> SQL_Type
-    resolve_target_sql_type self aggregate =
-        _ = [aggregate]
-        Unimplemented.throw "This is an interface only."
-
     ## PRIVATE
        Prepares an ordering descriptor.
 
        One of the purposes of this method is to verify if the expected ordering
        settings are supported by the given database backend.
-    prepare_order_descriptor : Internal_Column -> Sort_Direction -> Text_Ordering -> Order_Descriptor
+
+       Arguments:
+       - internal_column: the column to order by.
+       - sort_direction: the direction of the ordering.
+       - text_ordering: If provided, specifies that the column should be treated
+         as text values according to the provided ordering. For non-text types,
+         it should be set to `Nothing`.
+    prepare_order_descriptor : Internal_Column -> Sort_Direction -> Nothing | Text_Ordering -> Order_Descriptor
     prepare_order_descriptor self internal_column sort_direction text_ordering =
         _ = [internal_column, sort_direction, text_ordering]
         Unimplemented.throw "This is an interface only."
@@ -87,6 +88,40 @@ type Dialect
     get_naming_helpers self =
         Unimplemented.throw "This is an interface only."
 
+    ## PRIVATE
+       Returns the mapping between SQL types of this dialect and Enso
+       `Value_Type`.
+    get_type_mapping : SQL_Type_Mapping
+    get_type_mapping self =
+        Unimplemented.throw "This is an interface only."
+
+    ## PRIVATE
+       Creates a `Column_Fetcher` used to fetch data from a result set and build
+       an in-memory column from it, based on the given column type.
+    make_column_fetcher_for_type : SQL_Type -> Column_Fetcher
+    make_column_fetcher_for_type self sql_type =
+        _ = sql_type
+        Unimplemented.throw "This is an interface only."
+
+    ## PRIVATE
+       Returns a helper object that handles the logic of setting values in a
+       prepared statement.
+
+       This object may provide custom logic for handling dialect-specific
+       handling of some types.
+    get_statement_setter : Statement_Setter
+    get_statement_setter self =
+        Unimplemented.throw "This is an interface only."
+
+    ## PRIVATE
+       Checks if the given aggregate is supported.
+
+       Should raise an appropriate dataflow error if not, or just return `True`.
+    check_aggregate_support : Aggregate_Column -> Boolean ! Unsupported_Database_Operation
+    check_aggregate_support self aggregate =
+        _ = aggregate
+        Unimplemented.throw "This is an interface only."
+
 ## PRIVATE
 
    The dialect of SQLite databases.
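The new `Dialect` interface methods added above let each backend supply its own type mapping and fetching logic. A hypothetical sketch of how a dialect might implement two of them (`My_Dialect`, `My_Type_Mapping` and `default_fetcher_for_value_type` are illustrative names, not part of this PR):

```
type My_Dialect
    ## PRIVATE
       Returns this dialect's SQL type <-> `Value_Type` mapping.
    get_type_mapping : SQL_Type_Mapping
    get_type_mapping self = My_Type_Mapping

    ## PRIVATE
       Picks a fetcher by translating the SQL type into a `Value_Type`
       and delegating to a shared default; a dialect with vendor-specific
       types (e.g. Postgres dates) would special-case them here first.
    make_column_fetcher_for_type : SQL_Type -> Column_Fetcher
    make_column_fetcher_for_type self sql_type =
        value_type = self.get_type_mapping.sql_type_to_value_type sql_type
        default_fetcher_for_value_type value_type
```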
@@ -28,10 +28,8 @@ type SQL_Fragment
        the query.
 
        Arguments:
-       - sql_type: The expected SQL type of `object`.
-       - object: A value that will be interpolated into the query, interpreted
-         as having the type `sql_type`.
-    Interpolation sql_type:SQL_Type object:Any
+       - object: A value that will be interpolated into the query.
+    Interpolation object:Any
 
 type Builder
     ## Creates a Builder representing and empty code fragment.
@@ -51,11 +49,10 @@ type Builder
     ## Creates a Builder representing an interpolation of the given object.
 
        Arguments:
-       - sql_type: The expected SQL type of `object`.
        - object: The object to be interpolated into the query as if it has the type
          given by `sql_type`.
-    interpolation : SQL_Type -> Any -> Builder
-    interpolation sql_type object = Builder.Value (Vector_Builder.from_vector [SQL_Fragment.Interpolation sql_type object])
+    interpolation : Any -> Builder
+    interpolation object = Builder.Value (Vector_Builder.from_vector [SQL_Fragment.Interpolation object])
 
     ## Joins a vector of code fragments with the provided separator.
 
@@ -146,7 +143,7 @@ optimize_fragments fragments =
                     Nothing -> SQL_Fragment.Code_Part code
                     SQL_Fragment.Code_Part other -> SQL_Fragment.Code_Part other+code
                 State.put SQL_Fragment.Code_Part new_part
-            SQL_Fragment.Interpolation _ _ ->
+            SQL_Fragment.Interpolation _ ->
                 case last_part of
                     Nothing -> Nothing
                     SQL_Fragment.Code_Part _ ->
@@ -44,7 +44,7 @@ type SQL_Statement
             SQL_Fragment.Code_Part code -> code
             # TODO at some point we may try more sophisticated serialization based on data type
             # TODO #183734954: date and time formatting is limited and will lose sub-second precision and timezone offset.
-            SQL_Fragment.Interpolation _ obj -> case obj of
+            SQL_Fragment.Interpolation obj -> case obj of
                 Number -> obj.to_text
                 Date_Time -> "'" + (obj.format "yyyy-MM-dd HH:mm:ss") + "'"
                 Date -> "'" + (obj.format "yyyy-MM-dd") + "'"
@@ -56,14 +56,13 @@ type SQL_Statement
 
        Returns a pair consisting of the SQL code with holes for values and
        a list for values that should be substituted.
-    # prepare : [Text, Vector Any]
     prepare self =
         to_code fragment = case fragment of
             SQL_Fragment.Code_Part code -> code
-            SQL_Fragment.Interpolation _ _ -> "?"
+            SQL_Fragment.Interpolation _ -> "?"
         to_subst fragment = case fragment of
             SQL_Fragment.Code_Part _ -> []
-            SQL_Fragment.Interpolation typ obj -> [[obj, typ]]
+            SQL_Fragment.Interpolation obj -> [obj]
         sql = self.fragments.map to_code . join ""
         substitutions = self.fragments.flat_map to_subst
         [sql, substitutions]
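The `prepare` pattern above — flattening fragments into SQL text with `?` holes plus a flat list of substitution values — can be sketched in Python with `sqlite3` (the fragment tuples and table below are illustrative, not the Enso data structures):

```python
import sqlite3

# Hypothetical fragment list mirroring SQL_Fragment: ("code", text) parts and
# ("interpolation", value) parts, flattened into SQL-with-holes + substitutions.
fragments = [("code", "SELECT * FROM t WHERE a = "), ("interpolation", 42),
             ("code", " AND b = "), ("interpolation", "x")]

# Code parts keep their text; each interpolation becomes a "?" placeholder.
sql = "".join(payload if kind == "code" else "?" for kind, payload in fragments)
# Interpolated values are collected, in order, as the substitution list.
substitutions = [payload for kind, payload in fragments if kind == "interpolation"]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
conn.execute("INSERT INTO t VALUES (42, 'x')")
rows = conn.execute(sql, substitutions).fetchall()
```

After this change the substitution list carries bare values (`[obj]`) rather than value/type pairs, matching the removal of `sql_type` from `Interpolation`.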
@@ -75,8 +74,8 @@ type SQL_Statement
     to_js_object self =
         jsonify fragment = case fragment of
             SQL_Fragment.Code_Part code -> JS_Object.from_pairs [["sql_code", code]]
-            SQL_Fragment.Interpolation typ obj ->
-                inner = JS_Object.from_pairs [["value", obj], ["expected_sql_type", typ.name]]
+            SQL_Fragment.Interpolation obj ->
+                inner = JS_Object.from_pairs [["value", obj]]
                 JS_Object.from_pairs [["sql_interpolation", inner]]
         fragments = self.internal_fragments.map jsonify
         JS_Object.from_pairs [["query", fragments]]
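The serialized shape produced by `to_js_object` after this change can be sketched in Python (the fragment tuples are illustrative stand-ins for `SQL_Fragment`; note interpolations now carry only the value, with no `expected_sql_type` field):

```python
import json

fragments = [("code", "SELECT * FROM t WHERE a = "), ("interpolation", 42)]

def jsonify(fragment):
    kind, payload = fragment
    if kind == "code":
        return {"sql_code": payload}
    # Interpolations serialize just the value after this change.
    return {"sql_interpolation": {"value": payload}}

doc = {"query": [jsonify(f) for f in fragments]}
encoded = json.dumps(doc)
```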
@@ -1,11 +1,10 @@
 from Standard.Base import all
 import Standard.Base.Errors.Illegal_Argument.Illegal_Argument
 
-from Standard.Base.Data.Ordering import all
-
 import project.Data.Column.Column
 
 polyglot java import java.sql.Types
+polyglot java import java.sql.ResultSetMetaData
 
 ## Represents an internal SQL data-type.
 type SQL_Type
@@ -15,182 +14,31 @@ type SQL_Type
        Arguments:
        - typeid: a numerical type id, as defined in `java.sql.Types`.
        - name: a database-specific type name, used for pretty printing.
-    Value typeid name
+       - precision: For character types, specifies their length.
+         See `ResultSetMetaData.getPrecision`.
+       - scale: The scale for fixed precision numeric types. Not applicable for
+         other types, so it's value is undefined and will usually just be 0.
+         See `ResultSetMetaData.getScale`.
+       - nullable: Specifies if the given column is nullable. May be `Nothing`
+         if that is unknown / irrelevant for the type.
+         TODO: the precise meaning of this will be revised with #5872.
+    Value (typeid : Integer) (name : Text) (precision : Nothing | Integer = Nothing) (scale : Integer = 0) (nullable : Boolean | Nothing = Nothing)
 
-    ## The SQL representation of `Boolean` type.
-    boolean : SQL_Type
-    boolean = SQL_Type.Value Types.BOOLEAN "BOOLEAN"
-
-    ## The SQL representation of `Integer` type.
-    integer : SQL_Type
-    integer = SQL_Type.Value Types.INTEGER "INTEGER"
-
-    ## The SQL representation of the `BIGINT` type.
-    bigint : SQL_Type
-    bigint = SQL_Type.Value Types.BIGINT "BIGINT"
-
-    ## The SQL representation of the `TINYINT` type.
-    tinyint : SQL_Type
-    tinyint = SQL_Type.Value Types.TINYINT "TINYINT"
-
-    ## The SQL representation of the `SMALLINT` type.
-    smallint : SQL_Type
-    smallint = SQL_Type.Value Types.SMALLINT "SMALLINT"
-
-    ## The SQL type representing decimal numbers.
-    decimal : SQL_Type
-    decimal = SQL_Type.Value Types.DECIMAL "DECIMAL"
-
-    ## The SQL type representing decimal numbers.
-    real : SQL_Type
-    real = SQL_Type.Value Types.REAL "REAL"
-
-    ## The SQL type representing double-precision floating-point numbers.
-    double : SQL_Type
-    double = SQL_Type.Value Types.DOUBLE "DOUBLE PRECISION"
-
-    ## The SQL type representing a general numeric type.
-    numeric : SQL_Type
-    numeric = SQL_Type.Value Types.NUMERIC "NUMERIC"
-
-    ## The SQL type representing one of the supported textual types.
-       It seems that JDBC treats the `TEXT` and `VARCHAR` types as interchangeable.
-    text : SQL_Type
-    text = SQL_Type.Value Types.VARCHAR "VARCHAR"
-
-    ## The SQL type representing a binary object.
-    blob : SQL_Type
-    blob = SQL_Type.Value Types.BLOB "BLOB"
-
-    ## The SQL type representing a date type.
-    date : SQL_Type
-    date = SQL_Type.Value Types.DATE "DATE"
-
-    ## The SQL type representing a time type.
-    time : SQL_Type
-    time = SQL_Type.Value Types.TIME "TIME"
-
-    ## The SQL type representing a time type.
-    date_time : SQL_Type
-    date_time = SQL_Type.Value Types.TIMESTAMP_WITH_TIMEZONE "TIMESTAMP"
-
-    ## The SQL type representing a null column.
+    ## The SQL type representing a null value.
     null : SQL_Type
     null = SQL_Type.Value Types.NULL "NULL"
 
-    ## ADVANCED
-       Given an Enso value gets the approximate SQL type.
-    approximate_type : Any -> SQL_Type ! Illegal_Argument
-    approximate_type value = case value of
-        _ : Column -> value.sql_type
-        _ : Boolean -> SQL_Type.boolean
-        _ : Integer -> if value.abs >= 2^32 then SQL_Type.bigint else SQL_Type.integer
-        _ : Decimal -> SQL_Type.double
-        _ : Text -> SQL_Type.text
-        _ : Date -> SQL_Type.date
-        _ : Time_Of_Day -> SQL_Type.time
-        _ : Date_Time -> SQL_Type.date_time
-        Nothing -> SQL_Type.null
-        _ -> Error.throw (Illegal_Argument.Error "Unsupported type.")
-
     ## PRIVATE
-       Returns the SQL type that is the result of applying an operation to the
-       two given types.
-    merge_type : SQL_Type -> SQL_Type -> SQL_Type ! Illegal_Argument
-    merge_type left right =
-        if left.typeid == right.typeid then left else
-            if left.is_null.not && right.is_null then left else
-                if left.is_null && right.is_null.not then right else
-                    case left.is_definitely_numeric && right.is_definitely_numeric of
-                        True -> if left.is_definitely_integer && right.is_definitely_integer then merge_integer_type left right else
-                            merge_number_type left right
-                        False -> if left.is_definitely_text && right.is_definitely_text then SQL_Type.text else
-                            Error.throw (Illegal_Argument.Error "Unmatched types for operation.")
+       Constructs a `SQL_Type` from a `ResultSetMetaData` object.
+    from_metadata metadata ix =
+        typeid = metadata.getColumnType ix
+        typename = metadata.getColumnTypeName ix
+        precision = case metadata.getPrecision ix of
+            0 -> Nothing
+            p : Integer -> p
+        scale = metadata.getScale ix
+        nullable_id = metadata.isNullable ix
+        nullable = if nullable_id == ResultSetMetaData.columnNoNulls then False else
+            if nullable_id == ResultSetMetaData.columnNullable then True else
+                Nothing
+        SQL_Type.Value typeid typename precision scale nullable
-
-    ## PRIVATE
-       Returns True if this type represents an integer or a double.
-
-       It only handles the standard types so it may return false negatives for
-       non-standard ones.
-    is_definitely_numeric : Boolean
-    is_definitely_numeric self = self.is_definitely_double || self.is_definitely_integer
-
-    ## PRIVATE
-       Returns True if this type represents an integer.
-
-       It only handles the standard types so it may return false negatives for
-       non-standard ones.
-    is_definitely_integer : Boolean
-    is_definitely_integer self =
-        [Types.INTEGER, Types.BIGINT, Types.SMALLINT, Types.TINYINT].contains self.typeid
-
-    ## PRIVATE
-       Returns True if this type represents a boolean.
-
-       It only handles the standard types so it may return false negatives for
-       non-standard ones.
-    is_definitely_boolean : Boolean
-    is_definitely_boolean self =
-        [Types.BOOLEAN, Types.BIT].contains self.typeid
-
-    ## PRIVATE
-       Returns True if this type represents a floating point number.
-
-       It only handles the standard types so it may return false negatives for
-       non-standard ones.
-    is_definitely_double : Boolean
-    is_definitely_double self =
-        [Types.FLOAT, Types.DOUBLE, Types.REAL].contains self.typeid
-
-    ## PRIVATE
-       Returns True if this type represents a Text.
-    is_definitely_text : Boolean
-    is_definitely_text self =
-        [Types.VARCHAR, Types.LONGVARCHAR, Types.NVARCHAR, Types.LONGNVARCHAR].contains self.typeid
-
-    ## PRIVATE
-       Returns True if this type represents a Text, using heuristics that may
-       match more possible types.
-    is_likely_text : Boolean
-    is_likely_text self =
-        self.is_definitely_text || self.name.contains "text" Case_Sensitivity.Insensitive
-
-    ## PRIVATE
-    is_null : Boolean
-    is_null self = self.typeid == Types.NULL
-
-## PRIVATE
-   Joins two integer SQL types into the larger one.
-merge_integer_type : SQL_Type -> SQL_Type -> SQL_Type
-merge_integer_type left right =
-    integer_types = [Types.TINYINT, Types.SMALLINT, Types.INTEGER, Types.BIGINT]
-    left_index = integer_types.index_of left.typeid
-    right_index = integer_types.index_of right.typeid
-    new_index = left_index.max right_index
-    [SQL_Type.tinyint, SQL_Type.smallint, SQL_Type.integer, SQL_Type.bigint].at new_index
-
-## PRIVATE
-   Joins two numeric SQL types into the larger one.
-   One of the types must be non-integer (otherwise use merge_integer_type).
-merge_number_type : SQL_Type -> SQL_Type -> SQL_Type
-merge_number_type left right = if left.is_definitely_integer then merge_number_type right left else
-    numeric_types = [Types.NUMERIC, Types.DECIMAL, Types.FLOAT, Types.REAL, Types.DOUBLE]
-    left_index = numeric_types.index_of left.typeid
-    right_index = numeric_types.index_of right.typeid
-    if right_index.is_nothing then left else
-        new_index = left_index.max right_index
-        [SQL_Type.numeric, SQL_Type.decimal, SQL_Type.real, SQL_Type.real, SQL_Type.double].at new_index
-
-## PRIVATE
-type SQL_Type_Comparator
-    compare x y =
-        if x.typeid == y.typeid then Ordering.Equal else
-            Nothing
-
-    hash x = x.typeid.hashCode
-
-Comparable.from (_:SQL_Type) = SQL_Type_Comparator
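The new `from_metadata` constructor normalizes two quirks of JDBC column metadata: `ResultSetMetaData.isNullable` is three-valued, and a reported precision of `0` means "not applicable". A minimal Python analogue of that normalization (the constant values below are the actual `java.sql.ResultSetMetaData` values; the function names are illustrative):

```python
# JDBC constants: columnNoNulls = 0, columnNullable = 1, columnNullableUnknown = 2.
COLUMN_NO_NULLS, COLUMN_NULLABLE, COLUMN_NULLABLE_UNKNOWN = 0, 1, 2

def parse_nullability(nullable_id):
    # Map JDBC's three-valued flag to True / False / None (unknown).
    if nullable_id == COLUMN_NO_NULLS:
        return False
    if nullable_id == COLUMN_NULLABLE:
        return True
    return None

def parse_precision(precision):
    # getPrecision returns 0 when precision is not applicable to the type.
    return None if precision == 0 else precision
```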
@@ -8,7 +8,7 @@ import Standard.Base.Errors.Illegal_Argument.Illegal_Argument
 import Standard.Base.Errors.Illegal_State.Illegal_State
 import Standard.Base.Errors.Unimplemented.Unimplemented
 
-from Standard.Table import Auto_Detect, Aggregate_Column, Data_Formatter, Column_Selector, Sort_Column, Match_Columns, Position, Set_Mode, Auto
+from Standard.Table import Auto_Detect, Aggregate_Column, Data_Formatter, Column_Selector, Sort_Column, Match_Columns, Position, Set_Mode, Auto, Value_Type
 import Standard.Table.Data.Expression.Expression
 import Standard.Table.Data.Expression.Expression_Error
 import Standard.Table.Data.Join_Condition.Join_Condition
@@ -16,8 +16,6 @@ import Standard.Table.Data.Join_Kind.Join_Kind
 import Standard.Table.Data.Report_Unmatched.Report_Unmatched
 import Standard.Table.Data.Row.Row
 import Standard.Table.Data.Table.Table as Materialized_Table
-import Standard.Table.Data.Value_Type.Auto
-import Standard.Table.Data.Value_Type.Value_Type
 import Standard.Table.Internal.Aggregate_Column_Helper
 import Standard.Table.Internal.Java_Exports
 import Standard.Table.Internal.Table_Helpers
@@ -41,6 +39,8 @@ import project.Internal.IR.From_Spec.From_Spec
 import project.Internal.IR.Internal_Column.Internal_Column
 import project.Internal.IR.SQL_Join_Kind.SQL_Join_Kind
 import project.Internal.IR.Query.Query
+import project.Internal.SQL_Type_Reference.SQL_Type_Reference
+from project.Data.Column import find_argument_type
 
 from project.Errors import Unsupported_Database_Operation, Integrity_Error, Unsupported_Name
 
@@ -511,13 +511,6 @@ type Table
         column = self.compute expression on_problems
         self.filter column Filter_Condition.Is_True
 
-    ## PRIVATE
-    with_no_rows self =
-        false_expression = SQL_Expression.Operation "==" [SQL_Expression.Constant SQL_Type.integer 1, SQL_Expression.Constant SQL_Type.integer 2]
-        new_filters = self.context.where_filters + [false_expression]
-        new_ctx = self.context.set_where_filters new_filters
-        self.updated_context new_ctx
-
     ## UNSTABLE
        Creates a new Table with the specified range of rows from the input
        Table.
@@ -667,10 +660,15 @@ type Table
     compute : Text -> Problem_Behavior -> Column ! No_Such_Column | Invalid_Value_Type | Expression_Error
     compute self expression on_problems=Report_Warning =
         get_column name = self.at name
+        type_mapping = self.connection.dialect.get_type_mapping
         make_constant value =
-            new_type = SQL_Type.approximate_type value
-            other = SQL_Expression.Constant new_type value
-            Column.Value ("Constant_" + UUID.randomUUID.to_text) self.connection new_type other self.context
+            argument_value_type = find_argument_type value
+            sql_type = case argument_value_type of
+                Nothing -> SQL_Type.null
+                _ -> type_mapping.value_type_to_sql argument_value_type Problem_Behavior.Ignore
+            expr = SQL_Expression.Constant value
+            new_type_ref = SQL_Type_Reference.from_constant sql_type
+            Column.Value ("Constant_" + UUID.randomUUID.to_text) self.connection new_type_ref expr self.context
         new_column = Expression.evaluate expression get_column make_constant "Standard.Database.Data.Column" "Column" Column.var_args_functions
         problems = Warning.get_all new_column . map .value
         result = new_column.rename (self.connection.dialect.get_naming_helpers.sanitize_name expression)
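The `make_constant` change replaces the old `approximate_type` guess with `find_argument_type`, which derives a value type from the constant's runtime type (with `Nothing` mapping to a typeless NULL). A hedged Python sketch of that dispatch — the type names are stand-ins for Enso's `Value_Type` constructors, not the real mapping rules:

```python
from datetime import date, datetime

def find_argument_type(value):
    # None constants get no type; they are interpolated as a plain NULL.
    if value is None:
        return None
    # Order matters: bool is a subclass of int, datetime a subclass of date.
    for typ, name in [(bool, "Boolean"), (int, "Integer"), (float, "Float"),
                      (str, "Char"), (datetime, "Date_Time"), (date, "Date")]:
        if isinstance(value, typ):
            return name
    raise TypeError("Unsupported constant type")
```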
@@ -706,28 +704,26 @@ type Table
         self.read max_rows=max_rows . rows
 
     ## Returns the first row of the table.
 
-       In the database backend, it first materializes the table to in-memory.
     first_row : Row ! Index_Out_Of_Bounds
     first_row self =
         self.read max_rows=1 . rows . first
 
     ## Returns the second row of the table.
 
-       In the database backend, it first materializes the table to in-memory.
     second_row : Row ! Index_Out_Of_Bounds
     second_row self =
         self.read max_rows=2 . rows . second
 
     ## Returns the last row of the table.
 
-       In the database backend, it first materializes the table to in-memory.
+       In the database backend, this function has to scan through all the
+       results of the query.
     last_row : Row ! Index_Out_Of_Bounds
     last_row self =
         if self.internal_columns.is_empty then Error.throw (Illegal_Argument.Error "Cannot create a table with no columns.") else
             sql = self.to_sql
-            expected_types = self.internal_columns.map .sql_type
-            self.connection.read_last_row sql expected_types . rows . first
+            column_type_suggestions = self.internal_columns.map .sql_type_reference
+            table = self.connection.read_statement sql column_type_suggestions last_row_only=True
+            table.rows.first
 
     ## ALIAS sort
        Sorts the rows of the table according to the specified columns and order.
@@ -801,13 +797,14 @@ type Table
         columns_for_ordering = Table_Helpers.prepare_order_by self.columns columns problem_builder
         problem_builder.attach_problems_before on_problems <|
             new_order_descriptors = columns_for_ordering.map selected_column->
-                internal_column = selected_column.column
+                column = selected_column.column
                 associated_selector = selected_column.associated_selector
+                effective_text_ordering = if column.value_type.is_text then text_ordering else Nothing
                 ## FIXME [RW] this is only needed because `Vector.map` does not
                    propagate dataflow errors correctly. See:
                    https://www.pivotaltracker.com/story/show/181057718
                 Panic.throw_wrapped_if_error <|
-                    self.connection.dialect.prepare_order_descriptor internal_column associated_selector.direction text_ordering
+                    self.connection.dialect.prepare_order_descriptor column associated_selector.direction effective_text_ordering
             new_ctx = self.context.add_orders new_order_descriptors
             self.updated_context new_ctx
 
@@ -1201,19 +1198,37 @@ type Table
         resolved_aggregates = validated.valid_columns
         key_expressions = key_columns.map .expression
         new_ctx = self.context.set_groups key_expressions
+        ## TODO [RW] here we will perform as many fetches as there are
+           aggregate columns, but technically we could perform just one
+           fetch fetching all column types - TODO we should do that. We can
+           do it here by creating a builder that will gather all requests
+           from the executed callbacks and create Lazy references that all
+           point to a single query.
+           See #6118.
+        infer_from_database_callback expression =
+            SQL_Type_Reference.new self.connection self.context expression
+        dialect = self.connection.dialect
+        type_mapping = dialect.get_type_mapping
+        infer_return_type op_kind columns expression =
+            type_mapping.infer_return_type infer_from_database_callback op_kind columns expression
         results = resolved_aggregates.map p->
             agg = p.second
             new_name = p.first
-            Aggregate_Helper.make_aggregate_column self agg new_name . catch
+            result = Aggregate_Helper.make_aggregate_column agg new_name dialect infer_return_type
+            ## If the `result` did contain an error, we catch it to be
+               able to store it in a vector and then we will partition the
+               created columns and failures.
+            result.catch Any error->
+                Wrapped_Error.Value error
 
-        partitioned = results.partition (_.is_a Internal_Column)
+        partitioned = results.partition (_.is_a Wrapped_Error)
 
         ## When working on join we may encounter further issues with having
            aggregate columns exposed directly, it may be useful to re-use
           the `lift_aggregate` method to push the aggregates into a
           subquery.
-        new_columns = partitioned.first
-        problems = partitioned.second
+        new_columns = partitioned.second
+        problems = partitioned.first.map .value
         on_problems.attach_problems_before problems <|
             self.updated_context_and_columns new_ctx new_columns subquery=True
 
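The `Wrapped_Error` pattern above — catching each aggregate's failure into a wrapper so successes and failures can sit in one list, then partitioning — can be sketched in Python (the class and helper names are illustrative):

```python
class WrappedError:
    """Wrapper so a failure can be stored alongside successful results."""
    def __init__(self, error):
        self.error = error

def safe(compute):
    # Catch any failure and wrap it instead of aborting the whole list.
    try:
        return compute()
    except Exception as e:
        return WrappedError(e)

results = [safe(f) for f in [lambda: 1, lambda: 1 // 0, lambda: 3]]
# Partition into reported problems and successfully created columns.
problems = [r.error for r in results if isinstance(r, WrappedError)]
columns = [r for r in results if not isinstance(r, WrappedError)]
```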
@@ -1356,8 +1371,8 @@ type Table
                 Error.throw (Illegal_Argument.Error "Cannot create a table with no columns.")
             False ->
                 sql = preprocessed.to_sql
-                expected_types = preprocessed.internal_columns.map .sql_type
-                self.connection.read_statement sql expected_types
+                column_type_suggestions = preprocessed.internal_columns.map .sql_type_reference
+                self.connection.read_statement sql column_type_suggestions
 
     ## UNSTABLE
 
@@ -1391,8 +1406,10 @@ type Table
             self.connection.dialect.generate_sql query
         count_table = self.connection.read_statement count_query
         counts = if cols.is_empty then [] else count_table.columns.map c-> c.at 0
-        types = cols.map c-> c.sql_type.name
-        Materialized_Table.new [["Column", cols.map .name], ["Items Count", counts], ["SQL Type", types]]
+        type_mapping = self.connection.dialect.get_type_mapping
+        types = cols.map col->
+            type_mapping.sql_type_to_value_type col.sql_type_reference.get
+        Materialized_Table.new [["Column", cols.map .name], ["Items Count", counts], ["Value Type", types]]
 
     ## PRIVATE
 
@@ -1402,7 +1419,7 @@ type Table
        - internal: The internal column to use for creating a column.
     make_column : Internal_Column -> Column
     make_column self internal =
-        Column.Value internal.name self.connection internal.sql_type internal.expression self.context
+        Column.Value internal.name self.connection internal.sql_type_reference internal.expression self.context
 
     ## PRIVATE
     columns_helper : Table_Column_Helper
@@ -1470,7 +1487,7 @@ type Table
             _ -> Error.throw <| Illegal_State.Error "Inserting can only be performed on tables as returned by `query`, any further processing is not allowed."
         # TODO [RW] before removing the PRIVATE tag, add a check that no bad stuff was done to the table as described above
         pairs = self.internal_columns.zip values col-> value->
-            [col.name, SQL_Expression.Constant col.sql_type value]
+            [col.name, SQL_Expression.Constant value]
         query = self.connection.dialect.generate_sql <| Query.Insert table_name pairs
         affected_rows = self.connection.execute_update query
         case affected_rows == 1 of
@@ -1553,7 +1570,10 @@ type Table
 make_table : Connection -> Text -> Vector -> Context -> Table
 make_table connection table_name columns ctx =
     if columns.is_empty then Error.throw (Illegal_State.Error "Unexpectedly attempting to create a Database Table with no columns. This is a bug in the Database library.") else
-        cols = columns.map (p -> Internal_Column.Value p.first p.second (SQL_Expression.Column table_name p.first))
+        cols = columns.map p->
+            name = p.first
+            sql_type = p.second
+            Internal_Column.Value name (SQL_Type_Reference.from_constant sql_type) (SQL_Expression.Column table_name name)
         Table.Value table_name connection cols ctx
 
 ## PRIVATE
@@ -1626,3 +1646,8 @@ freshen_columns : Vector Text -> Vector Internal_Column -> Vector Internal_Colum
 freshen_columns used_names columns =
     new_names = fresh_names used_names (columns.map .name)
     Helpers.rename_internal_columns columns new_names
+
+## PRIVATE
+type Wrapped_Error
+    ## PRIVATE
+    Value value
@@ -13,62 +13,82 @@ import project.Internal.IR.Internal_Column.Internal_Column
 from project.Errors import Unsupported_Database_Operation
 
 ## PRIVATE
-   Creates an `Internal_Column` that computes the specified statistic.
-   It returns a dataflow error if the given operation is not supported.
+   Creates an `Internal_Column` that will represent the computed aggregate.
 
-   The provided `aggregate` is assumed to contain only already resolved columns.
-   You may need to transform it with `resolve_aggregate` first.
-make_aggregate_column : Table -> Aggregate_Column -> Text -> Internal_Column
-make_aggregate_column table aggregate new_name =
-    sql_type = table.connection.dialect.resolve_target_sql_type aggregate
-    expression = make_expression aggregate table.connection.dialect
-    Internal_Column.Value new_name sql_type expression
-
-## PRIVATE
-   Creates an Internal Representation of the expression that computes a
-   requested statistic.
-make_expression : Aggregate_Column -> Dialect -> SQL_Expression
-make_expression aggregate dialect =
+   Arguments:
+   - aggregate: The description of the aggregation to compute.
+   - new_name: The name for the created column.
+   - dialect: The dialect of the database to generate the SQL for.
+   - infer_return_type: A function that takes 3 arguments (name of the
+     operation, list of input columns and a raw SQL IR Expression) and returns
+     the inferred type for the aggregation.
+make_aggregate_column : Aggregate_Column -> Text -> Dialect -> SQL_Expression
+make_aggregate_column aggregate new_name dialect infer_return_type =
     is_non_empty_selector v = v.is_nothing.not && v.not_empty
-    case aggregate of
-        Group_By c _ -> c.expression
-        Count _ -> SQL_Expression.Operation "COUNT_ROWS" []
+    simple_aggregate op_kind columns =
+        expression = SQL_Expression.Operation op_kind (columns.map .expression)
+        sql_type_ref = infer_return_type op_kind columns expression
+        Internal_Column.Value new_name sql_type_ref expression
+
+    aggregate_with_order_by op_kind column order_by =
+        order_bys = order_by.map sc->
+            effective_ordering = if sc.column.value_type.is_text then Text_Ordering.Default else Nothing
+            dialect.prepare_order_descriptor sc.column.as_internal sc.direction effective_ordering
+        expression = SQL_Expression.Operation op_kind [column.expression]+order_bys
+        sql_type_ref = infer_return_type op_kind [column] expression
+        Internal_Column.Value new_name sql_type_ref expression
|
||||||
|
|
||||||
|
dialect.check_aggregate_support aggregate . if_not_error <| case aggregate of
|
||||||
|
Group_By c _ ->
|
||||||
|
Internal_Column.Value new_name c.sql_type_reference c.expression
|
||||||
|
Count _ -> simple_aggregate "COUNT_ROWS" []
|
||||||
Count_Distinct columns _ ignore_nothing -> if columns.is_empty then Error.throw (Illegal_Argument.Error "Count_Distinct must have at least one column.") else
|
Count_Distinct columns _ ignore_nothing -> if columns.is_empty then Error.throw (Illegal_Argument.Error "Count_Distinct must have at least one column.") else
|
||||||
case ignore_nothing of
|
case ignore_nothing of
|
||||||
True -> SQL_Expression.Operation "COUNT_DISTINCT" (columns.map .expression)
|
True -> simple_aggregate "COUNT_DISTINCT" columns
|
||||||
False -> SQL_Expression.Operation "COUNT_DISTINCT_INCLUDE_NULL" (columns.map .expression)
|
False -> simple_aggregate "COUNT_DISTINCT_INCLUDE_NULL" columns
|
||||||
Count_Not_Nothing c _ -> SQL_Expression.Operation "COUNT" [c.expression]
|
Count_Not_Nothing c _ -> simple_aggregate "COUNT" [c]
|
||||||
Count_Nothing c _ -> SQL_Expression.Operation "COUNT_IS_NULL" [c.expression]
|
Count_Nothing c _ -> simple_aggregate "COUNT_IS_NULL" [c]
|
||||||
Count_Not_Empty c _ -> SQL_Expression.Operation "COUNT_NOT_EMPTY" [c.expression]
|
Count_Not_Empty c _ -> simple_aggregate "COUNT_NOT_EMPTY" [c]
|
||||||
Count_Empty c _ -> SQL_Expression.Operation "COUNT_EMPTY" [c.expression]
|
Count_Empty c _ -> simple_aggregate "COUNT_EMPTY" [c]
|
||||||
Percentile p c _ -> SQL_Expression.Operation "PERCENTILE" [SQL_Expression.Constant SQL_Type.double p, c.expression]
|
Percentile p c _ ->
|
||||||
Mode c _ -> SQL_Expression.Operation "MODE" [c.expression]
|
op_kind = "PERCENTILE"
|
||||||
|
expression = SQL_Expression.Operation op_kind [SQL_Expression.Constant p, c.expression]
|
||||||
|
sql_type_ref = infer_return_type op_kind [c] expression
|
||||||
|
Internal_Column.Value new_name sql_type_ref expression
|
||||||
|
Mode c _ -> simple_aggregate "MODE" [c]
|
||||||
First c _ ignore_nothing order_by -> case is_non_empty_selector order_by of
|
First c _ ignore_nothing order_by -> case is_non_empty_selector order_by of
|
||||||
False -> Error.throw (Unsupported_Database_Operation.Error "`First` aggregation requires at least one `order_by` column.")
|
False -> Error.throw (Unsupported_Database_Operation.Error "`First` aggregation requires at least one `order_by` column.")
|
||||||
True ->
|
True ->
|
||||||
order_bys = order_by.map c-> dialect.prepare_order_descriptor c.column.as_internal c.direction Text_Ordering.Default
|
op = case ignore_nothing of
|
||||||
case ignore_nothing of
|
False -> "FIRST"
|
||||||
False -> SQL_Expression.Operation "FIRST" [c.expression]+order_bys
|
True -> "FIRST_NOT_NULL"
|
||||||
True -> SQL_Expression.Operation "FIRST_NOT_NULL" [c.expression]+order_bys
|
aggregate_with_order_by op c order_by
|
||||||
Last c _ ignore_nothing order_by -> case is_non_empty_selector order_by of
|
Last c _ ignore_nothing order_by -> case is_non_empty_selector order_by of
|
||||||
False -> Error.throw (Unsupported_Database_Operation.Error "`Last` aggregation requires at least one `order_by` column.")
|
False -> Error.throw (Unsupported_Database_Operation.Error "`Last` aggregation requires at least one `order_by` column.")
|
||||||
True ->
|
True ->
|
||||||
order_bys = order_by.map c-> dialect.prepare_order_descriptor c.column.as_internal c.direction Text_Ordering.Default
|
op = case ignore_nothing of
|
||||||
case ignore_nothing of
|
False -> "LAST"
|
||||||
False -> SQL_Expression.Operation "LAST" [c.expression]+order_bys
|
True -> "LAST_NOT_NULL"
|
||||||
True -> SQL_Expression.Operation "LAST_NOT_NULL" [c.expression]+order_bys
|
aggregate_with_order_by op c order_by
|
||||||
Maximum c _ -> SQL_Expression.Operation "MAX" [c.expression]
|
Maximum c _ -> simple_aggregate "MAX" [c]
|
||||||
Minimum c _ -> SQL_Expression.Operation "MIN" [c.expression]
|
Minimum c _ -> simple_aggregate "MIN" [c]
|
||||||
Shortest c _ -> SQL_Expression.Operation "SHORTEST" [c.expression]
|
Shortest c _ -> simple_aggregate "SHORTEST" [c]
|
||||||
Longest c _ -> SQL_Expression.Operation "LONGEST" [c.expression]
|
Longest c _ -> simple_aggregate "LONGEST" [c]
|
||||||
Standard_Deviation c _ population -> case population of
|
Standard_Deviation c _ population -> case population of
|
||||||
True -> SQL_Expression.Operation "STDDEV_POP" [c.expression]
|
True -> simple_aggregate "STDDEV_POP" [c]
|
||||||
False -> SQL_Expression.Operation "STDDEV_SAMP" [c.expression]
|
False -> simple_aggregate "STDDEV_SAMP" [c]
|
||||||
Concatenate c _ separator prefix suffix quote_char ->
|
Concatenate c _ separator prefix suffix quote_char ->
|
||||||
base_args = [c.expression, SQL_Expression.Constant SQL_Type.text separator, SQL_Expression.Constant SQL_Type.text prefix, SQL_Expression.Constant SQL_Type.text suffix]
|
base_args = [c.expression, SQL_Expression.Constant separator, SQL_Expression.Constant prefix, SQL_Expression.Constant suffix]
|
||||||
case quote_char.is_empty of
|
op_kind = case quote_char.is_empty of
|
||||||
True -> SQL_Expression.Operation "CONCAT" base_args
|
True -> "CONCAT"
|
||||||
False -> SQL_Expression.Operation "CONCAT_QUOTE_IF_NEEDED" base_args+[SQL_Expression.Constant SQL_Type.text quote_char]
|
False -> "CONCAT_QUOTE_IF_NEEDED"
|
||||||
Sum c _ -> SQL_Expression.Operation "SUM" [c.expression]
|
effective_args = case op_kind of
|
||||||
Average c _ -> SQL_Expression.Operation "AVG" [c.expression]
|
"CONCAT_QUOTE_IF_NEEDED" ->
|
||||||
Median c _ -> SQL_Expression.Operation "MEDIAN" [c.expression]
|
base_args+[SQL_Expression.Constant quote_char]
|
||||||
|
"CONCAT" -> base_args
|
||||||
|
expression = SQL_Expression.Operation op_kind effective_args
|
||||||
|
sql_type_ref = infer_return_type op_kind [c] expression
|
||||||
|
Internal_Column.Value new_name sql_type_ref expression
|
||||||
|
Sum c _ -> simple_aggregate "SUM" [c]
|
||||||
|
Average c _ -> simple_aggregate "AVG" [c]
|
||||||
|
Median c _ -> simple_aggregate "MEDIAN" [c]
|
||||||
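The rewritten `make_aggregate_column` above routes every aggregate through a shared helper that builds the SQL operation and asks a caller-supplied `infer_return_type` callback for the result type, instead of resolving a target SQL type up front. A simplified Python sketch of that dispatch shape (the aggregate kinds and type names here are illustrative, not the real dialect logic):

```python
def make_aggregate(aggregate, new_name, infer_return_type):
    """Maps an aggregate description to (name, inferred_type, expression).

    `aggregate` is a (kind, column) pair; `infer_return_type` plays the role
    of asking the database what type the generated operation returns.
    """
    kind, column = aggregate
    op_kind = {"Sum": "SUM", "Average": "AVG", "Maximum": "MAX"}[kind]
    expression = (op_kind, [column])
    return (new_name, infer_return_type(op_kind, [column]), expression)

# A toy inference callback: SUM/MAX keep an integer type, AVG widens to float.
def infer(op_kind, columns):
    return "DOUBLE" if op_kind == "AVG" else "BIGINT"

assert make_aggregate(("Average", "x"), "mean_x", infer)[1] == "DOUBLE"
assert make_aggregate(("Sum", "x"), "total_x", infer)[1] == "BIGINT"
```

Centralizing the type question in one callback is what lets the PR ask the database itself what each operation returns, while still allowing dialect overrides (as done for SQLite's Boolean handling).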
@@ -195,7 +195,7 @@ make_iif arguments = case arguments.length of
         expr = arguments.at 0
         when_true = arguments.at 1
         when_false = arguments.at 2
-        (Builder.code "CASE WHEN" ++ expr ++ " THEN " ++ when_true ++ " WHEN " ++ expr ++ " IS NULL THEN NULL ELSE " ++ when_false ++ " END").paren
+        (Builder.code "CASE WHEN " ++ expr ++ " THEN " ++ when_true ++ " WHEN " ++ expr ++ " IS NULL THEN NULL ELSE " ++ when_false ++ " END").paren
     _ ->
         Error.throw <| Illegal_State.Error ("Invalid amount of arguments for operation IIF")
 
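The hunk above only restores a missing space after `CASE WHEN`; the generated SQL is a NULL-propagating conditional. A small Python sketch of the same template (the quoting and identifiers are illustrative):

```python
def make_iif(expr, when_true, when_false):
    # NULL-propagating conditional: note the space after "CASE WHEN",
    # which is exactly what the one-character fix above restores.
    return ("(CASE WHEN " + expr + " THEN " + when_true +
            " WHEN " + expr + " IS NULL THEN NULL ELSE " + when_false + " END)")

sql = make_iif('"flag"', "1", "0")
assert sql == '(CASE WHEN "flag" THEN 1 WHEN "flag" IS NULL THEN NULL ELSE 0 END)'
```

The explicit `WHEN expr IS NULL THEN NULL` branch is what keeps a NULL condition from silently falling through to the `ELSE` value.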
@@ -248,7 +248,7 @@ generate_expression : Internal_Dialect -> SQL_Expression | Order_Descriptor | Qu
 generate_expression dialect expr = case expr of
     SQL_Expression.Column origin name ->
         dialect.wrap_identifier origin ++ '.' ++ dialect.wrap_identifier name
-    SQL_Expression.Constant sql_type value -> Builder.interpolation sql_type value
+    SQL_Expression.Constant value -> Builder.interpolation value
     SQL_Expression.Operation kind arguments ->
         op = dialect.operation_map.get kind (Error.throw <| Unsupported_Database_Operation.Error kind)
         parsed_args = arguments.map (generate_expression dialect)
@@ -395,7 +395,7 @@ generate_query dialect query = case query of
         Builder.code "SELECT " ++ prefix ++ cols ++ generate_select_context dialect ctx
     Query.Insert table_name pairs ->
         generate_insert_query dialect table_name pairs
-    _ -> Error.throw <| Unsupported_Database_Operation.Error "Unsupported query type."
+    _ -> Error.throw <| Unsupported_Database_Operation.Error "Unsupported query type: "+query.to_text
 
 ## PRIVATE
    Arguments:
@@ -0,0 +1,136 @@
+from Standard.Base import all
+
+import Standard.Table.Data.Column.Column as Materialized_Column
+import Standard.Table.Data.Type.Value_Type.Value_Type
+import Standard.Table.Internal.Java_Exports
+
+polyglot java import java.sql.ResultSet
+
+type Column_Fetcher
+    ## PRIVATE
+       A helper for fetching data from a result set and possibly building a
+       column out of it.
+
+       Arguments:
+       - fetch_value: A function that fetches a value from a result set.
+       - make_builder: A function that creates a builder for a column.
+         It takes an initial size as an argument. That size is only a
+         suggestion for the initial capacity and the builder must be ready to
+         accept more or fewer rows than that.
+    Value (fetch_value : ResultSet -> Integer -> Any) (make_builder : Integer -> Builder)
+
+## We could use `Storage.make_builder` here, but this builder allows us to pass
+   raw Truffle values around (like `long`) instead of boxing them.
+
+   I suspect this can allow the Truffle PE to compile this into a tighter loop,
+   but so far I have no proof. If it turns out to be an unnecessary
+   micro-optimization, we can always switch to `Storage.make_builder`.
+type Builder
+    ## PRIVATE
+       Wraps an underlying builder to provide a generic interface.
+
+       Arguments:
+       - append: A function that appends a value to the underlying builder.
+         By default, it must support appending `Nothing`, unless the column was
+         explicitly declared as non-nullable.
+       - make_column: A function that creates a column from the underlying
+         builder. It takes the desired column name as an argument.
+    Value (append : Any -> Nothing) (make_column : Text -> Materialized_Column)
+
+## PRIVATE
+boolean_fetcher : Column_Fetcher
+boolean_fetcher =
+    fetch_value rs i =
+        b = rs.getBoolean i
+        if rs.wasNull then Nothing else b
+    make_builder _ =
+        java_builder = Java_Exports.make_bool_builder
+        append v =
+            if v.is_nothing then java_builder.appendNulls 1 else
+                java_builder.appendBoolean v
+        Builder.Value append (seal_java_builder java_builder)
+    Column_Fetcher.Value fetch_value make_builder
+
+## PRIVATE
+double_fetcher : Column_Fetcher
+double_fetcher =
+    fetch_value rs i =
+        d = rs.getDouble i
+        if rs.wasNull then Nothing else d
+    make_builder initial_size =
+        java_builder = Java_Exports.make_double_builder initial_size
+        append v =
+            if v.is_nothing then java_builder.appendNulls 1 else
+                java_builder.appendDouble v
+        Builder.Value append (seal_java_builder java_builder)
+    Column_Fetcher.Value fetch_value make_builder
+
+## PRIVATE
+long_fetcher : Column_Fetcher
+long_fetcher =
+    fetch_value rs i =
+        l = rs.getLong i
+        if rs.wasNull then Nothing else l
+    make_builder initial_size =
+        java_builder = Java_Exports.make_long_builder initial_size
+        append v =
+            if v.is_nothing then java_builder.appendNulls 1 else
+                java_builder.appendLong v
+        Builder.Value append (seal_java_builder java_builder)
+    Column_Fetcher.Value fetch_value make_builder
+
+## PRIVATE
+text_fetcher : Column_Fetcher
+text_fetcher =
+    fetch_value rs i =
+        t = rs.getString i
+        if rs.wasNull then Nothing else t
+    make_builder initial_size =
+        java_builder = Java_Exports.make_string_builder initial_size
+        append v =
+            if v.is_nothing then java_builder.appendNulls 1 else
+                java_builder.append v
+        Builder.Value append (seal_java_builder java_builder)
+    Column_Fetcher.Value fetch_value make_builder
+
+## PRIVATE
+   A fallback fetcher that can be used for any type.
+   It will use `getObject` to get the desired value and the `InferredBuilder`
+   to create a Java column that will suit the values present.
+
+   It is used as a default fallback. It may not work correctly for specialized
+   types like dates, so a specialized fetcher should be used instead.
+fallback_fetcher : Column_Fetcher
+fallback_fetcher =
+    fetch_value rs i =
+        v = rs.getObject i
+        if rs.wasNull then Nothing else v
+    make_builder initial_size =
+        java_builder = Java_Exports.make_inferred_builder initial_size
+        append v =
+            if v.is_nothing then java_builder.appendNulls 1 else
+                java_builder.append v
+        Builder.Value append (seal_java_builder java_builder)
+    Column_Fetcher.Value fetch_value make_builder
+
+## PRIVATE
+   A default implementation that will assign specialized fetchers for the
+   Integer, Float, Char and Boolean value types and a fallback for any other
+   type.
+
+   This should try to be aligned with `Storage.make_builder`.
+default_fetcher_for_value_type : Value_Type -> Column_Fetcher
+default_fetcher_for_value_type value_type =
+    case value_type of
+        ## TODO [RW] once we support varying bit-width in storages, we should specify it
+           Revisit in #5159.
+        Value_Type.Integer _ -> long_fetcher
+        Value_Type.Float _ -> double_fetcher
+        Value_Type.Char _ _ -> text_fetcher
+        Value_Type.Boolean -> boolean_fetcher
+        _ -> fallback_fetcher
+
+## PRIVATE
+seal_java_builder java_builder column_name =
+    storage = java_builder.seal
+    Java_Exports.make_column column_name storage
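The `Column_Fetcher` / `Builder` pair added above separates per-type value extraction from column building, so each value type gets a specialized, null-aware fetch path. A rough Python analogue over `sqlite3`, where a `None` result plays the role of the JDBC `wasNull` check (all names are illustrative, not the Enso API):

```python
import sqlite3

def long_fetcher():
    # fetch_value: pull the i-th value out of a row. sqlite3 already gives
    # None for NULL, standing in for the wasNull check in the version above.
    fetch = lambda row, i: row[i]
    # make_builder: here just a plain list; the argument mirrors the
    # initial-size hint, which a Python list does not need.
    make_builder = lambda size: []
    return fetch, make_builder

def materialize(cursor, fetchers):
    # Drive every column's fetcher/builder pair over the result set.
    columns = [make(0) for _, make in fetchers]
    for row in cursor:
        for i, (fetch, _) in enumerate(fetchers):
            columns[i].append(fetch(row, i))
    return columns

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (None,), (3,)])
cols = materialize(conn.execute("SELECT a FROM t ORDER BY rowid"), [long_fetcher()])
assert cols == [[1, None, 3]]
```

Keeping the two halves separate is what lets the PR plug in vendor-specific fetchers (for example, Postgres dates) without touching the materialization loop.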
@@ -6,13 +6,12 @@ import project.Internal.Helpers
 import project.Internal.IR.SQL_Expression.SQL_Expression
 
 ## PRIVATE
-make_distinct_expression text_case_sensitivity problem_builder key_column =
-    if key_column.sql_type.is_definitely_double then
+make_distinct_expression text_case_sensitivity problem_builder key_column value_type =
+    if value_type.is_floating_point then
         problem_builder.report_other_warning (Floating_Point_Equality.Error key_column.name)
 
     expr = key_column.expression
-    if key_column.sql_type.is_definitely_text.not then expr else case text_case_sensitivity of
+    if value_type.is_text.not then expr else case text_case_sensitivity of
         Case_Sensitivity.Insensitive locale ->
             Helpers.assume_default_locale locale <|
                 SQL_Expression.Operation "FOLD_CASE" [expr]
@@ -13,6 +13,7 @@ import project.Internal.IR.Context.Context
 import project.Internal.IR.From_Spec.From_Spec
 import project.Internal.IR.Internal_Column.Internal_Column
 import project.Internal.IR.SQL_Expression.SQL_Expression
+import project.Internal.SQL_Type_Reference.SQL_Type_Reference
 
 ## PRIVATE
 default_prepare_join connection join_kind new_table_name left_subquery right_subquery on_expressions where_expressions columns_to_select =
@@ -31,9 +32,9 @@ make_join_helpers left_table right_table left_column_mapping right_column_mappin
     resolve_right = resolve_target_expression right_column_mapping
 
     make_equals problem_builder left right =
-        if left.sql_type.is_definitely_double then
+        if left.value_type.is_floating_point then
             problem_builder.report_other_warning (Floating_Point_Equality.Error left.name)
-        if right.sql_type.is_definitely_double then
+        if right.value_type.is_floating_point then
             problem_builder.report_other_warning (Floating_Point_Equality.Error right.name)
         SQL_Expression.Operation "==" [resolve_left left, resolve_right right]
     make_equals_ignore_case _ left right locale =
@@ -93,12 +94,12 @@ prepare_subqueries left right needs_left_indicator needs_right_indicator =
         renamer = Unique_Name_Strategy.new
         renamer.mark_used (left.internal_columns.map .name)
         # This is an operation, not a constant to avoid adding unnecessary interpolations to the query.
-        [Internal_Column.Value (renamer.make_unique "left_indicator") SQL_Type.boolean (SQL_Expression.Operation "TRUE" [])]
+        [Internal_Column.Value (renamer.make_unique "left_indicator") SQL_Type_Reference.null (SQL_Expression.Operation "TRUE" [])]
 
     right_indicators = if needs_right_indicator.not then [] else
         renamer = Unique_Name_Strategy.new
         renamer.mark_used (right.internal_columns.map .name)
-        [Internal_Column.Value (renamer.make_unique "right_indicator") SQL_Type.boolean (SQL_Expression.Operation "TRUE" [])]
+        [Internal_Column.Value (renamer.make_unique "right_indicator") SQL_Type_Reference.null (SQL_Expression.Operation "TRUE" [])]
 
     # Create subqueries that encapsulate the original queries and provide needed columns.
     # The generated new sets of columns refer to the encapsulated expressions within the subquery and are
@@ -75,6 +75,17 @@ type Context
     set_where_filters self new_filters =
         Context.Value self.from_spec new_filters self.orders self.groups self.limit self.distinct_on
 
+    ## PRIVATE
+
+       Returns a copy of the context with added `where_filters`.
+
+       Arguments:
+       - new_filters: The new filters to add to the existing filters in the
+         query.
+    add_where_filters : Vector SQL_Expression -> Context
+    add_where_filters self new_filters =
+        Context.Value self.from_spec (self.where_filters+new_filters) self.orders self.groups self.limit self.distinct_on
+
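The `add_where_filters` method added above returns a modified copy instead of mutating the context, appending to whatever filters are already present. The same copy-with-appended-filters shape can be sketched in Python with a frozen dataclass (the field names are illustrative):

```python
from dataclasses import dataclass, replace
from typing import Tuple

@dataclass(frozen=True)
class Context:
    from_spec: str
    where_filters: Tuple[str, ...] = ()

    def add_where_filters(self, new_filters):
        # Appends to, rather than replaces, the existing filters and
        # returns a fresh immutable copy.
        return replace(self, where_filters=self.where_filters + tuple(new_filters))

ctx = Context("t", ("a > 1",)).add_where_filters(["b < 2"])
assert ctx.where_filters == ("a > 1", "b < 2")
```

Keeping the query context immutable means derived tables can share a context without one operation's filters leaking into another's.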
     ## PRIVATE
 
        Returns a copy of the context with changed `orders`.
@@ -147,7 +158,7 @@ type Context
     as_subquery self alias column_lists =
         rewrite_internal_column : Internal_Column -> Internal_Column
         rewrite_internal_column column =
-            Internal_Column.Value column.name column.sql_type (SQL_Expression.Column alias column.name)
+            Internal_Column.Value column.name column.sql_type_reference (SQL_Expression.Column alias column.name)
 
         new_columns = column_lists.map columns->
             columns.map rewrite_internal_column
@@ -1,6 +1,7 @@
 from Standard.Base import all
 
 import project.Data.SQL_Type.SQL_Type
+import project.Internal.SQL_Type_Reference.SQL_Type_Reference
 import project.Internal.IR.SQL_Expression.SQL_Expression
 
 type Internal_Column
@@ -10,9 +11,9 @@ type Internal_Column
 
        Arguments:
        - name: The column name.
-       - sql_type: The SQL type of the column.
+       - sql_type_reference: Lazily computed SQL type of the column.
        - expression: An expression for applying to the column.
-    Value name:Text sql_type:SQL_Type expression:SQL_Expression
+    Value name:Text sql_type_reference:SQL_Type_Reference expression:SQL_Expression
 
     ## PRIVATE
 
@@ -21,4 +22,4 @@ type Internal_Column
        Arguments:
        - new_name: The new name for the column.
     rename : Text -> Internal_Column
-    rename self new_name = Internal_Column.Value new_name self.sql_type self.expression
+    rename self new_name = Internal_Column.Value new_name self.sql_type_reference self.expression
@@ -26,11 +26,9 @@ type SQL_Expression
        be interpolated when building the query.
 
        Arguments:
-       - sql_type: The SQL type that this object is going to be serialized to.
-         It is usually inferred from the expression's context.
-       - value: the value to be interpolated; it should be a simple Number, Text
-         or other types that are serializable for JDBC.
-    Constant (sql_type : SQL_Type) (value : Any)
+       - value: the value to be interpolated; the set of supported interpolation
+         values depends on the database backend.
+    Constant (value : Any)
 
     ## PRIVATE
 
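With `sql_type` dropped from `Constant`, an interpolated value is emitted as a driver-bound hole whose type is left for the backend to infer. A Python sketch of compiling code/constant fragments into a template plus a values list (the fragment encoding is illustrative, not the Enso IR):

```python
def compile_query(fragments):
    """Collects SQL text and interpolated constants into (template, values).

    Constants become "?" holes whose values are later bound by the driver,
    letting the backend infer their types instead of the IR carrying an
    explicit sql_type.
    """
    sql, values = [], []
    for kind, payload in fragments:
        if kind == "code":
            sql.append(payload)
        else:  # an interpolated constant
            sql.append("?")
            values.append(payload)
    return "".join(sql), values

template, values = compile_query(
    [("code", "SELECT * FROM t WHERE x = "), ("constant", 42)])
assert template == "SELECT * FROM t WHERE x = ?"
assert values == [42]
```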
@@ -1,16 +1,18 @@
 from Standard.Base import all
+import Standard.Base.Errors.Illegal_Argument.Illegal_Argument
 import Standard.Base.Errors.Illegal_State.Illegal_State
+import Standard.Base.Errors.Unimplemented.Unimplemented
 import Standard.Base.Runtime.Managed_Resource.Managed_Resource
 
-import Standard.Table.Data.Storage.Storage
 import Standard.Table.Data.Table.Table as Materialized_Table
+import Standard.Table.Data.Type.Value_Type.Value_Type
 
 import project.Data.SQL.Builder
 import project.Data.SQL_Statement.SQL_Statement
 import project.Data.SQL_Type.SQL_Type
-import project.Internal.Base_Generator
 
 import project.Data.Table.Table as Database_Table
+import project.Internal.Base_Generator
+import project.Internal.Statement_Setter.Statement_Setter
 
 from project.Errors import SQL_Error, SQL_Timeout
 
@@ -24,7 +26,6 @@ polyglot java import java.sql.SQLException
 polyglot java import java.sql.SQLTimeoutException
 
 polyglot java import org.enso.database.JDBCProxy
-polyglot java import org.enso.database.JDBCUtils
 
 type JDBC_Connection
     ## PRIVATE
@@ -63,18 +64,24 @@ type JDBC_Connection
 
        Runs the provided action with a prepared statement, adding contextual
        information to any thrown SQL errors.
-    with_prepared_statement : Text | SQL_Statement -> (PreparedStatement -> Any) -> Any
-    with_prepared_statement self query action =
-        prepare template holes = self.connection_resource.with java_connection->
+    with_prepared_statement : Text | SQL_Statement -> Statement_Setter -> (PreparedStatement -> Any) -> Any
+    with_prepared_statement self query statement_setter action =
+        prepare template values = self.connection_resource.with java_connection->
             stmt = java_connection.prepareStatement template
-            Panic.catch Any (set_statement_values stmt holes) caught_panic->
+            handle_illegal_state caught_panic =
+                Error.throw (Illegal_Argument.Error caught_panic.payload.message)
+            handle_any caught_panic =
                 stmt.close
                 Panic.throw caught_panic
-            stmt
+            result = Panic.catch Illegal_State handler=handle_illegal_state <|
+                Panic.catch Any handler=handle_any <|
+                    set_statement_values stmt statement_setter values
+            result.if_not_error <|
+                stmt
 
-        go template holes =
+        go template values =
             handle_sql_errors related_query=template <|
-                Managed_Resource.bracket (prepare template holes) .close action
+                Managed_Resource.bracket (prepare template values) .close action
 
         case query of
             _ : Text -> go query []
@@ -83,29 +90,40 @@ type JDBC_Connection
             go compiled.first compiled.second
 
     ## PRIVATE
 
        Given a prepared statement, gets the column names and types for the
        result set.
-    fetch_columns : Text | SQL_Statement -> Any
-    fetch_columns self statement =
-        self.with_prepared_statement statement stmt->
+    fetch_columns : Text | SQL_Statement -> Statement_Setter -> Any
+    fetch_columns self statement statement_setter =
+        self.with_prepared_statement statement statement_setter stmt->
             metadata = stmt.executeQuery.getMetaData
 
             resolve_column ix =
-                name = metadata.getColumnName ix+1
-                typeid = metadata.getColumnType ix+1
-                typename = metadata.getColumnTypeName ix+1
-                [name, SQL_Type.Value typeid typename]
+                name = metadata.getColumnLabel ix+1
+                sql_type = SQL_Type.from_metadata metadata ix+1
+                [name, sql_type]
 
             Vector.new metadata.getColumnCount resolve_column
+
+    ## PRIVATE
+       Checks that the query has no holes, and if it does, throws an error.
+    ensure_query_has_no_holes : Text -> Nothing ! Illegal_Argument
+    ensure_query_has_no_holes self raw_sql =
+        self.with_prepared_statement raw_sql Statement_Setter.null stmt->
+            ## We cannot run this check on every query, because in some
+               backends (e.g. Postgres) running `getParameterMetaData`
+               seems to trigger logic for figuring out types of the holes.
+               In some of our generated queries, the driver is unable to
+               figure out the types and fails with an exception.
+            expected_parameter_count = stmt.getParameterMetaData.getParameterCount
+            if expected_parameter_count != 0 then
+                Error.throw <| Illegal_Argument.Error 'The provided raw SQL query should not contain any holes ("?").'
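The `ensure_query_has_no_holes` check above relies on JDBC parameter metadata. `sqlite3` exposes no such metadata, so a rough Python analogue can only let the driver object when a statement that expects bindings is executed without any. This is an illustrative sketch only, and unlike the JDBC version it actually executes the query:

```python
import sqlite3

def ensure_query_has_no_holes(conn, raw_sql):
    # sqlite3 has no getParameterMetaData; instead we let the driver
    # complain when a statement expecting bindings is run without any.
    try:
        conn.execute(raw_sql)
        return True
    except sqlite3.ProgrammingError:
        raise ValueError('The provided raw SQL query should not contain any holes ("?").')

conn = sqlite3.connect(":memory:")
assert ensure_query_has_no_holes(conn, "SELECT 1")
try:
    ensure_query_has_no_holes(conn, "SELECT ?")
    raised = False
except ValueError:
    raised = True
assert raised
```

Naively counting `?` characters in the SQL text would misfire on literals and comments, which is why both the Enso code and this sketch defer the question to the driver.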
 
     ## PRIVATE
 
        Given an insert query template and the associated Database_Table, and a
        Materialized_Table of data, load to the database.
-    load_table : Text -> Database_Table -> Materialized_Table -> Integer -> Nothing
-    load_table self insert_template db_table table batch_size =
-        db_types = db_table.internal_columns.map .sql_type
+    load_table : Text -> Statement_Setter -> Materialized_Table -> Integer -> Nothing
+    load_table self insert_template statement_setter table batch_size =
         self.with_connection java_connection->
             default_autocommit = java_connection.getAutoCommit
             java_connection.setAutoCommit False
@@ -121,8 +139,7 @@ type JDBC_Connection
                     Panic.throw <| Illegal_State.Error "A single update within the batch unexpectedly affected "+affected_rows.to_text+" rows."
             0.up_to num_rows . each row_id->
                 values = columns.map col-> col.at row_id
-                holes = values.zip db_types
-                set_statement_values stmt holes
+                set_statement_values stmt statement_setter values
                 stmt.addBatch
                 if (row_id+1 % batch_size) == 0 then check_rows stmt.executeBatch batch_size
             if (num_rows % batch_size) != 0 then check_rows stmt.executeBatch (num_rows % batch_size)
@ -174,41 +191,19 @@ handle_sql_errors ~action related_query=Nothing =
|
|||||||
exc -> Error.throw (SQL_Error.Error exc related_query)
|
exc -> Error.throw (SQL_Error.Error exc related_query)
|
||||||
|
|
||||||
## PRIVATE
|
## PRIVATE
|
||||||
Sets values inside of a prepared statement.
|
Uses the provided `Statement_Setter` strategy to fill holes in a
|
||||||
set_statement_values : PreparedStatement -> Vector (Pair Any SQL_Type) -> Nothing
|
provided `PreparedStatement`.
|
||||||
set_statement_values stmt holes =
|
set_statement_values stmt statement_setter values =
|
||||||
holes.map_with_index ix-> obj->
|
values.each_with_index ix-> value->
|
||||||
position = ix + 1
|
statement_setter.fill_hole stmt (ix + 1) value
|
||||||
case obj.first of
|
|
||||||
Nothing ->
|
|
||||||
## If we really don't have a clue what this should be, we choose a varchar for a blank column.
|
|
||||||
sql_type = if obj.second == SQL_Type.null then SQL_Type.text else obj.second
|
|
||||||
stmt.setNull position sql_type.typeid
|
|
||||||
_ : Date_Time -> stmt.setTimestamp position (JDBCUtils.getTimestamp obj.first)
|
|
||||||
_ -> stmt.setObject position obj.first
|
|
||||||
|
|
||||||
## PRIVATE
|
## PRIVATE
|
||||||
Given a Materialized_Table, create a SQL statement to build the table.
|
Given a Materialized_Table, create a SQL statement to build the table.
|
||||||
create_table_statement : Text -> Materialized_Table -> Boolean -> SQL_Statement
|
create_table_statement : (Value_Type -> SQL_Type) -> Text -> Materialized_Table -> Boolean -> SQL_Statement
|
||||||
create_table_statement name table temporary =
|
create_table_statement type_mapper name table temporary =
|
||||||
column_types = table.columns.map col-> default_storage_type col.storage_type
|
column_types = table.columns.map col-> type_mapper col.value_type
|
||||||
column_names = table.columns.map .name
|
column_names = table.columns.map .name
|
||||||
col_makers = column_names.zip column_types name-> typ->
|
col_makers = column_names.zip column_types name-> typ->
|
||||||
Base_Generator.wrap_in_quotes name ++ " " ++ typ.name
|
Base_Generator.wrap_in_quotes name ++ " " ++ typ.name
|
||||||
create_prefix = Builder.code <| if temporary then "CREATE TEMPORARY TABLE " else "CREATE TABLE "
|
create_prefix = Builder.code <| if temporary then "CREATE TEMPORARY TABLE " else "CREATE TABLE "
|
||||||
(create_prefix ++ Base_Generator.wrap_in_quotes name ++ " (" ++ (Builder.join ", " col_makers) ++ ")").build
|
(create_prefix ++ Base_Generator.wrap_in_quotes name ++ " (" ++ (Builder.join ", " col_makers) ++ ")").build
|
||||||
|
|
||||||
## PRIVATE
|
|
||||||
Returns the default database type corresponding to an in-memory storage type.
|
|
||||||
default_storage_type : Storage -> SQL_Type
|
|
||||||
default_storage_type storage_type = case storage_type of
|
|
||||||
Storage.Text -> SQL_Type.text
|
|
||||||
Storage.Integer -> SQL_Type.integer
|
|
||||||
Storage.Decimal -> SQL_Type.double
|
|
||||||
Storage.Boolean -> SQL_Type.boolean
|
|
||||||
Storage.Date -> SQL_Type.date
|
|
||||||
Storage.Time_Of_Day -> SQL_Type.time_of_day
|
|
||||||
Storage.Date_Time -> SQL_Type.date_time
|
|
||||||
## Support for mixed type columns in Table upload is currently very limited,
|
|
||||||
falling back to treating everything as text.
|
|
||||||
Storage.Any -> SQL_Type.text
|
|
||||||
|
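The batch-flushing control flow used by `load_table` above — execute a full batch every `batch_size` rows, then execute one final partial batch for the remainder — can be sketched outside of Enso and JDBC. A minimal Python sketch (`load_in_batches` and `execute_batch` are hypothetical names standing in for the loop around `stmt.addBatch` / `stmt.executeBatch`):

```python
def load_in_batches(num_rows, batch_size, execute_batch):
    # Mirrors the flushing pattern: a full batch every `batch_size` rows,
    # plus one final partial batch for whatever remains at the end.
    pending = 0
    for row_id in range(num_rows):
        pending += 1  # stands in for stmt.addBatch
        if (row_id + 1) % batch_size == 0:
            execute_batch(pending)  # a full batch of exactly batch_size rows
            pending = 0
    if num_rows % batch_size != 0:
        execute_batch(pending)  # the trailing partial batch

flushes = []
load_in_batches(10, 4, flushes.append)
# flushes is now [4, 4, 2]
```

Note that when `num_rows` divides evenly by `batch_size`, the final flush is skipped entirely, so no empty batch is ever executed.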
@@ -14,6 +14,7 @@ import project.Data.SQL_Statement.SQL_Statement
 import project.Data.SQL_Type.SQL_Type
 import project.Data.Table.Table as Database_Table
 import project.Internal.JDBC_Connection
+import project.Internal.SQL_Type_Reference.SQL_Type_Reference

 from project.Internal.Result_Set import read_column

@@ -113,26 +114,6 @@ type Postgres_Connection
     read : Text | SQL_Query -> Integer | Nothing -> Materialized_Table
     read self query limit=Nothing = self.connection.read query limit

-    ## PRIVATE
-       Internal read function for a statement with optional types.
-
-       Arguments:
-       - statement: SQL_Statement to execute.
-       - expected_types: Optional vector of expected types for each column.
-    read_statement : SQL_Statement -> (Nothing | Vector SQL_Type) -> Materialized_Table
-    read_statement self statement expected_types=Nothing =
-        self.connection.read_statement statement expected_types
-
-    ## PRIVATE
-       Internal read function for a statement with optional types returning just last row.
-
-       Arguments:
-       - statement: SQL_Statement to execute.
-       - expected_types: Optional vector of expected types for each column.
-    read_last_row : SQL_Statement -> (Nothing | Vector SQL_Type) -> Materialized_Table
-    read_last_row self statement expected_types=Nothing =
-        self.connection.read_last_row statement expected_types
-
     ## ADVANCED

        Executes a raw update query. If the query was inserting, updating or
@@ -14,6 +14,8 @@ import project.Data.SQL_Statement.SQL_Statement
 import project.Data.SQL_Type.SQL_Type
 import project.Data.Table.Table
 import project.Internal.Base_Generator
+import project.Internal.Column_Fetcher.Column_Fetcher
+import project.Internal.Column_Fetcher as Column_Fetcher_Module
 import project.Internal.Common.Database_Distinct_Helper
 import project.Internal.Common.Database_Join_Helper
 import project.Internal.IR.Context.Context
@@ -24,9 +26,13 @@ import project.Internal.IR.Order_Descriptor.Order_Descriptor
 import project.Internal.IR.Nulls_Order.Nulls_Order
 import project.Internal.IR.SQL_Join_Kind.SQL_Join_Kind
 import project.Internal.IR.Query.Query
+import project.Internal.Postgres.Postgres_Type_Mapping.Postgres_Type_Mapping
+import project.Internal.SQL_Type_Mapping.SQL_Type_Mapping
+import project.Internal.Statement_Setter.Statement_Setter
 from project.Errors import Unsupported_Database_Operation

+polyglot java import org.enso.database.JDBCUtils

 ## PRIVATE

    The dialect of PostgreSQL databases.
@@ -48,6 +54,9 @@ type Postgres_Dialect
     name : Text
     name self = "PostgreSQL"

+    ## PRIVATE
+    to_text self = "Postgres_Dialect"
+
     ## PRIVATE
        A function which generates SQL code from the internal representation
        according to the specific dialect.
@@ -55,20 +64,19 @@ type Postgres_Dialect
     generate_sql self query =
         Base_Generator.generate_query self.internal_generator_dialect query . build

-    ## PRIVATE
-       Deduces the result type for an aggregation operation.
-
-       The provided aggregate is assumed to contain only already resolved columns.
-       You may need to transform it with `resolve_aggregate` first.
-    resolve_target_sql_type : Aggregate_Column -> SQL_Type
-    resolve_target_sql_type self aggregate = resolve_target_sql_type aggregate
-
     ## PRIVATE
        Prepares an ordering descriptor.

        One of the purposes of this method is to verify if the expected ordering
        settings are supported by the given database backend.
-    prepare_order_descriptor : Internal_Column -> Sort_Direction -> Text_Ordering -> Order_Descriptor
+
+       Arguments:
+       - internal_column: the column to order by.
+       - sort_direction: the direction of the ordering.
+       - text_ordering: If provided, specifies that the column should be treated
+         as text values according to the provided ordering. For non-text types,
+         it should be set to `Nothing`.
+    prepare_order_descriptor : Internal_Column -> Sort_Direction -> Nothing | Text_Ordering -> Order_Descriptor
     prepare_order_descriptor self internal_column sort_direction text_ordering =
         make_order_descriptor internal_column sort_direction text_ordering

@@ -87,7 +95,10 @@ type Postgres_Dialect
         new_columns = setup.new_columns.first
         column_mapping = Map.from_vector <| new_columns.map c-> [c.name, c]
         new_key_columns = key_columns.map c-> column_mapping.at c.name
-        distinct_expressions = new_key_columns.map (Database_Distinct_Helper.make_distinct_expression case_sensitivity problem_builder)
+        type_mapping = self.get_type_mapping
+        distinct_expressions = new_key_columns.map column->
+            value_type = type_mapping.sql_type_to_value_type column.sql_type_reference.get
+            Database_Distinct_Helper.make_distinct_expression case_sensitivity problem_builder column value_type
         new_context = Context.for_subquery setup.subquery . set_distinct_on distinct_expressions
         table.updated_context_and_columns new_context new_columns subquery=True

@@ -104,6 +115,31 @@ type Postgres_Dialect
     get_naming_helpers : Naming_Helpers
     get_naming_helpers self = Naming_Helpers

+    ## PRIVATE
+       Returns the mapping between SQL types of this dialect and Enso
+       `Value_Type`.
+    get_type_mapping : SQL_Type_Mapping
+    get_type_mapping self = Postgres_Type_Mapping
+
+    ## PRIVATE
+       Creates a `Column_Fetcher` used to fetch data from a result set and build
+       an in-memory column from it, based on the given column type.
+    make_column_fetcher_for_type : SQL_Type -> Column_Fetcher
+    make_column_fetcher_for_type self sql_type =
+        type_mapping = self.get_type_mapping
+        value_type = type_mapping.sql_type_to_value_type sql_type
+        Column_Fetcher_Module.default_fetcher_for_value_type value_type
+
+    ## PRIVATE
+    get_statement_setter : Statement_Setter
+    get_statement_setter self = postgres_statement_setter
+
+    ## PRIVATE
+    check_aggregate_support : Aggregate_Column -> Boolean ! Unsupported_Database_Operation
+    check_aggregate_support self aggregate =
+        _ = aggregate
+        True
+
 ## PRIVATE
 make_internal_generator_dialect =
     cases = [["LOWER", Base_Generator.make_function "LOWER"], ["UPPER", Base_Generator.make_function "UPPER"]]
@@ -118,37 +154,6 @@ make_internal_generator_dialect =
     my_mappings = text + counts + stats + first_last_aggregators + arith_extensions + bool
     Base_Generator.base_dialect . extend_with my_mappings

-## PRIVATE
-   The provided aggregate is assumed to contain only already resolved columns.
-   You may need to transform it with `resolve_aggregate` first.
-resolve_target_sql_type aggregate = case aggregate of
-    Group_By c _ -> c.sql_type
-    Count _ -> SQL_Type.bigint
-    Count_Distinct _ _ _ -> SQL_Type.bigint
-    Count_Not_Nothing _ _ -> SQL_Type.bigint
-    Count_Nothing _ _ -> SQL_Type.bigint
-    Count_Not_Empty _ _ -> SQL_Type.bigint
-    Count_Empty _ _ -> SQL_Type.bigint
-    Percentile _ _ _ -> SQL_Type.double
-    Mode c _ -> c.sql_type
-    First c _ _ _ -> c.sql_type
-    Last c _ _ _ -> c.sql_type
-    Maximum c _ -> c.sql_type
-    Minimum c _ -> c.sql_type
-    Shortest c _ -> c.sql_type
-    Longest c _ -> c.sql_type
-    Standard_Deviation _ _ _ -> SQL_Type.double
-    Concatenate _ _ _ _ _ _ -> SQL_Type.text
-    Sum c _ ->
-        if (c.sql_type == SQL_Type.integer) || (c.sql_type == SQL_Type.smallint) then SQL_Type.bigint else
-            if c.sql_type == SQL_Type.bigint then SQL_Type.numeric else
-                c.sql_type
-    Average c _ ->
-        if c.sql_type.is_definitely_integer then SQL_Type.numeric else
-            if c.sql_type.is_definitely_double then SQL_Type.double else
-                c.sql_type
-    Median _ _ -> SQL_Type.double
-
 ## PRIVATE
 agg_count_is_null = Base_Generator.lift_unary_op "COUNT_IS_NULL" arg->
     Builder.code "COUNT(CASE WHEN " ++ arg.paren ++ " IS NULL THEN 1 END)"
@@ -283,8 +288,10 @@ make_order_descriptor internal_column sort_direction text_ordering =
     nulls = case sort_direction of
         Sort_Direction.Ascending -> Nulls_Order.First
         Sort_Direction.Descending -> Nulls_Order.Last
-    case internal_column.sql_type.is_likely_text of
-        True ->
+    case text_ordering of
+        Nothing ->
+            Order_Descriptor.Value internal_column.expression sort_direction nulls_order=nulls collation=Nothing
+        _ ->
             ## In the future we can modify this error to suggest using a custom defined collation.
             if text_ordering.sort_digits_as_numbers then Error.throw (Unsupported_Database_Operation.Error "Natural ordering is currently not supported. You may need to materialize the Table to perform this operation.") else
                 case text_ordering.case_sensitivity of
@@ -299,8 +306,6 @@ make_order_descriptor internal_column sort_direction text_ordering =
                         upper = SQL_Expression.Operation "UPPER" [internal_column.expression]
                         folded_expression = SQL_Expression.Operation "LOWER" [upper]
                         Order_Descriptor.Value folded_expression sort_direction nulls_order=nulls collation=Nothing
-        False ->
-            Order_Descriptor.Value internal_column.expression sort_direction nulls_order=nulls collation=Nothing

 ## PRIVATE
 is_nan = Base_Generator.lift_unary_op "IS_NAN" arg->
@@ -317,3 +322,16 @@ decimal_div = Base_Generator.lift_binary_op "/" x-> y->
 ## PRIVATE
 mod_op = Base_Generator.lift_binary_op "mod" x-> y->
     x ++ " - FLOOR(CAST(" ++ x ++ " AS double precision) / CAST(" ++ y ++ " AS double precision)) * " ++ y
+
+## PRIVATE
+postgres_statement_setter : Statement_Setter
+postgres_statement_setter =
+    default = Statement_Setter.default
+    fill_hole stmt i value = case value of
+        # TODO [RW] Postgres date handling #6115
+        _ : Date_Time ->
+            stmt.setTimestamp i (JDBCUtils.getTimestamp value)
+        # _ : Date ->
+        # _ : Time_Of_Day ->
+        _ -> default.fill_hole stmt i value
+    Statement_Setter.Value fill_hole
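The `postgres_statement_setter` above wraps the default setter and intercepts only the value types that need vendor-specific handling (`Date_Time` in the Enso code), delegating everything else. A rough Python sketch of that intercept-with-fallback shape (function names are illustrative; `calls` stands in for the JDBC `PreparedStatement`):

```python
import datetime

def default_fill_hole(calls, i, value):
    # The generic fallback path: hand the value to the driver as-is
    # (mirrors Statement_Setter.default / stmt.setObject).
    calls.append(("setObject", i, value))

def postgres_fill_hole(calls, i, value):
    # Intercept only the types that need vendor-specific treatment,
    # then fall back to the default setter for everything else.
    if isinstance(value, datetime.datetime):
        calls.append(("setTimestamp", i, value))
    else:
        default_fill_hole(calls, i, value)

calls = []
postgres_fill_hole(calls, 1, 42)
postgres_fill_hole(calls, 2, datetime.datetime(2023, 4, 1, 12, 0))
# calls[0] uses the generic path, calls[1] the timestamp-specific one
```

The design keeps dialect-specific setters small: only the exceptional cases are listed, so new dialects inherit sensible behavior for every other type.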
@@ -0,0 +1,129 @@
+from Standard.Base import all
+import Standard.Base.Errors.Illegal_Argument.Illegal_Argument
+
+import Standard.Table.Data.Type.Value_Type.Value_Type
+import Standard.Table.Data.Type.Value_Type.Bits
+from Standard.Table.Errors import Inexact_Type_Coercion
+
+import project.Data.SQL_Type.SQL_Type
+import project.Internal.IR.SQL_Expression.SQL_Expression
+import project.Internal.SQL_Type_Reference.SQL_Type_Reference
+
+polyglot java import java.sql.Types
+
+## PRIVATE
+type Postgres_Type_Mapping
+    ## PRIVATE
+    value_type_to_sql : Value_Type -> Problem_Behavior -> SQL_Type
+    value_type_to_sql value_type on_problems =
+        result = case value_type of
+            Value_Type.Boolean ->
+                SQL_Type.Value Types.BIT "bool" precision=1
+            # Byte is not available on Postgres so we substitute it with int2, the closest integral type.
+            Value_Type.Byte ->
+                SQL_Type.Value Types.SMALLINT "int2"
+            Value_Type.Integer Bits.Bits_16 ->
+                SQL_Type.Value Types.SMALLINT "int2"
+            Value_Type.Integer Bits.Bits_32 ->
+                SQL_Type.Value Types.INTEGER "int4"
+            Value_Type.Integer Bits.Bits_64 ->
+                SQL_Type.Value Types.BIGINT "int8"
+            Value_Type.Float Bits.Bits_32 ->
+                SQL_Type.Value Types.REAL "float4"
+            Value_Type.Float Bits.Bits_64 ->
+                SQL_Type.Value Types.DOUBLE "float8"
+            Value_Type.Decimal precision scale ->
+                SQL_Type.Value Types.DECIMAL "decimal" precision scale
+            Value_Type.Char size variable ->
+                case variable of
+                    True -> case size of
+                        Nothing -> SQL_Type.Value Types.VARCHAR "text"
+                        _ -> SQL_Type.Value Types.VARCHAR "varchar" size
+                    False -> SQL_Type.Value Types.CHAR "char" size
+            Value_Type.Time ->
+                SQL_Type.Value Types.TIME "time"
+            Value_Type.Date ->
+                SQL_Type.Value Types.DATE "date"
+            Value_Type.Date_Time with_timezone ->
+                type_name = if with_timezone then "timestamptz" else "timestamp"
+                SQL_Type.Value Types.TIMESTAMP type_name
+            Value_Type.Binary _ _ ->
+                # This is the maximum size that JDBC driver reports for Postgres.
+                max_int4 = 2147483647
+                SQL_Type.Value Types.BINARY "bytea" precision=max_int4
+            Value_Type.Mixed ->
+                Error.throw (Illegal_Argument.Error "Postgres tables do not support Mixed types.")
+            Value_Type.Unsupported_Data_Type type_name underlying_type ->
+                underlying_type.if_nothing <|
+                    Error.throw <|
+                        Illegal_Argument.Error <|
+                            "An unsupported SQL type ["+type_name.to_text+"] cannot be converted into an SQL type because it did not contain the SQL metadata needed to reconstruct it."
+        approximated_value_type = Postgres_Type_Mapping.sql_type_to_value_type result
+        problems = if approximated_value_type == value_type then [] else [Inexact_Type_Coercion.Warning value_type approximated_value_type]
+        on_problems.attach_problems_before problems result
+
+    ## PRIVATE
+    sql_type_to_value_type : SQL_Type -> Value_Type
+    sql_type_to_value_type sql_type =
+        simple_type = simple_types_map.get sql_type.typeid Nothing
+        simple_type.if_nothing <|
+            ## If we didn't match any of the types from the simple mapping, we
+               continue with the more complex mappings that take stuff like
+               precision into account.
+            case complex_types_map.get sql_type.typeid Nothing of
+                Nothing -> on_unknown_type sql_type
+                builder -> builder sql_type
+
+    ## PRIVATE
+       The Postgres type mapping always relies on the return type determined by
+       the database backend.
+    infer_return_type : (SQL_Expression -> SQL_Type_Reference) -> Text -> Vector -> SQL_Expression -> SQL_Type_Reference
+    infer_return_type infer_from_database_callback op_name arguments expression =
+        _ = [op_name, arguments]
+        infer_from_database_callback expression
+
+    ## PRIVATE
+       We want to respect any overriding references, but references that rely on
+       computing the type by the database are resolved to Nothing to just rely
+       on the `ResultSet` metadata and decrease overhead.
+    prepare_type_overrides : Nothing | Vector SQL_Type_Reference -> Nothing | Vector (Nothing | SQL_Type)
+    prepare_type_overrides column_type_suggestions = case column_type_suggestions of
+        Nothing -> Nothing
+        _ : Vector -> column_type_suggestions.map .to_type_override
+
+## PRIVATE
+simple_types_map = Map.from_vector <|
+    ints = [[Types.SMALLINT, Value_Type.Integer Bits.Bits_16], [Types.BIGINT, Value_Type.Integer Bits.Bits_64], [Types.INTEGER, Value_Type.Integer Bits.Bits_32]]
+    floats = [[Types.DOUBLE, Value_Type.Float Bits.Bits_64], [Types.REAL, Value_Type.Float Bits.Bits_32]]
+    # TODO Bit1, Date_Time
+    other = [[Types.DATE, Value_Type.Date], [Types.TIME, Value_Type.Time]]
+    ints + floats + other
+
+## PRIVATE
+complex_types_map = Map.from_vector <|
+    make_decimal sql_type =
+        Value_Type.Decimal sql_type.precision sql_type.scale
+    make_varchar sql_type =
+        Value_Type.Char size=sql_type.precision variable_length=True
+    make_char sql_type =
+        Value_Type.Char size=sql_type.precision variable_length=False
+    make_binary variable sql_type =
+        Value_Type.Binary size=sql_type.precision variable_length=variable
+    handle_bit sql_type =
+        if sql_type.name == "bool" then Value_Type.Boolean else
+            # We currently do not support bit types.
+            on_unknown_type sql_type
+    handle_timestamp sql_type = case sql_type.name of
+        "timestamptz" -> Value_Type.Date_Time with_timezone=True
+        "timestamp" -> Value_Type.Date_Time with_timezone=False
+        _ -> on_unknown_type sql_type
+
+    numerics = [[Types.DECIMAL, make_decimal], [Types.NUMERIC, make_decimal]]
+    strings = [[Types.VARCHAR, make_varchar], [Types.CHAR, make_char], [Types.CLOB, make_varchar]]
+    binaries = [[Types.BINARY, make_binary True], [Types.BIT, handle_bit]]
+    others = [[Types.TIMESTAMP, handle_timestamp]]
+    numerics + strings + binaries + others
+
+## PRIVATE
+on_unknown_type sql_type =
+    Value_Type.Unsupported_Data_Type sql_type.name sql_type
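The new mapping file's `sql_type_to_value_type` performs a two-stage lookup: first a map of "simple" types keyed purely on the JDBC type id, then a map of builder functions for types whose Enso type depends on precision or on the vendor-specific type name (e.g. Postgres reporting booleans as `BIT` named `bool`). A condensed Python sketch of the same dispatch — the value-type names are abbreviated stand-ins, while the integer constants are the real `java.sql.Types` codes:

```python
# Real java.sql.Types integer codes for the types used below.
SMALLINT, INTEGER, BIGINT, VARCHAR, BIT = 5, 4, -5, 12, -7

# Stage 1: types fully determined by their JDBC type id.
simple_types = {SMALLINT: "Integer_16", INTEGER: "Integer_32", BIGINT: "Integer_64"}

def make_varchar(t):
    # Variable-length text: the precision carries the size limit.
    return ("Char", t["precision"], True)

def handle_bit(t):
    # Postgres reports booleans as BIT with the vendor type name "bool".
    return "Boolean" if t["name"] == "bool" else ("Unsupported", t["name"])

# Stage 2: builders that inspect precision or the vendor type name.
complex_types = {VARCHAR: make_varchar, BIT: handle_bit}

def sql_type_to_value_type(t):
    simple = simple_types.get(t["typeid"])
    if simple is not None:
        return simple
    builder = complex_types.get(t["typeid"])
    return builder(t) if builder is not None else ("Unsupported", t["name"])
```

For example, `sql_type_to_value_type({"typeid": 12, "name": "varchar", "precision": 10})` resolves through the second stage and preserves the size limit, whereas plain integer columns short-circuit in the first stage.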
@@ -8,6 +8,8 @@ import project.Data.SQL_Statement.SQL_Statement
 import project.Data.SQL_Type.SQL_Type
 import project.Data.Table.Table
 import project.Internal.Base_Generator
+import project.Internal.Column_Fetcher.Column_Fetcher
+import project.Internal.Column_Fetcher as Column_Fetcher_Module
 import project.Internal.IR.From_Spec.From_Spec
 import project.Internal.IR.Internal_Column.Internal_Column
 import project.Internal.IR.SQL_Join_Kind.SQL_Join_Kind
@@ -15,6 +17,10 @@ import project.Internal.IR.Order_Descriptor.Order_Descriptor
 import project.Internal.IR.Query.Query
 import project.Internal.Postgres.Postgres_Dialect
 import project.Internal.Common.Database_Join_Helper
+import project.Internal.Postgres.Postgres_Type_Mapping.Postgres_Type_Mapping
+import project.Internal.SQL_Type_Mapping.SQL_Type_Mapping
+import project.Internal.Statement_Setter.Statement_Setter
+from project.Errors import Unsupported_Database_Operation

 ## PRIVATE

@@ -37,6 +43,9 @@ type Redshift_Dialect
     name : Text
     name self = "redshift"

+    ## PRIVATE
+    to_text self = "Redshift_Dialect"
+
     ## PRIVATE
        A function which generates SQL code from the internal representation
        according to the specific dialect.
@@ -44,21 +53,19 @@ type Redshift_Dialect
     generate_sql self query =
         Base_Generator.generate_query self.internal_generator_dialect query . build

-    ## PRIVATE
-       Deduces the result type for an aggregation operation.
-
-       The provided aggregate is assumed to contain only already resolved columns.
-       You may need to transform it with `resolve_aggregate` first.
-    resolve_target_sql_type : Aggregate_Column -> SQL_Type
-    resolve_target_sql_type self aggregate =
-        Postgres_Dialect.resolve_target_sql_type aggregate
-
     ## PRIVATE
        Prepares an ordering descriptor.

        One of the purposes of this method is to verify if the expected ordering
        settings are supported by the given database backend.
-    prepare_order_descriptor : Internal_Column -> Sort_Direction -> Text_Ordering -> Order_Descriptor
+
+       Arguments:
+       - internal_column: the column to order by.
+       - sort_direction: the direction of the ordering.
+       - text_ordering: If provided, specifies that the column should be treated
+         as text values according to the provided ordering. For non-text types,
+         it should be set to `Nothing`.
+    prepare_order_descriptor : Internal_Column -> Sort_Direction -> Nothing | Text_Ordering -> Order_Descriptor
     prepare_order_descriptor self internal_column sort_direction text_ordering =
         Postgres_Dialect.make_order_descriptor internal_column sort_direction text_ordering

@@ -81,3 +88,28 @@ type Redshift_Dialect
        given backend.
     get_naming_helpers : Naming_Helpers
     get_naming_helpers self = Naming_Helpers
+
+    ## PRIVATE
+       Returns the mapping between SQL types of this dialect and Enso
+       `Value_Type`.
+    get_type_mapping : SQL_Type_Mapping
+    get_type_mapping self = Postgres_Type_Mapping
+
+    ## PRIVATE
+       Creates a `Column_Fetcher` used to fetch data from a result set and build
+       an in-memory column from it, based on the given column type.
+    make_column_fetcher_for_type : SQL_Type -> Column_Fetcher
+    make_column_fetcher_for_type self sql_type =
+        type_mapping = self.get_type_mapping
+        value_type = type_mapping.sql_type_to_value_type sql_type
+        Column_Fetcher_Module.default_fetcher_for_value_type value_type
+
+    ## PRIVATE
+    get_statement_setter : Statement_Setter
+    get_statement_setter self = Postgres_Dialect.postgres_statement_setter
+
+    ## PRIVATE
+    check_aggregate_support : Aggregate_Column -> Boolean ! Unsupported_Database_Operation
+    check_aggregate_support self aggregate =
+        _ = aggregate
+        True
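`prepare_type_overrides` in the Postgres type mapping returns `Nothing` for columns whose type should simply be taken from the `ResultSet` metadata, and a concrete `SQL_Type` only where an override must win. Resolving the final per-column types might look like this in Python (a sketch under that interpretation; the function name is hypothetical):

```python
def choose_column_types(metadata_types, type_overrides):
    # An override of None (or no overrides vector at all) means:
    # trust the type reported by the ResultSet metadata for that column.
    if type_overrides is None:
        return list(metadata_types)
    return [override if override is not None else meta
            for meta, override in zip(metadata_types, type_overrides)]

# Without overrides, metadata types pass through unchanged;
# with a partial override vector, only the overridden slots change.
resolved = choose_column_types(["int4", "text"], [None, "bool"])
# resolved is ["int4", "bool"]
```

Each resolved type would then be handed to the dialect's `make_column_fetcher_for_type` to pick the builder for the in-memory column.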
@@ -1,10 +1,9 @@
 from Standard.Base import all

 import Standard.Table.Data.Table.Table as Materialized_Table
-import Standard.Table.Data.Column.Column as Materialized_Column
-import Standard.Table.Internal.Java_Exports

 import project.Data.SQL_Type.SQL_Type
+import project.Internal.Column_Fetcher.Column_Fetcher

 polyglot java import java.sql.ResultSet

@@ -12,6 +11,7 @@ polyglot java import java.sql.ResultSet
    Read a single column from a ResultSet into a Vector
 read_column : ResultSet -> Text -> Vector
 read_column result_set column_name =
+    # TODO use fetcher
     if result_set.isClosed then [] else
         index = result_set.findColumn column_name

@@ -25,17 +25,25 @@ read_column result_set column_name =

 ## PRIVATE
    Converts a ResultSet into a Materialized_Table.
-result_set_to_table : ResultSet -> (Vector | Nothing) -> Boolean -> Materialized_Table
-result_set_to_table result_set expected_types=Nothing last_row_only=False =
+result_set_to_table : ResultSet -> (SQL_Type -> Column_Fetcher) -> (Vector (SQL_Type | Nothing) | Nothing) -> Boolean -> Materialized_Table
+result_set_to_table result_set make_column_fetcher type_overrides=Nothing last_row_only=False =
     metadata = result_set.getMetaData
     ncols = metadata.getColumnCount
     column_names = Vector.new ncols ix-> metadata.getColumnName ix+1
-    column_types = if expected_types.is_nothing.not then expected_types else
-        Vector.new ncols ix->
-            typeid = metadata.getColumnType ix+1
-            name = metadata.getColumnTypeName ix+1
-            SQL_Type.Value typeid name
-    column_builders = column_types.map create_builder
+    metadata_types = Vector.new ncols ix-> SQL_Type.from_metadata metadata ix+1
+    column_types = case type_overrides of
+        Nothing -> metadata_types
+        _ : Vector ->
+            effective_types = type_overrides.zip metadata_types overridden_type-> metadata_type->
+                case overridden_type of
+                    Nothing -> metadata_type
+                    _ -> overridden_type
+            effective_types
+    column_fetchers = column_types.map make_column_fetcher
+    initial_size = 10
+    column_builders = column_fetchers.map fetcher->
+        fetcher.make_builder initial_size
+    fetchers_and_builders = column_fetchers.zip column_builders
     case last_row_only of
         True ->
             ## Not using the `ResultSet.last` as not supported by all connection types.
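The override-merging step above (a `Nothing` override defers to the `ResultSet` metadata, a concrete override wins) can be modeled in a few lines of Python; names are illustrative, not the actual Enso API:

```python
# Python rendering of the type-override merge used in `result_set_to_table`:
# None stands in for Enso's Nothing.

def effective_types(type_overrides, metadata_types):
    # No overrides at all: trust the metadata for every column.
    if type_overrides is None:
        return list(metadata_types)
    # Per-column: a None entry falls back to metadata, a concrete entry wins.
    return [meta if override is None else override
            for override, meta in zip(type_overrides, metadata_types)]

print(effective_types([None, "BOOLEAN"], ["INTEGER", "INTEGER"]))
# ['INTEGER', 'BOOLEAN']
```

This is exactly the hook SQLite needs: its metadata reports INTEGER for boolean results, so the dialect injects a BOOLEAN override for just those columns.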
@@ -45,109 +53,18 @@ result_set_to_table result_set expected_types=Nothing last_row_only=False =
                     column_builders.each_with_index ix-> builder-> builder.append (current.at ix)
                     Nothing
                 False ->
-                    values = column_builders.map_with_index ix-> builder-> builder.fetch_value result_set ix+1
+                    values = column_fetchers.map_with_index ix-> fetcher-> fetcher.fetch_value result_set ix+1
                     @Tail_Call go result_set.next values
             go result_set.next Nothing
         False ->
             go has_next = if has_next.not then Nothing else
-                column_builders.map_with_index ix-> builder->
-                    builder.fetch_and_append result_set ix+1
+                fetchers_and_builders.each_with_index ix-> pair->
+                    fetcher = pair.first
+                    builder = pair.second
+                    value = fetcher.fetch_value result_set ix+1
+                    builder.append value
                 @Tail_Call go result_set.next
             go result_set.next
     columns = column_builders.zip column_names builder-> name->
         builder.make_column name
     Materialized_Table.new columns
-
-## PRIVATE
-
-   Creates a builder for a column based on a provided SQL type, trying to infer
-   the best type for the builder.
-
-   Arguments:
-   - sql_type: The SQL type of the column to create a builder for.
-create_builder : SQL_Type -> Builder
-create_builder sql_type =
-    initial_size = 10
-    if sql_type.is_definitely_boolean then Builder.Builder_Boolean (Java_Exports.make_bool_builder) else
-        if sql_type.is_definitely_integer then Builder.Builder_Long (Java_Exports.make_long_builder initial_size) else
-            if sql_type.is_definitely_double then Builder.Builder_Double (Java_Exports.make_double_builder initial_size) else
-                Builder.Builder_Inferred (Java_Exports.make_inferred_builder initial_size)
-
-type Builder
-
-    ## PRIVATE
-
-       A builder that has an inferred column type at runtime.
-
-       Arguments:
-       - java_builder: The underlying builder object.
-    Builder_Inferred java_builder
-
-    ## PRIVATE
-
-       A builder that has a Decimal column type at runtime.
-
-       Arguments:
-       - java_builder: The underlying double NumericBuilder object.
-    Builder_Double java_builder
-
-    ## PRIVATE
-
-       A builder that has an Integer column type at runtime.
-
-       Arguments:
-       - java_builder: The underlying long NumericBuilder object.
-    Builder_Long java_builder
-
-    ## PRIVATE
-
-       A builder that has an Boolean column type at runtime.
-
-       Arguments:
-       - java_builder: The underlying BoolBuilder object.
-    Builder_Boolean java_builder
-
-    ## PRIVATE
-
-       Fetches the value of ith column from the current row of the result set
-       and appends it to the builder.
-
-       Arguments:
-       - rs: the Java ResultSet from which the value will be fetched.
-       - i: the index of the column to fetch from (starting from 1 as is the
-         ResultSet convention).
-    fetch_and_append : ResultSet -> Integer -> Nothing
-    fetch_and_append self rs i =
-        value = self.fetch_value rs i
-        self.append value
-
-    ## PRIVATE
-       Fetches the value of ith column from the current row of the result set
-    fetch_value : ResultSet -> Integer -> Any
-    fetch_value self rs i =
-        value = case self of
-            Builder.Builder_Inferred _ -> rs.getObject i
-            Builder.Builder_Boolean _ -> rs.getBoolean i
-            Builder.Builder_Long _ -> rs.getLong i
-            Builder.Builder_Double _ -> rs.getDouble i
-        if rs.wasNull then Nothing else value
-
-    ## PRIVATE
-    append : Any -> Nothing
-    append self value = if value.is_nothing then self.java_builder.appendNulls 1 else
-        case self of
-            Builder.Builder_Inferred _ -> self.java_builder.append value
-            Builder.Builder_Boolean _ -> self.java_builder.appendBoolean value
-            Builder.Builder_Long _ -> self.java_builder.appendLong value
-            Builder.Builder_Double _ -> self.java_builder.appendDouble value
-
-    ## PRIVATE
-
-       Seals the builder and returns a built Java-column.
-
-       Argument:
-       - name: The name of the column.
-    make_column : Text -> Materialized_Column
-    make_column self name =
-        storage = self.java_builder.seal
-        Java_Exports.make_column name storage
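The rewritten loop pairs each column with a fetcher (reads a value from the current row) and a builder (accumulates values), replacing the old monolithic `Builder` type. A compact Python sketch of that row-by-row materialization (a model, not the real API):

```python
# Python sketch of the fetcher/builder pairing in `result_set_to_table`.
# Plain lists stand in for fetcher.make_builder; a tuple stands in for a
# ResultSet row.

def materialize(rows, fetchers):
    builders = [[] for _ in fetchers]          # one builder per column
    for row in rows:                           # mirrors the `go` loop
        for ix, (fetch, builder) in enumerate(zip(fetchers, builders)):
            builder.append(fetch(row, ix))     # fetch_value + append
    return builders

int_fetch = lambda row, ix: int(row[ix])
rows = [("1", "2"), ("3", "4")]
print(materialize(rows, [int_fetch, int_fetch]))  # [[1, 3], [2, 4]]
```

Splitting fetching from building is what lets dialects plug in vendor-specific fetchers (e.g. Postgres dates) without touching the loop itself.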
@@ -0,0 +1,72 @@
+from Standard.Base import all
+import Standard.Base.Errors.Unimplemented.Unimplemented
+
+import Standard.Table.Data.Type.Value_Type.Value_Type
+
+import project.Data.SQL_Type.SQL_Type
+import project.Internal.IR.SQL_Expression.SQL_Expression
+import project.Internal.SQL_Type_Reference.SQL_Type_Reference
+
+type SQL_Type_Mapping
+    ## Converts the given Value_Type to its corresponding SQL_Type.
+
+       Some SQL dialects may not support all Value_Types (in fact most will
+       have at least a few exceptions, and some like SQLite may have very
+       limited support). If an SQL_Type that matches the Value_Type cannot be
+       found, a closest approximate match is returned instead. If an exact match
+       cannot be found, an `Inexact_Type_Coercion` warning is reported according
+       to the `on_problems` setting.
+
+       If the conversion is exact, it should be reversible, i.e.
+       `sql_type_to_value_type (value_type_to_sql x Problem_Behavior.Report_Error) = x`.
+    value_type_to_sql : Value_Type -> Problem_Behavior -> SQL_Type
+    value_type_to_sql value_type on_problems =
+        _ = [value_type, on_problems]
+        Unimplemented.throw "This is an interface only."
+
+    ## Converts the given SQL_Type to its corresponding Value_Type.
+    sql_type_to_value_type : SQL_Type -> Value_Type
+    sql_type_to_value_type sql_type =
+        _ = sql_type
+        Unimplemented.throw "This is an interface only."
+
+    ## Returns a `SQL_Type_Reference` that will resolve to the resulting type of
+       the given operation.
+
+       In most cases this will just delegate to `infer_from_database_callback`
+       which should ask the particular database backend to infer the type, but
+       some specific cases may override the default behavior. The notable
+       example is the ability to support Boolean types in SQLite.
+
+       The particular operation is identified by its name. It also gets a vector
+       of supplied arguments in case the result type may depend on them. The
+       arguments are passed as-is, i.e. they may be Database columns or raw Enso
+       values. The generated IR expression is also provided, as depending on
+       the backend either the raw arguments or the target expression may
+       be more useful to create the return type. In particular, the expression
+       may be used as an argument for the `infer_from_database_callback`.
+    infer_return_type : (SQL_Expression -> SQL_Type_Reference) -> Text -> Vector -> SQL_Expression -> SQL_Type_Reference
+    infer_return_type infer_from_database_callback op_name arguments expression =
+        _ = [infer_from_database_callback, op_name, arguments, expression]
+        Unimplemented.throw "This is an interface only."
+
+    ## PRIVATE
+       Gets a list of type suggestions and returns a list of type overrides for
+       a query.
+
+       This is used to customize type override behavior - most backends will
+       correctly infer types from metadata, so unless specifically overridden,
+       we can rely on the `ResultSet` metadata and reduce any overhead. However,
+       in some backends (SQLite) the metadata may not be as useful (in SQLite,
+       the metadata changes depending on the result row, so the first row,
+       which is usually consulted, may not reflect the needs of the whole column) -
+       this method allows providing custom overrides in such cases.
+
+       If the vector contains a `Nothing` at a given position, that column type
+       will be inferred from the `ResultSet` metadata. If it contains a concrete
+       type, that type will be used instead, regardless of what is coming from
+       the metadata.
+    prepare_type_overrides : Nothing | Vector SQL_Type_Reference -> Nothing | Vector (Nothing | SQL_Type)
+    prepare_type_overrides column_type_suggestions =
+        _ = column_type_suggestions
+        Unimplemented.throw "This is an interface only."
@@ -0,0 +1,76 @@
+from Standard.Base import all
+import Standard.Base.Errors.Illegal_State.Illegal_State
+import Standard.Base.Runtime.Lazy.Lazy
+
+import project.Connection.Connection.Connection
+import project.Data.SQL_Type.SQL_Type
+import project.Internal.IR.Context.Context
+import project.Internal.IR.SQL_Expression.SQL_Expression
+import project.Internal.IR.Query.Query
+
+type SQL_Type_Reference
+    ## Refers to the SQL type of a given column, as computed by the Database
+       itself.
+
+       Since fetching this type requires querying the database, it is computed
+       lazily and cached.
+    Computed_By_Database (lazy_ref : Lazy)
+
+    ## Refers to an SQL type that is overridden by the dialect's type system.
+    Overridden (value : SQL_Type)
+
+    ## PRIVATE
+       Returns the stored SQL type.
+
+       This may perform a database query on first access.
+    get : SQL_Type
+    get self = case self of
+        SQL_Type_Reference.Computed_By_Database lazy_ref -> lazy_ref.get
+        SQL_Type_Reference.Overridden value -> value
+
+    ## PRIVATE
+       Creates an `SQL_Type_Reference` from a known constant.
+
+       This is useful when the type is already known (for example in
+       `Database.make_table`, because the column types were already fetched) or when
+       the type is overridden (for example when pretending that SQLite has a boolean
+       type).
+    from_constant : SQL_Type -> SQL_Type_Reference
+    from_constant sql_type = SQL_Type_Reference.Overridden sql_type
+
+    ## PRIVATE
+       Creates a new `SQL_Type_Reference` from a given SQL expression evaluated in a
+       provided context. The connection is used to ask the database engine what the
+       expected type will be.
+    new : Connection -> Context -> SQL_Expression -> SQL_Type_Reference
+    new connection context expression =
+        do_fetch =
+            empty_context = context.add_where_filters [SQL_Expression.Constant False]
+            statement = connection.dialect.generate_sql (Query.Select [["typed_column", expression]] empty_context)
+            statement_setter = connection.dialect.get_statement_setter
+            columns = connection.jdbc_connection.fetch_columns statement statement_setter
+            only_column = columns.first
+            only_column.second
+        SQL_Type_Reference.Computed_By_Database (Lazy.new do_fetch)
+
+    ## PRIVATE
+       Creates a new `SQL_Type_Reference` that should never be used.
+       This is used by some internal methods which need to construct an internal
+       column, but we can guarantee that its SQL Type will never be checked.
+    null : SQL_Type_Reference
+    null =
+        getter =
+            Error.throw (Illegal_State.Error "Getting the SQL_Type from SQL_Type_Reference.null is not allowed. This indicates a bug in the Database library.")
+        SQL_Type_Reference.Computed_By_Database (Lazy.new getter)
+
+    ## PRIVATE
+       Turns this reference into a type override.
+
+       If the type is computed by the database, this will return `Nothing`,
+       allowing the fetch method to read the type from query metadata. However,
+       if it was overridden, it will return that override to be used instead of
+       the type coming from the metadata.
+    to_type_override : SQL_Type | Nothing
+    to_type_override self = case self of
+        SQL_Type_Reference.Overridden sql_type -> sql_type
+        SQL_Type_Reference.Computed_By_Database _ -> Nothing
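The two variants of `SQL_Type_Reference` boil down to "compute once, lazily, via a database round-trip" versus "a fixed override". A Python model of that behavior (class and method names are illustrative; `compute` stands in for the database query performed by `SQL_Type_Reference.new`):

```python
# Sketch of the two SQL_Type_Reference variants. The lazy variant caches the
# result of a single "database" call; the overridden variant short-circuits it.

class ComputedByDatabase:
    def __init__(self, compute):
        self._compute = compute
        self._cached = None
        self._done = False
    def get(self):
        if not self._done:            # perform the round-trip only once
            self._cached = self._compute()
            self._done = True
        return self._cached
    def to_type_override(self):
        return None                   # defer to ResultSet metadata

class Overridden:
    def __init__(self, sql_type):
        self._sql_type = sql_type
    def get(self):
        return self._sql_type
    def to_type_override(self):
        return self._sql_type         # force this type over the metadata

calls = []
ref = ComputedByDatabase(lambda: calls.append(1) or "INTEGER")
assert ref.get() == "INTEGER" and ref.get() == "INTEGER" and len(calls) == 1
assert Overridden("BOOLEAN").to_type_override() == "BOOLEAN"
```

The asymmetry in `to_type_override` is deliberate: only dialect-forced types become overrides, so the common path still reads types straight from the query metadata with no extra work.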
@@ -7,12 +7,13 @@ import Standard.Base.Metadata.Display

 import Standard.Table.Data.Table.Table as Materialized_Table

+import project.Connection.Connection.Connection
 import project.Data.SQL_Query.SQL_Query
 import project.Data.SQL_Type.SQL_Type
-import project.Internal.JDBC_Connection
 import project.Data.Dialect
-import project.Connection.Connection.Connection
 import project.Data.Table.Table as Database_Table
+import project.Internal.JDBC_Connection
+import project.Internal.SQL_Type_Reference.SQL_Type_Reference

 import project.Data.SQL_Statement.SQL_Statement
 from project.Errors import SQL_Error
@@ -106,26 +107,6 @@ type SQLite_Connection
     read : Text | SQL_Query -> Integer | Nothing -> Materialized_Table
     read self query limit=Nothing = self.connection.read query limit
-
-    ## PRIVATE
-       Internal read function for a statement with optional types.
-
-       Arguments:
-       - statement: SQL_Statement to execute.
-       - expected_types: Optional vector of expected types for each column.
-    read_statement : SQL_Statement -> (Nothing | Vector SQL_Type) -> Materialized_Table
-    read_statement self statement expected_types=Nothing =
-        self.connection.read_statement statement expected_types
-
-    ## PRIVATE
-       Internal read function for a statement with optional types returning just last row.
-
-       Arguments:
-       - statement: SQL_Statement to execute.
-       - expected_types: Optional vector of expected types for each column.
-    read_last_row : SQL_Statement -> (Nothing | Vector SQL_Type) -> Materialized_Table
-    read_last_row self statement expected_types=Nothing =
-        self.connection.read_last_row statement expected_types

     ## ADVANCED

        Executes a raw update query. If the query was inserting, updating or
@@ -13,6 +13,8 @@ import project.Data.SQL_Statement.SQL_Statement
 import project.Data.SQL_Type.SQL_Type
 import project.Data.Table.Table
 import project.Internal.Base_Generator
+import project.Internal.Column_Fetcher.Column_Fetcher
+import project.Internal.Column_Fetcher as Column_Fetcher_Module
 import project.Internal.IR.Context.Context
 import project.Internal.IR.From_Spec.From_Spec
 import project.Internal.IR.Internal_Column.Internal_Column
@@ -21,7 +23,9 @@ import project.Internal.IR.Order_Descriptor.Order_Descriptor
 import project.Internal.IR.Query.Query
 import project.Internal.Common.Database_Distinct_Helper
 import project.Internal.Common.Database_Join_Helper
+import project.Internal.SQL_Type_Mapping.SQL_Type_Mapping
+import project.Internal.SQLite.SQLite_Type_Mapping.SQLite_Type_Mapping
+import project.Internal.Statement_Setter.Statement_Setter
 from project.Errors import Unsupported_Database_Operation

 ## PRIVATE
@@ -45,6 +49,9 @@ type SQLite_Dialect
     name : Text
     name self = "SQLite"

+    ## PRIVATE
+    to_text self = "SQLite_Dialect"
+
     ## PRIVATE
        A function which generates SQL code from the internal representation
        according to the specific dialect.
@@ -52,22 +59,23 @@ type SQLite_Dialect
     generate_sql self query =
         Base_Generator.generate_query self.internal_generator_dialect query . build
-
-    ## PRIVATE
-       Deduces the result type for an aggregation operation.
-
-       The provided aggregate is assumed to contain only already resolved columns.
-       You may need to transform it with `resolve_aggregate` first.
-    resolve_target_sql_type : Aggregate_Column -> SQL_Type
-    resolve_target_sql_type self aggregate = resolve_target_sql_type aggregate

     ## PRIVATE
        Prepares an ordering descriptor.

        One of the purposes of this method is to verify if the expected ordering
        settings are supported by the given database backend.
-    prepare_order_descriptor : Internal_Column -> Sort_Direction -> Text_Ordering -> Order_Descriptor
-    prepare_order_descriptor self internal_column sort_direction text_ordering = case internal_column.sql_type.is_likely_text of
-        True ->
+
+       Arguments:
+       - internal_column: the column to order by.
+       - sort_direction: the direction of the ordering.
+       - text_ordering: If provided, specifies that the column should be treated
+         as text values according to the provided ordering. For non-text types,
+         it should be set to `Nothing`.
+    prepare_order_descriptor : Internal_Column -> Sort_Direction -> Nothing | Text_Ordering -> Order_Descriptor
+    prepare_order_descriptor self internal_column sort_direction text_ordering = case text_ordering of
+        Nothing ->
+            Order_Descriptor.Value internal_column.expression sort_direction collation=Nothing
+        _ ->
             if text_ordering.sort_digits_as_numbers then Error.throw (Unsupported_Database_Operation.Error "Natural ordering is not supported by the SQLite backend. You may need to materialize the Table to perform this operation.") else
                 case text_ordering.case_sensitivity of
                     Case_Sensitivity.Default ->
@@ -79,8 +87,6 @@ type SQLite_Dialect
                         Error.throw (Unsupported_Database_Operation.Error "Case insensitive ordering with custom locale is not supported by the SQLite backend. You may need to materialize the Table to perform this operation.")
                     True ->
                         Order_Descriptor.Value internal_column.expression sort_direction collation="NOCASE"
-        False ->
-            Order_Descriptor.Value internal_column.expression sort_direction collation=Nothing

     ## PRIVATE
        Prepares a join operation, returning a new table instance encapsulating a
@@ -106,7 +112,10 @@ type SQLite_Dialect
         new_columns = setup.new_columns.first
         column_mapping = Map.from_vector <| new_columns.map c-> [c.name, c]
         new_key_columns = key_columns.map c-> column_mapping.at c.name
-        distinct_expressions = new_key_columns.map (Database_Distinct_Helper.make_distinct_expression case_sensitivity problem_builder)
+        type_mapping = self.get_type_mapping
+        distinct_expressions = new_key_columns.map column->
+            value_type = type_mapping.sql_type_to_value_type column.sql_type_reference.get
+            Database_Distinct_Helper.make_distinct_expression case_sensitivity problem_builder column value_type
         new_context = Context.for_subquery setup.subquery . set_groups distinct_expressions
         table.updated_context_and_columns new_context new_columns subquery=True

@@ -123,6 +132,51 @@ type SQLite_Dialect
     get_naming_helpers : Naming_Helpers
     get_naming_helpers self = Naming_Helpers
+
+    ## PRIVATE
+       Returns the mapping between SQL types of this dialect and Enso
+       `Value_Type`.
+    get_type_mapping : SQL_Type_Mapping
+    get_type_mapping self = SQLite_Type_Mapping
+
+    ## PRIVATE
+       Creates a `Column_Fetcher` used to fetch data from a result set and build
+       an in-memory column from it, based on the given column type.
+    make_column_fetcher_for_type : SQL_Type -> Column_Fetcher
+    make_column_fetcher_for_type self sql_type =
+        type_mapping = self.get_type_mapping
+        value_type = type_mapping.sql_type_to_value_type sql_type
+        Column_Fetcher_Module.default_fetcher_for_value_type value_type
+
+    ## PRIVATE
+    get_statement_setter : Statement_Setter
+    get_statement_setter self = Statement_Setter.default
+
+    ## PRIVATE
+    check_aggregate_support : Aggregate_Column -> Boolean ! Unsupported_Database_Operation
+    check_aggregate_support self aggregate = case aggregate of
+        Group_By _ _ -> True
+        Count _ -> True
+        Count_Distinct columns _ _ ->
+            if columns.length == 1 then True else
+                unsupported "Count_Distinct on multiple columns"
+        Count_Not_Nothing _ _ -> True
+        Count_Nothing _ _ -> True
+        Count_Not_Empty _ _ -> True
+        Count_Empty _ _ -> True
+        Percentile _ _ _ -> unsupported "Percentile"
+        Mode _ _ -> unsupported "Mode"
+        First _ _ _ _ -> unsupported "First"
+        Last _ _ _ _ -> unsupported "Last"
+        Maximum _ _ -> True
+        Minimum _ _ -> True
+        Shortest _ _ -> unsupported "Shortest"
+        Longest _ _ -> unsupported "Longest"
+        Standard_Deviation _ _ _ -> True
+        Concatenate _ _ _ _ _ _ -> True
+        Sum _ _ -> True
+        Average _ _ -> True
+        Median _ _ -> unsupported "Median"

 ## PRIVATE
 make_internal_generator_dialect =
     text = [starts_with, contains, ends_with, make_case_sensitive]+concat_ops
@@ -134,33 +188,6 @@ make_internal_generator_dialect =
     my_mappings = text + counts + stats + arith_extensions + bool
     Base_Generator.base_dialect . extend_with my_mappings
-
-## PRIVATE
-   The provided aggregate is assumed to contain only already resolved columns.
-   You may need to transform it with `resolve_aggregate` first.
-resolve_target_sql_type aggregate = case aggregate of
-    Group_By c _ -> c.sql_type
-    Count _ -> SQL_Type.integer
-    Count_Distinct columns _ _ ->
-        if columns.length == 1 then SQL_Type.integer else
-            unsupported "Count_Distinct on multiple columns"
-    Count_Not_Nothing _ _ -> SQL_Type.integer
-    Count_Nothing _ _ -> SQL_Type.integer
-    Count_Not_Empty _ _ -> SQL_Type.integer
-    Count_Empty _ _ -> SQL_Type.integer
-    Percentile _ _ _ -> unsupported "Percentile"
-    Mode _ _ -> unsupported "Mode"
-    First _ _ _ _ -> unsupported "First"
-    Last _ _ _ _ -> unsupported "Last"
-    Maximum c _ -> c.sql_type
-    Minimum c _ -> c.sql_type
-    Shortest _ _ -> unsupported "Shortest"
-    Longest _ _ -> unsupported "Longest"
-    Standard_Deviation _ _ _ -> SQL_Type.real
-    Concatenate _ _ _ _ _ _ -> SQL_Type.text
-    Sum c _ -> c.sql_type
-    Average _ _ -> SQL_Type.real
-    Median _ _ -> unsupported "Median"
-
 ## PRIVATE
 unsupported name =
     Error.throw (Unsupported_Database_Operation.Error name+" is not supported by SQLite backend. You may need to materialize the table and perform the operation in-memory.")
@@ -0,0 +1,179 @@
+from Standard.Base import all
+import Standard.Base.Errors.Illegal_Argument.Illegal_Argument
+import Standard.Base.Errors.Illegal_State.Illegal_State
+
+import Standard.Table.Data.Type.Enso_Types
+import Standard.Table.Data.Type.Value_Type.Value_Type
+import Standard.Table.Data.Type.Value_Type.Bits
+from Standard.Table.Errors import Inexact_Type_Coercion
+
+import project.Data.Column.Column
+import project.Data.SQL_Type.SQL_Type
+import project.Internal.IR.Internal_Column.Internal_Column
+import project.Internal.IR.SQL_Expression.SQL_Expression
+import project.Internal.SQL_Type_Reference.SQL_Type_Reference
+
+polyglot java import java.sql.Types
+
+## PRIVATE
+   Mapping from Value_Type to SQLite is done by finding the closest matching
+   type corresponding to one of the 5 supported affinities: INTEGER, REAL,
+   NUMERIC, TEXT, BLOB. Thus many value types will end up being approximated by
+   a close but inexact type. Apart from that, a sixth artificial affinity is
+   introduced: BOOLEAN. Normally, SQLite does not have a dedicated boolean type
+   and uses INTEGER instead. However, it is useful for our users to distinguish
+   the boolean columns. We do this by manually overriding the type of columns
+   detected as boolean or returned from our boolean operations. The JDBC driver
+   automatically handles translating between the underlying INTEGER storage and
+   Java Booleans, so it is all seamless - only our type logic needs to be aware
+   that it cannot rely on the JDBC metadata, as the type reported for boolean
+   operations will be INTEGER - so we need to carefully ensure that the
+   override is applied.
+
+   While the JDBC driver tries to approximate more precise types based on the
+   type name, these approximations are not fully true as the underlying SQLite
+   storage is still only one of the supported affinities. So to avoid
+   suggesting to the user that the database can do things which it cannot
+   (like storing integers truncated at 32 bits or storing fixed-length text)
+   we approximate the supported types by data types that correspond to what
+   can actually be stored in the given column to match its affinity. While
+   SQLite allows storing any data in a column, we restrict the data to only
+   what can match the column's affinity, to be aligned with our other
+   backends.
+
+   We map the BLOB affinity to our Mixed type to allow for Mixed type columns.
+   One can still store binary data in such a column.
+
+   See the `JDBC3ResultSet::getColumnType` method in the
+   `org.xerial.sqlite-jdbc` module for the logic JDBC is using to map the
+   SQLite types.
+type SQLite_Type_Mapping
+    ## PRIVATE
+    value_type_to_sql : Value_Type -> Problem_Behavior -> SQL_Type
+    value_type_to_sql value_type on_problems =
+        result = case value_type of
+            Value_Type.Boolean -> SQLite_Types.boolean
+            Value_Type.Byte -> SQLite_Types.integer
+            Value_Type.Integer _ -> SQLite_Types.integer
+            Value_Type.Float _ -> SQLite_Types.real
+            Value_Type.Decimal _ _ -> SQLite_Types.numeric
+            Value_Type.Char _ _ -> SQLite_Types.text
+            Value_Type.Time -> SQLite_Types.blob
+            Value_Type.Date -> SQLite_Types.blob
+            Value_Type.Date_Time _ -> SQLite_Types.blob
+            Value_Type.Binary _ _ -> SQLite_Types.blob
+            Value_Type.Mixed -> SQLite_Types.blob
+            Value_Type.Unsupported_Data_Type type_name underlying_type ->
+                underlying_type.if_nothing <|
+                    Error.throw <|
+                        Illegal_Argument.Error <|
+                            "An unsupported SQL type ["+type_name.to_text+"] cannot be converted into an SQL type because it did not contain the SQL metadata needed to reconstruct it."
+        approximated_value_type = SQLite_Type_Mapping.sql_type_to_value_type result
+        problems = if approximated_value_type == value_type then [] else [Inexact_Type_Coercion.Warning value_type approximated_value_type]
+        on_problems.attach_problems_before problems result
+
+    ## PRIVATE
+    sql_type_to_value_type : SQL_Type -> Value_Type
+    sql_type_to_value_type sql_type =
+        on_not_found =
+            Value_Type.Unsupported_Data_Type sql_type.name sql_type
+        simple_types_map.get sql_type.typeid on_not_found
+
+    ## PRIVATE
+       The SQLite type mapping takes special measures to keep boolean columns
+       boolean even if the Database will say that they are numeric.
+
+       To do so, any operation that returns booleans will override its return
+       type to boolean, and operations that return the same type as their
+       inputs will also override to the boolean type if the input was boolean.
+       In particular, if an operation accepts multiple arguments, it will
+       override the return type to boolean if all the input arguments had
+       boolean type.
+    infer_return_type : (SQL_Expression -> SQL_Type_Reference) -> Text -> Vector -> SQL_Expression -> SQL_Type_Reference
+    infer_return_type infer_from_database_callback op_name arguments expression =
+        return value_type =
+            sql_type = SQLite_Type_Mapping.value_type_to_sql value_type Problem_Behavior.Ignore
+            SQL_Type_Reference.from_constant sql_type
+        infer_default_type =
+            infer_from_database_callback expression
+
+        find_type arg = case arg of
+            column : Column -> column.value_type
+            internal_column : Internal_Column ->
+                SQLite_Type_Mapping.sql_type_to_value_type internal_column.sql_type_reference.get
+            enso_value -> Enso_Types.most_specific_value_type enso_value use_smallest=True
+
+        handle_preserve_input_type _ =
+            inputs_types = arguments.map find_type
+            if inputs_types.is_empty then infer_default_type else
+                first_type = inputs_types.first
+                if inputs_types.all (== first_type) then return first_type else
+                    infer_default_type
+
+        handle_iif _ =
+            if arguments.length != 3 then
+                Panic.throw (Illegal_State.Error "Impossible: IIF must have 3 arguments. This is a bug in the Database library.")
+            inputs_types = arguments.drop 1 . map find_type
+            if inputs_types.first == inputs_types.second then return inputs_types.first else
+                infer_default_type
+
+        always_boolean_ops = ["==", "!=", "equals_ignore_case", ">=", "<=", "<", ">", "BETWEEN", "AND", "OR", "NOT", "IS_NULL", "IS_NAN", "IS_EMPTY", "LIKE", "IS_IN", "starts_with", "ends_with", "contains"]
+        always_text_ops = ["ADD_TEXT", "CONCAT", "CONCAT_QUOTE_IF_NEEDED"]
+        preserve_input_type_ops = ["ROW_MAX", "ROW_MIN", "MAX", "MIN", "FIRST", "LAST", "FIRST_NOT_NULL", "LAST_NOT_NULL", "FILL_NULL"]
+        others = [["IIF", handle_iif]]
+        mapping = Map.from_vector <|
+            v1 = always_boolean_ops.map [_, const (return Value_Type.Boolean)]
+            v2 = preserve_input_type_ops.map [_, handle_preserve_input_type]
+            v3 = always_text_ops.map [_, const (return default_text)]
+            v1 + v2 + v3 + others
+        handler = mapping.get op_name (_ -> infer_default_type)
+        handler Nothing
+
+    ## PRIVATE
+       SQLite `ResultSet` metadata may differ row-by-row, so we cannot rely on
+       this metadata. Instead, we get the types inferred for each column,
+       regardless of whether it was initially overridden or not.
+    prepare_type_overrides : Nothing | Vector SQL_Type_Reference -> Nothing | Vector (Nothing | SQL_Type)
+    prepare_type_overrides column_type_suggestions = case column_type_suggestions of
+        Nothing -> Nothing
+        _ : Vector -> column_type_suggestions.map .get
+
+## The types that the SQLite JDBC driver will report are: BOOLEAN, TINYINT,
+   SMALLINT, BIGINT, INTEGER, DECIMAL, DOUBLE, REAL, FLOAT, NUMERIC, DATE,
+   TIMESTAMP, CHAR, VARCHAR, BINARY, BLOB, CLOB.
+
+   We map the types to how they are actually stored, with the exception of
+   boolean which is mapped as boolean type as explained above.
+
+   For types like dates - we map them to unsupported type, because date
+   operations in SQLite are currently not supported due to their weird storage.
+simple_types_map = Map.from_vector <|
+    ints = [Types.TINYINT, Types.SMALLINT, Types.BIGINT, Types.INTEGER] . map x-> [x, Value_Type.Integer Bits.Bits_64]
+    floats = [Types.DOUBLE, Types.REAL, Types.FLOAT] . map x-> [x, Value_Type.Float Bits.Bits_64]
+    # We treat numeric as a float, since that is what really sits in SQLite under the hood.
+    numerics = [Types.DECIMAL, Types.NUMERIC] . map x-> [x, Value_Type.Float Bits.Bits_64]
+    strings = [Types.CHAR, Types.VARCHAR] . map x-> [x, default_text]
+    blobs = [Types.BINARY, Types.BLOB, Types.CLOB] . map x-> [x, Value_Type.Mixed]
+    special_types = [[Types.BOOLEAN, Value_Type.Boolean]]
+    ints + floats + numerics + strings + blobs + special_types
+
+type SQLite_Types
+    ## PRIVATE
+    text = SQL_Type.Value Types.VARCHAR "TEXT"
+
+    ## PRIVATE
+    numeric = SQL_Type.Value Types.NUMERIC "NUMERIC"
+
+    ## PRIVATE
+    integer = SQL_Type.Value Types.INTEGER "INTEGER"
+
+    ## PRIVATE
+    real = SQL_Type.Value Types.REAL "REAL"
+
+    ## PRIVATE
+    blob = SQL_Type.Value Types.BLOB "BLOB"
+
+    ## PRIVATE
+       The artificial 6th affinity that is used to distinguish boolean columns.
+    boolean = SQL_Type.Value Types.BOOLEAN "BOOLEAN"
+
+## PRIVATE
+default_text = Value_Type.Char size=Nothing variable_length=True
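The affinity-collapsing idea behind `value_type_to_sql` can be sketched outside of Enso. The following is a minimal Python illustration (not the Enso implementation; all names and the tuple-based warning format are assumptions): every logical value type is collapsed onto one of SQLite's affinities plus the artificial BOOLEAN, and a warning in the spirit of `Inexact_Type_Coercion` is produced whenever the round-trip changes the type.

```python
# Illustrative sketch of the SQLite value-type mapping described above.
# AFFINITY models value_type_to_sql; APPROXIMATION models sql_type_to_value_type.

AFFINITY = {
    "Boolean": "BOOLEAN",   # artificial affinity; SQLite stores it as INTEGER
    "Byte": "INTEGER",
    "Integer": "INTEGER",
    "Float": "REAL",
    "Decimal": "NUMERIC",
    "Char": "TEXT",
    "Time": "BLOB",
    "Date": "BLOB",
    "Date_Time": "BLOB",
    "Binary": "BLOB",
    "Mixed": "BLOB",
}

# The reverse mapping approximates what can really be stored under each affinity.
APPROXIMATION = {
    "BOOLEAN": "Boolean",
    "INTEGER": "Integer",
    "REAL": "Float",
    "NUMERIC": "Float",   # NUMERIC is treated as a float under the hood
    "TEXT": "Char",
    "BLOB": "Mixed",
}

def value_type_to_sql(value_type):
    """Returns (affinity, warnings); warnings signal an inexact coercion."""
    affinity = AFFINITY[value_type]
    approximated = APPROXIMATION[affinity]
    warnings = [] if approximated == value_type else [(value_type, approximated)]
    return affinity, warnings

print(value_type_to_sql("Integer"))   # ('INTEGER', []) - exact round-trip
print(value_type_to_sql("Decimal"))   # ('NUMERIC', [('Decimal', 'Float')]) - inexact
```

The second call shows why a `Decimal` column triggers an `Inexact_Type_Coercion` warning in the real mapping: the NUMERIC affinity it is stored under reads back as a float.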
@@ -0,0 +1,35 @@
+from Standard.Base import all
+import Standard.Base.Errors.Illegal_State.Illegal_State
+
+polyglot java import java.sql.PreparedStatement
+polyglot java import java.sql.Types as Java_Types
+
+type Statement_Setter
+    ## PRIVATE
+       Encapsulates the logic for filling a hole in a prepared statement.
+    Value (fill_hole : PreparedStatement -> Integer -> Any -> Nothing)
+
+    ## PRIVATE
+       The default setter that handles simple, commonly supported types.
+    default : Statement_Setter
+    default = Statement_Setter.Value fill_hole_default
+
+    ## PRIVATE
+       Used internally to mark statements that do not expect to have any
+       values to set.
+
+       It will panic if called.
+    null : Statement_Setter
+    null =
+        fill_hole_unexpected _ _ _ =
+            Panic.throw (Illegal_State.Error "The associated statement does not expect any values to be set. This is a bug in the Database library.")
+        Statement_Setter.Value fill_hole_unexpected
+
+## PRIVATE
+fill_hole_default stmt i value = case value of
+    Nothing -> stmt.setNull i Java_Types.NULL
+    _ : Boolean -> stmt.setBoolean i value
+    _ : Integer -> stmt.setLong i value
+    _ : Decimal -> stmt.setDouble i value
+    _ : Text -> stmt.setString i value
+    _ -> stmt.setObject i value
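The `fill_hole_default` dispatch above can be mirrored in Python to make the idea concrete. This is a rough analogue (the `set_*` names and the `RecordingStatement` test double are made up for illustration, not the JDBC API): the setter call is chosen from the runtime type of the value being bound to hole `i`.

```python
# Illustrative analogue of fill_hole_default: dispatch on the runtime type
# of the bound value, falling back to a generic object setter.

def fill_hole_default(stmt, i, value):
    # Each branch mirrors one arm of the Enso `case value of` dispatch.
    if value is None:
        stmt.set_null(i)
    elif isinstance(value, bool):        # must precede int: bool is an int in Python
        stmt.set_boolean(i, value)
    elif isinstance(value, int):
        stmt.set_long(i, value)
    elif isinstance(value, float):
        stmt.set_double(i, value)
    elif isinstance(value, str):
        stmt.set_string(i, value)
    else:
        stmt.set_object(i, value)

class RecordingStatement:
    """Hypothetical test double that records which setter was invoked."""
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        return lambda *args: self.calls.append((name, args))

stmt = RecordingStatement()
for i, v in enumerate([None, True, 42, 3.14, "hi"], start=1):
    fill_hole_default(stmt, i, v)
```

Note the ordering constraint commented in the code: unlike Enso's distinct `Boolean` and `Integer` types, Python's `bool` subclasses `int`, so the boolean branch must come first.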
@@ -7,16 +7,16 @@ import Standard.Base.Errors.Illegal_State.Illegal_State
 import Standard.Base.Data.Index_Sub_Range as Index_Sub_Range_Module
 
 import project.Data.Data_Formatter.Data_Formatter
-import project.Data.Storage.Storage
+import project.Data.Type.Storage
 import project.Data.Table.Table
-import project.Data.Value_Type.Value_Type
 import project.Internal.Java_Problems
 import project.Internal.Naming_Helpers.Naming_Helpers
 import project.Internal.Parse_Values_Helper
 import project.Internal.Widget_Helpers
 
 from project.Data.Table import print_table
-from project.Data.Value_Type import Auto, ensure_valid_parse_target
+from project.Data.Type.Value_Type import Value_Type, Auto
+from project.Data.Type.Value_Type_Helpers import ensure_valid_parse_target
 from project.Errors import No_Index_Set_Error, Floating_Point_Equality
 
 polyglot java import org.enso.table.data.column.operation.map.MapOperationProblemBuilder
@@ -614,8 +614,8 @@ type Column
 
         example_if = Examples.bool_column_1.iif 1 0
     iif : Any -> Any -> Column
-    iif self when_true when_false = case self.storage_type of
-        Storage.Boolean ->
+    iif self when_true when_false = case self.value_type of
+        Value_Type.Boolean ->
             new_name = "if " + Naming_Helpers.to_expression_text self + " then " + Naming_Helpers.to_expression_text when_true + " else " + Naming_Helpers.to_expression_text when_false
             s = self.java_column.getStorage
 
@@ -744,10 +744,10 @@ type Column
     is_blank : Boolean -> Column
     is_blank self treat_nans_as_blank=False =
         new_name = Naming_Helpers.function_name "is_blank" [self]
-        result = case self.storage_type of
-            Storage.Text -> self.is_empty
-            Storage.Decimal -> if treat_nans_as_blank then self.is_nothing || self.is_nan else self.is_nothing
-            Storage.Any -> if treat_nans_as_blank then self.is_empty || self.is_nan else self.is_empty
+        result = case self.value_type of
+            Value_Type.Char _ _ -> self.is_empty
+            Value_Type.Float _ -> if treat_nans_as_blank then self.is_nothing || self.is_nan else self.is_nothing
+            Value_Type.Mixed -> if treat_nans_as_blank then self.is_empty || self.is_nan else self.is_empty
            _ -> self.is_nothing
        result.rename new_name
 
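The `is_blank` hunk above dispatches on the column's `Value_Type` rather than its storage. A small per-value Python sketch of that decision table (names assumed; the real method operates on whole columns, not single values): text columns also treat the empty string as blank, float columns can optionally treat NaN as blank, and everything else only checks for missing values.

```python
import math

# Illustrative per-value version of the is_blank dispatch shown in the diff.
def is_blank(value, value_type, treat_nans_as_blank=False):
    is_nothing = value is None
    if value_type == "Char":
        return is_nothing or value == ""
    if value_type == "Float":
        is_nan = (not is_nothing) and math.isnan(value)
        return is_nothing or (treat_nans_as_blank and is_nan)
    if value_type == "Mixed":
        is_empty = is_nothing or value == ""
        is_nan = isinstance(value, float) and math.isnan(value)
        return is_empty or (treat_nans_as_blank and is_nan)
    # Default branch: only Nothing counts as blank.
    return is_nothing
```

For example, `is_blank(float("nan"), "Float")` is `False` by default but `True` with `treat_nans_as_blank=True`, matching the conditional in the rewritten column method.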
@@ -1171,22 +1171,14 @@ type Column
     to_vector : Vector
     to_vector self = Vector.from_polyglot_array self.java_column.getStorage.toList
 
-    ## Returns the underlying storage type of this column.
-
-       > Example
-         Get the storage type of a column.
-
-             import Standard.Examples
-
-             example_storage_type = Examples.integer_column.storage_type
-    storage_type : Storage
-    storage_type self =
-        tp = self.java_column.getStorage.getType
-        Storage.from_java tp
-
-    ## UNSTABLE TODO this is a prototype that will be revisited later on
+    ## Returns the `Value_Type` associated with that column.
+
+       The value type determines what type of values the column is storing and
+       what operations are permitted.
     value_type : Value_Type
-    value_type self = self.storage_type.to_approximate_value_type
+    value_type self =
+        storage_type = self.java_column.getStorage.getType
+        Storage.to_value_type storage_type
 
     ## UNSTABLE
 
@@ -1529,41 +1521,10 @@ slice_ranges column ranges =
    Creates a storage builder suitable for building a column for the provided
    column type.
 
-   This relies on a rudimentary mapping between `Value_Type` and `Storage`. It
-   does not ensure validity checks for the particular type, like checking string
-   length or number size.
-
-   It may be tempting to return an `InferredBuilder` for the `Mixed` type - as
-   this will use a more compact storage if a mixed type column contains only
-   numbers. However, since currently `Column.value_type` is derived directly
-   from its storage type, that would result in a changed `value_type` in the
-   result. Whereas we want to ensure that if the requested type is `Mixed`, the
-   resulting column should also report `Mixed` value type. Once the types work
-   decouples `value_type` from `storage_type`, this logic could be adjusted.
-
-   Due to the coupling of value types and storage, `value_type` of the created
-   column may not be exactly the same as the one requested here, it will be the
-   closest one currently supported by our storage (i.e. any constraints like
-   integer size or constant text width will be dropped). This will need to be
-   revisited as part of the types work:
-   https://www.pivotaltracker.com/story/show/183854180
-make_storage_builder_for_type value_type initial_size=128 =
-    closest_storage_type = case value_type of
-        Value_Type.Boolean -> Storage.Boolean
-        Value_Type.Byte -> Storage.Integer
-        Value_Type.Integer _ -> Storage.Integer
-        Value_Type.Float _ -> Storage.Decimal
-        ## Arbitrary precision numbers are not currently representable by our
-           specialized in-memory storage, so falling back to object storage.
-        Value_Type.Decimal _ _ -> Storage.Any
-        Value_Type.Char _ _ -> Storage.Text
-        Value_Type.Date -> Storage.Date
-        Value_Type.Date_Time with_timezone ->
-            ## Our specialized storage is only capable of storing date time with timezone. If we want to store a different kind of date-time, we will
-            if with_timezone then Storage.Date_Time else Storage.Any
-        Value_Type.Time -> Storage.Time_Of_Day
-        Value_Type.Mixed -> Storage.Any
-        _ -> Storage.Any
+   If a value type is not supported, its closest match is selected and
+   an `Inexact_Type_Coercion` problem is reported.
+make_storage_builder_for_type value_type on_problems initial_size=128 =
+    closest_storage_type = Storage.from_value_type value_type on_problems
     Storage.make_builder closest_storage_type initial_size
 
 ## PRIVATE
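The deleted `case value_type of` table above is replaced by a call to `Storage.from_value_type`, but the selection logic it encoded is worth keeping in view. A hypothetical Python sketch of that "closest storage" mapping (storage names are invented for illustration; the real implementation also reports an `Inexact_Type_Coercion` problem via `on_problems`):

```python
# Illustrative closest-storage selection, mirroring the deleted Enso case table.
def closest_storage_type(value_type):
    mapping = {
        "Boolean": "BoolStorage",
        "Byte": "LongStorage",        # bytes are widened to 64-bit integers
        "Integer": "LongStorage",
        "Float": "DoubleStorage",
        "Char": "StringStorage",
        "Date": "DateStorage",
        "Date_Time": "DateTimeStorage",
        "Time": "TimeOfDayStorage",
    }
    # Decimal (arbitrary precision), Mixed and anything unknown fall back to
    # boxed object storage - the coercion is inexact for Decimal and Char
    # length constraints, which is what the problem reporting is for.
    return mapping.get(value_type, "ObjectStorage")
```

The fallback branch is what makes `Inexact_Type_Coercion` reporting necessary: the created column's `value_type` may be only the closest supported approximation of what was requested.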
@@ -1,9 +1,8 @@
 from Standard.Base import all
 import Standard.Base.Errors.Illegal_Argument.Illegal_Argument
 
-import project.Data.Value_Type.Auto
-import project.Data.Storage.Storage
 import project.Internal.Parse_Values_Helper
+from project.Data.Type.Value_Type import Value_Type, Auto
 
 polyglot java import org.enso.table.parsing.IntegerParser
 polyglot java import org.enso.table.parsing.DecimalParser
@@ -271,15 +270,15 @@ type Data_Formatter
         AnyObjectFormatter.new formatters.to_array
 
     ## PRIVATE
-    make_formatter_for_column_type self (column_type : Storage) = case column_type of
-        Storage.Text -> self.make_text_formatter
-        Storage.Integer -> self.make_integer_formatter
-        Storage.Decimal -> self.make_decimal_formatter
-        Storage.Boolean -> self.make_boolean_formatter
-        Storage.Date -> self.make_date_formatter
-        Storage.Time_Of_Day -> self.make_time_of_day_formatter
-        Storage.Date_Time -> self.make_date_time_formatter
-        Storage.Any -> self.make_auto_formatter
+    make_formatter_for_column_type self (column_type : Value_Type) = case column_type of
+        Value_Type.Char _ _ -> self.make_text_formatter
+        Value_Type.Integer _ -> self.make_integer_formatter
+        Value_Type.Float _ -> self.make_decimal_formatter
+        Value_Type.Boolean -> self.make_boolean_formatter
+        Value_Type.Date -> self.make_date_formatter
+        Value_Type.Time -> self.make_time_of_day_formatter
+        Value_Type.Date_Time _ -> self.make_date_time_formatter
+        _ -> self.make_auto_formatter
 
     ## PRIVATE
        Utility function to convert single text value to a vector
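The formatter hunk above swaps exact `Storage` variants for `Value_Type` constructors whose parameters are ignored during dispatch. A compact Python sketch of that change (value types modeled as tuples of constructor name plus parameters; formatter names are placeholders, not the real API):

```python
# Illustrative dispatch on Value_Type constructors, ignoring their parameters,
# as in the rewritten make_formatter_for_column_type.
def make_formatter_for_column_type(value_type):
    constructor = value_type[0]   # e.g. ("Char", None, True) -> "Char"
    dispatch = {
        "Char": "text_formatter",
        "Integer": "integer_formatter",
        "Float": "decimal_formatter",
        "Boolean": "boolean_formatter",
        "Date": "date_formatter",
        "Time": "time_of_day_formatter",
        "Date_Time": "date_time_formatter",
    }
    # The old `Storage.Any` arm becomes a catch-all `_` branch: Mixed and any
    # future value types get the auto formatter.
    return dispatch.get(constructor, "auto_formatter")
```

Note the design consequence visible in the diff: because `Value_Type` constructors carry parameters (`Char size variable_length`, `Integer bits`), the catch-all branch replaces the previously exhaustive `Storage.Any` case.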
@@ -1,81 +0,0 @@
-from Standard.Base import all
-import Standard.Base.Errors.Common.Index_Out_Of_Bounds
-import Standard.Base.Errors.Illegal_State.Illegal_State
-
-import Standard.Table.Data.Value_Type.Value_Type
-
-polyglot java import org.enso.table.data.column.builder.object.Builder
-polyglot java import org.enso.table.data.column.storage.Storage as Java_Storage
-
-## Represents different types of underlying storage for Columns.
-type Storage
-    ## A column storing text data.
-    Text
-
-    ## A column storing integer data.
-    Integer
-
-    ## A column storing decimal data.
-    Decimal
-
-    ## A column storing boolean data.
-    Boolean
-
-    ## A column storing dates.
-    Date
-
-    ## A column storing date-times.
-    Date_Time
-
-    ## A column storing time-of-day.
-    Time_Of_Day
-
-    ## A column storing arbitrary data.
-    Any
-
-    ## PRIVATE
-       Enumerates storage types in a way that is consistent with
-       `org.enso.table.data.Storage.Storage`, i.e.
-       `storage_type.at org.enso.table.data.Storage.Type.LONG` will yield the
-       corresponding `Storage.Integer`.
-    types : Vector Storage
-    types = [Storage.Any, Storage.Integer, Storage.Decimal, Storage.Text, Storage.Boolean, Storage.Date, Storage.Time_Of_Day, Storage.Date_Time]
-
-    ## PRIVATE
-       Converts a `Storage` to a Java storage id.
-    to_java : Integer
-    to_java self = case self of
-        Storage.Any -> Java_Storage.Type.OBJECT
-        Storage.Integer -> Java_Storage.Type.LONG
-        Storage.Decimal -> Java_Storage.Type.DOUBLE
-        Storage.Text -> Java_Storage.Type.STRING
-        Storage.Boolean -> Java_Storage.Type.BOOL
-        Storage.Date -> Java_Storage.Type.DATE
-        Storage.Time_Of_Day -> Java_Storage.Type.TIME_OF_DAY
-        Storage.Date_Time -> Java_Storage.Type.DATE_TIME
-
-    ## PRIVATE
-       Converts a Java storage id to a `Storage`.
-    from_java : Integer -> Storage
-    from_java id =
-        Storage.types.at id . catch Index_Out_Of_Bounds _->
-            Panic.throw (Illegal_State.Error "Unknown storage type: "+id.to_text)
-
-    ## PRIVATE
-       Converts this storage type to a value type closest representing it.
-    to_approximate_value_type : Value_Type
-    to_approximate_value_type self = case self of
-        Storage.Text -> Value_Type.Char
-        Storage.Integer -> Value_Type.Integer
-        Storage.Decimal -> Value_Type.Float
-        Storage.Boolean -> Value_Type.Boolean
-        Storage.Date -> Value_Type.Date
-        Storage.Time_Of_Day -> Value_Type.Time
-        Storage.Date_Time -> Value_Type.Date_Time
-        Storage.Any -> Value_Type.Mixed
-
-    ## PRIVATE
-       Creates a column storage builder for the given storage type.
-    make_builder : Storage -> Integer -> Builder
-    make_builder storage initial_size=64 =
-        Builder.getForType storage.to_java initial_size
@@ -23,8 +23,6 @@ import project.Data.Report_Unmatched.Report_Unmatched
 import project.Data.Row.Row
 import project.Data.Set_Mode.Set_Mode
 import project.Data.Sort_Column.Sort_Column
-import project.Data.Storage.Storage
-import project.Data.Value_Type.Value_Type
 import project.Internal.Aggregate_Column_Helper
 import project.Internal.Java_Problems
 import project.Internal.Join_Helpers
@@ -39,7 +37,8 @@ import project.Data.Expression.Expression
 import project.Data.Expression.Expression_Error
 import project.Delimited.Delimited_Format.Delimited_Format
-from project.Data.Value_Type import Auto, ensure_valid_parse_target
+from project.Data.Type.Value_Type import Value_Type, Auto
+from project.Data.Type.Value_Type_Helpers import ensure_valid_parse_target
 from project.Internal.Rows_View import Rows_View
 from project.Errors import all
 
@@ -892,15 +891,17 @@ type Table
         selection = self.columns_helper.select_columns_helper columns reorder=False problem_builder
         selected_names = Map.from_vector (selection.map column-> [column.name, True])
 
+        ## TODO [RW] we should inherit the parent type here, but extend fixed length strings to varied length
+           To be done in #6106.
         map_preserve_name column f = column.map f . rename column.name
         do_replace = _.replace term new_text case_sensitivity=case_sensitivity only_first=only_first use_regex=use_regex
         do_replace_only_text = case _ of
             item : Text -> do_replace item
             item -> item
 
-        transform column = case column.storage_type of
-            Storage.Text -> map_preserve_name column do_replace
-            Storage.Any -> map_preserve_name column do_replace_only_text
+        transform column = case column.value_type of
+            Value_Type.Char _ _ -> map_preserve_name column do_replace
+            Value_Type.Mixed -> map_preserve_name column do_replace_only_text
             _ ->
                 problem = Invalid_Value_Type.Error Value_Type.Char column.value_type
                 problem_builder.report_other_warning problem
@@ -1009,9 +1010,6 @@ type Table
         column = self.compute expression on_problems
         self.filter column Filter_Condition.Is_True
 
-    ## PRIVATE
-    with_no_rows self = self.take (First 0)
-
     ## Creates a new Table with the specified range of rows from the input
        Table.
 
@@ -1505,7 +1503,7 @@ type Table
         case Table_Helpers.unify_result_type_for_union column_set all_tables allow_type_widening problem_builder of
             Nothing -> Nothing
             result_type : Value_Type ->
-                concat_columns column_set all_tables result_type result_row_count
+                concat_columns column_set all_tables result_type result_row_count on_problems
         good_columns = merged_columns.filter Filter_Condition.Not_Nothing
         if good_columns.is_empty then Error.throw No_Output_Columns else
             problem_builder.attach_problems_before on_problems <|
@@ -1552,7 +1550,7 @@ type Table
     info : Table
     info self =
         cols = self.columns
-        Table.new [["Column", cols.map .name], ["Items Count", cols.map .count], ["Storage Type", cols.map .storage_type]]
+        Table.new [["Column", cols.map .name], ["Items Count", cols.map .count], ["Value Type", cols.map .value_type]]
 
 ## Returns a new table with a chosen subset of columns left unchanged and
    the other columns pivoted to rows with a single name field and a single
@@ -1908,8 +1906,8 @@ check_table arg_name table =
 
 ## PRIVATE
    A helper that efficiently concatenates storages of in-memory columns.
-concat_columns column_set all_tables result_type result_row_count =
-    storage_builder = Column_Module.make_storage_builder_for_type result_type initial_size=result_row_count
+concat_columns column_set all_tables result_type result_row_count on_problems =
+    storage_builder = Column_Module.make_storage_builder_for_type result_type on_problems initial_size=result_row_count
     column_set.column_indices.zip all_tables i-> parent_table->
         case i of
             Nothing ->
@@ -0,0 +1,28 @@
+from Standard.Base import all
+
+import project.Data.Type.Value_Type.Value_Type
+import project.Data.Type.Value_Type.Bits
+
+## PRIVATE
+   Finds the most specific `Value_Type` that can be used to hold the given
+   value.
+
+   This method will still prefer default types used in the in-memory backend,
+   so for integers it will return 64-bit integers even if the value could fit
+   in a smaller one; and for Text values variable-length text will be
+   preferred over fixed-length.
+most_specific_value_type : Any -> Value_Type
+most_specific_value_type value use_smallest=False =
+    ## TODO implement the `use_smallest` logic
+    _ = use_smallest
+    case value of
+        _ : Integer -> Value_Type.Integer Bits.Bits_64
+        _ : Decimal -> Value_Type.Float Bits.Bits_64
+        _ : Text -> Value_Type.Char size=Nothing variable_length=True
+        _ : Boolean -> Value_Type.Boolean
+        _ : Date -> Value_Type.Date
+        _ : Time_Of_Day -> Value_Type.Time
+        _ : Date_Time -> Value_Type.Date_Time
+        ## TODO [RW] once we add Enso Native Object Type Value Type, we probably
+           want to prefer it over Mixed
+        _ -> Value_Type.Mixed
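The `most_specific_value_type` helper added above can be approximated in Python to show the intended behaviour: the backend's default 64-bit and variable-length types are preferred regardless of the concrete value, and the `use_smallest` flag is accepted but, as in the source, not yet implemented. Return values are strings standing in for `Value_Type` constructors; this is an illustration, not the Enso API.

```python
import datetime

# Illustrative approximation of Enso_Types.most_specific_value_type.
def most_specific_value_type(value, use_smallest=False):
    _ = use_smallest  # TODO in the source as well: smallest-type logic not implemented
    if isinstance(value, bool):            # bool before int: bool subclasses int
        return "Boolean"
    if isinstance(value, int):
        return "Integer Bits_64"           # 64-bit default even for small values
    if isinstance(value, float):
        return "Float Bits_64"
    if isinstance(value, str):
        return "Char size=Nothing variable_length=True"
    if isinstance(value, datetime.datetime):  # datetime before date: subclass order
        return "Date_Time"
    if isinstance(value, datetime.date):
        return "Date"
    if isinstance(value, datetime.time):
        return "Time"
    return "Mixed"                         # catch-all, like the Enso `_` branch
```

For example, `most_specific_value_type(5)` yields the 64-bit integer type even though the value fits in a byte, which is exactly the documented "prefer backend defaults" behaviour.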
@ -0,0 +1,67 @@
from Standard.Base import all
import Standard.Base.Errors.Common.Index_Out_Of_Bounds
import Standard.Base.Errors.Illegal_State.Illegal_State

import Standard.Table.Data.Type.Value_Type.Value_Type
import Standard.Table.Data.Type.Value_Type.Bits
from Standard.Table.Errors import Inexact_Type_Coercion

polyglot java import org.enso.table.data.column.builder.object.Builder
polyglot java import org.enso.table.data.column.storage.type.StorageType
polyglot java import org.enso.table.data.column.storage.type.IntegerType
polyglot java import org.enso.table.data.column.storage.type.FloatType
polyglot java import org.enso.table.data.column.storage.type.BooleanType
polyglot java import org.enso.table.data.column.storage.type.TextType
polyglot java import org.enso.table.data.column.storage.type.DateType
polyglot java import org.enso.table.data.column.storage.type.DateTimeType
polyglot java import org.enso.table.data.column.storage.type.TimeOfDayType
polyglot java import org.enso.table.data.column.storage.type.AnyObjectType

## PRIVATE
   Gets the value type represented by this Java Storage.
to_value_type : StorageType -> Value_Type
to_value_type storage_type = case storage_type of
    i : IntegerType -> case i.bits.toInteger of
        8 -> Value_Type.Byte
        b -> Value_Type.Integer (Bits.from_bits b)
    f : FloatType ->
        bits = Bits.from_bits f.bits.toInteger
        Value_Type.Float bits
    _ : BooleanType -> Value_Type.Boolean
    s : TextType ->
        variable = s.fixedLength.not
        size = if s.maxLength < 0 then Nothing else s.maxLength
        Value_Type.Char size variable
    _ : DateType -> Value_Type.Date
    _ : DateTimeType -> Value_Type.Date_Time with_timezone=True
    _ : TimeOfDayType -> Value_Type.Time
    _ : AnyObjectType -> Value_Type.Mixed

## PRIVATE
closest_storage_type value_type = case value_type of
    # TODO we will want builders and storages with bounds checking, but for now we approximate
    Value_Type.Byte -> IntegerType.INT_64
    Value_Type.Integer _ -> IntegerType.INT_64
    Value_Type.Float _ -> FloatType.FLOAT_64
    Value_Type.Boolean -> BooleanType.INSTANCE
    Value_Type.Char _ _ -> TextType.VARIABLE_LENGTH
    Value_Type.Date -> DateType.INSTANCE
    # We currently will not support storing dates without timezones in in-memory mode.
    Value_Type.Date_Time _ -> DateTimeType.INSTANCE
    Value_Type.Time -> TimeOfDayType.INSTANCE
    Value_Type.Mixed -> AnyObjectType.INSTANCE

## PRIVATE
from_value_type : Value_Type -> Problem_Behavior -> StorageType
from_value_type value_type on_problems =
    approximate_storage = closest_storage_type value_type
    approximated_value_type = to_value_type approximate_storage
    problems = if approximated_value_type == value_type then [] else
        [Inexact_Type_Coercion.Warning value_type approximated_value_type]
    on_problems.attach_problems_before problems approximate_storage

## PRIVATE
   Creates a column storage builder for the given storage type.
make_builder : StorageType -> Integer -> Builder
make_builder storage initial_size=64 =
    Builder.getForType storage initial_size
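The `from_value_type` flow above approximates a requested type by the closest available storage, then round-trips it back to a value type and attaches an `Inexact_Type_Coercion` warning when the two differ. A minimal Python sketch of that logic, with tuples standing in for the Enso types and simplified, hypothetical storage names:

```python
def closest_storage_type(value_type):
    kind = value_type[0]
    # every integer size is approximated by 64-bit storage for now
    if kind in ("Byte", "Integer"):
        return "INT_64"
    if kind == "Float":
        return "FLOAT_64"
    if kind == "Boolean":
        return "BOOLEAN"
    if kind == "Char":
        return "TEXT_VARIABLE"
    return "ANY_OBJECT"

def to_value_type(storage):
    # the inverse mapping: which value type a given storage represents
    return {"INT_64": ("Integer", 64), "FLOAT_64": ("Float", 64),
            "BOOLEAN": ("Boolean",), "TEXT_VARIABLE": ("Char", None, True)}.get(storage, ("Mixed",))

def from_value_type(value_type):
    storage = closest_storage_type(value_type)
    approximated = to_value_type(storage)
    # warn only when the round-trip changed the type
    warnings = [] if approximated == value_type else [("Inexact_Type_Coercion", value_type, approximated)]
    return storage, warnings

# A 16-bit integer request is widened to 64-bit storage, with a warning attached:
storage, warnings = from_value_type(("Integer", 16))
assert storage == "INT_64" and warnings != []
```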
@ -0,0 +1,244 @@
from Standard.Base import all
import Standard.Base.Errors.Illegal_Argument.Illegal_Argument

from project.Errors import Invalid_Value_Type

## Type to represent the different sizes of integer or float storage.
type Bits
    ## 16-bit (2 byte) value
    Bits_16

    ## 32-bit (4 byte) value
    Bits_32

    ## 64-bit (8 byte) value
    Bits_64

    ## PRIVATE
    to_bits : Integer
    to_bits self = case self of
        Bits.Bits_16 -> 16
        Bits.Bits_32 -> 32
        Bits.Bits_64 -> 64

    ## PRIVATE
    from_bits : Integer -> Bits
    from_bits bits = case bits of
        16 -> Bits.Bits_16
        32 -> Bits.Bits_32
        64 -> Bits.Bits_64
        _ : Integer -> Error.throw (Illegal_Argument.Error "Invalid number of bits for a float or integer type.")

    ## Provides the text representation of the bit-size.
    to_text : Text
    to_text self = self.to_bits.to_text + " bits"

type Bits_Comparator
    compare x y = Comparable.from x.to_bits . compare x.to_bits y.to_bits
    hash x = Comparable.from x.to_bits . hash x.to_bits

Comparable.from (_:Bits) = Bits_Comparator

## Represents the different possible types of values within Table columns.

   The types are tailored to correspond to RDBMS types, but they are also used
   with our in-memory backend.
type Value_Type
    ## Boolean or Bit value: 0 or 1.

       ANSI SQL: BIT / BOOLEAN
    Boolean

    ## Integer value: 0 to 255

       ANSI SQL: TINYINT
    Byte

    ## Integer value:

       16-bit: -32,768 to 32,767
       32-bit: -2,147,483,648 to 2,147,483,647
       64-bit: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
       ANSI SQL: SMALLINT (16-bit), INT (32-bit), BIGINT (64-bit)

       Arguments:
       - size: the amount of bits used to store the values.
    Integer size:Bits=Bits.Bits_64

    ## Floating point value.

       ANSI SQL: REAL, FLOAT, DOUBLE

       Arguments:
       - size: the amount of bits used to store the values.
    Float size:Bits=Bits.Bits_64

    ## Arbitrary precision numerical value with a scale and precision.

       ANSI SQL: NUMERIC, DECIMAL

       Arguments:
       - precision: the total number of digits in the number.
       - scale: the number of digits after the decimal point.
    Decimal precision:(Integer|Nothing)=Nothing scale:(Integer|Nothing)=0

    ## Character string.

       ANSI SQL: CHAR, VARCHAR, TEXT, LONGVARCHAR, NCHAR, NVARCHAR, CLOB, NCLOB

       Arguments:
       - size: the maximum number of characters that can be stored in the
         column.
       - variable_length: whether the size is a maximum or a fixed length.
    Char size:(Integer|Nothing)=Nothing variable_length:Boolean=True

    ## Date

       ANSI SQL: DATE
    Date

    ## Date and Time

       ANSI SQL: TIMESTAMP / DateTime

       Arguments:
       - with_timezone: whether the values contain the timezone.
    Date_Time with_timezone:Boolean=True

    ## Time of day

       ANSI SQL: TIME, TIME WITHOUT TIME ZONE
    Time

    ## Binary data.

       ANSI SQL: BINARY, VARBINARY, LONGVARBINARY, BLOB

       Arguments:
       - size: the maximum number of bytes that can be stored in the
         column.
       - variable_length: whether the size is a maximum or a fixed length.
    Binary size:(Integer|Nothing)=Nothing variable_length:Boolean=False

    ## Unsupported SQL type.

       Fallback provided to allow describing types that are not supported by
       Enso at this time.
    Unsupported_Data_Type type_name:(Text|Nothing)=Nothing (underlying_type:SQL_Type|Nothing=Nothing)

    ## A mix of values can be stored in the Column.

       In-Memory and SQLite tables support this.
    Mixed

    ## ADVANCED
       UNSTABLE
       Checks if the provided value type is a textual type (with any settings)
       and runs the following action or reports a type error.
    expect_text : Value_Type -> Any -> Text -> Any ! Invalid_Value_Type
    expect_text value_type ~action related_column=Nothing =
        if Value_Type.is_text value_type then action else
            Error.throw (Invalid_Value_Type.Error Value_Type.Char value_type related_column)

    ## ADVANCED
       UNSTABLE
       Checks if the provided value type is a boolean type and runs the
       following action or reports a type error.
    expect_boolean : Value_Type -> Any -> Any ! Invalid_Value_Type
    expect_boolean value_type ~action = case value_type of
        Value_Type.Boolean -> action
        _ -> Error.throw (Invalid_Value_Type.Error Value_Type.Boolean value_type)

    ## UNSTABLE
       Checks if the `Value_Type` represents a boolean type.
    is_boolean : Boolean
    is_boolean self = case self of
        Value_Type.Boolean -> True
        _ -> False

    ## UNSTABLE
       Checks if the `Value_Type` represents a floating-point number type.
    is_floating_point : Boolean
    is_floating_point self = case self of
        Value_Type.Float _ -> True
        _ -> False

    ## UNSTABLE
       Checks if the `Value_Type` represents a text type.
    is_text : Boolean
    is_text self = case self of
        Value_Type.Char _ _ -> True
        _ -> False

    ## UNSTABLE
       Checks if the `Value_Type` represents any numeric type - integer,
       floating point or decimal.
    is_numeric : Boolean
    is_numeric self = case self of
        Value_Type.Integer _ -> True
        Value_Type.Float _ -> True
        Value_Type.Decimal _ _ -> True
        _ -> False

    ## UNSTABLE
       Checks if the `Value_Type` represents an integer type.
    is_integer : Boolean
    is_integer self = case self of
        Value_Type.Integer _ -> True
        _ -> False

    ## Provides a text representation of the `Value_Type` meant for
       displaying to the user.
    to_display_text : Text
    to_display_text self = case self of
        Value_Type.Boolean -> "Boolean"
        Value_Type.Byte -> "Byte"
        Value_Type.Integer size -> "Integer (" + size.to_text + ")"
        Value_Type.Float size -> "Float (" + size.to_text + ")"
        Value_Type.Decimal precision scale -> "Decimal (precision=" + precision.to_text + ", scale=" + scale.to_text + ")"
        Value_Type.Char size variable_length -> "Char (max_size=" + size.to_text + ", variable_length=" + variable_length.to_text + ")"
        Value_Type.Date -> "Date"
        Value_Type.Date_Time with_timezone -> "Date_Time (with_timezone=" + with_timezone.to_text + ")"
        Value_Type.Time -> "Time"
        Value_Type.Binary size variable_length -> "Binary (max_size=" + size.to_text + " bytes, variable_length=" + variable_length.to_text + ")"
        Value_Type.Unsupported_Data_Type type_name _ -> case type_name of
            Nothing -> "Unsupported_Data_Type"
            _ : Text -> "Unsupported_Data_Type (" + type_name + ")"
        Value_Type.Mixed -> "Mixed"

    ## PRIVATE
       Provides a JS object representation for use in visualizations.
    to_js_object : JS_Object
    to_js_object self =
        constructor_name = Meta.meta self . constructor . name
        display_text = self.to_display_text
        additional_fields = case self of
            Value_Type.Integer size ->
                [["bits", size.to_bits]]
            Value_Type.Float size ->
                [["bits", size.to_bits]]
            Value_Type.Decimal precision scale ->
                [["precision", precision], ["scale", scale]]
            Value_Type.Char size variable_length ->
                [["size", size], ["variable_length", variable_length]]
            Value_Type.Binary size variable_length ->
                [["size", size], ["variable_length", variable_length]]
            Value_Type.Unsupported_Data_Type type_name _ ->
                [["type_name", type_name]]
            _ -> []
        JS_Object.from_pairs <|
            [["type", "Value_Type"], ["constructor", constructor_name], ["_display_text_", display_text]] + additional_fields

## The type representing inferring the column type automatically based on values
   present in the column.

   The most specific type which is valid for all values in a column is chosen:
   - if all values are integers, `Integer` is chosen,
   - if all values are decimals or integers, `Decimal` is chosen,
   - if the values are all the same time type (a date, a time or a date-time),
     the corresponding type is chosen, `Date`, `Time_Of_Day` or `Date_Time`,
     respectively,
   - if all values are booleans, `Boolean` is chosen,
   - otherwise, `Text` is chosen as a fallback and the column is kept as-is
     without parsing.
type Auto
@ -0,0 +1,79 @@
from Standard.Base import all
import Standard.Base.Errors.Illegal_Argument.Illegal_Argument

from project.Data.Type.Value_Type import Value_Type, Auto

## PRIVATE
   Finds a type that can fit both a current type and a new type.
reconcile_types current new = case current of
    Value_Type.Mixed -> Value_Type.Mixed
    Value_Type.Integer size -> case new of
        Value_Type.Integer new_size ->
            Value_Type.Integer (Math.max size new_size)
        Value_Type.Byte -> Value_Type.Integer size
        Value_Type.Boolean -> Value_Type.Integer size
        # If we unify integers with floats, we select the default Float 64 regardless of the input sizes.
        Value_Type.Float _ -> Value_Type.Float
        _ -> Value_Type.Mixed
    Value_Type.Float size -> case new of
        Value_Type.Float new_size ->
            Value_Type.Float (Math.max size new_size)
        # If we unify integers with floats, we select the default Float 64 regardless of the input sizes.
        Value_Type.Integer _ -> Value_Type.Float
        Value_Type.Byte -> Value_Type.Float
        Value_Type.Boolean -> Value_Type.Float
        _ -> Value_Type.Mixed
    Value_Type.Byte -> case new of
        Value_Type.Byte -> Value_Type.Byte
        Value_Type.Integer size ->
            Value_Type.Integer size
        Value_Type.Boolean -> Value_Type.Byte
        Value_Type.Float _ -> Value_Type.Float
        _ -> Value_Type.Mixed
    Value_Type.Boolean -> case new of
        Value_Type.Boolean -> Value_Type.Boolean
        Value_Type.Integer size ->
            Value_Type.Integer size
        Value_Type.Byte -> Value_Type.Byte
        Value_Type.Float _ -> Value_Type.Float
        _ -> Value_Type.Mixed
    Value_Type.Char current_size current_variable -> case new of
        Value_Type.Char new_size new_variable ->
            result_variable = current_variable || new_variable || current_size != new_size
            case result_variable of
                True -> Value_Type.Char Nothing True
                False -> Value_Type.Char current_size False
        _ -> Value_Type.Mixed
    Value_Type.Binary current_size current_variable -> case new of
        Value_Type.Binary new_size new_variable ->
            result_variable = current_variable || new_variable || current_size != new_size
            case result_variable of
                True -> Value_Type.Binary Nothing True
                False -> Value_Type.Binary current_size False
        _ -> Value_Type.Mixed
    _ ->
        if current == new then current else Value_Type.Mixed

## PRIVATE
   Finds the most specific value type that will fit all the provided types.

   If `strict` is `True`, it is implemented as specified in the note
   "Unifying Column Types" in `Table.union`. In that case, if no common type
   is found, `Nothing` is returned.

   It assumes that the `types` vector is not empty.
find_common_type : Vector Value_Type -> Boolean -> Value_Type | Nothing
find_common_type types strict =
    most_generic_type = (types.drop 1).fold types.first reconcile_types
    if strict.not || most_generic_type != Value_Type.Mixed then most_generic_type else
        # Double check if Mixed was really allowed to come out.
        if types.contains Value_Type.Mixed then Value_Type.Mixed else
            Nothing

## PRIVATE
   Checks if the given type is a valid target type for parsing.

   This will be replaced once we change parse to rely on `Value_Type` instead.
ensure_valid_parse_target type ~action =
    expected_types = [Auto, Integer, Decimal, Date, Date_Time, Time_Of_Day, Boolean]
    if expected_types.contains type . not then Error.throw (Illegal_Argument.Error "Unsupported target type "+type.to_text+".") else action
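`find_common_type` above is a fold of `reconcile_types` over the input types, with a `Mixed` sanity check in strict mode. A simplified Python sketch of the same unification (covering only the numeric cases, with tuples standing in for `Value_Type`):

```python
from functools import reduce

def reconcile(current, new):
    if current == new:
        return current
    nums = {"Integer", "Float", "Byte", "Boolean"}
    if current[0] in nums and new[0] in nums:
        # any mix involving a float unifies to the default 64-bit float
        if "Float" in (current[0], new[0]):
            return ("Float", 64)
        # otherwise the widest integer wins
        if "Integer" in (current[0], new[0]):
            sizes = [t[1] for t in (current, new) if t[0] == "Integer"]
            return ("Integer", max(sizes))
        return ("Byte",)  # Byte + Boolean
    return ("Mixed",)

def find_common_type(types, strict=True):
    most_generic = reduce(reconcile, types)
    if not strict or most_generic != ("Mixed",):
        return most_generic
    # in strict mode, only report Mixed when it was actually among the inputs
    return ("Mixed",) if ("Mixed",) in types else None

assert find_common_type([("Integer", 16), ("Integer", 64)]) == ("Integer", 64)
assert find_common_type([("Integer", 32), ("Float", 32)]) == ("Float", 64)
assert find_common_type([("Boolean",), ("Char", None, True)]) is None
```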
@ -1,216 +0,0 @@
## TODO This is a prototype based on the current pending design, used to proceed
   with handling of types in the `filter` component and others. It will be
   revisited when proper type support is implemented.

from Standard.Base import all
import Standard.Base.Errors.Illegal_Argument.Illegal_Argument

from project.Errors import Invalid_Value_Type

## Type to represent the different sizes of integer or float possible within a database.
type Bits
    ## 16-bit (2 byte) value
    Bits_16
    ## 32-bit (4 byte) value
    Bits_32
    ## 64-bit (8 byte) value
    Bits_64

    ## PRIVATE
    to_bits : Integer
    to_bits self = case self of
        Bits.Bits_16 -> 16
        Bits.Bits_32 -> 32
        Bits.Bits_64 -> 64

type Bits_Comparator
    compare x y = Comparable.from x.to_bits . compare x.to_bits y.to_bits
    hash x = Comparable.from x.to_bits . hash x.to_bits

Comparable.from (_:Bits) = Bits_Comparator

## Represents the different possible types of values within RDBMS columns.
type Value_Type
    ## Boolean or Bit value: 0 or 1.

       ANSI SQL: BIT / BOOLEAN
    Boolean

    ## Integer value: 0 to 255

       ANSI SQL: TINYINT
    Byte

    ## Integer value:

       16-bit: -32,768 to 32,767
       32-bit: -2,147,483,648 to 2,147,483,647
       64-bit: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
       ANSI SQL: SMALLINT (16-bit), INT (32-bit), BIGINT (64-bit)
    Integer size:Bits=Bits.Bits_64

    ## Floating point value.

       ANSI SQL: REAL, FLOAT, DOUBLE
    Float size:Bits=Bits.Bits_64

    ## Arbitrary precision numerical value with a scale and precision.

       ANSI SQL: NUMERIC, DECIMAL
    Decimal precision:(Integer|Nothing)=Nothing scale:(Integer|Nothing)=Nothing

    ## Character string.

       ANSI SQL: CHAR, VARCHAR, TEXT, LONGVARCHAR, NCHAR, NVARCHAR, CLOB, NCLOB
    Char size:(Integer|Nothing)=Nothing variable:Boolean=True

    ## Date

       ANSI SQL: DATE
    Date

    ## Date and Time

       ANSI SQL: TIMESTAMP / DateTime
    Date_Time with_timezone:Boolean=True

    ## Time of day

       ANSI SQL: TIME, TIME WITHOUT TIME ZONE
    Time

    ## Binary stream.

       ANSI SQL: BINARY, VARBINARY, LONGVARBINARY, BLOB, BIT(n)
    Binary size:(Integer|Nothing)=Nothing variable:Boolean=False

    ## Unsupported SQL type.

       Fallback provided to allow describing types that are not supported by Enso at this time.
    Unsupported_Data_Type type_name:Text=""

    ## A mix of values can be stored in the Column.

       In-Memory and SQLite tables support this.
    Mixed

    ## ADVANCED
       UNSTABLE
       Checks if the provided value type is a textual type (with any settings)
       and runs the following action or reports a type error.
    expect_text : Value_Type -> Any -> Text -> Any ! Invalid_Value_Type
    expect_text value_type ~action related_column=Nothing = case value_type of
        Value_Type.Char _ _ -> action
        _ -> Error.throw (Invalid_Value_Type.Error Value_Type.Char value_type related_column)

    ## ADVANCED
       UNSTABLE
       Checks if the provided value type is a boolean type and runs the
       following action or reports a type error.
    expect_boolean : Value_Type -> Any -> Any ! Invalid_Value_Type
    expect_boolean value_type ~action = case value_type of
        Value_Type.Boolean -> action
        _ -> Error.throw (Invalid_Value_Type.Error Value_Type.Boolean value_type)

    ## UNSTABLE
       Checks if the `Value_Type` represents a floating-point number type.
    is_floating_point : Boolean
    is_floating_point self = case self of
        Value_Type.Float _ -> True
        _ -> False

    ## UNSTABLE
       Checks if the `Value_Type` represents a text type.
    is_text : Boolean
    is_text self = case self of
        Value_Type.Char _ _ -> True
        _ -> False

    ## PRIVATE
       Finds a type that can fit both a current type and a new type.
    reconcile_types current new = case current of
        Value_Type.Mixed -> Value_Type.Mixed
        Value_Type.Integer size -> case new of
            Value_Type.Integer new_size ->
                Value_Type.Integer (Math.max size new_size)
            Value_Type.Byte -> Value_Type.Integer size
            Value_Type.Boolean -> Value_Type.Integer size
            # If we unify integers with floats, we select the default Float 64 regardless of the input sizes.
            Value_Type.Float _ -> Value_Type.Float
            _ -> Value_Type.Mixed
        Value_Type.Float size -> case new of
            Value_Type.Float new_size ->
                Value_Type.Float (Math.max size new_size)
            # If we unify integers with floats, we select the default Float 64 regardless of the input sizes.
            Value_Type.Integer _ -> Value_Type.Float
            Value_Type.Byte -> Value_Type.Float
            Value_Type.Boolean -> Value_Type.Float
            _ -> Value_Type.Mixed
        Value_Type.Byte -> case new of
            Value_Type.Byte -> Value_Type.Byte
            Value_Type.Integer size ->
                Value_Type.Integer size
            Value_Type.Boolean -> Value_Type.Byte
            Value_Type.Float _ -> Value_Type.Float
            _ -> Value_Type.Mixed
        Value_Type.Boolean -> case new of
            Value_Type.Boolean -> Value_Type.Boolean
            Value_Type.Integer size ->
                Value_Type.Integer size
            Value_Type.Byte -> Value_Type.Byte
            Value_Type.Float _ -> Value_Type.Float
            _ -> Value_Type.Mixed
        Value_Type.Char current_size current_variable -> case new of
            Value_Type.Char new_size new_variable ->
                result_variable = current_variable || new_variable || current_size != new_size
                case result_variable of
                    True -> Value_Type.Char Nothing True
                    False -> Value_Type.Char current_size False
            _ -> Value_Type.Mixed
        Value_Type.Binary current_size current_variable -> case new of
            Value_Type.Binary new_size new_variable ->
                result_variable = current_variable || new_variable || current_size != new_size
                case result_variable of
                    True -> Value_Type.Binary Nothing True
                    False -> Value_Type.Binary current_size False
            _ -> Value_Type.Mixed
        _ ->
            if current == new then current else Value_Type.Mixed

## PRIVATE
   Finds the most specific value type that will fit all the provided types.

   If `strict` is `True`, it is implemented as specified in the note
   "Unifying Column Types" in `Table.union`. In that case, if no common type
   is found, `Nothing` is returned.

   It assumes that the `types` vector is not empty.
find_common_type : Vector Value_Type -> Boolean -> Value_Type | Nothing
find_common_type types strict =
    most_generic_type = (types.drop 1).fold types.first Value_Type.reconcile_types
    if strict.not || most_generic_type != Value_Type.Mixed then most_generic_type else
        # Double check if Mixed was really allowed to come out.
        if types.contains Value_Type.Mixed then Value_Type.Mixed else
            Nothing

## The type representing inferring the column type automatically based on values
   present in the column.

   The most specific type which is valid for all values in a column is chosen:
   - if all values are integers, `Integer` is chosen,
   - if all values are decimals or integers, `Decimal` is chosen,
   - if the values are all the same time type (a date, a time or a date-time),
     the corresponding type is chosen, `Date`, `Time_Of_Day` or `Date_Time`,
     respectively,
   - if all values are booleans, `Boolean` is chosen,
   - otherwise, `Text` is chosen as a fallback and the column is kept as-is
     without parsing.
type Auto

## PRIVATE
   Checks if the given type is a valid target type for parsing.

   This will be replaced once we change parse to rely on `Value_Type` instead.
ensure_valid_parse_target type ~action =
    expected_types = [Auto, Integer, Decimal, Date, Date_Time, Time_Of_Day, Boolean]
    if expected_types.contains type . not then Error.throw (Illegal_Argument.Error "Unsupported target type "+type.to_text+".") else action
@ -116,7 +116,8 @@ type Floating_Point_Equality
     to_display_text self =
         "Relying on equality of floating-point numbers is not recommended (within "+self.location+")."

-## Indicates that a text value with a delimiter was included in a concatenation without any quote character
+## Indicates that a text value with a delimiter was included in a concatenation
+   without any quote character
 type Unquoted_Delimiter
     Error (column:Text) (rows:[Integer])

@ -431,3 +432,25 @@ type Invalid_Aggregate_Column
     to_display_text : Text
     to_display_text self =
         "The name ["+self.name+"] is not a valid column name nor expression."
+
+type Inexact_Type_Coercion
+    ## Indicates that the requested `Value_Type` is not available in the given
+       backend, so it was replaced by its closest available type.
+    Warning (requested_type : Value_Type) (actual_type : Value_Type)
+
+    to_display_text : Text
+    to_display_text self =
+        "The requested type ["+self.requested_type.to_text+"] is not available in the given backend, so it was replaced by its closest available type ["+self.actual_type.to_text+"]."
+
+    to_text : Text
+    to_text self =
+        "Inexact_Type_Coercion.Warning (requested_type = " + self.requested_type.to_text + ") (actual_type = " + self.actual_type.to_text + ")"
+
+type Invalid_Value_For_Type
+    ## Indicates that a column construction/transformation failed because the
+       provided value is not valid for the requested column type.
+    Error (value : Any) (value_type : Value_Type)
+
+    to_display_text : Text
+    to_display_text self =
+        "The value ["+self.value.to_text+"] is not valid for the column type ["+self.value_type.to_text+"]."
@ -227,10 +227,9 @@ java_aggregator name column =
             order_columns = ordering.map c->c.column.java_column
             order_direction = ordering.map c->c.direction.to_sign
             LastAggregator.new name c.java_column ignore_nothing order_columns.to_array order_direction.to_array
-        Maximum c _ -> MinOrMaxAggregator.new name c.java_column 1
-        Minimum c _ -> MinOrMaxAggregator.new name c.java_column -1
-        Shortest c _ -> ShortestOrLongestAggregator.new name c.java_column -1
-        Longest c _ -> ShortestOrLongestAggregator.new name c.java_column 1
+        Maximum c _ -> MinOrMaxAggregator.new name c.java_column MinOrMaxAggregator.MAX
+        Minimum c _ -> MinOrMaxAggregator.new name c.java_column MinOrMaxAggregator.MIN
+        Shortest c _ -> ShortestOrLongestAggregator.new name c.java_column ShortestOrLongestAggregator.SHORTEST
+        Longest c _ -> ShortestOrLongestAggregator.new name c.java_column ShortestOrLongestAggregator.LONGEST
         Concatenate c _ join prefix suffix quote -> ConcatenateAggregator.new name c.java_column join prefix suffix quote
         _ -> Error.throw (Invalid_Aggregation.Error name -1 "Unsupported aggregation")
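The hunk above replaces magic `1`/`-1` arguments with named constants on the Java aggregator classes. A minimal Python sketch of the same refactoring idea (the `MinOrMax` enum and `min_or_max_aggregate` function are hypothetical, not the real Java API):

```python
from enum import Enum

class MinOrMax(Enum):
    # named constants replace the old magic numbers 1 / -1
    MIN = -1
    MAX = 1

def min_or_max_aggregate(values, mode: MinOrMax):
    # the caller's intent is now explicit at the call site
    return max(values) if mode is MinOrMax.MAX else min(values)

assert min_or_max_aggregate([3, 1, 2], MinOrMax.MAX) == 3
assert min_or_max_aggregate([3, 1, 2], MinOrMax.MIN) == 1
```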
@ -6,7 +6,6 @@ import Standard.Base.System.File.Output_Stream
 import project.Data.Table.Table
 import project.Data.Data_Formatter.Data_Formatter
 import project.Data.Match_Columns.Match_Columns
-import project.Data.Storage.Storage
 import project.Delimited.Delimited_Format.Delimited_Format
 import project.Delimited.Quote_Style.Quote_Style
 import project.Internal.Delimited_Reader
@ -137,13 +136,12 @@ write_to_stream table format stream on_problems related_file=Nothing separator_o
 write_to_writer : Table -> Delimited_Format -> Writer -> Text | Nothing -> Boolean -> Nothing
 write_to_writer table format java_writer separator_override=Nothing needs_leading_newline=False =
     column_formatters = Panic.recover Illegal_Argument <| case format.value_formatter of
-        Nothing -> table.columns.map column-> case column.storage_type of
-            Storage.Text -> TextFormatter.new
-            _ ->
+        Nothing -> table.columns.map column->
+            if column.value_type.is_text then TextFormatter.new else
                 Panic.throw (Illegal_Argument.Error "If the expected file format does not specify a valid `Data_Formatter`, only Text columns are allowed.")
         value_formatter -> table.columns.map column->
-            storage_type = column.storage_type
-            value_formatter.make_formatter_for_column_type storage_type
+            value_type = column.value_type
+            value_formatter.make_formatter_for_column_type value_type
     quote_behavior = case format.quote_style of
         Quote_Style.No_Quotes -> WriteQuoteBehavior.NEVER
         Quote_Style.With_Quotes always _ _ ->
@@ -2,7 +2,7 @@ from Standard.Base import all
 import Standard.Base.Errors.Common.No_Such_Method
 import Standard.Base.Errors.Common.Type_Error
 
-import project.Data.Value_Type.Value_Type
+import project.Data.Type.Value_Type.Value_Type
 
 from Standard.Base.Data.Filter_Condition.Filter_Condition import all
 
@@ -8,9 +8,10 @@ polyglot java import org.enso.table.data.table.Table as Java_Table
 
 polyglot java import org.enso.table.data.index.DefaultIndex
 polyglot java import org.enso.table.data.column.storage.Storage
+polyglot java import org.enso.table.data.column.builder.object.BoolBuilder
 polyglot java import org.enso.table.data.column.builder.object.InferredBuilder
 polyglot java import org.enso.table.data.column.builder.object.NumericBuilder
-polyglot java import org.enso.table.data.column.builder.object.BoolBuilder
+polyglot java import org.enso.table.data.column.builder.object.StringBuilder
 
 ## PRIVATE
 make_bool_builder : BoolBuilder
@@ -25,7 +26,11 @@ make_long_builder : Integer -> NumericBuilder
 make_long_builder initial_size = NumericBuilder.createLongBuilder initial_size
 
 ## PRIVATE
-make_inferred_builder : Integer -> NumericBuilder
+make_string_builder : Integer -> StringBuilder
+make_string_builder initial_size = StringBuilder.new initial_size
+
+## PRIVATE
+make_inferred_builder : Integer -> InferredBuilder
 make_inferred_builder initial_size = InferredBuilder.new initial_size
 
 ## PRIVATE
@@ -4,7 +4,7 @@ import Standard.Base.Errors.Illegal_State.Illegal_State
 
 from project.Errors import Invalid_Value_Type, No_Such_Column, Missing_Input_Columns, Column_Indexes_Out_Of_Range
 import project.Data.Join_Condition.Join_Condition
-import project.Data.Value_Type.Value_Type
+import project.Data.Type.Value_Type.Value_Type
 import project.Internal.Problem_Builder.Problem_Builder
 
 type Join_Condition_Resolver
@@ -7,7 +7,8 @@ import project.Data.Column_Selector.Column_Selector
 import project.Data.Position.Position
 import project.Data.Sort_Column.Sort_Column
 import project.Data.Table.Table
-import project.Data.Value_Type.Value_Type
+import project.Data.Type.Value_Type.Value_Type
+import project.Data.Type.Value_Type_Helpers
 import project.Internal.Problem_Builder.Problem_Builder
 import project.Internal.Unique_Name_Strategy.Unique_Name_Strategy
 
@@ -388,7 +389,7 @@ unify_result_type_for_union column_set all_tables allow_type_widening problem_bu
     case allow_type_widening of
         True ->
             types = columns.filter Filter_Condition.Not_Nothing . map .value_type
-            common_type = Value_Type.find_common_type types strict=True
+            common_type = Value_Type_Helpers.find_common_type types strict=True
             if common_type.is_nothing then
                 problem_builder.report_other_warning (No_Common_Type.Error column_set.name)
             common_type
@@ -445,4 +446,5 @@ get_blank_columns when_any treat_nans_as_blank internal_columns make_column tabl
         Nothing -> True
         1 -> True
         0 -> False
-        _ -> Panic.throw (Illegal_State.Error "Unexpected result. Perhaps an implementation bug of Column_Selector.Blank_Columns.")
+        unexpected ->
+            Panic.throw (Illegal_State.Error "Unexpected result: "+unexpected.to_display_text+". Perhaps an implementation bug of Column_Selector.Blank_Columns.")
@@ -3,7 +3,8 @@ from Standard.Base import all
 import project.Data.Aggregate_Column.Aggregate_Column
 import project.Data.Column.Column
 import project.Data.Column_Selector.Column_Selector
-import project.Data.Value_Type.Auto
+import project.Data.Type.Value_Type.Auto
+import project.Data.Type.Value_Type.Value_Type
 import project.Data.Data_Formatter.Data_Formatter
 import project.Data.Join_Condition.Join_Condition
 import project.Data.Join_Kind.Join_Kind
@@ -24,7 +25,8 @@ import project.Excel.Excel_Workbook.Excel_Workbook
 export project.Data.Aggregate_Column.Aggregate_Column
 export project.Data.Column.Column
 export project.Data.Column_Selector.Column_Selector
-export project.Data.Value_Type.Auto
+export project.Data.Type.Value_Type.Auto
+export project.Data.Type.Value_Type.Value_Type
 export project.Data.Data_Formatter.Data_Formatter
 export project.Data.Join_Condition.Join_Condition
 export project.Data.Join_Kind.Join_Kind
@@ -1,5 +1,6 @@
 from Standard.Base import all
 import Standard.Base.Errors.Common.No_Such_Method
+import Standard.Base.Errors.Illegal_Argument.Illegal_Argument
 
 import project.Test_Result.Test_Result
 from project.Test import Test
@@ -22,7 +23,10 @@ from project.Test import Test
 Any.should_fail_with : Any -> Integer -> Test_Result
 Any.should_fail_with self matcher frames_to_skip=0 =
     loc = Meta.get_source_location 1+frames_to_skip
-    Test.fail ("Expected an error " + matcher.to_text + " but no error occurred, instead got: " + self.to_text + " (at " + loc + ").")
+    matcher_text = case matcher.to_text of
+        text : Text -> text
+        _ -> Meta.meta matcher . to_text
+    Test.fail ("Expected an error " + matcher_text + " but no error occurred, instead got: " + self.to_text + " (at " + loc + ").")
 
 ## Expect a function to fail with the provided dataflow error.
 
@@ -44,7 +48,10 @@ Error.should_fail_with self matcher frames_to_skip=0 =
     caught = self.catch
     if caught == matcher || caught.is_a matcher then Nothing else
         loc = Meta.get_source_location 3+frames_to_skip
-        Test.fail ("Expected error "+matcher.to_text+", but error " + caught.to_text + " has been returned (at " + loc + ").")
+        matcher_text = case matcher.to_text of
+            text : Text -> text
+            _ -> Meta.meta matcher . to_text
+        Test.fail ("Expected error "+matcher_text+", but error " + caught.to_text + " has been returned (at " + loc + ").")
 
 ## Asserts that `self` value is equal to the expected value.
 
@@ -101,6 +108,10 @@ Any.should_equal_type self that frames_to_skip=0 = case (self.is_same_object_as
         msg = self.to_text + " did not equal type " + that.to_text + " (at " + loc + ")."
         Test.fail msg
 
+## Added so that dataflow errors are not silently lost.
+Error.should_equal_type self _ frames_to_skip=0 =
+    Test.fail_match_on_unexpected_error self 1+frames_to_skip
+
 ## Asserts that `self` value is not equal to the expected value.
 
    Arguments:
@@ -123,6 +134,10 @@ Any.should_not_equal self that frames_to_skip=0 = case self != that of
         msg = self.to_text + " did equal " + that.to_text + " (at " + loc + ")."
         Test.fail msg
 
+## Added so that dataflow errors are not silently lost.
+Error.should_not_equal self _ frames_to_skip=0 =
+    Test.fail_match_on_unexpected_error self 1+frames_to_skip
+
 ## Asserts that `self` value is not equal to the expected type value.
 
    Arguments:
@@ -145,6 +160,10 @@ Any.should_not_equal_type self that frames_to_skip=0 = case (self.is_same_object
         msg = self.to_text + " did equal type " + that.to_text + " (at " + loc + ")."
         Test.fail msg
 
+## Added so that dataflow errors are not silently lost.
+Error.should_not_equal_type self _ frames_to_skip=0 =
+    Test.fail_match_on_unexpected_error self 1+frames_to_skip
+
 ## Asserts that `self` value is a Text value and starts with `that`.
 
    Arguments:
@@ -251,6 +270,11 @@ Error.should_succeed : Integer -> Any
 Error.should_succeed self frames_to_skip=0 =
     Test.fail_match_on_unexpected_error self 1+frames_to_skip
 
+## Handles an unexpected dataflow error.
+Error.should_be_a : Integer -> Any
+Error.should_be_a self frames_to_skip=0 =
+    Test.fail_match_on_unexpected_error self 1+frames_to_skip
+
 ## Asserts that the given `Boolean` is `True`
 
    > Example
@@ -320,32 +344,48 @@ Error.should_be_false self = Test.fail_match_on_unexpected_error self 1
     example_should_be_a = 1.should_be_a Boolean
 Any.should_be_a : Any -> Test_Result
 Any.should_be_a self typ =
-    ok = case Meta.meta typ of
+    loc = Meta.get_source_location 1
+    fail_on_wrong_arg_type =
+        Panic.throw <|
+            Illegal_Argument.Error "typ ("+typ.to_display_text+") must either be a type or a constructor. Use `should_equal` for value equality test instead."
+    case Meta.meta typ of
         c : Meta.Constructor -> case Meta.meta self of
-            a : Meta.Atom -> a.constructor == c
-            _ -> False
-        _ -> self.is_a typ || self==typ
-
-    if ok then Test_Result.Success else
-        loc = Meta.get_source_location 3
-        expected_type = Meta.get_qualified_type_name typ
-        actual_type = Meta.get_qualified_type_name self
-        message = "Expected a value of type " + expected_type + " but got a value of type " + actual_type + " instead (at " + loc + ")."
-        Test.fail message
-
-## Asserts that a value is of a given type.
-
-   Arguments:
-   - typ: The type to assert that `self` is a value of.
-
-   > Examples
-     Assert that 1 is of type Integer.
-
-         from Standard.Test import Test
-
-         example_should_be_an = 1.should_be_an Integer
-Any.should_be_an : Any -> Test_Result
-Any.should_be_an self typ = self.should_be_a typ
+            a : Meta.Atom ->
+                if a.constructor == c then Test_Result.Success else
+                    expected_type = Meta.get_qualified_type_name typ
+                    actual_type = Meta.get_qualified_type_name self
+                    message = "Expected a value of type "+expected_type+", built with constructor "+c.name+", but got a value of type "+actual_type+", built with constructor "+a.constructor.name+" instead (at "+loc+")."
+                    Test.fail message
+            _ ->
+                expected_type = Meta.get_qualified_type_name typ
+                actual_type = Meta.get_qualified_type_name self
+                message = "Expected a value of type "+expected_type+", built with constructor "+c.name+", but got a value of type "+actual_type+" instead (at "+loc+")."
+                Test.fail message
+        _ : Meta.Type ->
+            ok = self.is_a typ || self==typ
+            if ok then Test_Result.Success else
+                expected_type = Meta.get_qualified_type_name typ
+                actual_type = Meta.get_qualified_type_name self
+                message = "Expected a value of type "+expected_type+" but got a value of type "+actual_type+" instead (at "+loc+")."
+                Test.fail message
+        # Workaround for 0-argument atom constructors which 'unapplies' them.
+        atom : Meta.Atom ->
+            ctor = atom . constructor
+            if ctor.fields.not_empty then fail_on_wrong_arg_type else
+                self.should_be_a (ctor.value ...)
+        _ : Meta.Polyglot ->
+            ok = self.is_a typ
+            if ok then Test_Result.Success else
+                actual_type = Meta.get_qualified_type_name self
+                message = "Expected a value of Java class "+typ.to_text+" but got a value of type "+actual_type+" instead (at "+loc+")."
+                Test.fail message
+        Meta.Primitive.Value (b : Boolean) ->
+            ok = self == b
+            if ok then Test_Result.Success else
+                actual_type = Meta.get_qualified_type_name self
+                message = "Expected a value of "+typ.to_text+" but got a value of type "+actual_type+" instead (at "+loc+")."
+                Test.fail message
+        _ -> fail_on_wrong_arg_type
 
 ## Asserts that `self` value contains the same elements as `that`.
 
@@ -29,9 +29,15 @@ type Test
         if config.should_run_group name then
             case pending of
                 Nothing ->
-                    r = State.run Spec (Spec.Value name List.Nil) <|
-                        behaviors
-                        State.get Spec
+                    handle_failed_group_builder caught_panic =
+                        stack_trace_text = caught_panic.stack_trace.map .to_display_text . join '\n'
+                        result = Test_Result.Failure "A Panic has been thrown outside of `Test.specify`, failed to run the test group: "+caught_panic.payload.to_display_text details=caught_panic.to_text+'\n'+stack_trace_text
+                        behavior = Behavior.Value "{Building the test group.}" result Duration.zero
+                        Spec.Value name (List.Cons behavior List.Nil)
+                    r = Panic.catch Any handler=handle_failed_group_builder <|
+                        State.run Spec (Spec.Value name List.Nil) <|
+                            behaviors
+                            State.get Spec
                     Test_Reporter.print_report r config suite.builder
                     new_suite = Test_Suite.Value suite.config (List.Cons r suite.specs) suite.builder
                     State.put Test_Suite new_suite
@@ -4,7 +4,6 @@ from Standard.Base.Data.Json import render
 
 from Standard.Table import Table, Column
 import Standard.Table.Data.Row.Row
-import Standard.Table.Data.Storage.Storage
 
 import project.Id.Id
 from project.Text import get_lazy_visualisation_text_window
@@ -236,7 +235,7 @@ Column.default_visualization self = Id.table
    Checks if the column stores numbers.
 Column.is_numeric : Boolean
 Column.is_numeric self =
-    [Storage.Integer,Storage.Decimal].contains self.storage_type
+    self.value_type.is_numeric
 
 ## PRIVATE
    Returns the data requested to render a lazy view of the default visualisation. Decides
@@ -22,31 +22,8 @@ prepare_visualization x = Helpers.recover_errors <|
     prepared = x.to_sql.prepare
     code = prepared.first
     interpolations = prepared.second
-    mapped = interpolations.map e->
-        value = e.first
-        actual_type = Meta.get_qualified_type_name value
-        expected_sql_type = e.second.name
-        expected_enso_type = find_expected_enso_type_for_sql e.second
-        JS_Object.from_pairs [["value", value], ["actual_type", actual_type], ["expected_sql_type", expected_sql_type], ["expected_enso_type", expected_enso_type]]
+    mapped = interpolations.map value->
+        enso_type = Meta.get_qualified_type_name value
+        JS_Object.from_pairs [["value", value], ["enso_type", enso_type]]
     dialect = x.connection.dialect.name
     JS_Object.from_pairs [["dialect", dialect], ["code", code], ["interpolations", mapped]] . to_text
-
-## PRIVATE
-
-   Return an expected Enso type for an SQL type.
-
-   Arguments:
-   - sql_type: The SQL type to convert to an Enso type.
-
-   Expected Enso types are only inferred for some known SQL types. For unknown
-   types it will return `Nothing`.
-find_expected_enso_type_for_sql : SQL_Type -> Text
-find_expected_enso_type_for_sql sql_type =
-    expected_type = if sql_type.is_definitely_integer then Integer else
-        if sql_type.is_definitely_double then Decimal else
-            if sql_type.is_definitely_text then Text else
-                if sql_type.is_definitely_boolean then Boolean else
-                    Nothing
-    case expected_type of
-        Nothing -> Nothing
-        _ -> Meta.get_qualified_type_name expected_type
@@ -26,7 +26,8 @@ import java.io.FileWriter
 object FrgaalJavaCompiler {
   private val ENSO_SOURCES = ".enso-sources"
 
-  val frgaal = "org.frgaal" % "compiler" % "19.0.0" % "provided"
+  val frgaal = "org.frgaal" % "compiler" % "19.0.1" % "provided"
+  val sourceLevel = "19"
 
   def compilers(
       classpath: sbt.Keys.Classpath,
@@ -1,5 +1,6 @@
 package org.enso.table.aggregations;
 
+import org.enso.table.data.column.storage.type.StorageType;
 import org.enso.table.problems.AggregatedProblems;
 import org.enso.table.problems.Problem;
 
@@ -10,10 +11,10 @@ import java.util.stream.Collectors;
 /** Interface used to define aggregate columns. */
 public abstract class Aggregator {
   private final String name;
-  private final int type;
+  private final StorageType type;
   private AggregatedProblems problems;
 
-  protected Aggregator(String name, int type) {
+  protected Aggregator(String name, StorageType type) {
     this.name = name;
     this.type = type;
     this.problems = null;
@@ -33,7 +34,7 @@ public abstract class Aggregator {
    *
    * @return The type of the new column.
    */
-  public int getType() {
+  public StorageType getType() {
     return type;
   }
 
@@ -1,6 +1,7 @@
 package org.enso.table.aggregations;
 
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.TextType;
 import org.enso.table.data.table.Column;
 import org.enso.table.data.table.problems.InvalidAggregation;
 import org.enso.table.data.table.problems.UnquotedDelimiter;
@@ -16,7 +17,7 @@ public class Concatenate extends Aggregator {
 
   public Concatenate(
       String name, Column column, String separator, String prefix, String suffix, String quote) {
-    super(name, Storage.Type.STRING);
+    super(name, TextType.VARIABLE_LENGTH);
     this.storage = column.getStorage();
 
     this.separator = separator == null ? "" : separator;
@@ -1,13 +1,13 @@
 package org.enso.table.aggregations;
 
-import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.IntegerType;
 
 import java.util.List;
 
 /** Aggregate Column counting the number of entries in a group. */
 public class Count extends Aggregator {
   public Count(String name) {
-    super(name, Storage.Type.LONG);
+    super(name, IntegerType.INT_64);
   }
 
   @Override
@@ -2,13 +2,13 @@ package org.enso.table.aggregations;
 
 import org.enso.base.text.TextFoldingStrategy;
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.IntegerType;
 import org.enso.table.data.index.UnorderedMultiValueKey;
 import org.enso.table.data.table.Column;
 import org.enso.table.data.table.problems.FloatingPointGrouping;
 import org.enso.table.util.ConstantList;
 
 import java.util.Arrays;
-import java.util.Comparator;
 import java.util.HashSet;
 import java.util.List;
 
@@ -29,7 +29,7 @@ public class CountDistinct extends Aggregator {
    * @param ignoreAllNull if true ignore then all values are null
    */
  public CountDistinct(String name, Column[] columns, boolean ignoreAllNull) {
-    super(name, Storage.Type.LONG);
+    super(name, IntegerType.INT_64);
     this.storage = Arrays.stream(columns).map(Column::getStorage).toArray(Storage[]::new);
     this.ignoreAllNull = ignoreAllNull;
     textFoldingStrategy =
@@ -1,6 +1,7 @@
 package org.enso.table.aggregations;
 
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.IntegerType;
 import org.enso.table.data.table.Column;
 import org.enso.table.data.table.problems.InvalidAggregation;
 
@@ -22,7 +23,7 @@ public class CountEmpty extends Aggregator {
    * @param isEmpty true to count nulls or empty, false to count non-empty
    */
  public CountEmpty(String name, Column column, boolean isEmpty) {
-    super(name, Storage.Type.LONG);
+    super(name, IntegerType.INT_64);
     this.storage = column.getStorage();
     this.isEmpty = isEmpty;
  }
@@ -1,6 +1,7 @@
 package org.enso.table.aggregations;
 
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.IntegerType;
 import org.enso.table.data.table.Column;
 
 import java.util.List;
@@ -21,7 +22,7 @@ public class CountNothing extends Aggregator {
    * @param isNothing true to count nulls, false to count non-nulls
    */
  public CountNothing(String name, Column column, boolean isNothing) {
-    super(name, Storage.Type.LONG);
+    super(name, IntegerType.INT_64);
     this.storage = column.getStorage();
     this.isNothing = isNothing;
  }
@@ -30,7 +31,7 @@ public class CountNothing extends Aggregator {
  public Object aggregate(List<Integer> indexes) {
    long count = 0;
    for (int row : indexes) {
-      count += ((storage.getItemBoxed(row) == null) == isNothing ? 1 : 0);
+      count += ((storage.getItemBoxed(row) == null) == isNothing ? 1L : 0L);
    }
    return count;
  }
@@ -2,6 +2,7 @@ package org.enso.table.aggregations;
 
 import org.enso.base.polyglot.NumericConverter;
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.FloatType;
 import org.enso.table.data.table.Column;
 import org.enso.table.data.table.problems.InvalidAggregation;
 
@@ -22,7 +23,7 @@ public class Mean extends Aggregator {
  private final Storage<?> storage;
 
  public Mean(String name, Column column) {
-    super(name, Storage.Type.DOUBLE);
+    super(name, FloatType.FLOAT_64);
    this.storage = column.getStorage();
  }
 
@@ -12,6 +12,9 @@ import java.util.List;
 * Aggregate Column finding the minimum (minOrMax = -1) or maximum (minOrMax = 1) entry in a group.
 */
 public class MinOrMax extends Aggregator {
+  public static final int MIN = -1;
+  public static final int MAX = 1;
+
  private final Storage<?> storage;
  private final int minOrMax;
 
@@ -2,6 +2,7 @@ package org.enso.table.aggregations;
 
 import org.enso.base.polyglot.NumericConverter;
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.FloatType;
 import org.enso.table.data.table.Column;
 import org.enso.table.data.table.problems.InvalidAggregation;
 
@@ -16,7 +17,7 @@ public class Percentile extends Aggregator {
  private final double percentile;
 
  public Percentile(String name, Column column, double percentile) {
-    super(name, Storage.Type.DOUBLE);
+    super(name, FloatType.FLOAT_64);
    this.storage = column.getStorage();
    this.percentile = percentile;
  }
@@ -1,8 +1,8 @@
 package org.enso.table.aggregations;
 
-import com.ibm.icu.text.BreakIterator;
 import org.enso.base.Text_Utils;
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.TextType;
 import org.enso.table.data.table.Column;
 import org.enso.table.data.table.problems.InvalidAggregation;
 
@@ -10,11 +10,13 @@ import java.util.List;
 
 /** Aggregate Column finding the longest or shortest string in a group. */
 public class ShortestOrLongest extends Aggregator {
+  public static final int SHORTEST = -1;
+  public static final int LONGEST = 1;
   private final Storage<?> storage;
   private final int minOrMax;
 
   public ShortestOrLongest(String name, Column column, int minOrMax) {
-    super(name, Storage.Type.STRING);
+    super(name, TextType.VARIABLE_LENGTH);
     this.storage = column.getStorage();
     this.minOrMax = minOrMax;
   }
@@ -2,6 +2,7 @@ package org.enso.table.aggregations;
 
 import org.enso.base.polyglot.NumericConverter;
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.FloatType;
 import org.enso.table.data.table.Column;
 import org.enso.table.data.table.problems.InvalidAggregation;
 
@@ -25,7 +26,7 @@ public class StandardDeviation extends Aggregator {
   private final boolean population;
 
   public StandardDeviation(String name, Column column, boolean population) {
-    super(name, Storage.Type.DOUBLE);
+    super(name, FloatType.FLOAT_64);
     this.storage = column.getStorage();
     this.population = population;
   }
@@ -2,6 +2,7 @@ package org.enso.table.aggregations;
 
 import org.enso.base.polyglot.NumericConverter;
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.FloatType;
 import org.enso.table.data.table.Column;
 import org.enso.table.data.table.problems.InvalidAggregation;
 
@@ -12,7 +13,7 @@ public class Sum extends Aggregator {
   private final Storage<?> storage;
 
   public Sum(String name, Column column) {
-    super(name, Storage.Type.DOUBLE);
+    super(name, FloatType.FLOAT_64);
     this.storage = column.getStorage();
   }
 
@@ -2,6 +2,8 @@ package org.enso.table.data.column.builder.object;
 
 import org.enso.table.data.column.storage.BoolStorage;
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.BooleanType;
+import org.enso.table.data.column.storage.type.StorageType;
 import org.enso.table.util.BitSets;
 
 import java.util.BitSet;
@@ -64,7 +66,7 @@ public class BoolBuilder extends TypedBuilder {
 
   @Override
   public void appendBulkStorage(Storage<?> storage) {
-    if (storage.getType() == getType()) {
+    if (storage.getType().equals(getType())) {
       if (storage instanceof BoolStorage boolStorage) {
         BitSets.copy(boolStorage.getValues(), vals, size, boolStorage.size());
         BitSets.copy(boolStorage.getIsMissing(), isNa, size, boolStorage.size());
@@ -99,17 +101,17 @@ public class BoolBuilder extends TypedBuilder {
   }
 
   @Override
-  public boolean canRetypeTo(long type) {
+  public boolean canRetypeTo(StorageType type) {
     return false;
   }
 
   @Override
-  public TypedBuilder retypeTo(long type) {
+  public TypedBuilder retypeTo(StorageType type) {
     throw new UnsupportedOperationException();
   }
 
   @Override
-  public int getType() {
-    return Storage.Type.BOOL;
+  public StorageType getType() {
+    return BooleanType.INSTANCE;
   }
 }
@@ -1,21 +1,43 @@
 package org.enso.table.data.column.builder.object;
 
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.*;
+import org.enso.table.data.column.storage.type.BooleanType;
+import org.enso.table.data.column.storage.type.FloatType;
+import org.enso.table.data.column.storage.type.IntegerType;
 
 /** A builder for creating columns dynamically. */
 public abstract class Builder {
-  public static Builder getForType(int type, int size) {
-    return switch (type) {
-      case Storage.Type.OBJECT -> new ObjectBuilder(size);
-      case Storage.Type.LONG -> NumericBuilder.createLongBuilder(size);
-      case Storage.Type.DOUBLE -> NumericBuilder.createDoubleBuilder(size);
-      case Storage.Type.STRING -> new StringBuilder(size);
-      case Storage.Type.BOOL -> new BoolBuilder();
-      case Storage.Type.DATE -> new DateBuilder(size);
-      case Storage.Type.TIME_OF_DAY -> new TimeOfDayBuilder(size);
-      case Storage.Type.DATE_TIME -> new DateTimeBuilder(size);
-      default -> new InferredBuilder(size);
+  public static Builder getForType(StorageType type, int size) {
+    Builder builder = switch (type) {
+      case AnyObjectType() -> new ObjectBuilder(size);
+      case BooleanType() -> new BoolBuilder(size);
+      case DateType() -> new DateBuilder(size);
+      case DateTimeType() -> new DateTimeBuilder(size);
+      case TimeOfDayType() -> new TimeOfDayBuilder(size);
+      case FloatType(Bits bits) ->
+          switch (bits) {
+            case BITS_64 -> NumericBuilder.createDoubleBuilder(size);
+            default -> throw new IllegalArgumentException("Only 64-bit floats are currently supported.");
+          };
+      case IntegerType(Bits bits) ->
+          switch (bits) {
+            case BITS_64 -> NumericBuilder.createLongBuilder(size);
+            default -> throw new IllegalArgumentException("TODO: Builders other than 64-bit int are not yet supported.");
+          };
+      case TextType(long maxLength, boolean isFixed) -> {
+        if (isFixed) {
+          throw new IllegalArgumentException("Fixed-length text builders are not yet supported yet.");
+        }
+        if (maxLength >= 0) {
+          throw new IllegalArgumentException("Text builders with a maximum length are not yet supported yet.");
+        }
+
+        yield new StringBuilder(size);
+      }
     };
+    assert builder.getType().equals(type);
+    return builder;
   }
 
   /**
@@ -66,4 +88,7 @@ public abstract class Builder {
    * @return a storage containing all the items appended so far
    */
   public abstract Storage<?> seal();
+
+  /** @return the current storage type of this builder */
+  public abstract StorageType getType();
 }
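The rewritten `getForType` above dispatches on a sealed `StorageType` hierarchy using Java record patterns, which lets the compiler verify the switch is exhaustive. Below is a self-contained sketch of that technique with a hypothetical `ValueTag` hierarchy (these names are illustrative, not the actual Enso classes):

```java
// Sketch of exhaustive switching over a sealed hierarchy with record patterns
// (Java 21). The ValueTag types here are hypothetical stand-ins for the
// StorageType classes used in the diff above.
public class SealedSwitchDemo {
  sealed interface ValueTag permits BoolTag, IntTag, TextTag {}

  record BoolTag() implements ValueTag {}

  record IntTag(int bits) implements ValueTag {}

  record TextTag(long maxLength) implements ValueTag {}

  static String describe(ValueTag tag) {
    // The interface is sealed, so the compiler checks that every permitted
    // subtype is covered; no default branch is needed.
    return switch (tag) {
      case BoolTag() -> "boolean";
      case IntTag(int bits) -> bits + "-bit integer";
      case TextTag(long maxLength) ->
          maxLength < 0 ? "variable-length text" : "text up to " + maxLength;
    };
  }

  public static void main(String[] args) {
    System.out.println(describe(new IntTag(64)));  // 64-bit integer
    System.out.println(describe(new TextTag(-1))); // variable-length text
  }
}
```

Because no `default` branch is needed, adding a new tag type turns every unhandled dispatch site into a compile error, which appears to be why the diff drops the old `default -> new InferredBuilder(size)` fallback.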
@@ -2,6 +2,8 @@ package org.enso.table.data.column.builder.object;
 
 import org.enso.table.data.column.storage.DateStorage;
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.DateType;
+import org.enso.table.data.column.storage.type.StorageType;
 
 import java.time.LocalDate;
 
@@ -17,8 +19,8 @@ public class DateBuilder extends TypedBuilderImpl<LocalDate> {
   }
 
   @Override
-  public int getType() {
-    return Storage.Type.DATE;
+  public StorageType getType() {
+    return DateType.INSTANCE;
   }
 
   @Override
@@ -2,6 +2,8 @@ package org.enso.table.data.column.builder.object;
 
 import org.enso.table.data.column.storage.DateTimeStorage;
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.DateTimeType;
+import org.enso.table.data.column.storage.type.StorageType;
 
 import java.time.ZonedDateTime;
 
@@ -17,8 +19,8 @@ public class DateTimeBuilder extends TypedBuilderImpl<ZonedDateTime> {
   }
 
   @Override
-  public int getType() {
-    return Storage.Type.DATE_TIME;
+  public StorageType getType() {
+    return DateTimeType.INSTANCE;
   }
 
   @Override
@@ -2,8 +2,15 @@ package org.enso.table.data.column.builder.object;
 
 import org.enso.base.polyglot.NumericConverter;
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.BooleanType;
+import org.enso.table.data.column.storage.type.DateTimeType;
+import org.enso.table.data.column.storage.type.DateType;
+import org.enso.table.data.column.storage.type.FloatType;
+import org.enso.table.data.column.storage.type.IntegerType;
+import org.enso.table.data.column.storage.type.StorageType;
+import org.enso.table.data.column.storage.type.TextType;
+import org.enso.table.data.column.storage.type.TimeOfDayType;
 
-import java.math.BigDecimal;
 import java.time.LocalDate;
 import java.time.LocalTime;
 import java.time.ZonedDateTime;
@@ -18,9 +25,9 @@ public class InferredBuilder extends Builder {
   private final int initialSize;
 
   /**
-   * Creates a new instance of this builder, with the given known result size.
+   * Creates a new instance of this builder, with the given known result length.
    *
-   * @param initialSize the result size
+   * @param initialSize the result length
    */
   public InferredBuilder(int initialSize) {
     this.initialSize = initialSize;
@@ -107,22 +114,26 @@ public class InferredBuilder extends Builder {
       currentBuilder.appendNulls(currentSize);
     }
 
-  private record RetypeInfo(Class<?> clazz, int type) {}
+  private record RetypeInfo(Class<?> clazz, StorageType type) {}
 
   private static final List<RetypeInfo> retypePairs =
       List.of(
-          new RetypeInfo(Boolean.class, Storage.Type.BOOL),
-          new RetypeInfo(Long.class, Storage.Type.LONG),
-          new RetypeInfo(Double.class, Storage.Type.DOUBLE),
-          new RetypeInfo(String.class, Storage.Type.STRING),
-          new RetypeInfo(BigDecimal.class, Storage.Type.DOUBLE),
-          new RetypeInfo(LocalDate.class, Storage.Type.DATE),
-          new RetypeInfo(LocalTime.class, Storage.Type.TIME_OF_DAY),
-          new RetypeInfo(ZonedDateTime.class, Storage.Type.DATE_TIME),
-          new RetypeInfo(Float.class, Storage.Type.DOUBLE),
-          new RetypeInfo(Integer.class, Storage.Type.LONG),
-          new RetypeInfo(Short.class, Storage.Type.LONG),
-          new RetypeInfo(Byte.class, Storage.Type.LONG));
+          new RetypeInfo(Boolean.class, BooleanType.INSTANCE),
+          new RetypeInfo(Long.class, IntegerType.INT_64),
+          new RetypeInfo(Double.class, FloatType.FLOAT_64),
+          new RetypeInfo(String.class, TextType.VARIABLE_LENGTH),
+          // TODO [RW] I think BigDecimals should not be coerced to floats, we should add Decimal
+          // support to in-memory tables at some point
+          // new RetypeInfo(BigDecimal.class, StorageType.FLOAT_64),
+          new RetypeInfo(LocalDate.class, DateType.INSTANCE),
+          new RetypeInfo(LocalTime.class, TimeOfDayType.INSTANCE),
+          new RetypeInfo(ZonedDateTime.class, DateTimeType.INSTANCE),
+          new RetypeInfo(Float.class, FloatType.FLOAT_64),
+          // Smaller integer types are upcast to 64-bit integers by default anyway. This logic does
+          // not apply only if a specific type is requested (so not in inferred builder).
+          new RetypeInfo(Integer.class, IntegerType.INT_64),
+          new RetypeInfo(Short.class, IntegerType.INT_64),
+          new RetypeInfo(Byte.class, IntegerType.INT_64));
 
   private void retypeAndAppend(Object o) {
     for (RetypeInfo info : retypePairs) {
@@ -156,4 +167,10 @@ public class InferredBuilder extends Builder {
     }
     return currentBuilder.seal();
   }
+
+  @Override
+  public StorageType getType() {
+    // The type of InferredBuilder can change over time, so we do not report any stable type here.
+    return null;
+  }
 }
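The `retypePairs` table above drives `InferredBuilder.retypeAndAppend`: the incoming value's class is checked against an ordered list of `(class, type)` pairs and the first match wins. A standalone sketch of that lookup strategy (the `TypeTag` enum and method names are illustrative, not the Enso API):

```java
import java.util.List;

// First-match lookup over an ordered list of (class, tag) pairs, mirroring
// the retypePairs scan in InferredBuilder. Names here are hypothetical.
public class RetypeLookupDemo {
  enum TypeTag { BOOLEAN, INT_64, FLOAT_64, TEXT }

  record RetypeInfo(Class<?> clazz, TypeTag type) {}

  static final List<RetypeInfo> PAIRS = List.of(
      new RetypeInfo(Boolean.class, TypeTag.BOOLEAN),
      new RetypeInfo(Long.class, TypeTag.INT_64),
      new RetypeInfo(Double.class, TypeTag.FLOAT_64),
      new RetypeInfo(String.class, TypeTag.TEXT),
      // Smaller integral types widen to 64-bit integers, as in the diff above.
      new RetypeInfo(Integer.class, TypeTag.INT_64),
      new RetypeInfo(Short.class, TypeTag.INT_64),
      new RetypeInfo(Byte.class, TypeTag.INT_64));

  static TypeTag tagFor(Object o) {
    for (RetypeInfo info : PAIRS) {
      if (info.clazz().isInstance(o)) {
        return info.type();
      }
    }
    return null; // the real code falls back to a generic object builder
  }

  public static void main(String[] args) {
    System.out.println(tagFor(42));     // INT_64
    System.out.println(tagFor(4.2));    // FLOAT_64
    System.out.println(tagFor("text")); // TEXT
  }
}
```

A linear scan keeps the precedence explicit (e.g. `Long` before the smaller integral types), which a hash map keyed by class would not express as directly.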
@@ -2,12 +2,17 @@ package org.enso.table.data.column.builder.object;
 
 import java.util.Arrays;
 import java.util.BitSet;
+import java.util.Objects;
 
 import org.enso.base.polyglot.NumericConverter;
 import org.enso.table.data.column.storage.BoolStorage;
 import org.enso.table.data.column.storage.DoubleStorage;
 import org.enso.table.data.column.storage.LongStorage;
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.BooleanType;
+import org.enso.table.data.column.storage.type.FloatType;
+import org.enso.table.data.column.storage.type.IntegerType;
+import org.enso.table.data.column.storage.type.StorageType;
 import org.enso.table.util.BitSets;
 
 /**
@@ -46,16 +51,16 @@ public class NumericBuilder extends TypedBuilder {
   }
 
   @Override
-  public boolean canRetypeTo(long type) {
-    return !this.isDouble && type == Storage.Type.DOUBLE;
+  public boolean canRetypeTo(StorageType type) {
+    return !this.isDouble && Objects.equals(type, FloatType.FLOAT_64);
   }
 
   @Override
-  public TypedBuilder retypeTo(long type) {
-    if (!this.isDouble && type == Storage.Type.DOUBLE) {
+  public TypedBuilder retypeTo(StorageType type) {
+    if (!this.isDouble && Objects.equals(type, FloatType.FLOAT_64)) {
       this.isDouble = true;
       for (int i = 0; i < currentSize; i++) {
-        data[i] = Double.doubleToRawLongBits(data[i]);
+        data[i] = Double.doubleToRawLongBits((double) data[i]);
       }
       return this;
     } else {
@@ -64,8 +69,8 @@ public class NumericBuilder extends TypedBuilder {
   }
 
   @Override
-  public int getType() {
-    return isDouble ? Storage.Type.DOUBLE : Storage.Type.LONG;
+  public StorageType getType() {
+    return isDouble ? FloatType.FLOAT_64 : IntegerType.INT_64;
   }
 
   @Override
@@ -119,7 +124,7 @@ public class NumericBuilder extends TypedBuilder {
   }
 
   private void appendBulkDouble(Storage<?> storage) {
-    if (storage.getType() == Storage.Type.DOUBLE) {
+    if (Objects.equals(storage.getType(), FloatType.FLOAT_64)) {
       if (storage instanceof DoubleStorage doubleStorage) {
         int n = doubleStorage.size();
         ensureFreeSpaceFor(n);
@@ -132,12 +137,12 @@ public class NumericBuilder extends TypedBuilder {
                 + storage
                 + ". This is a bug in the Table library.");
       }
-    } else if (storage.getType() == Storage.Type.LONG) {
+    } else if (Objects.equals(storage.getType(), IntegerType.INT_64)) {
       if (storage instanceof LongStorage longStorage) {
         int n = longStorage.size();
         BitSets.copy(longStorage.getIsMissing(), isMissing, currentSize, n);
         for (int i = 0; i < n; i++) {
-          data[currentSize++] = Double.doubleToRawLongBits(longStorage.getItem(i));
+          data[currentSize++] = Double.doubleToRawLongBits((double) longStorage.getItem(i));
         }
       } else {
         throw new IllegalStateException(
@@ -145,7 +150,7 @@ public class NumericBuilder extends TypedBuilder {
                 + storage
                 + ". This is a bug in the Table library.");
       }
-    } else if (storage.getType() == Storage.Type.BOOL) {
+    } else if (Objects.equals(storage.getType(), BooleanType.INSTANCE)) {
       if (storage instanceof BoolStorage boolStorage) {
         int n = boolStorage.size();
         for (int i = 0; i < n; i++) {
@@ -168,7 +173,7 @@ public class NumericBuilder extends TypedBuilder {
   }
 
   private void appendBulkLong(Storage<?> storage) {
-    if (storage.getType() == Storage.Type.LONG) {
+    if (Objects.equals(storage.getType(), IntegerType.INT_64)) {
       if (storage instanceof LongStorage longStorage) {
         int n = longStorage.size();
         ensureFreeSpaceFor(n);
@@ -181,7 +186,7 @@ public class NumericBuilder extends TypedBuilder {
                 + storage
                 + ". This is a bug in the Table library.");
       }
-    } else if (storage.getType() == Storage.Type.BOOL) {
+    } else if (Objects.equals(storage.getType(), BooleanType.INSTANCE)) {
       if (storage instanceof BoolStorage boolStorage) {
         int n = boolStorage.size();
         for (int i = 0; i < n; i++) {
@@ -203,7 +208,7 @@ public class NumericBuilder extends TypedBuilder {
   }
 
   private long booleanAsLong(boolean value) {
-    return value ? 1 : 0;
+    return value ? 1L : 0L;
   }
 
   private double booleanAsDouble(boolean value) {
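`NumericBuilder` keeps both integer and float columns in a single `long[]` buffer; retyping to floats rewrites each slot in place with the raw IEEE-754 bit pattern of the converted value, which is what the `Double.doubleToRawLongBits((double) ...)` lines above do. A minimal round-trip demonstration of that trick:

```java
// One long[] buffer reused for both integer and float data: retyping converts
// each stored long to a double and writes its raw bit pattern back into the
// same slot; readers then decode the bits with Double.longBitsToDouble.
public class RawBitsDemo {
  public static void main(String[] args) {
    long[] data = { 1, 2, 42 };

    // Retype in place, as NumericBuilder.retypeTo does. The explicit (double)
    // cast makes the long -> double widening visible at the call site.
    for (int i = 0; i < data.length; i++) {
      data[i] = Double.doubleToRawLongBits((double) data[i]);
    }

    // Reading the column back now decodes the bits as doubles.
    double third = Double.longBitsToDouble(data[2]);
    System.out.println(third); // 42.0
  }
}
```

This avoids allocating a second array on retype at the cost of tracking an `isDouble` flag that tells readers how to interpret the bits.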
@@ -3,6 +3,8 @@ package org.enso.table.data.column.builder.object;
 import org.enso.table.data.column.storage.ObjectStorage;
 import org.enso.table.data.column.storage.SpecializedStorage;
 import org.enso.table.data.column.storage.Storage;
+import org.enso.table.data.column.storage.type.AnyObjectType;
+import org.enso.table.data.column.storage.type.StorageType;
 
 import java.util.Arrays;
 
@@ -25,18 +27,18 @@ public class ObjectBuilder extends TypedBuilder {
   }
 
   @Override
-  public boolean canRetypeTo(long type) {
+  public boolean canRetypeTo(StorageType type) {
     return false;
   }
 
   @Override
-  public TypedBuilder retypeTo(long type) {
+  public TypedBuilder retypeTo(StorageType type) {
     throw new IllegalStateException("Broken invariant: rewriting the most general type.");
   }
 
   @Override
-  public int getType() {
-    return Storage.Type.OBJECT;
+  public StorageType getType() {
+    return AnyObjectType.INSTANCE;
   }
 
   @Override
@@ -1,10 +1,12 @@
 package org.enso.table.data.column.builder.object;
 
-public class StorageTypeMismatch extends RuntimeException {
-  private final int expectedType;
-  private final int gotType;
+import org.enso.table.data.column.storage.type.StorageType;
 
-  public StorageTypeMismatch(int expectedType, int gotType) {
+public class StorageTypeMismatch extends RuntimeException {
+  private final StorageType expectedType;
+  private final StorageType gotType;
+
+  public StorageTypeMismatch(StorageType expectedType, StorageType gotType) {
     this.expectedType = expectedType;
     this.gotType = gotType;
   }
@@ -18,7 +20,7 @@ public class StorageTypeMismatch extends RuntimeException {
         + ". This is a bug in the Table library.";
   }
 
-  public int gotType() {
+  public StorageType gotType() {
     return gotType;
   }
 }
@@ -2,6 +2,8 @@ package org.enso.table.data.column.builder.object;
 
 import org.enso.table.data.column.storage.Storage;
 import org.enso.table.data.column.storage.StringStorage;
+import org.enso.table.data.column.storage.type.StorageType;
+import org.enso.table.data.column.storage.type.TextType;
 
 /** A builder for string columns. */
 public class StringBuilder extends TypedBuilderImpl<String> {
@@ -15,8 +17,8 @@ public class StringBuilder extends TypedBuilderImpl<String> {
   }
 
   @Override
-  public int getType() {
-    return Storage.Type.STRING;
+  public StorageType getType() {
+    return TextType.VARIABLE_LENGTH;
   }
 
   @Override
@@ -2,6 +2,8 @@ package org.enso.table.data.column.builder.object;
 
 import org.enso.table.data.column.storage.Storage;
 import org.enso.table.data.column.storage.TimeOfDayStorage;
+import org.enso.table.data.column.storage.type.StorageType;
+import org.enso.table.data.column.storage.type.TimeOfDayType;
 
 import java.time.LocalTime;
 
@@ -17,8 +19,8 @@ public class TimeOfDayBuilder extends TypedBuilderImpl<LocalTime> {
   }
 
   @Override
-  public int getType() {
-    return Storage.Type.TIME_OF_DAY;
+  public StorageType getType() {
+    return TimeOfDayType.INSTANCE;
  }
 
   @Override
@@ -1,6 +1,8 @@
 package org.enso.table.data.column.builder.object;
 
-/** A builder for the given storage type and known result size. */
+import org.enso.table.data.column.storage.type.StorageType;
+
+/** A builder for the given storage type and known result length. */
 public abstract class TypedBuilder extends Builder {
   /**
    * Dump all the items into a given boxed buffer.
@@ -12,22 +14,19 @@ public abstract class TypedBuilder extends Builder {
   /**
    * Checks if the builder can be efficiently retyped to the given storage type.
    *
-   * @param type the storage type enumeration
+   * @param type the storage type
    * @return whether the column can be retyped
   */
-  public abstract boolean canRetypeTo(long type);
+  public abstract boolean canRetypeTo(StorageType type);
 
   /**
-   * Retype this builder to the given type. Can only be called if {@link #canRetypeTo(long)} returns
-   * true for the type.
+   * Retype this builder to the given type. Can only be called if {@link #canRetypeTo(StorageType)}
+   * returns true for the type.
    *
    * @param type the target type
    * @return a retyped builder
    */
-  public abstract TypedBuilder retypeTo(long type);
+  public abstract TypedBuilder retypeTo(StorageType type);
 
-  /** @return the current storage type of this builder */
-  public abstract int getType();
-
   /** Specifies if the following object will be accepted by this builder's append* methods. */
   public abstract boolean accepts(Object o);
@ -2,8 +2,11 @@ package org.enso.table.data.column.builder.object;
|
|||||||
|
|
||||||
import org.enso.table.data.column.storage.SpecializedStorage;
|
import org.enso.table.data.column.storage.SpecializedStorage;
|
||||||
import org.enso.table.data.column.storage.Storage;
|
import org.enso.table.data.column.storage.Storage;
|
||||||
|
import org.enso.table.data.column.storage.type.AnyObjectType;
|
||||||
|
import org.enso.table.data.column.storage.type.StorageType;
|
||||||
|
|
||||||
import java.util.Arrays;
|
import java.util.Arrays;
|
||||||
|
import java.util.Objects;
|
||||||
|
|
||||||
public abstract class TypedBuilderImpl<T> extends TypedBuilder {
|
public abstract class TypedBuilderImpl<T> extends TypedBuilder {
|
||||||
protected T[] data;
|
protected T[] data;
|
||||||
@ -21,13 +24,13 @@ public abstract class TypedBuilderImpl<T> extends TypedBuilder {
|
|||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public boolean canRetypeTo(long type) {
|
public boolean canRetypeTo(StorageType type) {
|
||||||
return type == Storage.Type.OBJECT;
|
return Objects.equals(type, AnyObjectType.INSTANCE);
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public TypedBuilder retypeTo(long type) {
|
public TypedBuilder retypeTo(StorageType type) {
|
||||||
if (type == Storage.Type.OBJECT) {
|
if (Objects.equals(type, AnyObjectType.INSTANCE)) {
|
||||||
Object[] widenedData = Arrays.copyOf(data, data.length, Object[].class);
|
Object[] widenedData = Arrays.copyOf(data, data.length, Object[].class);
|
||||||
ObjectBuilder res = new ObjectBuilder(widenedData);
|
ObjectBuilder res = new ObjectBuilder(widenedData);
|
||||||
res.setCurrentSize(currentSize);
|
res.setCurrentSize(currentSize);
|
||||||
@ -53,7 +56,7 @@ public abstract class TypedBuilderImpl<T> extends TypedBuilder {
|
|||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void appendBulkStorage(Storage<?> storage) {
|
public void appendBulkStorage(Storage<?> storage) {
|
||||||
if (storage.getType() == getType()) {
|
if (storage.getType().equals(getType())) {
|
||||||
if (storage instanceof SpecializedStorage<?>) {
|
if (storage instanceof SpecializedStorage<?>) {
|
||||||
// This cast is safe, because storage.getType() == this.getType() iff storage.T == this.T
|
// This cast is safe, because storage.getType() == this.getType() iff storage.T == this.T
|
||||||
@SuppressWarnings("unchecked")
|
@SuppressWarnings("unchecked")
|
||||||
|
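The hunks above replace `int` type tags compared with `==` by `StorageType` objects compared with `Objects.equals`. A minimal sketch (hypothetical stand-in records, not the actual Enso classes) of why value equality matters once types carry parameters:

```java
import java.util.Objects;

public class TypeEqualityDemo {
  // Hypothetical stand-ins for the new record-based storage types.
  sealed interface StorageType permits TextType, AnyObjectType {}
  record TextType(long maxLength, boolean fixedLength) implements StorageType {}
  record AnyObjectType() implements StorageType {}

  // Mirrors the shape of the new canRetypeTo check: value equality, not identity.
  static boolean isAnyObject(StorageType type) {
    return Objects.equals(type, new AnyObjectType());
  }

  public static void main(String[] args) {
    StorageType a = new TextType(-1, false);
    StorageType b = new TextType(-1, false);
    System.out.println(a == b);      // false: two distinct objects
    System.out.println(a.equals(b)); // true: records compare by components
    System.out.println(isAnyObject(new AnyObjectType())); // true
  }
}
```

With plain `int` constants identity comparison was safe; with parametrized record types, two independently constructed instances of the same type are distinct objects, so only `equals` gives the intended answer.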
@@ -1,119 +0,0 @@
-package org.enso.table.data.column.builder.string;
-
-import org.enso.table.data.column.storage.DoubleStorage;
-import org.enso.table.data.column.storage.LongStorage;
-import org.enso.table.data.column.storage.Storage;
-
-import java.util.BitSet;
-
-/**
- * A column builder for numeric types. Tries to interpret all data as 64-bit integers. If that
- * becomes impossible, retypes itself to store 64-bit floats. When even that fails, falls back to a
- * {@link StringStorageBuilder}.
- */
-public class PrimInferredStorageBuilder extends StorageBuilder {
-  private enum Type {
-    LONG,
-    DOUBLE
-  }
-
-  private int size = 0;
-  private long[] data = new long[64];
-  private String[] rawData = new String[64];
-  private final BitSet isMissing = new BitSet();
-  private Type type = Type.LONG;
-
-  /** @inheritDoc */
-  @Override
-  public StorageBuilder parseAndAppend(String value) {
-    if (value == null) {
-      ensureAppendable();
-      isMissing.set(size);
-      size++;
-      return this;
-    }
-    switch (type) {
-      case LONG:
-        return appendLong(value);
-      case DOUBLE:
-        return appendDouble(value);
-      default:
-        throw new IllegalStateException();
-    }
-  }
-
-  private StorageBuilder appendLong(String value) {
-    try {
-      long l = Long.parseLong(value);
-      ensureAppendable();
-      rawData[size] = value;
-      data[size] = l;
-      size++;
-      return this;
-    } catch (NumberFormatException ignored) {
-      return failedLong(value);
-    }
-  }
-
-  private StorageBuilder appendDouble(String value) {
-    try {
-      double d = Double.parseDouble(value);
-      ensureAppendable();
-      data[size] = Double.doubleToRawLongBits(d);
-      rawData[size] = value;
-      size++;
-      return this;
-    } catch (NumberFormatException ignored) {
-      return failedDouble(value);
-    }
-  }
-
-  private StorageBuilder failedLong(String value) {
-    try {
-      double d = Double.parseDouble(value);
-      retypeToDouble();
-      ensureAppendable();
-      data[size] = Double.doubleToRawLongBits(d);
-      rawData[size] = value;
-      size++;
-      return this;
-    } catch (NumberFormatException ignored) {
-      return failedDouble(value);
-    }
-  }
-
-  private StorageBuilder failedDouble(String value) {
-    StringStorageBuilder newBuilder = new StringStorageBuilder(rawData, size);
-    newBuilder.parseAndAppend(value);
-    return newBuilder;
-  }
-
-  private void retypeToDouble() {
-    for (int i = 0; i < size; i++) {
-      data[i] = Double.doubleToRawLongBits(data[i]);
-    }
-    type = Type.DOUBLE;
-  }
-
-  // TODO[MK] Consider storing data `rawData` in non-linear storage to avoid reallocations.
-  private void ensureAppendable() {
-    if (size >= data.length) {
-      long[] newData = new long[2 * data.length];
-      String[] newRawData = new String[2 * data.length];
-      System.arraycopy(data, 0, newData, 0, data.length);
-      System.arraycopy(rawData, 0, newRawData, 0, rawData.length);
-      data = newData;
-      rawData = newRawData;
-    }
-  }
-
-  /** @inheritDoc */
-  @Override
-  public Storage<?> seal() {
-    if (type == Type.LONG) {
-      return new LongStorage(data, size, isMissing);
-    } else {
-      return new DoubleStorage(data, size, isMissing);
-    }
-  }
-}
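The file deleted above implemented a long → double → string inference cascade for parsed text. A condensed, self-contained sketch of that cascade (hypothetical names, no Enso dependencies):

```java
import java.util.List;

public class InferenceDemo {
  enum Kind { LONG, DOUBLE, STRING }

  // Try to parse every cell as a long; on failure fall back to double,
  // and finally to keeping the raw strings - the same order of fallbacks
  // the removed builder used.
  static Kind inferKind(List<String> cells) {
    Kind kind = Kind.LONG;
    for (String cell : cells) {
      if (cell == null) continue; // missing values do not affect the inferred type
      if (kind == Kind.LONG) {
        try { Long.parseLong(cell); continue; } catch (NumberFormatException e) { kind = Kind.DOUBLE; }
      }
      if (kind == Kind.DOUBLE) {
        try { Double.parseDouble(cell); continue; } catch (NumberFormatException e) { kind = Kind.STRING; }
      }
      if (kind == Kind.STRING) return Kind.STRING; // no further fallback
    }
    return kind;
  }

  public static void main(String[] args) {
    System.out.println(inferKind(List.of("1", "2", "3")));     // LONG
    System.out.println(inferKind(List.of("1", "2.5")));        // DOUBLE
    System.out.println(inferKind(List.of("1", "2.5", "abc"))); // STRING
  }
}
```

The real builder additionally kept the raw strings around so that a late fallback to text could replay already-consumed values without reparsing the input.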
@@ -13,6 +13,8 @@ import org.enso.table.data.column.operation.map.MapOperation;
 import org.enso.table.data.column.operation.map.MapOperationProblemBuilder;
 import org.enso.table.data.column.operation.map.UnaryMapOperation;
 import org.enso.table.data.column.operation.map.bool.BooleanIsInOp;
+import org.enso.table.data.column.storage.type.BooleanType;
+import org.enso.table.data.column.storage.type.StorageType;
 import org.enso.table.data.index.Index;
 import org.enso.table.data.mask.OrderMask;
 import org.enso.table.data.mask.SliceRange;
@@ -55,8 +57,8 @@ public final class BoolStorage extends Storage<Boolean> {
   }
 
   @Override
-  public int getType() {
-    return Type.BOOL;
+  public StorageType getType() {
+    return BooleanType.INSTANCE;
   }
 
   @Override

@@ -5,8 +5,9 @@ import java.time.LocalDate;
 import org.enso.table.data.column.builder.object.Builder;
 import org.enso.table.data.column.builder.object.DateBuilder;
 import org.enso.table.data.column.operation.map.MapOpStorage;
-import org.enso.table.data.column.operation.map.SpecializedIsInOp;
 import org.enso.table.data.column.operation.map.datetime.DateTimeIsInOp;
+import org.enso.table.data.column.storage.type.DateType;
+import org.enso.table.data.column.storage.type.StorageType;
 
 public final class DateStorage extends SpecializedStorage<LocalDate> {
   /**
@@ -36,8 +37,8 @@ public final class DateStorage extends SpecializedStorage<LocalDate> {
   }
 
   @Override
-  public int getType() {
-    return Type.DATE;
+  public StorageType getType() {
+    return DateType.INSTANCE;
   }
 
   @Override

@@ -3,8 +3,9 @@ package org.enso.table.data.column.storage;
 import org.enso.table.data.column.builder.object.Builder;
 import org.enso.table.data.column.builder.object.DateTimeBuilder;
 import org.enso.table.data.column.operation.map.MapOpStorage;
-import org.enso.table.data.column.operation.map.SpecializedIsInOp;
 import org.enso.table.data.column.operation.map.datetime.DateTimeIsInOp;
+import org.enso.table.data.column.storage.type.DateTimeType;
+import org.enso.table.data.column.storage.type.StorageType;
 
 import java.time.ZonedDateTime;
 
@@ -38,8 +39,8 @@ public final class DateTimeStorage extends SpecializedStorage<ZonedDateTime> {
   }
 
   @Override
-  public int getType() {
-    return Type.DATE_TIME;
+  public StorageType getType() {
+    return DateTimeType.INSTANCE;
   }
 
   @Override

@@ -11,6 +11,8 @@ import org.enso.table.data.column.operation.map.UnaryMapOperation;
 import org.enso.table.data.column.operation.map.numeric.DoubleBooleanOp;
 import org.enso.table.data.column.operation.map.numeric.DoubleIsInOp;
 import org.enso.table.data.column.operation.map.numeric.DoubleNumericOp;
+import org.enso.table.data.column.storage.type.FloatType;
+import org.enso.table.data.column.storage.type.StorageType;
 import org.enso.table.data.index.Index;
 import org.enso.table.data.mask.OrderMask;
 import org.enso.table.data.mask.SliceRange;
@@ -73,8 +75,8 @@ public final class DoubleStorage extends NumericStorage<Double> {
 
   /** @inheritDoc */
   @Override
-  public int getType() {
-    return Type.DOUBLE;
+  public StorageType getType() {
+    return FloatType.FLOAT_64;
   }
 
   /** @inheritDoc */
@@ -1,8 +1,5 @@
 package org.enso.table.data.column.storage;
 
-import java.util.BitSet;
-import java.util.List;
-
 import org.enso.table.data.column.builder.object.Builder;
 import org.enso.table.data.column.builder.object.NumericBuilder;
 import org.enso.table.data.column.operation.map.MapOpStorage;
@@ -11,13 +8,21 @@ import org.enso.table.data.column.operation.map.UnaryMapOperation;
 import org.enso.table.data.column.operation.map.numeric.LongBooleanOp;
 import org.enso.table.data.column.operation.map.numeric.LongIsInOp;
 import org.enso.table.data.column.operation.map.numeric.LongNumericOp;
+import org.enso.table.data.column.storage.type.IntegerType;
+import org.enso.table.data.column.storage.type.StorageType;
 import org.enso.table.data.index.Index;
 import org.enso.table.data.mask.OrderMask;
 import org.enso.table.data.mask.SliceRange;
 import org.graalvm.polyglot.Value;
 
+import java.util.BitSet;
+import java.util.List;
+
 /** A column storing 64-bit integers. */
 public final class LongStorage extends NumericStorage<Long> {
+  // TODO [RW] at some point we will want to add separate storage classes for byte, short and int,
+  // for more compact storage and more efficient handling of smaller integers; for now we will be
+  // handling this just by checking the bounds
   private final long[] data;
   private final BitSet isMissing;
   private final int size;
@@ -77,8 +82,9 @@ public final class LongStorage extends NumericStorage<Long> {
 
   /** @inheritDoc */
   @Override
-  public int getType() {
-    return Type.LONG;
+  public StorageType getType() {
+    // TODO add possibility to set integer bit limit
+    return IntegerType.INT_64;
   }
 
   /** @inheritDoc */
@@ -0,0 +1,102 @@
+package org.enso.table.data.column.storage;
+
+import org.enso.table.data.column.builder.object.Builder;
+import org.enso.table.data.column.operation.map.MapOperationProblemBuilder;
+import org.enso.table.data.column.storage.type.AnyObjectType;
+import org.enso.table.data.column.storage.type.StorageType;
+import org.enso.table.data.mask.OrderMask;
+import org.enso.table.data.mask.SliceRange;
+
+import java.util.BitSet;
+import java.util.List;
+
+/**
+ * Wraps a storage of any type and alters its reported storage to be of type AnyObject.
+ *
+ * <p>This is used to ensure that we can change a column's type to Mixed without changing its
+ * underlying storage unnecessarily.
+ */
+public class MixedStorageFacade extends Storage<Object> {
+  private final Storage<?> underlyingStorage;
+
+  public MixedStorageFacade(Storage<?> storage) {
+    underlyingStorage = storage;
+  }
+
+  @Override
+  public int size() {
+    return underlyingStorage.size();
+  }
+
+  @Override
+  public int countMissing() {
+    return underlyingStorage.countMissing();
+  }
+
+  @Override
+  public StorageType getType() {
+    return AnyObjectType.INSTANCE;
+  }
+
+  @Override
+  public boolean isNa(long idx) {
+    return underlyingStorage.isNa(idx);
+  }
+
+  @Override
+  public Object getItemBoxed(int idx) {
+    return underlyingStorage.getItemBoxed(idx);
+  }
+
+  @Override
+  public boolean isOpVectorized(String name) {
+    return underlyingStorage.isOpVectorized(name);
+  }
+
+  @Override
+  protected Storage<?> runVectorizedMap(
+      String name, Object argument, MapOperationProblemBuilder problemBuilder) {
+    return underlyingStorage.runVectorizedMap(name, argument, problemBuilder);
+  }
+
+  @Override
+  protected Storage<?> runVectorizedZip(
+      String name, Storage<?> argument, MapOperationProblemBuilder problemBuilder) {
+    return underlyingStorage.runVectorizedZip(name, argument, problemBuilder);
+  }
+
+  @Override
+  public Storage<Object> mask(BitSet mask, int cardinality) {
+    Storage<?> newStorage = underlyingStorage.mask(mask, cardinality);
+    return new MixedStorageFacade(newStorage);
+  }
+
+  @Override
+  public Storage<Object> applyMask(OrderMask mask) {
+    Storage<?> newStorage = underlyingStorage.applyMask(mask);
+    return new MixedStorageFacade(newStorage);
+  }
+
+  @Override
+  public Storage<Object> countMask(int[] counts, int total) {
+    Storage<?> newStorage = underlyingStorage.countMask(counts, total);
+    return new MixedStorageFacade(newStorage);
+  }
+
+  @Override
+  public Storage<Object> slice(int offset, int limit) {
+    Storage<?> newStorage = underlyingStorage.slice(offset, limit);
+    return new MixedStorageFacade(newStorage);
+  }
+
+  @Override
+  public Builder createDefaultBuilderOfSameType(int capacity) {
+    throw new UnsupportedOperationException("TODO");
+  }
+
+  @Override
+  public Storage<Object> slice(List<SliceRange> ranges) {
+    Storage<?> newStorage = underlyingStorage.slice(ranges);
+    return new MixedStorageFacade(newStorage);
+  }
+}
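The new `MixedStorageFacade` above wraps an arbitrary storage and overrides only the reported type, delegating everything else. A tiny sketch of the same facade idea (hypothetical, heavily simplified interfaces — not the real `Storage` API):

```java
public class FacadeDemo {
  interface Storage { String type(); Object item(int i); int size(); }

  record LongColumn(long[] data) implements Storage {
    public String type() { return "Integer"; }
    public Object item(int i) { return data[i]; }
    public int size() { return data.length; }
  }

  record MixedFacade(Storage underlying) implements Storage {
    public String type() { return "Mixed"; }                  // only the reported type changes
    public Object item(int i) { return underlying.item(i); }  // data access is delegated
    public int size() { return underlying.size(); }
  }

  public static void main(String[] args) {
    Storage longs = new LongColumn(new long[] {1, 2, 3});
    Storage mixed = new MixedFacade(longs);
    System.out.println(mixed.type()); // Mixed
    System.out.println(mixed.item(0)); // 1
  }
}
```

Because the wrapper holds the original storage by reference, retyping a column to Mixed costs O(1) and copies no data.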
@@ -6,6 +6,8 @@ import org.enso.table.data.column.builder.object.Builder;
 import org.enso.table.data.column.builder.object.ObjectBuilder;
 import org.enso.table.data.column.operation.map.MapOpStorage;
 import org.enso.table.data.column.operation.map.UnaryMapOperation;
+import org.enso.table.data.column.storage.type.AnyObjectType;
+import org.enso.table.data.column.storage.type.StorageType;
 
 /** A column storing arbitrary objects. */
 public final class ObjectStorage extends SpecializedStorage<Object> {
@@ -28,8 +30,8 @@ public final class ObjectStorage extends SpecializedStorage<Object> {
   }
 
   @Override
-  public int getType() {
-    return Type.OBJECT;
+  public StorageType getType() {
+    return AnyObjectType.INSTANCE;
   }
 
   @Override

@@ -4,6 +4,7 @@ import java.util.BitSet;
 import java.util.List;
 import org.enso.table.data.column.operation.map.MapOpStorage;
 import org.enso.table.data.column.operation.map.MapOperationProblemBuilder;
+import org.enso.table.data.column.storage.type.StorageType;
 import org.enso.table.data.index.Index;
 import org.enso.table.data.mask.OrderMask;
 import org.enso.table.data.mask.SliceRange;
@@ -15,7 +16,7 @@ public abstract class SpecializedStorage<T> extends Storage<T> {
   protected abstract T[] newUnderlyingArray(int size);
 
   @Override
-  public abstract int getType();
+  public abstract StorageType getType();
 
   /**
    * @param data the underlying data

@@ -10,6 +10,7 @@ import org.enso.table.data.column.builder.object.Builder;
 import org.enso.table.data.column.builder.object.InferredBuilder;
 import org.enso.table.data.column.builder.object.ObjectBuilder;
 import org.enso.table.data.column.operation.map.MapOperationProblemBuilder;
+import org.enso.table.data.column.storage.type.StorageType;
 import org.enso.table.data.mask.OrderMask;
 import org.enso.table.data.mask.SliceRange;
 import org.graalvm.polyglot.Value;
@@ -22,8 +23,8 @@ public abstract class Storage<T> {
   /** @return the number of NA elements in this column */
   public abstract int countMissing();
 
-  /** @return the type tag of this column's storage. Must be one of {@link Type} */
-  public abstract int getType();
+  /** @return the type tag of this column's storage. */
+  public abstract StorageType getType();
 
   /**
    * Checks whether the value at {@code idx} is missing.
@@ -41,24 +42,6 @@ public abstract class Storage<T> {
    */
   public abstract T getItemBoxed(int idx);
 
-  /**
-   * Enumerating possible storage types.
-   *
-   * <p>Keep in sync with variables in {@code Standard.Table.Data.Column}. These variables are
-   * copied between Enso and Java code, in order to make them trivially constant on the Enso side,
-   * without invoking the polyglot machinery to access them.
-   */
-  public static final class Type {
-    public static final int OBJECT = 0;
-    public static final int LONG = 1;
-    public static final int DOUBLE = 2;
-    public static final int STRING = 3;
-    public static final int BOOL = 4;
-    public static final int DATE = 5;
-    public static final int TIME_OF_DAY = 6;
-    public static final int DATE_TIME = 7;
-  }
-
 /** A container for names of vectorizable operation. */
   public static final class Maps {
     public static final String EQ = "==";
@@ -262,8 +245,8 @@ public abstract class Storage<T> {
    * counts[i]}.
    *
    * @param counts the mask specifying elements duplication
-   * @param total the sum of all elements in the mask, also interpreted as the size of the resulting
-   *     storage
+   * @param total the sum of all elements in the mask, also interpreted as the length of the
+   *     resulting storage
    * @return the storage masked according to the specified rules
    */
   public abstract Storage<T> countMask(int[] counts, int total);
|
|||||||
import org.enso.table.data.column.operation.map.text.LikeOp;
|
import org.enso.table.data.column.operation.map.text.LikeOp;
|
||||||
import org.enso.table.data.column.operation.map.text.StringBooleanOp;
|
import org.enso.table.data.column.operation.map.text.StringBooleanOp;
|
||||||
import org.enso.table.data.column.operation.map.text.StringIsInOp;
|
import org.enso.table.data.column.operation.map.text.StringIsInOp;
|
||||||
|
import org.enso.table.data.column.storage.type.StorageType;
|
||||||
|
import org.enso.table.data.column.storage.type.TextType;
|
||||||
import org.graalvm.polyglot.Value;
|
import org.graalvm.polyglot.Value;
|
||||||
|
|
||||||
/** A column storing strings. */
|
/** A column storing strings. */
|
||||||
@ -36,8 +38,9 @@ public final class StringStorage extends SpecializedStorage<String> {
|
|||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public int getType() {
|
public StorageType getType() {
|
||||||
return Type.STRING;
|
// TODO [RW] constant length strings support
|
||||||
|
return TextType.VARIABLE_LENGTH;
|
||||||
}
|
}
|
||||||
|
|
||||||
private static final MapOpStorage<String, SpecializedStorage<String>> ops = buildOps();
|
private static final MapOpStorage<String, SpecializedStorage<String>> ops = buildOps();
|
||||||
|
@ -5,8 +5,9 @@ import java.time.LocalTime;
|
|||||||
import org.enso.table.data.column.builder.object.Builder;
|
import org.enso.table.data.column.builder.object.Builder;
|
||||||
import org.enso.table.data.column.builder.object.TimeOfDayBuilder;
|
import org.enso.table.data.column.builder.object.TimeOfDayBuilder;
|
||||||
import org.enso.table.data.column.operation.map.MapOpStorage;
|
import org.enso.table.data.column.operation.map.MapOpStorage;
|
||||||
import org.enso.table.data.column.operation.map.SpecializedIsInOp;
|
|
||||||
import org.enso.table.data.column.operation.map.datetime.DateTimeIsInOp;
|
import org.enso.table.data.column.operation.map.datetime.DateTimeIsInOp;
|
||||||
|
import org.enso.table.data.column.storage.type.StorageType;
|
||||||
|
import org.enso.table.data.column.storage.type.TimeOfDayType;
|
||||||
|
|
||||||
public final class TimeOfDayStorage extends SpecializedStorage<LocalTime> {
|
public final class TimeOfDayStorage extends SpecializedStorage<LocalTime> {
|
||||||
/**
|
/**
|
||||||
@ -36,8 +37,8 @@ public final class TimeOfDayStorage extends SpecializedStorage<LocalTime> {
|
|||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public int getType() {
|
public StorageType getType() {
|
||||||
return Type.TIME_OF_DAY;
|
return TimeOfDayType.INSTANCE;
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
|
@@ -0,0 +1,5 @@
+package org.enso.table.data.column.storage.type;
+
+public record AnyObjectType() implements StorageType {
+  public static final AnyObjectType INSTANCE = new AnyObjectType();
+}

@@ -0,0 +1,22 @@
+package org.enso.table.data.column.storage.type;
+
+/**
+ * Represents sizes for some of our storages.
+ *
+ * <p>This corresponds to the Enso type {@code Bits}.
+ */
+public enum Bits {
+  BITS_8(8),
+  BITS_16(16),
+  BITS_32(32),
+  BITS_64(64);
+  private final int size;
+
+  Bits(int size) {
+    this.size = size;
+  }
+
+  public int toInteger() {
+    return this.size;
+  }
+}

@@ -0,0 +1,5 @@
+package org.enso.table.data.column.storage.type;
+
+public record BooleanType() implements StorageType {
+  public static final BooleanType INSTANCE = new BooleanType();
+}

@@ -0,0 +1,5 @@
+package org.enso.table.data.column.storage.type;
+
+public record DateTimeType() implements StorageType {
+  public static final DateTimeType INSTANCE = new DateTimeType();
+}

@@ -0,0 +1,5 @@
+package org.enso.table.data.column.storage.type;
+
+public record DateType() implements StorageType {
+  public static final DateType INSTANCE = new DateType();
+}

@@ -0,0 +1,11 @@
+package org.enso.table.data.column.storage.type;
+
+public record FloatType(Bits bits) implements StorageType {
+  public static final FloatType FLOAT_64 = new FloatType(Bits.BITS_64);
+
+  public FloatType {
+    if (bits != Bits.BITS_64) {
+      throw new IllegalArgumentException("Only 64-bit floats are currently supported.");
+    }
+  }
+}
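The `FloatType` file above uses a record compact constructor to reject widths other than 64-bit. A standalone sketch of that validation pattern (hypothetical stand-ins, not the Enso classes):

```java
public class ValidationDemo {
  enum Bits { BITS_32, BITS_64 }

  record FloatKind(Bits bits) {
    // Compact constructor: runs before field assignment, so every constructed
    // instance is guaranteed to satisfy the invariant.
    FloatKind {
      if (bits != Bits.BITS_64) {
        throw new IllegalArgumentException("Only 64-bit floats are currently supported.");
      }
    }
  }

  public static void main(String[] args) {
    FloatKind ok = new FloatKind(Bits.BITS_64); // fine
    System.out.println(ok.bits());
    try {
      new FloatKind(Bits.BITS_32);              // rejected by the compact constructor
    } catch (IllegalArgumentException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```

Keeping the constant `FLOAT_64` alongside the guard means existing callers get a shared, pre-validated instance while the door stays open for other widths later.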
@@ -0,0 +1,5 @@
+package org.enso.table.data.column.storage.type;
+
+public record IntegerType(Bits bits) implements StorageType {
+  public static final IntegerType INT_64 = new IntegerType(Bits.BITS_64);
+}

@@ -0,0 +1,7 @@
+package org.enso.table.data.column.storage.type;
+
+/**
+ * Represents an underlying internal storage type that can be mapped to the Value Type that is exposed to users.
+ */
+public sealed interface StorageType permits AnyObjectType, BooleanType, DateType, DateTimeType, FloatType, IntegerType, TextType, TimeOfDayType {
+}

@@ -0,0 +1,5 @@
+package org.enso.table.data.column.storage.type;
+
+public record TextType(long maxLength, boolean fixedLength) implements StorageType {
+  public static final TextType VARIABLE_LENGTH = new TextType(-1, false);
+}

@@ -0,0 +1,5 @@
+package org.enso.table.data.column.storage.type;
+
+public record TimeOfDayType() implements StorageType {
+  public static final TimeOfDayType INSTANCE = new TimeOfDayType();
+}
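`StorageType` above is a sealed interface, so code that maps internal storage types to user-facing Value Types can cover the whole hierarchy explicitly. A sketch (a hypothetical mirror of the hierarchy with made-up value-type names, not Enso's actual mapping) using pattern-matching `instanceof`:

```java
public class SealedDemo {
  // A small stand-in for the sealed StorageType hierarchy added in this PR.
  sealed interface StorageType permits BooleanType, IntegerType, TextType {}
  record BooleanType() implements StorageType {}
  record IntegerType(int bits) implements StorageType {}
  record TextType(long maxLength, boolean fixedLength) implements StorageType {}

  static String toValueTypeName(StorageType t) {
    if (t instanceof BooleanType) return "Boolean";
    if (t instanceof IntegerType i) return "Integer (" + i.bits() + "-bit)";
    if (t instanceof TextType txt) return txt.fixedLength() ? "Fixed-length text" : "Text";
    throw new IllegalStateException("unreachable: hierarchy is sealed");
  }

  public static void main(String[] args) {
    System.out.println(toValueTypeName(new IntegerType(64)));     // Integer (64-bit)
    System.out.println(toValueTypeName(new TextType(-1, false))); // Text
  }
}
```

Because the interface is sealed, adding a new storage type forces every such mapping site to be revisited, which is exactly the property a dialect-dependent `SQL_Type` ↔ `Value_Type` mapping needs.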
Some files were not shown because too many files have changed in this diff.