This PR replaces most of the string newtypes in the open-dds crate with
new implementations based on `SmolStr`, using a macro adapted from the
macro used in ndc-models. `SmolStr` usage should result in fewer heap
allocations, and the macro ensures that all the newtypes share the same
trait implementations (such as various `From` and `Borrow` instances)
and helper functions (such as `as_str`).
The new macro, `str_newtype`, creates a newtype wrapper around `SmolStr`
or another type (typically `Identifier`), and writes out all the various
trait implementations. It also takes a documentation string, which
ensures that every newtype has a description in the generated JSON Schema.
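As a rough illustration only, here is a minimal sketch of what such a macro might look like, assuming a plain `SmolStr` wrapper (the real `str_newtype` also supports wrapping other types such as `Identifier` and derives the serde/schemars impls):

```rust
use smol_str::SmolStr;

// Minimal sketch of a `str_newtype`-style macro; the real one in open-dds
// generates more impls (serde, schemars, etc.) and can wrap types other
// than SmolStr.
macro_rules! str_newtype {
    ($name:ident, $doc:literal) => {
        #[doc = $doc]
        #[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
        pub struct $name(SmolStr);

        impl $name {
            pub fn as_str(&self) -> &str {
                self.0.as_str()
            }
        }

        impl From<&str> for $name {
            fn from(s: &str) -> Self {
                Self(SmolStr::new(s))
            }
        }

        impl std::borrow::Borrow<str> for $name {
            fn borrow(&self) -> &str {
                self.0.as_str()
            }
        }

        impl std::fmt::Display for $name {
            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
                write!(f, "{}", self.0)
            }
        }
    };
}

// The documentation string doubles as the JSON Schema description once a
// schemars derive is attached.
str_newtype!(CollectionName, "The name of an ndc collection that a model points to.");
```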
The changes in the downstream crates are just adjustments to use the new
newtypes.
Also:
* `QueryGraphqlConfig` used a lot of `String`-typed properties; these have
  been changed to the appropriate `GraphQlTypeName` and
  `GraphQlFieldName` types (illustrated in the sketch after this list).
* metadata-resolve had its own `ConnectorArgumentName` newtype, which
  duplicated open-dds's `DataConnectorArgumentName`; the former has been
  removed and replaced with the latter.
* A new newtype called `CollectionName` has been added in open-dds to
represent the name of the ndc collection that a model points to. This
was previously just a string.
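For instance, an illustrative before/after for the `QueryGraphqlConfig` change (the fields shown are hypothetical stand-ins, not the crate's actual field list):

```rust
use smol_str::SmolStr;

// Hypothetical stand-ins for the open-dds newtypes.
pub struct GraphQlTypeName(SmolStr);
pub struct GraphQlFieldName(SmolStr);

pub struct QueryGraphqlConfig {
    // before: pub root_operation_type_name: String,
    pub root_operation_type_name: GraphQlTypeName,
    // before: pub arguments_field_name: String,
    pub arguments_field_name: GraphQlFieldName,
}
```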
V3_GIT_ORIGIN_REV_ID: d5a33756ef0e110b2255478358a38e42e6ff89a2
This PR updates ndc-models to the latest version on main. This version
is still a 0.1.x version, but it now includes all the [new
newtypes](https://github.com/hasura/ndc-spec/pull/156) that wrap
previously stringly-typed things. For example, `ArgumentName`,
`FieldName`, etc.
This change pervades the entire engine, but thankfully the changes are
mostly mechanical and repetitive. Usually you will see conversions from
`String`-typed variables into the newtypes in this sort of form:
`FieldName::from(string.as_str())`, which is the most efficient way of
copying the value (the `str` slice is copied directly). Or you will see
the newtype used as a raw string by `.as_str()`-ing it. Converting a
newtype into a `String` can be done with `.into()` if it is owned; if it
is referenced, `.as_str().to_owned()` performs the clone and type
conversion. These patterns are collected in the sketch below.
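A hedged, self-contained illustration of those conversion patterns (`FieldName` here is a local stand-in for the ndc-models newtype, not the real definition):

```rust
use smol_str::SmolStr;

// Local stand-in for an ndc-models newtype, with just enough impls to show
// the conversion patterns described above.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct FieldName(SmolStr);

impl FieldName {
    pub fn as_str(&self) -> &str {
        self.0.as_str()
    }
}

impl From<&str> for FieldName {
    fn from(s: &str) -> Self {
        Self(SmolStr::new(s))
    }
}

impl From<FieldName> for String {
    fn from(name: FieldName) -> Self {
        name.0.as_str().to_owned()
    }
}

fn main() {
    let title: String = "title".to_string();

    // String -> newtype: the &str slice is copied straight into the
    // SmolStr (no heap allocation at all for small strings).
    let field = FieldName::from(title.as_str());

    // Newtype as a raw string: borrow it with .as_str().
    assert_eq!(field.as_str(), "title");

    // Referenced newtype -> String: clone plus type conversion.
    let from_ref: String = field.as_str().to_owned();

    // Owned newtype -> String via .into().
    let from_owned: String = field.into();
    assert_eq!(from_ref, from_owned);
}
```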
Other changes:
* A few minor instances of `ok_or()` usages (or similar) have been
  converted to their lazy error-construction variants (e.g. `ok_or_else()`);
  see the sketch below.
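In miniature (a hypothetical lookup, not code from the PR): `ok_or` builds its error eagerly even on the success path, while `ok_or_else` defers construction to the failure path.

```rust
use std::collections::BTreeMap;

// before: .ok_or(format!(...)) allocated the error string even when the
// key was present; .ok_or_else only runs the closure on the None path.
fn lookup(map: &BTreeMap<String, i32>, key: &str) -> Result<i32, String> {
    map.get(key)
        .copied()
        .ok_or_else(|| format!("key {key} not found"))
}
```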
V3_GIT_ORIGIN_REV_ID: 64a371ae6197ef3be98a6f7cdc4052d654a43da0
This PR adds code that handles multiple versions of ndc-models in the
execution pipeline. Depending on whether a connector supports `v01` or
`v02` models, a different set of types is used to send and receive the
HTTP requests.
However, the engine internally still uses the latest (v02) models inside
its IR. Unfortunately, it was going to be quite traumatic to prevent the
engine from using ndc-models inside the IR and during response
processing and remote joins. Instead, the engine generates v02 requests
and downgrades them to v01 requests where necessary; v01 responses are
upgraded to v02 responses before the engine processes them.
The ndc client (`execute::ndc::client`) now only takes new wrapper enum
types (`execute::ndc::types`) that distinguish between v01 and v02
requests/responses. Every place that used to carry an ndc request now
carries this type instead, so it can hold either a v01 or a v02 request.
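A hedged sketch of the idea, with stub modules standing in for the two vendored ndc-models versions (names are illustrative):

```rust
// Stand-ins for the two vendored ndc-models versions.
mod v01 {
    pub struct QueryRequest;
    pub struct QueryResponse;
}
mod v02 {
    pub struct QueryRequest;
    pub struct QueryResponse;
}

/// Anything that used to carry an ndc request now carries this wrapper,
/// so it can hold either protocol version.
pub enum NdcQueryRequest {
    V01(v01::QueryRequest),
    V02(v02::QueryRequest),
}

pub enum NdcQueryResponse {
    V01(v01::QueryResponse),
    V02(v02::QueryResponse),
}
```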
When ndc requests are created during planning, all creation goes via the
new `execute::plan::ndc_request` module. This inspects the connector's
supported version, creates the necessary request, and, if needed,
downgrades it to v01.
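A hypothetical sketch of that flow (names are illustrative, and the real module handles all request kinds, not just queries):

```rust
// Stub modules standing in for the two ndc-models versions.
mod v01 {
    pub struct QueryRequest;
}
mod v02 {
    pub struct QueryRequest;
}

/// Wrapper enum as sketched above.
pub enum NdcQueryRequest {
    V01(v01::QueryRequest),
    V02(v02::QueryRequest),
}

/// Which protocol version the connector supports.
pub enum NdcVersion {
    V01,
    V02,
}

pub struct DowngradeError;

// Stand-in for the real downgrade in `execute::ndc::migration`; it returns
// a Result because downgrading can fail for v02-only features.
fn downgrade_query_request(
    _request: v02::QueryRequest,
) -> Result<v01::QueryRequest, DowngradeError> {
    Ok(v01::QueryRequest)
}

/// Requests are always generated as v02 and downgraded only when the
/// connector speaks v01.
pub fn make_query_request(
    version: NdcVersion,
    request: v02::QueryRequest,
) -> Result<NdcQueryRequest, DowngradeError> {
    match version {
        NdcVersion::V02 => Ok(NdcQueryRequest::V02(request)),
        NdcVersion::V01 => Ok(NdcQueryRequest::V01(downgrade_query_request(request)?)),
    }
}
```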
When ndc responses are read during planning or during remote joins, they
are upgraded to v02 via helper functions defined on the types in
`execute::ndc::types`.
The upgrade/downgrade code is located in `execute::ndc::migration`. Keep
in mind that the "v02" types are currently identical to the "v01" types,
so the migration code does very little. This will change as the v02
types are modified.
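A hedged sketch covering both the helper functions and the migration code, under the (currently true) assumption that the v01 and v02 shapes still match; all names are illustrative:

```rust
// Stub modules standing in for the two ndc-models versions; today the v02
// shapes still mirror v01.
mod v01 {
    pub struct QueryResponse(pub Vec<String>);
}
mod v02 {
    pub struct QueryResponse(pub Vec<String>);
}

mod migration {
    use super::{v01, v02};

    // While the types match, the upgrade is a plain rebuild of the data
    // structure; this is also where the small runtime cost comes from.
    pub fn upgrade_query_response(response: v01::QueryResponse) -> v02::QueryResponse {
        v02::QueryResponse(response.0)
    }
}

pub enum NdcQueryResponse {
    V01(v01::QueryResponse),
    V02(v02::QueryResponse),
}

impl NdcQueryResponse {
    /// Hypothetical helper: give the engine a v02 view of either response.
    pub fn into_latest(self) -> v02::QueryResponse {
        match self {
            NdcQueryResponse::V01(response) => migration::upgrade_query_response(response),
            NdcQueryResponse::V02(response) => response,
        }
    }
}
```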
However, this approach has its drawbacks:

* It prevents changes to the ndc types [like
  this](https://github.com/hasura/ndc-spec/pull/158) without a fair bit of
  pain (see
  [comment](https://github.com/hasura/ndc-spec/pull/158#issuecomment-2202127094)).
* The downgrade code can fail at runtime, and it is not immediately
  obvious to developers using new, but as-yet-unused, v02 features that
  their feature would fail on v01, because the mapping to v01 has already
  been written, so nothing prompts them to consider it at compile time.
* We're paying some (small, probably?) performance cost by upgrading and
  downgrading types, because we need to rebuild the data structures.
Also:
* `execute::ndc::response` has been merged into `execute::ndc::client`,
  since the two were inextricably linked.
V3_GIT_ORIGIN_REV_ID: f3f36736b52058323d789c378fed06af566f39a3
We shouldn't be replacing `TableScan`s for the `information_schema` and
`hasura` schemas. Previously we only had a check for the `hasura`
schema; this PR adds a check for `information_schema` as well, so
queries on `information_schema` will now work.
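The shape of the fix, as a hedged sketch (the helper name is illustrative, not the actual code):

```rust
// Leave TableScans on both built-in schemas alone, not just `hasura`.
fn should_replace_table_scan(schema_name: &str) -> bool {
    // before: schema_name != "hasura"
    !matches!(schema_name, "hasura" | "information_schema")
}
```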
V3_GIT_ORIGIN_REV_ID: b25276556027b52ff940ddd3d094ea20f6fc7538
Adds a very experimental SQL interface to v3-engine for GenAI use cases.
---------
Co-authored-by: Abhinav Gupta <127770473+abhinav-hasura@users.noreply.github.com>
Co-authored-by: Gil Mizrahi <gil@gilmi.net>
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
V3_GIT_ORIGIN_REV_ID: 077779ec4e7843abdffdac1ed6aa655210649b93