### Description
This PR attempts to fix several casing issues with source customization as it relates to remote relationships: at the relationship border, we didn't properly set the target source's case, we didn't have access to the list of supported features to decide whether the feature was allowed, and we didn't have access to the global default.
However, all of that information is available when we build the schema cache, as we do resolve the case of some elements such as function names: we can therefore resolve source information at the same time, and simplify both the root of the schema and the remote relationship border.
To do this, the PR introduces a new type, `ResolvedSourceCustomization`, to be used in the schema cache, as opposed to the metadata's `SourceCustomization`, following a pattern already established by several other types.
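For illustration, here is a minimal sketch of that metadata-vs-resolved split; the type and field names are hypothetical stand-ins, not the actual definitions:
```haskell
module SourceCustomizationSketch where

import Data.Text (Text)

-- Hypothetical stand-ins, to keep the sketch self-contained.
data NamingCase = HasuraCase | GraphqlCase

newtype MkRootFieldName = MkRootFieldName (Text -> Text)

newtype MkTypename = MkTypename (Text -> Text)

-- Metadata side: raw user input, everything optional, with no knowledge of
-- the global default or of the features the source supports.
data SourceCustomization = SourceCustomization
  { _scRootFields :: Maybe MkRootFieldName,
    _scTypeNames :: Maybe MkTypename,
    _scNamingConvention :: Maybe NamingCase
  }

-- Schema cache side: defaults applied and feature support checked while
-- building the schema cache, so the schema code can consume it directly,
-- including at the remote relationship border.
data ResolvedSourceCustomization = ResolvedSourceCustomization
  { _rscRootFields :: MkRootFieldName,
    _rscTypeNames :: MkTypename,
    _rscNamingConvention :: NamingCase
  }
```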
### Remaining work and open questions
One major point of confusion: it seems to me that we didn't set the case at all across remote relationships, which suggests that the LHS source's case was applied to the subset of the RHS source accessible through the remote relationship; this would in turn "corrupt" the parser cache and might result in the wrong case being used for that source later on. Is that assessment correct, and was I right to fix it?
Another one: we seem not to be using the local case of the RHS to name the field in an object relationship; unless I'm mistaken, we only use it for array relationships? Is that intentional?
This PR is also missing tests that would showcase the difference, and a changelog entry. To my knowledge, all the tests of this feature are in the python test suite; this could be the opportunity to move them to the hspec suite, but that might be a considerable amount of work?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5619
GitOrigin-RevId: 51a81b713a74575e82d9f96b51633f158ce3a47b
### Description
This PR changes all the schema code to operate in a specific `SchemaT` monad, rather than in an arbitrary `m` monad. `SchemaT` is intended to be used opaquely with `runSourceSchema` and `runRemoteSchema`. The main goal of this is to allow a different reader context per part of the schema: this PR also minimizes the contexts. This means that we no longer require `SchemaOptions` when building remote schemas' schema, and this PR therefore removes a lot of dummy / placeholder values accordingly.
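As a rough sketch of the intended shape (the real definition, context types, and runner signatures differ; this only illustrates the opaque-runner idea):
```haskell
{-# LANGUAGE DerivingStrategies #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

module SchemaTSketch where

import Control.Monad.Reader (ReaderT (..), asks)

-- Hypothetical per-block contexts; the real ones carry schema options,
-- role information, source customization, etc.
data SourceContext = SourceContext

data RemoteContext = RemoteContext

-- Schema-building code only ever sees an abstract SchemaT r m; the reader
-- is entered and exited through the runners below.
newtype SchemaT r m a = SchemaT (ReaderT r m a)
  deriving newtype (Functor, Applicative, Monad)

-- Read a piece of the current context from within schema-building code.
retrieve :: Monad m => (r -> a) -> SchemaT r m a
retrieve f = SchemaT (asks f)

runSourceSchema :: SourceContext -> SchemaT SourceContext m a -> m a
runSourceSchema ctx (SchemaT action) = runReaderT action ctx

runRemoteSchema :: RemoteContext -> SchemaT RemoteContext m a -> m a
runRemoteSchema ctx (SchemaT action) = runReaderT action ctx
```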
### Performance and stacking
This PR has been through several iterations. #5339 was the original version, which accomplished the same thing by pushing a new reader onto the monad stack at every remote relationship boundary. This raised performance concerns, and @0x777 confirmed with an ad-hoc test that in some extreme cases we could see up to a 10% performance impact. This version, while more verbose, allows us to unstack / re-stack the readers and avoid that problem. #5517 adds a new benchmark set so that we can automatically measure this on every PR.
### Remaining work
- [x] a comment (or perhaps even a Note?) should be added to `SchemaT`
- [x] we probably want #5517 to be merged first so that we can confirm the lack of performance penalty
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5458
GitOrigin-RevId: e06b83d90da475f745b838f1fd8f8b4d9d3f4b10
This includes TH.Lift instances.
I am motivated to make this change because `unordered-containers` is set to either v0.2.17.0 or v0.2.19.1 in nixpkgs-unstable.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5620
GitOrigin-RevId: 7fd3024fdbf6a948adbdf5f4187d47d5da9acbda
This PR expands the OpenAPI specification generated for metadata to include separate definitions for `SourceMetadata` for each native database type, and for DataConnector.
For the most part the changes add `HasCodec` implementations, and don't modify existing code otherwise.
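For flavour, a minimal, hypothetical example of the kind of `HasCodec` instance being added; the real `SourceMetadata` codecs are parameterized by backend and considerably richer, and the field names below are invented:
```haskell
{-# LANGUAGE OverloadedStrings #-}

module SourceMetadataCodecSketch where

import Autodocodec
import Data.Text (Text)

-- A toy stand-in for one backend's source metadata.
data PostgresSourceMetadata = PostgresSourceMetadata
  { psmName :: Text,
    psmConfiguration :: Text
  }

instance HasCodec PostgresSourceMetadata where
  codec =
    object "PostgresSourceMetadata" $
      PostgresSourceMetadata
        <$> requiredField "name" "the source name" .= psmName
        <*> requiredField "configuration" "connection configuration" .= psmConfiguration
```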
The generated OpenAPI spec can be used to generate TypeScript definitions that distinguish different source metadata types based on the value of the `kind` property. There is a problem: because the specified `kind` value for a data connector source is any string, when TypeScript gets a source with a `kind` value of, say, `"postgres"`, it cannot unambiguously determine whether the source is Postgres or a data connector. For example:
```ts
function consumeSourceMetadata(source: SourceMetadata) {
  if (source.kind === "postgres" || source.kind === "pg") {
    // At this point TypeScript infers that `source` is either an instance
    // of `PostgresSourceMetadata`, or `DataconnectorSourceMetadata`. It
    // can't narrow further.
    source
  }
  if (source.kind === "something else") {
    // TypeScript infers that this `source` must be an instance of
    // `DataconnectorSourceMetadata` because `source.kind` does not match
    // any of the other options.
    source
  }
}
```
The simplest way I can think of to fix this would be to add a boolean property to the `SourceMetadata` type along the lines of `isNative` or `isDataConnector`. This could be a field that only exists in serialized data, like the metadata version field. The combination of one of the native database names for `kind`, and a true value for `isNative` would be enough for TypeScript to unambiguously distinguish the source kinds.
But note that in the current state TypeScript is able to reference the short `"pg"` name correctly!
~~Tests are not passing yet due to some discrepancies in DTO serialization vs existing Metadata serialization. I'm working on that.~~
The placeholders that I used for table and function metadata are not compatible with the ordered JSON serialization in use. I think the best solution is to write compatible codecs for those types in another PR. For now I have disabled some DTO tests for this PR.
Here are the generated [OpenAPI spec](https://github.com/hasura/graphql-engine-mono/files/9397333/openapi.tar.gz) based on these changes, and the generated [TypeScript client code](https://github.com/hasura/graphql-engine-mono/files/9397339/client-typescript.tar.gz) based on that spec.
Ticket: [MM-66](https://hasurahq.atlassian.net/browse/MM-66)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5582
GitOrigin-RevId: e1446191c6c832879db04f129daa397a3be03f62
This does not yet expose Aggregation Predicates to users, but it enables building the execution backend and the tests of the schema.
This is a prerequisite for:
* #5174
* #5261
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5607
GitOrigin-RevId: e07beb01949724545131629c111d41a7ec4636f2
This makes `BackendSchema` easier to refactor: if the type of a type class method changes, the corresponding dummy implementations are simpler to update.
Partially addresses hasura/graphql-engine-mono#2971, in the sense that this aids refactors.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5443
GitOrigin-RevId: 65e169d01415a04e7c419a628cf32e743448543d
The module `Hasura.SQL.AnyBackend` was introduced (in #751) to centralize the logic for case-switching behavior that depends on the particular flavor of relational DB backend (Postgres vs MSSQL vs BigQuery vs MySQL vs DataConnectors). This allows us to write a bunch of code in a backend-agnostic way, even if runtime behavior does depend on the chosen backend. At the same time, it allows us to write backend-specific code without having to care (too much) about the existence of other backends.
In #851 this module was rewritten to use Template Haskell.
I've heard that one of the reasons for the use of TH was that this would make it easier to keep backends out of the compilation product entirely. This would allow customers, especially on OSS, to benefit from simpler software licensing.
However:
1. This conditional compilation never materialized.
2. It's not clear whether writing this particular module based on TH would be sufficient for conditional compilation. And in any case, it can be done using CPP pragmas as well.
3. The TH code is extraordinarily complex. Since its introduction, it has been documented extraordinarily well, but it's still very difficult to maintain and/or refactor, due to its non-idiomatic nature.
4. Hasura's company objectives are now Cloud-oriented, so that software licensing issues work differently, and in particular, do not depend on what's part of the compilation product.
So this PR reverts #851 by spelling out the code generated by TH. The diff is net-negative; in other words, the code being generated was smaller than the code doing the generating. This makes the code readable and maintainable.
The generated code has been modified in one way, which I'll now describe.
In the scenario that support for a new backend is introduced, a constructor is added to the `BackendType` type. This would then cause `liftTag` to be partial, thus raising a compiler warning. Resolving this requires adding corresponding constructors to the `BackendTag` and `AnyBackend` types. This would then require amending **almost** all other methods.
The exceptions are `composeAnyBackend` and `unpackAnyBackend`. These methods test whether two values are compatible, i.e. belong to the same backend. Both have a default case that in one way or another ignores the input values. Using TH here ensured that all pairs of values that belong together were covered. But after spelling out the TH, the presence of the default case means that no compiler warning is raised when a match on a pair of matching values is missing. So in the default case, we now do an explicit check for equality: if the values _are_ equal, there is a missing `case`, and this is reported as an `error` (which is very crude, but it should be).
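To illustrate the pattern, here is a much-reduced sketch; the real module covers every backend, carries class-constraint dictionaries around, and the names below are illustrative only:
```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE RankNTypes #-}

module AnyBackendSketch where

import Data.Kind (Type)

data BackendType = Postgres | MSSQL

-- A singleton tag tying the type-level backend to a runtime value.
data BackendTag (b :: BackendType) where
  PostgresTag :: BackendTag 'Postgres
  MSSQLTag :: BackendTag 'MSSQL

-- A container indexed by backend, with the backend hidden existentially.
data AnyBackend (i :: BackendType -> Type) where
  AnyBackend :: BackendTag b -> i b -> AnyBackend i

-- The kind of case switch the TH used to generate: combine two values only
-- when they belong to the same backend, otherwise fall through. With the
-- cases spelled out by hand, adding a backend no longer produces a warning
-- here, which is why the real code now detects the "same backend but no
-- matching case" situation in the default case and reports it as an error.
composeAnyBackend ::
  (forall b. i b -> i b -> r) ->
  AnyBackend i ->
  AnyBackend i ->
  r ->
  r
composeAnyBackend f (AnyBackend PostgresTag x) (AnyBackend PostgresTag y) _ = f x y
composeAnyBackend f (AnyBackend MSSQLTag x) (AnyBackend MSSQLTag y) _ = f x y
composeAnyBackend _ _ _ fallback = fallback
```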
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5333
GitOrigin-RevId: 5aaf0a93394bd740aa7371526d3175c8142b3541
### Description
This PR moves some strictness annotations to a concrete use site, rather than putting `seq` in a helper function.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5436
GitOrigin-RevId: 1f279e05333ab80167ad2e18d09b8792eddc52c3
### Description
This PR is a first step towards having a dedicated reader context per schema block. It adds the required Reader instance, and switches from a `SchemaT ReaderT` stack to a `ReaderT SchemaT` stack. Furthermore, it cleans up / harmonizes some of the top-level schema building functions.
Sources and remotes are now each built within their own run of `runReaderT`: for now, the reader context is the same in both cases, meaning no special care is required at the boundary of remote relationships.
Actions are explicitly run with the source context for now; we could envision creating a third and distinct context for them.
This PR is expected to be a no-op.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5300
GitOrigin-RevId: a014e5b3504eb4ef740c820d305d6d2695f622f7
It's about time.
To do this I had to check a few more boxes.
* I copied the flags from `graphql-engine.cabal` to the libraries in `server/lib`.
* I moved `Cacheable` instances of schema parser types beside the typeclass declaration.
* I removed imports of `Hasura.Prelude` from the tests, and rewrote them accordingly.
* I copied the `TestMonad` parse monad into `server/src-test/Hasura/GraphQL/Schema/RemoteTest.hs`, which was using it. I think this could be done with the real thing, but I tried replacing it with constraints and it messed with my head somewhat.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5311
GitOrigin-RevId: ebebcc50a16f2d517b7f730fe72410827ca3e86c
## Description ✍️
This PR fixes the config status update: previously, the `Service configured successfully` message was written before the server was actually spawned. Now the status is updated only when the server is spawned successfully. To be specific, this change posts the status closer to where we log `starting API server`.
### Related Issues ✍
#2751
### Solution and Design ✍
We update the status inside the `runHGEServer` function. This ensures the message is added only when the server has started. If any exception is thrown before the server is spawned, only that error message is written to the `config_status` table, instead of the `Service configured successfully` message.
## Affected components ✍️
- ✅ Server
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5179
Co-authored-by: Naveen Naidu <30195193+Naveenaidu@users.noreply.github.com>
Co-authored-by: Anon Ray <616387+ecthiender@users.noreply.github.com>
GitOrigin-RevId: 7860008403aa0645583e26915f620b66a5bbc531
Followup to hasura/graphql-engine-mono#4713.
The `memoizeOn` method, part of `MonadSchema`, originally had the following type:
```haskell
memoizeOn
:: (HasCallStack, Ord a, Typeable a, Typeable b, Typeable k)
=> TH.Name
-> a
-> m (Parser k n b)
-> m (Parser k n b)
```
The reason for operating on `Parser`s specifically was that the `MonadSchema` effect would additionally initialize certain `Unique` values, which appear (nested) in the type of `Parser`.
hasura/graphql-engine-mono#518 changed the type of `memoizeOn`, to additionally allow memoizing `FieldParser`s. These also contained a `Unique` value, which was similarly initialized by the `MonadSchema` effect. The new type of `memoizeOn` was as follows:
```haskell
memoizeOn
:: forall p d a b
. (HasCallStack, HasDefinition (p n b) d, Ord a, Typeable p, Typeable a, Typeable b)
=> TH.Name
-> a
-> m (p n b)
-> m (p n b)
```
Note the type `p n b` of the value being memoized: by choosing `p` to be either `Parser k` or `FieldParser`, both can be memoized. Also note the new `HasDefinition (p n b) d` constraint, which provided a `Lens` for accessing the `Unique` value to be initialized.
A quick simplification is that the `HasCallStack` constraint has never been used by any code. This was realized in hasura/graphql-engine-mono#4713, by removing that constraint.
hasura/graphql-engine-mono#2980 removed the `Unique` values from our GraphQL-related types entirely, as their original purpose was never truly realized. One part of removing `Unique` consisted of dropping the `HasDefinition (p n b) d` constraint from `memoizeOn`.
What I didn't realize at the time was that this meant that the type of `memoizeOn` could be generalized and simplified much further. This PR finally implements that generalization. The new type is as follows:
```haskell
memoizeOn ::
forall a p.
(Ord a, Typeable a, Typeable p) =>
TH.Name ->
a ->
m p ->
m p
```
This change has a couple of consequences.
1. While constructing the schema, we often output `Maybe (Parser ...)`, to model that the existence of certain pieces of GraphQL schema sometimes depends on the permissions that a certain role has. The previous versions of `memoizeOn` were not able to handle this, as the only thing they could memoize was fully-defined (if not yet fully-evaluated) `(Field)Parser`s. This much more general API _would_ allow memoizing `Maybe (Parser ...)`s. However, we probably have to continue being cautious with this: if we blindly memoize all `Maybe (Parser ...)`s, the resulting code may never be able to decide whether the value is `Just` or `Nothing` - i.e. it never commits to the existence-or-not of a GraphQL schema fragment. This would manifest as a non-well-founded knot tying, and this would get reported as an error by the implementation of `memoizeOn`.
tl;dr: This generalization _technically_ allows for memoizing `Maybe` values, but we probably still want to avoid doing so.
For this reason, the PR adds a specialized version of `memoizeOn` to `Hasura.GraphQL.Schema.Parser`.
2. There is no longer any need to connect the `MonadSchema` knot-tying effect with the `MonadParse` effect. In fact, after this PR, the `memoizeOn` method is completely GraphQL-agnostic, and so we implement hasura/graphql-engine-mono#4726, separating `memoizeOn` from `MonadParse` entirely - `memoizeOn` can be defined and implemented as a general Haskell typeclass method.
Since `MonadSchema` has been made into a single-type-parameter type class, it has been renamed to something more general, namely `MonadMemoize`. Its only task is to memoize arbitrary `Typeable p` objects under a combined key consisting of a `TH.Name` and a `Typeable a`.
Also for this reason, the new `MonadMemoize` has been moved to the more general `Control.Monad.Memoize`.
3. After this change, it's somewhat clearer what `memoizeOn` does: it memoizes an arbitrary value of a `Typeable` type. The only thing that needs to be understood in its implementation is how the manual blackholing works. There is no more semantic interaction with _any_ GraphQL code.
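A sketch of what that GraphQL-agnostic class might look like (the real definition lives in `Control.Monad.Memoize` and may differ in details; the `memoize` helper below is illustrative):
```haskell
module MonadMemoizeSketch where

import Data.Typeable (Typeable)
import qualified Language.Haskell.TH.Syntax as TH

-- Memoize arbitrary Typeable values, keyed on a TH name plus an extra key.
-- No mention of Parser, FieldParser, or anything GraphQL-specific.
class Monad m => MonadMemoize m where
  memoizeOn ::
    (Ord a, Typeable a, Typeable p) =>
    TH.Name ->
    a ->
    m p ->
    m p

-- Convenience wrapper keyed on the call site's name only.
memoize :: (MonadMemoize m, Typeable p) => TH.Name -> m p -> m p
memoize name = memoizeOn name ()
```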
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4725
Co-authored-by: Daniel Harvey <4729125+danieljharvey@users.noreply.github.com>
GitOrigin-RevId: 089fa2e82c2ce29da76850e994eabb1e261f9c92
### Description
By definition, root fields are at the root of the schema: only functions that craft root fields need to know how to customize the name of root fields. However, the presence of `Has MkRootFieldName` in `MonadBuildSchemaBase` meant that the entirety of the schema building code was implicitly aware of / capable of altering root field names.
This PR removes this constraint, and moves it to the functions that do craft root fields. This has several upsides:
- it makes it more explicit where root fields are being crafted
- it prevents functions that should not use this from mistakenly applying it to non-root fields
- it simplifies the shared schema context
### Future work
- can we maybe pass this as an argument, instead of making it a required part of the context?
- ~~AFAICT, we only ever use `mempty` for it: is this actually dead code that we should actually just remove altogether?~~
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5235
GitOrigin-RevId: 4268751f3ab87ae8e03b6fe9e1efa1b096200027
For some reason these functions exist in `Backends.Postgres.SQL.Value`.
We don't want to depend on that module here.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5292
GitOrigin-RevId: a09bd3cdb0caf08938bce0728a8d281344c1d4ce
Moves code from `Hasura.RQL.Types.Metadata` that is specific to serialization into a new module, `Hasura.RQL.Types.Metadata.Serialization`.
I'm breaking up #5184 into smaller PRs. This is the third and final PR in that effort. This PR is stacked on #5210 and #5211.
The tracking issue is https://hasurahq.atlassian.net/browse/MM-35
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5212
GitOrigin-RevId: 6cde6d52173590fafe0969a06f2a3411db4fbc78
Introduces a new function, `metadataToDTO`, that converts a `Metadata` value to a `MetadataV3` DTO value. This is the next step in the alternative serialization path for metadata that comes with a generated OpenAPI specification.
This PR carves up the existing `metadataToOrdJSON` function so that helpers previously embedded in the `where` block of that function can also be used in the implementation of `metadataToDTO`. If I did everything correctly, `metadataToOrdJSON` should behave exactly as before.
In a followup PR I will move the extracted helpers to a new submodule, `Hasura.RQL.Types.Metadata.Serialization`, since they add up to several hundred lines of code.
I'm breaking up #5184 into smaller PRs, and this is the second PR in that effort. This PR is stacked on #5210.
The tracking issue is https://hasurahq.atlassian.net/browse/MM-35
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5211
GitOrigin-RevId: 2596ed5312d7b1232c47ae1d08a51d8ead11fcb8
A following PR moves serialization-related code out of `Hasura.RQL.Types.Metadata` into a specialized submodule. To avoid circular dependencies, a number of other definitions also need to be moved into their own submodule. This PR does that extra moving first, so that we can keep each PR as small and as easy to review as possible.
There are a lot of changed lines, but it is all code moving from one module to another.
I'm breaking up #5184 into smaller PRs, and this is the first PR in that effort.
The tracking issue is https://hasurahq.atlassian.net/browse/MM-35
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5210
GitOrigin-RevId: 6fb6e29a967ab5ad4724006c8e0addd2d63a3946
### Description
I am not 100% sure about this PR; while I think the code is better this way, I'm willing to be convinced otherwise.
In short, this PR moves `RoleName` into the `SchemaContext`, instead of it being a nebulous `Has RoleName` constraint on the reader monad. The major upside of this is that it becomes an explicit, named field, rather than something that must be given as part of a tuple of arguments when calling `runReader`.
However, the downside is that it breaks the permission helper functions of `Schema.Table`, which relied on `Has RoleName r`. This PR chooses to pass the role name explicitly to all of those functions, which in turn means explicitly fetching the role name first in a lot of places. It makes it more explicit when a schema building block relies on the role name, but it is a bit verbose...
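To make the trade-off concrete, a small illustration of the two styles; the `Has` class and the types here are simplified stand-ins, not the real definitions:
```haskell
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE MultiParamTypeClasses #-}

module SchemaContextSketch where

import Control.Monad.Reader (MonadReader, asks)
import Data.Text (Text)

newtype RoleName = RoleName Text

-- Simplified: the real SchemaContext carries more than just the role.
data SchemaContext = SchemaContext
  { scRole :: RoleName
  }

-- The "nebulous" constraint style used by the old permission helpers.
class Has a t where
  getter :: t -> a

instance Has RoleName SchemaContext where
  getter = scRole

-- Before: the role is pulled implicitly out of whatever the reader carries.
roleViaHas :: (MonadReader r m, Has RoleName r) => m RoleName
roleViaHas = asks getter

-- After: the role is an explicit, named field of the context; callers fetch
-- it once and pass it to the permission helpers explicitly.
roleViaContext :: MonadReader SchemaContext m => m RoleName
roleViaContext = asks scRole
```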
### Alternatives
Some alternatives worth considering:
- attempting something like `Has context r, Has RoleName context`, which would allow those functions to be independent of the context but still fetch the role name from the reader, but might require type annotations to avoid ambiguity
- keeping the permission functions the same, with `Has RoleName r`, and introducing a bunch of newtypes instead of using tuples to explicitly implement all the required `Has` instances
- changing the permission functions to `Has SchemaContext r`, since they are functions used only to build the schema, and therefore may be allowed to be tied to the context.
What do y'all think?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5073
GitOrigin-RevId: 8fd09fafb54905a4d115ef30842d35da0c3db5d2
In the process of decoupling the schema parsers from the GraphQL Engine, we need to remove dependencies on `Hasura.Base.Error`.
First of all, we have avoided using `QErr` in the schema parsers code, instead returning a more appropriate data type that can be converted to a `Hasura.Base.Error.QErr` later.
Secondly, we have created a new `ParseErrorCode` type to represent parse failure types, which are then converted to a `Hasura.Base.Error.Code` later.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5181
GitOrigin-RevId: 8655e26adb1e7d5e3d552c77a8a403f987b53467
Updates to the latest version of autodocodec and uses the new features, in particular `discriminatedUnionCodec`.
This allows us to remove the `ValueWrapper*` types and `sumTypeCodec`. Sum types are now encoded as discriminated unions.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5155
GitOrigin-RevId: 20bfdc12b28d35db354c4a149b9175fab0b2b7d2
This is now the sole in-universe dependency of the schema parsers. As
such, we need to extract it as a library before we can extract the
schema parsers as a library.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5202
GitOrigin-RevId: fbe571855768e56dc8b8e259b8efe900de3ecc54
Makes the init specs shorter by using `shouldBe` instead of `shouldSatisfy` wherever possible. This also makes test failures more expressive.
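For example (with a hypothetical `parsePort` standing in for the real code under test), the `shouldBe` form prints the expected and actual values on failure, whereas `shouldSatisfy` can only report that the predicate failed:
```haskell
module InitSpecStyle where

import Data.Either (isRight)
import Test.Hspec
import Text.Read (readMaybe)

-- Hypothetical stand-in for the code under test.
parsePort :: String -> Either String Int
parsePort s = maybe (Left ("invalid port: " <> s)) Right (readMaybe s)

spec :: Spec
spec = describe "parsePort" $ do
  it "parses a numeric port (expressive failure message)" $
    parsePort "8080" `shouldBe` Right 8080
  it "parses a numeric port (opaque failure message)" $
    parsePort "8080" `shouldSatisfy` isRight
```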
It also simplifies boolean logic in most places, following HLint warnings. These changes brought to you by `hlint --refactor`, which is basically magic.
I have left some redundancy in the boolean logic for clarity, along with the appropriate HLint suppressions.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5087
GitOrigin-RevId: 52bf3626be2615e6a32a0fc0e8be19cca31ee4ad
This introduces an `ErrorMessage` newtype which wraps `Text` in a manner designed to be easy to construct and difficult to deconstruct.
It provides functionality similar to `Data.Text.Extended`, but designed _only_ for error messages. Error messages are constructed through `fromString`, concatenation, or the `toErrorValue` function, which is designed to be overridden for all meaningful domain types that might show up in an error message. Notably, there are not and should never be instances of `ToErrorValue` for `String`, `Text`, `Int`, etc. This is so that we correctly represent the value in a way that is specific to its type. For example, all `Name` values (from the _graphql-parser-hs_ library) are single-quoted now; no exceptions.
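A condensed sketch of the shape described; the exact instances and helpers in the real module differ, and the `Name` type below is an illustrative stand-in:
```haskell
{-# LANGUAGE DerivingStrategies #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE OverloadedStrings #-}

module ErrorMessageSketch where

import Data.String (IsString (..))
import Data.Text (Text)
import qualified Data.Text as T

-- Easy to build (string literals, concatenation), not easy to take apart.
newtype ErrorMessage = ErrorMessage Text
  deriving newtype (Semigroup)

instance IsString ErrorMessage where
  fromString = ErrorMessage . T.pack

-- Overridden per domain type so values are rendered appropriately;
-- crucially, no instances for plain String/Text/Int.
class ToErrorValue a where
  toErrorValue :: a -> ErrorMessage

-- Illustrative instance: GraphQL names rendered single-quoted.
newtype Name = Name Text

instance ToErrorValue Name where
  toErrorValue (Name n) = ErrorMessage ("'" <> n <> "'")
```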
I have mostly had to add `instance ToErrorValue` for various backend types (and also add newtypes where necessary). Some of these are not strictly necessary for this changeset, as I had bigger aspirations when I started. These aspirations have been tempered by trying and failing twice.
As such, in this changeset, I have started by introducing this type to the `parseError` and `parseErrorWith` functions. In the future, I would like to extend this to the `QErr` record and the various `throwError` functions, but this is a much larger task and should probably be done in stages.
For now, `toErrorMessage` and `fromErrorMessage` are provided for conversion to and from `Text`, but the intent is to stop exporting these once all error messages are converted to the new type.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5018
GitOrigin-RevId: 84b37e238992e4312255a87ca44f41af65e2d89a
This removes the one remaining instance of `unsafeMkName` in production
code, uses `G.name` where possible in tests, and ignores instances where
it's not possible (such as `instance Arbitrary G.Name`).
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5074
GitOrigin-RevId: d8049edf1f1bc2ef25f34874ef5bd5a5934bd33d
### Description
A trivial PR, extracted out of #4936, that removes remote schema permissions from the schema context, as they are only ever used at the top level: whether or not we need to use remote schema permissions does not impact _how_ we build the schema, only whether some parts of the schema should be built at all, and this information therefore doesn't need to be accessible throughout the build process.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5050
GitOrigin-RevId: 734673370393d5640ad753222982baf2698f6d8f
This moves `MkTypename` and `NamingCase` into their own modules, with the intent of reducing the scope of the schema parsers code, and of avoiding imports of large modules when small ones will do.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4978
GitOrigin-RevId: 19541257fe010035390f6183a4eaa37bae0d3ca1
Earlier, if the `select` root field had a custom root field name set, the same custom name was also used for the streaming subscription root field. This led to duplicate root fields being generated in the `subscription_root`.
This PR fixes that: it provides a way to customize the streaming subscription root field separately, rather than reusing the `select` root field's custom name for it.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4967
Co-authored-by: Anon Ray <616387+ecthiender@users.noreply.github.com>
GitOrigin-RevId: 54e74ce97561b0e5cfdfc60d1ca340aaebecf7d4
This reduces the usage of "utils" modules in the parsers code, especially those that are simply re-exported from elsewhere, to facilitate extracting the parsers code into its own library.
It mostly inlines the imports that are re-exported from `Hasura.Prelude` and `Data.Parser.JSONPath`. It also removes references to `Data.*.Extended` modules. When necessary, it re-implements the functionality (which is typically trivial).
It does not tackle all external dependencies. I observed the following that will take more work:
- `Data.GADT.Compare.Extended`
- `Data.Text.Extended`
- `Hasura.Base.Error`
- `Hasura.RQL.Types.Common`
- `Hasura.Server.Utils`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4964
GitOrigin-RevId: 54ad3c1b7a31f13e34340ebe9fcc36d0ad57b8bd
This improves `parseJSONPath` and `encodeJSONPath` to encode special characters appropriately by delegating to Aeson.
This also makes a few improvements to `encodeJSONPath`.
1. The function is moved from `Hasura.Base.Error` to `Data.Parser.JSONPath`. This still doesn't seem entirely appropriate, but it is somewhat better. I am basing this on the fact that its test cases already lived in `Data.Parser.JSONPathSpec`.
2. It now returns `Text`, not `String`.
3. It quotes strings with double quotes (`"`) rather than single quotes (`'`), just like JSON.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4935
GitOrigin-RevId: bf44353cd740500245f2e38907a7d6263ae0291c