Commit Graph

34 Commits

Daniel Chambers
bfd046b224 Add additional tracing spans to HGE GraphQL queries and the Super Connector
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/9332
GitOrigin-RevId: ecde2383a42acf93fa8c6abb8bbd4c3b074b77fb
2023-05-31 05:49:12 +00:00
Tom Harding
e0c0043e76 Upgrade Ormolu to 0.7.0.0
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/9284
GitOrigin-RevId: 2f2cf2ad01900a54e4bdb970205ac0ef313c7e00
2023-05-24 13:53:53 +00:00
Tom Harding
b6799f0882 Import InsOrdHashMap, not OMap, OM, Map, HM, ...
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8946
GitOrigin-RevId: 434e7c335bc69119020dd35761c7d4539bc51ff8
2023-04-27 07:43:22 +00:00
Tom Harding
7e334e08a4 Import HashMap, not HM, Map, M...
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8947
GitOrigin-RevId: 18e52c928e1df535579e2077b4af6c2ce92bdcef
2023-04-26 15:43:44 +00:00
Daniel Chambers
56db8ec358 Data Connectors: Fix track_table not working if the table was created after schema cache was built
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8732
GitOrigin-RevId: e11a0ef6979d1d58a0b39dcd8fff48d446d3420f
2023-04-13 01:30:50 +00:00
Rishichandra Wawhal
c6d65508b2 [feature branch] EE Lite Trials
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8208
Co-authored-by: awjchen <13142944+awjchen@users.noreply.github.com>
Co-authored-by: Vijay Prasanna <11921040+vijayprasanna13@users.noreply.github.com>
Co-authored-by: Toan Nguyen  <1615675+hgiasac@users.noreply.github.com>
Co-authored-by: Abhijeet Khangarot <26903230+abhi40308@users.noreply.github.com>
Co-authored-by: Solomon <24038+solomon-b@users.noreply.github.com>
Co-authored-by: gneeri <10553562+gneeri@users.noreply.github.com>
GitOrigin-RevId: 454ee0dea636da77e43810edb2f427137027956c
2023-04-05 08:59:09 +00:00
Auke Booij
79b8a6a07b chore(server): move some query tags code to a sensible place
Also add a `default` implementation for `MonadQueryTags`.

This avoids a bunch of imports of `Hasura.GraphQL.Execute.Backend`, which is a big module with lots of (transitive) dependencies.
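
A minimal sketch of what a default implementation buys here (the class name comes from the commit; the method, types, and body below are invented for illustration):

```haskell
module MonadQueryTagsSketch where

newtype QueryTags = QueryTags [(String, String)]

emptyQueryTags :: QueryTags
emptyQueryTags = QueryTags []

-- With a default method body, an instance that doesn't produce query tags
-- can be declared with an empty instance declaration, and doesn't need to
-- import anything heavyweight to do so.
class Monad m => MonadQueryTags m where
  createTags :: [(String, String)] -> m QueryTags
  createTags _ = pure emptyQueryTags

-- The default kicks in here: no method definitions required.
instance MonadQueryTags IO
```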

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8571
GitOrigin-RevId: 8ecca452721b77953e6d088c79d8d6f003f2996f
2023-03-30 21:19:38 +00:00
Daniel Harvey
95f5553af6 chore(server): split new statistics log from QueryLog
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8326
GitOrigin-RevId: 02ee652302de5328e63054a6448dca10de7b5c1b
2023-03-15 13:06:47 +00:00
Tom Harding
2124fa0f08 feature(server): make execution statistics available through logging
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8286
Co-authored-by: Daniel Harvey <4729125+danieljharvey@users.noreply.github.com>
GitOrigin-RevId: 72de592c08778649693d8ff0a0555b16fb28c4bd
2023-03-14 11:33:45 +00:00
Daniel Chambers
dc83352863 Support for using Data Connectors as the target of remote relationships
[GDC-487]: https://hasurahq.atlassian.net/browse/GDC-487?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
[GDC-488]: https://hasurahq.atlassian.net/browse/GDC-488?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8182
GitOrigin-RevId: 712953666e1a5ea07500f1e1aed669f27650f7a4
2023-03-07 01:33:26 +00:00
Antoine Leblanc
6e574f1bbe harmonize network manager handling
## Description

### I want to speak to the `Manager`

Oh boy. This PR is both fairly straightforward and overreaching, so let's break it down.

For most network access, we need a [`HTTP.Manager`](https://hackage.haskell.org/package/http-client-0.1.0.0/docs/Network-HTTP-Client-Manager.html). It is created only once, at the top level, when starting the engine, and is then threaded through the application to wherever we need to make a network call. As of main, the way we do this is not standardized: most of the GraphQL execution code passes it "manually" as a function argument throughout the code. We also have a custom monad constraint, `HasHttpManagerM`, that describes a monad's ability to provide a manager. And, finally, several parts of the code store the manager in some kind of argument structure, such as `RunT`'s `RunCtx`.

This PR's first goal is to harmonize all of this: we always create the manager at the root, and we already have it when we do our very first `runReaderT`. Wouldn't it make sense for the rest of the code to not manually pass it anywhere, to not store it anywhere, but to always rely on the current monad providing it? This is, in short, what this PR does: it implements a constraint on the base monads, so that they provide the manager, and removes most explicit passing from the code.

### First come, first served

One way this PR goes a tiny bit further than "just" doing the aforementioned harmonization is that it starts the process of implementing the "Services oriented architecture" roughly outlined in this [draft document](https://docs.google.com/document/d/1FAigqrST0juU1WcT4HIxJxe1iEBwTuBZodTaeUvsKqQ/edit?usp=sharing). Instead of using the existing `HasHttpManagerM`, this PR revamps it into the `ProvidesNetwork` service.

The idea is, again, that we should make all "external" dependencies of the engine, all things that the core of the engine doesn't care about, a "service". This allows us to define clear APIs for features, to choose different implementations based on which version of the engine we're running, and to harmonize our many scattered monadic constraints. This is why this service is called "Network": we can refine it, moving forward, to be the constraint that defines how all network communication operates, instead of relying on disparate class constraints or hardcoded decisions. A comment in the code clarifies this intent.
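
As a rough illustration of that intent, a hypothetical sketch of such a service class follows (the `ProvidesNetwork` name comes from this description; the method, environment type, and instance are invented):

```haskell
{-# LANGUAGE FlexibleInstances #-}

module ProvidesNetworkSketch where

import Control.Monad.Reader (ReaderT, asks)
import qualified Network.HTTP.Client as HTTP

-- The base monad is the single source of the HTTP manager.
class Monad m => ProvidesNetwork m where
  askHTTPManager :: m HTTP.Manager

-- A base app monad that owns the manager in its reader environment.
newtype AppEnv = AppEnv {appEnvManager :: HTTP.Manager}

instance Monad m => ProvidesNetwork (ReaderT AppEnv m) where
  askHTTPManager = asks appEnvManager

-- Code that needs network access asks the monad for the manager instead
-- of taking it as an explicit function argument.
pingRemote :: ProvidesNetwork m => String -> m ()
pingRemote _url = do
  _manager <- askHTTPManager
  -- ... build and run the request with the manager ...
  pure ()
```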

### Side-effects? In my Haskell?

This PR also unavoidably touches some other aspects of the codebase. One such example: it introduces `Hasura.App.AppContext`, named after `HasuraPro.Context.AppContext`: a name for the reader structure at the base level. It also transforms `Handler` from a type alias to a newtype, as `Handler` is where we actually enforce HTTP limits; but without `Handler` being a distinct type, any code path could simply do a `runExceptT $ runReader` and forget to enforce them.

(As a rule of thumb, I am starting to consider any straggling `runReaderT` or `runExceptT` as a code smell: we should not stack / unstack monads haphazardly, and every layer should be an opaque `newtype` with a corresponding run function.)
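
A hypothetical sketch of why the newtype matters (the real `Handler` and its context differ): with a single, blessed run function, the limits cannot be skipped by unwrapping the stack by hand.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

module HandlerSketch where

import Control.Monad.Except (ExceptT, runExceptT)
import Control.Monad.Reader (ReaderT, runReaderT)

data HandlerCtx = HandlerCtx -- placeholder request context
data QErr = QErr             -- placeholder error type

-- As a type alias, any caller could unwrap the stack with
-- runExceptT / runReaderT and silently bypass the limits.
newtype Handler m a = Handler (ReaderT HandlerCtx (ExceptT QErr m) a)
  deriving (Functor, Applicative, Monad)

-- The one place the stack is unwrapped, and therefore the one place
-- where HTTP limits get enforced.
runHandler :: Monad m => HandlerCtx -> Handler m a -> m (Either QErr a)
runHandler ctx (Handler action) =
  -- ... enforce request size / time limits around this ...
  runExceptT (runReaderT action ctx)
```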

## Further work

In several places, I have left TODOs when I have encountered things that suggest that we should do further unrelated cleanups. I'll write down the follow-up steps, either in the aforementioned document or on Slack. But, in short, in approximate order, we could:

- delete `ExecutionCtx` as it is only a subset of `ServerCtx`, and remove one more `runReaderT` call
- delete `ServerConfigCtx` as it is only a subset of `ServerCtx`, and remove it from `RunCtx`
- remove `ServerCtx` from `HandlerCtx`, and make it part of `AppContext`, or even make it the `AppContext` altogether (since, at least for the OSS version, `AppContext` is again only a subset)
- remove `CacheBuildParams` and `CacheBuild` altogether, as they're just a distinct stack that is a `ReaderT` on top of `IO` that contains, you guessed it, the same thing as `ServerCtx`
- move `RunT` out of `RQL.Types` and rename it, since after the previous cleanups **it only contains `UserInfo`**; it could be bundled with the authentication service, made a small implementation detail in `Hasura.Server.Auth`
- rename `PGMetadaStorageT` to something a bit more accurate, such as `App`, and enforce its IO base

This would significantly simplify our complex stack. From there, or in parallel, we can start moving existing dependencies to Services. For the purpose of supporting the read replicas entitlement, we could move `MonadResolveSource` to a `SourceResolver` service, as attempted in #7653, and transform `UserAuthenticationM` into an `Authentication` service.

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7736
GitOrigin-RevId: 68cce710eb9e7d752bda1ba0c49541d24df8209f
2023-02-22 15:55:54 +00:00
Antoine Leblanc
dec8579db8 Allow backend execution to happen on the base app monad.
### Description

Each backend executes queries against the database in a slightly different stack: Postgres uses its own `TxET`, MSSQL uses a variant of it, BigQuery simply runs in `ExceptT QErr IO`... To accommodate those variations, we had originally introduced an `ExecutionMonad b` type family in `BackendExecute`, allowing each backend to describe its own stack. It was then up to that backend's `BackendTransport` instance to implement running said stack and converting the result back into our main app monad.

However, this was not without complications, `TraceT` being one of them: as it usually needs to be at the top of the stack, converting from one stack to the other implies the use of `interpTraceT`, which is quite monstrous. Furthermore, as part of the Entitlement Services work, we're trying to move to a "Services" architecture in which the entire engine runs in one base monad that delegates features and dependencies to monad constraints; as a result, we'd like to minimize the number of different monad stacks we have to maintain and translate to and from in the codebase.

To improve things, this PR changes `ExecutionMonad b` from an _absolute_ stack to a _relative_ one: i.e.: what needs to be stacked on top of our base monad for the execution. In `Transport`, we then only need to pop the top of the stack, and voila. This greatly simplifies the implementation of the backends, as there's no longer any need to do any stack transformation: MySQL's implementation becomes a `runIdentityT`! This also removes most mentions of `TraceT` from the execution code since it's no longer required: we can rely on the base monad's existing `MonadTrace` constraint.

To continue encapsulating monadic actions in `DBStepInfo` and avoid threading a bunch of `forall`s all over the place, this PR introduces a small local helper: `OnBaseMonad`. The only downside of all this is that it requires adding a `MonadBaseControl IO m` constraint all over the place: previously, we would run directly on `IO` and lift, and would therefore not need to bring that constraint all the way.
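
A stripped-down, hypothetical sketch of the relative-stack idea (the `ExecutionMonad` and `OnBaseMonad` names come from the description above, but the real definitions carry much more structure):

```haskell
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE TypeFamilies #-}

module RelativeStackSketch where

import Control.Monad.Trans.Identity (IdentityT, runIdentityT)
import Data.Kind (Type)

-- ExecutionMonad names what is stacked *on top of* the base app monad,
-- rather than an absolute, self-contained stack.
class Backend b where
  type ExecutionMonad b :: (Type -> Type) -> Type -> Type

data MySQL

-- A backend that needs nothing extra just uses IdentityT...
instance Backend MySQL where
  type ExecutionMonad MySQL = IdentityT

-- ...so the transport layer only has to pop that one layer.
runMySQLStep :: ExecutionMonad MySQL m a -> m a
runMySQLStep = runIdentityT

-- In the spirit of OnBaseMonad: package an action that runs over any
-- suitable base monad without threading `forall`s through records.
newtype OnBaseMonad t a = OnBaseMonad
  { runOnBaseMonad :: forall m. Monad m => t m a }
```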

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7789
GitOrigin-RevId: e9b2e431c5c47fa9851abf87545c0415ff6d1a12
2023-02-09 14:40:04 +00:00
Rakesh Emmadi
f2a5d7cef3 server/pro/multitenant: Postgres connection routing using kriti templates
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6822
Co-authored-by: paritosh-08 <85472423+paritosh-08@users.noreply.github.com>
Co-authored-by: Naveen Naidu <30195193+Naveenaidu@users.noreply.github.com>
Co-authored-by: Sooraj <8408875+soorajshankar@users.noreply.github.com>
Co-authored-by: Varun Choudhary <68095256+Varun-Choudhary@users.noreply.github.com>
Co-authored-by: Sean Park-Ross <94021366+seanparkross@users.noreply.github.com>
GitOrigin-RevId: 61cfc00a97de88df1ede3f26829a0d78ec9c0bc5
2023-01-25 07:14:31 +00:00
Samir Talwar
342391f39d Upgrade Ormolu to v0.5.
This upgrades the version of Ormolu required by the HGE repository to v0.5.0.1, and reformats all code accordingly.

Ormolu v0.5 reformats code that uses infix operators. This is mostly useful, adding newlines and indentation to make it clear which operators are applied first, but in some cases, it's unpleasant. To make this easier on the eyes, I had to do the following:

* Add a few fixity declarations (search for `infix`)
* Add parentheses to make precedence clear, allowing Ormolu to keep everything on one line
* Rename `relevantEq` to `(==~)` in #6651 and set it to `infix 4`
* Add a few _.ormolu_ files (thanks to @hallettj for helping me get started), mostly for Autodocodec operators that don't have explicit fixity declarations

In general, I think these changes are quite reasonable. They mostly affect indentation.
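
As a generic illustration of the fixity point (these operators are made up, not ones touched by this PR): an explicit fixity declaration states how tightly each operator binds, which is the information the formatter can use when deciding how to lay out an operator chain.

```haskell
module FixitySketch where

-- Hypothetical operators with explicit fixities.
infixr 3 .&&.
infixr 2 .||.

(.&&.) :: Bool -> Bool -> Bool
(.&&.) = (&&)

(.||.) :: Bool -> Bool -> Bool
(.||.) = (||)

-- With the fixities above, this parses as `a .||. (b .&&. c)`,
-- with no ambiguity left for the formatter to hedge against.
example :: Bool -> Bool -> Bool -> Bool
example a b c = a .||. b .&&. c
```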

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6675
GitOrigin-RevId: cd47d87f1d089fb0bc9dcbbe7798dbceedcd7d83
2022-11-02 20:55:13 +00:00
Vamshi Surabhi
a01d1188f2 scaffolding for remote-schemas module
The main aim of the PR is:

1. To set up a module structure for the 'remote-schemas' package.
2. To move parts of the remote schema codebase into the new module structure to validate it.

## Notes to the reviewer

Why a PR with large-ish diff?

1. We've been making progress on the MM project but we don't yet know how long it is going to take us to get to the first milestone. To understand this better, we need to figure out the unknowns as soon as possible. Hence I've taken a stab at the first two items in the [end-state](https://gist.github.com/0x777/ca2bdc4284d21c3eec153b51dea255c9) document to figure out the unknowns. Unsurprisingly, there are a bunch of issues that we haven't discussed earlier. These are documented in the 'open questions' section.

1. The diff is large, but that is only code being moved around, and I've added a section that documents how things are moved. In addition, there are a fair number of PR comments to help with the review process.

## Changes in the PR

### Module structure

Sets up the module structure as follows:

```
Hasura/
  RemoteSchema/
    Metadata/
      Types.hs
    SchemaCache/
      Types.hs
      Permission.hs
      RemoteRelationship.hs
      Build.hs
    MetadataAPI/
      Types.hs
      Execute.hs
```

### 1. Types representing metadata are moved

Types that capture metadata information (currently scattered across several RQL modules) are moved into `Hasura.RemoteSchema.Metadata.Types`.

- This new module only depends on very 'core' modules such as
  `Hasura.Session` for the notion of roles and `Hasura.Incremental` for the `Cacheable` typeclass.

- The requirement on database modules is avoided by generalizing the remote schema metadata to accept an arbitrary 'r' for a remote relationship definition (a minimal sketch follows).
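
A minimal, hypothetical sketch of that generalization (the `RemoteSchemaMetadataG` name appears later in this description; the fields here are invented):

```haskell
{-# LANGUAGE DeriveFunctor #-}

module RemoteSchemaMetadataSketch where

import Data.Text (Text)

-- Parameterising over the remote relationship definition 'r' lets this
-- module describe remote schema metadata without importing any
-- database/source types.
data RemoteSchemaMetadataG r = RemoteSchemaMetadataG
  { rsmName :: Text,
    rsmRemoteRelationships :: [r]
  }
  deriving (Functor, Show)

-- graphql-engine proper instantiates 'r' with its concrete remote
-- relationship definition; tests can plug in anything at all.
type SimpleRemoteSchemaMetadata = RemoteSchemaMetadataG Text
```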

### 2. SchemaCache related types and build logic have been moved

Types that represent remote schemas information in SchemaCache are moved into `Hasura.RemoteSchema.SchemaCache.Types`.

Similar to `H.RS.Metadata.Types`, this module depends on 'core' modules except for `Hasura.GraphQL.Parser.Variable`. It has something to do with remote relationships but I haven't spent time looking into it. The validation of 'remote relationships to remote schema' is also something that needs to be looked at.

Rips out the logic that builds the remote schema's SchemaCache information from the monolithic `buildSchemaCacheRule` and moves it into `Hasura.RemoteSchema.SchemaCache.Build`. Further, the `.SchemaCache.Permission` and `.SchemaCache.RemoteRelationship` modules have been created from existing modules that capture the schema cache building logic for those two components.

This was a fair amount of work. On main, remote schemas' SchemaCache information is currently built in two phases: in the first phase, 'permissions' and 'remote relationships' are ignored, and in the second phase they are filled in.

While remote relationships can only be resolved after partially resolving sources and other remote schemas, the same isn't true for permissions. Further, most of the work that is done to resolve remote relationships can be moved to the first phase so that the second phase can be a very simple traversal.

This is the approach that was taken: resolve permissions and as much of the remote relationships information as possible in the first phase.

### 3. Metadata APIs related types and build logic have been moved

The types that represent remote schema related metadata APIs and the execution logic have been moved to `Hasura.RemoteSchema.MetadataAPI.Types` and `.Execute` modules respectively.

## Open questions:

1. `Hasura.RemoteSchema.Metadata.Types` is so called because I was hoping that all of the metadata-related APIs of remote schemas could be brought in at `Hasura.RemoteSchema.Metadata.API`. However, as the metadata APIs depended on functions from the `SchemaCache` module (see [1](ceba6d6226/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs (L55)) and [2](ceba6d6226/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs (L91))), it made more sense to create a separate top-level module for `MetadataAPI`s.

   Maybe we can just have `Hasura.RemoteSchema.Metadata` and get rid of the extra nesting or have `Hasura.RemoteSchema.Metadata.{Core,Permission,RemoteRelationship}` if we want to break them down further.

1. `buildRemoteSchemas` in `H.RS.SchemaCache.Build` has the following type:

   ```haskell
   buildRemoteSchemas ::
     ( ArrowChoice arr,
       Inc.ArrowDistribute arr,
       ArrowWriter (Seq CollectedInfo) arr,
       Inc.ArrowCache m arr,
       MonadIO m,
       HasHttpManagerM m,
       Inc.Cacheable remoteRelationshipDefinition,
       ToJSON remoteRelationshipDefinition,
       MonadError QErr m
     ) =>
     Env.Environment ->
     ( (Inc.Dependency (HashMap RemoteSchemaName Inc.InvalidationKey), OrderedRoles),
       [RemoteSchemaMetadataG remoteRelationshipDefinition]
     )
       `arr` HashMap RemoteSchemaName (PartiallyResolvedRemoteSchemaCtxG remoteRelationshipDefinition, MetadataObject)
   ```

   Note the dependence on `CollectedInfo` which is defined as

   ```haskell
   data CollectedInfo
     = CIInconsistency InconsistentMetadata
     | CIDependency
         MetadataObject
         -- ^ for error reporting on missing dependencies
         SchemaObjId
         SchemaDependency
     deriving (Eq)
   ```

   This pretty much means that remote schemas are dependent on types from databases, actions, and so on.

   How do we fix this? Maybe introduce a typeclass such as `ArrowCollectRemoteSchemaDependencies` which is defined in `Hasura.RemoteSchema` and then implemented in graphql-engine?

1. The dependency on `buildSchemaCacheFor` in `.MetadataAPI.Execute`, which has the following signature:

   ```haskell
   buildSchemaCacheFor ::
     (QErrM m, CacheRWM m, MetadataM m) =>
     MetadataObjId ->
     MetadataModifier ->
     m ()
   ```

   This can be easily resolved if we restrict what the metadata APIs are allowed to do. Currently, they have unfettered access to modify the SchemaCache (the `CacheRWM` constraint):

   ```haskell
   runAddRemoteSchema ::
     ( QErrM m,
       CacheRWM m,
       MonadIO m,
       HasHttpManagerM m,
       MetadataM m,
       Tracing.MonadTrace m
     ) =>
     Env.Environment ->
     AddRemoteSchemaQuery ->
     m EncJSON
   ```

   This should instead be changed to restrict remote schema APIs to only modify remote schema metadata (while still having access to the remote schema part of the schema cache); with that change, this dependency is completely removed:

   ```haskell
   runAddRemoteSchema ::
     ( QErrM m,
       MonadIO m,
       HasHttpManagerM m,
       MonadReader RemoteSchemasSchemaCache m,
       MonadState RemoteSchemaMetadata m,
       Tracing.MonadTrace m
     ) =>
     Env.Environment ->
     AddRemoteSchemaQuery ->
     m RemoteSchemaMetadataObjId
   ```

   The idea is that the core graphql-engine would call these functions and then call
   `buildSchemaCacheFor`.

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6291
GitOrigin-RevId: 51357148c6404afe70219afa71bd1d59bdf4ffc6
2022-10-21 03:15:04 +00:00
Tom Harding
178e452b6b Use witherable, remove catMaybes/mapMaybe
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5250
GitOrigin-RevId: 5f0a582b3a853d2dbcce20e88c17970290625fc6
2022-07-29 14:53:16 +00:00
Antoine Leblanc
3cbcbd9291 Remove RQL/Types.hs
## Description

This PR removes `RQL.Types`, which was now only re-exporting a bunch of unrelated modules.

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4363
GitOrigin-RevId: 894f29a19bff70b3dad8abc5d9858434d5065417
2022-04-27 13:58:47 +00:00
Antoine Leblanc
effde675aa Clean RemoteJoin.Join by introducing RemoteJoin.Source
### Description

This PR cleans up `processRemoteJoins` by splitting the code, introducing comments, and applying the same strategies as #3810 did. Most importantly, it introduces a new module, `RemoteJoin.Source`, made to be very similar to `RemoteJoin.RemoteSchema`, that exposes the required tooling to make a join call to a source, which declutters `Join`. Furthermore, this PR uses the same "dependency injection" to make the core of `Join` free from IO: this opens the door to testing the join engine in unit tests.

None of the functions were modified when moved from their old module to the new one, but there's no way to easily see this in a diff.
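
A toy sketch of that dependency injection (every type and name below is a placeholder, not the engine's actual IR): the effectful "how to call a source / remote schema" functions are passed in as arguments, so the join logic itself never mentions IO and can be exercised with pure stand-ins in tests.

```haskell
{-# LANGUAGE LambdaCase #-}

module JoinInjectionSketch where

-- Placeholder types standing in for the real join IR and responses.
newtype SourceCall = SourceCall String
newtype RemoteSchemaCall = RemoteSchemaCall String
newtype JoinResult = JoinResult String

data JoinStep
  = CallSource SourceCall
  | CallRemoteSchema RemoteSchemaCall

-- The driver receives the IO-performing callbacks as arguments.
runJoinStep ::
  Monad m =>
  (SourceCall -> m JoinResult) ->
  (RemoteSchemaCall -> m JoinResult) ->
  JoinStep ->
  m JoinResult
runJoinStep callSource callRemoteSchema = \case
  CallSource c -> callSource c
  CallRemoteSchema c -> callRemoteSchema c
```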

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3894
GitOrigin-RevId: 1e7c43006f092326e061f9ba12674e207b628bef
2022-03-10 15:26:24 +00:00
Antoine Leblanc
6e1761f8f9 Enable remote joins from remote schemas in the execution engine.
### Description

This PR adds the ability to perform remote joins from remote schemas in the engine. To do so, we alter the definition of an `ExecutionStep` targeting a remote schema: the `ExecStepRemote` constructor now expects a `Maybe RemoteJoins`. This new argument is used when processing the execution step, in the transport layer (either `Transport.HTTP` or `Transport.WebSocket`).

For this `Maybe RemoteJoins` to be extracted from a parsed query, this PR also extends the `Execute.RemoteJoin.Collect` module, to implement "collection" from a selection set. Not only do those new functions extract the remote joins, but they also apply all necessary transformations to the selection sets (such as inserting the necessary "phantom" fields used as join keys).

Finally in `Execute.RemoteJoin.Join`, we make two changes. First, we now always look for nested remote joins, regardless of whether the join we just performed went to a source or a remote schema; and second we adapt our join tree logic according to the special cases that were added to deal with remote server edge cases.

Additionally, this PR refactors / cleans / documents `Execute.RemoteJoin.RemoteServer`. This is not required as part of this change and could be moved to a separate PR if needed (a similar cleanup of `Join` is done independently in #3894). It also introduces a draft of a new documentation page for this project, that will be refined in the release PR that ships the feature (either #3069 or a copy of it).

While this PR extends the engine, it doesn't plug such relationships into the schema, meaning that, as of this PR, the new code paths in `Join` are technically unreachable. Adding the corresponding schema code and, ultimately, enabling the metadata API will be done in subsequent PRs.

### Keeping track of concrete type names

The main change this PR makes to the existing `Join` code is to handle a new reserved field we sometimes use when targeting remote servers: the `__hasura_internal_typename` field. In short, a GraphQL selection set can sometimes "branch" based on the concrete "runtime type" of the object on which the selection happens:

```graphql
query {
  author(id: 53478) {
    ... on Writer {
      name
      articles {
        title
      }
    }
    ... on Artist {
      name
      articles {
        title
      }
    }
  }
}
```

If both of those `articles` are remote joins, we need to be able, when we get the answer, to differentiate between the two different cases. We do this by asking for `__typename`, to be able to decide if we're in the `Writer` or the `Artist` branch of the query.

To avoid further processing / customization of results, we only insert this `__hasura_internal_typename: __typename` field in the query in the case of unions or interfaces AND if we have the guarantee that we will be processing the request as part of the remote joins "folding": that is, if there's any remote join in this branch of the tree. Otherwise, we don't insert the field, and we leave that part of the response untouched.

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3810
GitOrigin-RevId: 89aaf16274d68e26ad3730b80c2d2fdc2896b96c
2022-03-09 03:18:22 +00:00
Antoine Leblanc
f96b889401 Replace all occurrences of mapMaybe id by catMaybes.
### Description

Several libraries define `catMaybes` as `mapMaybe id`. We had it defined in `Data.HashMap.Strict.Extended` already. This small PR also defines it in `Extended` modules for other containers and replaces every occurrence of `mapMaybe id` accordingly.
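
For reference, the shape of the definition specialised to `HashMap` (a sketch; the actual `Extended` module exports more than this):

```haskell
module CatMaybesSketch where

import Data.HashMap.Strict (HashMap)
import qualified Data.HashMap.Strict as HashMap

-- Keep the Just values, drop the Nothings, leave the keys untouched.
catMaybes :: HashMap k (Maybe v) -> HashMap k v
catMaybes = HashMap.mapMaybe id
```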

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3884
GitOrigin-RevId: d222a2ca2f4eb9b725b20450a62a626d3886dbf4
2022-03-03 20:13:10 +00:00
Antoine Leblanc
0e3beb028d Extract generic containers from the codebase
### Description

There were several places in the codebase where we would either implement a generic container, or express the need for one. This PR extracts / creates all relevant containers, and adapts the relevant parts of the code to make use of said new generic containers. More specifically, it introduces the following modules:
- `Data.Set.Extended`, for new functions on `Data.Set`
- `Data.HashMap.Strict.Multi`, for hash maps that accept multiple values
- `Data.HashMap.Strict.NonEmpty`, for hash maps that can never be constructed as empty
- `Data.Trie`, for a generic implementation of a prefix tree

This PR makes use of those new containers in the following parts of the code:
- `Hasura.GraphQL.Execute.RemoteJoin.Types`
- `Hasura.RQL.Types.Endpoint*`
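
As a flavour of what such a container looks like, a minimal hypothetical sketch in the spirit of `Data.HashMap.Strict.NonEmpty` (the real module's API differs): the constructor is kept private, so every exposed way of building the map starts from at least one entry.

```haskell
module NonEmptyHashMapSketch
  ( NEHashMap, -- abstract: no way to build an empty one from outside
    singleton,
    insert,
    toHashMap,
  )
where

import Data.HashMap.Strict (HashMap)
import qualified Data.HashMap.Strict as HashMap
import Data.Hashable (Hashable)

newtype NEHashMap k v = NEHashMap (HashMap k v)

singleton :: Hashable k => k -> v -> NEHashMap k v
singleton k v = NEHashMap (HashMap.singleton k v)

insert :: (Eq k, Hashable k) => k -> v -> NEHashMap k v -> NEHashMap k v
insert k v (NEHashMap m) = NEHashMap (HashMap.insert k v m)

toHashMap :: NEHashMap k v -> HashMap k v
toHashMap (NEHashMap m) = m
```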

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3828
GitOrigin-RevId: e6c1b971bcb3f5ab66bc91d0fa4d0e9df7a0c6c6
2022-03-01 16:04:22 +00:00
jkachmar
d50aae87a5 Updates cabal freeze file
#### TODO

- [x] fix `hashable >= 1.3.1` serialization ordering issue [^1]
  - `test_graphql_mutations.py::TestGraphQLMutateEnums` was failing
- [x] fix `unordered-containers` serialization ordering issue [^2]
  - `test_graphql_queries.py` was failing on Citus
- [ ] verify that no new failures have been introduced
- [ ] open issues to fix the above
  - identify test cases that "leak" implementation details by depending on `hashable` instance ordering
  - bump `hashable >= 1.3.1` and update test cases with new ordering OR modify them so that ordering is stable
  - bump `unordered-containers >= 0.2.15.0` and update test cases with new ordering OR modify them so that ordering is stable
    - one of the test cases was failing on string equality comparison for a generated Citus query
    - we probably don't want to _actually_ do this unless there are _very specific_ guarantees we want to make about generated query structure
---

Just what it says on the tin.

https://github.com/hasura/graphql-engine-mono/pull/3538 updated the freeze file a few weeks ago, but it looks like the index state hadn't been updated since December so a lot of stuff that had newer versions didn't get updated.

---

EDIT: I should add, the motivation for doing this in the first place is that `hspec > 2.8.4` now supports filtering spec trees based on patterns provided by the `HSPEC_MATCH` environment variable.

For example, one could have a script that executes the following:
```
HSPEC_MATCH="PostgreSQL" \
  ghcid \
    --command \
      'cabal repl graphql-engine:test:tests-hspec \
         --repl-option -O0 \
         --repl-option -fobject-code' \
    --test "main"
```
...which will loop on typechecking the `tests-hspec` component, and then as soon as it passes (i.e. no warnings or errors) will run _only_ the `PostgreSQL` sub-components.

[^1]: `hashable >= 1.3.1.0` [updated its default salts](https://github.com/haskell-unordered-containers/hashable/pull/196), which [broke serialization ordering](https://github.com/haskell/aeson/issues/837)
[^2]: `unordered-containers >= 0.2.16.0` [introduced changes to some of its internal functions](https://hackage.haskell.org/package/unordered-containers-0.2.16.0/changelog) which seem like they could have affected serialization stability

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3672
GitOrigin-RevId: bbd1d48c73db4021913f0b5345b7315a8d6525d3
2022-02-18 05:32:08 +00:00
Robert
1ff3723ed8 server: assorted minor clean-up around HTTP managers
- consistent qualified imports
- less convoluted initialization of pro logging HTTP manager
- pass pro HTTP manager directly instead of via Has
- remove some dead healthcheck code

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3639
GitOrigin-RevId: dfa7b9c62d1842a07a8514cdb77f1ed86064fb06
2022-02-16 07:09:47 +00:00
Antoine Leblanc
a023a3259d Prevent uses of unsafeMkName whenever possible.
### Description

This PR is the result of a discussion in #3363. Namely, we would like to remove all uses of `unsafeMkName`, or at the very least document every single one of them, to avoid similar issues. To do so, this PR does the following:
- it adds a hlint suggestion not to use that function:
  - suggestions don't mark the PR as failed, but will be shown at review time
  - it is possible to disable that hint with `{- HLINT ignore myFunction "unsafe" -}`
- wherever possible, it removes uses of `unsafeMkName` in favour of `mkName`
- it adds a comment with a tracking issue for the two remaining uses:
  - #3478
  - #3479

### Remaining work

- discuss whether this hint should make the linter step fail, since the linter step isn't required to merge anyway, and there is a way to disable the hint wherever we think the use of that function is acceptable
- check that none of those uses were load-bearing and result in errors now

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3480
GitOrigin-RevId: 0a7e3e9d1a48185764c04ab61e34b58273af347c
2022-01-27 15:13:37 +00:00
Kirill Zaborsky
a2afe4116b BigQuery remote joins
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2874
Co-authored-by: Abby Sassel <3883855+sassela@users.noreply.github.com>
GitOrigin-RevId: 24b0304716795a28038629775238996c28b312a3
2021-11-24 16:22:55 +00:00
David Overton
aac64f2c81 Source typename customization (close graphql-engine#6974)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/1616
GitOrigin-RevId: f7eefd2367929209aa77895ea585e96a99a78d47
2021-10-29 14:43:14 +00:00
Robert
71af68e9e5 server: drop HasVersion implicit parameter (closes #2236)
The only real use was for the dubious multitenant option
--consoleAssetsVersion, which actually overrode not just
the assets version. I.e., as far as I can tell, if you pass
--consoleAssetsVersion to multitenant, that version will
also make it into e.g. HTTP client user agent headers as
the proper graphql-engine version.

I'm dropping that option, since it seems unused in production
and I don't want to go to the effort of fixing it, but am happy
to look into that if folks feels strongly that it should be
kept.

(Reason for attacking this is that I was looking into http
client things around blacklisting, and the versioning thing
is a bit painful around http client headers.)

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2458
GitOrigin-RevId: a02b05557124bdba9f65e96b3aa2746aeee03f4a
2021-10-13 16:39:58 +00:00
Robert
11a454c2d6 server, pro: actually reformat the code-base using ormolu
This commit applies ormolu to the whole Haskell code base by running `make format`.

For in-flight branches, simply merging changes from `main` will result in merge conflicts.
To avoid this, update your branch using the following instructions. Replace `<format-commit>`
by the hash of *this* commit.

$ git checkout my-feature-branch
$ git merge <format-commit>^    # and resolve conflicts normally
$ make format
$ git commit -a -m "reformat with ormolu"
$ git merge -s ours post-ormolu

https://github.com/hasura/graphql-engine-mono/pull/2404

GitOrigin-RevId: 75049f5c12f430c615eafb4c6b8e83e371e01c8e
2021-09-23 22:57:37 +00:00
jkachmar
112d206fa6 Adds Remote Source Join Execution
https://github.com/hasura/graphql-engine-mono/pull/2038

Co-authored-by: Phil Freeman <630306+paf31@users.noreply.github.com>
GitOrigin-RevId: 0843bd0610822469f727d768810694b748fec790
2021-09-22 10:44:01 +00:00
jkachmar
4a83bb1834 Remote schema execution logic
https://github.com/hasura/graphql-engine-mono/pull/1995

Co-authored-by: David Overton <7734777+dmoverton@users.noreply.github.com>
GitOrigin-RevId: 178669089ec5e63b1f3da1d3ba0a9f8debbc108d
2021-08-06 13:40:37 +00:00
David Overton
1000df2e0a Fix/remote nested field customization
https://github.com/hasura/graphql-engine-mono/pull/2009

GitOrigin-RevId: bec59d2173afd6f392997f6bd953775f4a42dd48
2021-08-05 14:59:55 +00:00
David Overton
1abb1dee69 Remote Schema Customization take 2 using parser transformations
https://github.com/hasura/graphql-engine-mono/pull/1740

GitOrigin-RevId: e807952058243a97f67cd9969fa434933a08652f
2021-07-30 11:33:59 +00:00
Antoine Leblanc
0a9382f2dc [gardening] remove Alias from Backend
### Description

In our haste to generalize everything for MSSQL, we put every single "suspicious" type in Backend, including ones that weren't required. `Alias` is one of those: it's only used in a type alias, and is actually just an implementation detail of the translation layer. This PR removes it.

https://github.com/hasura/graphql-engine-mono/pull/1759

GitOrigin-RevId: fb348934ec65a51aae7f95d93c83c3bb704587b5
2021-07-09 15:54:56 +00:00
Vamshi Surabhi
e8e4f30dd6 server: support remote relationships on SQL Server and BigQuery (#1497)
Remote relationships are now supported on SQL Server and BigQuery. The major change, though, is the re-architecture of the remote join execution logic. Prior to this PR, each backend was responsible for processing the remote relationships that are part of its AST.

This is not ideal, as there is nothing specific about a remote join's execution that ties it to a backend. The only backend-specific part is whether or not the specification of the remote relationship is valid (i.e., we'll need to validate whether the scalars are compatible).

The approach now changes to this:

1. Before delegating the AST to the backend, we traverse the AST and collect all the remote joins, modifying the AST to add the necessary join fields where needed.

1. Once the remote joins are collected from the AST, the database call is made to fetch the response. The necessary data for the remote join(s) is collected from the database's response and one or more remote schema calls are constructed as necessary.

1. The remote schema calls are then executed and the data from the database and from the remote schemas is joined to produce the final response.
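
A toy, hypothetical sketch of those three steps stitched together (every type and helper here is a stand-in, not the engine's API):

```haskell
module RemoteJoinPipelineSketch where

-- Stand-in types; the real IR, responses, and join tree are far richer.
newtype QueryAST = QueryAST [String]
newtype RemoteJoin = RemoteJoin String
newtype DatabaseResponse = DatabaseResponse [String]
newtype RemoteSchemaReply = RemoteSchemaReply String
newtype Response = Response [String]

-- Step 1: traverse the AST and pull out the remote joins (the real
-- engine also adds the join-key fields the later steps need).
collectRemoteJoins :: QueryAST -> (QueryAST, [RemoteJoin])
collectRemoteJoins (QueryAST fields) =
  (QueryAST fields, map RemoteJoin fields)

-- Steps 2 and 3: run the database query, execute the remote schema calls
-- built from its response, and join everything into one response.
executeWithRemoteJoins ::
  Monad m =>
  (QueryAST -> m DatabaseResponse) ->    -- step 2: the database call
  (RemoteJoin -> m RemoteSchemaReply) -> -- step 3: remote schema calls
  QueryAST ->
  m Response
executeWithRemoteJoins runDB runRemote ast = do
  let (ast', remoteJoins) = collectRemoteJoins ast
  DatabaseResponse rows <- runDB ast'
  replies <- traverse runRemote remoteJoins
  pure (Response (rows <> [r | RemoteSchemaReply r <- replies]))
```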

### Known issues

1. Ideally the traversal of the IR to collect remote joins should return an AST which does not include remote join fields. This operation can be type safe but isn't taken up as part of the PR.

1. There is a lot of code duplication between `Transport/HTTP.hs` and `Transport/Websocket.hs` which needs to be fixed ASAP. This too hasn't been taken up by this PR.

1. The type which represents the execution plan is only modified to handle our current remote joins and as such it will have to be changed to accommodate general remote joins.

1. Use of lenses would have reduced the boilerplate code to collect remote joins from the base AST.

1. The current remote join logic assumes that the join columns of a remote relationship appear with their names in the database response. This however is incorrect as they could be aliased. This can be taken up by anyone, I've left a comment in the code.

### Notes to the reviewers

I think it is best reviewed commit by commit.

1. The first one is very straightforward.

1. The second one refactors the remote join execution logic but, other than moving things around, it doesn't change the user-facing functionality. It moves Postgres-specific parts from `Execute` to the `Backends/Postgres` module and some IR-related code to the `Hasura.RQL.IR` module, and it simplifies various type class function signatures, as a backend no longer has to handle remote joins.

1. The third one fixes partial case matches that, for some weird reason, weren't shown as warnings before this refactor.

1. The fourth one generalizes the validation logic of remote relationships and implements the `scalarTypeGraphQLName` function on SQL Server and BigQuery, which is used by the validation logic. This enables remote relationships on BigQuery and SQL Server.

https://github.com/hasura/graphql-engine-mono/pull/1497

GitOrigin-RevId: 77dd8eed326602b16e9a8496f52f46d22b795598
2021-06-11 03:27:39 +00:00