1 Server code architecture

The Hasura GraphQL Engine's backend server is written in Haskell. For more details on building the server code, see server/CONTRIBUTING.md.

Command

The server has a command line interface with 5 commands. They are

  1. serve - Start HTTP and WebSocket servers
  2. export - Export metadata in JSON
  3. clean - Clean metadata
  4. execute - Execute a JSON Query
  5. version - Print server version

The optparse-applicative package is used to generate and parse CLI commands and options; a minimal sketch of such a parser follows the list below.

In file src-exec/Init.hs:

  • RawHGECommand -> parsed command from CLI
  • parseHGECommand -> raw command parser
  • HGECommand -> resolved command with environment variables
  • mkHGEOptions -> make command with environment variables
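
For illustration only, a minimal optparse-applicative parser for the five commands might look like the sketch below. The names and the single --server-port option are stand-ins, not the actual definitions from src-exec/Init.hs.

-- A minimal sketch: the real parser handles many more options and also
-- resolves values from environment variables.
import Options.Applicative

data Command
  = Serve Int   -- port (many more ServeOptions in reality)
  | Export
  | Clean
  | Execute
  | Version
  deriving (Show)

commandParser :: Parser Command
commandParser = subparser
  (  command "serve"   (info serveParser    (progDesc "Start HTTP and WebSocket servers"))
  <> command "export"  (info (pure Export)  (progDesc "Export metadata in JSON"))
  <> command "clean"   (info (pure Clean)   (progDesc "Clean metadata"))
  <> command "execute" (info (pure Execute) (progDesc "Execute a JSON query"))
  <> command "version" (info (pure Version) (progDesc "Print server version"))
  )
  where
    serveParser = Serve <$> option auto
      (long "server-port" <> metavar "PORT" <> value 8080 <> help "Port to serve on")

main :: IO ()
main = execParser (info (commandParser <**> helper) fullDesc) >>= print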

Serve

This command starts the GraphQL Engine backend server. It accepts ServeOptions through the command line. Example:

graphql-engine --database-url postgresql://postgres@localhost:5432/db serve --server-port 8181 --enable-console --admin-secret mySecret

This command performs the following actions:

  1. Resolve Auth Mode using admin secret or auth webhook or JWT config.
  2. Make Postgres connection info.
  3. Create Postgres connection pool using connection info generated in 2
  4. Use pool created in 3 to initialise state
  5. Create server application and cache IO ref
  6. Start schema sync
  7. Start event trigger threads
  8. Start update checker
  9. Start telemetry reporter
  10. Start the application created in 5

Auth

The server operates in any one of four authorization modes. They are

  1. No Auth
  2. Only Admin Secret
  3. Admin Secret and Webhook
  4. Admin Secret and JWT

In file src-lib/Hasura/Server/Auth.hs:

  • AuthMode -> type represents authorization mode
  • mkAuthMode -> make authorization mode
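
For intuition, a rough approximation of the AuthMode type is sketched below; the real type carries richer configuration (for example the unauthorized role and parsed JWT settings), so the fields here are simplified stand-ins.

import Data.Text (Text)

-- Illustrative approximation of the four authorization modes.
data AuthMode
  = AMNoAuth                         -- 1. no auth
  | AMAdminSecret Text               -- 2. only admin secret
  | AMAdminSecretAndHook Text Text   -- 3. admin secret + webhook URL
  | AMAdminSecretAndJWT Text Text    -- 4. admin secret + JWT config
  deriving (Show, Eq)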

Postgres Connection Info

The server uses an in-house custom library pg-client-hs to establish a connection with Postgres. The library provides an interface to create Postgres transactions and run them. Check the library for more details.

The server accepts Postgres connection parameters through the CLI, or the user can provide a database URI string via the --database-url option.

Postgres State Init

GraphQL Engine server reserves the hdb_catalog and hdb_views schemas to store application state and to define custom triggers and views. Whenever the server starts, it checks that these schemas are present; if not, it defines the schemas, tables, views and functions. Refer to the src-rsr/initialise.sql file.

Postgres State Migrate

The hdb_catalog Postgres schema is versioned. You can modify the schema by bumping up the version. The migration SQL scripts are found in the src-rsr folder.

On startup, the server queries the version from the hdb_catalog.hdb_version table. If it is lower than the version the server expects, the server migrates the hdb_catalog schema. If it is greater than the expected version, the server exits with an error.

In file src-exec/Migrate.hs:

  • migrateCatalog -> migrates catalog if required
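
A minimal sketch of this version check is shown below. The helper names, the Int version representation and the hard-coded latest version are hypothetical, and the real logic in Migrate.hs also runs the actual SQL migrations.

-- Illustrative: decide what to do based on the catalog version stored in
-- hdb_catalog.hdb_version.
data MigrationDecision
  = AlreadyUpToDate
  | RunMigrations Int Int   -- migrate from, to
  | NewerThanServer Int     -- stored version is ahead; the server must exit
  deriving (Show, Eq)

latestCatalogVersion :: Int
latestCatalogVersion = 16  -- hypothetical; tracks the latest src-rsr/migrate_from_*.sql

decideMigration :: Int -> MigrationDecision
decideMigration stored
  | stored == latestCatalogVersion = AlreadyUpToDate
  | stored <  latestCatalogVersion = RunMigrations stored latestCatalogVersion
  | otherwise                      = NewerThanServer stored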

Schema Sync

When multiple instances of GraphQL engine are running on the same database, we need to sync the metadata changes across all instances. This will facilitate horizontal scaling of GraphQL engine server.

The server uses Postgres' listen and notify feature. For any metadata request which modifies the state, the server makes an entry in the hdb_catalog.hdb_schema_update_event table with its instance uuid. The server listens for events on that table. It compares the uuid present in an event payload with its own uuid; if the event came from another instance, the server re-builds the metadata and replaces it in the cache reference. The server performs schema syncing in separate threads spawned via forkIO.

In file src-lib/Hasura/Server/SchemaUpdate.hs:

  • startSchemaSync -> start schema syncing threads
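
The core of the sync decision is the uuid comparison described above. A simplified sketch, with stand-in types, is given below; the real event payload and listener live in SchemaUpdate.hs.

import Data.Text (Text)

newtype InstanceId = InstanceId Text deriving (Eq, Show)

-- Stand-in for the real schema update event payload.
data SchemaUpdateEvent = SchemaUpdateEvent
  { eventInstanceId :: InstanceId   -- uuid of the instance that made the change
  } deriving (Show)

-- Rebuild the metadata only if the event originated from a *different*
-- instance; the local instance already updated its own cache while serving
-- the metadata request.
shouldRebuildCache :: InstanceId -> SchemaUpdateEvent -> Bool
shouldRebuildCache ownId ev = eventInstanceId ev /= ownId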

Create application and cache IO Ref

The Hasura GraphQL Engine uses the Spock-core framework to build the HTTP server and wai-websockets to build the WebSocket server. Learn more about the cache IO ref in the Metadata Reference section.

In src-lib/Hasura/Server/App.hs:

  • mkWaiApp -> Builds WAI application, Cache Reference

HTTP

The HTTP API has the following routes:

  • GET /console -> serves console if enabled
  • GET /healthz -> reports server health status
  • GET /v1/version -> fetch server version
  • POST /v1/query -> JSON Query API
  • POST /v1/graphql -> GraphQL API
  • POST /v1/graphql/explain -> Explain a GraphQL query
  • POST /v1alpha1/pg_dump -> PG Dump of database

Developer APIs:

  • GET /dev/plan_cache -> fetch cached query plans
  • GET /dev/subscriptions -> fetch active subscriptions
  • GET /dev/subscriptions/extended -> fetch extended details of active subscriptions

WebSocket

The WebSocket server is used to perform GraphQL queries and subscriptions.

See file src-lib/Hasura/GraphQL/Transport/WebSocket.hs.

Event Triggers

The server creates two separate threads to fetch events from Postgres and process (consume) those events. An STM queue is used to propagate events between these two threads.

See file src-lib/Hasura/Events/Lib.hs for more details.
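
A generic sketch of this producer/consumer pattern with an STM queue is shown below. It is illustrative only: the real implementation batches event fetches from Postgres, delivers events to webhooks over HTTP and handles retries.

import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM (atomically)
import Control.Concurrent.STM.TQueue (newTQueueIO, readTQueue, writeTQueue)
import Control.Monad (forever)

type Event = String  -- stand-in for the real event payload

main :: IO ()
main = do
  queue <- newTQueueIO
  -- fetcher thread: would poll Postgres for pending events
  _ <- forkIO $ forever $ do
    let fetched = ["event-1", "event-2"] :: [Event]   -- placeholder for a DB fetch
    mapM_ (atomically . writeTQueue queue) fetched
    threadDelay 1000000                               -- poll interval: 1 second
  -- processor thread: would deliver each event to its webhook
  _ <- forkIO $ forever $ do
    ev <- atomically (readTQueue queue)
    putStrLn ("processing " ++ ev)                    -- placeholder for HTTP delivery
  threadDelay 5000000                                 -- keep the demo alive briefly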

Update Checker

The server creates a thread that checks for updates once a day and logs a message if a new version is available.

See file src-lib/Hasura/Server/CheckUpdates.hs for more details.

Telemetry

If telemetry is enabled, the server creates a thread to report anonymized metrics to the telemetry server regarding the usage of various features of Hasura.

See file src-lib/Hasura/Server/Telemetry.hs for more details.

Metadata

The Hasura GraphQL Engine metadata is the application state, or cache, which stores essential information about

  • Tables/Views
  • SQL Functions
  • Relationships
  • Permissions
  • Event Triggers
  • Remote GraphQL Schemas
  • GraphQL Query Collections
  • Query Templates

Each one of the above is also referred to as a metadata object. The Metadata APIs are used to add, drop or manage metadata. Only tables, views and SQL functions present in the metadata are accessible via the GraphQL and JSON Query APIs. The role-based GraphQL schema is auto-generated from metadata.

The type SchemaCache (see the src-lib/Hasura/RQL/Types/SchemaCache.hs file) is the internal representation of the cached metadata.

The server stores the raw metadata in the hdb_catalog schema of Postgres.

Build Metadata

On start-up, on an RQReloadMetadata API call, and in case of renames via RQRunSql, the server fetches the metadata stored in the hdb_catalog schema and builds the SchemaCache. The cache thus generated replaces the one in the IO reference.

In file src-lib/Hasura/RQL/DDL/Schema/Table.hs:

  • buildSchemaCache -> build metadata from postgres and remote servers

Inconsistent Metadata

A metadata object is said to be inconsistent if an exception occurs while building it from the raw data in Postgres or while stitching the schema from remote servers. In a few cases, the server should build metadata while ignoring inconsistent objects.

The server should ignore inconsistencies in the following cases:

  1. On startup
  2. RQReloadMetadata query type

The server should consider inconsistencies in the following cases:

  1. Migrating catalog on startup
  2. Renaming tables, columns and relationships
  3. RQReplaceMetadata query type

In file src-lib/Hasura/RQL/DDL/Schema/Table.hs:

  • buildSchemaCacheStrict -> build metadata and throw an exception on any inconsistency

Metadata Reference

The server stores metadata in an IO reference along with its version. Modification of the metadata in the reference is guarded with a lock, which ensures each request sees the latest metadata.

In file src-lib/Hasura/Server/App.hs:

  • SchemaCacheRef -> type represents cache reference
  • withSCUpdate -> function used to update cache reference
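
A simplified sketch of this pattern, with stand-in types, is given below: an IORef holds the cache and an MVar acts as a lock so that concurrent metadata requests publish updates one at a time. The real SchemaCacheRef also tracks a cache version.

import Control.Concurrent.MVar (MVar, newMVar, withMVar)
import Data.IORef (IORef, newIORef, writeIORef)

data SchemaCache = SchemaCache   -- stand-in for the real schema cache

data SchemaCacheRef = SchemaCacheRef
  { scrLock  :: MVar ()
  , scrCache :: IORef SchemaCache
  }

newSchemaCacheRef :: SchemaCache -> IO SchemaCacheRef
newSchemaCacheRef sc = SchemaCacheRef <$> newMVar () <*> newIORef sc

-- Run an action that produces a new cache, then publish it while holding the lock.
withSCUpdate :: SchemaCacheRef -> IO (a, SchemaCache) -> IO a
withSCUpdate ref action =
  withMVar (scrLock ref) $ \_ -> do
    (res, newCache) <- action
    writeIORef (scrCache ref) newCache
    pure res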

Metadata Dependencies

In Hasura metadata, one object can depend on others. Adding, removing or modifying an object may therefore make other objects inconsistent. For example, if a column is used to define a relationship and the user tries to drop that column using RQRunSql, the relationship becomes inconsistent. We should track metadata dependencies and check them when necessary. This is represented through a dependency map, which is a hash map from a dependent to the set of its dependencies.

type DepMap = M.HashMap SchemaObjId (HS.HashSet SchemaDependency)

On dropping or modifying a dependency, the server obtains all dependents and reports them to the client with an API exception.
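
The sketch below illustrates how direct dependents can be read off such a map; SchemaObjId and SchemaDependency are simplified stand-ins, and the real helper functions listed below work over much richer object and dependency types.

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import qualified Data.HashMap.Strict as M
import qualified Data.HashSet as HS
import Data.Hashable (Hashable)

newtype SchemaObjId = SchemaObjId String
  deriving (Eq, Show, Hashable)

newtype SchemaDependency = SchemaDependency { depObjId :: SchemaObjId }
  deriving (Eq, Show, Hashable)

type DepMap = M.HashMap SchemaObjId (HS.HashSet SchemaDependency)

-- All objects whose dependency set mentions the given object (direct dependents only).
getDirectDependents :: SchemaObjId -> DepMap -> [SchemaObjId]
getDirectDependents objId depMap =
  [ dependent
  | (dependent, deps) <- M.toList depMap
  , any ((== objId) . depObjId) (HS.toList deps)
  ]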

In file src-lib/Hasura/RQL/Types/SchemaCache.hs:

  • DepMap -> type that represents dependency map
  • getDependentObjs -> get dependents
  • addToDepMap -> add dependencies

See file src-lib/Hasura/RQL/DDL/Deps.hs for functions related to dependencies.

JSON Query

There are two kinds of JSON query APIs available via the /v1/query route. They are

  1. Metadata APIs
  2. Table data CRUD

See type RQLQuery in src-lib/Hasura/Server/Query.hs file.

Query types RQInsert, RQSelect, RQUpdate, RQDelete and RQCount are JSON queries and the rest are metadata queries. The RQBulk query type essentially accepts queries in bulk as an array.

Metadata APIs

These APIs are used to modify the metadata in both cache and Postgres hdb_catalog.

Users can manage metadata via

  • RQReplaceMetadata -> Import and apply metadata
  • RQExportMetadata -> Export metadata
  • RQClearMetadata -> Clear user defined metadata
  • RQReloadMetadata -> Rebuild metadata from Postgres and add it to cache

Each metadata query goes through two phases:

1. Validation: In this phase, the incoming request is validated. For example, if a request is made to track a table via RQTrackTable, the server checks whether the table is already tracked by looking it up in the schema cache. See trackExistingTableOrViewP1 in the src-lib/Hasura/RQL/DDL/Schema/Table.hs file for tracking a table.

2. Execution: In this phase, the server fetches the required information from Postgres to build the internal representation of the metadata object. For a table this is TableInfo (see the src-lib/Hasura/RQL/Types/SchemaCache.hs file). The metadata object is then added to the cache along with its dependencies. Learn more about dependencies in the Metadata Dependencies section.

If execution is successful, we get a success message JSON and a modified schema cache. The modified schema cache replaces the one in the cache IO reference and the success message is sent to the client.

[Metadata Request] -> validate with cache -> add/drop/modify catalog & cache -> [Success | Error]
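
The shape of these two phases can be sketched with a hypothetical track-table handler, shown below. The names and types are illustrative stand-ins for the real P1/P2 functions and the real SchemaCache.

newtype TableName = TableName String deriving (Eq, Show)

newtype SchemaCache = SchemaCache { trackedTables :: [TableName] } deriving (Show)

-- Phase 1: validate the request against the current cache.
validateTrackTable :: SchemaCache -> TableName -> Either String TableName
validateTrackTable cache tbl
  | tbl `elem` trackedTables cache = Left ("table already tracked: " ++ show tbl)
  | otherwise                      = Right tbl

-- Phase 2: execute, producing a success message and the modified cache.
executeTrackTable :: SchemaCache -> TableName -> (String, SchemaCache)
executeTrackTable cache tbl =
  ("success", cache { trackedTables = tbl : trackedTables cache })

-- Validation followed by execution; the modified cache would then be
-- published to the cache IO reference.
runTrackTable :: SchemaCache -> TableName -> Either String (String, SchemaCache)
runTrackTable cache tbl = executeTrackTable cache <$> validateTrackTable cache tbl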

Metadata objects and related modules:

  • tables -> src-lib/Hasura/RQL/DDL/Schema/Table.hs
  • functions -> src-lib/Hasura/RQL/DDL/Schema/Function.hs
  • relationships -> src-lib/Hasura/RQL/DDL/Relationship.hs
  • permissions -> src-lib/Hasura/RQL/DDL/Permission.hs
  • remote schemas -> src-lib/Hasura/RQL/DDL/RemoteSchema.hs
  • event triggers -> src-lib/Hasura/RQL/DDL/EventTrigger.hs
  • query collections -> src-lib/Hasura/RQL/DDL/QueryCollection.hs
  • query templates -> src-lib/Hasura/RQL/DDL/QueryTemplate.hs
  • metadata management -> src-lib/Hasura/RQL/DDL/Metadata.hs

Table data CRUD

These APIs provide a custom JSON syntax to perform CRUD operations on table data. The console uses these APIs to fetch data from Postgres such as tables or functions to be tracked, foreign keys, Postgres type information, etc. The incoming JSON payload is validated and resolved to an internal AST by enforcing the permission rules for the role. The internal AST is compiled to the SQL AST, which is convertible to a SQL string.

[JSON Request] -> validation with permission -> [Internal AST] -> [SQL AST] -> [SQL String]

Query types and related module:

  • select -> src-lib/Hasura/RQL/DML/Select.hs
  • delete -> src-lib/Hasura/RQL/DML/Delete.hs
  • update -> src-lib/Hasura/RQL/DML/Update.hs
  • insert -> src-lib/Hasura/RQL/DML/Insert.hs
  • count -> src-lib/Hasura/RQL/DML/Count.hs

The SQL AST

The GraphQL Engine, in a nutshell, is a GraphQL/JSON-to-SQL compiler. All queries and mutations are eventually converted to a custom abstract syntax tree (AST) which is encodable to a SQL string via the ToSQL type class.

In file src-lib/Hasura/SQL/DML.hs:

  • SQLExp -> the AST for Postgres SQL.
  • Select -> represents select SQL statement.

The ToSQL type class

Any type which is convertible to a SQL string implements this class.

In file src-lib/Hasura/SQL/Types.hs:

class ToSQL a where
  toSQL :: a -> TB.Builder

Where TB.Builder is a performant Text builder.
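
To make the idea concrete, here is a toy expression type with a ToSQL-style instance. It is illustrative only: it uses the standard lazy Text builder and made-up constructors, not the actual SQLExp/Select types from src-lib/Hasura/SQL/DML.hs.

{-# LANGUAGE OverloadedStrings #-}
import           Data.Text (Text)
import qualified Data.Text.Lazy as TL
import qualified Data.Text.Lazy.Builder as TB

class ToSQL a where
  toSQL :: a -> TB.Builder

-- A toy SQL expression AST.
data SQLExp
  = SELit Text             -- a literal, e.g. 'hello'
  | SEIdentifier Text      -- a column or table identifier
  | SEEq SQLExp SQLExp     -- equality comparison
  deriving (Show)

instance ToSQL SQLExp where
  toSQL (SELit t)        = "'" <> TB.fromText t <> "'"
  toSQL (SEIdentifier t) = "\"" <> TB.fromText t <> "\""
  toSQL (SEEq l r)       = "(" <> toSQL l <> " = " <> toSQL r <> ")"

renderSQL :: ToSQL a => a -> Text
renderSQL = TL.toStrict . TB.toLazyText . toSQL

-- > renderSQL (SEEq (SEIdentifier "name") (SELit "hasura"))
-- "(\"name\" = 'hasura')"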

GraphQL

The Hasura GraphQL Engine server generates instant GraphQL APIs on top of Postgres tables, views and SQL functions. The server also stitches in remote schemas added to the metadata.

Parser

We use the custom library graphql-parser-hs, which provides the GraphQL AST and parser.

Schema Generation

Our entire schema is auto-generated from metadata. We generate a role-based schema in which permissions are enforced. For example, if a role doesn't have permission to select a table, the schema related to that table is not present in the generated GraphQL schema. For each role, a GraphQL context is generated and stored in the SchemaCache value.

In file src-lib/Hasura/GraphQL/Context.hs:

  • GCtx -> The GraphQL Context
  • GCtxMap -> Role to context map
  • OpCtxMap -> Operation context map

In file src-lib/Hasura/GraphQL/Schema.hs:

  • mkGCtxMap -> Builds GraphQL context

Validation

The incoming GraphQL request is validated along with its variables and converted to an annotated AST.

In file src-lib/Hasura/GraphQL/Validate.hs:

  • validateGQ -> validates the GraphQL request

In file src-lib/Hasura/GraphQL/Validate/Field.hs:

  • SelSet -> The annotated selection set

Execution

After validation, the annotated selection set (SelSet) is executed field by field. The operation context (OpCtxMap), which lives in the GraphQL context (GCtx), has the information required to resolve a query or mutation.

In file src-lib/Hasura/GraphQL/Execute/Query.hs:

  • convertQuerySelSet -> Resolve query selection set
  • queryOpFromPlan -> Resolve cached query

In file src-lib/Hasura/GraphQL/Execute.hs:

  • resolveMutSelSet -> Resolve mutation selection set

Query Caching

All incoming queries and subscriptions can be cached only if they satisfy the following conditions:

  1. All variable types are primitive scalars
  2. Each top-level field uses all available variables

In file src-lib/Hasura/GraphQL/Execute/Query.hs:

  • QueryPlan -> GraphQL query compiled to SQL query
  • ReusableQueryPlan -> Cacheable query
  • getReusablePlan -> Make a cacheable query if possible
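
A simplified sketch of these two conditions is given below; the variable and field representations are hypothetical stand-ins for the validated AST that getReusablePlan actually inspects.

import qualified Data.HashMap.Strict as M
import qualified Data.HashSet as HS
import Data.Text (Text)

data VarType = ScalarVar | ObjectVar | ArrayVar deriving (Eq, Show)

-- Variables declared by the operation, with their (simplified) types.
type Variables = M.HashMap Text VarType

-- Variables actually used by each top-level field.
type FieldVars = [(Text, HS.HashSet Text)]

isReusable :: Variables -> FieldVars -> Bool
isReusable vars fields =
     all (== ScalarVar) (M.elems vars)                              -- 1. only primitive scalar variables
  && all (\(_, used) -> used == HS.fromList (M.keys vars)) fields   -- 2. each field uses all variables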

Introspection

The server supports the native GraphQL introspection system.

In file src-lib/Hasura/GraphQL/Resolve/Introspect.hs:

  • schemaR -> Resolve __schema field
  • typeR -> Resolve __type field

Subscription

TODO

GraphQL Explain

The server supports fetching the Postgres query plan for a GraphQL query via the /v1/graphql/explain endpoint. The SQL thus generated is executed with EXPLAIN ANALYZE.

[GraphQL Query Request] -> validate -> resolve to SQL -> execute SQL with EXPLAIN ANALYZE

In file src-lib/Hasura/GraphQL/Explain.hs:

  • GQLExplain -> Explain request
  • explainGQLQuery -> Execute explain request

PG Dump

The server executes pg_dump and returns the output. See the src-rsr/run_pg_dump.sh script.

In file src-lib/Hasura/Server/PGDump.hs:

  • PGDumpReqBody -> PG Dump API request
  • execPGDump -> Handler for PG Dump API

Source code tree

Executable

src-exec
├── Main.hs # the main module
├── Migrate.hs # catalog and metadata migrations
└── Ops.hs # operations

Library

src-lib
├── Data
│   ├── Aeson
│   │   └── Extended.hs
│   ├── HashMap
│   │   └── Strict
│   │       └── InsOrd
│   │           └── Extended.hs
│   ├── Parser
│   │   └── JSONPath.hs # JSON path parser
│   ├── Sequence
│   │   └── NonEmpty.hs # Non empty sequence
│   ├── TByteString.hs
│   └── Text
│       └── Extended.hs
├── Hasura
│   ├── Cache.hs # Abstractions for Unbounded cache
│   ├── Db.hs # Database operations
│   ├── EncJSON.hs # JSON text builder
│   ├── Events # Event triggers related
│   │   ├── HTTP.hs
│   │   └── Lib.hs
│   ├── GraphQL # All GraphQL related
│   │   ├── Context.hs
│   │   ├── Execute
│   │   │   ├── LiveQuery # Subscriptions
│   │   │   │   ├── Fallback.hs
│   │   │   │   ├── Multiplexed.hs
│   │   │   │   └── Types.hs
│   │   │   ├── LiveQuery.hs
│   │   │   ├── Plan.hs
│   │   │   └── Query.hs # HTTP query
│   │   ├── Execute.hs
│   │   ├── Explain.hs # Analyze GraphQL query
│   │   ├── RemoteServer.hs # Remote schemas
│   │   ├── Resolve # GraphQL query resolving
│   │   │   ├── BoolExp.hs
│   │   │   ├── Context.hs
│   │   │   ├── ContextTypes.hs
│   │   │   ├── InputValue.hs
│   │   │   ├── Insert.hs
│   │   │   ├── Introspect.hs
│   │   │   ├── Mutation.hs
│   │   │   └── Select.hs
│   │   ├── Resolve.hs
│   │   ├── Schema.hs # GraphQL schema generation
│   │   ├── Transport
│   │   │   ├── HTTP
│   │   │   │   └── Protocol.hs
│   │   │   ├── HTTP.hs
│   │   │   ├── WebSocket
│   │   │   │   ├── Protocol.hs
│   │   │   │   └── Server.hs
│   │   │   └── WebSocket.hs
│   │   ├── Utils.hs
│   │   ├── Validate # GraphQL query validation
│   │   │   ├── Context.hs
│   │   │   ├── Field.hs
│   │   │   ├── InputValue.hs
│   │   │   └── Types.hs
│   │   └── Validate.hs
│   ├── HTTP.hs
│   ├── Logging.hs # Global logging related
│   ├── Prelude.hs # Custom prelude
│   ├── RQL # JSON queries
│   │   ├── DDL # Metadata related
│   │   │   ├── Deps.hs
│   │   │   ├── EventTrigger.hs
│   │   │   ├── Headers.hs
│   │   │   ├── Metadata.hs
│   │   │   ├── Permission
│   │   │   │   ├── Internal.hs
│   │   │   │   └── Triggers.hs
│   │   │   ├── Permission.hs
│   │   │   ├── QueryCollection.hs
│   │   │   ├── QueryTemplate.hs
│   │   │   ├── Relationship
│   │   │   │   ├── Rename.hs
│   │   │   │   └── Types.hs
│   │   │   ├── Relationship.hs
│   │   │   ├── RemoteSchema.hs
│   │   │   ├── Schema
│   │   │   │   ├── Diff.hs
│   │   │   │   ├── Function.hs
│   │   │   │   ├── Rename.hs
│   │   │   │   └── Table.hs
│   │   │   └── Utils.hs
│   │   ├── DML # JSON CRUD queries
│   │   │   ├── Count.hs
│   │   │   ├── Delete.hs
│   │   │   ├── Insert.hs
│   │   │   ├── Internal.hs
│   │   │   ├── Mutation.hs
│   │   │   ├── QueryTemplate.hs
│   │   │   ├── Returning.hs
│   │   │   ├── Select
│   │   │   │   ├── Internal.hs
│   │   │   │   └── Types.hs
│   │   │   ├── Select.hs
│   │   │   └── Update.hs
│   │   ├── GBoolExp.hs # SQL boolean expressions
│   │   ├── Instances.hs
│   │   ├── Types
│   │   │   ├── BoolExp.hs
│   │   │   ├── Catalog.hs
│   │   │   ├── Common.hs
│   │   │   ├── DML.hs
│   │   │   ├── Error.hs
│   │   │   ├── EventTrigger.hs
│   │   │   ├── Metadata.hs
│   │   │   ├── Permission.hs
│   │   │   ├── QueryCollection.hs
│   │   │   ├── RemoteSchema.hs
│   │   │   ├── SchemaCache.hs
│   │   │   └── SchemaCacheTypes.hs
│   │   └── Types.hs
│   ├── Server # API server
│   │   ├── App.hs # Creates Http and Websocket app
│   │   ├── Auth # Authorization related
│   │   │   ├── JWT # JSON Web Token
│   │   │   │   ├── Internal.hs
│   │   │   │   └── Logging.hs
│   │   │   └── JWT.hs
│   │   ├── Auth.hs
│   │   ├── CheckUpdates.hs # Checks if any new version
│   │   ├── Config.hs # Get server configuration
│   │   ├── Context.hs
│   │   ├── Cors.hs
│   │   ├── Init.hs # Initialise server and CLI parser
│   │   ├── Logging.hs # Server logging
│   │   ├── Middleware.hs
│   │   ├── PGDump.hs # Dump postgres
│   │   ├── Query.hs # JSON query handler
│   │   ├── SchemaUpdate.hs # Horizontal scaling
│   │   ├── Telemetry.hs # Reports telemetry
│   │   ├── Utils.hs
│   │   └── Version.hs # Server version
│   └── SQL # SQL related
│       ├── DML.hs # AST
│       ├── GeoJSON.hs # JSON representation of geography/geometry types
│       ├── Rewrite.hs # Rewrite SQL Select AST with modified identifiers
│       ├── Time.hs # Postgres time
│       ├── Types.hs # Haskell types for Postgres
│       └── Value.hs # Postgres SQL value parser
└── Network
    └── URI
        └── Extended.hs # instances for URI

Resources

src-rsr
├── catalog_metadata.sql # SQL to fetch server metadata from Postgres
├── console.html # console HTML template
├── hdb_metadata.yaml # System defined Metadata
├── initialise.sql # Init SQL script
├── insert_trigger.sql.j2 # Insert permission trigger template
├── introspection.json # The GraphQL introspection query payload
├── migrate_from_10_to_11.sql # Migration scripts
├── migrate_from_11_to_12.sql
├── migrate_from_12_to_13.sql
├── migrate_from_13_to_14.sql
├── migrate_from_14_to_15.sql
├── migrate_from_15_to_16.sql
├── migrate_from_1.sql
├── migrate_from_4_to_5.sql
├── migrate_from_5_to_6.sql
├── migrate_from_6_to_7.sql
├── migrate_from_7_to_8.sql
├── migrate_from_8_to_9.sql
├── migrate_from_9_to_10.sql
├── migrate_metadata_from_15_to_16.yaml
├── migrate_metadata_from_1.yaml
├── migrate_metadata_from_4_to_5.yaml
├── migrate_metadata_from_7_to_8.yaml
├── migrate_metadata_from_8_to_9.yaml
├── run_pg_dump.sh # Script to dump postgres schema
├── schema.graphql # Default GraphQL schema
├── table_meta.sql # Fetch a table's metadata
└── trigger.sql.j2 # Event trigger template
  • All files with migrate_from_* are catalog schema migrations.
  • All files with migrate_metadata_from_* are metadata migrations.
  • All files with *.j2 are Jinja templates.