fix typos in documentation (#2562)

Authored by Marion Schleifer on 2019-09-11 09:17:14 +02:00, committed by Shahidh K Muhammed
parent 1487cc1d26
commit 480b34ea5e
133 changed files with 775 additions and 1215 deletions

View File

@@ -372,7 +372,9 @@ class Schema extends Component {
 Track
 </Button>
 </div>
-<div className={styles.display_inline}>{displayTableName(table)}</div>
+<div className={styles.display_inline}>
+  {displayTableName(table)}
+</div>
 {gqlCompatibilityWarning}
 </div>
 );

View File

@@ -18,7 +18,7 @@ change/task that requires modifications in some component of GraphQL engine and
 Feel free to open pull requests to address these issues or to add/fix docs features/content, even if a
 corresponding issue doesn't exist. If you are unsure about whether to go ahead and work on something like
-the latter, please get in touch with the maintainers in the `GraphQL Engine`->`contrib` channel in the
+the latter, please get in touch with the maintainers in the `GraphQL engine`->`contrib` channel in the
 community [Discord](https://discord.gg/vBPpJkS).
 ## Setup requirements
@@ -101,6 +101,7 @@ without losing any visible quality.
 ### Syntax
 - Ensure heading underlines are the same length as the headings. Short underlines will throw warnings during builds.
+- Use bold in headings in place of string literals for aesthetics (i.e. ** in place of ``)
 - While adding code blocks ensure the right language type is set. Sometimes adding placeholders breaks the language's
 syntax in which case you'll have to set the language type to `none` to avoid warnings during builds.

View File

@@ -1,6 +1,6 @@
 # Hasura GraphQL Engine Docs
-The documentation accompanying the Hasura GraphQL Engine is written with
+The documentation accompanying the Hasura GraphQL engine is written with
 [Sphinx](http://www.sphinx-doc.org/en/master/) and deployed to
 [docs.hasura.io](https://docs.hasura.io).

View File

@@ -138,7 +138,7 @@
 </div>
 <br/>
 <div>
-<a target="_blank" href="https://github.com/hasura/graphql-engine">Hasura GraphQL Engine</a> is open source. <a target="_blank" href="https://github.com/hasura/graphql-engine/blob/master/LICENSE">See license</a>
+<a target="_blank" href="https://github.com/hasura/graphql-engine">Hasura GraphQL engine</a> is open source. <a target="_blank" href="https://github.com/hasura/graphql-engine/blob/master/LICENSE">See license</a>
 </div>
 <div>
 Powered by <a target="_blank" href="http://www.sphinx-doc.org">Sphinx</a>.

View File

@@ -4,14 +4,14 @@
 <div class="box_wrapper">
 <div class="box_head_wrapper">
 <div class="box_head">
-<img src="{{ pathto('_images/landing/graphql.svg', 1) }}" alt="GraphQL Engine"/>
-<h3 class="head_wrapper">1. The Hasura GraphQL Engine</h3>
+<img src="{{ pathto('_images/landing/graphql.svg', 1) }}" alt="GraphQL engine"/>
+<h3 class="head_wrapper">1. The Hasura GraphQL engine</h3>
 </div>
 <div class="view_all_wrapper small_content">
 </div>
 </div>
 <div class="small_content space_wrapper text_left">
-This guide covers all Hasura GraphQL Engine concepts and features.
+This guide covers all Hasura GraphQL engine concepts and features.
 <br/> <br/>
 </div>
 <div class="sign_in_wrapper space_wrapper">

View File

@@ -62,4 +62,4 @@ the list of enabled APIs.
 --enabled-apis="graphql,metadata"
 HASURA_GRAPHQL_ENABLED_APIS="graphql,metadata"
-See :doc:`../deployment/graphql-engine-flags/reference` for info on setting the above flag/env var
+See :doc:`../deployment/graphql-engine-flags/reference` for info on setting the above flag/env var.
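For reference, a minimal sketch of how the flag and environment variable above are typically supplied when starting the server; the database URL, port mapping and image tag are placeholders, not part of the diffed docs:

.. code-block:: bash

   # flag form, passed to the serve command
   graphql-engine --database-url "postgres://user:password@localhost:5432/mydb" \
     serve --enabled-apis="graphql,metadata"

   # env var form, e.g. for a Docker based deployment
   docker run -p 8080:8080 \
     -e HASURA_GRAPHQL_DATABASE_URL="postgres://user:password@host:5432/mydb" \
     -e HASURA_GRAPHQL_ENABLED_APIS="graphql,metadata" \
     hasura/graphql-engine:latest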

View File

@@ -11,12 +11,12 @@ All GraphQL requests for queries, subscriptions and mutations are made to the Gr
 Endpoint
 --------
-All requests are ``POST`` requests to ``/v1/graphql`` (or ``/v1alpha1/graphql``) endpoint.
+All requests are ``POST`` requests to the ``/v1/graphql`` (or ``/v1alpha1/graphql``) endpoint.
 .. note::
-``/v1/graphql`` endpoint returns HTTP 200 status codes for all responses.
-This is a **breaking** change from ``/v1alpha1/graphql`` behaviour, where
+The ``/v1/graphql`` endpoint returns HTTP 200 status codes for all responses.
+This is a **breaking** change from the ``/v1alpha1/graphql`` behaviour, where
 request errors and internal errors were responded with 4xx and 5xx status
 codes.
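As a quick sanity check of the endpoint described in this hunk, a request can be issued with curl; the host name, admin secret and the ``author`` table are placeholders assumed for illustration:

.. code-block:: bash

   curl -X POST https://my-hasura.example.com/v1/graphql \
     -H 'Content-Type: application/json' \
     -H 'X-Hasura-Admin-Secret: <admin-secret>' \
     -d '{"query": "query { author { id name } }"}'

Since ``/v1/graphql`` always answers with HTTP 200, errors have to be detected from the ``errors`` key in the JSON body rather than from the status code.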

View File

@@ -8,8 +8,8 @@ API Reference - Mutation
 .. _insert_upsert_syntax:
-Insert/Upsert syntax
---------------------
+Insert / upsert syntax
+----------------------
 .. code-block:: none
@@ -31,19 +31,19 @@ Insert/Upsert syntax
 * - mutation-name
 - false
 - Value
-- Name mutation for observability
+- Name of mutation for observability
 * - mutation-field-name
 - true
 - Value
-- name of the auto-generated mutation field. E.g. *insert_author*
+- Name of the auto-generated mutation field, e.g. *insert_author*
 * - input-object
 - true
 - InputObject_
-- name of the auto-generated mutation field
+- Name of the auto-generated mutation field
 * - mutation-response
 - true
 - MutationResponse_
-- Object to be returned after mutation succeeds.
+- Object to be returned after mutation succeeds
 * - on-conflict
 - false
 - ConflictClause_
@@ -120,11 +120,11 @@ Update syntax
 * - mutation-field-name
 - true
 - Value
-- name of the auto-generated update mutation field. E.g. *update_author*
+- Name of the auto-generated update mutation field, e.g. *update_author*
 * - where-argument
 - true
 - whereArgExp_
-- selection criteria for rows to be updated
+- Selection criteria for rows to be updated
 * - set-argument
 - false
 - setArgExp_
@@ -144,19 +144,19 @@ Update syntax
 * - delete-key-argument
 - false
 - deleteKeyArgExp_
-- key to be deleted in the value of JSONB columns in the table
+- Key to be deleted in the value of JSONB columns in the table
 * - delete-elem-argument
 - false
 - deleteElemArgExp_
-- array element to be deleted in the value of JSONB columns in the table
+- Array element to be deleted in the value of JSONB columns in the table
 * - delete-at-path-argument
 - false
 - deleteAtPathArgExp_
-- element at path to be deleted in the value of JSONB columns in the table
+- Element at path to be deleted in the value of JSONB columns in the table
 * - mutation-response
 - true
 - MutationResponse_
-- Object to be returned after mutation succeeds.
+- Object to be returned after mutation succeeds
 **E.g. UPDATE**:
@@ -199,15 +199,15 @@ Delete syntax
 * - mutation-field-name
 - true
 - Value
-- name of the auto-generated delete mutation field. E.g. *delete_author*
+- Name of the auto-generated delete mutation field, e.g. *delete_author*
 * - where-argument
 - true
 - whereArgExp_
-- selection criteria for rows to delete
+- Selection criteria for rows to delete
 * - mutation-response
 - true
 - MutationResponse_
-- Object to be returned after mutation succeeds.
+- Object to be returned after mutation succeeds
 **E.g. DELETE**:
@@ -234,7 +234,7 @@ Syntax definitions
 .. _MutationResponse:
-Mutation Response
+Mutation response
 ^^^^^^^^^^^^^^^^^
 .. code-block:: none
@@ -305,7 +305,7 @@ E.g.:
 **on_conflict** argument
 ^^^^^^^^^^^^^^^^^^^^^^^^
-Conflict clause is used to convert an *insert* query to an *upsert* query. *Upsert* respects the table's *update*
+The conflict clause is used to convert an *insert* query to an *upsert* query. *Upsert* respects the table's *update*
 permissions before editing an existing row in case of a conflict. Hence the conflict clause is permitted only if a
 table has *update* permissions defined.

View File

@@ -6,8 +6,8 @@ API Reference - Query / Subscription
 :depth: 3
 :local:
-Query/Subscription syntax
--------------------------
+Query / subscription syntax
+---------------------------
 .. code-block:: none
@@ -75,7 +75,7 @@ Object
 .. _SimpleObject:
-Simple Object
+Simple object
 *************
 .. code-block:: none
@@ -133,7 +133,7 @@ E.g.
 .. _AggregateObject:
-Aggregate Object
+Aggregate object
 ****************
 .. code-block:: none
@@ -192,7 +192,7 @@ Aggregate Object
 }
 }
-(For more details on aggregate functions, refer to `Postgres docs <https://www.postgresql.org/docs/current/functions-aggregate.html#FUNCTIONS-AGGREGATE-STATISTICS-TABLE>`__.)
+(For more details on aggregate functions, refer to the `Postgres docs <https://www.postgresql.org/docs/current/functions-aggregate.html#FUNCTIONS-AGGREGATE-STATISTICS-TABLE>`__).
 E.g.
@@ -393,7 +393,7 @@ Operator
 * - ``_has_keys_all``
 - ``?&``
-(For more details on what these operators do, refer to `Postgres docs <https://www.postgresql.org/docs/current/static/functions-json.html#FUNCTIONS-JSONB-OP-TABLE>`__.)
+(For more details on what these operators do, refer to the `Postgres docs <https://www.postgresql.org/docs/current/static/functions-json.html#FUNCTIONS-JSONB-OP-TABLE>`__).
 **PostGIS related operators on GEOMETRY columns:**
@@ -419,12 +419,12 @@ Operator
 * - ``_st_d_within``
 - ``ST_DWithin``
-(For more details on what these operators do, refer to `PostGIS docs <http://postgis.net/workshops/postgis-intro/spatial_relationships.html>`__.)
+(For more details on what these operators do, refer to the `PostGIS docs <http://postgis.net/workshops/postgis-intro/spatial_relationships.html>`__).
 .. note::
 - All operators take a JSON representation of ``geometry/geography`` values as input value.
-- Input value for ``_st_d_within`` operator is an object:
+- The input value for ``_st_d_within`` operator is an object:
 .. parsed-literal::
@@ -578,7 +578,7 @@ Operation aggregate
 {op_name: TableAggOpOrderBy_}
 Available operations are ``sum``, ``avg``, ``max``, ``min``, ``stddev``, ``stddev_samp``,
-``stddev_pop``, ``variance``, ``var_samp`` and ``var_pop``
+``stddev_pop``, ``variance``, ``var_samp`` and ``var_pop``.
 TableAggOpOrderBy
 &&&&&&&&&&&&&&&&&

View File

@@ -34,19 +34,19 @@ GraphQL API
 All GraphQL requests for queries, subscriptions and mutations are made to the GraphQL API.
-See details at :doc:`graphql-api/index`
+See details at :doc:`graphql-api/index`.
 .. _schema_metadata_api:
-Schema / Metadata API
+Schema / metadata API
 ^^^^^^^^^^^^^^^^^^^^^
-Hasura exposes a Schema / Metadata API for managing metadata for permissions/relationships or for directly
+Hasura exposes a schema / metadata API for managing metadata for permissions/relationships or for directly
 executing SQL on the underlying Postgres.
-This is primarily intended to be used as an ``admin`` API to manage Hasura schema and metadata.
+This is primarily intended to be used as an ``admin`` API to manage the Hasura schema and metadata.
-See details at :doc:`schema-metadata-api/index`
+See details at :doc:`schema-metadata-api/index`.
 .. _version_api:
@@ -66,7 +66,7 @@ Health check API
 ^^^^^^^^^^^^^^^^
 A ``GET`` request to the public ``/healthz`` endpoint will respond with ``200``
-if GraphQL Engine is ready to serve requests and there are no inconsistencies
+if the GraphQL engine is ready to serve requests and there are no inconsistencies
 with the metadata. The response will be ``500`` if there are metadata
 inconsistencies and you should use the console or check the server logs to find
 out what the errors are.
@@ -79,7 +79,7 @@ pg_dump API
 The ``/v1alpha1/pg_dump`` is an admin-only endpoint that can be used to execute
 ``pg_dump`` on the Postgres instance connected to Hasura. The ``pg_dump`` CLI
-tool's argument can be passed as POST request body to the API and the response
+tool's argument can be passed as a POST request body to the API and the response
 is sent back to the client.
 See details at :doc:`pgdump`.
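Since the health check endpoint touched in this hunk is public and only its status code matters, a minimal probe can look like the following (the host name is a placeholder):

.. code-block:: bash

   # prints the HTTP status code: 200 when healthy, 500 on metadata inconsistencies
   curl -s -o /dev/null -w "%{http_code}\n" https://my-hasura.example.com/healthz

This is the kind of check that is typically wired into a load balancer or a Kubernetes liveness/readiness probe.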

View File

@@ -13,7 +13,7 @@ Postgres instance that Hasura is configured with.
 The primary motive of this API is to provide convenience methods to initialise migrations from an
 existing Hasura instance. But the functionality can be later expanded to do other things
-such as taking data dump etc.
+such as taking a data dump etc.
 Endpoint
 --------
@@ -94,4 +94,4 @@ state that this API is not enabled. i.e. remove it from the list of enabled APIs
 --enabled-apis="graphql,metadata"
 HASURA_GRAPHQL_ENABLED_APIS="graphql,metadata"
-See :doc:`../deployment/graphql-engine-flags/reference` for info on setting the above flag/env var
+See :doc:`../deployment/graphql-engine-flags/reference` for info on setting the above flag/env var.

View File

@@ -304,8 +304,8 @@ variables:
 Implicitly Supported types
 --------------------------
-All ``Implicit`` types in the :ref:`above table <types_table>` are implicitly supported by GraphQL Engine. You have to
-provide the value in **String**.
+All ``Implicit`` types in the :ref:`above table <types_table>` are implicitly supported by the GraphQL engine. You have to
+provide the value as a **String**.
 E.g. For time without time zone type
@@ -334,6 +334,6 @@ E.g. For macaddr type
 .. Note::
-You can learn more about PostgreSQL data types `here <https://www.postgresql.org/docs/current/static/datatype.html>`__
+You can learn more about PostgreSQL data types `here <https://www.postgresql.org/docs/current/static/datatype.html>`__.

View File

@@ -6,7 +6,7 @@ Schema/Metadata API Reference: Custom Functions
 :depth: 1
 :local:
-Track/untrack a custom SQL function in Hasura GraphQL engine.
+Track/untrack a custom SQL function in the Hasura GraphQL engine.
 Only tracked custom functions are available for querying/mutating/subscribing data over the GraphQL API.
@@ -24,7 +24,7 @@ Currently, only functions which satisfy the following constraints can be exposed
 - **Return type**: MUST be ``SETOF <table-name>``
 - **Argument modes**: ONLY ``IN``
-Add a SQL function ``search_articles``:
+Add an SQL function ``search_articles``:
 .. code-block:: http
@@ -47,7 +47,7 @@ untrack_function
 ``untrack_function`` is used to remove a SQL function from the GraphQL schema.
-Remove a SQL function ``search_articles``:
+Remove an SQL function ``search_articles``:
 .. code-block:: http
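The ``track_function`` / ``untrack_function`` calls referenced in these hunks go to the schema/metadata endpoint; a rough sketch with curl, where the host and admin secret are placeholders and ``search_articles`` in the ``public`` schema is taken from the surrounding docs:

.. code-block:: bash

   # expose the function over the GraphQL API
   curl -X POST https://my-hasura.example.com/v1/query \
     -H 'Content-Type: application/json' \
     -H 'X-Hasura-Admin-Secret: <admin-secret>' \
     -d '{"type": "track_function", "args": {"schema": "public", "name": "search_articles"}}'

   # remove it again
   curl -X POST https://my-hasura.example.com/v1/query \
     -H 'Content-Type: application/json' \
     -H 'X-Hasura-Admin-Secret: <admin-secret>' \
     -d '{"type": "untrack_function", "args": {"schema": "public", "name": "search_articles"}}'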

View File

@@ -95,7 +95,7 @@ Args syntax
 * - replace
 - false
 - Boolean
-- If set to true, event trigger is replaced with the new definition
+- If set to true, the event trigger is replaced with the new definition
 .. _delete_event_trigger:
@@ -200,7 +200,7 @@ OperationSpec
 * - columns
 - true
 - EventTriggerColumns_
-- List of columns or "*" to listen changes on
+- List of columns or "*" to listen to changes
 * - payload
 - false
 - EventTriggerColumns_

View File

@@ -8,12 +8,12 @@ Schema / Metadata API Reference
 :depth: 1
 :local:
-The Schema / Metadata API provides the following features:
+The schema / metadata API provides the following features:
 1. Execute SQL on the underlying Postgres database, supports schema modifying actions.
-2. Modify Hasura metadata (permissions rules and relationships).
+2. Modify Hasura metadata (permission rules and relationships).
-This is primarily intended to be used as an ``admin`` API to manage Hasura schema and metadata.
+This is primarily intended to be used as an ``admin`` API to manage the Hasura schema and metadata.
 Endpoint
 --------
@@ -90,11 +90,11 @@ The various types of queries are listed in the following table:
 * - :ref:`track_function`
 - :ref:`FunctionName <FunctionName>`
-- Add a SQL function
+- Add an SQL function
 * - :ref:`untrack_function`
 - :ref:`FunctionName <FunctionName>`
-- Remove a SQL function
+- Remove an SQL function
 * - :ref:`create_object_relationship`
 - :ref:`create_object_relationship_args <create_object_relationship_syntax>`
@@ -150,27 +150,27 @@ The various types of queries are listed in the following table:
 * - :ref:`create_event_trigger`
 - :ref:`create_event_trigger_args <create_event_trigger_syntax>`
-- Create or replace event trigger
+- Create or replace an event trigger
-* - :ref:`invoke_event_trigger`
-- :ref:`invoke_event_trigger_args <invoke_event_trigger_syntax>`
-- Invoke trigger manually
 * - :ref:`delete_event_trigger`
 - :ref:`delete_event_trigger_args <delete_event_trigger_syntax>`
-- Delete existing event trigger
+- Delete an existing event trigger
+* - :ref:`invoke_event_trigger`
+- :ref:`invoke_event_trigger_args <invoke_event_trigger_syntax>`
+- Invoke a trigger manually
 * - :ref:`add_remote_schema`
 - :ref:`add_remote_schema_args <add_remote_schema_syntax>`
-- Add a remote GraphQL server as remote schema
+- Add a remote GraphQL server as a remote schema
 * - :ref:`remove_remote_schema`
 - :ref:`remove_remote_schema_args <remove_remote_schema_syntax>`
-- Remove existing remote schema
+- Remove an existing remote schema
 * - :ref:`reload_remote_schema`
 - :ref:`reload_remote_schema_args <reload_remote_schema_syntax>`
-- Reload schema of existing remote server
+- Reload schema of an existing remote schema
 * - :ref:`export_metadata`
 - :ref:`Empty Object`
@@ -206,19 +206,19 @@ The various types of queries are listed in the following table:
 * - :ref:`add_query_to_collection`
 - :ref:`add_query_to_collection_args <add_query_to_collection_syntax>`
-- Add a query to given collection
+- Add a query to a given collection
 * - :ref:`drop_query_from_collection`
 - :ref:`drop_query_from_collection_args <drop_query_from_collection_syntax>`
-- Drop a query from given collection
+- Drop a query from a given collection
 * - :ref:`add_collection_to_allowlist`
 - :ref:`add_collection_to_allowlist_args <add_collection_to_allowlist_syntax>`
-- Add a collection to allow-list
+- Add a collection to the allow-list
 * - :ref:`drop_collection_from_allowlist`
 - :ref:`drop_collection_from_allowlist_args <drop_collection_from_allowlist_syntax>`
-- Drop a collection from allow-list
+- Drop a collection from the allow-list
 **See:**
@@ -283,15 +283,15 @@ Error codes
 :widths: 10, 20, 70
 :header-rows: 1
-Disabling Schema/Metadata API
------------------------------
+Disabling schema / metadata API
+-------------------------------
 Since this API can be used to make changes to the GraphQL schema, it can be
 disabled, especially in production deployments.
 The ``enabled-apis`` flag or the ``HASURA_GRAPHQL_ENABLED_APIS`` env var can be used to
-enable/disable this API. By default, The schema/metadata API is enabled. To disable it, you need
-to explicitly state that this API is not enabled. i.e. remove it from the list of enabled APIs.
+enable/disable this API. By default, the schema/metadata API is enabled. To disable it, you need
+to explicitly state that this API is not enabled i.e. remove it from the list of enabled APIs.
 .. code-block:: bash
@@ -299,7 +299,7 @@ to explicitly state that this API is not enabled. i.e. remove it from the list o
 --enabled-apis="graphql"
 HASURA_GRAPHQL_ENABLED_APIS="graphql"
-See :doc:`../../deployment/graphql-engine-flags/reference` for info on setting the above flag/env var
+See :doc:`../../deployment/graphql-engine-flags/reference` for info on setting the above flag/env var.
 .. toctree::
 :maxdepth: 1

View File

@@ -13,8 +13,8 @@ APIs to manage Hasura metadata which is stored in ``hdb_catalog`` schema.
 export_metadata
 ---------------
-``export_metadata`` is used to export the current metadata from server as a JSON
-file. Response JSON will be the metadata object.
+``export_metadata`` is used to export the current metadata from the server as a JSON
+file. The response JSON will be the metadata object.
 .. code-block:: http
@@ -59,7 +59,7 @@ Args should be the JSON object which is same as the output of
 reload_metadata
 ---------------
-``reload_metadata`` should be used when there is change in underlying Postgres
+``reload_metadata`` should be used when there is a change in underlying Postgres
 database that Hasura should be aware of. Example: a new column is added to a
 table using ``psql`` and this column should now be added to the GraphQL schema.
@@ -156,7 +156,7 @@ Response:-
 drop_inconsistent_metadata
 --------------------------
-``drop_inconsistent_metadata`` can be used to purge all inconsistent objects from metadata.
+``drop_inconsistent_metadata`` can be used to purge all inconsistent objects from the metadata.
 .. code-block:: http
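For orientation, the metadata calls touched in these hunks are plain POSTs to the ``/v1/query`` endpoint; a hedged sketch with curl, where the host and admin secret are placeholders:

.. code-block:: bash

   # dump the current metadata to a local file
   curl -X POST https://my-hasura.example.com/v1/query \
     -H 'X-Hasura-Admin-Secret: <admin-secret>' \
     -d '{"type": "export_metadata", "args": {}}' > hasura-metadata.json

   # pick up schema changes made directly in Postgres (e.g. via psql)
   curl -X POST https://my-hasura.example.com/v1/query \
     -H 'X-Hasura-Admin-Secret: <admin-secret>' \
     -d '{"type": "reload_metadata", "args": {}}'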

View File

@@ -25,9 +25,9 @@ create_insert_permission
 An insert permission is used to enforce constraints on the data that is being
 inserted.
-Let's look at an example, a permission for the ``user`` role to insert into
+Let's look at an example, a permission for the ``user`` role to insert into the
 ``article`` table. What is the constraint that we would like to enforce here? *A
-user can only insert articles for themself*
+user can only insert articles for themselves* .
 .. code-block:: http
@@ -52,7 +52,7 @@ user can only insert articles for themself*
 }
 }
-This reads as follows - For the ``user`` role:
+This reads as follows - for the ``user`` role:
 * For every row that is being inserted into the *article* table, allow insert only if the ``check`` passes i.e. that the ``author_id`` column value is the same as the value in the request header ``X-HASURA-USER-ID``".
@@ -122,7 +122,7 @@ Args syntax
 * - comment
 - false
 - text
-- comment
+- Comment
 .. _InsertPermission:
@@ -143,7 +143,7 @@ InsertPermission
 * - set
 - false
 - :ref:`ColumnPresetExp`
-- Preset values for columns that can sourced from session variables or static values.
+- Preset values for columns that can be sourced from session variables or static values
 * - columns
 - false
 - :ref:`PGColumn` array (or) ``'*'``
@@ -189,9 +189,9 @@ create_select_permission
 A select permission is used to restrict access to only the specified columns and rows.
-Let's look at an example, a permission for the ``user`` role to select from
-``article`` table: all columns can be read, and rows that have been published or
-authored by themself.
+Let's look at an example, a permission for the ``user`` role to select from the
+``article`` table: all columns can be read, as well as the rows that have been published or
+authored by the user themselves.
 .. code-block:: http
@@ -255,7 +255,7 @@ Args syntax
 * - comment
 - false
 - text
-- comment
+- Comment
 .. _SelectPermission:
@@ -348,18 +348,18 @@ An example:
 }
 }
-This reads as follows - For the ``user`` role:
+This reads as follows - for the ``user`` role:
 * Allow updating only those rows where the ``check`` passes i.e. the value of the ``author_id`` column of a row matches the value of the session variable ``X-HASURA-USER-ID`` value.
-* If the above ``check`` passes for a given row, allow updating only the ``title``, ``content`` and ``category`` columns (*as specified in the* ``columns`` *key*)
+* If the above ``check`` passes for a given row, allow updating only the ``title``, ``content`` and ``category`` columns (*as specified in the* ``columns`` *key*).
 * When this update happens, the value of the column ``id`` will be automatically ``set`` to the value of the resolved session variable ``X-HASURA-USER-ID``.
 .. note::
 It is important to deny updates to columns that will determine the row
-ownership. In the above example, ``author_id`` column determines the
+ownership. In the above example, the ``author_id`` column determines the
 ownership of a row in the ``article`` table. Columns such as this should
 never be allowed to be updated.
@@ -390,7 +390,7 @@ Args syntax
 * - comment
 - false
 - text
-- comment
+- Comment
 .. _UpdatePermission:
@@ -415,7 +415,7 @@ UpdatePermission
 * - set
 - false
 - :ref:`ColumnPresetExp`
-- Preset values for columns that can sourced from session variables or static values.
+- Preset values for columns that can be sourced from session variables or static values.
 .. _drop_update_permission:
@@ -476,8 +476,8 @@ An example:
 This reads as follows:
-"``delete`` for ``user`` role on ``article`` table is allowed on rows where
-``author_id`` is same as the request header ``X-HASURA-USER-ID`` value."
+"``delete`` for the ``user`` role on the ``article`` table is allowed on rows where
+``author_id`` is the same as the request header ``X-HASURA-USER-ID`` value."
 .. _create_delete_permission_syntax:
@@ -506,7 +506,7 @@ Args syntax
 * - comment
 - false
 - text
-- comment
+- Comment
 .. _DeletePermission:
@@ -607,4 +607,4 @@ Args syntax
 * - comment
 - false
 - Text
-- comment
+- Comment

View File

@@ -6,9 +6,9 @@ Schema/Metadata API Reference: Query collections
 :depth: 1
 :local:
-Group queries using Query collections.
+Group queries using query collections.
-Create/Drop query collections and Add/Drop a query to a collection using following query types.
+Create/drop query collections and add/drop a query to a collection using the following query types.
 .. _create_query_collection:
@@ -102,14 +102,14 @@ Args syntax
 * - cascade
 - true
 - boolean
-- When set to ``true``, the collection (if present) is removed from allowlist
+- When set to ``true``, the collection (if present) is removed from the allowlist
 .. _add_query_to_collection:
 add_query_to_collection
 -----------------------
-``add_query_to_collection`` is used to add a query to given collection
+``add_query_to_collection`` is used to add a query to a given collection.
 .. code-block:: http
@@ -156,7 +156,7 @@ Args Syntax
 drop_query_from_collection
 --------------------------
-``drop_query_from_collection`` is used to remove a query from given collection
+``drop_query_from_collection`` is used to remove a query from a given collection.
 .. code-block:: http
@@ -198,7 +198,7 @@ Args Syntax
 add_collection_to_allowlist
 ----------------------------
-``add_collection_to_allowlist`` is used to add a collection to allow-list
+``add_collection_to_allowlist`` is used to add a collection to the allow-list.
 .. code-block:: http
@@ -235,7 +235,7 @@ Args Syntax
 drop_collection_from_allowlist
 -------------------------------
-``drop_collection_from_allowlist`` is used to remove a collection from allow-list
+``drop_collection_from_allowlist`` is used to remove a collection from the allow-list.
 .. code-block:: http

View File

@@ -125,7 +125,7 @@ Args syntax
 * - using
 - true
 - ObjRelUsing_
-- Use one of the available ways to define object relationship
+- Use one of the available ways to define an object relationship
 * - comment
 - false
 - text
@@ -255,7 +255,7 @@ follows:
 It is easy to make mistakes while using ``manual_configuration``.
 One simple check is to ensure that foreign key constraint semantics are valid
 on the columns being used in ``column_mapping``. In the previous example, if
-it was allowed, a foreign key constraint could have been defined on
+it was allowed, a foreign key constraint could have been defined on the
 ``author`` table's ``id`` column to ``article_detail`` view's ``author_id``
 column.
@@ -282,7 +282,7 @@ Args syntax
 * - using
 - true
 - ArrRelUsing_
-- Use one of the available ways to define array relationship
+- Use one of the available ways to define an array relationship
 * - comment
 - false
 - text
@@ -452,4 +452,4 @@ Args syntax
 * - comment
 - false
 - Text
-- comment
+- Comment

View File

@@ -20,11 +20,11 @@ returned.
 This is an admin-only query, i.e. the query can only be executed by a
 request having ``X-Hasura-Role: admin``. This can be set by passing
-``X-Hasura-Admin-Secret`` or by setting the right role in Webhook/JWT
+``X-Hasura-Admin-Secret`` or by setting the right role in webhook/JWT
 authorization mode.
 This is deliberate as it is hard to enforce any sort of permissions on arbitrary SQL. If
-you find yourselves in the need of using ``run_sql`` to run custom DML queries,
+you find yourself in the need of using ``run_sql`` to run custom DML queries,
 consider creating a view. You can now define permissions on that particular view
 for various roles.
@@ -50,11 +50,11 @@ An example:
 }
 While ``run_sql`` lets you run any SQL, it tries to ensure that the Hasura GraphQL engine's
-state (relationships, permissions etc.) is consistent. i.e., you
+state (relationships, permissions etc.) is consistent i.e. you
 cannot drop a column on which any metadata is dependent on (say a permission or
 a relationship). The effects, however, can be cascaded.
-Example:- If we were to drop 'bio' column from the author table (let's say
+Example: If we were to drop the 'bio' column from the author table (let's say
 the column is used in some permission), you would see an error.
 .. code-block:: http
@@ -108,8 +108,8 @@ We can however, cascade these changes.
 With the above query, the dependent permission is also dropped.
-Example:- If we were to drop a foreign key constraint from the article table
-(let's say the column involved in foreign key is used to define a relationship),
+Example: If we were to drop a foreign key constraint from the article table
+(let's say the column involved in the foreign key is used to define a relationship),
 you would see an error.
 .. code-block:: http
@@ -176,10 +176,10 @@ In case of 1, 2 and 3 the dependent objects (if any) can be dropped using ``casc
 However, when altering type of columns, if any objects are affected, the change
 cannot be cascaded. So, those dependent objects have to be manually dropped before
 executing the SQL statement. Dropping SQL functions will cascade the functions in
-metadata even without using ``cascade`` since no other objects dependant on them.
+metadata even without using ``cascade`` since no other objects depend on them.
 Overloading tracked SQL functions is not allowed.
-Set ``check_metadata_consistency`` field to ``false`` to force server to not consider metadata dependencies.
+Set ``check_metadata_consistency`` field to ``false`` to force the server to not consider metadata dependencies.
 .. _run_sql_syntax:
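To make the cascade behaviour discussed above concrete, a sketch of a ``run_sql`` call that drops a column and cascades the dependent metadata; the host, admin secret and the exact SQL are illustrative placeholders based on the ``author``/``bio`` example in the text:

.. code-block:: bash

   curl -X POST https://my-hasura.example.com/v1/query \
     -H 'Content-Type: application/json' \
     -H 'X-Hasura-Admin-Secret: <admin-secret>' \
     -d '{
           "type": "run_sql",
           "args": {
             "sql": "ALTER TABLE author DROP COLUMN bio;",
             "cascade": true
           }
         }'

Without ``"cascade": true`` the same request would fail if a permission or relationship still depends on the column.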

View File

@@ -348,12 +348,12 @@ Operator
 * - ``_st_d_within``
 - ``ST_DWithin``
-(For more details on what these operators do, refer to `PostGIS docs <http://postgis.net/workshops/postgis-intro/spatial_relationships.html>`__.)
+(For more details on what these operators do, refer to `PostGIS docs <http://postgis.net/workshops/postgis-intro/spatial_relationships.html>`__).
 .. note::
 - All operators take a JSON representation of ``geometry/geography`` values as input value.
-- Input value for ``_st_d_within`` operator is an object:
+- The input value for ``_st_d_within`` operator is an object:
 .. parsed-literal::
@@ -395,7 +395,7 @@ An empty JSONObject_
 ColumnPresetsExp
 ^^^^^^^^^^^^^^^^
-A JSONObject_ of Postgres column name to value mapping, where value can be static or derived from a session variable.
+A JSONObject_ of a Postgres column name to value mapping, where the value can be static or derived from a session variable.
 .. parsed-literal::
 :class: haskell-pre
@@ -406,7 +406,7 @@ A JSONObject_ of Postgres column name to value mapping, where value can be stati
 ..
 }
-E.g. where ``id`` is derived from session variable and ``city`` is a static value.
+E.g. where ``id`` is derived from a session variable and ``city`` is a static value.
 .. code-block:: json

View File

@@ -6,7 +6,7 @@ Schema/Metadata API Reference: Tables/Views
 :depth: 1
 :local:
-Track/untrack a table/view in Hasura GraphQL engine
+Track/untrack a table/view in Hasura GraphQL engine.
 Only tracked tables/views are available for querying/mutating/subscribing data over the GraphQL API.
@@ -147,4 +147,4 @@ Args syntax
 * - cascade
 - false
 - Boolean
-- When set to ``true``, the effect (if possible) is cascaded to any metadata dependent objects (relationships, permissions, templates).
+- When set to ``true``, the effect (if possible) is cascaded to any metadata dependent objects (relationships, permissions, templates)

View File

@@ -9,12 +9,12 @@ Authentication
 Overview
 --------
-Authentication is handled outside Hasura. Hasura delegates authentication and resolution of request
+Authentication is handled outside of Hasura. Hasura delegates authentication and resolution of request
 headers into session variables to your authentication service *(existing or new)*.
 Your authentication service is required to pass a user's **role** information in the form of session
 variables like ``X-Hasura-Role``, etc. More often than not, you'll also need to pass user information
-for your access-control use-cases, like ``X-Hasura-User-Id``, to build permission rules.
+for your access control use cases, like ``X-Hasura-User-Id``, to build permission rules.
 Authentication options
 ----------------------
@@ -23,12 +23,12 @@ Hasura supports two modes of authentication configuration:
 1) **Webhook**: Your auth server exposes a webhook that is used to authenticate all incoming requests
 to the Hasura GraphQL engine server and to get metadata about the request to evaluate access control
-rules. Here's how a GraphQL request is processed in Webhook mode:
+rules. Here's how a GraphQL request is processed in webhook mode:
 .. thumbnail:: ../../../../img/graphql/manual/auth/auth-webhook-overview.png
 2) **JWT** (JSON Web Token): Your auth server issues JWTs to your client app, which, when sent as part
-of the request, are verified and decoded by GraphQL engine to get metadata about the request to
+of the request, are verified and decoded by the GraphQL engine to get metadata about the request to
 evaluate access control rules. Here's how a GraphQL query is processed in JWT mode:
 .. thumbnail:: ../../../../img/graphql/manual/auth/auth-jwt-overview.png

View File

@@ -9,10 +9,10 @@ Authentication using JWT
 Introduction
 ------------
-You can configure GraphQL engine to use JWT authorization mode to authorize all incoming requests to Hasura GraphQL engine server.
+You can configure the GraphQL engine to use JWT authorization mode to authorize all incoming requests to the Hasura GraphQL engine server.
-The idea is - Your auth server will return JWT tokens, which is decoded and
-verified by GraphQL engine to authorize and get metadata about the request
+The idea is that your auth server will return JWT tokens, which are decoded and
+verified by the GraphQL engine to authorize and get metadata about the request
 (``x-hasura-*`` values).
@@ -21,9 +21,9 @@ verified by GraphQL engine to authorize and get metadata about the request
 The JWT is decoded, the signature is verified, then it is asserted that the
 current role of the user (if specified in the request) is in the list of allowed roles.
-If current role is not specified in the request, then the default role is picked.
+If the current role is not specified in the request, then the default role is applied.
 If the authorization passes, then all of the ``x-hasura-*`` values in the claim
-is used for the permissions system.
+are used for the permissions system.
 .. admonition:: Prerequisite
@@ -32,8 +32,8 @@ is used for the permissions system.
 In JWT mode, on a secured endpoint:
-- JWT authentication is **enforced** when ``X-Hasura-Admin-Secret`` header is **not found** in the request.
-- JWT authentication is **skipped** when ``X-Hasura-Admin-Secret`` header **is found** in the request and
+- JWT authentication is **enforced** when the ``X-Hasura-Admin-Secret`` header is **not found** in the request.
+- JWT authentication is **skipped** when the ``X-Hasura-Admin-Secret`` header **is found** in the request and
 admin access is granted.
@@ -57,11 +57,11 @@ the following:
 2. A ``x-hasura-allowed-roles`` field : a list of allowed roles for the user i.e. acceptable values of the
 ``x-hasura-role`` header
-The claims in the JWT, can have other ``x-hasura-*`` fields where their values
+The claims in the JWT can have other ``x-hasura-*`` fields where their values
 can only be strings. You can use these ``x-hasura-*`` fields in your
 permissions.
-Now, the JWT should be sent by the client to Hasura GraphQL engine via the
+Now the JWT should be sent by the client to the Hasura GraphQL engine via the
 ``Authorization: Bearer <JWT>`` header.
 Example JWT claim:
@@ -91,14 +91,14 @@ specific claims have to be present. This value can be configured in the JWT
 config while starting the server.
 **Note**: ``x-hasura-default-role`` and ``x-hasura-allowed-roles`` are
-mandatory, while rest of them are optional.
+mandatory, while the rest of them are optional.
 .. note::
-All ``x-hasura-*`` values should be ``String``, they will be converted to the
+All ``x-hasura-*`` values should be of type ``String``, they will be converted to the
 right type automatically.
-The default role can be overridden by ``x-hasura-role`` header, while making a
+The default role can be overridden by the ``x-hasura-role`` header, while making a
 request.
 .. code-block:: http
@@ -156,16 +156,16 @@ the ``jwk_url`` field.
 ^^^^^^^^^^^
 A URL where a provider publishes their JWKs (which are used for signing the
 JWTs). The URL **must** publish the JWKs in the standard format as described in
-https://tools.ietf.org/html/rfc7517
+https://tools.ietf.org/html/rfc7517.
 This is an optional field. You can also provide the key (certificate, PEM
 encoded public key) as string as well - under the ``key`` field.
 **Rotating JWKs**:
-Some providers rotate their JWKs (E.g - Firebase). If the provider sends an
-``Expires`` header with the response of JWK, then graphql-engine will refresh
-the JWKs automatically. If the provider does not send ``Expires`` header, the
+Some providers rotate their JWKs (e.g. Firebase). If the provider sends an
+``Expires`` header with the response of JWK, then the GraphQL engine will refresh
+the JWKs automatically. If the provider does not send an ``Expires`` header, the
 JWKs are not refreshed.
 **Example**:
@@ -178,7 +178,7 @@ JWKs are not refreshed.
 ``claims_namespace``
 ^^^^^^^^^^^^^^^^^^^^
 This is an optional field. You can specify the key name
-inside which the Hasura specific claims will be present. E.g. - ``https://mydomain.com/claims``.
+inside which the Hasura specific claims will be present, e.g. ``https://mydomain.com/claims``.
 **Default value** is: ``https://hasura.io/jwt/claims``.
@@ -191,10 +191,10 @@ This is an optional field, with only the following possible values:
 Default is ``json``.
-This is to indicate that if the hasura specific claims are a regular JSON object
-or stringified JSON
+This is to indicate that if the Hasura specific claims are a regular JSON object
+or stringified JSON.
-This is required because providers like AWS Cognito only allows strings in the
+This is required because providers like AWS Cognito only allow strings in the
 JWT claims. `See #1176 <https://github.com/hasura/graphql-engine/issues/1176>`_.
 Example:-
@@ -322,7 +322,7 @@ The ``key`` is the actual shared secret, which is used by Hasura and the externa
 RSA based
 +++++++++
-If your auth server is using RSA to sign JWTs, and is using a 512-bit key. In this case,
+If your auth server is using RSA to sign JWTs, and is using a 512-bit key,
 the JWT config needs to have the only the public key.
 **Example 1**: public key in PEM format (not OpenSSH format):
@@ -423,7 +423,7 @@ Auth0
 ^^^^^
 Refer the :doc:`Auth0 JWT Integration guide <../../guides/integrations/auth0-jwt>` for a full integration guide
-with Auth0
+with Auth0.
 Auth0 publishes their JWK under:
@@ -486,7 +486,7 @@ Here are some sample apps that use JWT authorization. You can follow the instruc
 repositories to get started.
 - `Auth0 JWT example <https://github.com/hasura/graphql-engine/tree/master/community/sample-apps/todo-auth0-jwt>`__:
-A todo app that uses Hasura GraphQL Engine and Auth0 JWT
+A todo app that uses Hasura GraphQL engine and Auth0 JWT
 - `Firebase JWT example <https://github.com/hasura/graphql-engine/tree/master/community/sample-apps/firebase-jwt>`__:
 Barebones example to show how to have Firebase Auth integrated with Hasura JWT mode
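For context on the ``jwk_url`` / ``key`` fields discussed in these hunks, the JWT config is normally passed to the server as a single JSON value; a hedged sketch where the secret value and the Auth0 domain are placeholders:

.. code-block:: bash

   # symmetric signing with a shared secret (HS256)
   HASURA_GRAPHQL_JWT_SECRET='{"type": "HS256", "key": "a-very-long-shared-secret-of-at-least-32-chars"}'

   # asymmetric signing with keys fetched from a JWK URL (RS256), e.g. an Auth0 tenant
   HASURA_GRAPHQL_JWT_SECRET='{"type": "RS256", "jwk_url": "https://my-tenant.auth0.com/.well-known/jwks.json"}'

The same JSON can alternatively be supplied via the ``--jwt-secret`` server flag.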

View File

@@ -9,7 +9,7 @@ Authentication using webhooks
 Introduction
 ------------
-You can configure GraphQL engine to use a webhook to authenticate all incoming requests to the Hasura GraphQL engine server.
+You can configure the GraphQL engine to use a webhook to authenticate all incoming requests to the Hasura GraphQL engine server.
 .. thumbnail:: ../../../../img/graphql/manual/auth/webhook-auth.png
@@ -19,14 +19,14 @@ You can configure GraphQL engine to use a webhook to authenticate all incoming r
 In webhook mode, on a secured endpoint:
-- The configured webhook is **called** when ``X-Hasura-Admin-Secret`` header is not found in the request.
-- The configured webhook is **ignored** when ``X-Hasura-Admin-Secret`` header is found in the request and
+- The configured webhook is **called** when the ``X-Hasura-Admin-Secret`` header is not found in the request.
+- The configured webhook is **ignored** when the ``X-Hasura-Admin-Secret`` header is found in the request and
 admin access is granted.
 Configuring webhook mode
 ------------------------
-* You can configure Hasura to run in webhook mode by running GraphQL engine with the ``--auth-hook`` flag or the ``HASURA_GRAPHQL_AUTH_HOOK`` environment variable (see :doc:`GraphQL engine server options <../../deployment/graphql-engine-flags/reference>`), the value of which is the webhook endpoint.
+* You can configure Hasura to run in webhook mode by running the GraphQL engine with the ``--auth-hook`` flag or the ``HASURA_GRAPHQL_AUTH_HOOK`` environment variable (see :doc:`GraphQL engine server options <../../deployment/graphql-engine-flags/reference>`), the value of which is the webhook endpoint.
 * You can configure Hasura to send either a ``GET`` or a ``POST`` request to your auth webhook. The default configuration is ``GET`` and you can override this with ``POST`` by using the ``--auth-hook-mode`` flag or the ``HASURA_GRAPHQL_AUTH_HOOK_MODE`` environment variable (*in addition to those specified above; see* :doc:`GraphQL engine server options <../../deployment/graphql-engine-flags/reference>`).
@@ -78,7 +78,7 @@ POST request
 }
 }
-If you configure your webhook to use ``POST``, then Hasura **will send all client headers in payload**
+If you configure your webhook to use ``POST``, then Hasura **will send all client headers in payload**.
 Response
 ^^^^^^^^
@@ -105,14 +105,14 @@ You should send the ``X-Hasura-*`` "session variables" to your permission rules
 Failure
 +++++++
-If you want to deny the GraphQL request return a ``401 Unauthorized`` exception.
+If you want to deny the GraphQL request, return a ``401 Unauthorized`` exception.
 .. code-block:: http
 HTTP/1.1 401 Unauthorized
 .. note::
-Anything other than a ``200`` or ``401`` response from webhook makes server raise a ``500 Internal Server Error``
+Anything other than a ``200`` or ``401`` response from webhook makes the server raise a ``500 Internal Server Error``
 exception.
 Auth webhook samples
@@ -136,4 +136,4 @@ Once deployed, you can use any of the following endpoints as your auth webhook i
 .. note::
-If you are using ``firebase`` you will have to set the associated environment variables.
+If you are using ``Firebase``, you will have to set the associated environment variables.
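To tie the flags in these hunks together, a rough sketch of starting the server in webhook mode; the webhook URL and database URL are placeholders:

.. code-block:: bash

   graphql-engine --database-url "postgres://user:password@localhost:5432/mydb" \
     serve \
     --auth-hook "https://my-auth.example.com/validate-request" \
     --auth-hook-mode "POST"

   # equivalent environment variables
   # HASURA_GRAPHQL_AUTH_HOOK="https://my-auth.example.com/validate-request"
   # HASURA_GRAPHQL_AUTH_HOOK_MODE="POST"

On success the webhook is expected to answer with HTTP 200 and a JSON object of session variables such as ``{"X-Hasura-Role": "user", "X-Hasura-User-Id": "42"}``; a 401 denies the request, and anything else makes the server raise a 500.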

View File

@@ -85,8 +85,8 @@ as shown below:
 .. thumbnail:: ../../../../img/graphql/manual/auth/permission-basics-simple-example.png
-This permission rule reads as "*For the role* ``user`` *, table* `` *and operation* ``select``/``query``*,
-allow access to those rows where the value in the* ``id`` *column is the same as the value in the*
+This permission rule reads as: "*For the role* ``user`` *, table* ``author`` *and operation* ``select``/``query``,
+allow access to those rows where the value in the ``id`` *column is the same as the value in the*
 ``X-Hasura-User-ID`` *session variable*".
 Run a query **with** access control
@@ -111,4 +111,4 @@ Next steps
 Read about roles and session variables at: :doc:`roles-variables`
 See more detailed examples at: :doc:`Common access control examples<common-roles-auth-examples>`
View File
@ -10,7 +10,7 @@ This is a guide to help you set up a basic authorization architecture for your G
that you first check out :doc:`roles-variables` and :doc:`permission-rules` that you first check out :doc:`roles-variables` and :doc:`permission-rules`
that will be referred to throughout this guide. that will be referred to throughout this guide.
Here are some examples of common use-cases. Here are some examples of common use cases.
Anonymous (not logged in) users Anonymous (not logged in) users
------------------------------- -------------------------------
@ -38,7 +38,7 @@ Logged-in users
- Set up a permission for insert/select/update/delete that uses said column. E.g.: - Set up a permission for insert/select/update/delete that uses said column. E.g.:
``author_id: {_eq: "X-Hasura-User-Id"}`` for an article table. ``author_id: {_eq: "X-Hasura-User-Id"}`` for an article table.
- Note that the ``X-Hasura-User-Id`` is a :doc:`dynamic session variable<./roles-variables>` that comes in from - Note that the ``X-Hasura-User-Id`` is a :doc:`dynamic session variable<./roles-variables>` that comes in from
your :doc:`auth webhook's <../authentication/webhook>` response, or as a request as a header if you're testing. your :doc:`auth webhook's <../authentication/webhook>` response, or as a request header if you're testing.
.. thumbnail:: ../../../../img/graphql/manual/auth/user-select-graphiql.png .. thumbnail:: ../../../../img/graphql/manual/auth/user-select-graphiql.png
:class: no-shadow :class: no-shadow
@ -125,9 +125,9 @@ Sometimes your data/user model requires that:
* Users can have multiple roles. * Users can have multiple roles.
* Each role has access to different parts of your database schema. * Each role has access to different parts of your database schema.
If you have the information about roles and how they map to your data in the same database as the one configured with GraphQL Engine, you can leverage relationships to define permissions that effectively control access to data and the operations each role is allowed to perform. If you have the information about roles and how they map to your data in the same database as the one configured with the GraphQL engine, you can leverage relationships to define permissions that effectively control access to data and the operations each role is allowed to perform.
To understand how this works, let's model the roles and corresponding permissions in the context of a blog app wth the following roles: To understand how this works, let's model the roles and corresponding permissions in the context of a blog app with the following roles:
* ``author``: Users with this role can submit **their own** articles. * ``author``: Users with this role can submit **their own** articles.
@ -223,7 +223,7 @@ Permissions for role ``author``
* **Allow users with the role** ``author`` **to insert only their own articles** * **Allow users with the role** ``author`` **to insert only their own articles**
For this permission rule, we'll make use of two features of the GraphQL Engine's permissions system: For this permission rule, we'll make use of two features of the GraphQL engine's permissions system:
a) :ref:`Column-level permissions<col-level-permissions>`: Restrict access to certain columns only. a) :ref:`Column-level permissions<col-level-permissions>`: Restrict access to certain columns only.
@ -252,7 +252,7 @@ Permissions for role ``reviewer``
.. thumbnail:: ../../../../img/graphql/manual/auth/multirole-example-reviewer-update.png .. thumbnail:: ../../../../img/graphql/manual/auth/multirole-example-reviewer-update.png
The array-relationship based permission rule in the above image reads as "*if the ID of any of reviewers assigned to this article is equal to the user's ID i.e. the* ``X-Hasura-User-Id`` *session-variable's value, allow access to it*". The columns' access is restricted using the column-level permissions highlighted above. The array-relationship based permission rule in the above image reads as "*if the ID of any reviewer assigned to this article is equal to the user's ID i.e. the* ``X-Hasura-User-Id`` *session-variable's value, allow access to it*". The columns' access is restricted using the column-level permissions highlighted above.
* **Allow users with the role** ``reviewer`` **to select articles assigned to them for reviews** * **Allow users with the role** ``reviewer`` **to select articles assigned to them for reviews**
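Permission rules like the author insert rule described earlier in this section can also be created via the schema/metadata API instead of the console; a sketch, with illustrative table, column and endpoint values:

.. code-block:: bash

   curl http://localhost:8080/v1/query \
     -H 'Content-Type: application/json' \
     -H 'X-Hasura-Admin-Secret: <admin-secret>' \
     -d '{
       "type": "create_insert_permission",
       "args": {
         "table": "article",
         "role": "author",
         "permission": {
           "check": {"author_id": {"_eq": "X-Hasura-User-Id"}},
           "columns": ["title", "content", "author_id"]
         }
       }
     }'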
View File
@ -11,7 +11,7 @@ Overview
Hasura supports **role-based** authorization where access control is done by creating rules for each role, Hasura supports **role-based** authorization where access control is done by creating rules for each role,
table and operation (*insert*, *update*, etc.). These access control rules use dynamic session table and operation (*insert*, *update*, etc.). These access control rules use dynamic session
variables that are passed to GraphQL engine from your :doc:`authentication service <../authentication/index>` variables that are passed to the GraphQL engine from your :doc:`authentication service <../authentication/index>`
with every request. Role information is inferred from the ``X-Hasura-Role`` and ``X-Hasura-Allowed-Roles`` with every request. Role information is inferred from the ``X-Hasura-Role`` and ``X-Hasura-Allowed-Roles``
session variables. Other session variables can be passed by your auth service as per your requirements. session variables. Other session variables can be passed by your auth service as per your requirements.
@ -20,11 +20,11 @@ session variables. Other session variables can be passed by your auth service as
.. thumbnail:: ../../../../img/graphql/manual/auth/hasura-perms.png .. thumbnail:: ../../../../img/graphql/manual/auth/hasura-perms.png
:width: 80 % :width: 80 %
Trying access control out Trying out access control
------------------------- -------------------------
If you just want to see role-based access control in action, you need not set up or integrate your If you just want to see role-based access control in action, you need not set up or integrate your
auth service with GraphQL Engine. You can just: auth service with GraphQL engine. You can just:
* Define permission rules for a table for a role. * Define permission rules for a table for a role.
View File
@ -10,7 +10,7 @@ Introduction
------------ ------------
Access control rules in Hasura are defined at a role, table and action (*insert, update, select, delete*) Access control rules in Hasura are defined at a role, table and action (*insert, update, select, delete*)
level granulaity: level granularity:
.. thumbnail:: ../../../../img/graphql/manual/auth/permission-rule-granularity.png .. thumbnail:: ../../../../img/graphql/manual/auth/permission-rule-granularity.png
@ -90,7 +90,7 @@ Using operators to build rules
****************************** ******************************
Type-based operators (*depending on the column type*) are available for constructing row-level permissions. Type-based operators (*depending on the column type*) are available for constructing row-level permissions.
You can use the same operators that you use to :doc:`filtering query results <../../queries/query-filters>` You can use the same operators that you use to :doc:`filter query results <../../queries/query-filters>`
to define permission rules. to define permission rules.
See the :ref:`API reference <MetadataOperator>` for a list of all supported operators. See the :ref:`API reference <MetadataOperator>` for a list of all supported operators.
@ -112,7 +112,7 @@ the value in the ``id`` column is greater than 10:
.. thumbnail:: ../../../../img/graphql/manual/auth/simple-boolean-expression.png .. thumbnail:: ../../../../img/graphql/manual/auth/simple-boolean-expression.png
You can construct more complex boolean expression using the ``_and``, ``_or`` and ``not`` operators: You can construct more complex boolean expressions using the ``_and``, ``_or`` and ``not`` operators:
.. thumbnail:: ../../../../img/graphql/manual/auth/boolean-operators.png .. thumbnail:: ../../../../img/graphql/manual/auth/boolean-operators.png
@ -125,8 +125,8 @@ or "A":
Using session variables Using session variables
*********************** ***********************
Session variable, that have been resolved from authentication tokens by either your authentication webhook or Session variables that have been resolved from authentication tokens by either your authentication webhook or
by Hasura using the JWT configuration, are available for constructing row-level permissions. by Hasura using the JWT configuration are available for constructing row-level permissions.
E.g. to allow an ``author`` to access only their articles, you can use the ``X-Hasura-User-ID`` session variable E.g. to allow an ``author`` to access only their articles, you can use the ``X-Hasura-User-ID`` session variable
to construct a rule to restrict access for ``select`` to rows in the ``articles`` table where the value in the to construct a rule to restrict access for ``select`` to rows in the ``articles`` table where the value in the
@ -143,7 +143,7 @@ Using relationships or nested objects
You can leverage :doc:`relationships <../../schema/relationships/index>` to define permission rules with fields You can leverage :doc:`relationships <../../schema/relationships/index>` to define permission rules with fields
from a nested object. from a nested object.
For e.g. let's say you have an object relationship called ``agent`` from the ``authors`` table to another table For example, let's say you have an object relationship called ``agent`` from the ``authors`` table to another table
called ``agent`` (*an author can have an agent*) and we want to allow users with the role ``agent`` to access called ``agent`` (*an author can have an agent*) and we want to allow users with the role ``agent`` to access
the details of the authors who they manage in ``authors`` table. We can define the following permission rule the details of the authors who they manage in ``authors`` table. We can define the following permission rule
that uses the aforementioned object relationship: that uses the aforementioned object relationship:
@ -151,10 +151,10 @@ that uses the aforementioned object relationship:
.. thumbnail:: ../../../../img/graphql/manual/auth/nested-object-permission-simple-example.png .. thumbnail:: ../../../../img/graphql/manual/auth/nested-object-permission-simple-example.png
This permission rule reads as "*if the author's agent's* ``id`` *is the same as the requesting user's* This permission rule reads as "*if the author's agent's* ``id`` *is the same as the requesting user's*
``id`` *, allow access to the author's details*. ``id`` *, allow access to the author's details*."
.. admonition:: Array and Object relationships work similarly .. admonition:: Array and object relationships work similarly
- The above example would have worked even if the relationship were an array relationship. In our example, - The above example would have worked even if the relationship were an array relationship. In our example,
the corresponding rule for an array relationship would have read "*if any of this author's agents'* ``id`` the corresponding rule for an array relationship would have read "*if any of this author's agents'* ``id``
@ -167,7 +167,7 @@ This permission rule reads as "*if the author's agent's* ``id`` *is the same a
Column-level permissions Column-level permissions
^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^
Column-level permissions determine access to columns in the rows that accessible based on row-level Column-level permissions determine access to columns in the rows that are accessible based on row-level
permissions. These permissions are simple selections: permissions. These permissions are simple selections:
.. thumbnail:: ../../../../img/graphql/manual/auth/column-level-permissions.png .. thumbnail:: ../../../../img/graphql/manual/auth/column-level-permissions.png
@ -177,7 +177,7 @@ the ``select`` operation.
.. _limit-rows-permissions: .. _limit-rows-permissions:
Row Fetch Limit Row fetch limit
^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^
In the case of ``select`` operations, the number of rows to be returned in the response can be limited In the case of ``select`` operations, the number of rows to be returned in the response can be limited
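Putting the row-level filter, column list and row fetch limit together, an equivalent select permission created through the schema/metadata API could look roughly like this (table, columns and the limit value are illustrative):

.. code-block:: bash

   curl http://localhost:8080/v1/query \
     -H 'Content-Type: application/json' \
     -H 'X-Hasura-Admin-Secret: <admin-secret>' \
     -d '{
       "type": "create_select_permission",
       "args": {
         "table": "articles",
         "role": "author",
         "permission": {
           "filter": {"author_id": {"_eq": "X-Hasura-User-Id"}},
           "columns": ["id", "title", "content"],
           "limit": 10
         }
       }
     }'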
View File
@ -43,7 +43,7 @@ of ``user-id``, the session variable.
When you are constructing permission rules, however, there might be several variables that represent the business logic When you are constructing permission rules, however, there might be several variables that represent the business logic
of having access to data. For example, if you have a SaaS application, you might restrict access based on a ``client_id`` of having access to data. For example, if you have a SaaS application, you might restrict access based on a ``client_id``
variable. If you want to provide different levels of access on different devices you might restrict access based on a variable. If you want to provide different levels of access on different devices, you might restrict access based on a
``device_type`` variable. ``device_type`` variable.
Hasura allows you to create permission rules that can use any dynamic variable that is a property of the request. Hasura allows you to create permission rules that can use any dynamic variable that is a property of the request.
@ -89,7 +89,7 @@ Modelling Roles in Hasura
General guidelines for modelling roles in Hasura. General guidelines for modelling roles in Hasura.
Roles are typically be modelled in two ways: Roles are typically modelled in two ways:
1. **Hierarchical roles**: Access scopes are nested depending on available roles. `Roles in Github for organisations <https://help.github.com/en/articles/managing-peoples-access-to-your-organization-with-roles>`_ 1. **Hierarchical roles**: Access scopes are nested depending on available roles. `Roles in Github for organisations <https://help.github.com/en/articles/managing-peoples-access-to-your-organization-with-roles>`_
is a great example of such modelling where access scopes are inherited by deeper roles: is a great example of such modelling where access scopes are inherited by deeper roles:
@ -124,7 +124,7 @@ partially captured by the table below (*showing access permissions for the* ``us
} }
* - org-member * - org-member
- Allow access to personally created repositories and the organisation's repositories. - Allow access to personally created repositories and the organisation's repositories
- -
.. code-block:: json .. code-block:: json
@ -159,7 +159,7 @@ trivial row-level permission like ``"creator_id": {"_eq": "X-Hasura-User-Id"}``
our example in the previous section, this user information (*ownership or relationship*) must be available for our example in the previous section, this user information (*ownership or relationship*) must be available for
defining a permission rule. defining a permission rule.
These non-trivial use-cases are to handled differently based on whether this information is available in the same These non-trivial use cases are to be handled differently based on whether this information is available in the same
database or not. database or not.
Relationship information is available in the same database Relationship information is available in the same database
@ -169,8 +169,8 @@ Let's take a closer look at the permission rule for the ``org-member`` rule in t
section. The rule reads as "*allow access to this repository if it was created by this user or if this user is section. The rule reads as "*allow access to this repository if it was created by this user or if this user is
a member of the organisation that this repository belongs to*". a member of the organisation that this repository belongs to*".
The crucial piece of user information, that is presumed to be available in the same database, that makes this an The crucial piece of user information that is presumed to be available in the same database and that makes this an
effective rule is the mapping of users (*members*) to organizations. effective rule, is the mapping of users (*members*) to organizations.
Since this information is available in the same database, it can be easily leveraged via Since this information is available in the same database, it can be easily leveraged via
:ref:`Relationships in permissions <relationships-in-permissions>` (*see this reference for another :ref:`Relationships in permissions <relationships-in-permissions>` (*see this reference for another
@ -181,11 +181,11 @@ Relationship information is **not** available in the same database
When this user information is not available in the database that Hasura is configured to use, session variables When this user information is not available in the database that Hasura is configured to use, session variables
are the only avenue to pass this information to a permission rule. In our example, the mapping of users (members) are the only avenue to pass this information to a permission rule. In our example, the mapping of users (members)
to organizations may not have been in available in the same database. to organizations may not have been available in the same database.
To convey this information, a session variable, say ``X-Hasura-Allowed-Organisations`` can be used by the To convey this information, a session variable, say ``X-Hasura-Allowed-Organisations`` can be used by the
configured authentication to relay this information. We can then check for the following condition to emulate configured authentication to relay this information. We can then check for the following condition to emulate
the same rule - *is the organization that this repository belongs to part of the list of the organizations the the same rule: *is the organization that this repository belongs to part of the list of the organizations the
user is a member of*. user is a member of*.
The permission for ``org-member`` role changes to this: The permission for ``org-member`` role changes to this:
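A sketch of what that rule could look like when created via the schema/metadata API, combining "created by this user" with the organisation list relayed in the session variable (table and column names are illustrative):

.. code-block:: bash

   curl http://localhost:8080/v1/query \
     -H 'Content-Type: application/json' \
     -H 'X-Hasura-Admin-Secret: <admin-secret>' \
     -d '{
       "type": "create_select_permission",
       "args": {
         "table": "repositories",
         "role": "org-member",
         "permission": {
           "columns": ["id", "name", "creator_id", "organisation_id"],
           "filter": {
             "_or": [
               {"creator_id": {"_eq": "X-Hasura-User-Id"}},
               {"organisation_id": {"_in": "X-Hasura-Allowed-Organisations"}}
             ]
           }
         }
       }
     }'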
View File
@ -1,7 +1,7 @@
Authorization modes Authorization modes
=================== ===================
You can run Hasura's GraphQL Engine in three modes: You can run Hasura's GraphQL engine in three modes:
1. No Authentication mode 1. No Authentication mode
^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^
View File
@ -7,7 +7,7 @@ Authentication & Authorization
:local: :local:
In Hasura, access control or authorization is based on **roles**. Let's take a look at how this works In Hasura, access control or authorization is based on **roles**. Let's take a look at how this works
when GraphQL engine receives a request: when the GraphQL engine receives a request:
.. thumbnail:: ../../../img/graphql/manual/auth/auth-high-level-overview.png .. thumbnail:: ../../../img/graphql/manual/auth/auth-high-level-overview.png
@ -18,14 +18,14 @@ As you can see from this:
Your authentication service is required to pass a user's **role** information in the form of session Your authentication service is required to pass a user's **role** information in the form of session
variables like ``X-Hasura-Role``, etc. More often than not, you'll also need to pass user information variables like ``X-Hasura-Role``, etc. More often than not, you'll also need to pass user information
for your access-control use-cases, like ``X-Hasura-User-Id``, to build permission rules. for your access control use cases, like ``X-Hasura-User-Id``, to build permission rules.
- For **Authorization** or **Access Control**, Hasura helps you define granular role-based access control - For **Authorization** or **Access Control**, Hasura helps you define granular role-based access control
rules for every field in your GraphQL schema *(granular enough to control access to any row or rules for every field in your GraphQL schema *(granular enough to control access to any row or
column in your database)*. column in your database)*.
Hasura uses the role/user information in the session variables and the actual query itself to validate Hasura uses the role/user information in the session variables and the actual query itself to validate
the query against the rules defined by you. If the query/operation is allowed, it generates a SQL the query against the rules defined by you. If the query/operation is allowed, it generates an SQL
query, which includes the row/column-level constraints from the access control rules, and sends it to query, which includes the row/column-level constraints from the access control rules, and sends it to
the database to perform the required operation (*fetch the required rows for queries, insert/edit the database to perform the required operation (*fetch the required rows for queries, insert/edit
rows for mutations, etc.*). rows for mutations, etc.*).
View File
@ -7,8 +7,8 @@ Custom business logic
:local: :local:
For the backends of most apps, you may have to implement custom business logic to complement the CRUD and For the backends of most apps, you may have to implement custom business logic to complement the CRUD and
real-time API provided by GraphQL Engine. Depending on the nature of the use case and its position vis-a-vis real-time API provided by GraphQL engine. Depending on the nature of the use case and its position vis-a-vis
GraphQL Engine/Postgres, different avenues are recommended for introducing such business logic in your app's backend: GraphQL engine/Postgres, different avenues are recommended for introducing such business logic in your app's backend:
- **Pre-CRUD**: :ref:`remote-schemas` - **Pre-CRUD**: :ref:`remote-schemas`
@ -23,16 +23,16 @@ Custom resolvers in remote schemas
---------------------------------- ----------------------------------
Merging remote schemas is ideal for adding "pre-CRUD" business logic (*logic to be run before you invoke Merging remote schemas is ideal for adding "pre-CRUD" business logic (*logic to be run before you invoke
GraphQL Engine's GraphQL API to insert/modify data in Postgres*) or custom business logic that is not part of GraphQL engine's GraphQL API to insert/modify data in Postgres*) or custom business logic that is not part of
your GraphQL Engine schema. Here are some use-cases where remote schemas are ideal: your GraphQL engine schema. Here are some use-cases where remote schemas are ideal:
- Customizing mutations (e.g. running validations before inserts) - Customizing mutations (e.g. running validations before inserts)
- Supporting features like payments, etc. and providing a consistent interface to access them i.e. behind the - Supporting features like payments, etc. and providing a consistent interface to access them i.e. behind the
GraphQL Engines API GraphQL engines API
- Fetching disparate data from other sources (e.g. from a weather API or another database) - Fetching disparate data from other sources (e.g. from a weather API or another database)
To support these kinds of business logic, a custom GraphQL schema with resolvers that implement said business To support these kinds of business logic, a custom GraphQL schema with resolvers that implement said business
logic is needed (*see link below for boilerplates*). This remote schema can then be merged with GraphQL Engine's logic is needed (*see link below for boilerplates*). This remote schema can then be merged with GraphQL engine's
schema using the console. Here's a reference architecture diagram for such a setup: schema using the console. Here's a reference architecture diagram for such a setup:
.. thumbnail:: ../../../img/graphql/manual/schema/schema-stitching-v1-arch-diagram.png .. thumbnail:: ../../../img/graphql/manual/schema/schema-stitching-v1-arch-diagram.png
@ -44,16 +44,16 @@ For more details, links to boilerplates for custom GraphQL servers, etc. please
Asynchronous business logic / Events triggers Asynchronous business logic / Events triggers
--------------------------------------------- ---------------------------------------------
"post-CRUD" business logic (*follow up logic to be run after GraphQL Engine's GraphQL API has been used to insert "post-CRUD" business logic (*follow up logic to be run after GraphQL engine's GraphQL API has been used to insert
or modify data in Postgres*) typically tends to be asynchronous, stateless and is triggered on changes to data or modify data in Postgres*) typically tends to be asynchronous, stateless and is triggered on changes to data
relevant to each use case. E.g. for every new user in your database, you may want to send out a notification. This relevant to each use case. E.g. for every new user in your database, you may want to send out a notification. This
business logic is triggered for every new row in your ``users`` table. business logic is triggered for every new row in your ``users`` table.
GraphQL Engine comes with built-in events triggers on tables in the Postgres database. These triggers capture events GraphQL engine comes with built-in events triggers on tables in the Postgres database. These triggers capture events
on specified tables and then invoke configured webhooks, which contain your business logic. on specified tables and then invoke configured webhooks, which contain your business logic.
If your business logic is stateful, it can even store its state back in the Postgres instance configured to work If your business logic is stateful, it can even store its state back in the Postgres instance configured to work
with GraphQL Engine, allowing your frontend app to offer a reactive user experience, where the app uses GraphQL with GraphQL engine, allowing your frontend app to offer a reactive user experience, where the app uses GraphQL
subscriptions to listen to updates from your webhook via Postgres. subscriptions to listen to updates from your webhook via Postgres.
.. thumbnail:: ../../../img/graphql/manual/event-triggers/database-event-triggers.png .. thumbnail:: ../../../img/graphql/manual/event-triggers/database-event-triggers.png
@ -82,7 +82,7 @@ Derived data / Data transformations
----------------------------------- -----------------------------------
For some use cases, you may want to transform your data in Postgres or run some predetermined function on it to For some use cases, you may want to transform your data in Postgres or run some predetermined function on it to
derive another dataset (*that will be queried using GraphQL Engine*). E.g. let's say you store each user's location derive another dataset (*that will be queried using GraphQL engine*). E.g. let's say you store each user's location
data in the database as a ``point`` type. You are interested in calculating the distance (*say the haversine distance*) data in the database as a ``point`` type. You are interested in calculating the distance (*say the haversine distance*)
between each set of two users i.e. you want this derived dataset: between each set of two users i.e. you want this derived dataset:
@ -101,6 +101,6 @@ between each set of two users i.e. you want this derived dataset:
The easiest way to handle these kinds of use cases is to create a view, which encapsulates your business logic The easiest way to handle these kinds of use cases is to create a view, which encapsulates your business logic
(*in our example, calculating the distance between any two users*), and query your derived/transformed data as you (*in our example, calculating the distance between any two users*), and query your derived/transformed data as you
would a table using GraphQL Engine (*with permissions defined explicitly for your view if needed*). would a table using GraphQL engine (*with permissions defined explicitly for your view if needed*).
For more information on how to do this, please see :doc:`../queries/derived-data`. For more information on how to do this, please see :doc:`../queries/derived-data`.
View File
@ -6,8 +6,8 @@ Allow-list for queries
:depth: 1 :depth: 1
:local: :local:
**Allow-list** is a list of safe queries (*GraphQL queries, mutations or subscriptions*) that is stored by The **Allow-list** is a list of safe queries (*GraphQL queries, mutations or subscriptions*) that is stored by
GraphQL engine in its metadata. When enabled, it can be used to restrict GraphQL engine so that it the GraphQL engine in its metadata. When enabled, it can be used to restrict the GraphQL engine so that it
executes **only** those queries that are present in the list *(available after version v1.0.0-beta.1)*. executes **only** those queries that are present in the list *(available after version v1.0.0-beta.1)*.
Adding or removing a query in allow-list Adding or removing a query in allow-list
@ -35,7 +35,7 @@ You can add or remove a query in the allow-list in two ways:
* You can upload files, like this `sample file <https://gist.github.com/dsandip/8b1b4aa87708289d4c9f8fd9621eb025>`_, * You can upload files, like this `sample file <https://gist.github.com/dsandip/8b1b4aa87708289d4c9f8fd9621eb025>`_,
to add multiple queries to the allow-list (each query needs to have a name). to add multiple queries to the allow-list (each query needs to have a name).
* **Using metadata APIs:** Queries can be stored in collections and a collection(s) can added to or removed * **Using metadata APIs:** Queries can be stored in collections and a collection can be added to or removed
from the allow-list. See :doc:`Collections & Allow-list APIs<../api-reference/schema-metadata-api/query-collections>` from the allow-list. See :doc:`Collections & Allow-list APIs<../api-reference/schema-metadata-api/query-collections>`
for API reference. for API reference.
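For example, assuming a query collection has already been created (e.g. with ``create_query_collection``), adding it to the allow-list is a single API call; the collection name and endpoint below are placeholders:

.. code-block:: bash

   curl http://localhost:8080/v1/query \
     -H 'Content-Type: application/json' \
     -H 'X-Hasura-Admin-Secret: <admin-secret>' \
     -d '{
       "type": "add_collection_to_allowlist",
       "args": {"collection": "allowed-queries"}
     }'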
@ -46,7 +46,7 @@ You can add or remove a query in the allow-list in two ways:
* Any introspection queries that your client apps require will have to be explicitly added to the allow-list * Any introspection queries that your client apps require will have to be explicitly added to the allow-list
to allow running them. to allow running them.
* The order of fields in a query will be **strictly** compared. E.g. assuming the query in first example * The order of fields in a query will be **strictly** compared. E.g. assuming the query in the first example
above is part of the allow-list, the following query will be **rejected**: above is part of the allow-list, the following query will be **rejected**:
.. code-block:: graphql .. code-block:: graphql
@ -60,7 +60,7 @@ You can add or remove a query in the allow-list in two ways:
} }
} }
* Allow-list is stored in the metadata. To version control the state of the list, you are required to export * The allow-list is stored in the metadata. To version control the state of the list, you are required to export
the metadata. See :doc:`Managing Hasura metadata <../migrations/manage-metadata>` for more details. the metadata. See :doc:`Managing Hasura metadata <../migrations/manage-metadata>` for more details.
* You can modify the allow-list without actually enabling it on your instance. * You can modify the allow-list without actually enabling it on your instance.
@ -70,7 +70,7 @@ Enable allow-list
----------------- -----------------
The allow-list validation can be enabled by setting the ``HASURA_GRAPHQL_ENABLE_ALLOWLIST`` environment The allow-list validation can be enabled by setting the ``HASURA_GRAPHQL_ENABLE_ALLOWLIST`` environment
variable to ``true`` or running GraphQL engine with the ``--enable-allowlist`` flag (*default value is* variable to ``true`` or running the GraphQL engine with the ``--enable-allowlist`` flag (*default value is*
``false``). See :ref:`reference docs <command-flags>`. ``false``). See :ref:`reference docs <command-flags>`.
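With Docker, for instance, this is just one more environment variable on the container (the rest of the command is whatever you normally run; values are placeholders):

.. code-block:: bash

   docker run -d -p 8080:8080 \
     -e HASURA_GRAPHQL_DATABASE_URL=postgres://user:password@host:5432/dbname \
     -e HASURA_GRAPHQL_ENABLE_ALLOWLIST=true \
     hasura/graphql-engine:latest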
.. note:: .. note::
@ -84,11 +84,11 @@ The following are the recommended best practises for enabling/disabling allow-li
* **In development instances**: During development or in dev instances, disable allow-list (*default setting*) * **In development instances**: During development or in dev instances, disable allow-list (*default setting*)
to allow complete access to the GraphQL schema. Add/remove queries in the allow-list and then export the to allow complete access to the GraphQL schema. Add/remove queries in the allow-list and then export the
metadata for version-control (*so you can apply to it other instances*). metadata for version-control (*so you can apply it to other instances*).
* **In CI/CD instances**: Enable allow-list for testing. * **In CI/CD instances**: Enable the allow-list for testing.
* **In production instances**: Enabling allow-list is highly recommended when running GraphQL engine in production. * **In production instances**: Enabling the allow-list is highly recommended when running the GraphQL engine in production.
View File
@ -1,4 +1,4 @@
Run Hasura GraphQL Engine using Docker Run Hasura GraphQL engine using Docker
====================================== ======================================
.. contents:: Table of contents .. contents:: Table of contents
@ -21,7 +21,7 @@ Step 1: Get the **docker-run.sh** bash script
The `hasura/graphql-engine/install-manifests <https://github.com/hasura/graphql-engine/tree/master/install-manifests>`_ The `hasura/graphql-engine/install-manifests <https://github.com/hasura/graphql-engine/tree/master/install-manifests>`_
repo contains all installation manifests required to deploy Hasura anywhere. repo contains all installation manifests required to deploy Hasura anywhere.
Get the docker run bash script from there: Get the Docker run bash script from there:
.. code-block:: bash .. code-block:: bash
@ -30,7 +30,7 @@ Get the docker run bash script from there:
Step 2: Configure the **docker-run.sh** script Step 2: Configure the **docker-run.sh** script
---------------------------------------------- ----------------------------------------------
The ``docker-run.sh`` script has a sample docker run command in it. The following changes have to be The ``docker-run.sh`` script has a sample Docker run command in it. The following changes have to be
made to the command: made to the command:
- Database URL - Database URL
@ -64,12 +64,12 @@ Examples of ``HASURA_GRAPHQL_DATABASE_URL``:
to connect to the database. to connect to the database.
- Hasura GraphQL engine needs access permissions to your Postgres database as described in - Hasura GraphQL engine needs access permissions to your Postgres database as described in
:doc:`Postgres permissions <../postgres-permissions>` :doc:`Postgres permissions <../postgres-permissions>`.
Network config Network config
^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^
If your Postgres instance is running on ``localhost`` the following changes will be needed to the ``docker run`` If your Postgres instance is running on ``localhost``, the following changes will be needed to the ``docker run``
command to allow the Docker container to access the host's network: command to allow the Docker container to access the host's network:
.. rst-class:: api_tabs .. rst-class:: api_tabs
@ -118,7 +118,7 @@ command to allow the Docker container to access the host's network:
hasura/graphql-engine:latest hasura/graphql-engine:latest
Step 3: Run the Hasura docker container Step 3: Run the Hasura Docker container
--------------------------------------- ---------------------------------------
Execute ``docker-run.sh`` & check if everything is running well: Execute ``docker-run.sh`` & check if everything is running well:
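Concretely, the command in the script amounts to something like the following sketch (image tag and credentials are placeholders, and the ``--net=host`` line applies to the Linux/localhost case described above); ``docker ps`` then confirms the container is up:

.. code-block:: bash

   docker run -d --net=host \
     -e HASURA_GRAPHQL_DATABASE_URL=postgres://postgres:password@localhost:5432/postgres \
     -e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
     hasura/graphql-engine:latest

   # verify that the container is running
   docker ps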
View File
@ -6,7 +6,7 @@ Hasura GraphQL engine server logs (Docker)
:depth: 1 :depth: 1
:local: :local:
You can check logs of Hasura GraphQL engine deployed using Docker by checking the logs of the You can check the logs of the Hasura GraphQL engine deployed using Docker by checking the logs of the
GraphQL engine container: GraphQL engine container:
.. code-block:: bash .. code-block:: bash
View File
@ -9,7 +9,7 @@ Securing the GraphQL endpoint (Docker)
To make sure that your GraphQL endpoint and the Hasura console are not publicly accessible, you need to To make sure that your GraphQL endpoint and the Hasura console are not publicly accessible, you need to
configure an admin secret key. configure an admin secret key.
Run the docker command with an admin-secret env var Run the Docker command with an admin-secret env var
--------------------------------------------------- ---------------------------------------------------
.. code-block:: bash .. code-block:: bash
@ -25,6 +25,6 @@ Run the docker command with an admin-secret env var
.. note:: .. note::
The ``HASURA_GRAPHQL_ADMIN_SECRET`` should never be passed from the client to Hasura GraphQL engine as it would The ``HASURA_GRAPHQL_ADMIN_SECRET`` should never be passed from the client to the Hasura GraphQL engine as it would
give the client full admin rights to your Hasura instance. See :doc:`../../auth/index` for information on give the client full admin rights to your Hasura instance. See :doc:`../../auth/index` for information on
setting up Authentication. setting up authentication.
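A sketch of that invocation with the admin secret set (all values are placeholders):

.. code-block:: bash

   docker run -d -p 8080:8080 \
     -e HASURA_GRAPHQL_DATABASE_URL=postgres://user:password@host:5432/dbname \
     -e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
     -e HASURA_GRAPHQL_ADMIN_SECRET=myadminsecretkey \
     hasura/graphql-engine:latest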
View File
@ -6,7 +6,7 @@ Updating Hasura GraphQL engine running with Docker
:depth: 1 :depth: 1
:local: :local:
This guide will help you update Hasura GraphQL engine running with Docker. This guide assumes that you already have This guide will help you update the Hasura GraphQL engine running with Docker. This guide assumes that you already have
Hasura GraphQL engine running with Docker. Hasura GraphQL engine running with Docker.
Step 1: Check the latest release version Step 1: Check the latest release version
View File
@ -13,7 +13,7 @@ The following are a few configuration use cases:
Add an admin secret Add an admin secret
------------------- -------------------
To add an admin-secret to Hasura, pass the ``--admin-secret`` flag with a secret To add an admin secret to Hasura, pass the ``--admin-secret`` flag with a secret
generated by you. generated by you.
Run server in this mode using following docker command: Run the server in this mode using the following Docker command:
@ -35,7 +35,7 @@ Typically, you will also have a webhook for authentication:
--admin-secret XXXXXXXXXXXXXXXX --admin-secret XXXXXXXXXXXXXXXX
--auth-hook https://myauth.mywebsite.com/user/session-info --auth-hook https://myauth.mywebsite.com/user/session-info
In addition to flags, the GraphQL Engine also accepts Environment variables. In addition to flags, the GraphQL engine also accepts environment variables.
In the above case, for adding an admin secret you will use the ``HASURA_GRAPHQL_ADMIN_SECRET`` In the above case, for adding an admin secret you will use the ``HASURA_GRAPHQL_ADMIN_SECRET``
and for the webhook, you will use the ``HASURA_GRAPHQL_AUTH_HOOK`` environment variables. and for the webhook, you will use the ``HASURA_GRAPHQL_AUTH_HOOK`` environment variables.
@ -45,7 +45,7 @@ and for the webhook, you will use the ``HASURA_GRAPHQL_AUTH_HOOK`` environment v
Using CLI commands with admin secret Using CLI commands with admin secret
------------------------------------ ------------------------------------
When you start the GraphQL Engine with an admin secret key, CLI commands will also When you start the GraphQL engine with an admin secret key, CLI commands will also
need this admin secret to contact APIs. It can be set in ``config.yaml`` or as an need this admin secret to contact APIs. It can be set in ``config.yaml`` or as an
environment variable or as a flag to the command. For example, let's look at the environment variable or as a flag to the command. For example, let's look at the
case of the ``console`` command: case of the ``console`` command:
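For example (the secret value is a placeholder; either of the following works):

.. code-block:: bash

   # as a flag
   hasura console --admin-secret myadminsecretkey

   # or as an environment variable
   HASURA_GRAPHQL_ADMIN_SECRET=myadminsecretkey hasura console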
@ -62,7 +62,7 @@ The console can now contact the GraphQL APIs with the specified admin secret.
.. note:: .. note::
If you're setting ``admin_secret`` in ``config.yaml`` please make sure you do If you're setting an ``admin_secret`` in ``config.yaml`` please make sure you do
not check this file into a public repository. not check this file into a public repository.
An alternate and safe way is to pass the admin secret value to the command An alternate and safe way is to pass the admin secret value to the command
@ -94,11 +94,11 @@ You can also set the admin secret using a flag to the command:
Configure CORS Configure CORS
-------------- --------------
By default, all CORS requests to Hasura GraphQL engine are allowed. To run with more restrictive CORS settings, By default, all CORS requests to the Hasura GraphQL engine are allowed. To run with more restrictive CORS settings,
use the ``--cors-domain`` flag or the ``HASURA_GRAPHQL_CORS_DOMAIN`` ENV variable. The default value is ``*``, use the ``--cors-domain`` flag or the ``HASURA_GRAPHQL_CORS_DOMAIN`` ENV variable. The default value is ``*``,
which means CORS headers are sent for all domains. which means CORS headers are sent for all domains.
Scheme + host with optional wildcard + optional port has to be mentioned. The scheme + host with optional wildcard + optional port have to be mentioned.
Examples: Examples:
@ -122,7 +122,7 @@ Examples:
.. note:: .. note::
Top-level domains are not considered as part of wildcard domains. You Top-level domains are not considered as part of wildcard domains. You
have to add them separately. E.g - ``https://*.foo.com`` doesn't include have to add them separately. E.g. ``https://*.foo.com`` doesn't include
``https://foo.com``. ``https://foo.com``.
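A couple of illustrative values showing the scheme + host + optional wildcard/port format (domains are placeholders):

.. code-block:: bash

   # as a flag to the serve command
   graphql-engine --database-url "$HASURA_GRAPHQL_DATABASE_URL" serve \
     --cors-domain "https://*.foo.com:8080, http://localhost:3000"

   # or as an environment variable
   export HASURA_GRAPHQL_CORS_DOMAIN="https://*.foo.com:8080, http://localhost:3000"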
@ -141,7 +141,7 @@ These files can be found at ``/srv/console-assets``.
If you're working in an environment with Hasura running locally and have no If you're working in an environment with Hasura running locally and have no
access to internet, you can configure server/console to load assets from the access to internet, you can configure server/console to load assets from the
docker image itself, instead of the CDN. Docker image itself, instead of the CDN.
Set the following env var or flag on the server: Set the following env var or flag on the server:
@ -153,13 +153,13 @@ Set the following env var or flag on the server:
# flag # flag
--console-assets-dir=/srv/console-assets --console-assets-dir=/srv/console-assets
Once the flag is set, all files in ``/srv/console-assets`` directory of the Once the flag is set, all files in the ``/srv/console-assets`` directory of the
Docker image will be served at ``/console/assets`` endpoint on the server with Docker image will be served at the ``/console/assets`` endpoint on the server with
the right content-type headers. the right content-type headers.
.. note:: .. note::
Hasura follows a rolling update pattern for console release where assets for Hasura follows a rolling update pattern for console releases where assets for
a ``major.minor`` version is updated continuously across all patches. If a ``major.minor`` version is updated continuously across all patches. If
you're using the assets on server Docker image, it might not be that latest you're using the assets on the server with a Docker image, it might not be the latest
version of console. version of console.
View File
@ -17,7 +17,7 @@ The flags can be passed as ENV variables as well.
Server flags Server flags
^^^^^^^^^^^^ ^^^^^^^^^^^^
For ``graphql-engine`` command these are the flags and ENV variables available: For the ``graphql-engine`` command these are the available flags and ENV variables:
.. list-table:: .. list-table::
@ -36,7 +36,7 @@ For ``graphql-engine`` command these are the flags and ENV variables available:
Example: ``postgres://admin:mypass@mydomain.com:5432/mydb`` Example: ``postgres://admin:mypass@mydomain.com:5432/mydb``
Or you can specify following options *(only via flags)* Or you can specify the following options *(only via flags)*:
.. code-block:: none .. code-block:: none
@ -52,7 +52,7 @@ Or you can specify following options *(only via flags)*
Command flags Command flags
^^^^^^^^^^^^^ ^^^^^^^^^^^^^
For ``serve`` sub-command these are the flags and ENV variables available: For the ``serve`` sub-command these are the available flags and ENV variables:
.. list-table:: .. list-table::
:header-rows: 1 :header-rows: 1
@ -98,14 +98,14 @@ For ``serve`` sub-command these are the flags and ENV variables available:
* - ``--unauthorized-role <ROLE>`` * - ``--unauthorized-role <ROLE>``
- ``HASURA_GRAPHQL_UNAUTHORIZED_ROLE`` - ``HASURA_GRAPHQL_UNAUTHORIZED_ROLE``
- Unauthorized role, used when access-key is not sent in access-key only - Unauthorized role, used when access-key is not sent in access-key only
mode or "Authorization" header is absent in JWT mode. mode or the ``Authorization`` header is absent in JWT mode.
Example: ``anonymous``. Now whenever "Authorization" header is Example: ``anonymous``. Now whenever the "authorization" header is
absent, request's role will default to "anonymous". absent, the request's role will default to ``anonymous``.
* - ``--cors-domain <DOMAINS>`` * - ``--cors-domain <DOMAINS>``
- ``HASURA_GRAPHQL_CORS_DOMAIN`` - ``HASURA_GRAPHQL_CORS_DOMAIN``
- CSV of list of domains, excluding scheme (http/https) and including port, - CSV of list of domains, excluding scheme (http/https) and including port,
to allow CORS for. Wildcard domains are allowed. to allow for CORS. Wildcard domains are allowed.
* - ``--disable-cors`` * - ``--disable-cors``
- N/A - N/A
@ -148,7 +148,7 @@ For ``serve`` sub-command these are the flags and ENV variables available:
* - ``-i, --tx-iso <TXISO>`` * - ``-i, --tx-iso <TXISO>``
- ``HASURA_GRAPHQL_TX_ISOLATION`` - ``HASURA_GRAPHQL_TX_ISOLATION``
- transaction isolation. read-committed / repeatable-read / serializable (default: read-commited) - Transaction isolation. read-committed / repeatable-read / serializable (default: read-committed)
* - ``--stringify-numeric-types`` * - ``--stringify-numeric-types``
- ``HASURA_GRAPHQL_STRINGIFY_NUMERIC_TYPES`` - ``HASURA_GRAPHQL_STRINGIFY_NUMERIC_TYPES``
@ -163,27 +163,27 @@ For ``serve`` sub-command these are the flags and ENV variables available:
* - ``--live-queries-fallback-refetch-interval`` * - ``--live-queries-fallback-refetch-interval``
- ``HASURA_GRAPHQL_LIVE_QUERIES_FALLBACK_REFETCH_INTERVAL`` - ``HASURA_GRAPHQL_LIVE_QUERIES_FALLBACK_REFETCH_INTERVAL``
- updated results (if any) will be sent at most once in this interval (in milliseconds) for live queries - Updated results (if any) will be sent at most once in this interval (in milliseconds) for live queries
which cannot be multiplexed. Default: 1000 (1sec) which cannot be multiplexed. Default: 1000 (1sec)
* - ``--live-queries-multiplexed-refetch-interval`` * - ``--live-queries-multiplexed-refetch-interval``
- ``HASURA_GRAPHQL_LIVE_QUERIES_MULTIPLEXED_REFETCH_INTERVAL`` - ``HASURA_GRAPHQL_LIVE_QUERIES_MULTIPLEXED_REFETCH_INTERVAL``
- updated results (if any) will be sent at most once in this interval (in milliseconds) for live queries - Updated results (if any) will be sent at most once in this interval (in milliseconds) for live queries
which can be multiplexed. Default: 1000 (1sec) which can be multiplexed. Default: 1000 (1sec)
* - ``--live-queries-multiplexed-batch-size`` * - ``--live-queries-multiplexed-batch-size``
- ``HASURA_GRAPHQL_LIVE_QUERIES_MULTIPLEXED_BATCH_SIZE`` - ``HASURA_GRAPHQL_LIVE_QUERIES_MULTIPLEXED_BATCH_SIZE``
- multiplexed live queries are split into batches of the specified size. Default 100. - Multiplexed live queries are split into batches of the specified size. Default: 100
* - ``--enable-allowlist`` * - ``--enable-allowlist``
- ``HASURA_GRAPHQL_ENABLE_ALLOWLIST`` - ``HASURA_GRAPHQL_ENABLE_ALLOWLIST``
- Restrict queries allowed to be executed by GraphQL engine to those that are part of the configured - Restrict queries allowed to be executed by the GraphQL engine to those that are part of the configured
allow-list. Default ``false``. *(Available for versions > v1.0.0-beta.1)* allow-list. Default: ``false`` *(Available for versions > v1.0.0-beta.1)*
* - ``--console-assets-dir`` * - ``--console-assets-dir``
- ``HASURA_GRAPHQL_CONSOLE_ASSETS_DIR`` - ``HASURA_GRAPHQL_CONSOLE_ASSETS_DIR``
- Set the value to ``/srv/console-assets`` for the console to load assets from the server itself - Set the value to ``/srv/console-assets`` for the console to load assets from the server itself
instead of CDN. *(Available for versions > v1.0.0-beta.1)* instead of CDN *(Available for versions > v1.0.0-beta.1)*
* - ``--enabled-log-types`` * - ``--enabled-log-types``
- ``HASURA_GRAPHQL_ENABLED_LOG_TYPES`` - ``HASURA_GRAPHQL_ENABLED_LOG_TYPES``
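Tying a few of these together, a ``serve`` invocation might look like the following sketch (every value is a placeholder):

.. code-block:: bash

   graphql-engine --database-url postgres://admin:mypass@mydomain.com:5432/mydb \
     serve \
     --admin-secret myadminsecretkey \
     --unauthorized-role anonymous \
     --cors-domain "https://*.mydomain.com" \
     --enable-allowlist \
     --tx-iso repeatable-read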
View File
@ -1,4 +1,4 @@
Run Hasura GraphQL Engine on Heroku Run Hasura GraphQL engine on Heroku
=================================== ===================================
.. contents:: Table of contents .. contents:: Table of contents
@ -6,7 +6,7 @@ Run Hasura GraphQL Engine on Heroku
:depth: 2 :depth: 2
:local: :local:
This guide will help you get Hasura GraphQL engine running as a "git push to deploy" app on This guide will help you get the Hasura GraphQL engine running as a "git push to deploy" app on
`Heroku <https://www.heroku.com/platform>`_ and connecting it to a `Heroku Postgres <https://www.heroku.com/postgres>`_ `Heroku <https://www.heroku.com/platform>`_ and connecting it to a `Heroku Postgres <https://www.heroku.com/postgres>`_
instance. If you want a simple, quick deployment on Heroku, follow this :doc:`Heroku quickstart instance. If you want a simple, quick deployment on Heroku, follow this :doc:`Heroku quickstart
guide <../../getting-started/heroku-simple>`. guide <../../getting-started/heroku-simple>`.
@ -21,7 +21,7 @@ https://github.com/hasura/graphql-engine-heroku
Configure database URL Configure database URL
^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^
Edit the command in the ``Dockerfile`` to change which database Hasura GraphQL engine connects to. Edit the command in the ``Dockerfile`` to change which database the Hasura GraphQL engine connects to.
By default, it connects to the primary database in your app which is available at ``DATABASE_URL``. By default, it connects to the primary database in your app which is available at ``DATABASE_URL``.
.. code-block:: dockerfile .. code-block:: dockerfile
@ -42,7 +42,7 @@ Read about more configuration options :doc:`here <../graphql-engine-flags/refere
.. note:: .. note::
Hasura GraphQL engine needs access permissions to your Postgres database as described in Hasura GraphQL engine needs access permissions to your Postgres database as described in
:doc:`Postgres permissions <../postgres-permissions>` :doc:`Postgres permissions <../postgres-permissions>`.
Deploying Deploying
@ -50,8 +50,8 @@ Deploying
These are some sample deployment instructions while creating a new app. These are some sample deployment instructions while creating a new app.
Step 1: Create app with **--stack=container** Step 1: Create an app with **--stack=container**
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use the `Heroku CLI <https://devcenter.heroku.com/articles/heroku-cli>`_ to create a new Heroku app. Let's call Use the `Heroku CLI <https://devcenter.heroku.com/articles/heroku-cli>`_ to create a new Heroku app. Let's call
the app ``graphql-on-postgres``. the app ``graphql-on-postgres``.
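A sketch of that step with the Heroku CLI (the app name is just the example used in this guide):

.. code-block:: bash

   heroku create graphql-on-postgres --stack=container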
@ -85,10 +85,10 @@ Create the Postgres add-on in your Heroku app.
Created postgresql-angular-20334 as DATABASE_URL Created postgresql-angular-20334 as DATABASE_URL
Use heroku addons:docs heroku-postgresql to view documentation Use heroku addons:docs heroku-postgresql to view documentation
Step 3: git push to deploy Step 3: **git push** to deploy
^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Remember to change ``HEROKU_GIT_REMOTE`` to your git remote below. In our case: Remember to change ``HEROKU_GIT_REMOTE`` to your git remote below. In our case:
``https://git.heroku.com/graphql-on-postgres.git`` ``https://git.heroku.com/graphql-on-postgres.git``.
.. code-block:: bash .. code-block:: bash
View File
@ -6,8 +6,8 @@ Hasura GraphQL engine server logs (Heroku)
:depth: 1 :depth: 1
:local: :local:
You can use the `Heroku CLI <https://devcenter.heroku.com/articles/heroku-cli>`_ to check logs You can use the `Heroku CLI <https://devcenter.heroku.com/articles/heroku-cli>`_ to check the logs
of Hasura GraphQL engine deployed on Heroku: of the Hasura GraphQL engine deployed on Heroku:
.. code-block:: bash .. code-block:: bash
@ -18,6 +18,6 @@ of Hasura GraphQL engine deployed on Heroku:
**See:** **See:**
- https://devcenter.heroku.com/articles/logging for more details on logging in Heroku. - https://devcenter.heroku.com/articles/logging for more details on logging on Heroku.
- :doc:`../logging` for more details on Hasura logs - :doc:`../logging` for more details on Hasura logs
View File
@ -26,7 +26,7 @@ prompted for the admin secret key.
The ``HASURA_GRAPHQL_ADMIN_SECRET`` should never be passed from the client to Hasura GraphQL engine as it would The ``HASURA_GRAPHQL_ADMIN_SECRET`` should never be passed from the client to Hasura GraphQL engine as it would
give the client full admin rights to your Hasura instance. See :doc:`../../auth/index` for information on give the client full admin rights to your Hasura instance. See :doc:`../../auth/index` for information on
setting up Authentication. setting up authentication.
(optional) Use the admin secret with the CLI (optional) Use the admin secret with the CLI
View File
@ -6,7 +6,7 @@ Updating Hasura GraphQL engine on Heroku
:depth: 1 :depth: 1
:local: :local:
This guide will help you update Hasura GraphQL engine running on Heroku. This guide assumes that you already have This guide will help you update the Hasura GraphQL engine running on Heroku. This guide assumes that you already have a
Hasura GraphQL engine running on Heroku. Hasura GraphQL engine running on Heroku.
The current latest version is: The current latest version is:
@ -21,11 +21,11 @@ Step 1: Clone the Hasura GraphQL engine Heroku app
-------------------------------------------------- --------------------------------------------------
The Hasura app with Heroku buildpack/configuration is available at: The Hasura app with Heroku buildpack/configuration is available at:
https://github.com/hasura/graphql-engine-heroku https://github.com/hasura/graphql-engine-heroku.
Clone the above repository. Clone the above repository.
If you already have this, then pull the latest changes which will have the updated GraphQL engine docker image. If you already have this, then pull the latest changes which will have the updated GraphQL engine Docker image.
Step 2: Attach your Heroku app Step 2: Attach your Heroku app
------------------------------ ------------------------------
@ -43,8 +43,8 @@ to be able to push to this app.
You can find your Heroku git repo in your Heroku - Settings - Info - Heroku Git URL You can find your Heroku git repo in your Heroku - Settings - Info - Heroku Git URL
Step 3: Git push to deploy the latest Hasura GraphQL engine Step 3: **git push** to deploy the latest Hasura GraphQL engine
----------------------------------------------------------- ---------------------------------------------------------------
When you ``git push`` to deploy, the Heroku app will get updated with the latest changes: When you ``git push`` to deploy, the Heroku app will get updated with the latest changes:
@ -52,12 +52,12 @@ When you ``git push`` to deploy, the Heroku app will get updated with the latest
$ git push heroku master $ git push heroku master
Deploy a specific version of Hasura GraphQL engine Deploy a specific version of the Hasura GraphQL engine
-------------------------------------------------- ------------------------------------------------------
Head to the ``Dockerfile`` in the git repo you cloned in Step 1. Head to the ``Dockerfile`` in the git repo you cloned in step 1.
Change the ``FROM`` line to the specific version you want. A list of all releases can be found Change the ``FROM`` line to the specific version you want. A list of all releases can be found
at https://github.com/hasura/graphql-engine/releases at https://github.com/hasura/graphql-engine/releases.
.. code-block:: Dockerfile .. code-block:: Dockerfile
:emphasize-lines: 1 :emphasize-lines: 1
View File
@ -25,7 +25,7 @@ Deploy Hasura on Heroku by clicking on this button:
:class: no-shadow :class: no-shadow
:target: https://heroku.com/deploy?template=https://github.com/hasura/graphql-engine-heroku :target: https://heroku.com/deploy?template=https://github.com/hasura/graphql-engine-heroku
Follow the Heroku instructions to deploy, check if the Hasura console loads up when you **View app** and then head Follow the Heroku instructions to deploy, check if the Hasura console loads up when you click on **View app** and then head
to the **Manage App** screen on your Heroku dashboard. to the **Manage App** screen on your Heroku dashboard.
This will deploy Hasura with a free Postgres add-on automatically provisioned. This will deploy Hasura with a free Postgres add-on automatically provisioned.
@ -47,8 +47,8 @@ if you want to secure your endpoint.
.. note:: .. note::
Hasura GraphQL engine needs access permissions to your Postgres database as described in The Hasura GraphQL engine needs access permissions to your Postgres database as described in
:doc:`Postgres permissions <../postgres-permissions>` :doc:`Postgres permissions <../postgres-permissions>`.
Step 4: Track tables and relationships Step 4: Track tables and relationships
-------------------------------------- --------------------------------------
View File
@ -1,4 +1,4 @@
Deploying Hasura GraphQL Engine Deploying Hasura GraphQL engine
=============================== ===============================
.. contents:: Table of contents .. contents:: Table of contents
@ -7,7 +7,7 @@ Deploying Hasura GraphQL Engine
:local: :local:
.. note:: .. note::
This section talks in depth about deploying the Hasura GraphQL engine for a **production like environment**. This section talks in depth about deploying the Hasura GraphQL engine for a **production-like environment**.
If you would simply like to take the Hasura GraphQL engine for a quick spin, choose from our If you would simply like to take the Hasura GraphQL engine for a quick spin, choose from our
:doc:`Getting started guides <../getting-started/index>`. :doc:`Getting started guides <../getting-started/index>`.
@ -28,7 +28,7 @@ Configuration
------------- -------------
By default, Hasura GraphQL engine runs in a very permissive mode for easier development. Check out the below pages By default, Hasura GraphQL engine runs in a very permissive mode for easier development. Check out the below pages
to configure Hasura GraphQL engine for your production environment: to configure the Hasura GraphQL engine for your production environment:
- :doc:`securing-graphql-endpoint` - :doc:`securing-graphql-endpoint`
- :doc:`postgres-permissions` - :doc:`postgres-permissions`
View File
@ -1,4 +1,4 @@
Run Hasura GraphQL Engine on Kubernetes Run Hasura GraphQL engine on Kubernetes
======================================= =======================================
.. contents:: Table of contents .. contents:: Table of contents
@ -49,8 +49,8 @@ Examples of ``HASURA_GRAPHQL_DATABASE_URL``:
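A typical connection string has the following shape (all values are placeholders):

.. code-block:: bash

   postgres://<username>:<password>@<hostname>:<port>/<database-name>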
You can check the :doc:`logs <logging>` to see if the database credentials are proper and if Hasura is able You can check the :doc:`logs <logging>` to see if the database credentials are proper and if Hasura is able
to connect to the database. to connect to the database.
- Hasura GraphQL engine needs access permissions to your Postgres database as described in - The Hasura GraphQL engine needs access permissions on your Postgres database as described in
:doc:`Postgres permissions <../postgres-permissions>` :doc:`Postgres permissions <../postgres-permissions>`.
Step 3: Create the Kubernetes deployment and service Step 3: Create the Kubernetes deployment and service
View File
@ -6,7 +6,7 @@ Hasura GraphQL engine server logs (Kubernetes)
:depth: 1 :depth: 1
:local: :local:
You can check logs of Hasura GraphQL engine deployed on Kubernetes by checking the logs of the GraphQL engine You can check the logs of the Hasura GraphQL engine deployed on Kubernetes by checking the logs of the GraphQL engine
service, i.e. ``hasura``: service, i.e. ``hasura``:
.. code-block:: bash .. code-block:: bash
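   # assumes the service created by the default manifest is named "hasura"; adjust if yours differs
   kubectl logs -f svc/hasura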
View File
@ -36,9 +36,9 @@ Update the ``deployment.yaml`` to set the ``HASURA_GRAPHQL_ADMIN_SECRET`` enviro
.. note:: .. note::
The ``HASURA_GRAPHQL_ADMIN_SECRET`` should never be passed from the client to Hasura GraphQL engine as it would The ``HASURA_GRAPHQL_ADMIN_SECRET`` should never be passed from the client to the Hasura GraphQL engine as it would
give the client full admin rights to your Hasura instance. See :doc:`../../auth/index` for information on give the client full admin rights to your Hasura instance. See :doc:`../../auth/index` for information on
setting up Authentication. setting up authentication.
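Once ``deployment.yaml`` has been edited, a typical way to roll out the change is (sketch):

.. code-block:: bash

   kubectl apply -f deployment.yaml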
(optional) Use the admin secret key with the CLI (optional) Use the admin secret key with the CLI
View File
@ -6,8 +6,8 @@ Updating Hasura GraphQL engine running on Kubernetes
:depth: 1 :depth: 1
:local: :local:
This guide will help you update Hasura GraphQL engine running on Kubernetes. This guide assumes that you already have This guide will help you update the Hasura GraphQL engine running on Kubernetes. This guide assumes that you already have
Hasura GraphQL engine running on Kubernetes. the Hasura GraphQL engine running on Kubernetes.
Step 1: Check the latest release version Step 1: Check the latest release version
---------------------------------------- ----------------------------------------
@ -18,7 +18,7 @@ The current latest version is:
<code>hasura/graphql-engine:<span class="latest-release-tag">latest</span></code> <code>hasura/graphql-engine:<span class="latest-release-tag">latest</span></code>
All the versions can be found at: https://github.com/hasura/graphql-engine/releases All the versions can be found at: https://github.com/hasura/graphql-engine/releases.
Step 2: Update the container image Step 2: Update the container image
---------------------------------- ----------------------------------
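One way to do this, assuming the deployment is named ``hasura`` and the container ``graphql-engine``, is:

.. code-block:: bash

   # <version-tag> is a placeholder for the release you want to run
   kubectl set image deployment/hasura graphql-engine=hasura/graphql-engine:<version-tag>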
View File
@ -9,7 +9,7 @@ Hasura GraphQL engine logs
Accessing logs Accessing logs
-------------- --------------
Based on your deployment method, Hasura GraphQL engine logs can be accessed as follows: Based on your deployment method, the Hasura GraphQL engine logs can be accessed as follows:
- :doc:`On Heroku <heroku/logging>` - :doc:`On Heroku <heroku/logging>`
- :doc:`On Docker <docker/logging>` - :doc:`On Docker <docker/logging>`
@ -399,446 +399,4 @@ Monitoring frameworks
You can integrate the logs emitted by Hasura GraphQL with external monitoring tools for better visibility as per You can integrate the logs emitted by Hasura GraphQL with external monitoring tools for better visibility as per
your convenience. your convenience.
For some examples, see :doc:`../guides/monitoring/index` For some examples, see :doc:`../guides/monitoring/index`.
Migration path of logs from (<= **v1.0.0-beta.2** to newer)
-----------------------------------------------------------
Previously, there were two main kinds of logs for every request - ``http-log`` and ``ws-handler``
for HTTP and websockets respectively. (The other logs include startup logs, event-trigger
logs, schema-sync logs, jwk-refresh logs, etc.)
The structure of the **http-log** has changed
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Summary of the changes
++++++++++++++++++++++
.. list-table:: **http-log** changes
:header-rows: 1
* - Older
- Newer
* - ``detail.status``
- ``detail.http_info.status``
* - ``detail.http_version``
- ``detail.http_info.version``
* - ``detail.method``
- ``detail.http_info.method``
* - ``detail.url``
- ``detail.http_info.url``
* - ``detail.ip``
- ``detail.http_info.ip``
* - ``detail.query_hash``
- removed
* - ``detail.query_execution_time``
- ``detail.operation.query_execution_time``
* - ``detail.request_id``
- ``detail.operation.request_id``
* - ``detail.response_size``
- ``detail.operation.response_size``
* - ``detail.user``
- ``detail.operation.user_vars``
* - ``detail.detail.error`` (only on error)
- ``detail.operation.error`` (only on error)
* - ``detail.detail.request`` (only on error)
- ``detail.operation.query`` (only on error)
Full example logs
+++++++++++++++++
Older, on success:
.. code-block:: json
{
"timestamp": "2019-06-07T12:04:16.713+0000",
"level": "info",
"type": "http-log",
"detail": {
"status": 200,
"query_hash": "e9006e6750ebaa77da775ae4fc60227d3101b03e",
"http_version": "HTTP/1.1",
"query_execution_time": 0.408548571,
"request_id": "1ad0c61b-1431-410e-818e-99b57822bd2b",
"url": "/v1/graphql",
"ip": "106.51.72.39",
"response_size": 204,
"user": {
"x-hasura-role": "admin"
},
"method": "POST",
"detail": null
}
}
Newer, on success:
.. code-block:: json
{
"timestamp": "2019-05-30T23:40:24.654+0530",
"level": "info",
"type": "http-log",
"detail": {
"operation": {
"query_execution_time": 0.009240042,
"user_vars": {
"x-hasura-role": "user"
},
"request_id": "072b3617-6653-4fd5-b5ee-580e9d098c3d",
"response_size": 105,
"error": null,
"query": null
},
"http_info": {
"status": 200,
"http_version": "HTTP/1.1",
"url": "/v1/graphql",
"ip": "127.0.0.1",
"method": "POST"
}
}
}
}
Older, on error:
.. code-block:: json
{
"timestamp": "2019-06-07T12:24:05.166+0000",
"level": "info",
"type": "http-log",
"detail": {
"status": 200,
"query_hash": "511894cc797a2b5cef1c84f106a038ea7bc8436d",
"http_version": "HTTP/1.1",
"query_execution_time": 2.34687e-4,
"request_id": "02d695c7-8a2d-4a45-84dd-8b61b7255807",
"url": "/v1/graphql",
"ip": "106.51.72.39",
"response_size": 138,
"user": {
"x-hasura-role": "admin"
},
"method": "POST",
"detail": {
"error": {
"path": "$.selectionSet.todo.selectionSet.completedx",
"error": "field \"completedx\" not found in type: 'todo'",
"code": "validation-failed"
},
"request": "{\"query\":\"query {\\n todo {\\n id\\n title\\n completedx\\n }\\n}\",\"variables\":null}"
}
}
}
Newer, on error:
.. code-block:: json
{
"timestamp": "2019-05-29T15:22:37.834+0530",
"level": "info",
"type": "http-log",
"detail": {
"operation": {
"query_execution_time": 0.000656144,
"user_vars": {
"x-hasura-role": "user",
"x-hasura-user-id": "1"
},
"error": {
"path": "$.selectionSet.profile.selectionSet.usernamex",
"error": "field \"usernamex\" not found in type: 'profile'",
"code": "validation-failed"
},
"request_id": "072b3617-6653-4fd5-b5ee-580e9d098c3d",
"response_size": 142,
"query": {
"variables": {
"limit": 10
},
"operationName": "getProfile",
"query": "query getProfile($limit: Int!) { profile(limit: $limit, where:{username: {_like: \"%a%\"}}) { usernamex} }"
}
},
"http_info": {
"status": 200,
"http_version": "HTTP/1.1",
"url": "/v1/graphql",
"ip": "127.0.0.1",
"method": "POST"
}
}
}
The structure for **ws-handler** has changed, and has been renamed to **websocket-log**
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Summary of the changes
++++++++++++++++++++++
.. list-table:: **websocket-log** changes
:header-rows: 1
* - Older
- Newer
* - ``detail.websocket_id``
- ``detail.connection_info.websocket_id``
* - ``detail.jwt_expiry``
- ``detail.connection_info.jwt_expiry``
* - ``detail.msg``
- ``detail.connection_info.msg``
* - ``detail.user``
- ``detail.user_vars``
* - ``detail.event.detail``:
.. code-block:: json
[
"1",
null,
{
"type": "started"
}
]
- ``detail.event.detail``:
.. code-block:: json
{
"request_id": "d2ede87d-5cb7-44b6-8736-1d898117722a",
"operation_id": "1",
"operation_type": {
"type": "started"
},
"operation_name": null
}
* - ``detail.event.detail`` (on error):
.. code-block:: json
[
"1",
null,
{
"type": "query_err",
"detail": {
"path": "$.selectionSet.todo.selectionSet.titlex",
"error": "field \"titlex\" not found in type: 'todo'",
"code": "validation-failed"
}
}
]
- ``detail.event.detail`` (on error):
.. code-block:: json
{
"request_id": "150e3e6a-e1a7-46ba-a9d4-da6b192a4005",
"operation_id": "1",
"query": {
"variables": {},
"query": "subscription {\n author {\n namex\n }\n}\n"
},
"operation_type": {
"type": "query_err",
"detail": {
"path": "$.selectionSet.author.selectionSet.namex",
"error": "field \"namex\" not found in type: 'author'",
"code": "validation-failed"
}
},
"operation_name": null
}
Full example logs
+++++++++++++++++
Older, on success:
.. code-block:: json
{
"timestamp": "2019-06-07T12:35:40.652+0000",
"level": "info",
"type": "ws-handler",
"detail": {
"event": {
"type": "operation",
"detail": ["1", null, {
"type": "started"
}]
},
"websocket_id": "11dea559-6554-4598-969a-00b48545950f",
"jwt_expiry": null,
"msg": null,
"user": {
"x-hasura-role": "admin"
}
}
}
Newer, on success:
.. code-block:: json
{
"timestamp": "2019-06-10T10:52:54.247+0530",
"level": "info",
"type": "websocket-log",
"detail": {
"event": {
"type": "operation",
"detail": {
"request_id": "d2ede87d-5cb7-44b6-8736-1d898117722a",
"operation_id": "1",
"query": {
"variables": {},
"query": "subscription {\n author {\n name\n }\n}\n"
},
"operation_type": {
"type": "started"
},
"operation_name": null
}
},
"connection_info": {
"websocket_id": "f590dd18-75db-4602-8693-8150239df7f7",
"jwt_expiry": null,
"msg": null
},
"user_vars": {
"x-hasura-role": "admin"
}
}
}
Older, when operation stops:
.. code-block:: json
{
"timestamp": "2019-06-10T05:30:41.432+0000",
"level": "info",
"type": "ws-handler",
"detail": {
"event": {
"type": "operation",
"detail": ["1", null, {
"type": "stopped"
}]
},
"websocket_id": "3f5721ee-1bc6-424c-841f-8ff8a326d9ef",
"jwt_expiry": null,
"msg": null,
"user": {
"x-hasura-role": "admin"
}
}
}
Newer, when the operation stops:
.. code-block:: json
{
"timestamp": "2019-06-10T11:01:40.939+0530",
"level": "info",
"type": "websocket-log",
"detail": {
"event": {
"type": "operation",
"detail": {
"request_id": null,
"operation_id": "1",
"query": null,
"operation_type": {
"type": "stopped"
},
"operation_name": null
}
},
"connection_info": {
"websocket_id": "7f782190-fd58-4305-a83f-8e17177b204e",
"jwt_expiry": null,
"msg": null
},
"user_vars": {
"x-hasura-role": "admin"
}
}
}
Older, on error:
.. code-block:: json
{
"timestamp": "2019-06-07T12:38:07.188+0000",
"level": "info",
"type": "ws-handler",
"detail": {
"event": {
"type": "operation",
"detail": ["1", null, {
"type": "query_err",
"detail": {
"path": "$.selectionSet.todo.selectionSet.titlex",
"error": "field \"titlex\" not found in type: 'todo'",
"code": "validation-failed"
}
}]
},
"websocket_id": "77558d9b-99f8-4c6a-b105-a5b08c96543b",
"jwt_expiry": null,
"msg": null,
"user": {
"x-hasura-role": "admin"
}
}
}
Newer, on error:
.. code-block:: json
{
"timestamp": "2019-06-10T10:55:20.650+0530",
"level": "info",
"type": "websocket-log",
"detail": {
"event": {
"type": "operation",
"detail": {
"request_id": "150e3e6a-e1a7-46ba-a9d4-da6b192a4005",
"operation_id": "1",
"query": {
"variables": {},
"query": "subscription {\n author {\n namex\n }\n}\n"
},
"operation_type": {
"type": "query_err",
"detail": {
"path": "$.selectionSet.author.selectionSet.namex",
"error": "field \"namex\" not found in type: 'author'",
"code": "validation-failed"
}
},
"operation_name": null
}
},
"connection_info": {
"websocket_id": "49932ddf-e54d-42c6-bffb-8a57a1c6dcbe",
"jwt_expiry": null,
"msg": null
},
"user_vars": {
"x-hasura-role": "admin"
}
}
}
View File
@ -6,17 +6,17 @@ Postgres permissions
:depth: 1 :depth: 1
:local: :local:
If you're running in a controlled environment, you might need to configure Hasura GraphQL engine to use a If you're running in a controlled environment, you might need to configure the Hasura GraphQL engine to use a
specific Postgres user that your DBA gives you. specific Postgres user that your DBA gives you.
Hasura GraphQL engine needs access to your Postgres database with the following permissions: The Hasura GraphQL engine needs access to your Postgres database with the following permissions:
- (required) Read & write access on 2 schemas: ``hdb_catalog`` and ``hdb_views``. - (required) Read & write access on 2 schemas: ``hdb_catalog`` and ``hdb_views``.
- (required) Read access to the ``information_schema`` and ``pg_catalog`` schemas, to query for list of tables. - (required) Read access to the ``information_schema`` and ``pg_catalog`` schemas, to query for list of tables.
- (required) Read access to the schemas (public or otherwise) if you only want to support queries. - (required) Read access to the schemas (public or otherwise) if you only want to support queries.
- (optional) Write access to the schemas if you want to support mutations as well - (optional) Write access to the schemas if you want to support mutations as well.
- (optional) To create tables and views via the Hasura console (the admin UI) you'll need the privilege to create - (optional) To create tables and views via the Hasura console (the admin UI) you'll need the privilege to create
tables/views. This might not be required when you're working with an existing database tables/views. This might not be required when you're working with an existing database.
Here's a sample SQL block that you can run on your database to create the right credentials: Here's a sample SQL block that you can run on your database to create the right credentials:
View File
@ -22,15 +22,15 @@ Unique name for event trigger.
**Schema/Table** **Schema/Table**
The postgres schema and table name on which event trigger needs to be created. The postgres schema and table name on which the event trigger needs to be created.
**Trigger Operations** **Trigger Operations**
The table operation on which event trigger will be invoked. The table operation on which the event trigger will be invoked.
**Webhook URL** **Webhook URL**
The HTTP(s) URL which will be called with event payload on configured operation. Must be a ``POST`` handler. This URL The HTTP(s) URL which will be called with the event payload on configured operation. Must be a ``POST`` handler. This URL
can be entered manually or can be picked up from an environment variable (*the environment variable needs to be set can be entered manually or can be picked up from an environment variable (*the environment variable needs to be set
before using it for this configuration*). before using it for this configuration*).
@ -42,8 +42,8 @@ Advanced Settings
Listen columns for update Listen columns for update
^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^
Update operations are special because you may want to trigger webhook only if specific columns have changed in a row. Update operations are special because you may want to trigger a webhook only if specific columns have changed in a row.
Choose the columns here which you want the update operation to listen on. Choose the columns here which you want the update operation to listen to.
If a column is not selected here, then an update to that column will not trigger the webhook. If a column is not selected here, then an update to that column will not trigger the webhook.
@ -67,5 +67,5 @@ Each header has 3 parameters:
1. ``Key``: Name of the header e.g. Authorization or X-My-Header. 1. ``Key``: Name of the header e.g. Authorization or X-My-Header.
2. ``Type``: One of ``static`` or ``from env variable``. ``static`` means the value provided in the ``Value`` field is 2. ``Type``: One of ``static`` or ``from env variable``. ``static`` means the value provided in the ``Value`` field is
the raw value of the header. ``from env variable`` means the value provided in the ``Value`` field is the name of the raw value of the header. ``from env variable`` means the value provided in the ``Value`` field is the name of
the environment variable in the GraphQL Engine which will be resolved before sending the header. the environment variable in the GraphQL engine which will be resolved before sending the header.
3. ``Value``: The value of the header. Either static value or name of an environment variable. 3. ``Value``: The value of the header. Either a static value or the name of an environment variable.
View File
@ -17,7 +17,7 @@ Events can be of the following types:
- INSERT: When a row is inserted into a table - INSERT: When a row is inserted into a table
- UPDATE: When a row is updated in a table - UPDATE: When a row is updated in a table
- DELETE: When a row is deleted from a table - DELETE: When a row is deleted from a table
- MANUAL: Using the console or API, an event can be triggered manually on a row. - MANUAL: Using the console or API, an event can be triggered manually on a row
**See:** **See:**
View File
@ -7,18 +7,18 @@ Invoke event trigger via console
:local: :local:
You can select the ``Via console`` trigger operation while :doc:`creating an event trigger <./create-trigger>` You can select the ``Via console`` trigger operation while :doc:`creating an event trigger <./create-trigger>`
to allow invoking the event trigger on rows manually using the Hasura console. *(available after version v1.0.0-beta.1)* to allow invoking the event trigger on rows manually using the Hasura console *(available after version v1.0.0-beta.1)*.
In the ``Data -> [table-name] -> Browse Rows`` tab, clicking the invoke trigger button next to any row lets In the ``Data -> [table-name] -> Browse Rows`` tab, clicking the ``invoke trigger`` button next to any row lets
you invoke manual event triggers configured on the table with that row as payload *(the button will be shown you invoke "manual event triggers" configured on the table with that row as payload *(the button will be shown
only if you have any triggers configured)*: only if you have any triggers configured)*:
.. thumbnail:: ../../../img/graphql/manual/event-triggers/select-manual-trigger.png .. thumbnail:: ../../../img/graphql/manual/event-triggers/select-manual-trigger.png
Click on the event trigger you want to run and a modal will pop-up with the request and response. Click on the event trigger you want to run and a modal will pop up with the request and response.
.. thumbnail:: ../../../img/graphql/manual/event-triggers/run-manual-trigger.png .. thumbnail:: ../../../img/graphql/manual/event-triggers/run-manual-trigger.png
.. note:: .. note::
You can also use the :ref:`invoke_event_trigger` metadata API to invoke manual triggers You can also use the :ref:`invoke_event_trigger` metadata API to invoke manual triggers.
View File
@ -50,7 +50,7 @@ JSON payload
- Description - Description
* - session-variables * - session-variables
- Object_ or NULL - Object_ or NULL
- Key-value pairs of session variables (i.e. "x-hasura-\*" variables) and their values. NULL if no session variables found. - Key-value pairs of session variables (i.e. "x-hasura-\*" variables) and their values (NULL if no session variables found)
* - op-name * - op-name
- OpName_ - OpName_
- Name of the operation. Can only be "INSERT", "UPDATE", "DELETE", "MANUAL" - Name of the operation. Can only be "INSERT", "UPDATE", "DELETE", "MANUAL"
View File
@ -10,17 +10,17 @@ Event trigger samples
Boilerplates Boilerplates
^^^^^^^^^^^^ ^^^^^^^^^^^^
Here are few boilerplates you can use to build and deploy event triggers on different cloud providers: Here are a few boilerplates you can use to build and deploy event triggers on different cloud providers:
* Source code: https://github.com/hasura/graphql-engine/tree/master/community/boilerplates/event-triggers * Source code: https://github.com/hasura/graphql-engine/tree/master/community/boilerplates/event-triggers
There are 2 types of boilerplates: There are 2 types of boilerplates:
**Echo** **Echo**
Returns the event payload with some augmented data. Helps you in understanding the event payload and parsing it. Returns the event payload with some augmented data. It helps you in understanding the event payload and parsing it.
**Mutation** **Mutation**
Makes a mutation based on the event payload. Helps in understanding database access in event trigger. Makes a mutation based on the event payload. It helps in understanding database access inside an event trigger.
Push Notifications Push Notifications
^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^
View File
@ -6,7 +6,7 @@ Using serverless functions
:depth: 1 :depth: 1
:local: :local:
You can use serverless functions along with Event Triggers to design an async business workflow without You can use serverless functions along with event triggers to design an async business workflow without
having to manage any dedicated infrastructure. having to manage any dedicated infrastructure.
As Hasura event triggers can deliver database events to any webhook, serverless functions can be perfect candidates As Hasura event triggers can deliver database events to any webhook, serverless functions can be perfect candidates
View File
@ -6,7 +6,7 @@ Quickstart with Docker
:depth: 1 :depth: 1
:local: :local:
This guide will help you get Hasura GraphQL engine and Postgres running as This guide will help you get the Hasura GraphQL engine and Postgres running as
Docker containers using Docker Compose. This is the easiest way to set up Docker containers using Docker Compose. This is the easiest way to set up
Hasura GraphQL engine on your **local environment**. Hasura GraphQL engine on your **local environment**.
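In outline (the manifest URL below is an assumption; use the install manifest you prefer), the flow is:

.. code-block:: bash

   wget https://raw.githubusercontent.com/hasura/graphql-engine/master/install-manifests/docker-compose/docker-compose.yaml
   docker-compose up -d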
View File
@ -8,7 +8,7 @@ Making your first GraphQL query
:depth: 1 :depth: 1
:local: :local:
Let's create a sample table and query from it using the Hasura console, a UI tool meant for doing exactly this: Let's create a sample table and query data from it using the Hasura console, a UI tool meant for doing exactly this:
Create a table Create a table
-------------- --------------
View File
@ -6,8 +6,8 @@ Quickstart with Heroku
:depth: 1 :depth: 1
:local: :local:
This guide will help you get Hasura GraphQL engine and Postgres running on `Heroku's free tier <https://www.heroku.com/free>`_. This guide will help you get the Hasura GraphQL engine and Postgres running on `Heroku's free tier <https://www.heroku.com/free>`_.
It is the easiest and fastest way of trying Hasura out. It is the easiest and fastest way of trying out Hasura.
If you'd like to link this to an existing database, please head to this guide instead: If you'd like to link this to an existing database, please head to this guide instead:
:doc:`Using an existing database on Heroku <../deployment/heroku/using-existing-heroku-database>`. :doc:`Using an existing database on Heroku <../deployment/heroku/using-existing-heroku-database>`.
@ -52,4 +52,4 @@ Advanced
-------- --------
This was a quickstart guide to get the Hasura GraphQL engine up and running quickly. For more detailed instructions This was a quickstart guide to get the Hasura GraphQL engine up and running quickly. For more detailed instructions
on deploying using Heroku, check out :doc:`../deployment/heroku/index`. on deploying using Heroku, check out :doc:`../deployment/heroku/index`.
View File
@ -7,9 +7,9 @@ Auditing actions on tables in Postgres
:local: :local:
Typically audit logging is added to some of the tables to comply with various certifications. Typically audit logging is added to some of the tables to comply with various certifications.
You may want to capture the user information (role and the session variables) for every change in Postgres that is done through graphql-engine. You may want to capture the user information (role and the session variables) for every change in Postgres that is done through the GraphQL engine.
For every mutation, hasura roughly executes the following transaction: For every mutation, Hasura roughly executes the following transaction:
.. code-block:: sql .. code-block:: sql
View File
@ -7,9 +7,9 @@ Guides: Visual Studio Code Setup
:local: :local:
If you use `Visual Studio code <https://code.visualstudio.com/>`_, `Apollo GraphQL plugin <https://marketplace.visualstudio.com/items?itemName=apollographql.vscode-apollo>`_ can improve your development experience significantly by enabling a lot of cool features like syntax highlighting for GraphQL, auto completion for GraphQL queries and validating your GraphQL queries against a schema or an endpoint. If you use `Visual Studio code <https://code.visualstudio.com/>`_, the `Apollo GraphQL plugin <https://marketplace.visualstudio.com/items?itemName=apollographql.vscode-apollo>`_ can improve your development experience significantly by enabling a lot of cool features like syntax highlighting for GraphQL, auto completion for GraphQL queries and validating your GraphQL queries against a schema or an endpoint.
This guide helps you configure Apollo GraphQL plugin with Hasura to make your local development easier. This guide helps you configure the Apollo GraphQL plugin with Hasura to make your local development easier.
Install the plugin Install the plugin
------------------ ------------------
@ -23,7 +23,7 @@ Launch VS Code Quick Open (Ctrl+P), paste the following command, and press enter
Configure your project Configure your project
---------------------- ----------------------
Create a file called `apollo.config.js` in the root of your project and add the following content: Create a file called ``apollo.config.js`` in the root of your project and add the following content:
.. code-block:: javascript .. code-block:: javascript
@ -41,9 +41,9 @@ Create a file called `apollo.config.js` in the root of your project and add the
Notes: Notes:
- Replace ``http://localhost:8080/v1/graphql`` with your GraphQL Endpoint - Replace ``http://localhost:8080/v1/graphql`` with your GraphQL endpoint.
- You can also add custom headers in the headers object if you wish to emulate the schema for some specific roles or tokens. - You can also add custom headers in the headers object if you wish to emulate the schema for some specific roles or tokens.
For advanced configuration, check out the `docs for the plugin <https://marketplace.visualstudio.com/items?itemName=apollographql.vscode-apollo>`_. For advanced configuration, check out the `docs for the plugin <https://marketplace.visualstudio.com/items?itemName=apollographql.vscode-apollo>`_.
Note: The `VSCode GraphQL <https://github.com/prisma/vscode-graphql>`_ plugin by Prisma does not currently work with Hasura because it has a hard dependency on batching and Hasura does not support batching as of now. Batching as a feature in GraphQL Engine is tracked `here <https://github.com/hasura/graphql-engine/issues/1812>`_. Note: The `VSCode GraphQL <https://github.com/prisma/vscode-graphql>`_ plugin by Prisma does not currently work with Hasura because it has a hard dependency on batching and Hasura does not support batching as of now. Batching as a feature in GraphQL engine is tracked `here <https://github.com/hasura/graphql-engine/issues/1812>`_.
View File
@ -1,6 +1,6 @@
.. _deploy_azure_ci_pg: .. _deploy_azure_ci_pg:
Hasura GraphQL Engine on Azure with Container Instances and Postgres Hasura GraphQL engine on Azure with Container Instances and Postgres
==================================================================== ====================================================================
.. contents:: Table of contents .. contents:: Table of contents
@ -8,7 +8,7 @@ Hasura GraphQL Engine on Azure with Container Instances and Postgres
:depth: 1 :depth: 1
:local: :local:
This guide talks about how to deploy Hasura GraphQL Engine on `Azure This guide talks about how to deploy the Hasura GraphQL engine on `Azure
<https://azure.microsoft.com>`__ using `Container Instances <https://azure.microsoft.com>`__ using `Container Instances
<https://azure.microsoft.com/en-us/services/container-instances/>`__ with `Azure <https://azure.microsoft.com/en-us/services/container-instances/>`__ with `Azure
Database for PostgreSQL server <https://azure.microsoft.com/en-us/services/postgresql/>`__. Database for PostgreSQL server <https://azure.microsoft.com/en-us/services/postgresql/>`__.
@ -31,7 +31,7 @@ All resources mentioned in this guide can be deployed using the one-click button
:target: https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fhasura%2fgraphql-engine%2fmaster%2finstall-manifests%2fazure-container-with-pg%2fazuredeploy.json :target: https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fhasura%2fgraphql-engine%2fmaster%2finstall-manifests%2fazure-container-with-pg%2fazuredeploy.json
(This button takes you to the Azure Portal, you might want to :kbd:`Ctrl+Click` to (This button takes you to the Azure Portal, you might want to :kbd:`Ctrl+Click` to
open it in a new tab. Read more about this Resource Manager Template `here <https://github.com/hasura/graphql-engine/tree/master/install-manifests/azure-container-with-pg>`__.) open it in a new tab. Read more about this Resource Manager Template `here <https://github.com/hasura/graphql-engine/tree/master/install-manifests/azure-container-with-pg>`__).
.. tab:: With an existing Postgres Server .. tab:: With an existing Postgres Server
@ -42,19 +42,19 @@ All resources mentioned in this guide can be deployed using the one-click button
:target: https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fhasura%2fgraphql-engine%2fmaster%2finstall-manifests%2fazure-container%2fazuredeploy.json :target: https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fhasura%2fgraphql-engine%2fmaster%2finstall-manifests%2fazure-container%2fazuredeploy.json
(This button takes you to the Azure Portal, you might want to :kbd:`Ctrl+Click` to (This button takes you to the Azure Portal, you might want to :kbd:`Ctrl+Click` to
open it in a new tab. Read more about this Resource Manager Template `here <https://github.com/hasura/graphql-engine/tree/master/install-manifests/azure-container>`__.) open it in a new tab. Read more about this Resource Manager Template `here <https://github.com/hasura/graphql-engine/tree/master/install-manifests/azure-container>`__).
Pre-requisites Pre-requisites
-------------- --------------
- Valid Azure Subscription with billing enabled or credits. (`click - Valid Azure Subscription with billing enabled or credits (`click
here <https://azure.microsoft.com/en-us/free/>`__ for a free trial) here <https://azure.microsoft.com/en-us/free/>`__ for a free trial).
- `Azure CLI <https://docs.microsoft.com/en-us/cli/azure/install-azure-cli>`_. - `Azure CLI <https://docs.microsoft.com/en-us/cli/azure/install-azure-cli>`_.
The actions mentioned below can be executed using Azure Portal and Azure CLI. But, The actions mentioned below can be executed using the Azure Portal and the Azure CLI. But,
for the sake of simplicity in documentation, we are going to use Azure CLI, so for the sake of simplicity in documentation, we are going to use Azure CLI, so
that commands can be easily copy pasted and executed. that commands can be easily copy-pasted and executed.
Once the CLI is installed, login to your Azure account: Once the CLI is installed, login to your Azure account:
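Logging in is a single command:

.. code-block:: bash

   az login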
@ -97,8 +97,8 @@ Once the resource group is created, we create a Postgres server instance:
Choose a unique name for ``<server_name>``. Also choose a strong password for Choose a unique name for ``<server_name>``. Also choose a strong password for
``<server_admin_password>``, including uppercase, lowercase and numeric characters. ``<server_admin_password>``, including uppercase, lowercase and numeric characters.
This will be required later to connect to the database. This will be required later to connect to the database
(Make sure you escape the special characters depending on your shell.) (make sure you escape the special characters depending on your shell).
Note down the hostname. It will be shown as below in the output: Note down the hostname. It will be shown as below in the output:
@ -178,7 +178,7 @@ Open the Hasura Console
----------------------- -----------------------
That's it! Once the deployment is complete, navigate to the container instance's That's it! Once the deployment is complete, navigate to the container instance's
IP or hostname to open Hasura console: IP or hostname to open the Hasura console:
.. code-block:: bash .. code-block:: bash
@ -187,10 +187,10 @@ IP or hostname to open Hasura console:
--query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" \ --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" \
--out table --out table
Output will contain the FQDN in the format The output will contain the FQDN in the format
``<dns-name-label>.westus.azurecontainer.io``. ``<dns-name-label>.westus.azurecontainer.io``.
Visit the following URL for the Hasura Console: Visit the following URL for the Hasura console:
.. code:: .. code::
@ -202,7 +202,7 @@ Replace ``<dns-name-label>`` with the label given earlier.
:class: no-shadow :class: no-shadow
:alt: Hasura console :alt: Hasura console
You can create tables and test your GraphQL queries here. Checkout :ref:`Making You can create tables and test your GraphQL queries here. Check out :ref:`Making
your first GraphQL Query <first_graphql_query>` for a detailed guide. your first GraphQL Query <first_graphql_query>` for a detailed guide.
Troubleshooting Troubleshooting
@ -221,7 +221,7 @@ the database allow connection for Azure services.
Checking logs Checking logs
^^^^^^^^^^^^^ ^^^^^^^^^^^^^
If the console is not loading, you might want to check logs and see if something If the console is not loading, you might want to check the logs and see if something
is wrong: is wrong:
.. code-block:: bash .. code-block:: bash
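   # the resource group and container group names are placeholders; use the ones from your deployment
   az container logs --resource-group hasura --name <container-group-name>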
View File
@ -1,4 +1,4 @@
Hasura GraphQL Engine One-click App on DigitalOcean Marketplace Hasura GraphQL engine One-click App on DigitalOcean Marketplace
=============================================================== ===============================================================
.. contents:: Table of contents .. contents:: Table of contents
@ -6,7 +6,7 @@ Hasura GraphQL Engine One-click App on DigitalOcean Marketplace
:depth: 1 :depth: 1
:local: :local:
Hasura GraphQL Engine is available as a One-click app on the DigitalOcean The Hasura GraphQL engine is available as a One-click app on the DigitalOcean
Marketplace. It is packed with a `Postgres <https://www.postgresql.org/>`__ Marketplace. It is packed with a `Postgres <https://www.postgresql.org/>`__
database and `Caddy <https://caddyserver.com/>`__ webserver for easy and database and `Caddy <https://caddyserver.com/>`__ webserver for easy and
automatic HTTPS using `Let's Encrypt <https://letsencrypt.org/>`__. automatic HTTPS using `Let's Encrypt <https://letsencrypt.org/>`__.
@ -17,8 +17,8 @@ Quickstart
1. Create a Hasura One-click Droplet 1. Create a Hasura One-click Droplet
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Click the button below to create a new Hasura GraphQL Engine Droplet through Click the button below to create a new Hasura GraphQL engine Droplet through
DigitalOcean Marketplace. For first time users, the link also contains a the DigitalOcean Marketplace. For first time users, the link also contains a
referral code with gives you $100 over days. A $5 droplet is good enough to referral code with gives you $100 over days. A $5 droplet is good enough to
support most workloads. (``Ctrl+Click`` to open in a new tab) support most workloads. (``Ctrl+Click`` to open in a new tab)
@ -31,7 +31,7 @@ support most workloads. (``Ctrl+Click`` to open in a new tab)
2. Open console 2. Open console
~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~
Once Hasura GraphQL Engine Droplet is ready, you can visit the Droplet IP to Once the Hasura GraphQL engine Droplet is ready, you can visit the Droplet IP to
open the Hasura console, where you can create tables, explore GraphQL APIs etc. open the Hasura console, where you can create tables, explore GraphQL APIs etc.
Note that it might take 1 or 2 minutes for everything to start running. Note that it might take 1 or 2 minutes for everything to start running.
@ -42,7 +42,7 @@ The Hasura console will be at:
http://<your_droplet_ip>/console http://<your_droplet_ip>/console
The GraphQL Endpoint will be: The GraphQL endpoint will be:
.. code-block:: bash .. code-block:: bash
@ -119,7 +119,7 @@ to your database using GraphQL. When deploying to production, you should secure
the endpoint by adding an admin secret key and then setting up permission rules on the endpoint by adding an admin secret key and then setting up permission rules on
tables. tables.
To add an admin secret key, follow the steps given below: To add an admin secret key, follow the steps described below:
1. Connect to the Droplet via SSH: 1. Connect to the Droplet via SSH:
@ -128,7 +128,7 @@ To add an admin secret key, follow the steps given below:
ssh root@<your_droplet_ip> ssh root@<your_droplet_ip>
2. Goto ``/etc/hasura`` directory: 2. Go to the ``/etc/hasura`` directory:
.. code-block:: bash .. code-block:: bash
@ -166,7 +166,7 @@ following header:
X-Hasura-Admin-Secret: myadminsecretkey X-Hasura-Admin-Secret: myadminsecretkey
Adding a domain & Enabling HTTPS Adding a domain & enabling HTTPS
-------------------------------- --------------------------------
If you own a domain, you can enable HTTPS on this Droplet by mapping the domain If you own a domain, you can enable HTTPS on this Droplet by mapping the domain
@ -181,14 +181,14 @@ HTTP/2 web server with automatic HTTPS using Let's Encrypt.
ssh root@<your_droplet_ip> ssh root@<your_droplet_ip>
3. Goto ``/etc/hasura`` directory: 3. Go to the ``/etc/hasura`` directory:
.. code-block:: bash .. code-block:: bash
cd /etc/hasura cd /etc/hasura
4. Edit ``Caddyfile`` and change ``:80`` to your domain: 4. Edit the ``Caddyfile`` and change ``:80`` to your domain:
.. code-block:: bash .. code-block:: bash
@ -212,13 +212,13 @@ HTTP/2 web server with automatic HTTPS using Let's Encrypt.
docker-compose restart caddy docker-compose restart caddy
Visit ``https://<your_domain>/console`` to visit the Hasura console. Go to ``https://<your_domain>/console`` to visit the Hasura console.
Updating to the latest version Updating to the latest version
------------------------------ ------------------------------
When a new version of GraphQL Engine is released, you can upgrade to it by just When a new version of the GraphQL engine is released, you can upgrade to it by just
changing the version tag in docker-compose.yaml. You can find the latest changing the version tag in ``docker-compose.yaml``. You can find the latest
releases on the `GitHub releases page releases on the `GitHub releases page
<https://github.com/hasura/graphql-engine/releases>`__. <https://github.com/hasura/graphql-engine/releases>`__.
@ -229,7 +229,7 @@ releases on the `GitHub releases page
ssh root@<your_droplet_ip> ssh root@<your_droplet_ip>
2. Goto ``/etc/hasura`` directory: 2. Go to the ``/etc/hasura`` directory:
.. code-block:: bash .. code-block:: bash
@ -260,11 +260,11 @@ releases on the `GitHub releases page
Using DigitalOcean Managed Postgres Database Using DigitalOcean Managed Postgres Database
-------------------------------------------- --------------------------------------------
1. Create a new Postgres Database from DigitalOcean Console, preferably in the 1. Create a new Postgres Database from the DigitalOcean Console, preferably in the
same region as the Droplet. same region as the Droplet.
2. Once the database is created, under the "Overview" tab, from the "Connection 2. Once the database is created, under the "Overview" tab, from the "Connection
Details" section, choose "Connection string" from the dropdown. Details" section, choose "Connection string" from the dropdown.
3. "Connection string" is the "Database URL" - copy it. 3. "Connection string" is the "Database URL". Copy it.
4. Connect to the Droplet via SSH: 4. Connect to the Droplet via SSH:
.. code-block:: bash .. code-block:: bash
@ -272,7 +272,7 @@ Using DigitalOcean Managed Postgres Database
ssh root@<your_droplet_ip> ssh root@<your_droplet_ip>
5. Goto ``/etc/hasura`` directory: 5. Go to the ``/etc/hasura`` directory:
.. code-block:: bash .. code-block:: bash
@ -291,15 +291,15 @@ Using DigitalOcean Managed Postgres Database
# type ESC followed by :wq to save and quit # type ESC followed by :wq to save and quit
Similarly, database URL can be changed to connect to any other Postgres Similarly, the database URL can be changed to connect to any other Postgres
database. database.
.. note:: .. note::
If you're using Hasura with a restricted database user, make sure you go If you're using Hasura with a restricted database user, make sure you go
through :doc:`Postgres permissions <../../deployment/postgres-permissions>` through :doc:`Postgres permissions <../../deployment/postgres-permissions>`
to configure all required permissions. (Not applicable with the default to configure all required permissions (not applicable with the default
connection string with DO Managed Postgres) connection string with DO Managed Postgres).
Logs Logs
---- ----
@ -312,7 +312,7 @@ Logs
ssh root@<your_droplet_ip> ssh root@<your_droplet_ip>
2. Goto ``/etc/hasura`` directory: 2. Go to the ``/etc/hasura`` directory:
.. code-block:: bash .. code-block:: bash
View File
@ -1,4 +1,4 @@
Hasura GraphQL Engine on Google Cloud Platform with Kubernetes Engine and Cloud SQL Hasura GraphQL engine on Google Cloud Platform with Kubernetes engine and Cloud SQL
=================================================================================== ===================================================================================
.. contents:: Table of contents .. contents:: Table of contents
@ -6,8 +6,8 @@ Hasura GraphQL Engine on Google Cloud Platform with Kubernetes Engine and Cloud
:depth: 1 :depth: 1
:local: :local:
This is a guide about deploying Hasura GraphQL Engine on `Google Cloud Platform This is a guide on deploying the Hasura GraphQL engine on the `Google Cloud Platform
<https://cloud.google.com/>`__ using `Kubernetes Engine <https://cloud.google.com/>`__ using `Kubernetes engine
<https://cloud.google.com/kubernetes-engine/>`__ to run Hasura and PostgreSQL <https://cloud.google.com/kubernetes-engine/>`__ to run Hasura and PostgreSQL
backed by `Cloud SQL <https://cloud.google.com/sql/>`__. backed by `Cloud SQL <https://cloud.google.com/sql/>`__.
@ -19,9 +19,9 @@ Prerequisites
- ``gcloud`` CLI (`install <https://cloud.google.com/sdk/>`__) - ``gcloud`` CLI (`install <https://cloud.google.com/sdk/>`__)
- ``kubectl`` CLI (`install <https://kubernetes.io/docs/tasks/tools/install-kubectl/>`__) - ``kubectl`` CLI (`install <https://kubernetes.io/docs/tasks/tools/install-kubectl/>`__)
The actions mentioned below can be done using the Google Cloud Console and The actions mentioned below can be done using the Google Cloud Console and the
``gcloud`` CLI. But, for the sake of simplicity in documentation, we are going ``gcloud`` CLI. But, for the sake of simplicity in documentation, we are going
to use ``gcloud`` CLI, so that commands can be easily copy pasted and executed. to use ``gcloud`` CLI, so that commands can be easily copy-pasted and executed.
Once the CLI is installed, initialize the SDK: Once the CLI is installed, initialize the SDK:
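Initializing the SDK is a single command:

.. code-block:: bash

   gcloud init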
@ -42,7 +42,7 @@ project called ``hasura`` for this guide.
Create a Google Cloud SQL Postgres instance Create a Google Cloud SQL Postgres instance
------------------------------------------- -------------------------------------------
Create a Cloud SQL Postgres instance called ``hasura-postgres`` at the Create a Cloud SQL Postgres instance called ``hasura-postgres`` in the
``asia-south1`` region. ``asia-south1`` region.
.. code-block:: bash .. code-block:: bash
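   # a sketch only; the CPU/memory sizing and root password flag are illustrative
   gcloud sql instances create hasura-postgres --database-version=POSTGRES_9_6 \
     --cpu=1 --memory=3840MiB --region=asia-south1 --root-password=[PASSWORD] --project hasura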
@ -61,8 +61,8 @@ Make sure you substitute ``[PASSWORD]`` with a strong password.
Create a Kubernetes Cluster Create a Kubernetes Cluster
--------------------------- ---------------------------
Before creating the cluster, we need to enable the Kubernetes Engine API. Visit Before creating the cluster, we need to enable the Kubernetes engine API. Visit
the following link in a browser to enable the API. Replace ``hasura`` at the end the below link in a browser to enable the API. Replace ``hasura`` at the end
of the URL with your project name, in case you're not using the same name. Note of the URL with your project name, in case you're not using the same name. Note
that you will need a billing account added to the project to enable the API. that you will need a billing account added to the project to enable the API.
@ -78,12 +78,12 @@ in the ``asia-south1-a`` zone with 1 node.
gcloud container clusters create hasura-k8s --zone asia-south1-a \ gcloud container clusters create hasura-k8s --zone asia-south1-a \
--num-nodes 1 --project hasura --num-nodes 1 --project hasura
Setup Cloud SQL Proxy Credentials Set up Cloud SQL Proxy Credentials
--------------------------------- ----------------------------------
Inorder to connect to the Cloud SQL instance, we need to setup a proxy that will In order to connect to the Cloud SQL instance, we need to set up a proxy that will
forward connections from Hasura to the database instance. For that purpose, the forward connections from Hasura to the database instance. For that purpose, the
credentials to access the instance needs to be added to the cluster. credentials to access the instance need to be added to the cluster.
Create a service account and download the JSON key file by following `this guide Create a service account and download the JSON key file by following `this guide
<https://cloud.google.com/sql/docs/postgres/sql-proxy#create-service-account>`__. <https://cloud.google.com/sql/docs/postgres/sql-proxy#create-service-account>`__.
@ -99,7 +99,7 @@ Or if you're overwhelmed with that guide, ensure the following:
4. Click ``Create Key`` to download the JSON file. 4. Click ``Create Key`` to download the JSON file.
Create a Kubernetes secret with this JSON key file; replace Create a Kubernetes secret with this JSON key file; replace
``[JSON_KEY_FILE_PATH]`` with the filename including complete path of the ``[JSON_KEY_FILE_PATH]`` with the filename including the complete path of the
downloaded JSON key file. downloaded JSON key file.
.. code-block:: bash .. code-block:: bash
@ -107,7 +107,7 @@ download JSON key file.
kubectl create secret generic cloudsql-instance-credentials \ kubectl create secret generic cloudsql-instance-credentials \
--from-file=credentials.json=[JSON_KEY_FILE_PATH] --from-file=credentials.json=[JSON_KEY_FILE_PATH]
Create another secret with the database username and password (Use the Create another secret with the database username and password (use the
``[PASSWORD]`` used earlier): ``[PASSWORD]`` used earlier):
.. code-block:: bash .. code-block:: bash
@ -115,8 +115,8 @@ Create another secret with the database username and password (Use the
kubectl create secret generic cloudsql-db-credentials \ kubectl create secret generic cloudsql-db-credentials \
--from-literal=username=postgres --from-literal=password=[PASSWORD] --from-literal=username=postgres --from-literal=password=[PASSWORD]
Deploy Hasura GraphQL Engine Deploy the Hasura GraphQL engine
---------------------------- --------------------------------
Download the ``deployment.yaml`` file: Download the ``deployment.yaml`` file:
@ -148,13 +148,13 @@ Ensure the pods are running:
kubectl get pods kubectl get pods
If there are any errors, check the logs for GraphQL Engine: If there are any errors, check the logs of the GraphQL engine:
.. code-block:: bash .. code-block:: bash
kubectl logs deployment/hasura -c graphql-engine kubectl logs deployment/hasura -c graphql-engine
Expose GraphQL Engine Expose GraphQL engine
--------------------- ---------------------
Now that we have Hasura running, let's expose it on an IP using a LoadBalancer. Now that we have Hasura running, let's expose it on an IP using a LoadBalancer.
@ -166,24 +166,24 @@ Now that we have Hasura running, let's expose it on an IP using a LoadBalancer.
--type LoadBalancer --type LoadBalancer
Open Hasura Console Open Hasura console
------------------- -------------------
Wait for the external IP to be allocated, check status using the following Wait for the external IP to be allocated, check the status using the
command. It usually takes a couple of minutes. command below. It usually takes a couple of minutes.
.. code-block:: bash .. code-block:: bash
kubectl get service kubectl get service
Once the IP is allocated, visit the IP in a browser and it should open the Once the IP is allocated, visit the IP in a browser and it should open the
Console. console.
Troubleshooting Troubleshooting
--------------- ---------------
Check the status for pods to see if they are running. If there are any errors, Check the status for pods to see if they are running. If there are any errors,
check the logs for GraphQL Engine: check the logs of the GraphQL engine:
.. code-block:: bash .. code-block:: bash
@ -195,13 +195,13 @@ You might also want to check the logs for cloudsql-proxy:
kubectl logs deployment/hasura -c cloudsql-proxy kubectl logs deployment/hasura -c cloudsql-proxy
The username password used by Hasura to connect to the database comes from a The username and password used by Hasura to connect to the database comes from a
Kubernetes secret object ``cloudsql-db-credentials`` that we created earlier. Kubernetes secret object ``cloudsql-db-credentials`` that we created earlier.
Tearing down Tearing down
------------ ------------
To clean-up the resources created, just delete the Google Cloud Project: To clean up the resources created, just delete the Google Cloud Project:
.. code-block:: bash .. code-block:: bash
View File
@ -8,13 +8,13 @@ Guides: Deployment
- :doc:`Digital Ocean One-click App on Marketplace <digital-ocean-one-click>` - :doc:`Digital Ocean One-click App on Marketplace <digital-ocean-one-click>`
- :doc:`Azure Container Instances with Postgres <azure-container-instances-postgres>` - :doc:`Azure Container Instances with Postgres <azure-container-instances-postgres>`
- :doc:`Google Cloud Platform with Kubernetes Engine and Cloud SQL <google-kubernetes-engine-cloud-sql>` - :doc:`Google Cloud Platform with Kubernetes engine and Cloud SQL <google-kubernetes-engine-cloud-sql>`
- `Blog: Instant GraphQL on AWS RDS <https://blog.hasura.io/instant-graphql-on-aws-rds-1edfb85b5985>`__ - `Blog: Instant GraphQL on AWS RDS <https://blog.hasura.io/instant-graphql-on-aws-rds-1edfb85b5985>`__
.. note:: .. note::
The above are guides to deploy Hasura GraphQL engine on some specific platforms. The above are guides to deploy the Hasura GraphQL engine on some specific platforms.
For more generic guides, see :doc:`../../deployment/index` For more generic guides, see :doc:`../../deployment/index`.
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
@ -23,5 +23,5 @@ Guides: Deployment
DigitalOcean One-click App on Marketplace <digital-ocean-one-click> DigitalOcean One-click App on Marketplace <digital-ocean-one-click>
Azure Container Instances with Postgres <azure-container-instances-postgres> Azure Container Instances with Postgres <azure-container-instances-postgres>
Google Cloud Platform with Kubernetes Engine and Cloud SQL <google-kubernetes-engine-cloud-sql> Google Cloud Platform with Kubernetes engine and Cloud SQL <google-kubernetes-engine-cloud-sql>
View File
@ -33,7 +33,7 @@ Once these packages are installed, import them as follows in the file where you
import { InMemoryCache } from 'apollo-cache-inmemory'; import { InMemoryCache } from 'apollo-cache-inmemory';
below these imports initialise your client to fetch subscriptions along with query and mutation. Below these imports initialise your client to fetch subscriptions along with query and mutation.
.. code-block:: js .. code-block:: js
@ -131,7 +131,7 @@ care of when switching to subscriptions.
.. admonition:: Caveat .. admonition:: Caveat
If all the 3 changes are not made, **it works like a query instead of a subscription** If all the 3 changes are not made, **it works like a query instead of a subscription**
since, the code that sets up apollo-link doesn't work. since the code that sets up apollo-link doesn't work.
.. code-block:: js .. code-block:: js
View File
@ -8,15 +8,15 @@ Auth0 JWT Integration with Hasura GraphQL engine
:depth: 1 :depth: 1
:local: :local:
In this guide, we will walk-through on how to set up Auth0 to work with Hasura GraphQL engine. In this guide, we will walk through how to set up Auth0 to work with the Hasura GraphQL engine.
Create an Auth0 Application Create an Auth0 Application
^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Navigate to the `Auth0 dashboard <https://manage.auth0.com>`__ - Navigate to the `Auth0 dashboard <https://manage.auth0.com>`__.
- Click on the ``Applications`` menu option on the left and then click the ``+ Create Application`` button. - Click on the ``Applications`` menu option on the left and then click the ``+ Create Application`` button.
- In the ``Create Application`` window, set a name for your application and select ``Single Page Web Applications``. - In the ``Create Application`` window, set a name for your application and select ``Single Page Web Applications``
(Assuming your application is React/Angular/Vue etc). (assuming your application is React/Angular/Vue etc).
.. thumbnail:: ../../../../img/graphql/manual/guides/create-client-popup.png .. thumbnail:: ../../../../img/graphql/manual/guides/create-client-popup.png
@ -24,7 +24,7 @@ Configure Auth0 Rules & Callback URLs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In the settings of the application, add appropriate (e.g: http://localhost:3000/callback) URLs as ``Allowed Callback In the settings of the application, add appropriate (e.g: http://localhost:3000/callback) URLs as ``Allowed Callback
URLs`` and ``Allowed Web Origins``. Add domain specific URLs as well for production apps. (e.g: https://myapp.com/callback) URLs`` and ``Allowed Web Origins``. Add domain specific URLs as well for production apps (e.g: https://myapp.com/callback).
In the dashboard, navigate to ``Rules``. Add the following rules to add our custom JWT claims: In the dashboard, navigate to ``Rules``. Add the following rules to add our custom JWT claims:
@ -48,12 +48,12 @@ In the dashboard, navigate to ``Rules``. Add the following rules to add our cust
Test auth0 login and generate sample JWTs for testing Test auth0 login and generate sample JWTs for testing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You don't need to integrate your UI with auth0 for testing. You call follow the steps below: You don't need to integrate your UI with auth0 for testing. You can follow the steps below:
1. Login to your auth0 app by heading to this URL: ``https://<auth0-domain>.auth0.com/login?client=<client_id>&protocol=oauth2&response_type=token%20id_token&redirect_uri=<callback_uri>&scope=openid%20profile`` 1. Login to your auth0 app by heading to this URL: ``https://<auth0-domain>.auth0.com/login?client=<client_id>&protocol=oauth2&response_type=token%20id_token&redirect_uri=<callback_uri>&scope=openid%20profile``.
- Replace ``<auth0-domain>`` with your auth0 app domain. - Replace ``<auth0-domain>`` with your auth0 app domain.
- Replace ``<client-id>`` with your auth0 app client id. Get your client id from app settings page on the auth0 dashboard. - Replace ``<client-id>`` with your auth0 app client id. Get your client id from the app settings page on the auth0 dashboard.
- Replace ``callback_uri`` with ``https://localhost:3000/callback`` or the URL you entered above. Note that this URL doesn't really need to exist while you are testing. - Replace ``callback_uri`` with ``https://localhost:3000/callback`` or the URL you entered above. Note that this URL doesn't really need to exist while you are testing.
2. Once you head to this login page you should see the auth0 login page that you can login with. 2. Once you head to this login page you should see the auth0 login page that you can login with.
@ -74,7 +74,7 @@ You don't need to integrate your UI with auth0 for testing. You call follow the
:class: no-shadow :class: no-shadow
:alt: JWT from id_token query param :alt: JWT from id_token query param
5. To test this JWT, and to see if all the Hasura claims are added as per the sections above, lets test this out with `jwt.io <https://jwt.io>`__! 5. To test this JWT, and to see if all the Hasura claims are added as per the sections above, let's test this out with `jwt.io <https://jwt.io>`__!
.. image:: https://graphql-engine-cdn.hasura.io/img/jwt-io-debug.png .. image:: https://graphql-engine-cdn.hasura.io/img/jwt-io-debug.png
:class: no-shadow :class: no-shadow
@ -132,7 +132,7 @@ An easier way to generate the above config is to use the following UI:
https://hasura.io/jwt-config. https://hasura.io/jwt-config.
The generated config can be used in env ``HASURA_GRAPHQL_JWT_SECRET`` or ``--jwt-secret`` flag. The generated config can be used in env ``HASURA_GRAPHQL_JWT_SECRET`` or ``--jwt-secret`` flag.
The config generated from this page can be directly pasted in yaml files and command line arguments as it takes care of The config generated from this page can be directly pasted in ``yaml`` files and command line arguments as it takes care of
escaping new lines. escaping new lines.
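For instance, a minimal sketch of passing a generated config to the server started via Docker (the database URL, Auth0 domain and image tag below are illustrative placeholders, not values from this guide):

.. code-block:: bash

   # illustrative values only; use the config generated at https://hasura.io/jwt-config
   docker run -d -p 8080:8080 \
     -e HASURA_GRAPHQL_DATABASE_URL=postgres://user:password@host:5432/dbname \
     -e HASURA_GRAPHQL_JWT_SECRET='{"type":"RS256","jwk_url":"https://<auth0-domain>.auth0.com/.well-known/jwks.json"}' \
     hasura/graphql-engine:latest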
.. thumbnail:: ../../../../img/graphql/manual/auth/jwt-config-generated.png .. thumbnail:: ../../../../img/graphql/manual/auth/jwt-config-generated.png
@ -142,13 +142,13 @@ escaping new lines.
Add Access Control Rules via Hasura Console Add Access Control Rules via Hasura Console
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Auth0 is configured and ready to be used in the application. You can now setup access control rules that Auth0 is configured and ready to be used in the application. You can now set up access control rules that
will automatically get applied whenever a client makes a graphql query with the Auth0 token. will automatically get applied whenever a client makes a graphql query with the Auth0 token.
Refer :doc:`../../auth/authorization/basics` for more information. Refer to :doc:`../../auth/authorization/basics` for more information.
To test this out, add an access control rule that uses ``x-hasura-user-id`` for the role ``user``. To test this out, add an access control rule that uses ``x-hasura-user-id`` for the role ``user``.
Then make a GraphQL query or a mutation, with the Authorization token from the :ref:`previous step <test-auth0>` Then make a GraphQL query or a mutation, with the authorization token from the :ref:`previous step <test-auth0>`
where we generated an Auth0 token. where we generated an Auth0 token.
.. image:: https://graphql-engine-cdn.hasura.io/img/jwt-header-auth-hasura.png .. image:: https://graphql-engine-cdn.hasura.io/img/jwt-header-auth-hasura.png
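A sketch of such a request from the command line (the endpoint, table name and token below are illustrative placeholders):

.. code-block:: bash

   # 'article' stands in for any table you have tracked and permissioned
   curl https://my-hasura-instance.herokuapp.com/v1/graphql \
     -H 'Content-Type: application/json' \
     -H 'Authorization: Bearer <auth0-id-token>' \
     -d '{"query": "query { article { id title } }"}'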


@ -8,10 +8,10 @@ Guides: Integration/migration tutorials
- :doc:`auth0-jwt` - :doc:`auth0-jwt`
- :doc:`apollo-subscriptions` - :doc:`apollo-subscriptions`
- `Blog: Move from firebase to realtime GraphQL on Postgres <https://blog.hasura.io/firebase2graphql-moving-from-firebase-to-realtime-graphql-on-postgres-4d36cb7f4eaf>`__. - `Blog: Move from firebase to realtime GraphQL on Postgres <https://blog.hasura.io/firebase2graphql-moving-from-firebase-to-realtime-graphql-on-postgres-4d36cb7f4eaf>`__
- `Blog: Create a Gatsby site using GraphQL on Postgres <https://blog.hasura.io/create-gatsby-sites-using-graphql-on-postgres-603b5dd1e516>`__. - `Blog: Create a Gatsby site using GraphQL on Postgres <https://blog.hasura.io/create-gatsby-sites-using-graphql-on-postgres-603b5dd1e516>`__
- `Blog: Instant GraphQL on AWS RDS <https://blog.hasura.io/instant-graphql-on-aws-rds-1edfb85b5985>`__. - `Blog: Instant GraphQL on AWS RDS <https://blog.hasura.io/instant-graphql-on-aws-rds-1edfb85b5985>`__
- `Blog: Using TimescaleDB with Hasura GraphQL <https://blog.hasura.io/using-timescaledb-with-hasura-graphql-d05f030c4b10>`__. - `Blog: Using TimescaleDB with Hasura GraphQL <https://blog.hasura.io/using-timescaledb-with-hasura-graphql-d05f030c4b10>`__
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1


@ -6,8 +6,8 @@ Guides: Integrating with monitoring frameworks
:depth: 1 :depth: 1
:local: :local:
The following demonstrate integrating Hasura GraphQL engine logs with some external The following posts demonstrate integrating the Hasura GraphQL engine logs with some external
monitoring frameworks: monitoring frameworks:
- `Blog: GraphQL Observability with Hasura GraphQL Engine and Honeycomb <https://blog.hasura.io/graphql-observability-with-hasura-graphql-engine-and-honeycomb-ee0a1a836c41>`__ - `Blog: GraphQL Observability with Hasura GraphQL engine and Honeycomb <https://blog.hasura.io/graphql-observability-with-hasura-graphql-engine-and-honeycomb-ee0a1a836c41>`__
- `Blog: Uptime Monitoring for Hasura GraphQL Engine with DataDog on GKE <https://blog.hasura.io/uptime-monitoring-for-hasura-graphql-engine-with-datadog-on-gke-4faff5832e7f>`__ - `Blog: Uptime Monitoring for Hasura GraphQL engine with DataDog on GKE <https://blog.hasura.io/uptime-monitoring-for-hasura-graphql-engine-with-datadog-on-gke-4faff5832e7f>`__


@ -62,4 +62,4 @@ Videos
Boilerplates Boilerplates
------------ ------------
For boilerplates, please check our community wiki on `Github <https://github.com/hasura/graphql-engine/wiki/Community#tools-and-boilerplates>`__. For boilerplates, please check our community wiki on `Github <https://github.com/hasura/graphql-engine/wiki/Community#tools-and-boilerplates>`__.


@ -8,7 +8,7 @@ Telemetry Guide/FAQ
:depth: 1 :depth: 1
:local: :local:
The Hasura GraphQL Engine collects anonymous telemetry data that helps the The Hasura GraphQL engine collects anonymous telemetry data that helps the
Hasura team in understanding how the product is being used and in deciding Hasura team in understanding how the product is being used and in deciding
what to focus on next. what to focus on next.
@ -32,7 +32,7 @@ Server
The server periodically sends the number of tables, views, relationships, The server periodically sends the number of tables, views, relationships,
permission rules, custom SQL functions, event triggers and remote schemas permission rules, custom SQL functions, event triggers and remote schemas
tracked by GraphQL Engine, along with randomly generated UUID per database and tracked by the GraphQL engine, along with randomly generated UUIDs per database and
per instance. The name of the current continuous integration environment per instance. The name of the current continuous integration environment
(if any) and the server version is also sent. (if any) and the server version is also sent.
@ -90,7 +90,7 @@ Here is a sample:
"cli_uuid": null "cli_uuid": null
} }
Please note, that ``TABLE_NAME`` and ``SCHEMA_NAME`` are not placeholders. Please note that ``TABLE_NAME`` and ``SCHEMA_NAME`` are not placeholders.
The actual names of the tables, schemas, remote-schemas and event-triggers that The actual names of the tables, schemas, remote-schemas and event-triggers that
are a part of the URL are not sent. are a part of the URL are not sent.
@ -100,7 +100,7 @@ CLI
The CLI collects each execution event, along with a randomly generated UUID. The CLI collects each execution event, along with a randomly generated UUID.
The execution event contains the command name, timestamp and whether the The execution event contains the command name, timestamp and whether the
execution resulted in an error or not. **Error messages, arguments and flags execution resulted in an error or not. **Error messages, arguments and flags
are not recorded**. CLI also collects the server version and UUID that it are not recorded**. The CLI also collects the server version and UUID that it
is talking to. The operating system platform and architecture is also is talking to. The operating system platform and architecture is also
noted along with the CLI version. noted along with the CLI version.
@ -133,7 +133,7 @@ The data is sent to Hasura's servers addressed by ``telemetry.hasura.io``.
How do I turn off telemetry (opt-out)? How do I turn off telemetry (opt-out)?
-------------------------------------- --------------------------------------
You can turn off telemetry on the server and on the console hosted by server You can turn off telemetry on the server and on the console hosted by the server
by setting the following environment variable on the server or by using by setting the following environment variable on the server or by using
the flag ``--enable-telemetry=false``: the flag ``--enable-telemetry=false``:
@ -141,7 +141,7 @@ the flag ``--enable-telemetry=false``:
HASURA_GRAPHQL_ENABLE_TELEMETRY=false HASURA_GRAPHQL_ENABLE_TELEMETRY=false
In order to turn off telemetry on CLI and on the console served by CLI, In order to turn off telemetry on the CLI and on the console served by the CLI,
you can set the same environment variable on the machine running the CLI. you can set the same environment variable on the machine running the CLI.
You can also set ``"enable_telemetry": false`` in the JSON file created You can also set ``"enable_telemetry": false`` in the JSON file created
by the CLI at ``~/.hasura/config.json`` to persist the setting. by the CLI at ``~/.hasura/config.json`` to persist the setting.
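As a sketch, the two opt-outs can look like this (the ``jq`` invocation is just one illustrative way to edit the JSON file):

.. code-block:: bash

   # server and the console hosted by the server
   export HASURA_GRAPHQL_ENABLE_TELEMETRY=false

   # CLI and the console served by the CLI: persist the setting in ~/.hasura/config.json
   jq '.enable_telemetry = false' ~/.hasura/config.json > /tmp/config.json \
     && mv /tmp/config.json ~/.hasura/config.json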


@ -3,7 +3,7 @@
Hasura CLI: hasura Hasura CLI: hasura
------------------ ------------------
Hasura GraphQL Engine command line tool Hasura GraphQL engine command line tool.
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
@ -37,10 +37,10 @@ SEE ALSO
* :ref:`hasura completion <hasura_completion>` - Generate auto completion code * :ref:`hasura completion <hasura_completion>` - Generate auto completion code
* :ref:`hasura console <hasura_console>` - Open console to manage database and try out APIs * :ref:`hasura console <hasura_console>` - Open console to manage database and try out APIs
* :ref:`hasura init <hasura_init>` - Initialize directory for Hasura GraphQL Engine migrations * :ref:`hasura init <hasura_init>` - Initialize directory for Hasura GraphQL engine migrations
* :ref:`hasura metadata <hasura_metadata>` - Manage Hasura GraphQL Engine metadata saved in the database * :ref:`hasura metadata <hasura_metadata>` - Manage Hasura GraphQL engine metadata saved in the database
* :ref:`hasura migrate <hasura_migrate>` - Manage migrations on the database * :ref:`hasura migrate <hasura_migrate>` - Manage migrations on the database
* :ref:`hasura update-cli <hasura_update-cli>` - Update the CLI to latest version * :ref:`hasura update-cli <hasura_update-cli>` - Update the CLI to the latest version
* :ref:`hasura version <hasura_version>` - Print the CLI version * :ref:`hasura version <hasura_version>` - Print the CLI version
*Auto generated by spf13/cobra* *Auto generated by spf13/cobra*
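As an illustrative sketch (not part of the generated reference), a typical sequence of these commands might look like:

.. code-block:: bash

   # the endpoint and directory name are placeholders
   hasura init --directory my-project --endpoint https://my-graphql.herokuapp.com
   cd my-project
   hasura console          # manage the database and record migrations
   hasura migrate status   # check which migrations are applied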


@ -3,13 +3,13 @@
Hasura CLI: hasura completion Hasura CLI: hasura completion
----------------------------- -----------------------------
Generate auto completion code Generate auto completion code.
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
Output shell completion code for the specified shell (bash or zsh) Output shell completion code for the specified shell (bash or zsh).
:: ::
@ -70,6 +70,6 @@ Options inherited from parent commands
SEE ALSO SEE ALSO
~~~~~~~~ ~~~~~~~~
* :ref:`hasura <hasura>` - Hasura GraphQL Engine command line tool * :ref:`hasura <hasura>` - Hasura GraphQL engine command line tool
*Auto generated by spf13/cobra* *Auto generated by spf13/cobra*
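As one illustrative way to use it (the target file is an assumption, adjust for your shell setup):

.. code-block:: bash

   # append the generated bash completion code to your shell profile
   hasura completion bash >> ~/.bashrc
   source ~/.bashrc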


@ -3,13 +3,13 @@
Hasura CLI: hasura console Hasura CLI: hasura console
-------------------------- --------------------------
Open console to manage database and try out APIs Open the console to manage the database and try out APIs.
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
Run a web server to serve Hasura Console for GraphQL Engine to manage database and build queries Run a web server to serve the Hasura console for the GraphQL engine to manage the database and build queries.
:: ::
@ -32,10 +32,10 @@ Options
:: ::
--address string address to serve console and migration API from (default "localhost") --address string address to serve console and migration API from (default "localhost")
--admin-secret string admin secret for Hasura GraphQL Engine --admin-secret string admin secret for Hasura GraphQL engine
--api-port string port for serving migrate api (default "9693") --api-port string port for serving migrate api (default "9693")
--console-port string port for serving console (default "9695") --console-port string port for serving console (default "9695")
--endpoint string http(s) endpoint for Hasura GraphQL Engine --endpoint string http(s) endpoint for Hasura GraphQL engine
-h, --help help for console -h, --help help for console
--no-browser do not automatically open console in browser --no-browser do not automatically open console in browser
--static-dir string directory where static assets mentioned in the console html template can be served from --static-dir string directory where static assets mentioned in the console html template can be served from
@ -52,6 +52,6 @@ Options inherited from parent commands
SEE ALSO SEE ALSO
~~~~~~~~ ~~~~~~~~
* :ref:`hasura <hasura>` - Hasura GraphQL Engine command line tool * :ref:`hasura <hasura>` - Hasura GraphQL engine command line tool
*Auto generated by spf13/cobra* *Auto generated by spf13/cobra*


@ -3,13 +3,13 @@
Hasura CLI: hasura init Hasura CLI: hasura init
----------------------- -----------------------
Initialize directory for Hasura GraphQL Engine migrations Initialize a directory for Hasura GraphQL engine migrations.
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
Create directories and files required for enabling migrations on Hasura GraphQL Engine Create directories and files required for enabling migrations on the Hasura GraphQL engine.
:: ::
@ -35,9 +35,9 @@ Options
:: ::
--admin-secret string admin secret for Hasura GraphQL Engine --admin-secret string admin secret for Hasura GraphQL engine
--directory string name of directory where files will be created --directory string name of directory where files will be created
--endpoint string http(s) endpoint for Hasura GraphQL Engine --endpoint string http(s) endpoint for Hasura GraphQL engine
-h, --help help for init -h, --help help for init
Options inherited from parent commands Options inherited from parent commands
@ -52,6 +52,6 @@ Options inherited from parent commands
SEE ALSO SEE ALSO
~~~~~~~~ ~~~~~~~~
* :ref:`hasura <hasura>` - Hasura GraphQL Engine command line tool * :ref:`hasura <hasura>` - Hasura GraphQL engine command line tool
*Auto generated by spf13/cobra* *Auto generated by spf13/cobra*


@ -3,13 +3,13 @@
Hasura CLI: hasura metadata Hasura CLI: hasura metadata
--------------------------- ---------------------------
Manage Hasura GraphQL Engine metadata saved in the database Manage Hasura GraphQL engine metadata saved in the database.
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
Manage Hasura GraphQL Engine metadata saved in the database Manage Hasura GraphQL engine metadata saved in the database.
Options Options
~~~~~~~ ~~~~~~~
@ -30,10 +30,10 @@ Options inherited from parent commands
SEE ALSO SEE ALSO
~~~~~~~~ ~~~~~~~~
* :ref:`hasura <hasura>` - Hasura GraphQL Engine command line tool * :ref:`hasura <hasura>` - Hasura GraphQL engine command line tool
* :ref:`hasura metadata apply <hasura_metadata_apply>` - Apply Hasura metadata on a database * :ref:`hasura metadata apply <hasura_metadata_apply>` - Apply Hasura metadata on a database
* :ref:`hasura metadata clear <hasura_metadata_clear>` - Clear Hasura GraphQL Engine metadata on the database * :ref:`hasura metadata clear <hasura_metadata_clear>` - Clear Hasura GraphQL engine metadata on the database
* :ref:`hasura metadata export <hasura_metadata_export>` - Export Hasura GraphQL Engine metadata from the database * :ref:`hasura metadata export <hasura_metadata_export>` - Export Hasura GraphQL engine metadata from the database
* :ref:`hasura metadata reload <hasura_metadata_reload>` - Reload Hasura GraphQL Engine metadata on the database * :ref:`hasura metadata reload <hasura_metadata_reload>` - Reload Hasura GraphQL engine metadata on the database
*Auto generated by spf13/cobra* *Auto generated by spf13/cobra*


@ -3,13 +3,13 @@
Hasura CLI: hasura metadata apply Hasura CLI: hasura metadata apply
--------------------------------- ---------------------------------
Apply Hasura metadata on a database Apply Hasura metadata on a database.
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
Apply Hasura metadata on a database Apply Hasura metadata on a database.
:: ::
@ -20,7 +20,7 @@ Examples
:: ::
# Apply Hasura GraphQL Engine metadata present in metadata.[yaml|json] file: # Apply Hasura GraphQL engine metadata present in metadata.[yaml|json] file:
hasura metadata apply hasura metadata apply
Options Options
@ -28,8 +28,8 @@ Options
:: ::
--admin-secret string admin secret for Hasura GraphQL Engine --admin-secret string admin secret for Hasura GraphQL engine
--endpoint string http(s) endpoint for Hasura GraphQL Engine --endpoint string http(s) endpoint for Hasura GraphQL engine
-h, --help help for apply -h, --help help for apply
Options inherited from parent commands Options inherited from parent commands
@ -44,6 +44,6 @@ Options inherited from parent commands
SEE ALSO SEE ALSO
~~~~~~~~ ~~~~~~~~
* :ref:`hasura metadata <hasura_metadata>` - Manage Hasura GraphQL Engine metadata saved in the database * :ref:`hasura metadata <hasura_metadata>` - Manage Hasura GraphQL engine metadata saved in the database
*Auto generated by spf13/cobra* *Auto generated by spf13/cobra*


@ -3,13 +3,13 @@
Hasura CLI: hasura metadata clear Hasura CLI: hasura metadata clear
--------------------------------- ---------------------------------
Clear Hasura GraphQL Engine metadata on the database Clear Hasura GraphQL engine metadata on the database.
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
Clear Hasura GraphQL Engine metadata on the database Clear Hasura GraphQL engine metadata on the database.
:: ::
@ -30,8 +30,8 @@ Options
:: ::
--admin-secret string admin secret for Hasura GraphQL Engine --admin-secret string admin secret for Hasura GraphQL engine
--endpoint string http(s) endpoint for Hasura GraphQL Engine --endpoint string http(s) endpoint for Hasura GraphQL engine
-h, --help help for clear -h, --help help for clear
Options inherited from parent commands Options inherited from parent commands
@ -46,6 +46,6 @@ Options inherited from parent commands
SEE ALSO SEE ALSO
~~~~~~~~ ~~~~~~~~
* :ref:`hasura metadata <hasura_metadata>` - Manage Hasura GraphQL Engine metadata saved in the database * :ref:`hasura metadata <hasura_metadata>` - Manage Hasura GraphQL engine metadata saved in the database
*Auto generated by spf13/cobra* *Auto generated by spf13/cobra*


@ -3,15 +3,15 @@
Hasura CLI: hasura metadata export Hasura CLI: hasura metadata export
---------------------------------- ----------------------------------
Export Hasura GraphQL Engine metadata from the database Export Hasura GraphQL engine metadata from the database
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
Export Hasura metadata and save it in migrations/metadata.yaml file. Export Hasura metadata and save it in the ``migrations/metadata.yaml`` file.
The output is a yaml file which captures all the metadata required The output is a yaml file which captures all the metadata required
by GraphQL Engine. This includes info about tables that are tracked, by the GraphQL engine. This includes info about tables that are tracked,
permission rules, relationships and event triggers that are defined permission rules, relationships and event triggers that are defined
on those tables. on those tables.
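A short sketch of the export/apply round trip (the endpoints are placeholders):

.. code-block:: bash

   # export the current metadata to migrations/metadata.yaml
   hasura metadata export --endpoint https://my-graphql.herokuapp.com

   # apply the exported metadata to another instance
   hasura metadata apply --endpoint https://another-instance.herokuapp.com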
@ -32,8 +32,8 @@ Options
:: ::
--admin-secret string admin secret for Hasura GraphQL Engine --admin-secret string admin secret for Hasura GraphQL engine
--endpoint string http(s) endpoint for Hasura GraphQL Engine --endpoint string http(s) endpoint for Hasura GraphQL engine
-h, --help help for export -h, --help help for export
Options inherited from parent commands Options inherited from parent commands
@ -48,6 +48,6 @@ Options inherited from parent commands
SEE ALSO SEE ALSO
~~~~~~~~ ~~~~~~~~
* :ref:`hasura metadata <hasura_metadata>` - Manage Hasura GraphQL Engine metadata saved in the database * :ref:`hasura metadata <hasura_metadata>` - Manage Hasura GraphQL engine metadata saved in the database
*Auto generated by spf13/cobra* *Auto generated by spf13/cobra*


@ -3,13 +3,13 @@
Hasura CLI: hasura metadata reload Hasura CLI: hasura metadata reload
---------------------------------- ----------------------------------
Reload Hasura GraphQL Engine metadata on the database Reload Hasura GraphQL engine metadata on the database.
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
Reload Hasura GraphQL Engine metadata on the database Reload Hasura GraphQL engine metadata on the database.
:: ::
@ -28,8 +28,8 @@ Options
:: ::
--admin-secret string admin secret for Hasura GraphQL Engine --admin-secret string admin secret for Hasura GraphQL engine
--endpoint string http(s) endpoint for Hasura GraphQL Engine --endpoint string http(s) endpoint for Hasura GraphQL engine
-h, --help help for reload -h, --help help for reload
Options inherited from parent commands Options inherited from parent commands
@ -44,6 +44,6 @@ Options inherited from parent commands
SEE ALSO SEE ALSO
~~~~~~~~ ~~~~~~~~
* :ref:`hasura metadata <hasura_metadata>` - Manage Hasura GraphQL Engine metadata saved in the database * :ref:`hasura metadata <hasura_metadata>` - Manage Hasura GraphQL engine metadata saved in the database
*Auto generated by spf13/cobra* *Auto generated by spf13/cobra*


@ -3,13 +3,13 @@
Hasura CLI: hasura migrate Hasura CLI: hasura migrate
-------------------------- --------------------------
Manage migrations on the database Manage migrations on the database.
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
Manage migrations on the database Manage migrations on the database.
Options Options
~~~~~~~ ~~~~~~~
@ -30,7 +30,7 @@ Options inherited from parent commands
SEE ALSO SEE ALSO
~~~~~~~~ ~~~~~~~~
* :ref:`hasura <hasura>` - Hasura GraphQL Engine command line tool * :ref:`hasura <hasura>` - Hasura GraphQL engine command line tool
* :ref:`hasura migrate apply <hasura_migrate_apply>` - Apply migrations on the database * :ref:`hasura migrate apply <hasura_migrate_apply>` - Apply migrations on the database
* :ref:`hasura migrate create <hasura_migrate_create>` - Create files required for a migration * :ref:`hasura migrate create <hasura_migrate_create>` - Create files required for a migration
* :ref:`hasura migrate status <hasura_migrate_status>` - Display current status of migrations on a database * :ref:`hasura migrate status <hasura_migrate_status>` - Display current status of migrations on a database


@ -3,13 +3,13 @@
Hasura CLI: hasura migrate apply Hasura CLI: hasura migrate apply
-------------------------------- --------------------------------
Apply migrations on the database Apply migrations on the database.
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
Apply migrations on the database Apply migrations on the database.
:: ::
@ -20,9 +20,9 @@ Options
:: ::
--admin-secret string admin secret for Hasura GraphQL Engine --admin-secret string admin secret for Hasura GraphQL engine
--down string apply all or N down migration steps --down string apply all or N down migration steps
--endpoint string http(s) endpoint for Hasura GraphQL Engine --endpoint string http(s) endpoint for Hasura GraphQL engine
-h, --help help for apply -h, --help help for apply
--skip-execution skip executing the migration action, but mark them as applied --skip-execution skip executing the migration action, but mark them as applied
--type string type of migration (up, down) to be used with version flag (default "up") --type string type of migration (up, down) to be used with version flag (default "up")


@ -3,13 +3,13 @@
Hasura CLI: hasura migrate create Hasura CLI: hasura migrate create
--------------------------------- ---------------------------------
Create files required for a migration Create files required for a migration.
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
Create sql and yaml files required for a migration Create ``sql`` and ``yaml`` files required for a migration.
:: ::
@ -20,8 +20,8 @@ Options
:: ::
--admin-secret string admin secret for Hasura GraphQL Engine --admin-secret string admin secret for Hasura GraphQL engine
--endpoint string http(s) endpoint for Hasura GraphQL Engine --endpoint string http(s) endpoint for Hasura GraphQL engine
-h, --help help for create -h, --help help for create
--metadata-from-file string path to a hasura metadata file to be used for up actions --metadata-from-file string path to a hasura metadata file to be used for up actions
--metadata-from-server take metadata from the server and write it as an up migration file --metadata-from-server take metadata from the server and write it as an up migration file


@ -3,13 +3,13 @@
Hasura CLI: hasura migrate status Hasura CLI: hasura migrate status
--------------------------------- ---------------------------------
Display current status of migrations on a database Display current status of migrations on a database.
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
Display current status of migrations on a database Display current status of migrations on a database.
:: ::
@ -20,8 +20,8 @@ Options
:: ::
--admin-secret string admin secret for Hasura GraphQL Engine --admin-secret string admin secret for Hasura GraphQL engine
--endpoint string http(s) endpoint for Hasura GraphQL Engine --endpoint string http(s) endpoint for Hasura GraphQL engine
-h, --help help for status -h, --help help for status
Options inherited from parent commands Options inherited from parent commands


@ -3,13 +3,13 @@
Hasura CLI: hasura update-cli Hasura CLI: hasura update-cli
----------------------------- -----------------------------
Update the CLI to latest version Update the CLI to the latest version.
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
Update the CLI to latest version Update the CLI to the latest version.
:: ::
@ -47,6 +47,6 @@ Options inherited from parent commands
SEE ALSO SEE ALSO
~~~~~~~~ ~~~~~~~~
* :ref:`hasura <hasura>` - Hasura GraphQL Engine command line tool * :ref:`hasura <hasura>` - Hasura GraphQL engine command line tool
*Auto generated by spf13/cobra* *Auto generated by spf13/cobra*


@ -3,13 +3,13 @@
Hasura CLI: hasura version Hasura CLI: hasura version
-------------------------- --------------------------
Print the CLI version Print the CLI version.
Synopsis Synopsis
~~~~~~~~ ~~~~~~~~
Print the CLI version Print the CLI version.
:: ::
@ -34,6 +34,6 @@ Options inherited from parent commands
SEE ALSO SEE ALSO
~~~~~~~~ ~~~~~~~~
* :ref:`hasura <hasura>` - Hasura GraphQL Engine command line tool * :ref:`hasura <hasura>` - Hasura GraphQL engine command line tool
*Auto generated by spf13/cobra* *Auto generated by spf13/cobra*


@ -68,6 +68,4 @@ Install
(Optional) Add shell completion (Optional) Add shell completion
------------------------------- -------------------------------
To add command auto completion in the shell To add command auto completion in the shell, refer to :ref:`hasura completion <hasura_completion>`.
Refer to :ref:`hasura completion <hasura_completion>`


@ -6,26 +6,26 @@ How Hasura GraphQL engine works
:depth: 1 :depth: 1
:local: :local:
Given a Postgres database, Hasura GraphQL engine can automatically generate a GraphQL schema and process GraphQL Given a Postgres database, the Hasura GraphQL engine can automatically generate a GraphQL schema and process GraphQL
queries, subscriptions and mutations. Here's what Hasura GraphQL engine does under the hood. queries, subscriptions and mutations. Here's what the Hasura GraphQL engine does under the hood.
Schema generation Schema generation
----------------- -----------------
Hasura GraphQL engine automatically generates GraphQL schema components when you track a The Hasura GraphQL engine automatically generates GraphQL schema components when you track a
Postgres table/view in Hasura and create relationships between them. Postgres table/view in Hasura and create relationships between them.
Tables Tables
^^^^^^ ^^^^^^
When you track a Postgres table in Hasura GraphQL engine, it automatically generates the following for it: When you track a Postgres table in the Hasura GraphQL engine, it automatically generates the following for it:
- A GraphQL type definition for the table - A GraphQL type definition for the table
- A Query field with ``where``, ``order_by``, ``limit`` and ``offset`` arguments - A query field with ``where``, ``order_by``, ``limit`` and ``offset`` arguments
- A Subscription field with ``where``, ``order_by``, ``limit`` and ``offset`` arguments - A subscription field with ``where``, ``order_by``, ``limit`` and ``offset`` arguments
- An Insert mutation field with ``on_conflict`` argument that supports upsert and bulk inserts - An insert mutation field with ``on_conflict`` argument that supports upsert and bulk inserts
- An Update mutation field with ``where`` argument that supports bulk updates - An update mutation field with ``where`` argument that supports bulk updates
- A Delete mutation field with ``where`` argument that supports bulk deletes - A delete mutation field with ``where`` argument that supports bulk deletes
Views Views
^^^^^ ^^^^^
@ -33,15 +33,15 @@ Views
When you track a Postgres view in Hasura GraphQL engine, it automatically generates the following for it: When you track a Postgres view in Hasura GraphQL engine, it automatically generates the following for it:
- A GraphQL type definition for the view - A GraphQL type definition for the view
- A Query field with ``where``, ``order_by``, ``limit`` and ``offset`` arguments - A query field with ``where``, ``order_by``, ``limit`` and ``offset`` arguments
- A Subscription field with ``where``, ``order_by``, ``limit`` and ``offset`` arguments - A subscription field with ``where``, ``order_by``, ``limit`` and ``offset`` arguments
Essentially Hasura GraphQL engine does the same thing it would do for a table, but without creating the insert, update Essentially the Hasura GraphQL engine does the same thing it would do for a table, but without creating the insert, update
and delete mutations. and delete mutations.
Relationships Relationships
^^^^^^^^^^^^^ ^^^^^^^^^^^^^
When you create a relationship between a table/view with another table/view in Hasura GraphQL engine, it does the When you create a relationship between a table/view with another table/view in the Hasura GraphQL engine, it does the
following: following:
- Augments the type of the table/view by adding a reference to the nested type to allow fetching nested objects. - Augments the type of the table/view by adding a reference to the nested type to allow fetching nested objects.
@ -50,7 +50,8 @@ following:
Resolvers Resolvers
--------- ---------
Hasura GraphQL engine does not have any resolvers. The Hasura GraphQL engine is actually a compiler that compiles your GraphQL query into an efficient SQL query. The Hasura GraphQL engine does not have any resolvers. The Hasura GraphQL engine is actually a compiler that compiles
your GraphQL query into an efficient SQL query.
Hasura's GraphQL syntax is also optimized to expose the power of the underlying SQL so that you can make powerful Hasura's GraphQL syntax is also optimized to expose the power of the underlying SQL so that you can make powerful
queries via GraphQL. queries via GraphQL.
@ -58,11 +59,11 @@ queries via GraphQL.
Metadata Metadata
-------- --------
All the information required for schema generation is stored by Hasura GraphQL engine as metadata in a specific All the information required for schema generation is stored by the Hasura GraphQL engine as metadata in a specific
Postgres schema in the database. See :doc:`metadata schema <metadata-schema>` for more details. Postgres schema in the database. See :doc:`metadata schema <metadata-schema>` for more details.
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
:hidden: :hidden:
Metadata schema <metadata-schema> Metadata schema <metadata-schema>


@ -8,21 +8,21 @@ Hasura GraphQL engine metadata schema
:depth: 2 :depth: 2
:local: :local:
Hasura GraphQL engine uses a set of internal tables to manage the state of the database and the The Hasura GraphQL engine uses a set of internal tables to manage the state of the database and the
GraphQL schema. It uses the data in these tables to generate the GraphQL API which then can be accessed GraphQL schema. It uses the data in these tables to generate the GraphQL API which then can be accessed
from different clients. from different clients.
Hasura GraphQL engine when initialized, creates a schema called ``hdb_catalog`` in the Postgres database and The Hasura GraphQL engine when initialized, creates a schema called ``hdb_catalog`` in the Postgres database and
initializes a few tables under it as described below. initializes a few tables under it as described below.
**hdb_catalog** schema **hdb_catalog** schema
---------------------- ----------------------
This schema is created by Hasura GraphQL Engine to manage its internal state. Whenever a This schema is created by the Hasura GraphQL engine to manage its internal state. Whenever a
table/permission/relationship is created/updated using the Hasura console or the metadata API. Hasura GraphQL engine table/permission/relationship is created/updated using the Hasura console or the metadata API, the Hasura GraphQL engine
captures that information and stores it in the corresponding tables. captures that information and stores it in the corresponding tables.
The following tables are used by Hasura GraphQL engine: The following tables are used by the Hasura GraphQL engine:
**hdb_table** table **hdb_table** table
^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^


@ -1,6 +1,6 @@
.. title:: Hasura GraphQL Engine Documentation .. title:: Hasura GraphQL engine Documentation
Hasura GraphQL Engine documentation Hasura GraphQL engine documentation
=================================== ===================================
.. contents:: Table of contents .. contents:: Table of contents
@ -8,7 +8,7 @@ Hasura GraphQL Engine documentation
:depth: 1 :depth: 1
:local: :local:
The Hasura GraphQL engine lets you setup a GraphQL server and event triggers over a Postgres database in minutes. The Hasura GraphQL engine lets you set up a GraphQL server and event triggers over a Postgres database in minutes.
.. toctree:: .. toctree::


@ -6,31 +6,31 @@ Rolling back applied migrations
:depth: 1 :depth: 1
:local: :local:
If there is some issue with the migrations that are applied, you can If there are any issues with the migrations that are applied, you can
rollback the database and Hasura metadata to a desired version using the roll back the database and Hasura metadata to a desired version using the
``down`` migrations. ``down`` migrations.
.. note:: .. note::
Rollbacks will only work if there are ``down`` migrations defined. Console Rollbacks will only work if there are ``down`` migrations defined. The console
will not generate ``down`` migrations for SQL statements executed from the will not generate ``down`` migrations for SQL statements executed from the
``SQL`` tab, even though you can add them as an ``up`` migration. ``SQL`` tab, even though you can add them as an ``up`` migration.
Rollback also means applying down migrations. Here are some example scenarios: Rollback also means applying down migrations. Here are some example scenarios:
To rollback all the applied migrations, execute: To roll back all the applied migrations, execute:
.. code-block:: bash .. code-block:: bash
hasura migrate apply --down hasura migrate apply --down
To rollback the last 2 migration versions: To roll back the last 2 migration versions:
.. code-block:: bash .. code-block:: bash
hasura migrate apply --down 2 hasura migrate apply --down 2
To rollback a particular migration version: To roll back a particular migration version:
.. code-block:: bash .. code-block:: bash
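   # illustrative sketch: replace <version> with the migration's timestamp version
   hasura migrate apply --version <version> --type down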


@ -8,8 +8,8 @@ Writing migrations manually
While the Hasura Console can auto generate migrations for every action, While the Hasura Console can auto generate migrations for every action,
sometimes you might want to write the migrations yourself, by hand. Using the sometimes you might want to write the migrations yourself, by hand. Using the
Hasura CLI, you can bootstrap these migration files and write the SQL for Hasura CLI, you can bootstrap these migration files and write the SQL for the
Postgres Schema and YAML for Hasura metadata actions. Postgres schema and YAML for Hasura metadata actions.
#. Set up the migration files: #. Set up the migration files:


@ -1,14 +1,14 @@
.. _auto_apply_migrations: .. _auto_apply_migrations:
Auto-apply migrations/metadata when server starts Auto-apply migrations/metadata when the server starts
================================================= =====================================================
.. contents:: Table of contents .. contents:: Table of contents
:backlinks: none :backlinks: none
:depth: 1 :depth: 1
:local: :local:
Hasura ships a special docker container which can be used to Hasura ships a special Docker container which can be used to
automatically apply migrations/metadata when the server starts: automatically apply migrations/metadata when the server starts:
.. code-block:: bash .. code-block:: bash
@ -17,16 +17,16 @@ automatically apply migrations/metadata when the server starts:
.. note:: .. note::
This container image includes Hasura CLI at ``/bin/hasura-cli`` and can be This container image includes the Hasura CLI at ``/bin/hasura-cli`` and can be
used for running any other CI/CD scripts in your workflow. used for running any other CI/CD scripts in your workflow.
Applying migrations Applying migrations
------------------- -------------------
The ``migrations`` directory created by Hasura CLI (the one next to The ``migrations`` directory created by the Hasura CLI (the one next to
``config.yaml``) can be mounted at ``/hasura-migrations`` path of this docker ``config.yaml``) can be mounted at the ``/hasura-migrations`` path of this Docker
container and the container's entry point script will apply the migrations before container and the container's entry point script will apply the migrations before
starting the server. If no directory is mounted at the designated path, server starting the server. If no directory is mounted at the designated path, the server
will start ignoring migrations. will start ignoring migrations.
If you want to mount the migrations directory at some location other than If you want to mount the migrations directory at some location other than
@ -37,7 +37,7 @@ If you want to mount the migrations directory at some location other than
HASURA_GRAPHQL_MIGRATIONS_DIR=/custom-path-for-migrations HASURA_GRAPHQL_MIGRATIONS_DIR=/custom-path-for-migrations
Once the migrations are applied, the container resumes operation as a normal Once the migrations are applied, the container resumes operation as a normal
Hasura GraphQL Engine server. Hasura GraphQL engine server.
Example: Example:
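A minimal sketch of such an invocation (the image tag, project path and database URL are illustrative placeholders):

.. code-block:: bash

   docker run -p 8080:8080 \
     -v /path/to/my-project/migrations:/hasura-migrations \
     -e HASURA_GRAPHQL_DATABASE_URL=postgres://user:password@host:5432/dbname \
     hasura/graphql-engine:<version>.cli-migrations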


@ -10,16 +10,16 @@ This guide is to be followed if you already have set up a database and Hasura
instance and now want to start using migrations to help you track the database instance and now want to start using migrations to help you track the database
and GraphQL schema changes. and GraphQL schema changes.
Step 0: Disable console on Server Step 0: Disable console on the server
--------------------------------- -------------------------------------
To use migrations effectively, console on the Server (which is served at To use migrations effectively, the console on the server (which is served at
``/console``) should be disabled and all changes must go through the console ``/console``) should be disabled and all changes must go through the console
served by CLI. Otherwise, changes could be made through the server-console and served by CLI. Otherwise, changes could be made through the server console and
they will not be tracked by migrations. they will not be tracked by migrations.
So, the first step is to disable console served by the GraphQL Engine server. In So, the first step is to disable the console served by the GraphQL engine server. In
order to do that remove ``--enable-console`` flag from the command that starts order to do that, remove the ``--enable-console`` flag from the command that starts
the server or set the following environment variable to false: the server or set the following environment variable to false:
.. code-block:: bash .. code-block:: bash
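   # illustrative: disables the console served by the GraphQL engine server
   HASURA_GRAPHQL_ENABLE_CONSOLE=false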
@ -39,7 +39,7 @@ Follow the instructions in :ref:`install_hasura_cli`.
Step 2: Set up a project directory Step 2: Set up a project directory
---------------------------------- ----------------------------------
Execute the following command. For the endpoint referred here, let's say you've Execute the command below. For the endpoint referred here, let's say you've
deployed the GraphQL engine on Heroku, then this endpoint is: deployed the GraphQL engine on Heroku, then this endpoint is:
``https://my-graphql.herokuapp.com``. In case you've deployed this using Docker, ``https://my-graphql.herokuapp.com``. In case you've deployed this using Docker,
the URL might be ``http://xx.xx.xx.xx:8080``. This endpoint should not contain the URL might be ``http://xx.xx.xx.xx:8080``. This endpoint should not contain
@ -53,21 +53,21 @@ sub-path if it is configured that way.
cd my-project cd my-project
This will create a new directory called ``my-project`` with a ``config.yaml`` This will create a new directory called ``my-project`` with a ``config.yaml``
file and ``migrations`` directory. This directory structure is mandatory to use file and a ``migrations`` directory. This directory structure is mandatory to use
Hasura migrations. You can commit this directory to version control. Hasura migrations. You can commit this directory to version control.
.. note:: .. note::
In case there is an admin secret set, you can set it as an environment In case there is an admin secret set, you can set it as an environment
variable ``HASURA_GRAPHQL_ADMIN_SECRET=<your-admin-secret>`` on the local variable ``HASURA_GRAPHQL_ADMIN_SECRET=<your-admin-secret>`` on the local
machine and the the CLI will use it. You can also use it as a flag to CLI: machine and the CLI will use it. You can also use it as a flag to CLI:
``--admin-secret '<your-admin-secret>'``. ``--admin-secret '<your-admin-secret>'``.
Step 3: Initialize the migrations as per your current state Step 3: Initialize the migrations as per your current state
----------------------------------------------------------- -----------------------------------------------------------
Create a migration called ``init`` by exporting the current Postgres schema and Create a migration called ``init`` by exporting the current Postgres schema and
metadata from server: metadata from the server:
.. code-block:: bash .. code-block:: bash
@ -103,7 +103,7 @@ Step 4: Use the console from the CLI
------------------------------------ ------------------------------------
From this point onwards, instead of using the console at From this point onwards, instead of using the console at
``http://my-graphql.herokuapp.com/console`` you should use the console from CLI ``http://my-graphql.herokuapp.com/console`` you should use the console from the CLI
by running: by running:
.. code-block:: bash .. code-block:: bash
@ -121,8 +121,8 @@ in the ``migrations/`` directory in your project.
Migrations are only created when using the console through CLI. Migrations are only created when using the console through CLI.
Step 6: Apply the migrations on another instance of GraphQL engine Step 6: Apply the migrations on another instance of the GraphQL engine
------------------------------------------------------------------ ----------------------------------------------------------------------
Apply all migrations present in the ``migrations/`` directory on a new Apply all migrations present in the ``migrations/`` directory on a new
instance at ``http://another-graphql-instance.herokuapp.com``: instance at ``http://another-graphql-instance.herokuapp.com``:
@ -133,12 +133,12 @@ instance at ``http://another-graphql-instance.herokuapp.com``:
hasura migrate apply --endpoint http://another-graphql-instance.herokuapp.com hasura migrate apply --endpoint http://another-graphql-instance.herokuapp.com
In case you need an automated way of applying the migrations, take a look at the In case you need an automated way of applying the migrations, take a look at the
:doc:`CLI-Migrations <auto-apply-migrations>` docker image, which can start :doc:`CLI-Migrations <auto-apply-migrations>` Docker image, which can start the
GraphQL Engine after automatically applying the migrations which are GraphQL engine after automatically applying the migrations which are
mounted into a directory. mounted into a directory.
Step 7: Check status of migrations Step 7: Check the status of migrations
---------------------------------- --------------------------------------
.. code-block:: bash .. code-block:: bash
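   # sketch: run from the project directory
   hasura migrate status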
@ -158,13 +158,13 @@ For example,
1550931962927 Present Present 1550931962927 Present Present
1550931970826 Present Present 1550931970826 Present Present
Such a migration status indicate that there are 3 migration versions in the Such a migration status indicates that there are 3 migration versions in the
local directory and all of them are applied on the database. local directory and all of them are applied on the database.
If ``SOURCE STATUS`` indicates ``Not Present``, it means that the migration If ``SOURCE STATUS`` indicates ``Not Present``, it means that the migration
version is present on the server, but not on the current user's local directory. version is present on the server, but not on the current user's local directory.
This typically happens if multiple people are collaborating on a project and one This typically happens if multiple people are collaborating on a project and one
of the collaborator forgot to pull the latest changes which included the latest of the collaborators forgot to pull the latest changes which included the latest
migration files or another collaborator forgot to push the latest migration migration files or another collaborator forgot to push the latest migration
files that were applied on the database. Syncing of the files would fix the files that were applied on the database. Syncing of the files would fix the
issue. issue.


@ -14,7 +14,7 @@ Introduction
It is typical for developers to use some kind of "migration" tool to track It is typical for developers to use some kind of "migration" tool to track
changes to the Postgres schema. Usually the SQL statements used to create the changes to the Postgres schema. Usually the SQL statements used to create the
tables, views etc. are stored as a single file or multiple files. Certain tools tables, views etc. are stored as a single file or multiple files. Certain tools
also let you add an "up" and a "down" step so that you can roll-back the also let you add an "up" and a "down" step so that you can roll back the
changes. changes.
When you connect Hasura to a Postgres database and use the console to "track" a When you connect Hasura to a Postgres database and use the console to "track" a
@ -22,7 +22,7 @@ table, a piece of information is added to the Hasura "metadata" (configuration)
indicating this table in Postgres should be exposed via GraphQL. Similarly, indicating this table in Postgres should be exposed via GraphQL. Similarly,
most of the actions on the console update the Hasura metadata. most of the actions on the console update the Hasura metadata.
In the development phase, you'll be using the Hasura Console to create and track In the development phase, you'll be using the Hasura console to create and track
tables, create relationships, add permissions etc. When you need to move to a tables, create relationships, add permissions etc. When you need to move to a
new environment, it will become quite hard to re-do all these operations using new environment, it will become quite hard to re-do all these operations using
the console again on a fresh database. You might be looking for a way to export the console again on a fresh database. You might be looking for a way to export


@ -1,6 +1,6 @@
.. _manage_hasura_metadata: .. _manage_hasura_metadata:
Managing Hasura Metadata Managing Hasura metadata
======================== ========================
.. contents:: Table of contents .. contents:: Table of contents
@ -9,16 +9,16 @@ Managing Hasura Metadata
:local: :local:
If your Postgres schema is already managed with a tool like knex, TypeORM, If your Postgres schema is already managed with a tool like knex, TypeORM,
Django/Rails Migrations, you will still need a way to export the actions you Django/Rails migrations, you will still need a way to export the actions you
performed on the Hasura console to apply it later on another Hasura instance. performed on the Hasura console to apply it later on another Hasura instance.
All the actions performed on the console, like tracking tables/views/functions, All the actions performed on the console, like tracking tables/views/functions,
creating relationships, configuring permissions, creating event triggers and remote creating relationships, configuring permissions, creating event triggers and remote
schemas, etc. can be exported as a JSON file which can be version schemas, etc. can be exported as a JSON file which can be version
controlled. The contents of this JSON file is called "Hasura metadata". The controlled. The content of this JSON file is called "Hasura metadata". The
metadata file can be later imported to another Hasura instance to get the same metadata file can be later imported to another Hasura instance to get the same
configuration. You can also manually edit the JSON file to add more objects to configuration. You can also manually edit the JSON file to add more objects to
it and then use it update the instance. it and then use it to update the instance.
Exporting Hasura metadata Exporting Hasura metadata
------------------------- -------------------------
@ -28,8 +28,8 @@ Exporting Hasura metadata
.. tab:: Console .. tab:: Console
1. Click on the Settings gear icon at the top right corner of the console screen. 1. Click on the settings (⚙) icon at the top right corner of the console screen.
2. In the Settings page that open, click on the ``Export Metadata`` button. 2. In the Hasura metadata actions page that opens, click on the ``Export Metadata`` button.
3. This will prompt a file download for ``metadata.json``. Save the file. 3. This will prompt a file download for ``metadata.json``. Save the file.
.. tab:: API .. tab:: API
@ -47,19 +47,20 @@ Exporting Hasura metadata
add ``-H 'X-Hasura-Admin-Secret: <your-admin-secret>'`` as the API is an add ``-H 'X-Hasura-Admin-Secret: <your-admin-secret>'`` as the API is an
admin-only API. admin-only API.
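A sketch of the corresponding ``curl`` call (the endpoint is a placeholder):

.. code-block:: bash

   # export the metadata via the admin-only query API and save it to a file
   curl -d '{"type": "export_metadata", "args": {}}' http://localhost:8080/v1/query -o metadata.json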
Importing Hasura Metadata Importing Hasura metadata
------------------------- -------------------------
The exported metadata can be imported on another instance of Hasura to replicate You can apply exported metadata from one Hasura GraphQL engine instance to another. You can also apply an older or
the metadata. The import can be done on the same instance and it will overwrite modified version of an instance's metadata onto itself.
the existing metadata with that instance.
Importing completely replaces the metadata on that instance, i.e. you lose any metadata that was already present.
.. rst-class:: api_tabs .. rst-class:: api_tabs
.. tabs:: .. tabs::
.. tab:: Console .. tab:: Console
1. Click on the Settings gear icon at the top right corner of the console screen. 1. Click on the settings (⚙) icon at the top right corner of the console screen.
2. Click on ``Import Metadata`` button. 2. Click on ``Import Metadata`` button.
3. Choose a ``metadata.json`` file that was exported earlier. 3. Choose a ``metadata.json`` file that was exported earlier.
4. A notification should appear indicating the success or error. 4. A notification should appear indicating the success or error.
@ -74,8 +75,8 @@ the existing metadata with that instance.
curl -d'{"type":"replace_metadata", "args":'$(cat metadata.json)'}' http://localhost:8080/v1/query curl -d'{"type":"replace_metadata", "args":'$(cat metadata.json)'}' http://localhost:8080/v1/query
This command will read ``metadata.json`` file and makes a POST request to This command will read the ``metadata.json`` file and makes a POST request to
replace the metadata. If admin secret is set, add ``-H replace the metadata. If an admin secret is set, add ``-H
'X-Hasura-Admin-Secret: <your-admin-secret>'`` as the API is an admin-only 'X-Hasura-Admin-Secret: <your-admin-secret>'`` as the API is an admin-only
API. API.
@ -89,5 +90,5 @@ the existing metadata with that instance.
The ``curl`` based API calls can be easily integrated with your CI/CD workflows. The ``curl`` based API calls can be easily integrated with your CI/CD workflows.
In case you need an automated way of applying/importing the metadata, take a In case you need an automated way of applying/importing the metadata, take a
look at the :doc:`CLI-Migrations <auto-apply-migrations>` docker image, which look at the :doc:`CLI-Migrations <auto-apply-migrations>` Docker image, which
can start GraphQL Engine after automatically importing a mounted metadata file. can start the GraphQL engine after automatically importing a mounted metadata file.


@ -1,6 +1,6 @@
.. _postgres_schema_metadata: .. _postgres_schema_metadata:
Managing Postgres Schema and Hasura Metadata Managing Postgres schema and Hasura metadata
============================================ ============================================
.. contents:: Table of contents .. contents:: Table of contents
@ -15,7 +15,7 @@ modifying SQL statements, as YAML files. These files are called migrations and
they can be applied and rolled back step-by-step. These files can be version they can be applied and rolled back step-by-step. These files can be version
controlled and can be used with your CI/CD system to make incremental updates. controlled and can be used with your CI/CD system to make incremental updates.
When you're looking to setup migrations, there are two scenarios: When you're looking to set up migrations, there are two scenarios:
#. :doc:`You already have a database and Hasura setup <existing-database>`. #. :doc:`You already have a database and Hasura setup <existing-database>`.
#. :doc:`You're starting from scratch - an empty database and a fresh Hasura instance <new-database>`. #. :doc:`You're starting from scratch - an empty database and a fresh Hasura instance <new-database>`.
You can use migrations to help track the database and GraphQL schema changes.
Step 0: Disable the console on the server
-----------------------------------------
To use migrations effectively, the console on the server (which is served at
``/console``) should be disabled and all changes must go through the console
served by the CLI. Otherwise, changes could be made through the server console and
they will not be tracked by migrations.
So, the first step is to disable the console served by the GraphQL engine server. In
order to do that, remove the ``--enable-console`` flag from the command that starts
the server or set the following environment variable to false:
.. code-block:: bash

   HASURA_GRAPHQL_ENABLE_CONSOLE=false
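For example, if you run the GraphQL engine with Docker, the same setting can be passed to the container as an
environment variable (a minimal sketch; the image tag and database URL are placeholders):

.. code-block:: bash

   # Placeholders: adjust the image tag and database URL for your setup
   docker run -p 8080:8080 \
     -e HASURA_GRAPHQL_DATABASE_URL=postgres://user:password@host:5432/dbname \
     -e HASURA_GRAPHQL_ENABLE_CONSOLE=false \
     hasura/graphql-engine:latest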
Step 1: Install the Hasura CLI
------------------------------

Follow the instructions in :ref:`install_hasura_cli`.
Step 2: Set up a project directory
----------------------------------
Execute the command below. For the endpoint referred to here, let's say you've
deployed the GraphQL engine on Heroku, then this endpoint is:
``https://my-graphql.herokuapp.com``. In case you've deployed this using Docker,
the URL might be ``http://xx.xx.xx.xx:8080``. This endpoint should not contain the
API path; it should just be the hostname and any sub-path if it is configured that way.
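The initialization command itself is elided in this diff; a minimal sketch, assuming the CLI's ``hasura init`` command
with the ``--endpoint`` flag and the project name used in this guide, followed by changing into the new directory as
shown below:

.. code-block:: bash

   # The exact flags may differ; --endpoint points at your running instance
   hasura init my-project --endpoint https://my-graphql.herokuapp.com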
.. code-block:: bash

   cd my-project
This will create a new directory called ``my-project`` with a ``config.yaml``
file and a ``migrations`` directory. This directory structure is mandatory to use
Hasura migrations. You can commit this directory to version control.
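The resulting layout should look roughly like this (a sketch based on the description above; additional files may be
present depending on the CLI version):

.. code-block:: bash

   ls my-project
   # config.yaml  migrations/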
.. note::

   In case there is an admin secret set, you can set it as an environment
   variable ``HASURA_GRAPHQL_ADMIN_SECRET=<your-admin-secret>`` on the local
   machine and the CLI will use it. You can also use it as a flag to the CLI:
   ``--admin-secret "<your-admin-secret>"``.
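Concretely, either of the following would work (a sketch; ``hasura console`` is just used as an example command here):

.. code-block:: bash

   # Option 1: set it once for the shell session
   export HASURA_GRAPHQL_ADMIN_SECRET=<your-admin-secret>

   # Option 2: pass it per command via the flag
   hasura console --admin-secret "<your-admin-secret>"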
Step 3: Open the console from the CLI
-------------------------------------
Instead of using the console at ``http://my-graphql.herokuapp.com/console``, you
should now use the console by running:
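The command itself is elided in this diff; a minimal sketch, assuming it is run from inside the project directory so
that the endpoint is read from ``config.yaml``:

.. code-block:: bash

   # Run from within the my-project directory
   hasura console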
Step 4: Add a new table and see how a migration is added
---------------------------------------------------------
As you use the Hasura console UI to make changes to your schema, migration files
are automatically generated in the ``migrations/`` directory in your project.
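As an illustration, after creating a table through this console you might see files like the following appear (the
file names here are hypothetical and the exact layout depends on the CLI version):

.. code-block:: bash

   ls migrations/
   # 1550931962927_create_table_public_address.up.yaml
   # 1550931962927_create_table_public_address.down.yaml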
Step 5: Apply the migrations on another instance of the GraphQL engine
-----------------------------------------------------------------------
Apply all migrations present in the ``migrations/`` directory on a new
instance at ``http://another-graphql-instance.herokuapp.com``:
.. code-block:: bash

   hasura migrate apply --endpoint http://another-graphql-instance.herokuapp.com
In case you need an automated way of applying the migrations, take a look at the
:doc:`CLI-Migrations <auto-apply-migrations>` Docker image, which can start the
GraphQL engine after automatically applying the migrations which are
mounted into a directory.
Step 6: Check the status of migrations
---------------------------------------
.. code-block:: bash

   hasura migrate status
For example,

.. code-block:: bash

   VERSION        SOURCE STATUS  DATABASE STATUS
   1550931962927  Present        Present
   1550931970826  Present        Present
Such a migration status indicates that there are 3 migration versions in the
local directory and all of them are applied on the database.
If ``SOURCE STATUS`` indicates ``Not Present``, it means that the migration
version is present on the server, but not in the current user's local directory.
This typically happens if multiple people are collaborating on a project and one
of the collaborators forgot to pull the latest changes which included the latest
migration files, or another collaborator forgot to push the latest migration
files that were applied on the database. Syncing the files would fix the
issue.
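Assuming the project directory is tracked in Git (as suggested earlier), syncing might look like this (a sketch):

.. code-block:: bash

   # Pull the collaborator's migration files, then re-check the status
   git pull
   hasura migrate status --endpoint http://another-graphql-instance.herokuapp.com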