How Hasura migrations work
==========================

.. contents:: Table of contents
  :backlinks: none
  :depth: 1
  :local:

This is an explanation of how the Hasura migration system works. To understand
how to use the system, refer to :doc:`Migrations & Metadata <../index>`.

Metadata
--------

Let's first talk about metadata. Whenever you do certain actions on the console
or via the API, Hasura records them in the :doc:`metadata catalogue <../../how-it-works/metadata-schema>`
which is a schema called ``hdb_catalog`` in your Postgres database. For example, if you track
a table, a new entry is created in the ``hdb_catalog.hdb_table`` table in Postgres.
Similarly, there are more tables in this schema to track relationships, event triggers,
functions and remote schemas.
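
For instance, after tracking a table you can see the corresponding catalogue
entry by querying ``hdb_catalog.hdb_table`` directly. The columns shown here are
illustrative; the exact layout of the catalogue can vary between Hasura
versions, so inspect the schema on your own instance:

.. code-block:: sql

   -- List all tables that Hasura is currently tracking
   SELECT table_schema, table_name
   FROM hdb_catalog.hdb_table;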

All information in this schema can be exported as a single JSON file. Export
options are available on the console, on the CLI and via the API. When this
file is imported into an existing or a new Hasura instance, it will clear out
the ``hdb_catalog`` schema on that instance and populate it again with the
imported data. Note that all the Postgres resources the metadata refers to
must already exist when the import happens; otherwise, Hasura will throw an
error.

To understand the format of the ``metadata.json`` file, refer to :ref:`metadata_file_format`.
For more details on how to import and export metadata, refer to :ref:`manage_hasura_metadata`.
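
As a sketch, metadata can be exported with the CLI or by calling the query API
directly. The endpoint below assumes a Hasura instance running locally on the
default port; adjust the URL (and add an admin secret header, if configured)
for your setup:

.. code-block:: bash

   # Using the Hasura CLI from the project directory
   hasura metadata export

   # Or via the API: returns the metadata as a single JSON object
   curl -X POST http://localhost:8080/v1/query \
        -H 'Content-Type: application/json' \
        -d '{"type": "export_metadata", "args": {}}'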

Migrations
----------

While metadata can be exported as a single file that represents the state of
Hasura, you might want more granular, step-by-step checkpoints on the
evolution of that state. You might also want to track Postgres schema changes
through Hasura's migration system.


Migrations are stored and applied as steps (or versions). A migration step may
contain changes to the Postgres schema as well as to the Hasura metadata. Each
version can store both an ``up`` migration (creating resources) and a ``down``
migration (deleting resources). For example, migration version ``1`` can
include, as the ``up`` migration, the SQL statements required to create a
table called ``profile`` along with the Hasura metadata action to track this
table, and, as the ``down`` migration, the SQL statements to drop this table
along with the metadata action to untrack it.

The migration versions can be automatically generated by the Hasura console or
written by hand. They are stored as YAML files in a directory called
``migrations``.

For more details on the format of these files, refer to
:ref:`migration_file_format`.
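
As a rough sketch, the pair of files for the ``profile`` example above might
look as follows. The file names, version number and column definitions here
are illustrative, not prescriptive; refer to :ref:`migration_file_format` for
the authoritative format:

.. code-block:: yaml

   # migrations/1_create_profile.up.yaml (version number is illustrative)
   - type: run_sql
     args:
       sql: |
         CREATE TABLE profile (id serial PRIMARY KEY, name text);
   - type: add_existing_table_or_view
     args:
       schema: public
       name: profile

.. code-block:: yaml

   # migrations/1_create_profile.down.yaml
   - type: untrack_table
     args:
       table:
         schema: public
         name: profile
   - type: run_sql
     args:
       sql: DROP TABLE profile;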

When someone executes ``migrate apply`` using the Hasura CLI, the CLI first
reads the migration files present in the designated directory. It then
contacts the Hasura server and gets the status of all migrations applied to
the server by reading the ``hdb_catalog.schema_migrations`` table. Each row in
this table denotes a migration version that is already applied on the server.
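
You can inspect this table yourself on the Postgres database backing your
instance (the exact set of columns may vary between Hasura versions):

.. code-block:: sql

   -- Versions already applied on this server
   SELECT version
   FROM hdb_catalog.schema_migrations
   ORDER BY version;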

By comparing these two sets of versions, the CLI determines which versions are
already applied and which are pending. It then applies the pending migrations
on the server by executing their actions against the database through the
Hasura metadata APIs. Whenever the ``apply`` command is used, all migrations
to be applied are executed in a single Postgres transaction (through a
``bulk`` API call). The advantage of this is that if there are any errors, all
actions are rolled back and the user can debug the error without worrying
about partial changes.
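
Conceptually, the CLI wraps the pending steps in a single ``bulk`` query so
that they succeed or fail together. A simplified sketch of such a payload,
with illustrative inner actions, might look like:

.. code-block:: json

   {
     "type": "bulk",
     "args": [
       { "type": "run_sql",
         "args": { "sql": "CREATE TABLE profile (id serial PRIMARY KEY);" } },
       { "type": "add_existing_table_or_view",
         "args": { "schema": "public", "name": "profile" } }
     ]
   }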

By default, the ``migrate apply`` command executes all the ``up`` migrations.
To roll back changes, you would need to execute the ``down`` migrations using
the ``--down`` flag on the CLI.
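
For example, assuming a Hasura project directory and a reachable server (the
step count passed to ``--down`` is just an example):

.. code-block:: bash

   # Apply all pending up migrations
   hasura migrate apply

   # Roll back the most recently applied version
   hasura migrate apply --down 1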

This guide provides an overall idea of how the system works. For more details
on how to actually use the system, refer to :ref:`postgres_schema_metadata`.