How Hasura Migrations work
===========================

.. contents:: Table of contents
  :backlinks: none
  :depth: 1
  :local:

This is an explainer on how the Hasura migration system works. To understand how to use the system, refer to :doc:`Migrations & Metadata <../index>`.

Metadata
--------

Let's first talk about metadata. Whenever you do certain actions on the console or via the API, Hasura records them in a schema called ``hdb_catalog`` in your Postgres database. For example, if you track a table, a new entry is created in the ``hdb_catalog.hdb_table`` table in Postgres. Similarly, there are other tables in this schema to track relationships, event triggers, functions and remote schemas. You can read more at :ref:`hasura_metadata_schema`.

All information in this schema can be exported as a single JSON file. Export options are available on the console, the CLI and via the API. When this file is imported onto an existing or new Hasura instance, the ``hdb_catalog`` schema on that instance is cleared out and populated again with the imported data. One thing to note is that all the Postgres resources the metadata refers to should already exist when the import happens; otherwise Hasura will throw an error.

To understand the format of the ``metadata.json`` file, refer to :ref:`metadata_file_format`. For more details on how to import and export metadata, refer to :ref:`manage_hasura_metadata`.

Migrations
----------

While metadata can be exported as a single file representing the state of Hasura, you might want more granular, step-by-step checkpoints on the evolution of that state. You might also want to track Postgres schema changes through Hasura's migration system.

Migrations are stored and applied as steps (or versions). A migration step (or version) may contain changes to the Postgres schema and the Hasura metadata. Each migration version stores an ``up`` migration (creating resources) and a ``down`` migration (deleting resources). For example, migration version ``1`` can include the SQL statements required to create a table called ``profile`` along with the Hasura metadata action to track this table as the ``up`` migration, and the SQL statements to drop this table along with the metadata action to untrack it as the ``down`` migration.

Migration versions can be generated automatically by the Hasura console or written by hand. They are stored as YAML files in a directory called ``migrations``. For more details on the format of these files, refer to :ref:`migration_file_format`.

When someone executes ``migrate apply`` using the Hasura CLI, the CLI first reads the migration files present in the designated directory. The CLI then contacts the Hasura server and gets the status of all migrations applied on the server by reading the ``hdb_catalog.schema_migrations`` table. Each row in this table denotes a migration version that is already applied on the server. By comparing these two sets of versions, the CLI derives which versions are already applied and which are not. The CLI then applies the pending migrations on the server. This is done by executing the actions against the database through the Hasura metadata APIs.

Whenever the ``apply`` command is used, all migrations that are to be applied are executed in a single Postgres transaction (through a ``bulk`` API call). The advantage of doing this is that if there are any errors, all actions are rolled back and the user can properly debug the error without worrying about partial changes.
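To make the ``profile`` example above concrete, here is a rough, hand-written sketch of what a version's ``up`` and ``down`` steps could look like. The file names and the exact action types and argument shapes are illustrative assumptions and depend on your CLI version; refer to :ref:`migration_file_format` for the authoritative format.

.. code-block:: yaml

   # migrations/1_create_profile.up.yaml (illustrative file name)
   # up migration: create the table, then track it in the Hasura metadata
   - type: run_sql
     args:
       sql: |
         CREATE TABLE profile (
           id serial PRIMARY KEY,
           name text NOT NULL
         );
   - type: track_table
     args:
       schema: public
       name: profile

.. code-block:: yaml

   # migrations/1_create_profile.down.yaml (illustrative file name)
   # down migration: untrack the table, then drop it
   - type: untrack_table
     args:
       table:
         schema: public
         name: profile
   - type: run_sql
     args:
       sql: |
         DROP TABLE profile;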
The default action of the ``migrate apply`` command is to execute all pending ``up`` migrations. To roll back changes, you need to execute the corresponding ``down`` migrations using the ``--down`` flag on the CLI.

This guide only gives an overall idea of how the system works. For more details on how to actually use the system, refer to :ref:`postgres_schema_metadata`.
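For example, a typical sequence of CLI commands might look like the following. This is only a sketch; connection flags such as ``--endpoint`` or ``--admin-secret`` are omitted and depend on your setup.

.. code-block:: bash

   # check which versions exist locally and which are already applied on the server
   hasura migrate status

   # apply all pending up migrations (the default behaviour)
   hasura migrate apply

   # roll back the most recently applied version by executing its down migration
   hasura migrate apply --down 1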