Docs: Add migrations best practices

[DOCS-1820]: https://hasurahq.atlassian.net/browse/DOCS-1820?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/10819
GitOrigin-RevId: cd7ca518a3ef29e85c479fff72baf9dabcb4d8ab

---
description: Migrations best practices
keywords:
- hasura
- migrations
- github integration
- best practices
sidebar_label: Migrations Best Practices
sidebar_position: 13
---
# Migrations Best Practices
## Introduction
Effective management of migrations is crucial for maintaining the health and stability of your Hasura projects. This
guide outlines best practices for handling migrations, based on real-world experiences and common challenges encountered
in enterprise-scale Hasura ecosystems.
Migrations generated by the Hasura Console are suitable for general use but may not be optimized for performance under
all conditions. If you have high traffic or millions of records in your production database, you may want to manually
tweak auto-generated SQL to match best practices for your chosen database.
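As a sketch of the kind of tweak this can involve (the table and constraint names are illustrative), a generated
foreign-key migration on Postgres can be split so that existing rows are validated in a separate, less disruptive step:
```sql
-- As generated: validates every existing row while holding a long-running lock
ALTER TABLE orders
  ADD CONSTRAINT orders_user_id_fkey FOREIGN KEY (user_id) REFERENCES users (id);

-- Hand-tuned: add the constraint without scanning existing rows, then validate
-- separately; VALIDATE CONSTRAINT allows concurrent reads and writes
ALTER TABLE orders
  ADD CONSTRAINT orders_user_id_fkey FOREIGN KEY (user_id) REFERENCES users (id) NOT VALID;
ALTER TABLE orders VALIDATE CONSTRAINT orders_user_id_fkey;
```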
## Recommended patterns
### Name environment variables consistently
Consistent naming of environment variables is essential to avoid configuration issues between different environments.
Define a standard naming convention for your environment variables and apply it uniformly across all projects. This
practice helps ensure that the correct databases are connected and reduces the risk of mismatches between source and
target projects. Document your naming conventions clearly and include them in your project setup guides to maintain
consistency across your team.
**Example:**
```yaml
# .env file for development
DATABASE_URL=postgres://user:password@localhost:5432/dev_db
# .env file for production
DATABASE_URL=postgres://user:password@prod-db.example.com:5432/prod_db
```
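With a single, consistently named variable, the same Hasura metadata can be promoted unchanged between environments. As
a sketch, a Postgres source in `metadata/databases/databases.yaml` can reference the variable by name (the source name
`default` is illustrative):
```yaml
- name: default
  kind: postgres
  configuration:
    connection_info:
      database_url:
        from_env: DATABASE_URL
```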
### Track applied migrations
Keeping track of applied migrations is vital to prevent conflicts and ensure that the state recorded for your project
accurately reflects what has been applied to each database.
Implement a robust migration tracking system that records the status of each migration. Use a version control system to
manage migration files, and ensure that all changes are tracked and reviewed. Regularly synchronize your project's
migration status with the database to catch any discrepancies early. This proactive approach helps avoid issues like
"relation `<some-object>` already exists," ensuring smooth deployment processes.
To enhance the organization and clarity of your migrations, give each migration folder a name relevant to the feature it
addresses. This practice aids in identifying the purpose of each migration, making it easier to manage and review
changes.
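For example, a project's migrations directory might look like the following (assuming a Postgres source named
`default`; folder names are illustrative). The timestamp prefix is generated by the Hasura CLI, and the suffix
describes the feature:
```text
migrations/default/
├── 1716470000000_add_orders_table/
│   ├── up.sql
│   └── down.sql
└── 1716470100000_add_order_status_column/
    ├── up.sql
    └── down.sql
```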
Additionally, consider integrating the application of migrations in different environments into your CI/CD process.
Automating this step ensures consistency across development, staging, and production environments, reducing the risk of
manual errors and streamlining your deployment workflow.
**Example:**
```bash
# Apply and track migrations
hasura migrate apply
hasura metadata apply
# Check the status of your migrations
hasura migrate status
```
:::info Note
Hasura Cloud users can leverage the [GitHub integration](/cloud-ci-cd/github-integration.mdx) to automate deployments.
:::
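For self-hosted projects, the same commands can run as part of a CI job. The sketch below uses GitHub Actions and
assumes the Hasura project (`config.yaml`) lives at the repository root; the workflow file name, secret names, and the
single `default` database are illustrative:
```yaml
# .github/workflows/hasura-deploy.yml (sketch)
name: Apply Hasura migrations and metadata
on:
  push:
    branches: [main]

jobs:
  migrate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Hasura CLI
        run: curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | bash

      - name: Apply migrations and metadata
        env:
          # The Hasura CLI reads these environment variables automatically
          HASURA_GRAPHQL_ENDPOINT: ${{ secrets.HASURA_GRAPHQL_ENDPOINT }}
          HASURA_GRAPHQL_ADMIN_SECRET: ${{ secrets.HASURA_GRAPHQL_ADMIN_SECRET }}
        run: |
          hasura migrate apply --all-databases
          hasura metadata apply
          hasura migrate status --database-name default
```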
### Optimize long-running processes
Long-running processes can disrupt your migration workflow and lead to issues.
To minimize the impact of long-running processes, plan and optimize your migrations carefully. Break down large
migrations into smaller, manageable steps that can be executed without requiring extensive locks.
For example, instead of adding a column directly to a large, heavily used production table, consider creating a new
table with the required columns, migrating the data in batches, and then swapping the tables. This approach reduces the
risk of locking issues and improves overall migration performance.
**Example:**
1. Create a new table with the additional columns:
```sql
-- Note: CREATE TABLE ... AS ... WITH NO DATA copies column definitions only;
-- recreate any indexes, constraints, and defaults on new_table separately.
CREATE TABLE new_table AS TABLE old_table WITH NO DATA;
ALTER TABLE new_table ADD COLUMN new_column <data_type>;
```
2. Migrate data in batches, advancing the key range on each pass (a runnable batching sketch follows these steps):
```sql
INSERT INTO new_table (<columns>)
SELECT <columns>
FROM old_table
WHERE <key_column> > <last_copied_key>
ORDER BY <key_column>
LIMIT <batch_size>;
```
3. Swap the tables once the copy is complete, keeping the original as a backup until the swap is verified:
```sql
BEGIN;
ALTER TABLE old_table RENAME TO old_table_backup;
ALTER TABLE new_table RENAME TO old_table;
COMMIT;
-- Drop the backup only after verifying the application against the swapped table
DROP TABLE old_table_backup;
```
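The batch in step 2 must be repeated until every row has been copied. Below is a minimal batching sketch using bash and
`psql`, assuming the `DATABASE_URL` from your environment, an integer primary key `id`, and illustrative column names;
rows copied this way receive a `NULL` for `new_column`, which can be backfilled afterwards:
```bash
#!/usr/bin/env bash
set -euo pipefail

BATCH_SIZE=10000   # tune to your workload
LAST_ID=0

while true; do
  # Copy the next batch and return the highest id copied (0 when nothing was copied)
  NEXT_ID=$(psql "$DATABASE_URL" -At -c "
    WITH batch AS (
      INSERT INTO new_table (id, customer_id, created_at)
      SELECT id, customer_id, created_at
      FROM old_table
      WHERE id > $LAST_ID
      ORDER BY id
      LIMIT $BATCH_SIZE
      RETURNING id
    )
    SELECT coalesce(max(id), 0) FROM batch;")

  if [ "$NEXT_ID" -eq 0 ]; then
    break
  fi
  LAST_ID=$NEXT_ID
done
```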