When applying a first migration against a schema, return the schema as
read from Postgres instead of an empty one.
This change allows migrations to run against schemas created before
pg-roll was installed.
This change ensures we also catch DROP statements for inclusion in the
migrations log.
DROP statements don't reach the `ddl_command_end` event trigger, so we
need to explicitly listen for them under `sql_drop`.
This change adds a new `sql` operation that allows defining an `up` SQL
statement to perform a migration on the schema.
An optional `down` field can be provided; it will be used when rolling
back (for instance, after a migration failure).
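A migration file using the new operation might look like this (the
migration name and statements are illustrative, following the layout of
the other examples in this document):

```json
{
  "name": "04_rename_products_name",
  "operations": [
    {
      "sql": {
        "up": "ALTER TABLE products RENAME COLUMN name TO title",
        "down": "ALTER TABLE products RENAME COLUMN title TO name"
      }
    }
  ]
}
```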
A new trigger is installed to capture DDL events coming from direct user
manipulations (not performed by pg-roll), so they are stored as
migrations and the resulting schema is known in all cases.
Change the representation of a schema in `pg-roll`'s state store from:
```go
type Schema struct {
	// Tables is a map of virtual table name -> table mapping
	Tables map[string]Table `json:"tables"`
}
```
to:
```go
type Schema struct {
	// Name is the name of the schema
	Name string `json:"name"`
	// Tables is a map of virtual table name -> table mapping
	Tables map[string]Table `json:"tables"`
}
```
i.e., store the schema's name.
This allows the signature of `Start` to be simplified, removing the
`schemaName` parameter; the name can be retrieved from the
`schema.Schema` struct that is already provided.
Change the module name to match its import path.
In order for `pg-roll` to be usable as a module, we need to be able to
import `"github.com/xataio/pg-roll/pkg/roll"` etc. from other modules.
Changing the module name to match its import path ensures that this is
possible.
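With this change, the first line of `go.mod` reads (sketch; the Go
version directive is illustrative):

```
module github.com/xataio/pg-roll

go 1.20
```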
Add information about indexes on a table to `pg-roll`'s internal state
storage.
For each table, store an additional JSON object mapping each index name
on the table to details of the index (initially just its name).
An example of the resulting JSON is:
```json
{
  "tables": {
    "fruits": {
      "oid": "16497",
      "name": "fruits",
      "columns": {
        "id": {
          "name": "id",
          "type": "integer",
          "comment": null,
          "default": "nextval('_pgroll_new_fruits_id_seq'::regclass)",
          "nullable": false
        },
        "name": {
          "name": "name",
          "type": "varchar(255)",
          "comment": null,
          "default": null,
          "nullable": false
        }
      },
      "comment": null,
      "indexes": {
        "_pgroll_idx_fruits_name": {
          "name": "_pgroll_idx_fruits_name"
        },
        "_pgroll_new_fruits_pkey": {
          "name": "_pgroll_new_fruits_pkey"
        },
        "_pgroll_new_fruits_name_key": {
          "name": "_pgroll_new_fruits_name_key"
        }
      }
    }
  }
}
```
Also add fields to the `Schema` model structs to allow the new `indexes`
field to be unmarshalled.
Add a new field `Up` to **add column** migrations:
```json
{
  "name": "03_add_column_to_products",
  "operations": [
    {
      "add_column": {
        "table": "products",
        "up": "UPPER(name)",
        "column": {
          "name": "description",
          "type": "varchar(255)",
          "nullable": true
        }
      }
    }
  ]
}
```
The SQL specified by the `up` field will be run whenever a row is
inserted into the underlying table while the session's `search_path` is
not set to the latest version of the schema.
The `up` SQL snippet can refer to existing columns in the table by name
(as in the above example, where the `description` field is set to
`UPPER(name)`).
Add a `status` command to show the status of each schema that `pg-roll`
knows about (i.e., those schemas that have had at least one migration
run in them).
`go run . status`
**Example output**:
```json
[
  {
    "Schema": "public",
    "Version": "01_create_tables",
    "Status": "In Progress"
  }
]
```
or:
```json
[
  {
    "Schema": "public",
    "Version": "01_create_tables",
    "Status": "Complete"
  }
]
```
In the future the JSON output of the command should be behind a
`-o json` switch, with human-readable output as the default.
This change will retrieve and store the resulting schema after a
migration is completed. This schema will be used as the base to execute
the next migration, making it possible to create views that are aware of
the full schema, and not only the one created by the last migration.
We use a function to retrieve the schema directly from Postgres instead
of building it from the migration files. This allows for more features
in the future, like doing an initial sync on top of the existing schema
or automatically detecting and storing out of band migrations from
triggers.
Example JSON stored schema:
```json
{
  "tables": {
    "bills": {
      "oid": "18272",
      "name": "bills",
      "columns": {
        "id": {
          "name": "id",
          "type": "integer",
          "comment": null,
          "default": null,
          "nullable": false
        },
        "date": {
          "name": "date",
          "type": "time with time zone",
          "comment": null,
          "default": null,
          "nullable": false
        },
        "quantity": {
          "name": "quantity",
          "type": "integer",
          "comment": null,
          "default": null,
          "nullable": false
        }
      },
      "comment": null
    },
    "products": {
      "oid": "18286",
      "name": "products",
      "columns": {
        "id": {
          "name": "id",
          "type": "integer",
          "comment": null,
          "default": "nextval('_pgroll_new_products_id_seq'::regclass)",
          "nullable": false
        },
        "name": {
          "name": "name",
          "type": "varchar(255)",
          "comment": null,
          "default": null,
          "nullable": false
        },
        "price": {
          "name": "price",
          "type": "numeric(10,2)",
          "comment": null,
          "default": null,
          "nullable": false
        }
      },
      "comment": null
    },
    "customers": {
      "oid": "18263",
      "name": "customers",
      "columns": {
        "id": {
          "name": "id",
          "type": "integer",
          "comment": null,
          "default": null,
          "nullable": false
        },
        "name": {
          "name": "name",
          "type": "varchar(255)",
          "comment": null,
          "default": null,
          "nullable": false
        },
        "credit_card": {
          "name": "credit_card",
          "type": "text",
          "comment": null,
          "default": null,
          "nullable": true
        }
      },
      "comment": null
    }
  }
}
```
After this change, I believe that the `create_table` operation is
feature complete and can be used for many sequential migrations.
Add a sentinel error `ErrNoActiveMigration` for the case where there is
no active migration. This improves the error strings presented to users
by not mentioning SQL errors.
**`pg-roll start` when there is a migration in progress:**
```
Error: a migration for schema "public" is already in progress
```
**`pg-roll rollback` when there is no migration in progress:**
```
Error: unable to get active migration: no active migration
```
**`pg-roll complete` when there is no active migration:**
```
Error: unable to get active migration: no active migration
```
This change introduces state handling by creating a dedicated `pgroll`
schema (name configurable). We store migrations there, as well as their
state, keeping useful information such as the migration definition (so
it isn't needed again for the `complete` step).
The schema includes constraints to guarantee that:
* Only one migration is active at a time
* Migration history is linear (every migration has a unique parent,
except the first one, whose parent is NULL)
* We know the current migration at all times
Some helper functions are included:
* `is_active_migration_period()` will return true if there is an active
migration.
* `latest_version()` will return the name of the latest version of the
schema.