add specific-schema feature, format, version bump, readme update

This commit is contained in:
Robert Lechte 2018-03-19 17:59:10 +11:00
parent 6bdce9bf15
commit c9c238bb49
23 changed files with 213 additions and 190 deletions

View File

@@ -29,9 +29,11 @@ docs:
docsserve:
cd docs && mkdocs serve
fmt:
black .
lint:
flake8 migra
flake8 tests
flake8 .
tidy: clean lint

View File

@@ -1,5 +1,20 @@
# migra: PostgreSQL migrations made almost painless
`migra` is a schema diff tool for PostgreSQL. Use it to compare database schemas or autogenerate migration scripts. Use it in your Python scripts, or from the command line like this:
$ migra postgresql:///a postgresql:///b
alter table "public"."products" add column newcolumn text;
alter table "public"."products" add constraint "x" CHECK ((price > (0)::numeric));
`migra` magically figures out all the statements required to get from A to B.
You can also restrict the diff to a single schema with `--schema myschema`.
## Already use `migra`? [Let us know how you're using it and what features would make it more useful](https://github.com/djrobstep/migra/issues/25).
## Folks, schemas are good
Schema migrations are without doubt the most cumbersome and annoying part of working with SQL databases. So much so that some people think that schemas themselves are bad!
But schemas are actually good. Enforcing data consistency and structure is a good thing. It's the migration tooling that is bad, because it's harder to use than it should be. `migra` is an attempt to change that, and make migrations easy, safe, and reliable instead of something to dread.
@@ -8,7 +23,7 @@ But schemas are actually good. Enforcing data consistency and structure is a goo
## Full documentation
Official documentation is at [migra.djrobstep.com](https://migra.djrobstep.com).
Documentation is at [migra.djrobstep.com](https://migra.djrobstep.com).
## How it Works
@@ -16,18 +31,21 @@ Think of `migra` as a diff tool for schemas. Suppose database A and database B h
This includes changes to tables, views, functions, indexes, constraints, enums, sequences, and installed extensions.
You can use `migra` as a library to build your own migration scripts, tools, etc. Installing migra also installs the `migra` command, so you can use it as follows:
You can also use `migra` as a library to build your own migration scripts, tools, and custom migration flows.
$ migra postgresql:///a postgresql:///b
alter table "public"."products" add column newcolumn text;
With migra, a typical database migration is a simple three step process.
alter table "public"."products" add constraint "x" CHECK ((price > (0)::numeric));
1. Autogenerate:
If *b* is the target schema, then a new column and constraint need to be applied to *a* to make it match *b*'s schema. Once we've reviewed the autogenerated SQL and we're happy with it, we can apply these changes as easily as:
$ migra --unsafe postgresql:///a postgresql:///b > migration_script.sql
$ migra --unsafe postgresql:///a postgresql:///b > migration_script.sql
# Then after careful review (obviously)...
$ psql a --single-transaction -f migration_script.sql
2. Review (and tweak if necessary).
# If you need to move data about during your script, you can add those changes to your script.
3. Apply:
$ psql a --single-transaction -f migration_script.sql
Migration complete!
@@ -35,6 +53,8 @@ Migration complete!
**Migrations can never be fully automatic**. As noted above **ALWAYS REVIEW MIGRATION SCRIPTS CAREFULLY, ESPECIALLY WHEN DROPPING TABLES IS INVOLVED**.
Migra manages schema changes **but not your data**. If you need to move data around, as part of a migration, you'll need to handle that by editing the script or doing it separately before/after the schema changes.
Best practice is to run your migrations against a copy of your production database first. This helps verify correctness and spot any performance issues before they cause interruptions and downtime on your production database.
`migra` will deliberately throw an error if any generated statements feature the word "drop". This safety feature is by no means idiot-proof, but might prevent a few obvious blunders.
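The safety check described above is simple word matching, not SQL parsing — which is why it is "by no means idiot-proof". The real check lives inside migra's statements module; this standalone function is only an illustration of the idea:

```python
import re

def contains_drop(statement):
    # Illustrative sketch of the safety idea: flag any statement containing
    # the word "drop" (whole word, case-insensitive). Not migra's actual code.
    return bool(re.search(r'\bdrop\b', statement, re.IGNORECASE))

print(contains_drop('DROP TABLE products;'))                                # True
print(contains_drop('alter table "public"."products" add column c text;'))  # False
```

Note that whole-word matching is what keeps a harmless column name like `dropped_at` from triggering a false positive, while still catching `DROP TABLE` in any casing.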
@@ -61,8 +81,7 @@ As you can see, it's pretty simple (`S` here is a context manager that creates a
Here the code just opens connections to both databases for the Migration object to analyse. `m.add_all_changes()` generates the SQL statements for the changes required, and adds to the migration object's list of pending changes. The necessary SQL is now available as a property.
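The flow described above can be sketched without a live database. The class below is a hypothetical stand-in, not migra's real `Migration` (which needs database connections): pending changes accumulate as a list of SQL strings, and the `.sql` property renders them as one script.

```python
class MigrationSketch:
    # Hypothetical stand-in for migra's Migration object, for illustration:
    # changes accumulate as SQL statement strings, then render as one script.
    def __init__(self):
        self.statements = []

    def add(self, statements):
        # Mirrors the idea of add_all_changes() appending pending changes.
        self.statements += list(statements)

    @property
    def sql(self):
        # The necessary SQL is available as a property, as described above.
        return '\n\n'.join(self.statements) + '\n\n' if self.statements else ''

m = MigrationSketch()
m.add(['alter table "public"."products" add column newcolumn text;'])
print(m.sql)
```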
Features and Limitations
------------------------
## Features and Limitations
`migra` plays nicely with extensions. Schema contents belonging to extensions will be ignored and left to the extension to manage.
@@ -70,13 +89,16 @@ Features and Limitations
Only SQL/PLPGSQL functions are confirmed to work so far. `migra` ignores functions that use other languages.
Installation
------------
## Installation
Assuming you have `pip <https://pip.pypa.io>`_ installed, all you need to do is install as follows:
Assuming you have [pip](https://pip.pypa.io), all you need to do is install as follows:
$ pip install migra
If you don't have psycopg2 (the PostgreSQL driver) installed yet, you can install this at the same time with:
If you don't have psycopg2-binary (the PostgreSQL driver) installed yet, you can install this at the same time with:
$ pip install migra[pg]
## Contributing
Contributing is easy. [Jump into the issues](https://github.com/djrobstep/migra/issues), find a feature or fix you'd like to work on, and get involved. Or create a new issue and suggest something completely different. Beginner-friendly issues are tagged "good first issue", and if you're unsure about any aspect of the process, just ask.

View File

@@ -6,8 +6,5 @@ from .migra import Migration
from .command import do_command
__all__ = [
'Migration',
'Changes',
'Statements',
'UnsafeMigrationException',
'do_command']
'Migration', 'Changes', 'Statements', 'UnsafeMigrationException', 'do_command'
]

View File

@@ -13,44 +13,36 @@ THINGS = [
'functions',
'views',
'indexes',
'extensions'
'extensions',
]
PK = 'PRIMARY KEY'
def statements_for_changes(
things_from,
things_target,
creations_only=False,
drops_only=False,
modifications=True,
dependency_ordering=False,
add_dependents_for_modified=False):
added, removed, modified, unmodified = \
differences(things_from, things_target)
things_from,
things_target,
creations_only=False,
drops_only=False,
modifications=True,
dependency_ordering=False,
add_dependents_for_modified=False,
):
added, removed, modified, unmodified = differences(things_from, things_target)
if add_dependents_for_modified:
for k, m in list(modified.items()):
for d in m.dependents_all:
if d in unmodified:
modified[d] = unmodified.pop(d)
modified = od(sorted(modified.items()))
statements = Statements()
if not creations_only:
pending_drops = set(removed)
if modifications:
pending_drops |= set(modified)
else:
pending_drops = set()
if not drops_only:
pending_creations = set(added)
if modifications:
pending_creations |= set(modified)
else:
@@ -59,30 +51,29 @@ def statements_for_changes(
def has_remaining_dependents(v, pending_drops):
if not dependency_ordering:
return False
return bool(set(v.dependents) & pending_drops)
def has_uncreated_dependencies(v, pending_creations):
if not dependency_ordering:
return False
return bool(set(v.dependent_on) & pending_creations)
while True:
before = pending_drops | pending_creations
if not creations_only:
for k, v in removed.items():
if not has_remaining_dependents(v, pending_drops):
if k in pending_drops:
statements.append(v.drop_statement)
pending_drops.remove(k)
if not drops_only:
for k, v in added.items():
if not has_uncreated_dependencies(v, pending_creations):
if k in pending_creations:
statements.append(v.create_statement)
pending_creations.remove(k)
if modifications:
for k, v in modified.items():
if not creations_only:
@@ -90,15 +81,12 @@ def statements_for_changes(
if k in pending_drops:
statements.append(v.drop_statement)
pending_drops.remove(k)
if not drops_only:
if not has_uncreated_dependencies(v, pending_creations):
if k in pending_creations:
statements.append(v.create_statement)
pending_creations.remove(k)
after = pending_drops | pending_creations
if not after:
break
@@ -108,78 +96,53 @@ def statements_for_changes(
return statements
def get_enum_modifications(
tables_from,
tables_target,
enums_from,
enums_target):
def get_enum_modifications(tables_from, tables_target, enums_from, enums_target):
_, _, e_modified, _ = differences(enums_from, enums_target)
_, _, t_modified, _ = differences(tables_from, tables_target)
pre = Statements()
recreate = Statements()
post = Statements()
enums_to_change = e_modified
for t, v in t_modified.items():
t_before = tables_from[t]
_, _, c_modified, _ = differences(t_before.columns, v.columns)
for k, c in c_modified.items():
before = t_before.columns[k]
if c.is_enum == before.is_enum and c.dbtypestr == before.dbtypestr and c.enum != before.enum:
pre.append(before.change_enum_to_string_statement(t))
post.append(before.change_string_to_enum_statement(t))
for e in enums_to_change.values():
recreate.append(e.drop_statement)
recreate.append(e.create_statement)
return pre + recreate + post
def get_schema_changes(
tables_from,
tables_target,
enums_from,
enums_target):
def get_schema_changes(tables_from, tables_target, enums_from, enums_target):
added, removed, modified, _ = differences(tables_from, tables_target)
statements = Statements()
for t, v in removed.items():
statements.append(v.drop_statement)
for t, v in added.items():
statements.append(v.create_statement)
statements += get_enum_modifications(tables_from, tables_target, enums_from, enums_target)
statements += get_enum_modifications(
tables_from, tables_target, enums_from, enums_target
)
for t, v in modified.items():
before = tables_from[t]
c_added, c_removed, c_modified, _ = \
differences(before.columns, v.columns)
c_added, c_removed, c_modified, _ = differences(before.columns, v.columns)
for k, c in c_removed.items():
alter = v.alter_table_statement(c.drop_column_clause)
statements.append(alter)
for k, c in c_added.items():
alter = v.alter_table_statement(c.add_column_clause)
statements.append(alter)
for k, c in c_modified.items():
statements += c.alter_table_statements(before.columns[k], t)
return statements
class Changes(object):
def __init__(self, i_from, i_target):
self.i_from = i_from
self.i_target = i_target
@@ -191,45 +154,42 @@ class Changes(object):
self.i_from.tables,
self.i_target.tables,
self.i_from.enums,
self.i_target.enums)
self.i_target.enums,
)
elif name == 'non_pk_constraints':
a = self.i_from.constraints.items()
b = self.i_target.constraints.items()
a_od = od((k, v) for k, v in a if v.constraint_type != PK)
b_od = od((k, v) for k, v in b if v.constraint_type != PK)
return partial(statements_for_changes, a_od, b_od)
elif name == 'pk_constraints':
a = self.i_from.constraints.items()
b = self.i_target.constraints.items()
a_od = od((k, v) for k, v in a if v.constraint_type == PK)
b_od = od((k, v) for k, v in b if v.constraint_type == PK)
return partial(statements_for_changes, a_od, b_od)
elif name == 'views_and_functions':
av = self.i_from.views.items()
bv = self.i_target.views.items()
af = self.i_from.functions.items()
bf = self.i_target.functions.items()
avf = list(av) + list(af)
bvf = list(bv) + list(bf)
avf = od(sorted(avf))
bvf = od(sorted(bvf))
return partial(
statements_for_changes,
avf,
bvf,
add_dependents_for_modified=True)
statements_for_changes, avf, bvf, add_dependents_for_modified=True
)
elif name in THINGS:
return partial(
statements_for_changes,
getattr(self.i_from, name),
getattr(self.i_target, name))
getattr(self.i_target, name),
)
else:
raise AttributeError(name)

View File

@@ -13,57 +13,67 @@ from .statements import UnsafeMigrationException
def arg_context(x):
if x == 'EMPTY':
yield None
else:
with S(x) as s:
yield s
def parse_args(args):
parser = argparse.ArgumentParser(
description='Generate a database migration.')
parser = argparse.ArgumentParser(description='Generate a database migration.')
parser.add_argument(
'--unsafe',
dest='unsafe',
action='store_true',
help='Prevent migra from erroring upon generation of drop statements.')
help='Prevent migra from erroring upon generation of drop statements.',
)
parser.add_argument(
'dburl_from',
help='The database you want to migrate.')
'--schema',
dest='schema',
default=None,
help='Restrict output to statements for a particular schema',
)
parser.add_argument(
'dburl_target',
help='The database you want to use as the target.')
'--create-extensions-only',
dest='create_extensions_only',
action='store_true',
default=False,
help='Only output "create extension..." statements, nothing else.',
)
parser.add_argument('dburl_from', help='The database you want to migrate.')
parser.add_argument(
'dburl_target', help='The database you want to use as the target.'
)
return parser.parse_args(args)
def run(args, out=None, err=None):
schema = args.schema
if not out:
out = sys.stdout # pragma: no cover
if not err:
err = sys.stderr # pragma: no cover
with \
arg_context(args.dburl_from) as ac0, \
arg_context(args.dburl_target) as ac1:
m = Migration(ac0, ac1)
with arg_context(args.dburl_from) as ac0, arg_context(args.dburl_target) as ac1:
m = Migration(ac0, ac1, schema=schema)
if args.unsafe:
m.set_safety(False)
m.add_all_changes()
if args.create_extensions_only:
m.add_extension_changes(drops=False)
else:
m.add_all_changes()
try:
if m.statements:
print(m.sql, file=out)
except UnsafeMigrationException:
print('-- ERROR: destructive statements generated. Use the --unsafe flag to suppress this error.', file=err)
print(
'-- ERROR: destructive statements generated. Use the --unsafe flag to suppress this error.',
file=err,
)
return 3
if not m.statements:
return 0
else:
return 2

View File

@@ -10,29 +10,29 @@ class Migration(object):
"""
The main class of migra
"""
def __init__(self, x_from, x_target):
def __init__(self, x_from, x_target, schema=None):
self.statements = Statements()
self.changes = Changes(None, None)
self.schema = schema
if isinstance(x_from, DBInspector):
self.changes.i_from = x_from
else:
self.changes.i_from = get_inspector(x_from)
self.changes.i_from = get_inspector(x_from, schema=schema)
if x_from:
self.s_from = x_from
if isinstance(x_target, DBInspector):
self.changes.i_target = x_target
else:
self.changes.i_target = get_inspector(x_target)
self.changes.i_target = get_inspector(x_target, schema=schema)
if x_target:
self.s_target = x_target
def inspect_from(self):
self.changes.i_from = get_inspector(self.s_from)
self.changes.i_from = get_inspector(self.s_from, schema=self.schema)
def inspect_target(self):
self.changes.i_target = get_inspector(self.s_target)
self.changes.i_target = get_inspector(self.s_target, schema=self.schema)
def clear(self):
self.statements = Statements()
@@ -40,8 +40,7 @@ class Migration(object):
def apply(self):
for stmt in self.statements:
raw_execute(self.s_from, stmt)
self.changes.i_from = get_inspector(self.s_from)
self.changes.i_from = get_inspector(self.s_from, schema=self.schema)
safety_on = self.statements.safe
self.clear()
self.set_safety(safety_on)
@@ -55,37 +54,36 @@ class Migration(object):
def set_safety(self, safety_on):
self.statements.safe = safety_on
def add_extension_changes(self, creates=True, drops=True):
if creates:
self.add(self.changes.extensions(creations_only=True))
if drops:
self.add(self.changes.extensions(drops_only=True))
def add_all_changes(self):
self.add(self.changes.schemas(creations_only=True))
self.add(self.changes.extensions(creations_only=True))
self.add(self.changes.enums(creations_only=True, modifications=False))
self.add(self.changes.sequences(creations_only=True))
self.add(self.changes.non_pk_constraints(drops_only=True))
self.add(self.changes.pk_constraints(drops_only=True))
self.add(self.changes.indexes(drops_only=True))
self.add(self.changes.views_and_functions(drops_only=True, dependency_ordering=True))
self.add(
self.changes.views_and_functions(drops_only=True, dependency_ordering=True)
)
self.add(self.changes.schema())
v_and_f_changes = self.changes.views_and_functions(creations_only=True, dependency_ordering=True)
v_and_f_changes = self.changes.views_and_functions(
creations_only=True, dependency_ordering=True
)
if v_and_f_changes:
self.add([
'set check_function_bodies = off;'
])
self.add(['set check_function_bodies = off;'])
self.add(v_and_f_changes)
self.add(self.changes.sequences(drops_only=True))
self.add(self.changes.enums(drops_only=True, modifications=False))
self.add(self.changes.extensions(drops_only=True))
self.add(self.changes.indexes(creations_only=True))
self.add(self.changes.pk_constraints(creations_only=True))
self.add(self.changes.non_pk_constraints(creations_only=True))
self.add(self.changes.schemas(drops_only=True))
@property

View File

@@ -8,6 +6,7 @@ def check_for_drop(s):
class Statements(list):
def __init__(self, *args, **kwargs):
self.safe = True
super(Statements, self).__init__(*args, **kwargs)
@@ -18,11 +19,14 @@ class Statements(list):
self.raise_if_unsafe()
if not self:
return ''
return '\n\n'.join(self) + '\n\n'
def raise_if_unsafe(self):
if any(check_for_drop(s) for s in self):
raise UnsafeMigrationException('unsafe/destructive change being autogenerated, refusing to carry on further')
raise UnsafeMigrationException(
'unsafe/destructive change being autogenerated, refusing to carry on further'
)
def __add__(self, other):
self += list(other)

View File

@@ -4,17 +4,13 @@ from collections import OrderedDict as od
def differences(a, b, add_dependencies_for_modifications=True):
a_keys = set(a.keys())
b_keys = set(b.keys())
keys_added = set(b_keys) - set(a_keys)
keys_removed = set(a_keys) - set(b_keys)
keys_common = set(a_keys) & set(b_keys)
added = od((k, b[k]) for k in sorted(keys_added))
removed = od((k, a[k]) for k in sorted(keys_removed))
modified = od((k, b[k]) for k in sorted(keys_common) if a[k] != b[k])
unmodified = od((k, b[k]) for k in sorted(keys_common) if a[k] == b[k])
return added, removed, modified, unmodified
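To make the helper's behavior concrete, here is a standalone copy with a small example (the dict values stand in for inspected schema objects; in migra they would be table/view/function objects with equality defined):

```python
from collections import OrderedDict as od

def differences(a, b):
    # Standalone copy of the helper above, simplified for illustration:
    # partition keys into added / removed / modified / unmodified.
    a_keys, b_keys = set(a), set(b)
    common = a_keys & b_keys
    added = od((k, b[k]) for k in sorted(b_keys - a_keys))
    removed = od((k, a[k]) for k in sorted(a_keys - b_keys))
    modified = od((k, b[k]) for k in sorted(common) if a[k] != b[k])
    unmodified = od((k, b[k]) for k in sorted(common) if a[k] == b[k])
    return added, removed, modified, unmodified

a = {'products': 'v1', 'orders': 'v1'}
b = {'products': 'v2', 'customers': 'v1'}
added, removed, modified, unmodified = differences(a, b)
print(list(added), list(removed), list(modified))
# ['customers'] ['orders'] ['products']
```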

View File

@@ -11,7 +11,7 @@ pylint
autopep8
flake8
psycopg2
psycopg2-binary
tox
yapf
@@ -22,3 +22,4 @@ twine
mkdocs
doc2md
black

View File

@@ -1,25 +1,19 @@
#!/usr/bin/env python
import io
from setuptools import setup, find_packages
readme = io.open('README.md').read()
setup(
name='migra',
version='1.0.1519192543',
version='1.0.1521442627',
url='https://github.com/djrobstep/migra',
description='Like diff but for PostgreSQL schemas',
long_description=readme,
long_description_content_type='text/markdown',
author='Robert Lechte',
author_email='robertlechte@gmail.com',
install_requires=[
'sqlbag',
'six',
'schemainspect'
],
install_requires=['sqlbag', 'six', 'schemainspect'],
zip_safe=False,
packages=find_packages(),
classifiers=[
@@ -32,10 +26,6 @@ setup(
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: Implementation :: CPython',
],
entry_points={
'console_scripts': [
'migra = migra:do_command',
],
},
extras_require={'pg': ['psycopg2-binary']}
entry_points={'console_scripts': ['migra = migra:do_command']},
extras_require={'pg': ['psycopg2-binary']},
)

View File

@@ -1,8 +1,8 @@
create schema if not exists "evenbetterschema";
create extension "citext" with schema "public" version '1.4';
create extension if not exists "citext" with schema "public" version '1.4';
create extension "hstore" with schema "public" version '1.4';
create extension if not exists "hstore" with schema "public" version '1.4';
create type "public"."bug_status" as enum ('new', 'open', 'closed');

View File

@@ -1,8 +1,8 @@
create schema if not exists "evenbetterschema";
create extension "citext" with schema "public" version '1.4';
create extension if not exists "citext" with schema "public" version '1.4';
create extension "hstore" with schema "public" version '1.4';
create extension if not exists "hstore" with schema "public" version '1.4';
create type "public"."bug_status" as enum ('new', 'open', 'closed');

View File

@@ -0,0 +1,13 @@
create extension hstore;
create schema goodschema;
create table goodschema.t(id uuid, value text);
create table t(id uuid, value text);
CREATE TYPE goodschema.sdfasdfasdf AS ENUM ('not shipped', 'shipped', 'delivered');
create index on goodschema.t(id);
create view goodschema.v as select 1;

View File

@@ -0,0 +1,9 @@
create extension citext;
create schema goodschema;
CREATE TYPE goodschema.sdfasdfasdf AS ENUM ('not shipped', 'shipped', 'delivered', 'not delivered');
create table goodschema.t(id uuid, name text, value text);
create view goodschema.v as select 2;

View File

@@ -0,0 +1,13 @@
drop index if exists "goodschema"."t_id_idx";
drop view if exists "goodschema"."v" cascade;
drop type "goodschema"."sdfasdfasdf";
create type "goodschema"."sdfasdfasdf" as enum ('not shipped', 'shipped', 'delivered', 'not delivered');
alter table "goodschema"."t" add column "name" text;
set check_function_bodies = off;
create view "goodschema"."v" as SELECT 2;

View File

@@ -0,0 +1,13 @@
create extension hstore;
create schema goodschema;
create table goodschema.t(id uuid, value text);
create table t(id uuid, value text);
CREATE TYPE goodschema.sdfasdfasdf AS ENUM ('not shipped', 'shipped', 'delivered');
create index on goodschema.t(id);
create view goodschema.v as select 1;

View File

@@ -0,0 +1,9 @@
create extension citext;
create schema goodschema;
CREATE TYPE goodschema.sdfasdfasdf AS ENUM ('not shipped', 'shipped', 'delivered', 'not delivered');
create table goodschema.t(id uuid, name text, value text);
create view goodschema.v as select 2;

View File

@@ -0,0 +1 @@
create extension if not exists "citext" with schema "public" version '1.4';

View File

@@ -15,7 +15,6 @@ SQL = """select 1;
select 2;
"""
DROP = 'drop table x;'
@@ -24,12 +23,9 @@ def test_statements():
s2 = Statements(['select 2;'])
s3 = s1 + s2
assert type(s1) == type(s2) == type(s3)
s3 = s3 + Statements([DROP])
with raises(UnsafeMigrationException):
assert s3.sql == SQL
s3.safe = False
SQL_WITH_DROP = SQL + DROP + '\n\n'
assert s3.sql == SQL_WITH_DROP
@@ -42,81 +38,70 @@ def outs():
def test_with_fixtures():
for FIXTURE_NAME in ['dependencies', 'everything']:
do_fixture_test(FIXTURE_NAME)
for FIXTURE_NAME in ['singleschema']:
do_fixture_test(FIXTURE_NAME, schema='goodschema')
for FIXTURE_NAME in ['singleschema_ext']:
do_fixture_test(FIXTURE_NAME, create_extensions_only=True)
def do_fixture_test(fixture_name):
def do_fixture_test(fixture_name, schema=None, create_extensions_only=False):
flags = ['--unsafe']
if schema:
flags += ['--schema', schema]
if create_extensions_only:
flags += ['--create-extensions-only']
fixture_path = 'tests/FIXTURES/{}/'.format(fixture_name)
EXPECTED = io.open(fixture_path + 'expected.sql').read().strip()
with temporary_database() as d0, temporary_database() as d1:
with S(d0) as s0, S(d1) as s1:
load_sql_from_file(s0, fixture_path + 'a.sql')
load_sql_from_file(s1, fixture_path + 'b.sql')
args = parse_args([d0, d1])
assert not args.unsafe
assert args.schema is None
out, err = outs()
assert run(args, out=out, err=err) == 3
assert out.getvalue() == ''
assert err.getvalue() == '-- ERROR: destructive statements generated. Use the --unsafe flag to suppress this error.\n'
args = parse_args(['--unsafe', d0, d1])
args = parse_args(flags + [d0, d1])
assert args.unsafe
assert args.schema == schema
out, err = outs()
assert run(args, out=out, err=err) == 2
assert err.getvalue() == ''
assert out.getvalue().strip() == EXPECTED
ADDITIONS = io.open(fixture_path + 'additions.sql').read().strip()
EXPECTED2 = io.open(fixture_path + 'expected2.sql').read().strip()
if ADDITIONS:
with S(d0) as s0, S(d1) as s1:
m = Migration(s0, s1)
m.inspect_from()
m.inspect_target()
with raises(AttributeError):
m.changes.nonexist
m.set_safety(False)
m.add_sql(ADDITIONS)
m.apply()
m.add_all_changes()
assert m.sql.strip() == EXPECTED2 # sql generated OK
m.apply()
# check for changes again and make sure none are pending
m.add_all_changes()
assert m.changes.i_from == m.changes.i_target
assert not m.statements # no further statements to apply
assert m.sql == ''
out, err = outs()
assert run(args, out=out, err=err) == 0
# test alternative parameters
with S(d0) as s0, S(d1) as s1:
m = Migration(
get_inspector(s0),
get_inspector(s1)
)
m = Migration(get_inspector(s0), get_inspector(s1))
# test empty
m = Migration(None, None)
m.add_all_changes()
with raises(AttributeError):
m.s_from
with raises(AttributeError):
m.s_target
args = parse_args(['--unsafe', 'EMPTY', 'EMPTY'])
args = parse_args(flags + ['EMPTY', 'EMPTY'])
out, err = outs()
assert run(args, out=out, err=err) == 0