* make tilejson's `name` be the same as the ID of the source (even if
aliased) - see the illustrative sketch after this list
* `/catalog` will always show the ID, but it will now hide the `name` if it
is the same as the `id`
* make `description` be the longer version, e.g. `public.table.column`
format - not guaranteed to be stable
* make `vector_layers` include the fields auto-discovered from the PG table
* preserve the order of the serialized json fields
Fixes #583
* clean up reporting of the unused config params - instead of printing them
right away, collect them and print them in one place if needed (which also allows testing)
* remove `vector_layer` from the catalog - too verbose and not needed there; it can be
retrieved via the tilejson of an individual source
* clean up tests so that they all use the same config yaml
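As a rough illustration of the metadata changes above, the TileJSON for a hypothetical `my_table` source might carry fields along these lines (sketched as YAML for readability - the actual endpoint returns JSON, and the URL path, column names, and types below are made up):

```yaml
# GET /my_table - TileJSON for a single source (illustrative values only)
name: my_table                      # same as the source ID, even when aliased
description: public.my_table.geom   # longer form, not guaranteed to be stable
vector_layers:
  - id: my_table
    fields:                         # auto-discovered from the PG table columns
      gid: int4
      name: text
```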
Adds a new [.mbtiles](https://github.com/mapbox/mbtiles-spec/blob/master/1.3/spec.md)
backend, without grid support. Uses extensive tile content
detection, e.g. whether the content is gzipped, PNG, JPEG, GIF, or WebP.
From the CLI, it can be as easy as adding the path to a directory that contains
`.mbtiles` files (works just like pmtiles support):
```bash
# All *.mbtiles files in this dir will be published.
# The filename will be used as the source ID
martin ./tests/fixtures
```
From a configuration file, the path can be specified in a number of ways
(same as pmtiles):
```yaml
mbtiles:
  paths:
    # scan this whole dir, matching all *.mbtiles files
    - /dir-path
    # specific mbtiles file will be published as the mbtiles2 source
    - /path/to/mbtiles2.mbtiles
  sources:
    # named source, mapping a source name to a single file
    pm-src1: /tmp/mbtiles.mbtiles
    # named source where the file path is set explicitly; this leaves room to add more options later
    pm-src2:
      path: /tmp/mbtiles.mbtiles
```
Fixes #494
* NEW: support for #512 - PG table/function auto-discovery (see the config sketch after this list)
  * can filter schemas
  * can use patterns like `{schema}.{table}.{column}` and `{schema}.{function}`
* NEW: add a `disable_bounds` bool flag to allow disabling the bounds computation
* reworked integration tests to use yaml
* fixed SQL to work on older PG versions
* re-enable CI testing of the `test.sh` output against the expected results
stored in `tests/expected`
* add Postgres-in-Docker tests on Linux - one for the oldest supported
DB version, and another using a more recent one
* minor justfile cleanup
* ensure config files are sorted alphabetically
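A minimal sketch of how the new auto-discovery and bounds options might be wired together in the config, assuming they live under the `postgres` section; the `auto_publish`, `from_schemas`, and `source_id_format` key names are assumptions here and may not match the actual config exactly:

```yaml
postgres:
  connection_string: 'postgresql://postgres@localhost:5432/db'
  # NEW: skip the (potentially slow) bounds computation for all sources
  disable_bounds: true
  # NEW: auto-discover tables and functions instead of listing them by hand.
  # Key names below (auto_publish, from_schemas, source_id_format) are assumptions.
  auto_publish:
    # only look at these schemas
    from_schemas:
      - public
      - osm
    tables:
      # pattern for the generated source IDs
      source_id_format: '{schema}.{table}.{column}'
    functions:
      source_id_format: '{schema}.{function}'
```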