Dynamically create image sprites for MapLibre rendering, given a
directory with images.
### TODO
* [x] Work with @flother to merge these PRs
* [x] https://github.com/flother/spreet/pull/59 (must have)
* [x] https://github.com/flother/spreet/pull/57
* [x] https://github.com/flother/spreet/pull/56
* [ ] https://github.com/flother/spreet/pull/62 (not required but nice
to have, can upgrade later without any code changes)
* [x] Add docs to the book
* [x] Add CLI param, e.g. `--sprite <dir_path>`
* [x] Don't output `.sprites` in auto-genned config when not in use
### API
Per [MapLibre sprites
API](https://maplibre.org/maplibre-style-spec/sprite/), we need to
support the following:
* `/sprite/<sprite_id>.json`: metadata about the sprite images, all coming
from a single directory
* `/sprite/<sprite_id>.png`: all images combined into a single PNG
* `/sprite/<sprite_id>@2x.json`: the same metadata, but for high-DPI devices
* `/sprite/<sprite_id>@2x.png`: the same combined PNG, but for high-DPI devices
Multiple sprite_id values can be combined into one sprite with the same
pattern as for tile joining:
`/sprite/<sprite_id1>,<sprite_id2>,...,<sprite_idN>[.json|.png|@2x.json|@2x.png]`.
No ID renaming is done, so identical names will override one another.
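For illustration, assuming Martin's default port (3000) and the sprite sources from the configuration example below, the endpoints could be queried like this:
```bash
# Sprite metadata and the combined PNG for a single source
curl http://localhost:3000/sprite/my_images.json
curl http://localhost:3000/sprite/my_images.png

# High-DPI variants
curl http://localhost:3000/sprite/my_images@2x.json
curl http://localhost:3000/sprite/my_images@2x.png

# Several sources combined into one sprite (identical image IDs override one another)
curl http://localhost:3000/sprite/my_images,my_sprites.json
```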
### Configuration
[Config file](https://maplibre.org/martin/config-file.html) and possibly
CLI should have a simple option to serve sprites. The configuration may
look similar to how mbtiles and pmtiles are configured:
```yaml
# Publish sprite images
sprites:
  paths:
    # scan this whole dir, matching all image files, and publishing it as "my_images" sprite source
    - /path/to/my_images
  sources:
    # named source matching source name to a directory
    my_sprites: /path/to/some_dir
```
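On the CLI, this could look like the `--sprite <dir_path>` parameter from the TODO above. A sketch only, assuming the directory name becomes the sprite source ID and the exact flag behavior may differ:
```bash
# Hypothetical invocation: publish /path/to/my_images as the "my_images" sprite
# source, while also serving the tile sources found in ./tests/fixtures
martin --sprite /path/to/my_images ./tests/fixtures
```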
Implement #705
* make the TileJSON `name` the same as the ID of the source (even if
aliased)
* `/catalog` will always show the ID, but now it hides the `name` when it
is the same as the `id`
* make `description` the longer version, e.g. the `public.table.column`
format - not guaranteed to be stable
* make `vector_layers` contain the fields auto-discovered in the PG table
* preserve the order of the serialized JSON fields
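As a sketch only (field names follow the bullets above, the values are hypothetical), the TileJSON of an aliased table source might now look like:
```json
{
  "tilejson": "3.0.0",
  "name": "my_alias",
  "description": "public.table_source.geom",
  "vector_layers": [
    {
      "id": "my_alias",
      "fields": { "gid": "int4", "name": "text" }
    }
  ]
}
```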
Fixes #583
The compression middleware turned out to be hard to use for the image cases: it
simply looks at the `Content-Encoding` header and, if it is not set, tries to
compress whenever the client accepts it.
Instead, individual routes are now configured either with that middleware or,
for tiles, with custom handling: I decompress and optionally recompress the
data if applicable.
Encoding is now tracked separately from the tile content, making it cleaner
too. Plus lots of tests for mbtiles & pmtiles.
Fixes #577
* Adds a view to `points1.sql` fixture
* Replaces `table` with `view` in log statements relating to views
---------
Co-authored-by: Chris Thiange <cthiange@gmail.com>
Warn users when a PG table's geometry column has no spatial index, since accessing it would be slow. This is only done for tables; issues with views are not reported.
## Implementation
This adds two fields to `TableInfo`:
* `geom_idx: Option<bool>` to tell if a geo column has a spatial index
* `is_view: Option<bool>` to distinguish views from other relations
Missing spatial index warnings are logged for non-view relations. Views
never have indexed columns themselves, and if a view references a table
with a missing index, that table is already reported.
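For reference, the warning points at adding a spatial index on the geometry column; a minimal example (schema, table, and column names are placeholders):
```sql
-- Add the missing spatial index on the table's geometry column
-- (placeholder names; this is what silences the new warning for a table)
CREATE INDEX ON public.my_table USING GIST (geom);
```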
Couldn't figure out how to make `just test` accept the new warning (from
the missing index), so it is logged as INFO for now :)
Fixes #540
---------
Co-authored-by: Christophe Thiange <cthiange@gmail.com>
Co-authored-by: Yuri Astrakhan <YuriAstrakhan@gmail.com>
Adds a new [.mbtiles](https://github.com/mapbox/mbtiles-spec/blob/master/1.3/spec.md)
backend, without grid support. Uses extensive tile content detection,
i.e. it checks whether the content is gzipped, PNG, JPEG, GIF, or WebP.
From the CLI, it can be as easy as passing a path to a directory that
contains .mbtiles files (works just like the pmtiles support):
```bash
# All *.mbtiles files in this dir will be published.
# The filename will be used as the source ID
martin ./tests/fixtures
```
From the configuration file, the path can be specified in a number of ways
(same as pmtiles):
```yaml
mbtiles:
  paths:
    # scan this whole dir, matching all *.mbtiles files
    - /dir-path
    # specific mbtiles file will be published as mbtiles2 source
    - /path/to/mbtiles2.mbtiles
  sources:
    # named source matching source name to a single file
    pm-src1: /tmp/mbtiles.mbtiles
    # named source, where the filename is explicitly set. This way we will be able to add more options later
    pm-src2:
      path: /tmp/mbtiles.mbtiles
```
Fixes #494
Merge after #548
Adds a new [.pmtiles](https://protomaps.com/docs/pmtiles/) backend.
Supports all tile formats: PNG, vector tiles, etc.
From the CLI, it can be as easy as passing a path to a directory that
contains .pmtiles files:
```bash
# All *.pmtiles files in this dir will be published.
# The filename will be used as the source ID
martin ./tests/fixtures
```
From the configuration file, the path can be specified in a number of ways:
```yaml
pmtiles:
  paths:
    # scan this whole dir, matching all *.pmtiles files
    - /dir-path
    # specific pmtiles file will be published as pmtiles2 source
    - /path/to/pmtiles2.pmtiles
  sources:
    # named source matching source name to a single file
    pm-src1: /tmp/pmtiles.pmtiles
    # named source, where the filename is explicitly set. This way we will be able to add more options later
    pm-src2:
      path: /tmp/pmtiles.pmtiles
```
Fixes #508
* NEW: support for #512 - PG table/function auto-discovery
  * can filter schemas
  * can use patterns like `{schema}.{table}.{column}` and
    `{schema}.{function}`
* NEW: add `disable_bounds` bool flag to allow disabling the bounds
computation
* reworked integration tests to use YAML
* added a manual coverage justfile command
* a lot of small refactorings of config and argument parsing
* feature: support a `jsonb` query parameter for functions
* cleaned up public/private access
* make all tests populate the database with predefined values to avoid
issues with random data
Can now handle several additional Postgres function signatures for getting a
tile, plus tons of small fixes.
### Multiple result variants
* `getmvt(z,x,y) -> [bytea,md5]` (single row with two columns)
* `getmvt(z,x,y) -> [bytea]` (single row with a single column)
* `getmvt(z,x,y) -> bytea` (value)
### Multiple input parameter variants
* `getmvt(z, x, y)` or `getmvt(zoom, x, y)` (all three parameters must be
integers)
* `getmvt(z, x, y, url_query)`, where `url_query` can have any other name
but must be of JSON type (see the sketch after this list)
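For illustration only, a function matching the last input variant and the single-value result variant could look like the sketch below; the source table, layer name, and SRID are assumptions, not part of this PR:
```sql
-- Sketch: single bytea return value plus a JSON parameter with the URL query args.
-- Assumes public.points has a `geom` column in Web Mercator (EPSG:3857);
-- url_query is accepted but ignored here.
CREATE OR REPLACE FUNCTION public.getmvt(z integer, x integer, y integer, url_query json)
RETURNS bytea AS $$
    SELECT ST_AsMVT(tile.*, 'getmvt', 4096, 'geom')
    FROM (
        SELECT ST_AsMVTGeom(geom, ST_TileEnvelope(z, x, y), 4096, 64, true) AS geom
        FROM public.points
        WHERE geom && ST_TileEnvelope(z, x, y)
    ) AS tile;
$$ LANGUAGE sql STABLE PARALLEL SAFE;
```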
### Breaking
* SRID is now the same type as in PG -- `i32`
* renamed the config values `table_sources` and `function_sources` to
`tables` and `functions`
### Features and fixes
* if PostGIS is v3.1+, the margin parameter is used to extend the search box
by the size of the buffer. I think we should make 3.1 the minimum required
version.
* fixes the feature ID issue from #466
* fixes mixed-case names for schemas, tables, columns, functions, and
parameter names per #389
### Notes
* More dynamic SQL generation in code instead of using external SQL
files. External SQL files should only be used when they are not parameterized.
* The new function/table discovery mechanism queries for all functions in
the database, matches them up with the configured ones (if any), and adds
all the remaining undeclared ones if discovery mode is on.
* During table and function discovery, the code generates a map of
`(PgSqlInfo, FunctionInfo)` (or table) tuples containing the SQL needed to
get the tile.
* Auto-discovery mode is currently hidden: discovery is on only
when no tables or functions are configured. TBD how to configure it in
the future.
* The new system allows for an easy way to auto-discover for the
specific schemas only, solving #47
* predictable order of table/function instantiation
* bounding boxes computed in parallel for all tables (when not
configured)
* proper identifier escaping
* test cleanup
Fixes #378, #466, #65, #389
* All tests and internal code now use the ST_TileEnvelope function
* Remove `tile_bbox`
* Rename test function sources for clarity - this will be needed in a
subsequent PR to add other function tests
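For reference, the built-in function can be exercised directly; the zoom 0 call returns the full Web Mercator extent:
```sql
-- Built into PostGIS 3.0+: returns the Web Mercator bounds of tile (z, x, y),
-- used in place of the removed tile_bbox() helper
SELECT ST_TileEnvelope(0, 0, 0);
```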
Rework CI to run tests locally using the VM-installed Postgres on all
target platforms.
### CI jobs
* Build release versions on Linux/Win/Mac and save build results as
output artifacts
* In separate VMs (Linux/Win/Mac):
  * use
    [nyurik/action-setup-postgis](https://github.com/nyurik/action-setup-postgis)
    to install PostGIS and run tests using the built artifacts
  * run `cargo test` on Linux only
  * copy built artifacts from the build step, and run tests using the
    release martin binary
* package and publish if this is a release
### Other changes
* Port some minor changes from the rewrite to make future porting easier
* minor cleanups
* remove all "expected" data files - too unstable to be usable
* Add justfile to simplify running all the tests
* Save all PBF outputs to the text files
* Consolidate all tests to reuse the same code
* Consolidate database initialization
* updated readme with the new instructions
Note that while this PR creates "expected" files, the CI cannot validate
the generated results because the output is not stable. Eventually we
may try to output just the non-geometry values to have reasonable tests
comparing against the expected results.
This PR re-uses some ideas by @gbip from #448
* move all CI GitHub workflow tests into dedicated shell scripts
* consolidate the two database initialization scripts into one