When applying raw (non-gzip) patches, `mbtiles` was trying to un-gzip
them first. It now handles them properly. Also adds a number of tests to catch these cases.
* `mbtiles diff` now has an additional `--patch-type` parameter with
`whole`, `bin-diff-raw` and `bin-diff-gz` values (see the example after this list):
* `whole` stores different tiles as before - as whole tiles in the
`tiles` table
* `bin-diff-raw` computes the binary difference between tiles, and stores
it as a brotli-encoded value in a `bsdiffraw` table, together with an
`xxh3_64` hash of the tile as it will be stored after patching
* `bin-diff-gz` is the same as `bin-diff-raw`, but assumes the tiles are
gzip-compressed, so it uncompresses them before comparing. The `xxh3_64`
column stores the hash of the uncompressed tile. The data is stored in the
`bsdiffrawgz` table (identical structure to the above)
* `mbtiles copy --apply-patch` will automatically detect whether a
`bsdiffrawgz` or `bsdiffraw` table exists, and will use binary patching.
* `mbtiles apply-patch` does not support binary patching yet
* `mbtiles copy --diff-with-file ... --patch-type ...` is an alias for
`mbtiles diff --patch-type ...`
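A rough sketch of the intended workflow (the file names are made up, and the exact positional-argument order is an assumption rather than taken from the CLI help):

```bash
# Create a patch file containing binary diffs of gzip-compressed tiles
# (assumed argument order: source, target, output patch file)
mbtiles diff v1.mbtiles v2.mbtiles v1-to-v2.mbtiles --patch-type bin-diff-gz

# Apply the patch; the bsdiffraw / bsdiffrawgz tables are detected automatically
mbtiles copy v1.mbtiles v2-restored.mbtiles --apply-patch v1-to-v2.mbtiles
```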
This just adds a test for the unusual case of a table with quotes,
spaces, and dots in its identifiers. Another similar test should be
added for functions.
Make sure all values in the `tiles` table or view are correct (see the query sketch after this list):
* zoom_level is between 0 and 30 (the maximum allowed zoom)
* the x, y values are within the limits for the corresponding zoom level
* the column types of z, x, y are all integers
* `tile_data` is either NULL or a BLOB
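A rough `sqlite3` sketch of the kind of constraints being checked (not the actual implementation; the file name is made up, and the column names follow the MBTiles spec):

```bash
# List rows violating the zoom / x / y / tile_data constraints described above
sqlite3 my_tiles.mbtiles "
  SELECT zoom_level, tile_column, tile_row FROM tiles
  WHERE zoom_level NOT BETWEEN 0 AND 30
     OR tile_column NOT BETWEEN 0 AND (1 << zoom_level) - 1
     OR tile_row    NOT BETWEEN 0 AND (1 << zoom_level) - 1
     OR typeof(tile_data) NOT IN ('blob', 'null');
"
```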
When a SQL comment is set on a table or a function to customize the
TileJSON, and that table/function is pre-configured as part of the config
file, the comment was silently ignored. Now both the table and the function
cases are handled correctly.
Also, update the docs to not include function parameters - it makes the SQL
example a bit simpler.
Thanks @jjcfrancisco for reporting!
Fixes: #1044
This implements dynamic font protobuf generation, allowing users to
request font glyph ranges on the fly and to combine fonts in any order, e.g.
`Font1,Font2,Font3`, same as with sprites and tiles (an example request is
shown after the process outline below).
This is a first iteration, without any multithreading support. In
theory, this could be done far faster by generating SDFs with multiple
threads.
### Current process
* during init, figure out all glyphs available in each font, and store
them as a bitset
* during request:
* combine the requested bitsets to figure out which glyphs should come from
which font file
* load those glyphs from files (using a single instance of the freetype
lib)
* convert them to SDFs and package them into a protobuf
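For illustration, a request for a combined fontstack might look like this, assuming martin's default port and a `/font/<fontstack>/<start>-<end>` URL pattern (the font names are made up):

```bash
# Request the first 256 glyphs of a combined fontstack; glyphs missing from
# Font1 are filled in from Font2, then Font3
curl -o glyphs.pbf "http://localhost:3000/font/Font1,Font2,Font3/0-255"
```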
---------
Co-authored-by: Lucas <zhangyijunmetro@hotmail.com>
* Fix metadata copying
* Introduce a new metadata field `agg_tiles_hash_after_apply` for diff
files
* Added a lot of new info and debug logging
* Simplified the copying interface - there seems to be little value in
having the complex builder pattern here, so a simple object is used
instead.
## Testing
* Generate SQLite DBs in memory on the fly to validate just what we need
* Use `insta` for validating DB content
There is now a function `dump(connection) -> Vec<Entry>` to dump the
content of the entire SQLite DB into text with `serde`. At many steps
throughout the testing, the DB content is validated against the corresponding
`.snap` file with the `insta` crate (which makes this process very simple,
including an easy way to "bless" (update) any changes).
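For reference, re-generating ("blessing") the snapshots is typically done with the `cargo-insta` helper:

```bash
# Run the test suite, then interactively review and accept changed .snap files
cargo install cargo-insta
cargo insta test
cargo insta review
```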
## Discovered bugs
* It seems normalized files do not get copied properly - they contain
extra data that should be removed.
* Rename `global_hash` to `agg_tiles_hash`
This is still a big sticking point: what should the metadata key for
this value be named? The value represents the hash of all
`z,x,y,tile` values over all rows of the `tiles` table (or view). Should it
include `md5` in its name, or should the hash be auto-detected by its
length? (details in #856)
* Generate it based on the `tiles` table/view
* validate or generate, but not both (it would always fail otherwise)
* break up the logic for per-tile, total, and integrity checks
* delete an unused sqlx prepared-query file
If a PostgreSQL function has a SQL comment, Martin will try to parse it as
JSON and use its values to override the auto-generated TileJSON. It is
recommended to use the following form when creating comments to ensure the
JSON value is valid.
```sql
DO $do$ BEGIN
    EXECUTE 'COMMENT ON FUNCTION YOUR_FUNCTION (ARG1_TYPE, ARG2_TYPE, ..., ARGN_TYPE) IS $tj$' || $$
    {
        "description": "description override",
        ...
    }
    $$::json || '$tj$';
END $do$;
```
Partially implements #822
* Add `MbtType::FlatWithHash`
* Support copying, diffing, and applying diffs to and from any
`MbtType`
* Support validating tile data if the hash is contained in the `*.mbtiles` file
(i.e. it is of `MbtType::FlatWithHash` or `MbtType::Normalized`)
---------
Co-authored-by: rstanciu <rstanciu@rivian.com>
Co-authored-by: Yuri Astrakhan <yuriastrakhan@gmail.com>
Resolves #682
- [x] Get the id_column string from config.yaml and use it for the ID column
- [x] Support a list of strings
- [x] Add info/warning messages if the column is missing or of the wrong type
- [x] If a column for the feature ID is found, remove it from properties
(see inline comment)
- [x] Clean up logging messages
- [x] Add more tests to catch other edge cases
---------
Co-authored-by: Yuri Astrakhan <YuriAstrakhan@gmail.com>
* on `--save-config`, only save configured `auto_publish` settings
* alias `from_schemas` as `from_schema`
* add integration testing for `auto_publish`
* if integration test DB preloading fails, try to clean up the test DB
* A few more info traces
This change should benefit testing of #790. cc: @Binabh
* Add the ability to generate a diff file by passing `--diff-with-file` to
the `copy` tool (see the sketch below)
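A hypothetical invocation (the file names and the exact argument semantics are assumptions for this sketch):

```bash
# Write a diff file that, when applied to v1.mbtiles, produces v2.mbtiles
mbtiles copy v1.mbtiles v1-to-v2-diff.mbtiles --diff-with-file v2.mbtiles
```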
---------
Co-authored-by: Yuri Astrakhan <YuriAstrakhan@gmail.com>
Dynamically create image sprites for MapLibre rendering, given a
directory with images.
### TODO
* [x] Work with @flother to merge these PRs
* [x] https://github.com/flother/spreet/pull/59 (must have)
* [x] https://github.com/flother/spreet/pull/57
* [x] https://github.com/flother/spreet/pull/56
* [ ] https://github.com/flother/spreet/pull/62 (not required but nice
to have, can upgrade later without any code changes)
* [x] Add docs to the book
* [x] Add CLI param, e.g. `--sprite <dir_path>`
* [x] Don't output `.sprites` in the auto-generated config when not in use
### API
Per [MapLibre sprites
API](https://maplibre.org/maplibre-style-spec/sprite/), we need to
support the following:
* `/sprite/<sprite_id>.json` - metadata about the sprite file, all coming
from a single directory
* `/sprite/<sprite_id>.png` - all images combined into a single PNG
* `/sprite/<sprite_id>@2x.json` - same, but for high-DPI devices
* `/sprite/<sprite_id>@2x.png`
Multiple sprite_id values can be combined into one sprite with the same
pattern as for tile joining:
`/sprite/<sprite_id1>,<sprite_id2>,...,<sprite_idN>[.json|.png|@2x.json|@2x.png]`.
No ID renaming is done, so identical names will override one another.
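For example, a combined sprite for two hypothetical sources (using the IDs from the configuration sketch below) could be fetched as:

```bash
# JSON index and high-DPI PNG atlas for the combined sprite, assuming the default port
curl -o sprite.json "http://localhost:3000/sprite/my_images,my_sprites.json"
curl -o sprite@2x.png "http://localhost:3000/sprite/my_images,my_sprites@2x.png"
```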
### Configuration
[Config file](https://maplibre.org/martin/config-file.html) and possibly
CLI should have a simple option to serve sprites. The configuration may
look similar to how mbtiles and pmtiles are configured:
```yaml
# Publish sprite images
sprites:
  paths:
    # scan this whole dir, matching all image files, and publishing it as "my_images" sprite source
    - /path/to/my_images
  sources:
    # named source matching source name to a directory
    my_sprites: /path/to/some_dir
```
Implement #705
* make the TileJSON `name` be the same as the ID of the source (even if
aliased)
* `/catalog` will always show the ID, but now it will hide the `name` if it
is the same as the `id` (see the example requests after this list)
* make `description` be the longer version, e.g. the `public.table.column`
format - not guaranteed to be stable
* make `vector_layers` contain the fields auto-discovered in the PG table
* preserve the order of the serialized JSON fields
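These changes are easiest to see by inspecting the catalog and a source's TileJSON directly (the port and source ID below are illustrative):

```bash
# List all published sources; `name` is omitted when it matches `id`
curl "http://localhost:3000/catalog"

# TileJSON for a single source, e.g. a table published with the ID "table_source"
curl "http://localhost:3000/table_source"
```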
Fixes #583
The compression middleware turned out to be hard to use for the image cases - it
simply looks at the `content-encoding` header and, if it is not set, tries to
compress whenever the client accepts it.
Instead, individual routes are now configured with either that
middleware or, for tiles, manual decompression and optional recompression where
applicable.
Encoding is now tracked separately from the tile content, making it
cleaner too. Plus lots of tests for mbtiles & pmtiles.
Fixes #577
* Adds a view to the `points1.sql` fixture
* Replaces `table` with `view` in log statements relating to views
---------
Co-authored-by: Chris Thiange <cthiange@gmail.com>
Warn users when a PG table's geometry column has no spatial index, since accessing it would be slow. This is only done for tables; issues with views are not reported.
## Implementation
This adds two fields to `TableInfo`:
* `geom_idx: Option<bool>` to tell if a geo column has a spatial index
* `is_view: Option<bool>` to distinguish views from other relations
Missing spatial index warnings are logged for non-view relations. Views
never have indexed columns themselves, and if a view references a table with a
missing index, that is already logged for the table.
Couldn't figure out how to make `just test` accept the new warning (from the
missing index), so it is logged as INFO for now :)
fixes #540
---------
Co-authored-by: Christophe Thiange <cthiange@gmail.com>
Co-authored-by: Yuri Astrakhan <YuriAstrakhan@gmail.com>
Adds a new [.mbtiles](https://github.com/mapbox/mbtiles-spec/blob/master/1.3/spec.md)
backend, without grid support. Uses extensive tile content
detection, i.e. whether the content is gzipped, PNG, JPEG, GIF, or WebP.
From the CLI, it can be as easy as adding a path to a directory that contains a
`.mbtiles` file (works just like the pmtiles support):
```bash
# All *.mbtiles files in this dir will be published.
# The filename will be used as the source ID
martin ./tests/fixtures
```
From the configuration file, the path can be specified in a number of ways
(same as pmtiles):
```yaml
mbtiles:
  paths:
    # scan this whole dir, matching all *.mbtiles files
    - /dir-path
    # specific mbtiles file will be published as mbtiles2 source
    - /path/to/mbtiles2.mbtiles
  sources:
    # named source matching source name to a single file
    mbt-src1: /tmp/mbtiles.mbtiles
    # named source, where the filename is explicitly set. This way we will be able to add more options later
    mbt-src2:
      path: /tmp/mbtiles.mbtiles
```
Fixes #494
Merge after #548
Adds a new [.pmtiles](https://protomaps.com/docs/pmtiles/) backend.
Supports all formats, e.g. PNG, vector, etc.
From the CLI, it can be as easy as adding a path to a directory that contains a
`.pmtiles` file:
```bash
# All *.pmtiles files in this dir will be published.
# The filename will be used as the source ID
martin ./tests/fixtures
```
From the configuration file, the path can be specified in a number of ways:
```yaml
pmtiles:
  paths:
    # scan this whole dir, matching all *.pmtiles files
    - /dir-path
    # specific pmtiles file will be published as pmtiles2 source
    - /path/to/pmtiles2.pmtiles
  sources:
    # named source matching source name to a single file
    pm-src1: /tmp/pmtiles.pmtiles
    # named source, where the filename is explicitly set. This way we will be able to add more options later
    pm-src2:
      path: /tmp/pmtiles.pmtiles
```
Fixes #508
* NEW: support for #512 - pg table/function auto-discovery
* can filter schemas
* can use patterns like `{schema}.{table}.{column}` and
`{schema}.{function}`
* NEW: add `disable_bounds` bool flag to allow disabling of the bounds
computation
* reworked integration tests to use yaml
* added a manual coverage `justfile` command
* a lot of small refactorings of config and argument parsing
* feature: support jsonb query param for functions
* cleaned up public/private access
* make all tests populate the DB with predefined values to avoid issues with
random data
Martin can now handle several additional Postgres function signatures to get a tile, plus
tons of small fixes. A sketch of one variant is shown after the lists below.
### Multiple result variants
* `getmvt(z,x,y) -> [bytea,md5]` (single row with two columns)
* `getmvt(z,x,y) -> [bytea]` (single row with a single column)
* `getmvt(z,x,y) -> bytea` (value)
### Multiple input parameter variants
* `getmvt(z, x, y)` or `getmvt(zoom, x, y)` (all 3 vars must be
integers)
* `getmvt(z, x, y, url_query)`, where `url_query` could be any other name,
but the parameter must be of type JSON
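As an illustration, the simplest variant (`getmvt(z,x,y) -> bytea`) could look roughly like the sketch below; the database name, table, layer name, and the assumption that the geometry is stored in Web Mercator are all made up for the example:

```bash
psql -d my_db <<'SQL'
CREATE OR REPLACE FUNCTION getmvt(z integer, x integer, y integer)
RETURNS bytea AS $$
    SELECT ST_AsMVT(tile, 'my_layer', 4096, 'geom') FROM (
        SELECT ST_AsMVTGeom(geom, ST_TileEnvelope(z, x, y), 4096, 64, true) AS geom
        FROM my_table
        -- PostGIS 3.1+: margin extends the search box by the size of the buffer
        WHERE geom && ST_TileEnvelope(z, x, y, margin => 64.0 / 4096)
    ) AS tile WHERE geom IS NOT NULL;
$$ LANGUAGE sql STABLE PARALLEL SAFE;
SQL
```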
### Breaking
* srid is now the same type as in PG -- `i32`
* renamed config values `table_sources` and `function_sources` to
`tables` and `functions`
### Features and fixes
* if PostGIS is v3.1+, uses the margin parameter to extend the search box by
the size of the buffer. I think we should make 3.1 the minimum required version.
* fixes the feature ID issue from #466
* fixes mixed-case names for schemas, tables, columns, functions, and
parameter names per #389
### Notes
* More dynamic SQL generation in code instead of using external SQL
files. Those should only be used when they are not parametrized.
* The new function/table discovery mechanism: query for all functions in
the database, match them up with the configured ones (if any), and add all
the remaining un-declared ones if discovery mode is on.
* During table and function discovery, the code generates a map of
`(PgSqlInfo, FunctionInfo)` (or table) tuples containing the SQL needed to
get the tile.
* Auto-discovery mode is currently hidden - discovery is on only
when no tables or functions are configured. How to configure it in
the future is TBD.
* The new system allows an easy way to auto-discover only specific
schemas, solving #47
* predictable order of table/function instantiation
* bounding boxes computed in parallel for all tables (when not
configured)
* proper identifier escaping
* test cleanup
fixes #378, fixes #466, fixes #65, fixes #389
* All tests and internal code now use the ST_TileEnvelope function
* Remove `tile_bbox`
* Rename test function sources for clarity - this will be needed in a
subsequent PR to add other function tests
Rework CI to run tests locally using the VM-installed Postgres on all
target platforms.
### CI jobs
* Build release versions on Linux/Win/Mac and save build results as
output artifacts
* In separate VMs (Linux/Win/Mac):
* use
[nyurik/action-setup-postgis](https://github.com/nyurik/action-setup-postgis)
to install PostGIS and run tests using the built artifacts
* run `cargo test` on Linux only
* copy built artifacts from the build step, and run tests using the
release martin binary
* package and publish if this is a release
### Other changes
* Port some minor changes from the rewrite to make porting easier
* minor cleanups
* remove all "expected" data files - too unstable to be usable
* Add justfile to simplify running all the tests
* Save all PBF outputs as text files
* Consolidate all tests to reuse the same code
* Consolidate database initialization
* updated readme with the new instructions
Note that while this PR creates "expected" files, the CI cannot validate
the generated results because the output is not stable. Eventually we
may try to output just the non-geometry values to have reasonable tests
comparing against the expected results.
This PR re-uses some ideas by @gbip from #448
* move all CI GitHub workflow tests into dedicated shell scripts
* consolidate the two database initialization scripts into one