#!/usr/bin/env just --justfile
set shell := ["bash", "-c"]
export PGPORT := "5411"
export DATABASE_URL := "postgres://postgres:postgres@localhost:" + PGPORT + "/db"
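# With the defaults above, DATABASE_URL evaluates to:
#   postgres://postgres:postgres@localhost:5411/db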
export CARGO_TERM_COLOR := "always"
# export RUST_LOG := "debug"
# export RUST_BACKTRACE := "1"
@_default:
    just --list --unsorted
# Start Martin server and a test database
run *ARGS: start
    cargo run -- {{ ARGS }}
# Start release-compiled Martin server and a test database
run-release *ARGS: start
    cargo run --release -- {{ ARGS }}
# Start Martin server and open a test page
debug-page *ARGS: start
    open tests/debug.html # run will not exit, so open debug page first
    just run {{ ARGS }}
# Run PSQL utility against the test database
psql *ARGS:
    psql {{ ARGS }} {{ DATABASE_URL }}
# Perform cargo clean to delete all build files
clean: clean-test stop
    cargo clean
# Delete test output files
[private]
clean-test:
    rm -rf tests/output
# Start a test database
start: (docker-up "db")
# Start an ssl-enabled test database
start-ssl: (docker-up "db-ssl")
# Start a legacy test database
start-legacy: (docker-up "db-legacy")
# Start a specific test database, e.g. db or db-legacy
[private]
docker-up name:
    docker-compose up -d {{ name }}
    docker-compose run -T --rm db-is-ready
alias _down := stop
alias _stop-db := stop
# Stop the test database
stop:
    docker-compose down
# Run benchmark tests
bench: start
    cargo bench
# Run HTTP requests benchmark using OHA tool. Use with `just run-release`
bench-http: (cargo-install "oha")
    @echo "Make sure Martin was started with 'just run-release'"
    @echo "Warming up..."
    oha -z 5s --no-tui http://localhost:3000/function_zxy_query/18/235085/122323 > /dev/null
    oha -z 120s http://localhost:3000/function_zxy_query/18/235085/122323
# Run all tests using a test database
test: (docker-up "db") test-unit test-int
# Run all tests using an SSL connection to a test database. Expected output won't match.
test-ssl: (docker-up "db-ssl") test-unit clean-test
    tests/test.sh
# Run all tests using the oldest supported version of the database
test-legacy: (docker-up "db-legacy") test-unit test-int
# Run Rust unit and doc tests (cargo test)
test-unit *ARGS:
    cargo test --all-targets {{ ARGS }}
    cargo test --doc
# Run integration tests
test-int: clean-test install-sqlx
    #!/usr/bin/env bash
    set -euo pipefail
    tests/test.sh
    if ! diff --brief --recursive --new-file tests/output tests/expected; then
        echo "** Expected output does not match actual output"
        echo "** If this is expected, run 'just bless' to update expected output"
        exit 1
    else
        echo "Expected output matches actual output"
    fi
# Run integration tests and save its output as the new expected output
bless: start clean-test
    cargo test --features bless-tests
    tests/test.sh
    rm -rf tests/expected
    mv tests/output tests/expected
# Build and open mdbook documentation
book: (cargo-install "mdbook")
    mdbook serve docs --open --port 8321
# Build debian package
package-deb: (cargo-install "cargo-deb")
    cargo deb -v -p martin --output target/debian/martin.deb
# Build and open code documentation
docs:
    cargo doc --no-deps --open
# Run code coverage on tests and save its output in the coverage directory. Parameter could be html or lcov.
coverage FORMAT='html': (cargo-install "grcov")
    #!/usr/bin/env bash
    set -euo pipefail
    if ! rustup component list | grep llvm-tools-preview &> /dev/null; then
        echo "llvm-tools-preview could not be found. Installing..."
        rustup component add llvm-tools-preview
    fi
    just clean
    just start
    PROF_DIR=target/prof
    mkdir -p "$PROF_DIR"
    PROF_DIR=$(realpath "$PROF_DIR")
    OUTPUT_RESULTS_DIR=target/coverage/{{ FORMAT }}
    mkdir -p "$OUTPUT_RESULTS_DIR"
    export CARGO_INCREMENTAL=0
    export RUSTFLAGS=-Cinstrument-coverage
    # Avoid problems with relative paths
    export LLVM_PROFILE_FILE=$PROF_DIR/cargo-test-%p-%m.profraw
    export MARTIN_PORT=3111
    cargo test --all-targets
    tests/test.sh
    set -x
    grcov --binary-path ./target/debug \
        -s . \
        -t {{ FORMAT }} \
        --branch \
        --ignore 'benches/*' \
        --ignore 'tests/*' \
        --ignore-not-existing \
        -o target/coverage/{{ FORMAT }} \
        --llvm \
        "$PROF_DIR"
    { set +x; } 2>/dev/null
    # if this is html, open it in the browser
    if [ "{{ FORMAT }}" = "html" ]; then
        open "$OUTPUT_RESULTS_DIR/index.html"
    fi
# Build martin docker image
docker-build:
    docker build -t ghcr.io/maplibre/martin .
# Build and run martin docker image
docker-run *ARGS:
    docker run -it --rm --net host -e DATABASE_URL -v $PWD/tests:/tests ghcr.io/maplibre/martin {{ ARGS }}
# Do any git command, ensuring that the testing environment is set up. Accepts the same arguments as git.
[no-exit-message]
git *ARGS: start
    git {{ ARGS }}
# Print the connection string for the test database
print-conn-str:
    @echo {{ DATABASE_URL }}
# Run cargo fmt and cargo clippy
lint: fmt clippy
# Run cargo fmt
fmt:
    cargo fmt --all -- --check
# Run Nightly cargo fmt, ordering imports
fmt2:
    cargo +nightly fmt -- --config imports_granularity=Module,group_imports=StdExternalCrate
# Run cargo clippy
clippy:
    cargo clippy --workspace --all-targets --bins --tests --lib --benches -- -D warnings
# These steps automatically run before git push via a git hook
[private]
git-pre-push: stop start
    rustc --version
    cargo --version
    just lint
    just test
# Update sqlite database schema.
prepare-sqlite: install-sqlx
    mkdir -p martin-mbtiles/.sqlx
    cd martin-mbtiles && cargo sqlx prepare --database-url sqlite://$PWD/../tests/fixtures/files/world_cities.mbtiles
# Install SQLX cli if not already installed.
[private]
install-sqlx: (cargo-install "cargo-sqlx" "sqlx-cli" "--no-default-features" "--features" "sqlite,native-tls")
# Check if a certain Cargo command is installed, and install it if needed
[private]
cargo-install $COMMAND $INSTALL_CMD="" *ARGS="":
    @if ! command -v $COMMAND &> /dev/null; then \
        echo "$COMMAND could not be found. Installing it with cargo install ${INSTALL_CMD:-$COMMAND} {{ ARGS }}" ;\
        cargo install ${INSTALL_CMD:-$COMMAND} {{ ARGS }} ;\
    fi
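# Usage examples for the guard above: `just cargo-install grcov` installs the
# grcov crate only when its binary is missing from PATH, while
# `just cargo-install cargo-sqlx sqlx-cli` handles a crate (sqlx-cli) whose
# name differs from its command (cargo-sqlx), as the `install-sqlx` recipe does.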