Currently, business tier users created after the business tier launch can't
access the Stats API due to faulty grandfathering logic. This change should
fix that.
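A minimal sketch of the kind of check the fix needs, with hypothetical module, field, and date values (the real access logic lives in the billing code): Stats API access should be granted both to current business-tier subscribers and to legacy users grandfathered in from before the business tier launched, not keyed on sign-up date alone.

```elixir
# Hypothetical sketch only, not the actual implementation.
defmodule StatsAPIAccess do
  @business_tier_launch ~D[2023-01-01] # placeholder date, for illustration

  # current business-tier subscribers always get access
  def allowed?(%{plan: :business}), do: true

  # legacy users from before the business tier existed keep their access
  def allowed?(%{inserted_at: inserted_at}) do
    Date.compare(NaiveDateTime.to_date(inserted_at), @business_tier_launch) == :lt
  end

  def allowed?(_user), do: false
end
```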
* Export and import custom events via CSV
* Add support for the url prop on cloaked links and the path prop on 404s in imported queries
* Handle custom events with empty URL and path properties gracefully
* Make events with properties logic DRY and fix missed cloaked link
* Add test for path property breakdown
* Update raw CH data fixture and extend CSV importer tests
* Fix broken query condition after rebase
* Update CHANGELOG.md
* New struct format for query after parsing
* WIP refactoring
* WIP: Validations working
* WIP: tuple to list
* continued refactoring
* WIP: parsing defaults
* Breakdown tests pass
* Window functions fix
* Fix default
* Remove dead argument
* Update filters tests
* Update query_test.exs
* Fix table_decider
* sources tests pass
* Filter suggestions fix
* revenue/goal filter applied refactor
* Update top_stats matching
* Get stats_controller tests passing
* Update neighbor_aggregate_time_on_page
* Refactor Query.remove_event_filters into Query.remove_filters, add new callsites
* Move goal where clause building to new WhereBuilder module
* Move event:name filters
* Move more filters to WhereBuilder
* Update fragment to allow non-static meta columns
* Build where clause for events table using WhereBuilder
* Build sessions table where clause using WhereBuilder
* Move time range filtering and site checking to WhereBuilder
* WhereBuilder.build_condition method
* Remove TODO
* _rest pattern for TableDecider, Query pattern matching
Future-proofing in a tiny way
* Hacky fix to get tests passing for Google API tests
* Typespec fix
* Merge conflict
* refactor special goal filter logic in imported.ex
* Docs feedback
* put_filter
---------
Co-authored-by: Robert Joonas <robertjoonas16@gmail.com>
* Apply filters in search console request
* Remove dead code from search console modal
* Remove unimportant information from keyword modal
* Show invalid filters from search console
* Fix tests
* Add/Fix tests
* Fix typo
* Remove unused variable
* Fix typo
* Changelog entry
* Fix Credo
* Display impressions, CTR and position in keyword modal
* Undo change that should not have been committed
* Fix test
* Fix test
* filters -> search_console_filters
* Delete dead code
* Speed up calculating usage for users with many many sites
Currently, the settings page times out for a user with 14k sites.
This PR speeds things up by:
1. Doing the work in parallel (max 10 queries at once)
2. Increasing chunking size (300 -> 1000)
Note that the query is relatively lightweight on ClickHouse: running
these queries manually takes ~70ms. If this becomes slow, we can also
introduce a PROJECTION to speed up the calculation, but it isn't a
bottleneck currently.
On chunking size:
ClickHouse can handle even 10k site_ids in a single query quickly when run
via clickhouse-client, but the same query via ecto_ch becomes
really slow (60ms vs 1s).
Not sure if this is a driver, serialization, or networking issue.
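A rough sketch of the chunked, parallel approach described above; `query_usage_for/1` stands in for the real ClickHouse usage query, and the numbers mirror the ones in the description.

```elixir
# Illustrative only: batch site ids into chunks of 1000 and run at most 10
# usage queries concurrently, then sum the per-chunk counts.
def total_usage(site_ids) do
  site_ids
  |> Enum.chunk_every(1000)
  |> Task.async_stream(&query_usage_for/1, max_concurrency: 10, timeout: 15_000)
  |> Enum.reduce(0, fn {:ok, chunk_usage}, total -> total + chunk_usage end)
end
```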
* add new goal suggestions API
* silence credo
* Order suggestions from subqueries explicitly
* allow autoconfiguring goals
Co-authored-by: Adrian Gruntkowski <adrian.gruntkowski@gmail.com>
* Fix form modal tab switching behavior
* add test
* Remove redundant and invalid action link title
---------
Co-authored-by: Adrian Gruntkowski <adrian.gruntkowski@gmail.com>
* refactor filter suggestions with a more DRY approach
* Avoid DRYing props string->atom translation
---------
Co-authored-by: Adrian Gruntkowski <adrian.gruntkowski@gmail.com>
* Add Ecto schema for imported custom events
* Start importing custom events from GA4
* query imported goals
* make it possible to query events metric from imported
* make it possible to query pageviews in goal breakdown
* make it possible to query conversion rate
* fix rate limiting test
* add CR tests for dashboard API
* implement imported link_url breakdown
* override special custom event names coming from GA4
* allow specific goal filters in imported_q
* update GA4 import tests to use Stats API
* Improve tests slightly
* Update CHANGELOG.md
---------
Co-authored-by: Robert Joonas <robertjoonas16@gmail.com>
* Fix event props paygate
The previous code wasn't properly omitting event property filters from
queries.
Discovered while refactoring the code; extracting the fix from the refactor for easier review.
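A minimal sketch of the intended behavior; the `props_enabled?/1` helper, the filter representation (a map keyed by filter name), and the `"event:props:"` prefix are assumptions for illustration.

```elixir
# Illustrative only: strip custom property filters from the query when the
# site's plan doesn't include custom properties, so they never reach ClickHouse.
def maybe_remove_prop_filters(query, site) do
  if props_enabled?(site) do
    query
  else
    filters =
      Map.reject(query.filters, fn {key, _value} ->
        String.starts_with?(key, "event:props:")
      end)

    %{query | filters: filters}
  end
end
```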
* a
* Drop function
* add inline csv fixture
* use new csvs
* cleanup csv reading and site_id replacing
* perform comparisons between native and imported queries
* help help help
* help help
* help
* eh
* fin
* exclude export/import e2e test when experimental_reduced_joins flag is enabled
* adapt to new pageviews
* adapt to experimental_reduced_joins
* credo is formatter
* cleanup
* assert bounce rates equal in city breakdown
* fix rebase against master
* clean-up dataset
* update comment
* fix typo
* apply csv changes to the files
* use session timestamps for exports' dates
---------
Co-authored-by: RobertJoonas <56999674+RobertJoonas@users.noreply.github.com>
* Fix typo in test name
* Update test_helper, enable experimental_session_count together with experimental_reduced_joins
* Return sessions in each time bucket they're active in for hourly/minute timeseries
The behavior is behind the experimental_session_count flag
This results in more accurate visitor counts compared to the previous approach of showing each user
as active only in the _last_ time bucket where they did a pageview.
We're not doing this for monthly/weekly graphs due to the query performance cost and the small effect it has there.
See also https://3.basecamp.com/5308029/buckets/35611491/messages/7085680123
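A conceptual sketch of the bucketing in plain Elixir rather than the real ClickHouse query (the `start` and `timestamp` fields follow the sessions schema; the actual query does the equivalent expansion on the database side): a session contributes a visitor to every hourly bucket it spans, not just the last one.

```elixir
# Conceptual only, not the real query code.
defp truncate_to_hour(ts), do: %{ts | minute: 0, second: 0, microsecond: {0, 0}}

def hourly_buckets(%{start: session_start, timestamp: last_event}) do
  first = truncate_to_hour(session_start)
  last = truncate_to_hour(last_event)

  # every hour between the session's start and its last event, inclusive
  first
  |> Stream.iterate(&NaiveDateTime.add(&1, 3600, :second))
  |> Enum.take_while(&(NaiveDateTime.compare(&1, last) != :gt))
end
```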
* Add tests for new behavior
Note that the new behavior mimics the old one precisely; these tests fail if only
experimental_reduced_joins is on, but not experimental_session_count.
* Type erasure
* Dead comment remove
* Expected_plot change
* keep breakdown prop in the query struct
* Explicitly ignore property param in aggregate and timeseries
Since parameter validation depends on the breakdown property, we need to
make sure it has no unintended effect in endpoints where it isn't
expected.
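A minimal sketch of the idea, with assumed controller and helper names; the point is only that the `property` param is discarded before query parsing in endpoints that don't do breakdowns.

```elixir
# Illustrative only: the aggregate (and likewise timeseries) endpoint drops the
# "property" param so breakdown-specific validation can't affect it.
def aggregate(conn, params) do
  params = Map.delete(params, "property")
  query = Query.from(conn.assigns.site, params)
  json(conn, aggregate_results(query))
end
```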
* Add support for resuming import to GA4 importer
* Handle rate limiting gracefully for all remaining GA4 HTTP requests
* Show notice tooltip for long running imports
* Bump resume job schedule delay to 65 minutes
* Fix tooltip styling
* Rely on con_cache telemetry
Now that https://github.com/sasa1977/con_cache/pull/76
is released, we don't have to use low-level operations
to emit hit/miss events.
This PR also wraps cache processes with
a function returning the appropriate child spec lists.
Ideally, each cache will have its own supervisor/child specs
going forward; this is an intermediate step in that direction.
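A sketch of attaching to the library-emitted events instead of emitting them by hand; the event names, metadata keys, and `record_cache_event/2` helper below are assumptions rather than the verified con_cache API.

```elixir
# Illustrative only: let con_cache emit hit/miss telemetry and just attach a
# handler, instead of wrapping low-level cache operations ourselves.
:telemetry.attach_many(
  "plausible-cache-stats",
  [[:con_cache, :stats, :hit], [:con_cache, :stats, :miss]],
  fn [:con_cache, :stats, event], _measurements, metadata, _config ->
    record_cache_event(metadata[:cache_id], event)
  end,
  nil
)
```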
* Update lib/plausible/application.ex
Co-authored-by: Adrian Gruntkowski <adrian.gruntkowski@gmail.com>
* Declare caches without warmers with plain child specs
---------
Co-authored-by: Adrian Gruntkowski <adrian.gruntkowski@gmail.com>