This commit updates mix.exs to resolve bamboo_postmark to our fork. The
fork encodes names with quotes when building e-mails, adding support for
special names with commas and quotes. Related to
plausible/bamboo_postmark#1.
Closes #1885
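For context, a minimal sketch of the quoting approach (hypothetical code, not the fork's actual implementation): a display name containing a comma or a quote is wrapped in double quotes and its inner quotes escaped before building the `"Name" <email>` header.

```elixir
# Hypothetical sketch of RFC 5322-style display-name quoting; the fork's
# real implementation may differ.
defmodule QuoteNameSketch do
  def format_recipient(name, email) do
    if String.contains?(name, [",", "\"", ";"]) do
      escaped = String.replace(name, "\"", "\\\"")
      ~s("#{escaped}" <#{email}>)
    else
      "#{name} <#{email}>"
    end
  end
end

# QuoteNameSketch.format_recipient(~s(Doe, Jane), "jane@example.com")
# => "\"Doe, Jane\" <jane@example.com>"
```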
* Implement sites by domain caching interface + warmer
* Add test
* Implement hit rate interface
* Add moduledocs
* Fix up typespec
* s/warmer/warmer_fn
* Extract measure_duration/2
* Fix up typespec
* Log errors and return nil on cache internal errors
* Fix up non-existing cache test
* Retrieve specific db columns when pre-filling the cache
* Reduce the subset of fields retrieved from the DB
See 63f3c6233d (r89871536)
This pull request improves the current OpenTelemetry implementation. Currently, only 1% of spans are sent, due to the high volume of ingestion requests to /api/event. I enabled the 1% sampling for /api/event only, recording 100% of the other traces.
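A conceptual sketch of the per-path sampling decision (this is not the actual OpenTelemetry sampler API; module and function names are made up for illustration):

```elixir
# Conceptual sketch only: ingestion requests to /api/event keep ~1% of
# spans, every other request is recorded in full.
defmodule PathSamplingSketch do
  @ingest_path "/api/event"
  @ingest_ratio 0.01

  def keep_span?(request_path) do
    if request_path == @ingest_path do
      :rand.uniform() <= @ingest_ratio
    else
      true
    end
  end
end
```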
* Update Timex version from 3.7.7 to 3.7.8
* Generate timezone list from Tzdata
This commit fixes a bug where timezone changes weren't updating the
timezone list displayed when editing or creating a site.
Timezones were being pulled from a static list. This commit changes it
to generate the list from Tzdata, that uses a timezone database with
updated information on time changes. Additionally it adds more timezones
with aliases and links to the list.
Closes #1340
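A minimal sketch of generating such a list from Tzdata (assuming Tzdata.zone_list/0, which returns all zone names including aliases/links; the actual option-building code in the app may differ):

```elixir
defmodule TimezoneOptionsSketch do
  # Hedged sketch: build the timezone dropdown from Tzdata instead of a
  # static list; labels here are simplified.
  def options do
    Tzdata.zone_list()
    |> Enum.sort()
    |> Enum.map(fn zone -> {zone, zone} end)
  end
end
```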
* Use timezone name from browser to recommend timezone
This commit matches the timezone name instead of offset to recommend a
timezone when creating a new site. The JavaScript Intl.DateTimeFormat
API is widely supported according to the link below. In any case, if the
timezone fails to match by name, it falls back to the offset strategy.
https://caniuse.com/mdn-javascript_builtins_intl_datetimeformat_resolvedoptions_computed_timezone
Closes #904
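A conceptual sketch of the fallback logic (the real recommendation runs in the browser form via Intl.DateTimeFormat; the field names here are illustrative):

```elixir
defmodule TimezoneRecommendSketch do
  # Prefer an exact name match; fall back to matching the UTC offset.
  def recommend(options, browser_name, browser_offset) do
    Enum.find(options, fn tz -> tz.name == browser_name end) ||
      Enum.find(options, fn tz -> tz.offset == browser_offset end)
  end
end
```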
* Create separate module for GA HTTP requests
* Fetch GA data entirely instead of monthly
* Add buffering to GA imports
* Change positional args to maps when serializing from GA
* Create Google Analytics VCR tests
* Upgrade geolix
* Remove geolix pool config
* Save unnecessary Task.async_stream roundtrip
Normally the Geolix API accepts a `:where` keyword option that designates
the database to look up. If no option is supplied, it spawns a parallel
map over all available databases. In our case there is only one DB anyway,
so there is no need for the extra Task.async_stream round trip.
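For illustration, a direct lookup against a single named database (assuming the database is registered under the `:geolocation` id, as the follow-up commit suggests):

```elixir
# Skip Geolix's Task.async_stream fan-out over all registered databases
# by naming the single database we actually have.
Geolix.lookup({1, 1, 1, 1}, where: :geolocation)
```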
* Follow up on direct :geolocation lookups
* Introduce Finch for Sentry integration
* Make sure the DummyAgent can be started
* No need to sanitize the dsn, finch takes care of that
* Simplify the dummy child spec
* Annotate redirects clause
* Make use of new `get_int_from_path_or_env`
* Actually use finch in Sentry config
* Configure `excluded_domains` correctly for Sentry
The way Sentry is configured currently, when we get an HTTP error it
will be logged twice - once from Sentry.PlugCapture and once from
Sentry.LoggerBackend. The logger backend module does the right thing
by default, but for some reason we've been overriding the config
parameter that by default stops double-counting errors. This commit
returns to the default configuration, which is better.
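A hedged sketch of the relevant configuration (assuming sentry-elixir's LoggerBackend, whose `excluded_domains` option defaults to excluding the logger domain used by Sentry.PlugCapture, preventing double reports):

```elixir
# config/config.exs - register the backend and leave excluded_domains at
# its default so events already captured by Sentry.PlugCapture are not
# reported a second time by the logger backend.
import Config

config :logger, backends: [:console, Sentry.LoggerBackend]
```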
* Default to 15s timeout
* Attempt to send twice at most
* Warn in sentry client
* Use warn level in sentry client
Co-authored-by: Adam Rutkowski <hq@mtod.org>
* Add has_imported_stats boolean to Site
* Add Google Analytics import panel to general settings
* Get GA profiles to display in import settings panel
* Add import_from_google method as entrypoint to import data
* Add imported_visitors table
* Remove conflicting code from migration
* Import visitors data into clickhouse database
* Pass another dataset to main graph for rendering in red
This adds another entry to the JSON data returned via the main graph API
called `imported_plot`, which is similar to `plot` in form but will be
completed with previously imported data. Currently it simply returns
the values from `plot` / 2. The data is rendered in the main graph in
red without fill, and without an indicator for the present. Rationale:
imported data will not continue to grow so there is no projection
forward, only backwards.
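A rough sketch of the response shape this implies (field names other than `plot` and `imported_plot` are illustrative, not taken from the actual controller):

```elixir
defmodule MainGraphPayloadSketch do
  # Illustrative only: the payload gains an imported_plot series with the
  # same shape as plot, filled from previously imported data.
  def payload(labels, plot, imported_plot) do
    %{labels: labels, plot: plot, imported_plot: imported_plot}
  end
end
```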
* Hook imported GA data to dashboard timeseries plot
* Add settings option to forget imported data
* Import sources from google analytics
* Merge imported sources when queried
* Merge imported source data and native data when querying sources
* Start converting metrics to atoms so they can be subqueried
This changes "visitors" and in some places "sources" to atoms. This does
not change the behaviour of the functions - the tests all pass unchanged
following this commit. This is necessary as joining subqueries requires
that the keys in `select` statements be atoms and not strings.
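A hedged Ecto sketch of why atom keys matter when joining subqueries (table and column names here are simplified, not the app's real schema):

```elixir
import Ecto.Query

# Keys selected into the maps below must be atoms (:page, :visitors);
# string keys cannot be referenced when joining the subqueries.
native =
  from(e in "events",
    group_by: e.pathname,
    select: %{page: e.pathname, visitors: count(e.user_id, :distinct)}
  )

imported =
  from(i in "imported_pages",
    group_by: i.page,
    select: %{page: i.page, visitors: sum(i.visitors)}
  )

merged =
  from(n in subquery(native),
    full_join: i in subquery(imported),
    on: n.page == i.page,
    select: %{
      page: coalesce(n.page, i.page),
      visitors: coalesce(n.visitors, 0) + coalesce(i.visitors, 0)
    }
  )
```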
* Convert GA (direct) source to empty string
* Import utm campaign and utm medium from GA
* format
* Import all data types from GA into new tables
* Handle larger amounts of data more safely
* Fix some mistakes in tables
* Make GA requests in chunks of 5 queries
* Only display imported timeseries when there is no filter
* Correctly show last 30 minutes timeseries when 'realtime'
* Add with_imported key to Query struct
* Account for injected :is_not filter on sources from dashboard
* Also add tentative imported_utm_sources table
This needs a bit more work on the google import side, as GA does not
report sources and UTM sources as distinct things.
* Return imported data to dashboard for rest of Sources panel
This extends the merge_imported function definition for sources to
utm_sources, utm_mediums and utm_campaigns too. This appears to be
working on the DB side but something is incomplete on the client side.
* Clear imported stats from all tables when requested
* Merge entry pages and exit pages from imported data into unfiltered dashboard view
This requires converting the `"visits"` and `"visit_duration"` metrics
to atoms so that they can be used in ecto subqueries.
* Display imported devices, browsers and OSs on dashboard
* Display imported country data on dashboard
* Add more metrics to entries/exits for modals
* make sure data is returned via API with correct keys
* Import regions and cities from GA
* Capitalize device upon import to match native data
* Leave query limits/offsets until after possibly joining with imported data
* Also import timeOnPage and pageviews for pages from GA
* imported_countries -> imported_locations
* Get timeOnPage and pageviews for pages from GA
These are needed for the pages modal, and for calculating exit rates for
exit pages.
* Add indicator to dashboard when imported data is being used
* Don't show imported data as a separate line on main graph
* "bounce_rate" -> :bounce_rate, so it works in subqueries
* Drop imported browser and OS versions
These are not needed.
* Toggle displaying imported data by clicking indicator
* Parse referrers with RefInspector
- Use 'ga:fullReferrer' instead of 'ga:source'. This provides the actual
referrer host + path, whereas 'ga:source' includes utm_mediums and
other values when relevant.
- 'ga:fullReferrer' does however include search engine names directly,
so they are manually checked for, as RefInspector won't pick up on
these (see the sketch below).
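A hedged sketch of that split (the engine list and fallback value are illustrative; RefInspector.parse/1 is the real library call):

```elixir
defmodule ReferrerSketch do
  # Engine names listed here are illustrative, not the app's actual list.
  @search_engines ~w(google bing duckduckgo yahoo)

  # ga:fullReferrer may be a bare engine name ("google") or a real
  # referrer host + path; only the latter is parseable by RefInspector.
  def parse(full_referrer) do
    if full_referrer in @search_engines do
      String.capitalize(full_referrer)
    else
      case RefInspector.parse("https://" <> full_referrer) do
        %{source: :unknown} -> full_referrer
        %{source: source} -> source
      end
    end
  end
end
```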
* Keep imported data indicator on dashboard and strikethrough when hidden
* Add unlink google button to import panel
* Rename some GA browsers and OSes to plausible versions
* Get main top pages and exit pages panels working correctly with imported data
* mix format
* Fetch time_on_pages for imported data when needed
* entry pages need to fetch bounces from GA
* "sample_percent" -> :sample_percent as only atoms can be used in subqueries
* Calculate bounce_rate for joined native and imported data for top pages modal
* Flip some query bindings around to be less misleading
* Fixup entry page modal visit durations
* mix format
* Fetch bounces and visit_duration for sources from GA
* add more source metrics used for data in modals
* Make sources modals display correct values
* imported_visitors: bounce_rate -> bounces, avg_visit_duration -> visit_duration
* Merge imported data into aggregate stats
* Reformat top graph side icons
* Ensure sample_percent is yielded from aggregate data
* filter event_props should be strings
* Hide imported data from frontend when using filter
* Fix existing tests
* fix tests
* Fix imported indicator appearing when filtering
* comma needed, lost when rebasing
* Import utm_terms and utm_content from GA
* Merge imported utm_term and utm_content
* Rename imported Countries data as Locations
* Set imported city schema field to int
* Remove utm_terms and utm_content when clearing imported
* Clean locations import from Google Analytics
- Country and region should be set to "" when GA provides "(not set)"
- City should be set to 0 for "unknown", as we cannot reliably import
city data from GA (see the sketch below).
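A minimal sketch of that mapping (function and field names are illustrative):

```elixir
defmodule LocationCleanSketch do
  # Normalise GA's placeholder values before insertion.
  def clean(country, region, city) do
    %{
      country: if(country == "(not set)", do: "", else: country),
      region: if(region == "(not set)", do: "", else: region),
      # city data cannot be imported reliably, so unknown cities become 0
      city: if(city in ["(not set)", "unknown"], do: 0, else: city)
    }
  end
end
```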
* Display imported region and city in dashboard
* os -> operating_system in some parts of code
The inconsistency of using os in some places and operating_system in
others causes trouble with subqueries and joins for the native and
imported data, which would require additional logic to account for. The
simplest solution is to just use a consistent word for all uses. This
doesn't make any user-facing or database changes.
* to_atom -> to_existing_atom
* format
* "events" metric -> :events
* ignore imported data when "events" in metrics
* update "bounce_rate"
* atomise some more metrics from new city and region api
* atomise some more metrics for email handlers
* "conversion_rate" -> :conversion_rate during csv export
* Move imported data stats code to own module
* Move imported timeseries function to Stats.Imported
* Use Timex.parse to import dates from GA
* has_imported_stats -> imported_source
* "time_on_page" -> :time_on_page
* Convert imported GA data to UTC
* Clean up GA request code a bit
There was some weird logic here with two separate lists that really
ought to be together, so this merges those.
* Fail sooner if GA timezone can't be identified
* Link imported tables to site by id
* imported_utm_content -> imported_utm_contents
* Import GA data from all of time
* Reorganise GA data fetch logic
- Fetch data from the start of time (2005)
- Check whether no data was fetched, and if so, inform user and don't
consider data to be imported.
* Clarify removal of "visits" data when it isn't in metrics
* Apply location filters from API
This makes it consistent with the sources etc., which filter out 'Direct /
None' on the API side. These filters are used by both the native and
imported data handling code, which would otherwise both duplicate the
filters in their `where` clauses.
* Do not use changeset for setting site.imported_source
* Add all metrics to all dimensions
* Run GA import in the background
* Send email when GA import completes
* Add handler to insert imported data into tests and imported_browsers_factory
* Add remaining import data test factories
* Add imported location data to test
* Test main graph with imported data
* Add imported data to operating systems tests
* Add imported data to pages tests
* Add imported data to entry pages tests
* Add imported data to exit pages tests
* Add imported data to devices tests
* Add imported data to sources tests
* Add imported data to UTM tests
* Add new test module for the data import step
* Test import of sources GA data
* Test import of utm_mediums GA data
* Test import of utm_campaigns GA data
* Add tests for UTM terms
* Add tests for UTM contents
* Add test for importing pages and entry pages data from GA
* Add test for importing exit page data
* Fix module file name typo
* Add test for importing location data from GA
* Add test for importing devices data from GA
* Add test for importing browsers data from GA
* Add test for importing OS data from GA
* Paginate GA requests to download all data
* Bump clickhouse_ecto version
* Move RefInspector wrapper function into module
* Drop timezone transform on import
* Order imported by site_id then date
* More strings -> atoms
Also changes a conditional to be a bit nicer
* Remove parallelisation of data import
* Split sources and UTM sources from fetched GA data
GA has only a "source" dimension and no "UTM source" dimension. Instead
it returns these combined. The logic to tease these apart is (see the sketch after the list):
1. "(direct)" -> it's a direct source
2. if the source is a domain -> it's a source
3. "google" -> it's from adwords; let's make this a UTM source "adwords"
4. else -> just a UTM source
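A hedged sketch of that decision (return values and the domain check are illustrative stand-ins, not copied from the actual import code):

```elixir
defmodule SourceSplitSketch do
  # Tease apart GA's combined source dimension following the rules above;
  # the dot check is a crude stand-in for "looks like a domain".
  def split(ga_source) do
    cond do
      ga_source == "(direct)" -> {:source, nil}
      String.contains?(ga_source, ".") -> {:source, ga_source}
      ga_source == "google" -> {:utm_source, "adwords"}
      true -> {:utm_source, ga_source}
    end
  end
end
```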
* Keep prop names in queries as strings
* fix typo
* Fix import
* Insert data to clickhouse in batches
* Fix link when removing imported data
* Merge source tables
* Import hostname as well as pathname
* Record start and end time of imported data
* Track import progress
* Fix month interval with imported data
* Do not JOIN when imported date range has no overlap
* Fix time on page using exits
Co-authored-by: mcol <mcol@posteo.net>
* WIP
* Actually activate the user
* Send email verification codes
* Send activation code with email
* Only show onboarding steps during first site creation
* Add worker to config
* Consistent form styles
* Send welcome email when user activates account
* Add changelog entry
* Use https in new site form
* Correct spelling in email
* Ability to add event metadata
* Close Dropdown on outside click
* Show (none) value in metadata breakdown
* Allow filtering for metadata key/val pairs
* Use correct clickhouse_ecto
* Better naming for meta filter
* Add tests
* Add changelog entry
* Remove change made for testing
* Show utm_medium, utm_source, and utm_campaign in sources modal
* Allow filtering by UTM tags
* Integrate filters with URL bar
* Add CHANGELOG entry
* Refresh modal when filter changes
* Remove Direct / None from campaign results
* Add UTM tabs to top sources report
* Add pagination
* Remove dropdown from top sources popup
* Fix bug in clickhouse_ecto
* Remove referrers_for_goal
* Make sure UTM tags work OK with goals
* Make source tab selection sticky
* Consistent styling for devices and source tabs
* Add noref in realtime to utm tabs
* Fix tests
* Use clickhouse-ecto for stats
* Use clickhouse ecto instead of low-level clickhousex
* Remove defaults from event schema
* Remove all references to Clickhousex
* Document configuration change
* Ensure createdb and migrations can be run in a release
* Remove config added for debug
* Update plausible_variables.sample.env
* Use oban for work
* Remove mix task for email reports
* Convert remaining tasks to Oban workers
* Add crontab for scheduled jobs
* Only enable cron if configured to do so
* first commit with test and compile job
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* adding 'prepare' stage
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* updated ci script to include "test" compile phase
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* adding environment variables for connecting to postgresql
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* updated ci config for postgres
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* using non-alpine version of elixir
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* re-using the 'compile' artifacts and added explicit env variables for testing
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* removing redundant deps fetching from common code
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* formatting using mix format -- formatting only, no code changes!
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* added release config
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* adding consistent env variable for Database
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* more cleaning up of environment variables
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* Adding releases config for enabling releases
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* cleaning up env configs
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* Cleaned up config and prepared config for releases
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* updated CI script with new config for test
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* Added Dockerfile for creating production docker image
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* Adding "docker" build job yay!
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* using non-slim version of debian and installing webpack
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* Adding overlays for migrations on releases
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* restricting the docker build to master branch only
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* typo fix
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* adding "Hosting.md" to explain hosting instructions
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* removed the default comments
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* Added documentation related to env variables
* updated documentation and fixed typo
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* updated documentation
* Bumping up Elixir version, as `overlays` are only supported in the latest version;
see the release notes: https://github.com/elixir-lang/elixir/releases/tag/v1.10.0
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* Adding tarball assembly during release
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* updated HOSTING.md
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* Added support for db migration
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* minor corrections
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* initializing admin user
The admin user is now created during the "migration" phase. One can provide the related env variables to control it; otherwise a default user is created automatically.
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* Initial base domain update - phase#1
These changes are only meant to make it operate correctly under self-hosting. There are many other cosmetic changes that require updates to email, site and other places where the original website and author are referenced.
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* Using dedicated config variable `base_domain` instead
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* adding base_domain to releases config
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* removing the dedicated config "base_domain", relying on endpoint host
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* Removed the usage of "Mix" in code!
It is bad practice to use the `Mix` module inside application code, as the module is unavailable in an actual release. This replaces it with a config environment variable.
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
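A hedged sketch of the usual replacement pattern (the `:environment` key name is illustrative, not necessarily what this commit used):

```elixir
# config/config.exs (evaluated at build time, where Mix is still available):
import Config

config :plausible, :environment, Mix.env()

# In application code, read the value back at runtime instead of calling
# Mix.env/0, since Mix is not available inside a release:
Application.get_env(:plausible, :environment)
```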
* Added support for SMTP via Bamboo Smtp Adapter
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* Capturing SMTP errors via Sentry
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* Minor updates
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* Adding junit formatter -- useful for generating test reports
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* adding documentation for default user
* Resolve "Gitlab Adoption: Add supported services in "Security & Compliance""
* bumping up the debian version to fix issues
fixing some vulnerabilities identified by the scanning tools
* More updates for self-hosting
Changes in most places to suit self-hosting, although some have been left out.
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* quick-dirty-fix!
* bumping up the db connect timeout
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* bumping up the db connect timeout
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* bumping up the db connect timeout
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* bumping up timeout - skipping MRs :-/
* removing restrictions on watching for changes
this stuff isn't working
* Update HOSTING.md
* renamed the module
* reverting formatting-whitespace changes
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* reverting the name to release
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* adding docker-compose.yml and related instructions
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* using `plausible_url` instead of assuming `https`
This is because it is much easier to test on local dev machines, and in most cases there is already a layer above that handles `https` termination and the http -> https upgrade.
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* WIP: merging changes from upstream
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* wip: more changes
* Pushing in changes from upstream
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* changes to ci for testing
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* cleaning up and finishing clickhouse integration
Signed-off-by: Chandra Tungathurthi <tckb@tgrthi.me>
* updating readme with hosting details
* Get stats from clickhouse
* Pull stats from clickhouse
* Use correct Query namespace
* Use Clickhouse in unit tests
* Use Clickhouse in stats controller tests
* Use fixtures for unit tests
* Add Clickhouse to travis
* Use Clickhouse session store for sessions
* Add garbage collection to session store
* Reload session state from Clickhouse on server restart
* Query from sessions table
* Trap exits in event write buffer
* Run hydration without starting the whole app
* Make session length 30 minutes
* Revert changes to fingerprint schema
* Remove clickhouse from fingerprint sessions
* Flush buffers before shutdown
* Use old stats when merging
* Remove old session schema
* Fix tests with CH sessions
* Add has_pageviews? to Stats
* Use CH in staging
* Update schema
* Fix test setup
* WIP
* Get all stats from Clickhouse
* Use https dependency
* Do not namespace db tables in hydrate task
* Update hydration task
* Ingest data to clickhouse
* Double-write to both Clickhouse and Postgres
* Add test setup
* Keep old stats module
* Prepare for live ingestion test
* Create shared links UI
* Show shared links in a list on settings page
* Show dropdown for each shared link
* Show icon actions for shared links
* Log user in when they click non-password-protected shared link
* Actually authenticate using a password
* Delete shared links