- `https` requests were getting caught somewhere between nock and metascraper,
causing each test case to hang for 3 seconds
- `http` still tests what we want and is instant
refs https://github.com/TryGhost/Ghost/issues/12420
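
As an illustration, a minimal sketch of the kind of `http` interceptor these tests rely on, assuming nock in a mocha-style suite (the URL, path, and markup are placeholders, not the real fixture):

```js
const nock = require('nock');

beforeEach(function () {
    // intercept the plain-http metadata fetch; the equivalent https
    // interceptor was the one that ended up hanging for 3 seconds
    nock('http://example.com')
        .get('/article')
        .reply(200, '<html><head><title>Example</title></head><body></body></html>', {
            'Content-Type': 'text/html'
        });
});

afterEach(function () {
    nock.cleanAll();
});
```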
- updated `order` bookshelf plugin's `parseOrderOption()` method to return multiple order-related properties
- `order` same as before, a key-value object of property-direction
- `orderRaw` new property that is a raw SQL order string generated from `orderRawQuery()` method in models
- `eagerLoad` new property that is an array of properties the `eagerLoad` plugin should use to join across
- updated `pagination.fetchAll()` to apply normal order + raw order if both are available and to handle eager loading / joins when `options.eagerLoad` is populated
- updated the post model to include details for the email relationship and to add `orderRawQuery()`, which allows `email.open_rate` to be used as an order option (see the sketch below)
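
A rough sketch of the new shape, with hypothetical values - the real output depends on the model and the order option passed in:

```js
// hypothetical result of parseOrderOption('email.open_rate desc,created_at desc', ...)
const parsed = {
    // key-value property/direction map, as before
    order: {created_at: 'DESC'},
    // raw SQL order string generated by the model's orderRawQuery()
    orderRaw: 'emails.open_rate DESC',
    // relations the eagerLoad plugin should join across
    eagerLoad: ['email']
};

// and the post model gains something along these lines (illustrative only):
function orderRawQuery(field, direction) {
    if (field === 'email.open_rate') {
        return `emails.open_rate ${direction}`;
    }
}
```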
- refactoring the acceptance tests to use async-await removes all the Promise
chaining we had and streamlines the coding style across the codebase, so
test files are more alike (illustrated below)
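
An example of the style change (test name, endpoint, and assertion library are illustrative):

```js
// before - promise chaining
it('can browse posts', function () {
    return request.get('/ghost/api/canary/admin/posts/')
        .expect(200)
        .then((res) => {
            should.exist(res.body.posts);
        });
});

// after - async-await, same behaviour with less nesting
it('can browse posts', async function () {
    const res = await request.get('/ghost/api/canary/admin/posts/')
        .expect(200);

    should.exist(res.body.posts);
});
```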
- at the time of writing, the v3 API === canary API
- we have both v3 + canary regression tests, which are nearly the same
but there are slight deviations that we keep missing when adding new
tests
- the canary tests are actually describing functionality of the v3 API
- therefore, we should be ok to delete the v3 regression tests for now
- when v3 is stable, we can copy the canary tests back to v3
no issue
- job registration was checking for submitted emails in its email count, but the registration method is called as soon as an email is created, at which point the email still has a status of 'pending' - this prevented the analytics job from being started until a second email was sent (see the sketch below)
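
A minimal sketch of the fix, with hypothetical helper names - the point is that the count no longer excludes 'pending' emails:

```js
async function registerEmailAnalyticsJobs({getEmailCount, scheduleJobs}) {
    // this runs as soon as an email is created, i.e. while that email is still
    // 'pending', so counting only 'submitted' emails always returned 0 here
    const emailCount = await getEmailCount(); // counts emails in any state

    if (emailCount > 0) {
        scheduleJobs();
    }
}
```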
no issue
- it may be desirable to fully switch email analytics off in certain circumstances; when that happens we want to prevent the related background jobs from running and expose the feature flag via the config endpoint in the Admin API so that clients can adjust accordingly (sketched below)
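
A hedged sketch of the gating, assuming an nconf-style `config.get()` - the flag name follows the description above, but the exact key and wiring are assumptions:

```js
function emailAnalyticsEnabled(config) {
    // treat anything other than an explicit false as "on"
    return config.get('emailAnalytics') !== false;
}

// only start the background jobs when the feature is enabled, and expose the
// same value through the Admin API config endpoint so clients can adapt
// e.g. if (emailAnalyticsEnabled(config)) { initialiseRecurringJobs(); }
```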
no issue
- if emails are older than 30 days we can't fetch any analytics for them, and if a site used emails in the past but is no longer using them it doesn't make sense to keep spinning up potentially expensive background worker threads (see the sketch below)
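
A minimal sketch of the cutoff, assuming a helper that returns the newest email's `created_at` (helper name is hypothetical):

```js
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

async function hasRecentEmails(getLatestEmailCreatedAt) {
    const latest = await getLatestEmailCreatedAt();

    if (!latest) {
        return false; // no emails at all - nothing to analyse
    }

    // emails older than 30 days have no analytics left to fetch,
    // so don't keep spinning up worker threads for them
    return (Date.now() - new Date(latest).getTime()) < THIRTY_DAYS_MS;
}
```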
no issue
- recurring jobs spin up worker threads, which can be quite CPU intensive even when not performing much processing; this can be problematic in environments where many Ghost instances are running
- updated the email job scheduling to be skipped on bootup when there are no emails in the database and to be started when the first email is created, as long as we're not in the testing env (sketched below)
- increased the analytics job schedule from every 2 minutes to every 5 minutes to help spread the load further across instances
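
Roughly how that wiring could look - the event name and helpers are illustrative, not the exact Ghost internals:

```js
async function initEmailAnalyticsJobs({getEmailCount, scheduleAnalyticsJobs, events}) {
    if (process.env.NODE_ENV === 'testing') {
        return; // never start background jobs in the test environment
    }

    if (await getEmailCount() > 0) {
        // the site has sent emails before, so schedule straight away on bootup
        return scheduleAnalyticsJobs();
    }

    // otherwise wait until the first email is created
    events.once('email.added', scheduleAnalyticsJobs);
}
```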
no issue
- typically cron/later schedules fire at :00 seconds on the minute, which would create API spikes with every members-email-using Ghost site hitting the API at the same time
- adjusted the scheduling to use cron syntax so the job runs every 2 minutes on odd or even minutes (1,3,5... or 2,4,6...) with a random seconds value, smoothing usage across sites (see the sketch below)
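
An illustrative version of the schedule (the exact expression and scheduling call live in the job code):

```js
// six-field cron: second minute hour day-of-month month day-of-week
const second = Math.floor(Math.random() * 60);          // random per boot
const minutes = Math.random() < 0.5 ? '1-59/2' : '*/2'; // odd or even minutes

const schedule = `${second} ${minutes} * * * *`;        // e.g. "37 1-59/2 * * * *"
// jobsService.scheduleJob(schedule, fetchLatestJob);   // hypothetical call
```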
no refs
Ghost's Portal script is loaded via unpkg, which until now was pinned to `@latest`, so unpkg auto-resolved to the latest released Portal version. This allowed fast iteration on Portal while it was still in active beta development, with every site testing the latest Portal releases.
Going forward, Portal will be pinned to the latest specific minor version, which allows releasing new features that are not backwards compatible without affecting older Ghost releases.
Note: all previous Ghost releases with Portal `@latest` will continue to resolve to the latest version, and sites will need to update to the latest Ghost 3.x to use all Portal features.
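
Roughly what the change means for the script URL (path and version number are illustrative, not the exact values Ghost uses):

```js
// before: always resolves to the newest published Portal
const latestUrl = 'https://unpkg.com/@tryghost/portal@latest/umd/portal.min.js';

// after: pinned to a minor range, so a breaking Portal release
// doesn't reach older Ghost versions (version is an example)
const pinnedUrl = 'https://unpkg.com/@tryghost/portal@~0.5.0/umd/portal.min.js';
```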
- this test was present in the v3 test suite but not in the canary ones
- given v3 == canary currently, these tests should essentially be the
same
- copied the test over
no issue
- it's possible background jobs may cause unintended side effects, so it's useful to have a kill-switch to disable them individually and keep sites working
no issue
- some tests started failing locally because example.com was not resolvable
- mocked dns lookup for all tests so that we're always testing against expected public/private IP address blocks
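
A sketch of how the mock could look in the shared test setup, assuming sinon (the address is just an example of an expected public block):

```js
const dns = require('dns');
const sinon = require('sinon');

before(function () {
    sinon.stub(dns, 'lookup').callsFake(function (hostname, options, callback) {
        if (typeof options === 'function') {
            callback = options; // lookup(hostname, callback) form
        }
        // always resolve to a known public address so tests never depend on
        // real DNS and public/private block checks behave predictably
        callback(null, '123.123.123.123', 4);
    });
});

after(function () {
    sinon.restore();
});
```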
- users imported from CSV with no created_at date were having their created_at date stored as an int rather than a datetime.
- this was causing parsing issues with the graph, so this commit fixes the formatting (sketched below)
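
A minimal sketch of the formatting idea, assuming moment is available (function name is hypothetical; the real fix lives in the members import/stats code):

```js
const moment = require('moment');

function normaliseCreatedAt(value) {
    // epoch ints and date strings are both coerced to a proper datetime string
    // so the members graph can parse them consistently
    return moment(value).format('YYYY-MM-DD HH:mm:ss');
}

// e.g. normaliseCreatedAt(1598880000000) returns a datetime string, not an int
```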
closes #12083
- fixes a parsing issue where negative offset values for sqlite databases incorrectly had the + sign added regardless of the actual offset
- for mysql databases, absolute values of the offset are taken with the sign applied where appropriate; this stops issues where both hours and minutes could be negative, which could present offsets as -2:-30 and, by the look of the code, also trigger extra padding resulting in -2:-030 rather than the expected -2:30 (see the sketch below)
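
A hedged reconstruction of the formatting logic described above (function name and exact output format are illustrative):

```js
function formatOffset(hours, minutes) {
    // take absolute values and apply the sign once, so a negative offset can
    // never end up with a negative - and then re-padded - minutes component
    const sign = (hours < 0 || minutes < 0) ? '-' : '+';
    const paddedMinutes = String(Math.abs(minutes)).padStart(2, '0');

    return `${sign}${Math.abs(hours)}:${paddedMinutes}`;
}

// formatOffset(-2, -30) => '-2:30' rather than '-2:-030'
// formatOffset(5, 30)   => '+5:30'
```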
no issue
- the 4.2.0 version of `sqlite3` that we're using is not compatible with `worker_threads`
- 5.0.0 should add support for this but there are other errors
- 5.0.1 is released but not published (https://github.com/mapbox/node-sqlite3/issues/1386)
- we export i18n from `core/frontend/services/proxy` and this is used in
most places in the frontend code
- this commit aligns the rest of the code in core/frontend to use the
proxy too
- unfortunately core/frontend/services/themes/i18n.js loops back to the
proxy so we have a circular dependency
no issue
- added `EmailAnalyticsService`
- `.fetchAll()` grabs and processes all available events
- `.fetchLatest()` grabs and processes all events since the last seen event timestamp
- `EventProcessor` is passed event objects and updates `email_recipients` or `members` records depending on whether the event is an analytics or list-hygiene event
- always returns an `EventProcessingResult` instance so that progress can be tracked and merged across individual events, batches (pages of events), and total runs
- adds email_id and member_id to the returned result where appropriate so that the stats aggregator can limit processing to data that has changed
- sets `email_recipients.{delivered_at, opened_at, failed_at}` for analytics events
- sets `members.subscribed = false` for permanent failure/unsubscribed/complained list hygiene events
- `StatsAggregator` takes an `EventProcessingResult`-like object containing arrays of email ids and member ids on which to aggregate statistics.
- jobs for `fetch-latest` and `fetch-all` ready for use with the JobsService
- added `initialiseRecurringJobs()` function to Ghost bootup procedure that schedules the email analytics "fetch latest" job to run every minute
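
A minimal sketch of the result-merging idea - the real `EventProcessingResult` has more fields and lives alongside the service; the names here are illustrative:

```js
class EventProcessingResult {
    constructor(counts = {}) {
        this.delivered = counts.delivered || 0;
        this.opened = counts.opened || 0;
        this.permanentFailed = counts.permanentFailed || 0;
        this.unsubscribed = counts.unsubscribed || 0;
        this.complained = counts.complained || 0;
        this.emailIds = counts.emailIds || [];
        this.memberIds = counts.memberIds || [];
    }

    merge(other) {
        // accumulate per-event and per-batch results into a running total...
        this.delivered += other.delivered;
        this.opened += other.opened;
        this.permanentFailed += other.permanentFailed;
        this.unsubscribed += other.unsubscribed;
        this.complained += other.complained;

        // ...and keep a de-duplicated list of the emails/members that changed,
        // so the stats aggregator only has to touch affected rows
        this.emailIds = [...new Set([...this.emailIds, ...other.emailIds])];
        this.memberIds = [...new Set([...this.memberIds, ...other.memberIds])];

        return this;
    }
}
```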