no ref
Added Prometheus metrics for the job queue throughput and email analytics throughput. We'll likely keep these around as good metrics to keep an eye on, though for the moment their primary function is to establish a baseline for users without the job queue enabled so we can observe the full impact once it's switched on.
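For illustration, a minimal sketch of registering such throughput counters with `prom-client`; the metric names and helper functions here are assumptions, not the ones actually added:
```js
const client = require('prom-client');

// Hypothetical metric names, for illustration only
const jobQueueProcessed = new client.Counter({
    name: 'ghost_job_queue_jobs_processed_total',
    help: 'Jobs processed by the background job queue'
});

const emailAnalyticsEvents = new client.Counter({
    name: 'ghost_email_analytics_events_processed_total',
    help: 'Email analytics events fetched and processed'
});

// Called from wherever the work actually happens
function recordJobProcessed() {
    jobQueueProcessed.inc();
}

function recordAnalyticsBatch(eventCount) {
    emailAnalyticsEvents.inc(eventCount);
}
```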
- using lodash to do this is unnecessarily heavy, so this commit
switches the code to the equivalent native version
- as mentioned in the comment I added, I think we can further optimize
this by storing it as a Set and then calling `Array.from` once, but
that's a step too far for now
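The commit doesn't spell out the exact lodash call, but assuming it was a `_.uniq`-style deduplication, the swap (and the Set idea mentioned above) looks roughly like this:
```js
const _ = require('lodash');

const posts = [{slug: 'a'}, {slug: 'b'}, {slug: 'a'}]; // placeholder input
const slugs = posts.map(post => post.slug);

// lodash version (heavier than needed)
const uniqueSlugs = _.uniq(slugs);

// equivalent native version
const uniqueSlugsNative = [...new Set(slugs)];

// possible further optimisation mentioned above: accumulate into a Set
// while building, then call Array.from exactly once at the end
const slugSet = new Set();
for (const post of posts) {
    slugSet.add(post.slug);
}
const uniqueSlugsFromSet = Array.from(slugSet);
```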
ref https://linear.app/tryghost/issue/ENG-1556/
- added background job queue behind config flags
- when enabled, it is only used for the member email analytics updates,
in order to speed up the parent job and take load off of the main
process that is serving requests
The intent here is to decouple certain code paths from the main process where running them there is unnecessary, or worse, where they're part of the request. The primary use case is email analytics (particularly the member stats [open rate]), which are not particularly helpful in the period immediately following an email send, while the click traffic and delivered/opened events are.
Relatedly, the email link clicks themselves send off a cascade of events that are currently quite a burden on the main process and are somewhat tied to the request/response cycle when they needn't be. We'll be looking to tackle that after some initial testing with the email analytics job.
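A rough sketch of how the flag gating might look; the config key, queue API and service names below are hypothetical stand-ins rather than the actual implementation:
```js
// Config key, queue API and service names are hypothetical stand-ins
const config = require('../../shared/config');

function makeMemberAnalyticsUpdater({jobQueue, membersService}) {
    return async function updateMemberAnalytics(memberId) {
        if (config.get('backgroundJobs:emailAnalytics')) {
            // hand the update off to the background queue so the parent
            // job (and any in-flight requests) aren't blocked by it
            await jobQueue.add({
                name: `update-member-email-analytics-${memberId}`,
                job: () => membersService.updateEmailOpenRate(memberId)
            });
        } else {
            // previous behaviour: do the work inline on the main process
            await membersService.updateEmailOpenRate(memberId);
        }
    };
}
```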
no ref
We had an instance where this was a millisecond off, and I should've used
mock timers when I first wrote this. This should prevent any rare clock
mishaps.
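A minimal sketch of the mock-timer approach with sinon (the test body here is illustrative, not the actual test that changed):
```js
const assert = require('assert');
const sinon = require('sinon');

describe('timestamp handling', function () {
    let clock;

    beforeEach(function () {
        // Freeze the clock so comparisons can't be a millisecond off
        clock = sinon.useFakeTimers(new Date('2024-01-01T00:00:00.000Z'));
    });

    afterEach(function () {
        clock.restore();
    });

    it('sees exactly the time we advance to', function () {
        const start = Date.now();
        clock.tick(5000); // advance exactly 5 seconds
        assert.equal(Date.now() - start, 5000);
    });
});
```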
ref https://github.com/TryGhost/Ghost/pull/20835
- reimplemented the email analytics changes that prioritized opened events
over other events in order to speed up open analytics
- added db persistence to the fetch-missing job to ensure we re-fetch every
window of events, which is especially important if we restart following a
large email batch
We learned a few things from the previous trial run of this, namely that
event throughput on particularly large databases is not as high as we
initially saw in the data. This set of changes is more conservative,
while a touch more complicated, in ensuring we capture edge cases for
really large newsletter sends (100k+ members).
In general, we want to make sure we're fetching new open events at least
every 5 mins, and often much faster than that, unless it's a quiet
period (suggesting we haven't had a newsletter send or much outstanding
event data).
ref https://linear.app/tryghost/issue/ENG-1518
After releasing the analytics job improvements, it appears that for large
sites we're awfully close to missing some Mailgun events because of an
unexpected behavior of the aggregateStats call in the opened-events job
alone. It is taking 2-5x (or more) the time that the aggregate queries
take for the other jobs, despite not being dependent on the events.
To err on the side of caution, we're going to roll this back and look to
optimize the aggregation queries before re-implementing. And we may be a
bit more cautious in giving _some_ but not _all_ priority to the
`opened` events.
ref https://linear.app/tryghost/issue/ENG-952
- added persistence to the job timestamps
This set of changes reduces the potential for gaps in our email event
processing by adding persistence to the job timestamps. This avoids
expensive queries on the `email_recipients` table after every boot, and
reduces reliance on fallbacks in periods of heavy processing or reboot.
This is our first use of the jobs table to store persistent state,
instead of its initial use case of single-run jobs. We may expand this
capability and move to using the jobs model over knex.raw in order to
make this a bit friendlier.
Note: this works with SQLite, though datetimes are stored as integers. It
still works fine. https://github.com/knex/knex/pull/5272
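Roughly what the knex.raw-based persistence looks like; the table/column names below are illustrative rather than the exact schema, and the response handling assumes MySQL's shape (SQLite returns rows directly and, per the note above, may store the datetime as an integer):
```js
// Illustrative sketch: column names and job naming are assumptions
async function getLastEventTimestamp(knex, jobName) {
    const [rows] = await knex.raw(
        'SELECT finished_at FROM jobs WHERE name = ? LIMIT 1',
        [jobName]
    );

    if (!rows || !rows.length) {
        return null;
    }
    // new Date() copes with both a datetime string and an integer value
    return new Date(rows[0].finished_at);
}

async function setLastEventTimestamp(knex, jobName, timestamp) {
    await knex.raw(
        'UPDATE jobs SET finished_at = ? WHERE name = ?',
        [timestamp, jobName]
    );
}
```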
ref https://linear.app/tryghost/issue/ENG-1477
- updated the email analytics job to prioritize open events
- put limits on non-open event fetching
- updated the job to restart itself until processing volume is
sufficiently low
Previously the EmailAnalytics job would process all event data equally.
When there are sufficient recipients (>20k), we could see delays in the
open rate data in Admin because of all the delivered events being
processed. Open events are far more important to users, so we've now
prioritized processing those events before any others.
Processing of events shouldn't be any faster or slower with this change,
as it doesn't alter throughput, just the order.
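Conceptually, the change is just about fetch ordering; something in the spirit of the sketch below, where the pass definitions and the exact limit are illustrative rather than the real implementation:
```js
// Illustrative ordering only; the real job wires this through the
// email analytics service rather than a bare list
const FETCH_PASSES = [
    {events: ['opened'], maxEvents: Infinity}, // open events drain first, uncapped
    {events: ['delivered', 'failed', 'unsubscribed', 'complained'], maxEvents: 20000}
];

async function fetchLatest(fetchEvents) {
    for (const pass of FETCH_PASSES) {
        // same total work as before, just opened events are processed first
        await fetchEvents(pass.events, {maxEvents: pass.maxEvents});
    }
}
```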
NOTE: Use the mailgun-mock-server in TryGhost/Toolbox for testing.
refs: https://github.com/TryGhost/Toolbox/issues/188
- some of our older packages used a linting pattern which missed using the
test config for linting tests
- we need this to be consistent so that we can add more eslint rules for testing
- two packages also didn't use the lib pattern, which made the lint pattern
error, so this was fixed as well
As discussed with the product team, we want to enforce kebab-case file names
for all files, with the exception of files which export a single class, in
which case they should be PascalCase and reflect the class which they export.
This will help us find classes faster, and should push better naming for them too.
Some files and packages have been excluded from this linting, specifically
when a library or framework depends on the naming of a file for its
functionality, e.g. Ember, knex-migrator, adapter-manager.
fixes https://github.com/TryGhost/Team/issues/2562
New event fetching loops:
- Reworked the analytics fetching algorithm. Instead of starting again
from where we stopped during the last fetch minus 30 minutes, we now just
continue where we stopped, but with ms precision (because it is no longer
database dependent after the first fetch), and we stop at NOW - 1 minute
to reduce the chance of missing events.
- Apart from that, a 'missing events' fetching loop is introduced. This
fetches events that are older than 30 minutes, and simply processes all
events a second time to make sure we didn't skip any because of storage
delays in the Mailgun API.
- A new scheduled fetching loop, which allows us to schedule fetching
between a given start/end date (currently only persisted in memory, so it
stops after a reboot)
UI and endpoint changes:
- New UI to show the state of the analytics 'loops'
- New endpoint to request the analytics loop status
- New endpoint to schedule analytics
- New endpoint to cancel scheduled analytics
- Some number formatting improvements, and introduction of the 'opened'
count in the debug screen
- Live reload of data in the debug screen
Other changes:
- This also improves the support for maxEvents. We can now stop a
fetching loop after x events without worrying about lost events. This is
used to reduce the fetched events in the missing and scheduled event
loops (e.g. when the main loop is fetching lots of events, we skip the
other loops).
- Prevents fetching the same events over and over again if no new events
come in (because we always started at the same begin timestamp). The
code now increases the begin timestamp by 1 second when it is safe to do
so, to prevent the API from returning the same events repeatedly (see
the sketch after this list).
- Some optimisations in handling the processing results (fewer merges to
reduce CPU usage in cases where we have lots of events).
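A sketch of the begin-timestamp bump from the second bullet above; the exact safety check in the real code may differ:
```js
const SAFETY_MARGIN_MS = 60 * 1000; // stay at least a minute behind NOW

function getNextBeginTimestamp(lastFetchedEventTimestamp) {
    const bumped = lastFetchedEventTimestamp.getTime() + 1000;
    const upperBound = Date.now() - SAFETY_MARGIN_MS;

    // Only move forward by a second if that still leaves us safely
    // before NOW - 1 minute; otherwise keep the old timestamp so we
    // can't skip events Mailgun hasn't stored yet.
    if (bumped < upperBound) {
        return new Date(bumped);
    }
    return lastFetchedEventTimestamp;
}
```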
Testing:
- You can test with lots of events using the new mailgun mocking server
(Toolbox repo `scripts/mailgun-mock-server`). This can also simulate
events that are only returned after x minutes because of storage delays.
no issue
- When we receive an email failure with an empty message, saving the
model would fail because of schema validation that requires strings
to be non-empty.
- This adds more logging to the email analytics service to help debug
future issues
- Performance improvement to storing delivered, opened and failed emails
by replacing COALESCE with WHERE X IS NULL (tested locally, and it should
give a decent performance boost).
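For the last point, the before/after shape of the update is roughly this knex-flavoured sketch; the table and column names are assumptions and the real queries may differ:
```js
// Illustrative knex sketch; table/column names are assumptions
async function markDelivered(knex, recipientIds, deliveredAt) {
    // Before: COALESCE keeps an existing value, but the update still
    // touches every matched row
    // await knex('email_recipients')
    //     .whereIn('id', recipientIds)
    //     .update({delivered_at: knex.raw('COALESCE(delivered_at, ?)', [deliveredAt])});

    // After: filter with WHERE delivered_at IS NULL so only unset rows are written
    await knex('email_recipients')
        .whereIn('id', recipientIds)
        .whereNull('delivered_at')
        .update({delivered_at: deliveredAt});
}
```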
- this was all getting terribly behind so I've done several things:
- updated the majority of `@tryghost/*` packages, except the Lexical packages
- updated gscan + knex-migrator to remove old `@tryghost/errors` usage
- bumped the lockfile
fixes https://github.com/TryGhost/Team/issues/2332
Saves events in the database and collects error information.
Do note that the same events can be emitted multiple times and, as a
result, out of order. That means we should correctly handle the case
where a delivered event fires after a permanent failure. So a delivered
event is ignored if the email is already marked as failed, and
`delivered_at` is reset to null when we receive a permanent failure.
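A sketch of that ordering rule (Bookshelf-style calls, with field names assumed rather than taken from the actual models):
```js
// Field names and model shape are assumptions for illustration
async function handleDeliveredEvent(emailRecipient, deliveredAt) {
    // A delivered event arriving after a permanent failure is ignored
    if (emailRecipient.get('failed_at')) {
        return;
    }
    await emailRecipient.save({delivered_at: deliveredAt}, {patch: true});
}

async function handlePermanentFailureEvent(emailRecipient, failedAt) {
    // The permanent failure wins: record it and clear delivered_at
    await emailRecipient.save({
        failed_at: failedAt,
        delivered_at: null
    }, {patch: true});
}
```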
fixes https://github.com/TryGhost/Team/issues/2310
This moves the processing of the events from the event-processor to a
new email-event-processor in the email-service package.
- The `EmailEventProcessor` only translates events from
providerId/emailId to their known emailId, memberId and recipientId, and
dispatches the corresponding events.
- Since `EmailEventProcessor` runs in a separate worker thread, we can't
listen for the dispatched events on the main thread. To accomplish this
communication, the events dispatched from the `EmailEventProcessor`
class are 'posted' via the postMessage method and redispatched on the
main thread (see the sketch after this list).
- A new `EmailEventStorage` class reacts to the email events and stores
them in the database. This code mostly corresponds to the (now deleted)
subclass of the old `EmailEventProcessor`
- Updating a member's `last_seen_at` timestamp has moved to the
lastSeenAtUpdater.
- Email events no longer store `ObjectID` because these are not
encodable across threads via postMessage
- Includes new E2E tests that test the storage of all supported Mailgun
events. Note that in these tests we run the processing on the main
thread instead of on a separate thread (we couldn't use a separate
thread because stubbing is not possible across threads)
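A minimal sketch of the postMessage round-trip mentioned above, using plain `worker_threads`; the real wiring goes through the job manager, and the event names, payload shapes and dispatch helper here are illustrative:
```js
// worker thread side: events can't cross the thread boundary as class
// instances, so post a plain serialisable object instead
const {parentPort} = require('worker_threads');

function dispatchAcrossThreads(type, data) {
    parentPort.postMessage({event: {type, data}});
}

// main thread side: redispatch whatever the worker posts onto the
// main-thread event bus (the dispatch function passed in is a stand-in)
function listenToWorker(worker, dispatchOnMainThread) {
    worker.on('message', (message) => {
        if (message && message.event) {
            dispatchOnMainThread(message.event.type, message.event.data);
        }
    });
}
```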
There are some missing pieces that will get added in later PRs (this PR
focuses on porting the existing functionality):
- Handling temporary failures/bounces
- Capturing the error messages of bounce events
- because of how the npm scripts were set up, we were running the full
Admin integration tests during the unit tests phase of CI
- this commit renames the majority of `test` to `test:unit` in the
package.json files, and aliases `test` to `test:unit`
- special packages like Admin have no-op'd `test:unit` scripts so we
don't end up running their tests
refs https://github.com/TryGhost/Team/issues/1723
- Added count.replies to comments
- Added replies endpoint
- Limited returned replies to 3.
- Replaced likes_count with count.likes in comments
- Instead of fetching all the likes of a comment to determine the total count, we'll now use count.likes
- Instead of fetching all the likes of a comment to determine whether a member liked a comment, we'll now use count.liked (which returns the number of likes by the current member, either 0 or 1). This is mapped to `liked` to make it more natural to work with.
The `members.test.snap` file changed because we no longer include `liked: false` if we didn't fetch the liked relation. The `liked` property is therefore also removed from the comment events in the activity feed.
These changes require an update to the `bookshelf-include-count` plugin:
- Updated to also work for nested relations
- This moves the count queries from the `bookshelf-include-count` plugin to the `countRelations` method of each model.
- Updated to keep the counts after saving a model (crud.edit didn't return the counts before)
refs https://github.com/TryGhost/Toolbox/issues/354
- these READMEs were migrated over from when each package was in a
different repo
- they also assume you're going to be publishing the packages because
they mention install instructions
- only a few of them contain custom content
- this commit deletes the majority of these files because they're no
longer useful
- any that contained other instructions have been cut down
refs https://github.com/TryGhost/Toolbox/issues/354
- these repository links made sense when the packages were in different
repos and published to NPM, but we don't publish these packages any more
- this commit deletes those keys from the files
- these were copied over during the monorepo conversion, but we're not
going to be publishing these packages, so the top-level LICENSE file
covers all packages here