refs https://github.com/TryGhost/Ghost/issues/15725
This pull request adds a new configuration option, `bulkEmail.batchSize`, for
the Mailgun email provider, allowing the user to set the maximum number of
recipients per email batch.
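For example (the value here is illustrative; it goes in a Ghost config file such as `config.production.json`):

```json
{
  "bulkEmail": {
    "batchSize": 500
  }
}
```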
refs: https://github.com/TryGhost/Toolbox/issues/595
We're rolling out new rules around the Node assert library, the first of which enforces the use of `assert/strict`. This means we don't need to use the strict version of methods, as the standard version will behave strictly by default.
This caught some gotchas in our existing usage of assert where the lack of strict mode had unexpected results:
- URL matching needs to be done on `url.href` (see aa58b354a4)
- Null and undefined are not the same thing; there were a few cases where the two were confused
- Particularly questionable changes in [PostExporter tests](c1a468744b) tracked [here](https://github.com/TryGhost/Team/issues/3505).
- A typo (see eaac9c293a)
Moving forward, using `assert/strict` should help us catch unexpected behaviour, particularly around null and undefined values, during implementation.
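A quick illustration of the behaviour change:

```js
const assert = require('assert/strict');

// With assert/strict, the non-strict method names behave strictly:
assert.equal(1, 1); // same as assert.strictEqual(1, 1)

// ...so mixing up null and undefined is now caught:
assert.throws(() => assert.equal(null, undefined)); // would have passed silently with the legacy assert module
```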
As discussed with the product team, we want to enforce kebab-case file names for
all files, with the exception of files which export a single class, in which
case they should be PascalCase and reflect the class they export.
This will make classes easier to find, and should push better naming for them too.
Some files and packages have been excluded from this linting, specifically when
a library or framework depends on the naming of a file for its functionality,
e.g. Ember, knex-migrator, adapter-manager.
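As a sketch, one way such a rule could be expressed, assuming ESLint with `eslint-plugin-unicorn`'s `filename-case` rule (which may not be the plugin actually used here, and only approximates the single-class exception):

```js
// .eslintrc.js (sketch only; assumes eslint-plugin-unicorn is installed)
module.exports = {
    plugins: ['unicorn'],
    rules: {
        // kebab-case by default, PascalCase allowed for single-class files
        'unicorn/filename-case': ['error', {
            cases: {
                kebabCase: true,
                pascalCase: true
            }
        }]
    },
    // files/packages whose names a framework depends on are excluded
    ignorePatterns: ['**/migrations/**']
};
```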
- we have calls to the metrics library so we can measure the time it
takes the Mailgun API to return a response
- however, there's a bug in the code whereby if the `batchHandler`
takes a long time and then throws an error, that time is also reported
to metrics
- this is misleading, because it makes it look like Mailgun is slow when
the database is actually the bottleneck
- this pulls the specific SDK call out into its own function so it's easier
to wrap with timing code (rough sketch below)
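A rough sketch of the shape of that change (names here are hypothetical, not the actual Ghost implementation):

```js
// Hypothetical sketch: only the SDK round-trip is timed; the batchHandler
// that processes the results afterwards is outside the measurement.
const recordTiming = (name, ms) => console.log(`${name}: ${ms}ms`); // stand-in for the metrics library

async function fetchEventPage(mailgunClient, domain, query) {
    const start = Date.now();
    try {
        return await mailgunClient.events.get(domain, query);
    } finally {
        recordTiming('mailgun-get-events', Date.now() - start);
    }
}
```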
fixes https://github.com/TryGhost/Team/issues/2562
New event fetching loops:
- Reworked the analytics fetching algorithm. Instead of restarting 30 minutes
before the point where the last fetch stopped, we now continue exactly where
we stopped, but with ms precision (possible because we're no longer database
dependent after the first fetch), and we stop at NOW - 1 minute to reduce the
chance of missing events (see the sketch after this list).
- Apart from that, a new 'missing events' fetching loop is introduced. This
fetches events that are older than 30 minutes and processes them all a second
time, to make sure we didn't skip any because of storage delays in the
Mailgun API.
- A new scheduled fetching loop that allows us to schedule a fetch between a
given start/end date (currently only persisted in memory, so it stops after
a reboot).
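A rough sketch of the fetch window this describes (names are hypothetical):

```js
// Hypothetical sketch: continue exactly where the last run stopped (ms
// precision) and stop at NOW - 1 minute to reduce the chance of missing
// events Mailgun hasn't stored yet.
function getFetchWindow(lastEventTimestamp, canSkipForward) {
    // bump by 1 second when safe, so the API doesn't keep returning the same events
    const begin = new Date(lastEventTimestamp.getTime() + (canSkipForward ? 1000 : 0));
    const end = new Date(Date.now() - 60 * 1000);
    return {begin, end};
}
```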
UI and endpoint changes:
- New UI to show the state of the analytics 'loops'
- New endpoint to request the analytics loop status
- New endpoint to schedule analytics
- New endpoint to cancel scheduled analytics
- Some number formatting improvements, and the introduction of an 'opened'
count on the debug screen
- Live reload of data in the debug screen
Other changes:
- This also improves support for `maxEvents`. We can now stop a fetching loop
after x events without worrying about lost events. This is used to reduce the
number of events fetched in the missing and scheduled event loops (e.g. when
the main loop is fetching lots of events, we skip the other loops).
- Prevents fetching the same events over and over again when no new events
come in (previously we always restarted at the same begin timestamp). The
code now increases the begin timestamp by 1 second when it is safe to do so,
so the API stops returning the same events repeatedly.
- Some optimisations in handling the processing results (fewer merges, to
reduce CPU usage when we have lots of events).
Testing:
- You can test with lots of events using the new Mailgun mock server
(Toolbox repo, `scripts/mailgun-mock-server`). It can also simulate events
that are only returned after x minutes because of storage delays.
refs https://github.com/TryGhost/Team/issues/2486
Stop the event fetching loop as soon as we receive events that were
created later than the moment we started the loop. This ensures that we
don't miss events if we receive a giant batch of events that takes a long
time to process.
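A minimal sketch of that early exit (names are hypothetical):

```js
// Hypothetical sketch: stop processing as soon as we hit an event created
// after the moment this fetching loop started; a later run picks those up.
async function processPage(events, fetchStartedAt, processEvent) {
    for (const event of events) {
        const createdAt = new Date(event.timestamp * 1000); // Mailgun timestamps are in seconds
        if (createdAt > fetchStartedAt) {
            return false; // tell the caller to stop fetching further pages
        }
        await processEvent(event);
    }
    return true; // safe to fetch the next page
}
```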
refs https://github.com/TryGhost/Toolbox/issues/501
- this reverts commit 48dda23554
- also includes a resolution for `@elastic/elasticsearch` so we don't
run a version that is potentially problematic - see referenced issue
for context
- in the event the Mailgun config doesn't exist, `getInstance` returns `null`
- this updates the jsdoc to correct the return type of `getInstance`
(illustrated below)
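Roughly (illustrative only; the real return type is the SDK client, not a plain `Object`):

```js
/**
 * @returns {Object|null} a configured Mailgun client, or null when no Mailgun config exists
 */
function getInstance(config) {
    if (!config || !config.mailgun) {
        return null;
    }
    // ...build and return the SDK client
}
```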
- this was all getting terribly behind, so I've done several things:
  - bumped the majority of `@tryghost/*` packages, except the Lexical packages
  - bumped gscan + knex-migrator to remove old `@tryghost/errors` usage
  - bumped the lockfile
refs: https://github.com/TryGhost/Ghost/issues/15725
- our users are having difficulties getting onboarded with Mailgun
- we're adding an explicit and unique tag to all requests, to help Mailgun
detect when mail is being sent from Ghost (example below)
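A sketch of what tagging looks like via the Mailgun messages API (the tag values are examples, not necessarily the exact tags Ghost sends):

```js
// Hypothetical sketch; tag values are illustrative
async function send(mailgunClient, domain, message) {
    return mailgunClient.messages.create(domain, {
        ...message,
        'o:tag': ['ghost-email', `ghost-email-${domain}`] // explicit + unique tag
    });
}
```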
refs https://github.com/TryGhost/Team/issues/2255
These methods will be used by the Mailgun implementation of EmailSuppressionList
so that emails are removed from both our internal list and Mailgun's.
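For context, the mailgun.js suppressions API that such methods would wrap looks roughly like this (a sketch, not the exact implementation):

```js
// Sketch: mirror a removal from our internal suppression list on the Mailgun side
async function removeFromMailgunSuppressions(mailgunClient, domain, email) {
    // 'bounces' could equally be 'complaints' or 'unsubscribes'
    await mailgunClient.suppressions.destroy(domain, 'bounces', email);
}
```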
fixes https://github.com/TryGhost/Team/issues/2332
Saves events in the database and collects error information.
Do note that we can emit the same events multiple times and, as a result,
out of order. That means we need to handle a delivered event being fired
after a permanent failure: a delivered event is ignored if the email is
already marked as failed, and `delivered_at` is reset to `null` when we
receive a permanent failure.
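A minimal sketch of those ordering rules (field and function names are illustrative):

```js
// A late delivered event never overrides a permanent failure, and a
// permanent failure clears any earlier delivered_at.
function applyDelivered(recipient, deliveredAt) {
    if (recipient.failed_at) {
        return; // already permanently failed, so ignore the delivered event
    }
    recipient.delivered_at = deliveredAt;
}

function applyPermanentFailure(recipient, failedAt) {
    recipient.failed_at = failedAt;
    recipient.delivered_at = null; // reset as described above
}
```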
fixes https://github.com/TryGhost/Team/issues/2096
When generating the recipient data for emails, the email clicks
implementation adds a recipient variable called `replacement_xxx` once for
each link containing the same UUID.
This generates a lot of unnecessary data overhead for emails, and it turns
out that Mailgun has a 25MB message limit. We wouldn't have come close to
that limit if we had only included the UUID once.
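A sketch of the idea behind the fix: build the per-recipient variables keyed by replacement id, so each value (e.g. the member UUID) is included once per recipient rather than once per link (names are hypothetical):

```js
// Hypothetical sketch: deduplicate recipient variables by replacement id
function buildRecipientVariables(member, replacements) {
    const variables = {};
    for (const replacement of replacements) {
        if (!(replacement.id in variables)) {
            variables[replacement.id] = replacement.getValue(member);
        }
    }
    return variables;
}
```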
fixes https://github.com/TryGhost/Ghost/issues/15190
refs https://github.com/TryGhost/framework/pull/76
- log output always uses UTC timestamps, but it may be desirable to
configure logs to use the local machine timezone
- a new config option has been added to `@tryghost/logging` so you can
switch the logs to the local timezone
- this commit bumps the package and sets the default config option to
`false`, so it doesn't suddenly change the timezone of the logs
- docs will be updated soon, but if you'd like to use the
timezone-altered timestamps, you can set `logging.useLocalTime` to
`true` (example below)
- credits to https://github.com/levee223 for the implementation and PR
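For example, in a Ghost config file:

```json
{
  "logging": {
    "useLocalTime": true
  }
}
```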
refs https://github.com/TryGhost/Toolbox/issues/164
- see referenced issue for more context but Ghost sometimes has issues
with the email analytics job getting stuck
- we don't provide a timeout to the Mailgun library, so we just
sit there idling for eternity if something between us and Mailgun is
causing issues
- this commit adds a 60s timeout so we can at least error out and try
again next time (sketch below)
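A sketch of the change, assuming the mailgun.js client's `timeout` option (in milliseconds):

```js
const formData = require('form-data');
const Mailgun = require('mailgun.js');

const mailgunClient = new Mailgun(formData).client({
    username: 'api',
    key: process.env.MAILGUN_API_KEY,
    timeout: 60000 // 60s, so we error out instead of idling forever
});
```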
- during a refactor, I moved the `BATCH_SIZE` variable around
- putting the variable export above a more general export means it gets
overwritten and the value is `undefined` outside of the module
(illustrated below)
- when we chunk the emails, we were chunking in sizes of `undefined`,
so I'm guessing it just defaulted to 1
- this means the email batches were of size 1 instead of 1000 - oops
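A simplified illustration of the bug pattern (not the actual module):

```js
// The named export is attached first...
module.exports.BATCH_SIZE = 1000;

function send() {}

// ...then a later assignment replaces the whole exports object, so
// require('./this-module').BATCH_SIZE is undefined for consumers:
module.exports = {
    send
};

// Fix: attach BATCH_SIZE after (or as part of) the final exports object
module.exports.BATCH_SIZE = 1000;
```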
- cleans up unused dependencies
- adds missing dependencies that are used in the code
- this should help us be more explicit about the dependencies a package
uses
refs https://github.com/TryGhost/Toolbox/issues/363
- this commit switches us to using the official and maintained
`mailgun.js` SDK, and updates the `mailgun-client` code to reflect the
changes between the two
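For reference, creating a client with `mailgun.js` looks like this (values are placeholders):

```js
const formData = require('form-data');
const Mailgun = require('mailgun.js');

const mailgun = new Mailgun(formData);
const mailgunClient = mailgun.client({
    username: 'api',
    key: 'YOUR_MAILGUN_API_KEY',
    url: 'https://api.eu.mailgun.net' // only needed for EU-hosted domains
});

// e.g. sending and fetching events:
// await mailgunClient.messages.create('mg.example.com', messageData);
// await mailgunClient.events.get('mg.example.com', {limit: 300});
```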