refs: https://github.com/TryGhost/Toolbox/issues/595
We're rolling out new rules around the Node `assert` library, the first of which is enforcing the use of `assert/strict`. This means we don't need to use the strict versions of the methods, as the standard versions will behave that way by default.
This caught some gotchas in our existing usage of assert where the lack of strict mode had unexpected results:
- URL matching needs to be done on `url.href`, see aa58b354a4
- Null and undefined are not the same thing; there were a few cases where the two were confused
- Particularly questionable changes in [PostExporter tests](c1a468744b), tracked [here](https://github.com/TryGhost/Team/issues/3505).
- A typo, see eaac9c293a
Moving forward, using `assert/strict` should help us catch unexpected behaviour, particularly around null and undefined values, during implementation.
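A minimal illustration of the difference (not taken from the codebase): with the legacy module, `equal` compares with `==`, while the strict module compares with `===`:

```js
const assert = require('assert');        // legacy module: equal() uses ==
const strict = require('assert/strict'); // strict module: equal() uses ===

// Loose assertions silently pass on values we usually want to distinguish
assert.equal(null, undefined); // passes, because null == undefined
assert.equal('1', 1);          // passes, because of type coercion

// The strict module catches both cases
assert.throws(() => strict.equal(null, undefined));
assert.throws(() => strict.equal('1', 1));

// With assert/strict, equal/deepEqual already behave like
// strictEqual/deepStrictEqual, so the explicit strict variants are redundant.
```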
refs 551532f874
refs https://github.com/TryGhost/Team/issues/3324
- Analyzing the data dumps revealed that we have extra data coming from a stray batch. The filtering logic manually narrows the data down to the recipients that belong to the "current batch" (a sketch of the idea follows below).
- Hunting down the root cause of the data mixup proved to be too expensive an investigation, so this is a "good enough" patch to deal with the problem.
- The most likely cause is concurrent batch sending, but reducing the concurrency would be too high a performance price to pay compared to occasionally filtering the data.
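A rough sketch of the kind of defensive filtering described above; the `batch_id` property, the function name and the logging are illustrative assumptions, not the actual Ghost implementation:

```js
// Hypothetical sketch: keep only recipients that belong to the batch that is
// currently being sent, dropping rows that leaked in from a stray batch.
function filterRecipientsForBatch(recipients, currentBatchId) {
    const filtered = recipients.filter(recipient => recipient.batch_id === currentBatchId);

    if (filtered.length !== recipients.length) {
        // Keep the underlying mixup visible even though we patch around it
        console.warn(`Dropped ${recipients.length - filtered.length} recipients from a stray batch`);
    }

    return filtered;
}
```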
closes https://github.com/TryGhost/Team/issues/3324
- When the recipients batch size is larger than the limit, logging the error alone is not enough; we need extra data to figure out what exactly is inside those `2000` or `3000` records causing the faulty behavior.
- This change grabs all available models and dumps them into a file inside the `content/data` folder (sketched below). The code is temporary and should be removed once the problem is narrowed down.
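A sketch of what such a temporary debug dump could look like; the function name, file name and the shape of the model objects are assumptions for illustration:

```js
const fs = require('fs/promises');
const path = require('path');

// Temporary debugging helper: serialise every model in the oversized batch to
// a JSON file under content/data so the records can be inspected offline.
async function dumpBatchModels(models, contentDataPath) {
    const fileName = `email-batch-dump-${Date.now()}.json`;
    const filePath = path.join(contentDataPath, fileName);
    const payload = models.map(model => (model.toJSON ? model.toJSON() : model));

    await fs.writeFile(filePath, JSON.stringify(payload, null, 2));
    return filePath;
}
```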
refs TryGhost/Team#3229
- The issue we are observing is that even though the returned number of email recipients should never exceed the max batch size (1000 in the case of Mailgun), there are rare glitches where this number is doubled and we fetch 2000 records instead.
- The fix takes its best guess at de-duping the data in the batch and then truncates it if the number of records is still above the threshold (see the sketch below). This ensures we at least end up sending the emails out to some of the recipients instead of none.
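A minimal sketch of the de-dupe-then-truncate approach; keying on an assumed `id`/`email` field, which may differ from the real recipient model:

```js
// Hypothetical sketch: remove duplicate recipients, then cap the batch at the
// provider limit so we send to some recipients rather than failing outright.
function normaliseBatch(recipients, maxBatchSize = 1000) {
    const seen = new Set();
    const deduped = recipients.filter((recipient) => {
        const key = recipient.id ?? recipient.email;
        if (seen.has(key)) {
            return false;
        }
        seen.add(key);
        return true;
    });

    return deduped.slice(0, maxBatchSize);
}
```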
As discussed with the product team, we want to enforce kebab-case file names for
all files, with the exception of files which export a single class, in which
case they should be PascalCase and reflect the class which they export.
This will help us find classes faster, and should encourage better naming for them too.
Some files and packages have been excluded from this linting, specifically when
a library or framework depends on the naming of a file for its functionality,
e.g. Ember, knex-migrator, adapter-manager.
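As an illustration only, here is roughly how such a convention could be expressed with the third-party `unicorn/filename-case` ESLint rule; Ghost's actual shared lint configuration may look quite different, the excluded paths below are made up, and the rule cannot by itself distinguish "single class export" files, so allowing both cases globally is a simplification:

```js
// .eslintrc.js — hypothetical sketch, not Ghost's real shared config
module.exports = {
    plugins: ['unicorn'],
    rules: {
        // Allow kebab-case generally and PascalCase for single-class modules
        'unicorn/filename-case': ['error', {
            cases: {
                kebabCase: true,
                pascalCase: true
            }
        }]
    },
    overrides: [{
        // Frameworks that rely on file naming keep their own conventions
        files: ['ghost/admin/**', '**/migrations/**'],
        rules: {
            'unicorn/filename-case': 'off'
        }
    }]
};
```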
fixes https://github.com/TryGhost/Team/issues/2560
When an email failed and you rescheduled the post, the error dialog
(from the previous try) was still shown. The retry button in that
dialog allowed you to retry sending the email immediately, which could
be very confusing.
- The email error dialog is no longer shown for scheduled emails
- The email status is no longer polled for scheduled emails
- Retrying an email is not possible via the API if the post status is
not published or sent (a sketch of this guard follows after this list)
- Added some extra snapshot tests
- When retrying an email, we immediately update the email status to
'pending' to have a better API response (instead of still returning
failed).
- Disabled automatic email send retries in development (otherwise it is
very hard to test failed emails if it takes 10 minutes before automatic
retrying gives up)
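A sketch of the status guard and the immediate status flip described above; the function, error messages and property names are illustrative, not Ghost's actual service code:

```js
// Hypothetical sketch of the API-level guard: only allow retrying an email
// when the related post has actually gone out, and mark it as pending right
// away so subsequent reads no longer report "failed".
const RETRYABLE_POST_STATUSES = ['published', 'sent'];

function retryEmail(post, email) {
    if (!RETRYABLE_POST_STATUSES.includes(post.status)) {
        throw new Error(`Cannot retry an email while the post status is "${post.status}"`);
    }
    if (email.status !== 'failed') {
        throw new Error('Only failed emails can be retried');
    }

    email.status = 'pending';
    // ...queue the actual resend here
    return email;
}
```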
fixes https://github.com/TryGhost/Team/issues/2512
Email sending is a crucial flow that should not be interrupted. If
there are database connection issues, we should try to recover from them
automatically by retrying individual database queries and transactions
for a limited amount of time.
This adds a helper method to retry database operations. It limits all
database queries before sending the actual email to a maximum of 10
minutes. Database operations that happen after sending have a longer
retry window because they are more crucial for preventing data loss
(e.g. saving that an email was sent).
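A condensed sketch of what such a retry helper might look like; the name, delay and stop condition are assumptions rather than the actual implementation:

```js
const {setTimeout: sleep} = require('timers/promises');

// Hypothetical sketch: retry a database operation until it succeeds or the
// total time budget runs out (e.g. 10 minutes before sending, longer after).
async function retryDbOperation(operation, {maxTotalTimeMs = 10 * 60 * 1000, delayMs = 5000} = {}) {
    const startedAt = Date.now();

    for (;;) {
        try {
            return await operation();
        } catch (err) {
            if (Date.now() - startedAt + delayMs > maxTotalTimeMs) {
                throw err; // budget exhausted, surface the original error
            }
            await sleep(delayMs);
        }
    }
}

// Usage: wrap an individual query or transaction
// const email = await retryDbOperation(() => models.Email.findOne({id}));
```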
refs https://github.com/TryGhost/Team/issues/2482
This change adds a small sleep in between dispatching events in the
worker thread that reads the events from Mailgun. That should reduce
the number of queries we fire in parallel to each other, which could
otherwise cause the connection pool to run out of connections.
It also reduces the amount of concurrent sending from 10 to 2, again to
make sure the connection pool doesn't run out of connections while
sending emails, and to reduce the chance of new connections falling
back on a (delayed) replicated database.
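A simplified sketch of spacing out event dispatches; the delay value and the shape of the dispatch callback are assumptions:

```js
const {setTimeout: sleep} = require('timers/promises');

// Hypothetical sketch: dispatch the fetched Mailgun events one at a time with
// a short pause, so the resulting queries don't all hit the database at once.
async function dispatchWithBackpressure(events, dispatch, delayMs = 100) {
    for (const event of events) {
        dispatch(event);
        await sleep(delayMs);
    }
}
```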
no issue
Tests stopped working because the Mailgun mocker broke when we moved to the new email flow.
This also fixes a unit test that needed to be updated.