no issue
- following on from f4fb0fcbaa,
this commit moves around some package requires in Ghost
- these are often niche packages that do something in a subsystem of
Ghost, and are not necessarily needed to boot the application
- these packages use non-negligible CPU and memory when they are
required, so it makes sense to lazy-require them (see the sketch below)
- the concern here is that we obscure the code too much by moving
random requires further into the code, but the changes are small and the
improvements are big
- this commit brings the boot time down ~31% compared to 4.19.0 and
initial memory usage down by a total of ~12%
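A minimal sketch of the lazy-require pattern, using a hypothetical heavy package and module rather than the actual Ghost files:

```js
// Before: the dependency is loaded as soon as this module is required,
// paying its CPU/memory cost at boot even if the feature is never used.
// const heavyPackage = require('some-heavy-package');

module.exports = function doNicheThing(input) {
    // After: the require is deferred until the feature is actually used.
    // Node caches the module, so repeat calls don't pay the cost again.
    const heavyPackage = require('some-heavy-package');
    return heavyPackage.process(input);
};
```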
- This is part of the quest to separate the frontend and server & get rid of all the places where there are cross-requires
- At the moment the settings cache is one big shared cache used by the frontend and server liberally
- This change doesn't really solve the fundamental problems, as we still depend on events and on requires from inside the frontend
- However it allows us to control the misuse slightly better by getting rid of restricted requires and turning on that eslint ruleset
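As an illustration only, the idea behind a restricted-requires ruleset can be sketched with ESLint's built-in `no-restricted-modules` rule; the actual Ghost ruleset and paths may differ:

```js
// .eslintrc.js (sketch) - fail the lint when server code requires frontend code
module.exports = {
    rules: {
        'no-restricted-modules': ['error', {
            patterns: ['**/frontend/**']
        }]
    }
};
```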
refs https://github.com/TryGhost/Ghost/issues/12610
refs https://github.com/mailgun/mailgun-js-boland/blob/v0.22.0/lib/request.js#L285-L333
The mailgun domain is used by the mailgun API to construct the URL for
the API. e.g. for a domain of "mg.example.com", the URL for the messages
API endpoint would look like:
https://api.mailgun.net/v3/mg.example.com/messages
One weird thing about the mailgun API is that if the path does not map
to an API endpoint, then instead of a 404, we get a 200, with a body of
"Mailgun Magnificent API".
The `mailgun-js` library, which we use, expects a JSON response, and will
return a body of `undefined` if it does not get one.
This all resulted in us trying to read the property `id` of an undefined
`body` variable. The fix here is to reject the containing Promise if
there is no body, so that the default error handling will kick in.
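A sketch of the shape of the fix, assuming a promisified wrapper around `mailgun-js`'s `messages().send()`; the real code lives in Ghost's mailgun provider and may differ:

```js
function send(mailgunInstance, messageData) {
    return new Promise((resolve, reject) => {
        mailgunInstance.messages().send(messageData, (error, body) => {
            if (error) {
                return reject(error);
            }
            // A mis-configured domain returns 200 "Mailgun Magnificent API",
            // which mailgun-js parses to an undefined body - reject here so
            // callers never try to read `body.id` off undefined.
            if (!body) {
                return reject(new Error('Mailgun did not return a response body - check the configured mailgun domain'));
            }
            return resolve(body);
        });
    });
}
```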
closes https://github.com/TryGhost/Ghost/issues/12492
The changes to the email processing models had set the reply-to address for an email batch as `reply_to` instead of `replyTo`, which was not picked up by the mailgun service when setting the newsletter reply address
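For context, a hedged sketch of how the reply address reaches Mailgun; the `replyTo` property is the one the mailgun service reads, and the `h:Reply-To` header is standard Mailgun syntax, while the shape of `emailBatch` is illustrative:

```js
// Build the messageData passed to mailgun-js; the `h:Reply-To` header carries
// the reply address for the newsletter.
function buildMessageData(emailBatch) {
    return {
        from: emailBatch.from,
        subject: emailBatch.subject,
        html: emailBatch.html,
        'h:Reply-To': emailBatch.replyTo // was being read from `reply_to` and silently ignored
    };
}
```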
no issue
- the `'bulk-email'` tag was only being added to bulk emails if another more specific tag was set up via config
- we always want the `'bulk-email'` tag to be present for better event filtering
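A sketch of the intended tagging; Mailgun accepts `o:tag` as a string or an array, and the config-driven tag name here is an assumption:

```js
// Always tag bulk sends with 'bulk-email'; add the configured tag on top when present
function getTags(configuredTag) {
    const tags = ['bulk-email'];
    if (configuredTag) {
        tags.push(configuredTag);
    }
    return tags;
}

// messageData['o:tag'] = getTags(configuredTagFromConfig);
```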
no issue
- set `emails.track_opens` to `true` when the `enableDeveloperExperiments` flag is set
- update mailgun bulk-email provider to pass the open-tracking header to Mailgun when the email's `track_opens` flag is set
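A sketch of passing the open-tracking flag through; `o:tracking-opens` is the Mailgun message option, while the model access shown is an assumption:

```js
// Only ask Mailgun to track opens when the email record has track_opens set
function addTrackingOption(messageData, emailModel) {
    if (emailModel.get('track_opens')) {
        messageData['o:tracking-opens'] = 'yes';
    }
    return messageData;
}
```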
no issue
- adding user variables via the mailgun API when sending emails means that events related to email have those variables attached to them
- adding the `email.id` value to user variables means we can easily associate mailgun events with emails, otherwise we'd have to look up the batch via `email_batches.provider_id` then use a join to get back to the associated email
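A sketch of attaching the email id as a custom variable; `v:`-prefixed fields are echoed back on Mailgun events, so event processing can map straight back to the `emails` row without going via `email_batches.provider_id` (the variable name is an assumption):

```js
// Attach the email id to every message in the send
function addUserVariables(messageData, emailModel) {
    messageData['v:email-id'] = emailModel.id;
    return messageData;
}
```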
no issue
- there was a typo in the recent email sending refactor that resulted in `Error: 'to' parameter is missing` errors when sending email previews and bulk emails
no issue
- store raw content in email record
- keep any replacement strings in the html/plaintext content so that it can be used when sending email rather than needing to re-serialize the post content which may have changed
- split post email serializer into separate serialization and replacement parsing functions
- serialization now returns any email content that is derived from the post content (subject/html/plaintext) rather than post content plus replacements
- `parseReplacements` has been split out so that it can be run against email content rather than a post, this allows mega and the email preview service to work with the stored email content
- move mailgun-specific functionality into the mailgun provider
- previously mailgun-specific behaviour was spread across the post email serializer, mega, and bulk-email service
- the per-batch `send` functionality was moved from the `bulk-email` service to the mailgun provider and updated to take email content, recipient info, and replacement info so that all mailgun-specific headers and replacement formatting can be handled in one place (see the sketch below)
- exposes the `BATCH_SIZE` constant because batch sizes are limited to what the provider allows
- `bulk-email` service split into three methods
- `send` responsible for taking email content and recipients, parsing replacement info from the email content and using that to collate a recipient data object, and finally coordinating the send via the mailgun provider. Usable directly for use-cases such as test emails
- `processEmail` takes an email ID, loads it and coordinates sending related batches concurrently
- `processEmailBatch` takes an email_batch ID, loads it along with associated email_recipient records and passes the data through to the `send` function, updating the batch status as it's processed
- `processEmail` and `processEmailBatch` take IDs rather than objects ready for future use by job-queues, it's best to keep job parameters as minimal as possible
- refactored `mega` service
- modified `getEmailData` to collate email content (from/reply-to/subject/html/plaintext) rather than being responsible for dealing with replacements and mailgun-specific replacement formats
- used for generating email content before storing in the email table, and when sending test emails
- from/reply-to calculation moved out of the post-email-serializer into mega and extracted into separate functions used by `getEmailData`
- `sendTestEmail` updated to generate `EmailRecipient`-like objects for each email address so that appropriate data can be supplied to the updated `bulk-email.send` method
- `sendEmailJob` updated to create `email_batches` and associated `email_recipients` records then hand over processing to the `bulk-email` service
- member row fetching extracted into a separate function and used by `createEmailBatches`
- moved updating of email status from `mega` to the `bulk-email` service, keeps concept of Successful/FailedBatch internal to the `bulk-email` service
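The sketch referenced above: a hedged illustration of parsing replacements from stored email content and converting them to Mailgun's batch-sending format. The `%%{…}%%` token format, helper names, and recipient object shape are assumptions; `recipient-variables` and `%recipient.x%` are standard Mailgun syntax.

```js
// Parse replacement tokens (e.g. %%{first_name}%%) out of stored email content
function parseReplacements(html) {
    const matches = [...html.matchAll(/%%\{(\w+)\}%%/g)];
    return [...new Set(matches.map(([, name]) => name))];
}

// Build per-recipient data keyed by address, then swap our tokens for
// Mailgun's recipient-variable syntax so Mailgun performs the substitution
function buildMailgunBatch(html, recipients) {
    const replacements = parseReplacements(html);

    const recipientVariables = {};
    for (const recipient of recipients) {
        // recipient.email and the per-name fields are assumed properties
        recipientVariables[recipient.email] = Object.fromEntries(
            replacements.map(name => [name, recipient[name] || ''])
        );
    }

    const mailgunHtml = replacements.reduce(
        (content, name) => content.split(`%%{${name}}%%`).join(`%recipient.${name}%`),
        html
    );

    return {
        html: mailgunHtml,
        to: Object.keys(recipientVariables),
        'recipient-variables': JSON.stringify(recipientVariables)
    };
}
```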
no issue
- `mailgun()` expects the `host` option not to include a port, but `url.host` will include the port; we instead want to use `url.hostname`, which skips the port
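A quick illustration of the difference (standard WHATWG `URL` behaviour; the non-default port is only there to show the distinction):

```js
const configuredUrl = new URL('https://api.mailgun.net:8443/v3');

configuredUrl.host;     // 'api.mailgun.net:8443' - includes the port
configuredUrl.hostname; // 'api.mailgun.net'      - no port, what mailgun() expects as `host`
```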
- Represents that logging is shared across all parts of Ghost at present
* moved core/server/lib/common/logging to core/shared/logging
* updated logging path for generic imports
* updated migration and schema imports of logging
* updated tests and index logging import
* 🔥 removed logging from common module
* fixed tests
* moved `server/config` to `shared/config`
* updated config import paths in server to use shared
* updated config import paths in frontend to use shared
* updated config import paths in test to use shared
* updated config import paths in root to use shared
* trigger regression tests
* of course the rebase broke tests
- Use array destructuring
- Use @tryghost/errors (see the sketch below)
- Part of the big move towards decoupling, this gives visibility on what's being used where
- Biting off manageable chunks / fixing bits of code I'm refactoring for other reasons
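A hedged sketch of both changes; the function and error message are made up for illustration:

```js
const errors = require('@tryghost/errors');

function getFirstRow(rows) {
    // Array destructuring instead of rows[0]
    const [firstRow] = rows;

    if (!firstRow) {
        // Shared error types from @tryghost/errors instead of ad-hoc Error objects
        throw new errors.NotFoundError({message: 'Expected at least one row'});
    }

    return firstRow;
}
```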
no issue
- Increased default mailgun retry limit to 5
- Handling retry logic closer to the SDK layer means less manual handling in the future
- Allowed failing request to be passed through to the caller
- To be able to handle failed requests more gracefully in the future we need all available error information to be given to the caller
- The previous method with `Promise.all` would have rejected the whole set of batches without providing details on each specific batch.
- Limited data returned with a failed message to batch values
- Added better error handling on mega layer
- Added new column to store failed batch info
- Added reference to mailgun error docs
- Refactored batch emailer to respond with instances of an object
- It's hard to reason about the response type of the bulk mailer when multiple object types can be returned
- This gives more clarity and the ability to check with an `instanceof` check
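A hedged sketch of the response-object approach; the `SuccessfulBatch`/`FailedBatch` naming is borrowed from the earlier refactor notes, but the constructors and fields here are illustrative:

```js
class SuccessfulBatch {
    constructor(data) {
        this.data = data;
    }
}

class FailedBatch {
    constructor(error, data) {
        this.error = error; // keep the full provider error for later inspection/storage
        this.data = data;
    }
}

// Each batch resolves to an object instead of rejecting the whole Promise.all,
// so per-batch failure details survive through to the caller
async function sendBatches(batches, send) {
    return Promise.all(batches.map(async (batch) => {
        try {
            const response = await send(batch);
            return new SuccessfulBatch(response);
        } catch (error) {
            return new FailedBatch(error, batch);
        }
    }));
}

// Callers can then filter with instanceof:
// const failed = results.filter(result => result instanceof FailedBatch);
```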