refs https://github.com/TryGhost/Team/issues/828
- Before sending out batches with members we need to partition all members based on the segment they belong to. The special segment "unsegmented" is used for any part of the member set not covered by the segments used in the email cards (for example, when only a free-members card is used while emailing all members)
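A rough sketch of the partitioning idea (function names and the matching logic are illustrative, not the actual implementation):
```
// Sketch: partition a fetched member list by the segments used in the
// email cards; members matching no segment fall into 'unsegmented'.
function partitionMembersBySegment(members, segments) {
    const partitions = {};
    let remaining = [...members];

    for (const segment of segments) {
        const matches = remaining.filter(member => memberMatchesSegment(member, segment));
        partitions[segment] = matches;
        remaining = remaining.filter(member => !matches.includes(member));
    }

    if (remaining.length > 0) {
        partitions.unsegmented = remaining;
    }

    return partitions;
}

// Simplified matcher covering only the free/paid segments used as examples here
function memberMatchesSegment(member, segment) {
    if (segment === 'status:free') {
        return member.status === 'free';
    }
    if (segment === 'status:-free') {
        return member.status !== 'free';
    }
    return false;
}
```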
- This is part of the quest to separate the frontend and server & get rid of all the places where there are cross-requires
- At the moment the settings cache is one big shared cache used by the frontend and server liberally
- This change doesn't really solve the fundamental problems, as we still depend on events and on requires from inside the frontend
- However it allows us to control the misuse slightly better by getting rid of restricted requires and turning on that eslint ruleset
refs https://github.com/TryGhost/Team/issues/828
- We need a way to recreate the filter that was used to create the email content for a specific email_recipient. By saving a member_segment value for each email_batch we can traverse back to the filter that was applied during email creation.
refs https://github.com/TryGhost/Team/issues/828
- When sending email batches out they need to be created without mixing different member segments. This allows for easier reasoning about what data has been sent out to each specific email recipient
- Modified email batches to chunk based on segments defined in the HTML content of the post
refs: #12808
- we need to use the uuid, not the id, so that e.g. unsubscribe urls are set correctly
- this is only for test emails, but it's still important to be able to test things fully!
no issue
The only pieces of Ghost-Ignition used in Ghost were debug and
logging. Both of these modules have been superseded by the Framework
monorepo, and all usages of Ignition have now been removed, replaced
with @tryghost/debug and @tryghost/logging.
refs https://github.com/TryGhost/Team/issues/588
refs d72ba77aba
- When a limit is in place we don't want to allow sending out a new batch of emails if it would go over the limit
- See referenced commit for example configuration
refs https://github.com/TryGhost/Team/issues/581
refs https://github.com/TryGhost/Team/issues/582
When publishing a post via the API it was possible to send it using `?email_recipient_filter=all/free/paid`, which only allowed targeting members based on their payment status. That's quite limiting for some sites.
This PR updates the `?email_recipient_filter` query param to support Ghost's `?filter` param syntax which enables more specific recipient lists, eg:
`?email_recipient_filter=status:free` = free members only
`?email_recipient_filter=status:paid` = paid members only
`?email_recipient_filter=label:vip` = members that have the `vip` label attached
`?email_recipient_filter=status:paid,label:vip` = paid members and members that have the `vip` label attached
The older `free/paid` values are still supported by the API for backwards compatibility.
- updates `Post` and `Email` models to transform legacy `free` and `paid` values to their NQL equivalents on read/write
- lets us not worry about supporting legacy values elsewhere in the code
- cleanup migration to transform all rows slated for 5.0
- removes schema and API `isIn` validations for recipient filters to allow free-form filters
- updates posts API input serializers to transform `free` and `paid` values in the `?email_recipient_filter` param to their NQL equivalents for backwards compatibility
- updates Post API controllers `edit` methods to run a query using the supplied filter to verify that it's valid
- updates `mega` service to use the filter directly when selecting recipients
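A minimal sketch of the legacy-value transformation described above (the mapping matches the examples in this message; the helper name is illustrative):
```
// Map legacy email_recipient_filter values to their NQL equivalents,
// passing any other value through untouched as a free-form filter.
function transformLegacyRecipientFilter(filter) {
    const legacyMap = {
        free: 'status:free',
        paid: 'status:paid'
    };
    return legacyMap[filter] || filter;
}

// e.g. transformLegacyRecipientFilter('paid') === 'status:paid'
//      transformLegacyRecipientFilter('label:vip') === 'label:vip'
```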
refs 829e8ed010
- i18n is used everywhere but only requires shared or external packages, therefore it's a good candidate for living in shared
- this reduces invalid requires across frontend and server, and lets us use it everywhere until we come up with a better option
- Having these as destructured from the same package is hindering refactoring now
- Events should really only ever be used server-side
- i18n should be a shared module for now so it can be used everywhere until we figure out something better
- Having them separate also allows us to lint them properly
refs https://github.com/TryGhost/Team/issues/588
- This check allows an on/off switch to be set up on the instance to control limits around sending emails
- An example configuration for such a check in the config's `hostSettings` section would look like the following:
```
"emails": {
    "disabled": true,
    "error": "Email sending has been temporarily disabled whilst your account is under review."
}
```
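A minimal sketch of what such a check could look like, assuming the `hostSettings:emails` config block above (require paths and the error class used are illustrative):
```
const config = require('../../shared/config'); // path is illustrative
const errors = require('@tryghost/errors');

// Sketch: guard email sending based on the hostSettings.emails config above.
function checkCanSendEmail() {
    const emailSettings = config.get('hostSettings:emails');

    if (emailSettings && emailSettings.disabled) {
        // error class choice is an assumption for illustration
        throw new errors.HostLimitError({
            message: emailSettings.error || 'Email sending is temporarily disabled'
        });
    }
}
```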
refs c873899e49
- as of `bson-objectid` v2.0.0, this library exports the function
to generate an ObjectID directly, and then you need to use `.toHexString()`
to get the 24 character hex string - 6696f27d82
- this commit removes all uses of `.generate()` and replaces with this
change
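For reference, the shape of the change (old call vs new call as described above):
```
const ObjectID = require('bson-objectid');

// before (bson-objectid < 2.0.0):
//   const id = ObjectID.generate();

// after (bson-objectid >= 2.0.0): call the exported function directly, then
// use .toHexString() to get the 24 character hex string
const id = ObjectID().toHexString();
```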
refs: https://github.com/TryGhost/Team/issues/510
- The current member limit was implemented as a member-specific concept
- The new limit service is much more generic, here we are swapping old for new
- The updated concept here is blocking all publishing, not just email sending, when a site is over its member limit
- To determine that we are publishing a post, we must be in the model layer. The code has been moved to the permissible function which makes sense as this is a permissions error that we are throwing
- I've left the extra check for email retries in, in case there is some loophole here (but we may wish to change it)
refs https://github.com/TryGhost/Ghost/issues/12716
- The code in serializePostModel was broken and always defaulted to 'v3'! It referred to `model.get('api_version')`, but there's no such field in the posts model. Changed the implementation so that the API version is passed in as a parameter to the method instead
- The style of providing "defaults" everywhere creates a need for future maintenance whenever we bump the version, e.g. in Ghost v5. Maybe reworking these methods to require a passed version and throwing an error otherwise would be more maintainable long-term?
refs https://github.com/TryGhost/Ghost/issues/12602
* Updated members_status_events table
By replacing the `status` column with `from_status` and `to_status`
columns, we are able to track the changes between statuses more easily
and accumulate the data, e.g. the delta of paid members in a given time
range is the count of rows with `to_status` set to 'paid' minus the
count of rows with `from_status` set to 'paid' within that time range
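A sketch of that delta calculation as a knex query (table and column names are from this message; the query shape and date handling are illustrative):
```
// Sketch: daily paid-member deltas from members_status_events, computed as
// (count of to_status = 'paid') - (count of from_status = 'paid') per day.
function getPaidDeltas(knex) {
    return knex('members_status_events')
        .select(knex.raw(`
            DATE(created_at) as date,
            SUM(CASE WHEN to_status = 'paid' THEN 1 ELSE 0 END) -
            SUM(CASE WHEN from_status = 'paid' THEN 1 ELSE 0 END) as paid_delta
        `))
        .groupByRaw('DATE(created_at)')
        .orderBy('date');
}
```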
* Updated MEGA to handle addition of 'comped' status
With the addition of the 'comped' status, we need to ensure that MEGA
will still send emails to the correct recipients. I've opted to use an
"inverse" filter, as that is the intention of the free/paid split in
MEGA - as far as MEGA is concerned, "free" is the opposite of "paid"
* Updated customQuery for MemberStatusEvent
With the `status` column replaced by `from_status` and `to_status`, we
can fix and update the customQuery to correctly accumulate the data
into deltas over time, broken down by day.
* Populated members_status_events table
As the table will be used to generate deltas, we need to backfill the
data so that existing sites will be able to sum up the deltas and
calculate correct data.
The assumption used in backfilling is that a member's current status
is their only status.
no-issue
* Removed support for paid param from v3 & canary API
* Updated active subscription checks to use status flag
* Updated MEGA to use status filter over paid flag
* Removed support for paid option at model level
* Installed @tryghost/members-api@1.0.0-rc.0
* Updated members fixtures
no issue
- recurring jobs spin up worker threads which can be quite CPU intensive even when not performing much processing; this can be problematic in environments where many Ghost instances are running
- updated the email job scheduling to be skipped on bootup when there are no emails in the database and to be started when the first email is created, as long as we're not in the testing env
- increased the analytics job interval from every 2 minutes to every 5 minutes to help spread the load further across instances
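A rough sketch of the boot-time check (all names here are illustrative; Ghost's actual job service API may differ):
```
// Sketch: only schedule the recurring email analytics job if at least one
// email row exists, and never in the testing environment.
async function maybeScheduleEmailAnalyticsJob({knex, config, scheduleEmailAnalyticsJob}) {
    if (config.get('env') === 'testing') {
        return;
    }

    const [{count}] = await knex('emails').count('id as count');

    if (count > 0) {
        scheduleEmailAnalyticsJob(); // e.g. a job running every 5 minutes
    }
}
```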
no-issue
* Used email_recipient_filter in MEGA
This officially decouples the newsletter recipients from the post
visibility, allowing us to send emails to free members only
* Supported enum for send_email_when_published in model
This allows us to migrate from the previously used boolean to an enum
when we eventually rename the email_recipient_filter column to
send_email_when_published
* Updated the posts API to handle email_recipient_filter
We no longer rely on the send_email_when_published property to send
newsletters, meaning we can remove the column and start cleaning up the
new column's name
* Handled draft status changes when emails not sent
We want to reset any concept of sending an email when a post is
transitioned to the draft status, if and only if an email has not
already been sent. If an email has been sent, we should leave the
email-related fields as they were.
* Removed send_email_when_published from add method
This is not supported at the model layer
* Removed email_recipient_filter from v2&Content API
This should not be exposed on previous api versions, or publicly
* Removed reference to send_email_when_published
This allows us to move completely to the email_recipient_filter
property, keeping the code clean and allowing us to delete the
send_email_when_published column in the database. We plan to then
migrate _back_ to the send_email_when_published name at both the
database and api level.
no issue
- set `emails.track_opens` to `true` when the `enableDeveloperExperiments` flag is set
- update mailgun bulk-email provider to pass the open-tracking header to Mailgun when the email's `track_opens` flag is set
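For context, Mailgun's open tracking is controlled per message via the `o:tracking-opens` parameter; a rough sketch of passing it through (the surrounding provider code is simplified):
```
// Sketch: add Mailgun's open-tracking flag to the message payload when the
// email's track_opens flag is set.
function buildMessageData(email, baseData) {
    const messageData = {...baseData};

    if (email.track_opens) {
        messageData['o:tracking-opens'] = 'yes';
    }

    return messageData;
}
```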
closes https://github.com/TryGhost/Ghost/issues/12259
- adds a `DISTINCT` to the query used to fetch member rows when generating an email recipient list
- this increases query time (2.7s vs 1.6s locally with ~94k paid members) but once the `members.paid` column is implemented this slow query can be removed
no issue
- wrap email batch/recipient record creation in a transaction so if an error occurs during creation we're not left with a partially created batch/recipient set in the database
no issue
- if an error occurred whilst creating email batch/recipient records the email status was never updated and was left in the 'pending' status
- adjusted the error handling to update the email status and record the error message if such a scenario occurs
no issue
- the paid-member SQL query that is obtained using `models.Member.getFilteredCollectionQuery({paid: true})` can return multiple columns with the same name (e.g. `email`, `name`); when that happens the last column with a duplicate name "wins" and its value is used in the resulting knex row instance
- in the `mega` service we ran into this problem when fetching email recipient rows; to avoid it the query is adjusted to explicitly select only the data from the `members` table
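A sketch of the adjustment (the call shape is simplified and illustrative):
```
// Sketch: explicitly select only members.* so that columns joined in by the
// filter (e.g. from label or subscription tables) can't shadow member fields
// such as `email` or `name` in the resulting knex rows.
async function getPaidMemberRows(models) {
    const query = models.Member.getFilteredCollectionQuery({paid: true});
    query.select('members.*');
    return query;
}
```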
no issue
- we've had an issue with emails failing due to unexpectedly missing data when inserting email recipient rows
- added a validation check before adding recipient details along with a log so that invalid data can be investigated
no issue
- store raw content in email record
- keep any replacement strings in the html/plaintext content so that it can be used when sending email rather than needing to re-serialize the post content which may have changed
- split post email serializer into separate serialization and replacement parsing functions
- serialization now returns any email content that is derived from the post content (subject/html/plaintext) rather than post content plus replacements
- `parseReplacements` has been split out so that it can be run against email content rather than a post, this allows mega and the email preview service to work with the stored email content
- move mailgun-specific functionality into the mailgun provider
- previously mailgun-specific behaviour was spread across the post email serializer, mega, and bulk-email service
- the per-batch `send` functionality was moved from the `bulk-email` service to the mailgun provider and updated to take email content, recipient info, and replacement info so that all mailgun-specific headers and replacement formatting can be handled in one place
- exposes the `BATCH_SIZE` constant because batch sizes are limited to what the provider allows
- `bulk-email` service split into three methods
- `send` responsible for taking email content and recipients, parsing replacement info from the email content and using that to collate a recipient data object, and finally coordinating the send via the mailgun provider. Usable directly for use-cases such as test emails
- `processEmail` takes an email ID, loads it and coordinates sending related batches concurrently
- `processEmailBatch` takes an email_batch ID, loads it along with associated email_recipient records and passes the data through to the `send` function, updating the batch status as it's processed
- `processEmail` and `processEmailBatch` take IDs rather than objects ready for future use by job-queues, it's best to keep job parameters as minimal as possible
- refactored `mega` service
- modified `getEmailData` to collate email content (from/reply-to/subject/html/plaintext) rather than being responsible for dealing with replacements and mailgun-specific replacement formats
- used for generating email content before storing in the email table, and when sending test emails
- from/reply-to calculation moved out of the post-email-serializer into mega and extracted into separate functions used by `getEmailData`
- `sendTestEmail` updated to generate `EmailRecipient`-like objects for each email address so that appropriate data can be supplied to the updated `bulk-email.send` method
- `sendEmailJob` updated to create `email_batches` and associated `email_recipients` records then hand over processing to the `bulk-email` service
- member row fetching extracted into a separate function and used by `createEmailBatches`
- moved updating of email status from `mega` to the `bulk-email` service, keeps concept of Successful/FailedBatch internal to the `bulk-email` service
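A rough illustration of the `parseReplacements` idea from the list above: scan stored email content for replacement markers and return match/fallback info that the send step can later turn into per-recipient values. The `%%{...}%%` marker format and the regex are assumptions for illustration, not the exact implementation:
```
// Sketch: find replacement markers such as %%{first_name}%% or
// %%{first_name, "there"}%% in stored email HTML/plaintext and describe them
// so the send step can substitute per-recipient values.
function parseReplacements(emailContent) {
    const replacementPattern = /%%\{(\w+)(?:,\s*"([^"]*)")?\}%%/g;
    const replacements = [];

    for (const content of [emailContent.html, emailContent.plaintext]) {
        let match;
        while ((match = replacementPattern.exec(content || '')) !== null) {
            replacements.push({
                match: match[0],      // full marker to replace
                id: match[1],         // e.g. 'first_name'
                fallback: match[2] || ''
            });
        }
    }

    return replacements;
}
```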
requires https://github.com/TryGhost/Ghost/pull/12192
- added initial `EmailBatch` and `EmailRecipient` model definitions with defaults and relationships
- added missing `post` relationship function to email model
- fetch member list without bookshelf
- bookshelf can add around 3x overhead when fetching the members list for an email
- we don't need full members at this point, only having the data is fine
- if we need full models later on we can push the model hydration into background jobs where recipient batches are fetched ready for an email to be sent
- bookshelf model instantiation of many models blocks the event loop, using knex directly keeps concurrent requests fast
- adds `getFilteredCollectionQuery` method to base model to facilitate getting a knex query based on our normal model filters along with transaction/forUpdate applied
- store recipient list before sending email
- chunk already-fetched members list into batches and insert records into the `email_recipients` table via knex
- chunked into batches of 1000 to match the number of emails that Mailgun accepts in a single API request but this may not be the absolute fastest batch size for recipient insertion:
| Batch size | Batch time | Total time |
| ---------- | ---------- | ---------- |
| 500 | 20ms | 4142ms |
| 1000 | 50ms | 4651ms |
| 5000 | 170ms | 3540ms |
| 10000 | 370ms | 3684ms |
- create an email_batch record before inserting recipient rows so we can efficiently fetch recipients by batch and store the overall batch status
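A sketch of the chunked insert and per-chunk batch creation described above (column names are simplified; the chunking helper is lodash's):
```
const _ = require('lodash');
const ObjectID = require('bson-objectid');

// Sketch: split the already-fetched member rows into chunks of 1000 (the
// Mailgun per-request limit mentioned above), creating one email_batch row
// per chunk and inserting its recipients via knex.
async function createEmailBatches(knex, emailId, memberRows) {
    const batchIds = [];

    for (const chunk of _.chunk(memberRows, 1000)) {
        const batchId = ObjectID().toHexString();
        await knex('email_batches').insert({id: batchId, email_id: emailId});

        await knex('email_recipients').insert(chunk.map(member => ({
            id: ObjectID().toHexString(),
            email_id: emailId,
            batch_id: batchId,
            member_id: member.id,
            member_email: member.email
        })));

        batchIds.push(batchId);
    }

    return batchIds;
}
```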
This reverts commit 80af56b530.
- reverting temporarily so that all associated functionality can be merged in a single release
- creating email batch/recipient records without using them would cause inconsistent data
requires https://github.com/TryGhost/Ghost/pull/12192
- added initial `EmailBatch` and `EmailRecipient` model definitions with defaults and relationships
- added missing `post` relationship function to email model
- fetch member list without bookshelf
- bookshelf can add around 3x overhead when fetching the members list for an email
- we don't need full members at this point, only having the data is fine
- if we need full models later on we can push the model hydration into background jobs where recipient batches are fetched ready for an email to be sent
- bookshelf model instantiation of many models blocks the event loop, using knex directly keeps concurrent requests fast
- store recipient list before sending email
- chunk already-fetched members list into batches and insert records into the `email_recipients` table via knex
- chunked into batches of 1000 to match the number of emails that Mailgun accepts in a single API request but this may not be the absolute fastest batch size for recipient insertion:
| Batch size | Batch time | Total time |
| ---------- | ---------- | ---------- |
| 500 | 20ms | 4142ms |
| 1000 | 50ms | 4651ms |
| 5000 | 170ms | 3540ms |
| 10000 | 370ms | 3684ms |
- create an email_batch record before inserting recipient rows so we can efficiently fetch recipients by batch and store the overall batch status
no issue
- By default for new sites, the support address is set to `noreply`, the same as the from address, with the full email address using the site domain after the `@`
- For newsletter emails, the support address was missing the default site domain when it was `noreply`
- The fix updates the support address to use the same format as the from address, adding the relevant domain in the default case
no refs
- The `update` method in the members-api package was edited to return a Model object instead of JSON directly - TryGhost/Members@a28bcc5
- The unsubscribe handler was returning the value from the `update` method directly, which is now a model object, so `member.email` was no longer accessible
- Fix updates the unsubscribe request handler to return the member JSON again
no issue
- moves the meat of `pendingEmailHandler()` code into a new function `sendEmailJob()` that is passed over to the new job service
- lets the server keep processing email generation and sending when it receives a shutdown request rather than halting processing mid-send and ending up in a partial state
no-issue
* Added stripeSubscriptions relation to member model
This allows us to fetch the subscriptions for a member via standard
model usage, e.g. `withRelated: ['stripeSubscriptions']`, rather than
offloading to loops and `decorateWithSubscriptions` functions. This is
more performant and more standard than the existing method.
* Updated serialize methods to match existing format
The current usage of `decorateWithSubscriptions` and the usage of
members throughout the codebase has a subscriptions array on a stripe
object on the member, this ensures that when we serialize members to
JSON that we are using the same format.
There is definitely room to change this in future, but this is an
attempt to create as few breaking changes as possible.
* Installed @tryghost/members-api@0.26.0
This includes the required API changes so that everywhere can use
members-api directly rather than models and/or helper methods
no-issue
- switch from `membersService.api.members.list` to using bookshelf `Member.findPage()` with the `{paid: true}` filter to avoid per-member queries (N+1) to decorate members with subscriptions and a heavy post-fetch filter via `contentGating`
- add concurrency to the Mailgun API requests in `bulk-email` service to reduce overall time submitting API requests
- add debug statements with timing output for easier measurements
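One way the concurrency could be added (a sketch, not the actual implementation; `sendBatch` stands in for the per-batch Mailgun API call):
```
// Sketch: send Mailgun batch requests with limited concurrency instead of
// strictly one at a time, reducing overall time submitting API requests.
async function sendWithConcurrency(batches, sendBatch, concurrency = 10) {
    const results = [];

    for (let i = 0; i < batches.length; i += concurrency) {
        const slice = batches.slice(i, i + concurrency);
        results.push(...await Promise.all(slice.map(batch => sendBatch(batch))));
    }

    return results;
}
```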
- Member limit code was duplicated in 2 places unnecessarily
- It also used members API code that fetched fully hydrated members and subscriptions when we only need a count
- Using a raw query significantly improves performance here
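A sketch of the raw count query (knex call shape; table name from Ghost's schema):
```
// Sketch: count members with a single aggregate query instead of fetching
// fully hydrated member models and subscriptions just to count them.
async function getMemberCount(knex) {
    const [{count}] = await knex('members').count('id as count');
    return count;
}
```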
- Represents that logging is shared across all parts of Ghost at present
* moved core/server/lib/common/logging to core/shared/logging
* updated logging path for generic imports
* updated migration and schema imports of logging
* updated tests and index logging import
* 🔥 removed logging from common module
* fixed tests
* moved `server/config` to `shared/config`
* updated config import paths in server to use shared
* updated config import paths in frontend to use shared
* updated config import paths in test to use shared
* updated config import paths in root to use shared
* trigger regression tests
* of course the rebase broke tests
- Use array destructuring
- Use @tryghost/errors
- Part of the big move towards decoupling, this gives visibility on what's being used where
- Biting off manageable chunks / fixing bits of code I'm refactoring for other reasons
no issue
- the `email.{html,plaintext}` fields are only used to display what was sent in the email so it doesn't make sense to store the mailgun-specific content which can be confusing when viewing in the admin area
- store the raw serialized post content with a basic no-data replacement of replacement strings rather than the output of full data fetching and mailgun transformation