no issue
The email table should be a reference for all data that was used when sending an email. From and Reply-to addresses can change over time and we don't have any other reference for their value at the time of sending an email so we should store them alongside the email content.
- schema updated with `from` and `reply_to` columns
- both are set to `nullable` because we don't have historic data (can be populated and changed in later migrations if needed), as shown in the sketch after this list
- neither `from` nor `reply_to` has an `isEmail` validation because they can contain name+email in an email-specific format
- will help keep concerns separated in the future. `mega` service can deal with all of the email contents/properties, and the `bulk-email` service's concerns are then only email sending and any provider-specific needs
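A minimal sketch of what the schema change could look like as a plain knex migration (illustrative only; it does not use Ghost's actual migration utilities, and the column lengths are assumptions):

```js
// Illustrative knex migration: add nullable `from` and `reply_to` columns to
// the `emails` table. Column lengths are assumptions.
module.exports = {
    async up(knex) {
        await knex.schema.alterTable('emails', (table) => {
            // nullable because historic emails have no record of these values
            table.string('from', 2000).nullable();
            table.string('reply_to', 2000).nullable();
        });
    },

    async down(knex) {
        await knex.schema.alterTable('emails', (table) => {
            table.dropColumn('from');
            table.dropColumn('reply_to');
        });
    }
};
```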
refs #12033
- Allowing the parent integration to be changed opens up possible security holes and has no clear use case at the moment. After a webhook record is created it should not be possible to change its parent integration.
- Had to partially duplicate the JSON schema definition from the webhooks definition as there is no proper composition technique available in the current version of JSON Schema.
refs https://github.com/TryGhost/Ghost/issues/12033
refs https://github.com/TryGhost/Ghost/issues/10567
- Creating a webhook without a valid parent integration leads to orphaned webhook records, which should never happen
- This scenario is only possible for non-integration authentication,
because when an integration is authenticated its id is
automatically assigned to the created webhook
refs #11729
- When ordering is done by fields from a relation (like a post's `meta_title` that comes from the `posts_meta` table), Bookshelf does not include those relations in the original query, which caused errors. To support this use case, added a mechanism to detect fields from a relation and load those relations into the query.
- Extended ordering to include the table name in the ordered field name. The table name information is needed to avoid using `tableName` within the pagination plugin and opens a path to ordering by fields from tables other than the original one (e.g. ordering by `posts_meta` fields)
- Added test case to check ordering on posts_meta fields
- Added support for "eager loading" relations. Allows the query builder object to be extended with joins to related tables,
which can be used in ordering (and possibly in filtering later). Bookshelf does not support ordering/filtering by properties coming from relations, which is why this kind of plugin and query expansion is needed (see the sketch after this list)
- Added note about the lack of support for child relations with the same property names.
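To illustrate the idea (a conceptual sketch, not the plugin's actual code), ordering by a `posts_meta` field requires the related table to be joined into the query before the ORDER BY can reference it:

```js
// Conceptual sketch: join the related table so ORDER BY can use its columns.
function orderByRelationField(queryBuilder) {
    return queryBuilder
        .leftJoin('posts_meta', 'posts_meta.post_id', 'posts.id')
        .orderBy('posts_meta.meta_title', 'asc');
}

// usage with a knex instance:
// const posts = await orderByRelationField(knex('posts').select('posts.*'));
```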
closes #12038
Previously we were emitting changed events for _all_ settings, which would
cause any listeners for those to be triggered. This ensures that listeners are
only triggered if the corresponding setting _did_ in fact change.
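A rough sketch of the idea, assuming a Bookshelf settings model and an `events` emitter (names here are illustrative, not the exact Ghost code):

```js
// Only emit an "edited" event for a setting whose value actually changed.
settingModels.forEach((model) => {
    // Bookshelf models expose hasChanged() for attributes modified by the last save
    if (model.hasChanged('value')) {
        events.emit(`settings.${model.get('key')}.edited`, model);
    }
});
```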
no-issue
This change allows themes to collect member names on signup by using the
`data-members-name` attribute on an input element inside the `data-members-form`.
refs #11729
- Having a plugin makes it easier to reason about ordering and gives way
to further extraction away from the core.
- Just like filtering, ordering falls into a similar category of extending knex's query builder object with additional statements, ORDER BY in this case.
- Because there were bugs associated with the use of `permittedAttributes` inside the `parseOrderOption` method, introduced a separate `orderAttributes` function which returns only those fields that are orderable (https://github.com/TryGhost/Ghost/issues/11729#issuecomment-685740836)
refs #11878
- When a password reset link is invalid, the previous messaging left the user
without clear information about why the reset failed and what they could do about it.
- Updated messaging around password reset tokens, including detection of
when a password token has an invalid structure, has expired, or has already
been used
This reverts commit 80af56b530.
- reverting temporarily so that all associated functionality can be merged in a single release
- creating email batch/recipient records without using them would cause inconsistent data
no-issue
* Added SingleUseTokenProvider to members service
This implements the TokenProvider interface required by members-api to
generate magic links. It handles checking if the token is expired and
pulls out any associated data.
Future improvements may include the email in the error for expired
tokens, which would make resending a token simpler.
* Passed SingleUseTokenProvider to members-api
This sets up the members-api module to use the new single use tokens
* Installed @tryghost/members-api@0.30.0
This includes the change to allow us to pass a token provider to the members-api
no-issue
This is a model for the tokens table, which handles the single use
aspect by customising the `findOne` method to automatically destroy the
model after reading from it
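A rough sketch of the single-use behaviour in plain knex terms (illustrative only, not the actual model code, which customises `findOne` on the Bookshelf model):

```js
// Fetch the token row, then delete it so the same token can't be read twice.
async function findAndConsumeToken(knex, tokenValue) {
    const row = await knex('tokens').where({token: tokenValue}).first();

    if (row) {
        // deleting after the read is what makes the token single use
        await knex('tokens').where({id: row.id}).del();
    }

    return row;
}
```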
no-issue
refs: https://github.com/TryGhost/Members/commit/63942f03
The above commit updated all magic-link methods to be async, this change
ensures that we will handle a Promise return when it's updated in Ghost.
`await` has no effect on non-Promise return values so it's safe to add
this now.
no-issue
After discussion with Matt, we decided that 192 bits for the token is a
good number, as it has no padding when base64 encoded and is more secure
than 128 bits, whilst still being a manageable size.
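A quick illustration of the padding point (this is just the arithmetic, not necessarily the exact generation code used):

```js
const crypto = require('crypto');

// 24 random bytes = 192 bits; base64 encodes every 3 bytes as 4 characters,
// so a 24-byte token encodes to exactly 32 characters with no '=' padding.
const token = crypto.randomBytes(24).toString('base64');
console.log(token.length); // 32
```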
no-issue
This is a table to store single use tokens for use in magic links, the
columns are as simple as possible at the moment and are designed as:
id - standard ObjectID like all of our tables
token - 128bit base64 encoded string
data - arbitrary data to store against the token
created_at - timestamp to allow for expiry to be implemented for tokens
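For illustration, the table described above could be sketched in knex like this (Ghost defines its schema centrally rather than in ad-hoc migrations like this, and the column lengths here are assumptions):

```js
async function up(knex) {
    await knex.schema.createTable('tokens', (table) => {
        table.string('id', 24).primary();           // ObjectID-style id
        table.string('token', 32).notNullable();    // base64 encoded token value
        table.string('data', 2000).nullable();      // arbitrary data stored against the token
        table.dateTime('created_at').notNullable(); // allows expiry to be implemented later
    });
}
```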
no issue
- In a recent change to the ownership verification email flow, we changed the FROM address of ownership verification mails to use the same email as the one we are verifying, aka the TO address.
- Email clients like Gmail flag such emails as possible spam
- Fix updates the `FROM` address to `noreply@domain.com` where domain.com is the domain of the TO address
- In case the TO address is already noreply@domain.com, we use no-reply@domain.com to bypass the same-address restriction.
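A small sketch of the rule described above (function and parameter names are illustrative, not the actual Ghost code):

```js
// Send the verification email FROM noreply@<to-domain>, falling back to
// no-reply@<to-domain> when the TO address is already the noreply one.
function buildVerificationFromAddress(toAddress) {
    const [localPart, domain] = toAddress.toLowerCase().split('@');

    return localPart === 'noreply'
        ? `no-reply@${domain}`
        : `noreply@${domain}`;
}

// buildVerificationFromAddress('owner@example.com')   -> 'noreply@example.com'
// buildVerificationFromAddress('noreply@example.com') -> 'no-reply@example.com'
```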
no issue
- The new Portal config flag allows switching on Portal conditionally with config
- The dev experiment flag still works for enabling Portal
- The flag currently defaults to `false` as Portal is still a beta feature and switched off by default
- We expose it on the admin api config endpoint so that the Ghost-Admin client can use it to conditionally render Portal settings
no issue
- Members site data was not appending the blog domain to the default support address, which is `noreply`
- The change allows Portal to use default support address correctly
requires https://github.com/TryGhost/Ghost/pull/12192
- added initial `EmailBatch` and `EmailRecipient` model definitions with defaults and relationships
- added missing `post` relationship function to email model
- fetch member list without bookshelf
- bookshelf can add around 3x overhead when fetching the members list for an email
- we don't need full members at this point, only having the data is fine
- if we need full models later on we can push the model hydration into background jobs where recipient batches are fetched ready for an email to be sent
- bookshelf model instantiation of many models blocks the event loop, using knex directly keeps concurrent requests fast
- store recipient list before sending email
- chunk the already-fetched members list into batches and insert records into the `email_recipients` table via knex (see the sketch after this list)
- chunked into batches of 1000 to match the number of emails that Mailgun accepts in a single API request, but this may not be the absolute fastest batch size for recipient insertion:
| Batch size | Batch time | Total time |
| ---------- | ---------- | ---------- |
| 500 | 20ms | 4142ms |
| 1000 | 50ms | 4651ms |
| 5000 | 170ms | 3540ms |
| 10000 | 370ms | 3684ms |
- create an email_batch record before inserting recipient rows so we can efficiently fetch recipients by batch and store the overall batch status
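A rough sketch of the batching approach (column names and the batch status value are assumptions, and the real code records more fields; illustrative only):

```js
const _ = require('lodash');
const ObjectID = require('bson-objectid');

// Chunk the already-fetched member rows and insert recipient records via knex,
// creating an email_batches row per chunk so batches can be fetched efficiently.
async function storeRecipientBatches(knex, emailId, members, batchSize = 1000) {
    const batchIds = [];

    for (const membersChunk of _.chunk(members, batchSize)) {
        const batchId = ObjectID().toHexString();

        await knex('email_batches').insert({id: batchId, email_id: emailId, status: 'pending'});

        await knex('email_recipients').insert(membersChunk.map(member => ({
            id: ObjectID().toHexString(),
            email_id: emailId,
            batch_id: batchId,
            member_id: member.id,
            member_email: member.email,
            member_name: member.name
        })));

        batchIds.push(batchId);
    }

    return batchIds;
}
```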
no issue
We want to store a list of recipients for each bulk email so that we have a consistent set of data that background processing/sending jobs can work from without worrying about moving large data sets around or member data changing mid-send.
- `email_batches` table acts as a join table with status for email<->email_recipient
- stores a provider-specific ID that we get back when submitting a batch for sending to the bulk email provider
- `status` allows for batch-specific status updates and picking up where we left off when submitting batches if needed
- explicitly tying a list of email recipients to a batch allows for partial retries
- `email_recipients` table acts as a join table for email<->member
- `member_id` does not have a foreign key constraint because members can be deleted but does have an index so that we can efficiently query which emails a member has received
- stores static copies of the member info present at the time of sending an email for consistency in background jobs and auditing/historical data
refs #2635
- Adds a 'Location' header to endpoints which create new resources and have a corresponding `GET` endpoint, as specified in JSON:API - https://jsonapi.org/format/#crud-creating-responses-201. Specifically:
/posts/
/pages/
/integrations/
/tags/
/members/
/labels/
/notifications/
/invites/
- Adding the header should allow for better resource discoverability and improved logging readability
- Added `url` property to the frame constructor. Data in `url` should give enough information to later build up the `Location` header URL for created resource.
- Added Location header to the headers handler. The Location value is built up from a combination of the request URL and the id that is present in the response for the resource. The header is automatically added to requests coming to `add` controller methods which return an `id` property in the frame result (see the sketch after this list)
- Excluded Webhooks API as there is no "GET" endpoint available to fetch the resource
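A small sketch of how the Location value can be derived (names and URL are illustrative, not the exact implementation):

```js
// Combine the request URL with the id returned for the newly created resource.
function getLocationHeader(requestUrl, result) {
    if (!result || !result.id) {
        return null;
    }

    return new URL(`${result.id}/`, requestUrl).href;
}

// getLocationHeader('https://example.com/ghost/api/v3/admin/posts/', {id: 'abc123'})
// -> 'https://example.com/ghost/api/v3/admin/posts/abc123/'
```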
closes #12045
- When a member's email is updated to an already existing email of a different member, it caused the table's unique constraint error, which was not handled properly.
- Added handling for this error similar to the one in the members `add` method.
closes #11999
- When the routes.yaml file changes (manually or through API) we need
to store a checksum to be able to optimize routes reloads in the future
- Added a mechanism to detect differences between the stored and current routes.yaml hash values (see the sketch after this list)
- Added routes.yaml sync on server boot
- Added routes.yaml handling in controllers
- Added routes hash synchronization method in core settings. It lives in core settings
as it needs access to the model layer. To avoid coupling with the frontend settings it accepts
a function which has to resolve to a routes hash
- Added note about settings validation side-effect. It mutates input!
- Added async check for currently loaded routes hash
- Extended the frontend settings loader with an async loader. The default behavior of the loader is
to load settings synchronously for reasons spelled out in 0ac19dcf84.
To avoid blocking the event loop, added an async loading method
- Refactored frontend setting loader for reusability of settings file path
- Added integrity check test for routes.yaml file
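A sketch of the hash comparison described above (the md5 choice and the function names are assumptions for illustration):

```js
const crypto = require('crypto');
const fs = require('fs').promises;

// Compare the hash of the current routes.yaml contents with the stored setting.
async function routesHashChanged(routesYamlPath, storedHash) {
    const contents = await fs.readFile(routesYamlPath, 'utf8');
    const currentHash = crypto.createHash('md5').update(contents).digest('hex');

    return currentHash !== storedHash;
}
```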
closes #12060
- A 500 error was happening when an invited user provided an email that is associated with an existing user
- Additional validation for an existing email address was added to prevent invalid data hitting a db constraint error
refs #11999
- The `routes_hash` setting will be used during the boot process to update the hash
of currently loaded routes.yaml file in case it's different from last restart
no issue
- The async/await syntax makes it easier to reason about the code. Because adding the 'Location' header is in the works, this is prep work in a sense
closes https://github.com/TryGhost/members.js/issues/94
- The members-api package was recently updated to work directly with models and needs explicit `withRelated` options to attach relations
- Without options, the endpoint was returning the default member data without subscriptions attached, which in Portal showed paid member as free
- Fix updates the middleware for updating member data to correctly pass the relations needed to populate the member
no issue
- By default for new sites, the support address is set, like the from address, to `noreply`, with the full email address using the site domain after the `@`
- For newsletter emails, the support address was missing the default site domain that should be added to the address if it's `noreply`
- Fix updates the support address to use the same format as the from address and adds the relevant domain for the default case
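A minimal sketch of the default-address handling described above (names are illustrative):

```js
// When the stored support address is the bare keyword `noreply`, append the
// site domain before using it in a newsletter email.
function resolveSupportAddress(storedAddress, siteDomain) {
    if (storedAddress === 'noreply') {
        return `noreply@${siteDomain}`;
    }
    return storedAddress;
}

// resolveSupportAddress('noreply', 'example.com')          -> 'noreply@example.com'
// resolveSupportAddress('help@example.com', 'example.com') -> 'help@example.com'
```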
no refs
- The `update` method in members-api package was edited to return Model object instead of JSON directly - TryGhost/Members@a28bcc5
- This unsubscribe handler was returning the raw member object returned from the `update` method, which is now a model object, so `member.email` was no longer accessible
- Fix updates the unsubscribe request handler to return the member JSON again
no issue
- members who have trial subscriptions added directly via Stripe will have a status of `"trialed"` in their Ghost subscription
- the `paid: true` filter was not taking that into account meaning trial users were not receiving newsletters sent to paid members even though they have a "paid" subscription
no issue
- We used the existing "from" address as sender for mails sent to the new email address for verification, but that breaks the update flow if the current "from" address has a DMARC policy set.
- This updates the flow to always send the ownership verification email TO the new address and FROM the new address, which verifies both email deliverability for the new address and its ownership
no issue
- The newsletter emails are sent out with `from` address as sender
- The new `members_reply_address` setting is now used to set reply-to address for emails, which can be either newsletter or support address
no issue
- Member auth emails were previously using the `from` address as sender
- New `members_support_address` was introduced with default as original "from" address
- Auth emails use the new support address as sender
no issue
- Updated magic link generation and validation methods for email update API to handle new support address
- Updated importer to ignore the new support address as it can only be updated via verification
- Updated members service to listen on settings edit for new support/reply address fields as well
- Updated tests to include the new settings
no issue
- Added default settings for the two new setting fields - `members_support_address` and `members_reply_address`
- Added migrations for setting group for new email settings
- Migration sets current from address as new support address default
- Added migration to set new support address same as from address
- Updated tests for new settings
- `members_support_address` - How members can reach out for help with their account, public setting
- `members_reply_address` - Where you receive responses to newsletters
closes #12167
- Tags API v2 was ignoring `count.posts` include parameter.
- Regression was introduced with a3f693b472
- Introduced regression tests across all Content API versions to avoid similar bug in the future
no issue
- Added default settings for the two new setting fields - `members_support_address` and `members_reply_address`
- Added migrations for setting group for new email settings
- Migration sets current from address as new support address default
- Added migration to set new support address same as from address
- Updated tests for new settings
- `members_support_address` - How members can reach out for help with their account
- `members_reply_address` - Where you receive responses to newsletters
no issue
- When an import was done and there were no "global labels" present, Ghost created a generic `import-[data]` label which later helped to find a specific batch of imported data
- It did not make sense to create such a generic label when the user provided their own unique label
- The rules that work now are:
1. When there is no global label provided, Ghost generates one and removes it in case there are no imported records
2. When a unique new global label is provided, no new label is generated, but the label stays even if there are no imported records
no issue
- tested performance between knex raw, knex `count()` and bookshelf `count()` and found no difference over 1000 iterations of each (each ~19,500ms +- 500ms for 104k members locally)
- switched to using bookshelf as the code is the simplest
no issue
- extract filtering of a collection into a separate function
- use extracted function in `findAll()` so that its query behaviour matches `findPage()`
no issue
- for large result sets or complex queries the count query itself can be quite time consuming
- when `limit: 'all'` is passed as an option there's no need to perform a separate count query because we can determine the pagination data from the final result set
- skipped the count query when the `limit: 'all'` option is present (see the sketch after this list)
- re-ordered comments to be closer to the code they reference (i.e. why we have our own count query instead of Bookshelf's `.count()`)
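A rough sketch of the shortcut in knex terms (not the actual plugin code; assumes a knex query builder is passed in):

```js
// When limit is 'all', the total can be derived from the fetched rows,
// so the separate COUNT query can be skipped entirely.
async function fetchPage(query, {limit, page = 1}) {
    if (limit === 'all') {
        const rows = await query;
        return {rows, pagination: {page: 1, limit: 'all', pages: 1, total: rows.length}};
    }

    // otherwise run the count query alongside the limited fetch
    const [countResult] = await query.clone().clearSelect().clearOrder().count('* as total');
    const rows = await query.limit(limit).offset((page - 1) * limit);

    return {
        rows,
        pagination: {page, limit, pages: Math.ceil(countResult.total / limit), total: countResult.total}
    };
}
```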