closes #12273
- The `comped` field was previously only allowed when editing a member or importing from a CSV. There is a use case (the Zapier integration) for an API client to create a member with a "Complimentary" plan, which made this change necessary
- Previously, the logic for the `comped` field was to skip it and continue member record creation if Stripe was not connected. Now we throw an error - the same one we previously threw when the `stripe_customer_id` field was passed in. The implication of this change is that no record is created when `comped === true` and Stripe is disabled
- Bumped the admin-api-schema package. It contains the `comped` schema change so this field gets passed through to the controller
no issue
When scheduling a post to publish+send, the "view online" link pointed at https://site.com/404/ rather than the published post's URL.
The problem occurred because the `/schedules/` endpoint wraps its post read+edit calls in a transaction. Context:
- when a post is published with the "send email" option, the email record is immediately generated and added to the API response; as part of the email record generation we render the email content, including fetching the URL for the "view online" link
- URLs for all resources are handled by our `url` service, which updates its internal cache based on model events such as the "edited" event triggered when a post is published
- if the posts API controller is given a transaction, the email record is also generated inside of that transaction. However, at this point the `url` service will not have been updated because the post record hasn't been committed, meaning it has no available URL for the post
Fix:
- removed the `models.Base.transaction()` wrapper around the post read+update in the `/schedules/` API controllers
- we don't need a transaction here. It was added as protection against another write request coming in between the `/schedules/` controller reading a post and publishing it, but we already have protection against that in the form of collision detection: if a write request comes in and commits between the schedules controller reading the post and updating it, the scheduler's update call will fail with a collision error. At that point the scheduler itself should retry the request, which can then publish the post successfully if everything else is in order
closes https://github.com/TryGhost/Ghost/issues/12260
- if a card type was not explicitly chosen (i.e. a URL was pasted into the editor), abort fetching the oembed endpoint when we detect it's a `wp-json` oembed and return a bookmark card payload instead
- cleaned up an unused argument in the internal `fetchBookmarkData()` method
closes https://github.com/TryGhost/Ghost/issues/12257
- there was a destructuring problem introduced in the recent email refactor which meant the correct replacement data was not being passed over to the Mailgun provider when sending email
closes https://github.com/TryGhost/Ghost/issues/12259
- adds a `DISTINCT` to the query used to fetch member rows when generating an email recipient list
- this increases query time (2.7s vs 1.6s locally with ~94k paid members), but once the `members.paid` column is implemented this slow query can be removed
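
A minimal sketch of the change above, assuming the member-fetching helper returns a knex query builder (the filter and column list are illustrative):

```js
// Hedged sketch: `getFilteredCollectionQuery` joins related tables, which can
// produce duplicate member rows; DISTINCT de-duplicates them before the
// recipient list is built
const query = models.Member.getFilteredCollectionQuery({filter: 'subscribed:true'});
query.distinct('members.id', 'members.email', 'members.name');
const rows = await query;
```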
closes https://github.com/TryGhost/Ghost/issues/12247
- The internal preview controller was missing the "mapping" call on the post object, which handles not only missing meta attribute information but lots of other mappings (e.g. users, tags, etc.)
- Added a regression test to catch issues like this in the future
no issue
- fixed passing of errors up through send/processBatch/processEmail
- fixed errant overwrite of email status with a "submitted" status after a failure had occurred
no issue
- wrap email batch/recipient record creation in a transaction so if an error occurs during creation we're not left with a partially created batch/recipient set in the database
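
A minimal sketch of the wrapping, using `models.Base.transaction()` (referenced elsewhere in these changes); the model call and row shape are illustrative:

```js
// Hedged sketch: create the batch and its recipient rows atomically so a
// failure can't leave a partially created set behind in the database
await models.Base.transaction(async (transacting) => {
    const batch = await models.EmailBatch.add({email_id: emailId}, {transacting});
    await knex('email_recipients')
        .transacting(transacting)
        .insert(recipientRows.map(row => ({...row, batch_id: batch.id})));
});
```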
no issue
- if an error occurred whilst creating email batch/recipient records, the email status was never updated and was left in the 'pending' status
- adjusted the error handling to update the email status and record the error message if such a scenario occurs
no issue
- the paid-member SQL query that is obtained using `models.Member.getFilteredCollectionQuery({paid: true})` can return multiple columns with the same name (e.g. `email`, `name`); when that happens the last column with a duplicate name "wins" and its value is used in the resulting knex row instance
- in the `mega` service we ran into this problem when fetching email recipient rows; to avoid it we adjust the query to explicitly select only the data from the `members` table, as in the sketch below
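
A minimal sketch of the fix, assuming the helper returns a knex query builder (the select shape is illustrative):

```js
// Hedged sketch: restrict the select to the members table so columns from
// joined tables can't shadow members.email / members.name
const query = models.Member.getFilteredCollectionQuery({paid: true});
query.select('members.*');
const rows = await query;
```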
no issue
- we've had an issue with emails failing due to unexpectedly missing data when inserting email recipient rows
- added a validation check before adding recipient details along with a log so that invalid data can be investigated
no issue
- We had previously allowed the accent_color setting in member site settings behind the Portal flag, but Ghost Admin also needs the public site setting with the accent color to correctly reflect it when the flag is switched on
- Removes deletion of the accent color setting when behind the Portal flag OR the dev experiment flag
no issue
- there was a typo in the recent email sending refactor that resulted in `Error: 'to' parameter is missing` errors when sending email previews and bulk emails
refs #12249
This was incorrectly assuming the presence of the data-members-name
element in the document. By guarding against its absence and defaulting
to undefined, we fall back to the existing behaviour when the element is
not present.
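
A minimal sketch of the guard, assuming the form element is in scope (selector and variable names are illustrative):

```js
// Hedged sketch: only read the name input's value when the element exists,
// otherwise default to undefined so existing behaviour is unchanged
const nameInput = form.querySelector('[data-members-name]');
const name = nameInput ? nameInput.value : undefined;
```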
closes #12244
As per RFC 6454 the Origin header MUST be set to the string 'null' when
in a "privacy-sensitive" context. We were not handling this string and
this was causing errors. This commit updates all checks of the 'Origin'
header to treat the value 'null' as if the header was not present.
ref: https://tools.ietf.org/html/rfc6454#section-7.3
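
A minimal sketch of the check, assuming an Express-style request object (the helper name is hypothetical):

```js
// Hedged sketch: per RFC 6454 section 7.3, a "privacy-sensitive" context
// sends the literal string "null"; treat it the same as a missing header
function getOrigin(req) {
    const origin = req.get('origin');
    if (!origin || origin === 'null') {
        return null;
    }
    return origin;
}
```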
refs #12055
As part of the work in TryGhost/Members#206 we load the stripeCustomers relation on the member model, and we do not want this to be part of the API response. The changes here include a refactor but the main thing is that the serialized object is explicit and does not include unexpected or unknown fields.
* Moved mapMember out of mapper file
This cleans up the serializer a bit by keeping its functionality all in
one place, rather than in a shared mapper file
* Refactored members controller to return models
Previously the controller was calling toJSON, which is serialization.
This updates the controller to only deal with models, leaving all of the
serialization to the serializer!
* Refactored members serializer
This adds typings to all of the methods/functions in the serializer, as
well as making the serialization explicit: rather than returning the
result of toJSON, we explicitly set the properties we expect to be on
the output object. This protects us against accidental API changes in
the future.
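
A hedged sketch of the explicit shape, with an illustrative subset of member fields (the exact property list and function name are assumptions):

```js
// Hedged sketch: list the serialized fields explicitly so newly loaded
// relations (e.g. stripeCustomers) never leak into the API response
function serializeMember(member) {
    const json = member.toJSON();
    return {
        id: json.id,
        email: json.email,
        name: json.name,
        note: json.note,
        subscribed: json.subscribed,
        created_at: json.created_at,
        updated_at: json.updated_at
    };
}
```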
closes https://github.com/TryGhost/Ghost/issues/10628
- JSON Schemas were extracted into a separate module so other clients can reuse them (for example, documentation). Having them in a separate package also slims down the amount of code that needs to be maintained in core.
- Updated canary API input validators to use admin-api-schema module
- Removed canary schemas that moved into admin-api-schema package
- Updated v2 API input validators to use admin-api-schema package
- Removed v2 schemas that moved into admin-api-schema package
- Updated tests to contain needed information in apiConfig to pick up correct validation
- Added @tryghost/admin-api-schema package dependency
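
A hypothetical usage sketch; the exact export and call signature of `@tryghost/admin-api-schema` are assumptions here, the point is that core input validators now delegate to the shared package instead of loading local schema files:

```js
// Hypothetical sketch (signature assumed): validators resolve a schema name
// from apiConfig and hand the frame data to the extracted package
const {validate} = require('@tryghost/admin-api-schema');

async function validateInput(apiConfig, frame) {
    // schema name derived from apiConfig, e.g. 'posts-add'
    await validate({data: frame.data, schema: `${apiConfig.docName}-${apiConfig.method}`});
}
```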
refs https://github.com/TryGhost/Ghost/pull/12232
When viewing sent emails in Ghost's admin area, the `html` field from the `email` relation loaded with the post is displayed directly. Since the `mega` refactor we store raw content in that field rather than sanitized "preview" content, so it's necessary to modify the API output to match the old behaviour.
- use the API output serializers to parse replacements in email content and replace with the desired fallback or empty string
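
A hedged sketch of the display-time replacement handling; the replacement objects and their `match`/`fallback` properties are assumptions about what the serializer's parsing step produces:

```js
// Hedged sketch: swap each stored replacement string for its fallback (or an
// empty string) so admin previews render like the old sanitized content
function renderEmailHtmlForDisplay(html, replacements) {
    return replacements.reduce((output, replacement) => {
        return output.split(replacement.match).join(replacement.fallback || '');
    }, html);
}
```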
no issue
- store raw content in email record
  - keep any replacement strings in the html/plaintext content so that it can be used when sending email, rather than needing to re-serialize the post content which may have changed
- split post email serializer into separate serialization and replacement parsing functions
  - serialization now returns any email content that is derived from the post content (subject/html/plaintext) rather than post content plus replacements
  - `parseReplacements` has been split out so that it can be run against email content rather than a post; this allows mega and the email preview service to work with the stored email content
- move mailgun-specific functionality into the mailgun provider
  - previously mailgun-specific behaviour was spread across the post email serializer, mega, and the bulk-email service
  - the per-batch `send` functionality was moved from the `bulk-email` service to the mailgun provider and updated to take email content, recipient info, and replacement info so that all mailgun-specific headers and replacement formatting can be handled in one place
  - exposes the `BATCH_SIZE` constant because batch sizes are limited to what the provider allows
- `bulk-email` service split into three methods (see the sketch after this list)
  - `send` is responsible for taking email content and recipients, parsing replacement info from the email content and using that to collate a recipient data object, and finally coordinating the send via the mailgun provider. Usable directly for use-cases such as test emails
  - `processEmail` takes an email ID, loads it, and coordinates sending the related batches concurrently
  - `processEmailBatch` takes an email_batch ID, loads it along with the associated email_recipient records, and passes the data through to the `send` function, updating the batch status as it's processed
  - `processEmail` and `processEmailBatch` take IDs rather than objects, ready for future use by job queues; it's best to keep job parameters as minimal as possible
- refactored `mega` service
  - modified `getEmailData` to collate email content (from/reply-to/subject/html/plaintext) rather than being responsible for dealing with replacements and mailgun-specific replacement formats
    - used for generating email content before storing in the email table, and when sending test emails
  - from/reply-to calculation moved out of the post-email-serializer into mega and extracted into separate functions used by `getEmailData`
  - `sendTestEmail` updated to generate `EmailRecipient`-like objects for each email address so that appropriate data can be supplied to the updated `bulk-email.send` method
  - `sendEmailJob` updated to create `email_batches` and associated `email_recipients` records, then hand over processing to the `bulk-email` service
    - member row fetching extracted into a separate function and used by `createEmailBatches`
  - moved updating of email status from `mega` to the `bulk-email` service; keeps the concept of Successful/FailedBatch internal to the `bulk-email` service
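
An illustrative skeleton of the three-method split; the method names come from this change, while the bodies and helper names are hypothetical:

```js
// Hedged sketch of the bulk-email service surface (helpers hypothetical)
module.exports = {
    // takes email content + recipients, parses replacement info from the
    // content, collates per-recipient data, and sends via the provider
    async send(emailData, recipients) {
        const replacements = parseReplacements(emailData); // hypothetical helper
        const recipientData = collateRecipientData(recipients, replacements); // hypothetical helper
        return mailgunProvider.send(emailData, recipientData);
    },

    // takes an email ID, loads it, and processes its batches concurrently
    async processEmail({emailId}) {
        const batchIds = await getBatchIds(emailId); // hypothetical helper
        return Promise.all(batchIds.map(id => this.processEmailBatch({emailBatchId: id})));
    },

    // takes an email_batch ID, loads its email_recipient rows, hands them to
    // `send`, and updates the batch status as it goes
    async processEmailBatch({emailBatchId}) {
        // load batch + recipients, call this.send(...), update batch status
    }
};
```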
requires https://github.com/TryGhost/Ghost/pull/12192
- added initial `EmailBatch` and `EmailRecipient` model definitions with defaults and relationships
- added missing `post` relationship function to email model
- fetch member list without bookshelf
  - bookshelf can add around 3x overhead when fetching the members list for an email
  - we don't need full members at this point, only having the data is fine
  - if we need full models later on we can push the model hydration into background jobs where recipient batches are fetched ready for an email to be sent
  - bookshelf model instantiation of many models blocks the event loop; using knex directly keeps concurrent requests fast
  - adds a `getFilteredCollectionQuery` method to the base model to facilitate getting a knex query based on our normal model filters, with transaction/forUpdate applied
- store recipient list before sending email
  - chunk the already-fetched members list into batches and insert records into the `email_recipients` table via knex
  - chunked into batches of 1000 to match the number of emails that Mailgun accepts in a single API request, but this may not be the absolute fastest batch size for recipient insertion:

    | Batch size | Batch time | Total time |
    | ---------- | ---------- | ---------- |
    | 500        | 20ms       | 4142ms     |
    | 1000       | 50ms       | 4651ms     |
    | 5000       | 170ms      | 3540ms     |
    | 10000      | 370ms      | 3684ms     |

  - create an email_batch record before inserting recipient rows so we can efficiently fetch recipients by batch and store the overall batch status
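
A minimal sketch of the chunked insertion using knex's `batchInsert`; the table and row names follow the commit, the surrounding wiring is illustrative:

```js
// Hedged sketch: insert recipient rows in chunks of 1000 to mirror Mailgun's
// per-request recipient limit
const BATCH_SIZE = 1000;
await knex.batchInsert('email_recipients', recipientRows, BATCH_SIZE);
```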
no issue
- The accent_color setting was being removed from members site data when behind the Portal flag
- The accent color setting is now allowed in members site data in all cases; it doesn't make sense to remove it specifically here when all the other Portal settings are already included, and those are a dev/portal flag feature anyway
no-issue
By using the "email" validation, we were validating emails in CSV
imports using a different validator to the rest of the API. AJV's
built-in email validation was failing on emails with "special" characters,
such as letters with an umlaut above them.
This commit brings the validation for CSV imports in line with the rest
of the API.
no issue
The email table should be a reference for all data that was used when sending an email. From and Reply-to addresses can change over time and we don't have any other reference for their value at the time of sending an email so we should store them alongside the email content.
- schema updated with `from` and `reply_to` columns
- both are set to `nullable` because we don't have historic data (can be populated and changed in later migrations if needed)
- neither `from` nor `reply_to` has `isEmail` validations because they can contain name+email in an email-specific format
- will help keep concerns separated in the future: the `mega` service can deal with all of the email contents/properties, and the `bulk-email` service's concerns are then only email sending and any provider-specific needs
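
An illustrative sketch of the column additions in Ghost's schema-definition style; the `maxlength` values are assumptions:

```js
// Hedged sketch of the new nullable columns on the email table; no isEmail
// validation because values may be in a `"Name" <email>` style format
const emailTableAdditions = {
    from: {type: 'string', maxlength: 2000, nullable: true},
    reply_to: {type: 'string', maxlength: 2000, nullable: true}
};
```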
refs #12033
- Allowing the parent integration to be changed opens up possible security holes and has no clear use case at the moment. After a webhook record is created it should not be possible to change its parent integration.
- Had to partially duplicate the JSON Schema definition from the webhooks definition, as there is no proper composition technique available in the current version of JSON Schema.
refs https://github.com/TryGhost/Ghost/issues/12033
refs https://github.com/TryGhost/Ghost/issues/10567
- Creating a webhook without a valid parent integration leads to orphaned webhook records, which should never happen
- This scenario is only possible for non-integration authentication,
because when an integration is authenticated its id is automatically
assigned to the created webhook
refs #11729
- When ordering is done by fields from a relation (like a post's `meta_title` that comes from the `posts_meta` table), Bookshelf does not include those relations in the original query, which caused errors. To support this use case, added a mechanism to detect fields from a relation and load those relations into the query.
- Extended ordering to include the table name in the ordered field name. The table name information is needed to avoid using `tableName` within the pagination plugin, and opens the path to ordering by fields from tables other than the original one (e.g. ordering by posts_meta fields)
- Added a test case to check ordering on posts_meta fields
- Added support for "eager loading" relations. Allows to extend query builder object with joins to related tables,
which could be used in ordering (possibly in filtering later). Bookshelf does not support ordering/filtering by proprieties coming from relations, that's why this kind of plugin and query expansion is needed
- Added note about lack of support for child relations with same property names.
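
A minimal sketch of the resulting query expansion, assuming a knex query builder (the join and ordering shown are illustrative of the posts_meta example above):

```js
// Hedged sketch: join the relation's table so its columns are available to
// ORDER BY, then order by the related field
query
    .leftJoin('posts_meta', 'posts_meta.post_id', 'posts.id')
    .select('posts.*')
    .orderBy('posts_meta.meta_title', 'asc');
```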
closes #12038
Previously we were emitting changed events for _all_ settings, which would
cause any listeners for those to be triggered. This ensures that listeners are
only triggered if the corresponding setting _did_ in fact change.
no-issue
This change allows themes to collect member names on signup with the use of
the `data-members-name` attribute on an input element in the `data-members-form`
refs #11729
- Having a plugin allows us to reason about ordering better and gives way
to further extraction away from the core.
- Just like filtering, ordering falls into a similar category: it has an effect on knex's query builder object through additional statements - ORDER BY in this case.
- Because there were bugs associated with the use of permittedAttributes inside the `parseOrderOption` method, introduced a separate `orderAttributes` function which returns only those fields that are orderable (https://github.com/TryGhost/Ghost/issues/11729#issuecomment-685740836)
refs #11878
- When a password reset link is invalid, the previous messaging left the user
without clear information about why the reset failed and what they could do about it.
- Updated messaging around password reset tokens, including detection of
when a password token has an invalid structure, has expired, or has already
been used
This reverts commit 80af56b530.
- reverting temporarily so that all associated functionality can be merged in a single release
- creating email batch/recipient records without using them would cause inconsistent data
no-issue
* Added SingleUseTokenProvider to members service
This implements the TokenProvider interface required by members-api to
generate magic links. It handles checking if the token is expired and
pulls out any associated data.
Future improvements may include the email in the error for expired
tokens, which would make resending a token simpler.
* Passed SingleUseTokenProvider to members-api
This sets up the members-api module to use the new single use tokens
* Installed @tryghost/members-api@0.30.0
This includes the change to allow us to pass a token provider to the members-api
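
A hedged sketch of the token-provider shape implied above; the method names, constructor arguments, and error handling are assumptions:

```js
// Hypothetical shape of the provider handed to members-api: it creates
// tokens with associated data and validates them, rejecting expired ones
class SingleUseTokenProvider {
    constructor({model, validityPeriod}) {
        this.model = model; // the single-use token model
        this.validityPeriod = validityPeriod; // e.g. 24h in milliseconds
    }

    async create({data}) {
        const token = await this.model.add({data: JSON.stringify(data)});
        return token.get('token');
    }

    async validate(token) {
        const model = await this.model.findOne({token});
        if (!model) {
            throw new Error('Invalid token');
        }
        const createdAt = new Date(model.get('created_at')).getTime();
        if (Date.now() - createdAt > this.validityPeriod) {
            throw new Error('Token expired');
        }
        return JSON.parse(model.get('data'));
    }
}
```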
no-issue
This is a model for the tokens table, which handles the single use
aspect by customising the `findOne` method to automatically destroy the
model after reading from it
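
A hedged sketch of that customization on a Bookshelf model; the destroy wiring and base-class call are hypothetical:

```js
// Hypothetical sketch: deleting on read is what makes the token single-use
const SingleUseToken = ghostBookshelf.Model.extend({
    tableName: 'tokens'
}, {
    async findOne(data, options = {}) {
        const model = await ghostBookshelf.Model.findOne.call(this, data, options);
        if (model) {
            // assumed static destroy-by-id helper on the base model
            await this.destroy(Object.assign({}, options, {id: model.id}));
        }
        return model;
    }
});
```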
no-issue
refs: https://github.com/TryGhost/Members/commit/63942f03
The above commit updated all magic-link methods to be async, this change
ensures that we will handle a Promise return when it's updated in Ghost.
`await` has no effect on non Promise return values so it's safe to add
this now.
no-issue
After discussion with Matt, we decided that 192 bits for the token is a
good number, as it has no padding when base64 encoded and is more secure
than 128 bits, whilst still a manageable size.
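
A quick check of the padding claim (the generation call is illustrative, not necessarily how it is wired up here):

```js
// 192 bits = 24 bytes; 24 is divisible by 3, so base64 encoding produces
// exactly 32 characters with no '=' padding
const crypto = require('crypto');
const token = crypto.randomBytes(24).toString('base64');
console.log(token.length); // 32
console.log(token.endsWith('=')); // false
```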
no-issue
This is a table to store single use tokens for use in magic links, the
columns are as simple as possible at the moment and are designed as:
id - standard ObjectID like all of our tables
token - 128-bit base64 encoded string
data - arbitrary data to store against the token
created_at - timestamp to allow for expiry to be implemented for tokens
no issue
- In a recent change to the ownership verification email flow, we changed the FROM address of ownership verification mails to use the same email as the one being verified, aka the TO address.
- Email clients like Gmail flag such emails as possible spam
- The fix updates the `FROM` address to `noreply@domain.com`, where domain.com is the domain of the TO address
- In case the TO address is already noreply@domain.com, we use no-reply@domain.com to bypass the same-address restriction (see the sketch below)
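
A minimal sketch of the address derivation described above (the helper name is hypothetical):

```js
// Hedged sketch: derive FROM from the TO address's domain, avoiding an
// identical FROM/TO pair that email clients treat as suspicious
function fromAddressFor(to) {
    const domain = to.split('@')[1];
    const from = `noreply@${domain}`;
    return (to === from) ? `no-reply@${domain}` : from;
}
```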
no issue
- The new Portal config flag allows switching on Portal conditionally with config
- The dev experiment flag still works for enabling Portal
- The flag currently defaults to `false` as Portal is still a beta feature and switched off by default
- We expose it on the admin api config endpoint so that the Ghost-Admin client can use it to conditionally render Portal settings
no issue
- Members site data was not appending the blog domain to the default support address, which is `noreply`
- The change allows Portal to use default support address correctly