- The RSS cache has lived for a really long time, but I'm not sure it's useful
- Want to be able to determine if it gets used much, and if not, then we can remove it
ref 78311591d0
- updated tests to not click a button on the setup/done screen that is no longer shown
- fixed setup flow showing an alert bar due to not handling the `TransitionAborted` error that is thrown by the setup/done->dashboard redirect
ref https://linear.app/tryghost/issue/KTLO-1/members-spam-signups
- Some customers are seeing many spammy signups ("hundreds a day") — our
hypothesis is that bots and/or email link checkers are able to signup by
simply following the link in the email without even loading the page in
a browser.
- Currently new members signup by clicking a magic link in an email,
which is a simple GET request. When the user (or a bot) clicks that link, Ghost
creates the member and signs them in for the first time.
- This change, behind an alpha flag, requires a new member to click the
link in the email, which takes them to a new frontend route `/confirm_signup/`, and then submit a form on that page, which sends a POST request to the
server. If JavaScript is enabled, the form is submitted
automatically, so the only change for the user is an extra flash/redirect
before being signed in and redirected to the homepage.
- This change is behind the alpha flag `membersSpamPrevention` so we can
test it out on a few customers' sites and see if it helps reduce the
spam signups. With the flag off, the signup flow remains the same as
before.
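A rough sketch of what the flagged flow could look like (Express-style; route wiring, template name and the members-service helper are illustrative, not Ghost's actual implementation):

```js
const express = require('express');
const membersService = require('../members'); // hypothetical import for illustration

const router = express.Router();

// The magic link is still a GET, but it now lands on a confirmation page
// instead of creating the member straight away.
router.get('/confirm_signup/', (req, res) => {
    // The rendered page contains a <form method="POST"> carrying the token;
    // a small inline script submits it automatically when JavaScript is
    // enabled, so real users just see a brief flash/redirect.
    res.render('confirm-signup', {token: req.query.token});
});

// Bots and email link-checkers that only follow GET links never reach this.
router.post('/confirm_signup/', async (req, res) => {
    await membersService.signupFromMagicLinkToken(req.body.token); // hypothetical helper
    res.redirect('/');
});
```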
ref https://linear.app/tryghost/issue/ENG-790/remove-use-of-sub-queries-in-email-analytics
- the `delivered_at` column is typically entirely (or nearly entirely) filled with values, meaning the `IS NOT NULL` query matches a huge number of rows that MySQL has to fetch from the index in order to count
- using `IS NULL` switches that behaviour around: it now matches very few rows, which testing has shown to be considerably quicker
- after switching to `IS NULL` the query returns an "undelivered" count rather than a "delivered" count. To keep the rest of the system behaviour the same, we can calculate the delivered count by subtracting the query result from the total number of emails sent, which we can fetch with a very fast primary-key lookup on the `emails` table (see the sketch below)
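A hedged sketch of the counting approach (knex-style, assuming `db` is a knex instance; the `email_count` column on `emails` is an assumption):

```js
async function getDeliveredCount(db, emailId) {
    // Count the small "undelivered" set instead of the huge "delivered" one.
    const [{count: undeliveredCount}] = await db('email_recipients')
        .where('email_id', emailId)
        .whereNull('delivered_at')
        .count('id as count');

    // Fetch the total number of recipients with a fast primary-key lookup...
    const email = await db('emails')
        .select('email_count') // assumed column holding the total sent
        .where('id', emailId)
        .first();

    // ...and derive the delivered count so the rest of the system is unchanged.
    return email.email_count - undeliveredCount;
}
```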
closes https://linear.app/tryghost/issue/ENG-790/remove-use-of-sub-queries-in-email-analytics
Avoiding sub queries means we don't have a process tied up for longer than necessary and we can more easily see if one of the queries is non-performant.
- extracted the count queries into separate queries and used the retrieved values in the final update query
- removed a query by moving the email open rate calculation into JS as we've already fetched the necessary data before that point
ref https://linear.app/tryghost/issue/CFR-13
- enabled saving traces on browser test failure; this makes troubleshooting a lot easier
- updated handling in offers tests to ensure the tier has fully loaded in the UI (not just `networkidle`)
- updated publishing test to examine the publish button reaction to the save action response instead of a 300ms pause
In general, our tests rely a lot on waiting for 'networkidle' - and sometimes just on raw timeouts - which does not scale well when running tests on CI. In particular, 'networkidle' does not work if we're expecting to see React components' state updates propagate and re-render. We should instead always look for the content that reflects the response and the resulting UI updates. This is something we should tackle on a larger scale.
ref ENG-774
ref https://linear.app/tryghost/issue/ENG-774
Staff Tokens will have both a `user` and an `apiKey` present on the
`loadedPermissions`.
The check here for `apiKey` was written when we could assume that an
`apiKey` was an Admin Integration - so it completely overwrote the
previous `allowed` list. When we added the concept of Staff Tokens -
this resulted in a privilege escalation.
This is a good lesson in not using proxies or indicators for data, as
changes elsewhere can invalidate them. If we had been specific and
checked the role of the current actor, we wouldn't have had this bug!
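A conceptual sketch of the difference (not Ghost's actual code):

```js
// The buggy shortcut treated "an apiKey is present" as a proxy for "the actor
// is an Admin Integration" and overwrote the allowed list. With Staff Tokens,
// `user` and `apiKey` are both present, so that escalated the user's
// permissions. Being explicit about the actor avoids the proxy entirely.
function resolveAllowedActions(loadedPermissions, userAllowed, apiKeyAllowed) {
    const isAdminIntegration = Boolean(loadedPermissions.apiKey) && !loadedPermissions.user;
    return isAdminIntegration ? apiKeyAllowed : userAllowed;
}
```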
ref https://linear.app/tryghost/issue/CFR-4/
- added request queueing middleware (express-queue) to handle high
request volume
- added new config option `optimization.requestQueue`
- added new config option `optimization.requestConcurrency`
- added logging of request queue depth - `req.queueDepth`
We've done a fair amount of investigation around improving Ghost's
resiliency to high request volume. While we believe this to be partly
due to database connection contention, it also seems Ghost gets
overwhelmed by the requests themselves. Implementing a simple queueing
system allows us a simple lever to change the volume of requests Ghost
is actually ingesting at any given time and gives us options besides
simply increasing database connection pool size.
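Roughly what the wiring looks like (config keys are from this change, shown with nconf-style access; the `express-queue` options are per its documentation, while the fallback limit and the queue-depth accessor are assumptions):

```js
const express = require('express');
const queue = require('express-queue');
const config = require('../shared/config'); // illustrative path

const app = express();

if (config.get('optimization:requestQueue')) {
    const queueMw = queue({
        activeLimit: config.get('optimization:requestConcurrency') || 100, // illustrative default
        queuedLimit: -1 // no cap on how many requests may wait
    });

    app.use(function requestQueue(req, res, next) {
        // record the queue depth seen by this request so it can be logged later
        req.queueDepth = queueMw.queue.getLength();
        queueMw(req, res, next);
    });
}
```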
---------
Co-authored-by: Michael Barrett <mike@ghost.org>
ref ENG-728
ref https://linear.app/tryghost/issue/ENG-728
This is not used anywhere and makes the code more complicated. Removing
it is a good step toward simplifying permissions and pulling them out of
the database.
ref ENG-728
ref https://linear.app/tryghost/issue/ENG-728
This is NOT a functionality change. The Post#permissible method unit
tests have been updated to pass `true` as `hasUserPermission` and we can
see that the permission functionality remains the same.
The permissible method of the post model is responsible for removing
permission based on the data that is being modified, but the permissions
module is set up to allow the permissible method to grant permission.
This means we call permissible even if the current actor doesn't have
permission, which results in code that is hard to understand and manage.
We are instead going to return early if an actor does not have
permission, which will allow permissible method signatures to be greatly
simplified (removing the need for the hasUserPermission,
hasApiKeyPermission & hasMemberPermission arguments).
fixes https://linear.app/tryghost/issue/ENG-746/http-500-responses-when-handle-image-sizes-middleware-hits-missing
- in the event a request comes in for a resized image, but the source
image does not exist, we return a rendered 404 page
- we do this because we pass the NotFoundError to `next`, which skips
over the static asset code where we return a plaintext 404
- also included a breaking test that ensures we go to the next
middleware without an error
ref ENG-761
ref https://linear.app/tryghost/issue/ENG-761
Creating these pipelines is expensive, and we don't want to do it
repeatedly for the same controller. Adding caching should reduce the
amount of time spent setting up pipelines for each usage of the `get`
helper.
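A minimal sketch of the kind of memoisation this implies (the cache-key shape is an assumption):

```js
const pipelineCache = new Map();

function getCachedPipeline(docName, methodName, buildPipeline) {
    const key = `${docName}.${methodName}`; // e.g. 'posts.browse'
    if (!pipelineCache.has(key)) {
        // Building the pipeline is the expensive part; do it once per controller method.
        pipelineCache.set(key, buildPipeline());
    }
    return pipelineCache.get(key);
}
```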
ref https://linear.app/tryghost/issue/IPC-66/onboarding-checklist-v1
- Adds a basic version of a new onboarding checklist behind the feature
flag, without incomplete/complete state logic
- Links to Design settings, Members screen and new post
- Opens a modal that we’ll use as the Share modal
---------
Co-authored-by: Daniël van der Winden <danielvanderwinden@ghost.org>
ref [ENG-747](https://linear.app/tryghost/issue/ENG-747/)
ref https://linear.app/tryghost/issue/ENG-747
H'okay - so what we're trying to do here is make get helper queries more
cacheable. The way we're doing that is by modifying the filter used when
we're trying to remove a single post from the query.
The idea is that we can remove that restriction on the filter, increase
the number of posts fetched by 1 and then filter the fetched posts back
down. This means that the same query, but filtering different posts,
will be updated to make _exactly_ the same query, and so share a cache!
We've been purposefully restrictive in the types of filters we
manipulate, so that we only deal with the simplest cases and the code is
easier to understand.
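A sketch of the idea with hypothetical helper names (the real change only handles the simplest filter shapes):

```js
// Hypothetical helper: drop an `id:-<post_id>` style exclusion from the filter
// (real filter handling is more careful than this).
function stripPostExclusion(filter, excludedPostId) {
    return (filter || '').replace(`+id:-${excludedPostId}`, '');
}

// Make the upstream query identical regardless of which post is being excluded.
function makeCacheableOptions(options, excludedPostId) {
    return {
        ...options,
        filter: stripPostExclusion(options.filter, excludedPostId),
        limit: options.limit + 1 // over-fetch by one to compensate for the post we'll drop
    };
}

// Re-apply the exclusion in JS once the (now shared, cacheable) query returns.
function trimFetchedPosts(posts, excludedPostId, limit) {
    return posts.filter(post => post.id !== excludedPostId).slice(0, limit);
}
```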
ref ENG-742
ref https://linear.app/tryghost/issue/ENG-742
We don't do any parsing of layouts in gscan, which means themes can be
uploaded which use non-existent files for their layout.
We can catch the error in the res.render call, and wrap it, just like we
do for missing templates (e.g. the StaticRoutesRouter)
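A hedged sketch of catching and wrapping that error (Express's `res.render` callback is standard; the wrapping shown mirrors the description above rather than Ghost's exact code):

```js
const errors = require('@tryghost/errors');

function renderWithWrappedErrors(res, next, view, locals) {
    res.render(view, locals, function renderCallback(err, html) {
        if (err) {
            // A template referencing a non-existent layout surfaces here; wrap
            // it like a missing template so the user gets a sensible error
            // page instead of an unhandled exception.
            return next(new errors.IncorrectUsageError({
                message: `Could not render "${view}": ${err.message}`,
                err
            }));
        }
        res.send(html);
    });
}
```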
closes ENG-632
- This listens to a new property in the `milestones` config to set a minimum milestone value that we wanna use the Slack notification service for
refs https://linear.app/tryghost/issue/ENG-750
- when adding a recommendation, we fetch the recommended site's metadata
- before this change, if the metadata fetch failed for some reason, we'd show an error and block the recommendation from being added
- after this change, we use fallback values if the metadata fails to fetch, instead of blocking the recommendation from being added. We use the site domain as the title and leave the rest empty (no favicon, no description)
- this change also means we no longer check on the publisher’s behalf whether a site exists. It’s then up to the publisher to make sure they don’t enter broken URLs
ref https://linear.app/tryghost/issue/ENG-721
- when changing the response to a `204` for requests with no cookie we lost the `Cache-Control: public, max-age: 0` header, which meant some cache systems weren't caching as efficiently as possible
ref https://linear.app/tryghost/issue/TRI-65
In the context of referrals, we want to understand how useful our
“Powered by Ghost” badges are.
There are currently a few places where the “Powered by Ghost” badge can
be rendered:
- in newsletters (enabled/disabled by publisher, on a newsletter basis)
- in Portal popups, e.g. member signup/signin/account settings
- in the footer of some themes, including Source & Casper
We're adding the query param `?via` to evaluate the usage of the badge in
newsletters.
no-issue
This adds the barebones of a NestJS application wired up to the Admin API
behind a feature flag, so that we can experiment with how to use Nest in the
context of Ghost.
closes ENG-730
closes https://linear.app/tryghost/issue/ENG-730/
We've updated the input serializer to parse the filter and respond
with an error if it cannot be parsed correctly.
Now that it's parsed, we can pass a mongo query object through the
stack, which will lend itself to better typing for this code, which is a
direction we want to go in anyway. We've had to update all the internal
usages of the `browse` method to use mongo query objects.
ref https://linear.app/tryghost/issue/ENG-740/http-500-error-when-image-processing-fails
refs 4aad551c72
- upon further discussion, we've decided it's better to throw an error
in this case because the uploaded image is deemed invalid and storing
it on the filesystem might cause more issues with resizing/further
processing in the future
- this commit implements that and alters the tests
fixes ENG-740
fixes https://linear.app/tryghost/issue/ENG-740/http-500-error-when-image-processing-fails
- in the event the image transform library throws (which can happen for
many reasons; sharp/libvips can come across a number of errors), we
currently return this as a HTTP 500 error to the user
- in this case, we should just try-catch the call and jump to the
non-processing flow where it just saves the original image
- also added breaking test
closes https://linear.app/tryghost/issue/ENG-721
ref https://linear.app/tryghost/issue/ENG-708
Comments-UI loads `/ghost/admin-frame/` in an iframe to check if a Staff User is authenticated in order to show moderation options. That iframe request loads an HTML page which in turn contains a script that fires off an API request attempting to fetch the logged-in user details, resulting in a 403 "error" showing up when not authenticated. In the vast majority of cases there will be no staff user authenticated, so lots of extra requests and "errors" are seen unnecessarily.
- adjusted the `/ghost/auth-frame/` endpoint to check if the request contains an Admin session cookie
- if it does, continue as before with rendering the HTML page so the script is loaded
- if it doesn't, return an empty 204 response avoiding the script request and subsequent 403-generating API request
- eliminates the 403 error being generated for all typical visitor traffic; the error should now only be seen when an Admin was previously logged in but their cookie is no longer valid (either from logging out, or from going past the 6-month validity period)
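A sketch of the adjusted endpoint (the session cookie name and handler shape are assumptions):

```js
const router = require('express').Router();

router.get('/auth-frame/', function authFrame(req, res) {
    const cookies = req.headers.cookie || '';

    if (!cookies.includes('ghost-admin-api-session')) { // assumed cookie name
        // No Admin session cookie: skip rendering the page, and therefore the
        // script that would fire the 403-generating API request.
        return res.sendStatus(204);
    }

    // A cookie is present (valid or stale): keep the previous behaviour.
    res.render('auth-frame');
});
```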
fixes https://github.com/TryGhost/Product/issues/4237
- this fixes the fact that we return a HTTP 500 response when the oembed
library receives an error, such as a 401 or 403
- includes special handling for cases where we want to return a slightly
different error message
- also adds unit tests for @tryghost/oembed-service package
fixes ENG-733
ref https://linear.app/tryghost/issue/ENG-733/handle-image-uploads-where-name-is-too-long
- filesystems usually have a filename length limit; e.g. on macOS it is
255 characters
- if a file is uploaded with a longer filename, we'll return a HTTP 500
- we shouldn't do this as it is user error, so we can just catch the
error code and return BadRequest
- this implements that, and adds a breaking test
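Roughly the handling described (`ENAMETOOLONG` is the error code Node surfaces for over-long filenames; the surrounding storage call is illustrative):

```js
const fs = require('fs/promises');
const errors = require('@tryghost/errors');

async function saveRaw(buffer, targetPath) {
    try {
        await fs.writeFile(targetPath, buffer);
    } catch (err) {
        if (err.code === 'ENAMETOOLONG') {
            // User error, not a server fault: surface a 400 rather than a 500.
            throw new errors.BadRequestError({
                message: 'The filename is too long. Please rename the file and try again.',
                err
            });
        }
        throw err;
    }
    return targetPath;
}
```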
- this version is written in TS, but was published a few months ago and
needs to be bumped here
- also updates a previous deep include into the library, which was
unnecessary anyway
refs INC-36
fixes https://github.com/TryGhost/Ghost/issues/19796
- The tiers-only paywall was incorrectly rendering "Free". Example:
"This post is for subscribers of the Free, Silver and Gold tiers only"
- Steps to reproduce the issue:
1. Create a post with public visibility, publish it
2. Then swap the visibility to specific tiers. The default selects all
paid tiers. Leave it like that
3. Update the post. The paywall shows Free, even though it should be
showing only the paid tiers
- This fix filters out the "free" tier when visibility is set to tiers,
before updating a Post or a Page. The fix includes bulk updates from the
list of Posts and Pages (right-click on a Post/Page > Change Access).
no issue
Bumps `Comments-UI` app version that contains an improvement to data loading:
- within the comments block we only use Admin auth to show moderation options on each displayed comment, but we were always pre-emptively loading the `admin-auth` frame and making the associated Admin API user request. That loading has now been deferred until at least one comment has been displayed, cutting down on unnecessary requests on each post view
closes ENG-627
We were using `cheerio` to parse+modify+serialize our rendered HTML to modify links for member attribution. Cheerio's serializer has a [long-standing issue](https://github.com/cheeriojs/cheerio/issues/720) (that we've [had to deal with before](https://github.com/TryGhost/SDK/issues/124)) where it replaces single-quote attributes with double-quote attributes. That was resulting in broken rendering when content used single-quotes such as in HTML cards that have JSON data inside a `data-` attribute or otherwise used single-quotes to avoid escaping double-quotes in an attribute value.
- swapped the implementation that uses `cheerio` for one that uses `html5parser` to tokenize the html string; from there we can loop over the tokens and replace the href attribute values in the original string without touching any other part of the content. Avoids a full parse+serialize process, which is both more costly and can result in unexpected content changes due to serializer opinions.
- fixes the quote change bug
- uses tokenization directly to avoid cost of building a full AST
- updated Content API Posts snapshot
- one of our fixtures has a missing closing tag which we're no longer "fixing" with a full parse+serialize step in the link replacer (keeps modified src closer to original and better matches behaviour elsewhere in the app / without member-attribution applied)
- the link replacer no longer converts `attr=""` to `attr` (these are equivalent in the HTML spec so no change in behaviour other than preserving the original source html)
- added a benchmark test file comparing the two implementations because the link replacer runs on render so it's used in a hot path
- new implementation has a 3x performance improvement
- the separate files with the old/new implementations have been cleaned up but I've left the benchmark test file in place for future reference
Benchmark results comparing implementations:
```
❯ node test/benchmark.js
LinkReplacer
├─ cheerio: 5.03K /s ±2.20%
├─ html5parser: 16.5K /s ±0.43%
Completed benchmark in 0.9976526670455933s
┌─────────────┬─────────┬────────────┬─────────┬───────┐
│ (index) │ percent │ iterations │ current │ max │
├─────────────┼─────────┼────────────┼─────────┼───────┤
│ cheerio │ '' │ '5.03K/s' │ 5037 │ 5037 │
│ html5parser │ '' │ '16.5K/s' │ 16534 │ 16534 │
└─────────────┴─────────┴────────────┴─────────┴───────┘
```
refs https://linear.app/tryghost/issue/ENG-712/
I don't think we ever need to respond with a 500 here; if the verify call
fails, we know that the token is unauthorized for use.
refs. https://linear.app/tryghost/issue/DES-122/bookmark-card-issues
This PR addresses the following content card related problems:
1. The design of the following cards is more self-contained, so it makes
more sense to use `px` for their font-sizes and spacings so they look the
same regardless of the theme. Of course, themes can still override these
values.
Updated cards to use `px` for font sizing:
- audio
- bookmark
- file
- product
2. So far header and signup cards had been using `rem` for font-sizes
and some sizing. This commit updates these to use `em` instead so that
it's consistent with all other cards.
3. The favicon is sometimes not available for bookmark cards. This PR also
fixes that by providing a default favicon for these cases.
no issue
Bumps `Comments-UI` app version that contains a few changes:
- comments data is now lazy-loaded with API requests being deferred until the comments block is scrolled into view, saving up-front visitor data usage as well as reducing server-load for page views where the comments are never seen
- comments data is now fetched from `/members/api/comments/{post_id}/` rather than using the post_id in the `filter` param to enable cache bucketing and cache invalidation
- `created_at` timestamp has been dropped from the initial comments data request so the results can be cached; on pagination requests the timestamp now uses the `created_at` data from the response so it remains consistent and can also be cached
- `order` param has been dropped from API requests as the API has been updated to include our default ordering
closes ENG-681
There's no need to provide an `order` param with every request in Comments-UI if the API has default ordering that matches our requirements. The order param makes logs more noisy/harder to read than they need to be so we want to get rid of it.
- modified comments API input serializer to add a default order param to the browse and replies endpoints when none is provided
- removed order param from the requests that Comments-UI makes
refs https://linear.app/tryghost/issue/ENG-676/
We want to make sure that we're not serving stale liked counts for
comments, which means we need to cache bust when they're liked/unliked.
Unfortunately this means we need to fetch the comment from the db so
that we have access to the post id.
refs https://linear.app/tryghost/issue/ENG-676/
This is the meat of the change and actually causes the cache to be
invalidated on adds and edits to the comments endpoints.
It doesn't currently include the liked/unliked actions, as we don't
have easy access to the post id from those endpoints.
refs https://linear.app/tryghost/issue/ENG-676/
This is pretty simple as we can reuse the existing browse method
on the CommentsController, but we need to add support for the post_id
option to the endpoint, for it to be added to the frame.
We also need to update the browse method to enforce the post_id on the
NQL filter. I initially tried this with string concatenation, but ran
into way too many bugs, so we're using a mongo transformer instead.
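A conceptual sketch of enforcing the post id with a mongo-style transform rather than string concatenation (helper name is hypothetical):

```js
// Whatever filter the client supplies, AND it with the required post id so a
// crafted filter can never widen the results to another post's comments.
function enforcePostId(clientMongoQuery, postId) {
    if (!clientMongoQuery) {
        return {post_id: postId};
    }
    return {$and: [clientMongoQuery, {post_id: postId}]};
}
```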
refs https://linear.app/tryghost/issue/ENG-670
We keep running into issues with a site's content not being correct,
and slow get helpers being the suspect - but it's difficult to prove.
The idea behind this is to give us concrete evidence, which will allow
us to diagnose the problem faster.
closes ENG-660
- added tagged template function to strip leading whitespace from our plaintext email strings without making the source file harder to read
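A minimal sketch of such a tagged template (Ghost's actual helper may differ in name and edge-case handling):

```js
// Strips leading indentation from every line, so the source can stay nicely
// indented without the plaintext email inheriting the whitespace.
function stripIndent(strings, ...values) {
    const joined = strings.reduce((out, str, i) => out + str + (i < values.length ? values[i] : ''), '');
    return joined
        .split('\n')
        .map(line => line.replace(/^[ \t]+/, ''))
        .join('\n')
        .trim();
}

// Usage: the template literal keeps the source readable; the output doesn't
// carry the indentation.
const siteTitle = 'My Site';
const plaintext = stripIndent`
    Hey there,

    You're now signed up to ${siteTitle}.
`;
```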
closes ENG-666
- the Admin API `GET /slugs/{type}/{slug}/` endpoint is used by Admin to check when a potential slug needs de-duping by adding a `-{x}` suffix. Most often this occurs when setting a draft post title
- the endpoint was returning a full-site cache invalidation header meaning hosting services could be blowing away their site caches and needlessly hurting performance because this endpoint is purely a read operation and makes no changes to the site
- updated the endpoint to return no cache invalidation header
no refs
- Offers browser tests were subject to a race condition. I'm guessing
this dates back to when we moved to Settings X (and React), as it seems
the url for the offer is not present on the first render of the page -
despite being returned in the `POST` request of the offer creation, the
component does a `GET` on render to get the link. This is now awaited.
- The Publishing timezone test also seemed to suffer from a race
condition. This is less sure of a fix as it's a much less frequent
failure. The date time picker input is now validated in the test before
continuing.
- Offers browser tests often timed out so the timeout has been moved to
90s for these tests.
- All tests were bumped to a 75s timeout as we would occasionally hit
the previous timeout.
closes ENG-608
- bumps Koenig rendering packages to include fix for HTML entities in HTML card content being decoded during rendering which could result in unexpected/broken output
ref ENG-607
- also added the option to show the monthly pricing by default during
signup
Co-authored-by: Simon Backx <simon@ghost.org>
Co-authored-by: Djordje Vlaisavljevic <dzvlais@gmail.com>
no issue
- to test if we can access Private Sites in Admin when the site is set
as private.
- the issue is, we have CORS issues that don't allow a cookie to be
passed via Admin when the site uses a custom domain.
- this generally does not affect self-hosters.
refs https://linear.app/tryghost/issue/ENG-600
- users need an option so they can perform actions like deleting users
without blowing up Ghost, as large dbs can OOM Node
refs INC-18
- release v5.69.0 introduced a data discrepancy in the free tier
visibility: the "free" tier visibility got out of sync with the
"portal_plans" setting due to a bug in the new Admin settings. The bug
was corrected in a patch release rolled out a few days later, v5.69.4
- however, the data discrepancy has not been corrected for all
customers; this data migration fixes the data discrepancy
refs https://linear.app/tryghost/issue/ENG-599
- Portal tests occasionally failed without clear cause on CI, possibly
due to GH runner region
- Portal tests never successfully ran locally for US-based IPs because
of a required prompt for Stripe Pass
refs https://linear.app/tryghost/issue/ENG-599
- member count is based on the cache which only updates ~every minute
- forced cache clear on manual member add/delete (not import)
- tests were failing based on the assumption that a new site that adds a
member has a nonzero member count, although the cache did not reflect
this quickly enough for the test to pass
Previously on a new site if you tried to publish a newsletter, it would
require at least one member. If you quickly added a member and tried to
send a newsletter, it would stop you saying you need at least one
member, requiring a browser refresh. This was a bug that is resolved
with this change, and it was also odd behaviour to try to write tests
around.
refs https://github.com/TryGhost/Toolbox/issues/501
- at this point, we have no real reason to keep this held back, as it was
never proven what the cause of the high CPU was, and it's just causing more
lockfile issues with the resolution
closes https://github.com/TryGhost/Product/issues/4234
- bumps Koenig packages to version containing a fix to our denest transform so it properly handles denesting element nodes inside list item nodes
ref https://github.com/TryGhost/Ghost/issues/12802
fixes DMA-27
- You can choose any support and newsletter email address in the UI
without verification (as long as your SMTP-server / Mailgun can send
from it)
- All emails will use the mail.from config as the default from
address:
- Staff notification emails no longer use the made up ghost@domain email
address
- Newsletters no longer default to 'noreply@domain'
- Member related emails (signin/signup/comment notifications...) will
continue to be sent from the chosen support address (Portal settings →
Account page), but will now default to the mail.from config instead of
noreply@domain if no support address is set.
refs ENG-599
- added refresh to publishing workflow test
- member count is cached and not updated immediately upon adding a
member, but a count >0 is required in order to send a newsletter (what
this test tests)
- we are looking at updating the cached count; until then, a refresh
will be a performance hit but allow this test to pass
no issue
- Renaming the configuration parameter created in this commit:
e0dae46dfc
- No functional difference; this change just makes the configuration a
bit more succinct
no issue
- To help debug potential causes of slow/aborted get helpers, it would
be cool to get more visibility into how Ghost handles database
connections, particularly if it has to spend a long time waiting to
acquire a new connection from the pool.
- Under the hood, knex uses a package called tarn
(https://github.com/Vincit/tarn.js/tree/3.0.2) to manage the connection
pool. Tarn provides some hooks for instrumentation, so we can use those
to get some basic visibility into the connection pool.
- This PR adds handling for creating, acquiring and releasing
connections from Tarn's connection pool which logs some basic metrics,
particularly the queue length and time it takes to acquire a connection.
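A hedged sketch of the instrumentation (tarn 3+ emits pool events and knex exposes the pool at `knex.client.pool`; the exact handlers and logging shown here are assumptions):

```js
const logging = require('@tryghost/logging');

// assume `knex` is the app's configured knex instance
const pool = knex.client.pool;
const acquireStartTimes = new Map();

pool.on('acquireRequest', (eventId) => {
    acquireStartTimes.set(eventId, Date.now());
    logging.info(`db pool: acquire requested, pending acquires: ${pool.numPendingAcquires()}`);
});

pool.on('acquireSuccess', (eventId) => {
    const startedAt = acquireStartTimes.get(eventId);
    acquireStartTimes.delete(eventId);
    if (startedAt) {
        logging.info(`db pool: connection acquired in ${Date.now() - startedAt}ms`);
    }
});
```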
no issue
- To help debug ABORTED_GET_HELPER errors, this PR adds Sentry
instrumentation to the get helpers
- It also adds the homepage, any pages/posts, the tag page, and the
author page to the list of transactions that will send to Sentry
refs TryGhost/Ghost#19559
- custom excerpts are truncated based on character length
- escaped characters added extra length but we didn't account for this,
resulting in poor truncation of excerpts
fixes PROD-102
When a newsletter has a sender_email stored in the database that Ghost
is not allowed to send from, we no longer return it as sender_email in
the API. Instead we return it as the sender_reply_to. That way the
expected behaviour is shown correctly in the frontend and the API result
also makes more sense.
In addition to that, when a change is made to a newsletter's reply_to
address we'll clear any invalid sender_email values in that newsletter.
That makes sure we can clear the sender_reply_to value instead of
keeping the current fallback to sender_email if that one is stored.
On top of that, this change correctly updates the browse endpoint to use
the newsletter service instead of directly using the model.
no issue
- we recently added a redirect to disable access to the preview endpoint for sent email-only posts but the condition was too broad and also disabled access to scheduled email-only posts
- adjusted so we only apply the /p/ -> /email/ redirect for sent posts
refs TryGhost/Product#4243
- Externally hosted images added in the editor were not populating the
`width` and `height` attributes, which could result in overflowing
images in certain email clients, particularly Outlook.
- This fix populates the `width` and `height` attributes in the editor
when adding an external image by URL or copy/pasting, which in turn
corrects the rendering in Outlook.
- Various other fixes and improvements to editor related packages, see
https://github.com/tryghost/koenig repo for more info
refs PROD-215 PROD-216
- Added toast notifications for successful sender and reply-to email
address change behind the flag, instead of the modal
- Updated email template for verifying new sender or reply-to email
closes https://github.com/TryGhost/Product/issues/4247
- bumps `@tryghost/kg-default-transforms` with a fix to our de-nesting transform so ListNode is no longer ignored as a badly nested child node which can occur through copy/paste from other editors
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
no issue
- issue reported via the forum https://forum.ghost.org/t/video-embed-break-page-on-mobile/44172
- due to historical issues we check against http/https and non-www/www URLs to match an oembed provider in case our library's provider list is out of date. However, we checked http first, which could match and then update the original URL to be `http` in place of `https`, leading to potentially broken oembed fetch requests, as was the case with http://odysee.com URLs
refs https://linear.app/tryghost/issue/BIZ-6/[wip]-update-segment-events
- With the removal of the `integration.added` event, we have no more model events remaining to listen to for our analytics
- Removing the function entirely seems the simpler and more straightforward approach
refs https://linear.app/tryghost/issue/BIZ-6/[wip]-update-segment-events
- Removed model events to listen to: `post.published`, `page.published`, and `theme.uploaded` in segment service, as we're not actively using those.
- Updated tests to reflect the changes (from 4 events to 1 model event)
This logic is so simple it isn't worth having the indirection of another class.
This also removes the indirection of wrapped getters/setters, which is useful
because otherwise we need to update the wrapper with new methods each time
the underlying implementation is changed. There was a note about losing the
context of `this`, but I haven't found anywhere that the context is lost.
The main changes are:
- Updating the pipeline to allow for doing a background refresh of the
cache
- Removing the use of the EventAwareCacheWrapper for the posts public
cache
### Background refresh
This is just an initial implementation, and tbh it doesn't sit right
with me that the logic for this is in the pipeline - I think this should
sit in the cache implementation itself, and then we call out to it with
something like: `cache.get(key, fetchData)` and then the updates can
happen internally.
The `cache-manager` project actually has a method like this called
`wrap` - but every time I've used it it hangs, and debugging was a pain,
so I don't really trust it.
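Something like this is the shape I have in mind (purely illustrative, not the current implementation):

```js
class BackgroundRefreshCache {
    constructor(store, {ttlMs}) {
        this.store = store; // any async key/value store with get/set
        this.ttlMs = ttlMs;
    }

    async get(key, fetchData) {
        const cached = await this.store.get(key);

        if (!cached) {
            const data = await fetchData();
            await this.store.set(key, {data, storedAt: Date.now()});
            return data;
        }

        if (Date.now() - cached.storedAt > this.ttlMs) {
            // Stale: serve the cached value now and refresh in the background.
            fetchData()
                .then(data => this.store.set(key, {data, storedAt: Date.now()}))
                .catch(() => { /* swallow background refresh errors */ });
        }

        return cached.data;
    }
}
```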
### EventAwareCacheWrapper
This is such a small amount of logic, I don't think it's worth creating
an entire wrapper for it, at least not a class based one. I would be
happy to refactor this to use a `Proxy` too, so that we don't have to
add methods to it each time we wanna change the underlying cache
implementation.
no issue
When creating the batches for sending an email, we log a message to
Sentry when there is an unexpected offset of 1% between creating the
email and actually creating the batch recipients. We used a method that
was not mapped in our Sentry proxy.
Location of error: ghost/email-service/lib/BatchSendingService.js:286
- we want to pass in the schema tables instead of cross-requiring them
from a different package, because cross-requiring means the package isn't
standalone and moving the code structure around breaks the data generator
ref PROD-233
- Stored in the config files whether Docker is used
- When running `yarn setup`, any existing Docker container will be
reset. Run with `-y` to skip the confirmation step.
- `yarn setup` will always init the database and generate fake data
- Increased amount of default generated data to 100,000 members + 500
posts.
- Made lots of performance improvements in the data generator so we can
generate the default data in ±170s
- we don't need to load this if we haven't configured Node profiling to occur
- this might help fix random segfaults we've been seeing in CI, which
only started occurring after this dependency was added
refs
[ARCH-33](https://linear.app/tryghost/issue/ARCH-33/fix-sentry-environment)
To ensure that we are correctly identifying the environment that data is
being sent to Sentry from, we can use the `PRO_ENV` environment variable
if it is available. This will be set to `production` in production and
`staging` in staging. If `PRO_ENV` is not available, we will fall back
to retrieving the environment from config (`env`)
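Roughly what the environment selection amounts to (config access is shown nconf-style and the `sentry:dsn` key is an assumption):

```js
const Sentry = require('@sentry/node');
const config = require('../shared/config'); // illustrative path

Sentry.init({
    dsn: config.get('sentry:dsn'), // assumed key
    environment: process.env.PRO_ENV || config.get('env')
});
```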
refs ARCH-29
- Added Sentry Profiling to collect more detailed performance data on
the backend.
- This feature is opt-in behind a config. To enable profiling, first
enable tracing with `sentry.tracing.enabled: true`, then set
`sentry.profiling.enabled: true` and `sentry.profiling.sampleRate` to a
decimal number between 0 and 1.
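A hedged sketch of how those config flags might map onto the Sentry SDK (`@sentry/node` v7-era API with `@sentry/profiling-node`; Ghost's exact wiring is assumed):

```js
const Sentry = require('@sentry/node');
const {ProfilingIntegration} = require('@sentry/profiling-node');
const config = require('../shared/config'); // illustrative path

const sentryConfig = config.get('sentry') || {};
const tracingEnabled = Boolean(sentryConfig.tracing && sentryConfig.tracing.enabled);
const profilingEnabled = tracingEnabled && Boolean(sentryConfig.profiling && sentryConfig.profiling.enabled);

Sentry.init({
    dsn: sentryConfig.dsn,
    integrations: profilingEnabled ? [new ProfilingIntegration()] : [],
    // Both sample rates default to 0 so nothing is sent unless explicitly configured.
    tracesSampleRate: tracingEnabled ? (sentryConfig.tracing.sampleRate || 0) : 0,
    profilesSampleRate: profilingEnabled ? (sentryConfig.profiling.sampleRate || 0) : 0
});
```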
refs ARCH-21
- We currently have NewRelic setup for a few of our largest customers
for monitoring performance, but it is too expensive to enable across all
sites
- Sentry has similar (but simpler) performance monitoring tools to keep
track of response times that are available to us for free, but we just
haven't configured them
- This PR sets up Sentry Performance monitoring for API requests so we
can have one place for monitoring errors + performance so we can stay on
top of response times more easily.
- Tracing is disabled by default, so there is no additional overhead
unless `sentry.tracing.enabled` is set to `true` in the site's config.
Additionally, `sentry.tracing.sampleRate` should be set to a decimal
value between 0 and 1. This value defaults to 0 to avoid accidentally
blowing through quota, and requires a value to explicitly be set in
order to send the traces to Sentry.
fixes PROD-290
- in order to receive webmentions (e.g. recommendations), Ghost sites
expose a /webmentions/receive endpoint. This endpoint is wrongly being
indexed by Google as a regular page, which causes indexing errors in
Google Search Console
refs TryGhost/Product#4125
This PR adds two new integration tests to ensure all our Koenig cards
are rendered properly after going through the EmailRenderer. Although we
have thorough tests for the cards themselves in the Koenig repo, the
EmailRenderer does post-processing on the rendered HTML, such as
inlining CSS, which can adversely impact the rendered output of our
cards in email clients (usually Outlook).
Since email newsletters are a core feature of Ghost, these bugs are
typically fairly urgent, and since it is email, they are also quite
difficult to troubleshoot and fix. These two tests are intended to
prevent bugs of this sort, which in the past have been created by
seemingly harmless changes like bumping dependencies that are used in
the EmailRenderer.
The idea is to create a 'Golden Post' which has at least 1 of every card
from Koenig, run that post through the EmailRenderer, and take a
snapshot of the rendered HTML. In the future, if we make any changes to
the EmailRenderer or the Koenig cards themselves, this will trigger us
to carefully consider the changes, and it provides an 'expected' output
to compare our changes against.
Additionally, the second test simply checks that all cards from
`kg-default-nodes` are included in the 'Golden Post'. This protects
against any new cards that we will add in the future — as soon as we add
them to Koenig and bump `kg-default-nodes` in Ghost, this test will
fail, prompting us to add the new card to the Golden Post and update the
snapshots.
We should also run the 'Golden Post' through a test in Litmus, which
allows us to visually inspect the rendered email across many different
email clients. Ideally we would create a process to review the output of
the 'Golden Post' in Litmus whenever we update the snapshot as well.
refs https://github.com/TryGhost/Product/issues/4209
- bumped Koenig packages
- `koenig-lexical` added nested editor TK support
- all packages dropped Node 16 support
- switched to using `isTKEnabled` prop and `<TKCountPlugin>`
Co-authored-by: Kevin Ansfield <kevin@lookingsideways.co.uk>
fixes PROD-61
This adds a new default plan setting. It defaults to yearly, which is
the current default selected interval in Portal.
Behind the new portal improvements feature flag, the default plan can be
changed. It will also change automatically if the available intervals
are changed.
This PR also wires up passing the new setting to the Portal preview.