refs f39d1d3aa3
- similar to the commit above, the JSON parser changed between Node 18
and Node 20, so the error message changed too
- we actually just want to check that the error is forwarded to the user, so
we can do that by getting the error message from JSON.parse itself and
checking against that, as sketched below
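A minimal sketch of that approach (the surrounding test wiring is omitted and the sample body is illustrative):

```js
const assert = require('node:assert/strict');

// Derive the expected message from the runtime's own JSON parser, so the
// assertion keeps working when the wording changes between Node versions.
function expectedParseErrorMessage(invalidBody) {
    try {
        JSON.parse(invalidBody);
    } catch (err) {
        return err.message;
    }
    throw new Error('expected JSON.parse to throw');
}

// In the real test the message would come from the API response body; a
// stand-in value keeps the sketch self-contained.
const invalidBody = '{"malformed": }';
const forwardedMessage = expectedParseErrorMessage(invalidBody);

assert.ok(forwardedMessage.includes(expectedParseErrorMessage(invalidBody)));
```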
ref https://linear.app/tryghost/issue/MOM-48
This required some structural changes to our NestJS setup so that we can mount
it on multiple parts of the Ghost express app.
We've used the RouterModule to allow adding submodules that are mounted on
different paths, and we've had to be explicit about the base path for each
module. We've also had to switch back to using the Module decorator, because
RouterModule doesn't work with DynamicModule definitions.
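A rough sketch of the resulting shape - module names and paths are illustrative, and in plain JS the decorator is applied manually:

```js
const {Module} = require('@nestjs/common');
const {RouterModule} = require('@nestjs/core');

// Illustrative submodules; the real ones carry their own controllers/providers
class AdminAPIModule {}
Module({})(AdminAPIModule);

class FrontendModule {}
Module({})(FrontendModule);

// Root module: each submodule gets an explicit base path within the Ghost app
class RootModule {}
Module({
    imports: [
        AdminAPIModule,
        FrontendModule,
        RouterModule.register([
            {path: 'ghost/api/admin', module: AdminAPIModule},
            {path: '/', module: FrontendModule}
        ])
    ]
})(RootModule);

module.exports = {RootModule};
```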
Now that the NestJS app has knowledge of the full path, we need to "reset" the
url & baseUrl when passing the request into NestJS so that it can correctly
match the path. This is probably needed for the frontend too, for subdirectories, but
that causes further issues - as this is still at the prototype stage, we'll look at that later.
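A minimal sketch of that hand-off (the mounting details are illustrative, not Ghost's actual wiring):

```js
// Hand requests off to the bootstrapped NestJS app's underlying express
// instance (e.g. nestApp.getHttpAdapter().getInstance()), resetting url/baseUrl
// first so Nest's router matches against the full original path rather than
// the subpath Ghost mounted it on.
function mountNestApp(nestExpressInstance) {
    return function nestMiddleware(req, res, next) {
        req.url = req.originalUrl;
        req.baseUrl = '';
        nestExpressInstance(req, res, next);
    };
}

// Hypothetical usage inside Ghost's express setup:
// ghostApp.use('/ghost/api/admin', mountNestApp(nestExpressInstance));
```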
Another issue is that NestJS replaces the express app instance with its own,
which isn't an issue for the Admin API (though we've fixed it anyway for
consistency), but did cause problems for the frontend, because the express app
is where view engine and directory information is stored.
The fix for this is to save a reference to the original Ghost express
application, and reattach it to the request if the request is not handled by Nest.
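Roughly, the shape of that fix (simplified):

```js
// Remember Ghost's own express app before Nest swaps in its instance, and
// reattach it when Nest doesn't handle the request, so frontend rendering
// still finds the view engine and views directory on the expected app object.
function preserveGhostApp(ghostApp, nestMiddleware) {
    return function (req, res, next) {
        nestMiddleware(req, res, function (err) {
            // Request fell through Nest unhandled - restore the original app
            req.app = ghostApp;
            res.app = ghostApp;
            next(err);
        });
    };
}
```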
Now that we have the Nest app mounted on the frontend, we're able to have it
handle the /.well-known/webfinger route with a proper controller, which is nice!
ref MOM-31
ref https://linear.app/tryghost/issue/MOM-31
We'll be building a lot of the new code for ActivityPub in Nest, so we'll need
to have it enabled in Ghost for that to work.
- The RSS cache has been around for a really long time, but I'm not sure it's useful
- We want to be able to determine whether it gets used much, and if not, we can remove it
ref 78311591d0
- updated tests to not click a button on the setup/done screen that is no longer shown
- fixed setup flow showing an alert bar due to not handling the `TransitionAborted` error that is thrown by the setup/done->dashboard redirect
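The guard involved looks roughly like this (a sketch only, not the exact change in Ghost Admin):

```js
// Swallow only the TransitionAborted error raised when the setup/done ->
// dashboard redirect cancels the in-flight transition, so it no longer
// surfaces as an alert bar; anything else is rethrown.
function redirectToDashboard(router) {
    return router.transitionTo('dashboard').catch((error) => {
        if (error && error.name !== 'TransitionAborted') {
            throw error;
        }
    });
}
```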
ref https://linear.app/tryghost/issue/KTLO-1/members-spam-signups
- Some customers are seeing many spammy signups ("hundreds a day") - our
hypothesis is that bots and/or email link checkers are able to sign up
by simply following the link in the email without even loading the page
in a browser.
- Currently, new members sign up by clicking a magic link in an email,
which is a simple GET request. When the user (or a bot) clicks that link, Ghost
creates the member and signs them in for the first time.
- This change, behind an alpha flag, requires a new member to click the
link in the email, which takes them to a new frontend route
`/confirm_signup/`, and then submit a form on that page which sends a
POST request to the server (sketched after this list). If JavaScript is
enabled, the form is submitted automatically, so the only change for the
user is an extra flash/redirect before being signed in and redirected to
the homepage.
- This change is behind the alpha flag `membersSpamPrevention` so we can
test it out on a few customers' sites and see if it helps reduce the
spam signups. With the flag off, the signup flow remains the same as
before.
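A sketch of the route shape behind the flag (paths and helper names are hypothetical, not Ghost's actual implementation):

```js
const express = require('express');

const router = express.Router();

// GET: the magic link now lands here instead of creating the member directly.
// The page renders a form that POSTs the token back; the inline script submits
// it automatically when JavaScript is enabled.
router.get('/confirm_signup/', (req, res) => {
    const token = String(req.query.token || '');
    // Real code must validate and HTML-escape the token before embedding it
    res.send(`
        <form method="POST" action="/confirm_signup/">
            <input type="hidden" name="token" value="${token}">
            <button type="submit">Confirm signup</button>
        </form>
        <script>document.forms[0].submit();</script>
    `);
});

// POST: only now is the member created and signed in - a bot or link checker
// that merely follows the GET link never completes the signup.
router.post('/confirm_signup/', express.urlencoded({extended: false}), async (req, res) => {
    // await createMemberAndSignIn(req.body.token, res); // hypothetical helper
    res.redirect('/');
});

module.exports = router;
```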
ref https://linear.app/tryghost/issue/ENG-790/remove-use-of-sub-queries-in-email-analytics
- the `delivered_at` column is typically entirely (or almost entirely) filled with values, meaning the `IS NOT NULL` query matches a huge number of rows that MySQL has to fetch from the index to count
- using `IS NULL` switches that behaviour around: it now matches very few rows, which testing has shown to be considerably quicker
- after switching to `IS NULL` the query returns an "undelivered" count rather than a "delivered" count; to keep the rest of the system's behaviour the same, we can calculate the delivered count by subtracting the query result from the total number of emails sent, which we can fetch using a very fast primary key lookup query on the `emails` table
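A sketch of the reshaped lookup with knex (table and column names as described above; not the exact production query):

```js
async function getDeliveredCount(knex, emailId) {
    // IS NULL matches very few rows, so this count is cheap
    const [{undelivered}] = await knex('email_recipients')
        .where('email_id', emailId)
        .whereNull('delivered_at')
        .count('id as undelivered');

    // Very fast primary key lookup for the total number of emails sent
    const email = await knex('emails')
        .select('email_count')
        .where('id', emailId)
        .first();

    // delivered = total sent - undelivered
    return email.email_count - Number(undelivered);
}
```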
closes https://linear.app/tryghost/issue/ENG-790/remove-use-of-sub-queries-in-email-analytics
Avoiding sub-queries means we don't have a database process tied up for longer than necessary, and we can more easily see if one of the queries is non-performant.
- extracted the count queries into separate queries and used the retrieved values in the final update query
- removed a query by moving the email open rate calculation into JS as we've already fetched the necessary data before that point
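For illustration, the open rate piece now looks roughly like this (rounding and guards are illustrative):

```js
// With the counts already fetched for the update query, the open rate can be
// derived in JS rather than with another SQL query.
function calculateOpenRate(openedCount, deliveredCount) {
    if (!deliveredCount) {
        return null;
    }
    return Math.round((openedCount / deliveredCount) * 100);
}
```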
ref https://linear.app/tryghost/issue/CFR-13
- enabled saving traces on browser test failure; this makes troubleshooting a lot easier
- updated handling in offers tests to ensure the tier has fully loaded in the UI (not just `networkidle`)
- updated the publishing test to check the publish button's reaction to the save response instead of relying on a 300ms pause
In general, our tests rely heavily on waiting for 'networkidle' - and sometimes just on raw timeouts - which does not scale well when running tests on CI. In particular, 'networkidle' does not help when we're waiting for React components' state updates to propagate and re-render. We should instead always look for the content that reflects the response and the resulting UI updates. This is something we should tackle on a larger scale.
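For example, with Playwright the pattern we're moving towards looks roughly like this (selectors are illustrative):

```js
const {expect} = require('@playwright/test');

// Instead of waitForLoadState('networkidle') or a raw timeout, wait for the
// piece of UI that can only appear once the save response has arrived and
// React has re-rendered.
async function publishAndWaitForConfirmation(page) {
    await page.getByRole('button', {name: 'Publish'}).click();
    await expect(page.getByText('Published')).toBeVisible();
}

// Traces on failure are enabled via Playwright config, e.g.:
// use: {trace: 'retain-on-failure'}
```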
ref ENG-774
ref https://linear.app/tryghost/issue/ENG-774
Staff Tokens will have both a `user` and an `apiKey` present on the
`loadedPermissions`.
The check here for `apiKey` was written when we could assume that an
`apiKey` was an Admin Integration - so it completely overwrote the
previous `allowed` list. When we added the concept of Staff Tokens,
this resulted in a privilege escalation.
This is a good lesson in not using proxies or indicators for data, as
changes elsewhere can invalidate them - if we had been specific and
checked the role of the current actor, we wouldn't have had this bug!
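One way to picture the distinction (simplified and hypothetical, not the actual permissions code):

```js
// Previously, the presence of loadedPermissions.apiKey was treated as "this is
// an Admin Integration" and the allowed list was overwritten wholesale, which
// over-granted once Staff Tokens carried both a user and an apiKey.
function resolveAllowed(loadedPermissions, userAllowed, apiKeyAllowed) {
    const isAdminIntegration = Boolean(loadedPermissions.apiKey) && !loadedPermissions.user;

    if (isAdminIntegration) {
        return apiKeyAllowed;
    }

    // Staff Tokens (user + apiKey) keep the user's own allowed list
    return userAllowed;
}
```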
ref https://linear.app/tryghost/issue/CFR-4/
- added request queueing middleware (express-queue) to handle high
request volume
- added new config option `optimization.requestQueue`
- added new config option `optimization.requestConcurrency`
- added logging of request queue depth - `req.queueDepth`
We've done a fair amount of investigation around improving Ghost's
resiliency to high request volume. While we believe this to be partly
due to database connection contention, it also seems Ghost gets
overwhelmed by the requests themselves. Implementing a simple queueing
system gives us a lever to control the volume of requests Ghost
is actually ingesting at any given time, and gives us options besides
simply increasing the database connection pool size.
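A sketch of the wiring (config plumbing simplified; it assumes express-queue exposes its internal queue instance on the returned middleware):

```js
const expressQueue = require('express-queue');

function queueRequests({concurrencyLimit}) {
    const queueMiddleware = expressQueue({
        activeLimit: concurrencyLimit, // optimization.requestConcurrency
        queuedLimit: -1                // unbounded queue
    });

    return function requestQueue(req, res, next) {
        // Record how deep the queue was when this request arrived so it can be
        // logged downstream as req.queueDepth
        req.queueDepth = queueMiddleware.queue.getLength();
        queueMiddleware(req, res, next);
    };
}

// Hypothetical usage, gated on the optimization.requestQueue config option:
// if (config.get('optimization:requestQueue')) {
//     app.use(queueRequests({concurrencyLimit: config.get('optimization:requestConcurrency')}));
// }
```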
---------
Co-authored-by: Michael Barrett <mike@ghost.org>
ref ENG-728
ref https://linear.app/tryghost/issue/ENG-728
This is not used anywhere and makes the code more complicated; removing it is a good
step toward simplifying permissions and pulling them out of the database.