This logic is so simple it isn't worth having the indirection of another class.
This also removes the indirection of wrapped getters/setters, which is useful
because otherwise we need to update the wrapper with new methods each time
the underlying implementation is changed. There was a note about losing
the context of `this`, but I haven't found anywhere that the context is
lost.
closes https://github.com/TryGhost/Product/issues/4247
- bumps `@tryghost/kg-default-transforms` with a fix to our de-nesting transform so that `ListNode` is no longer ignored as a badly nested child node, which can occur when copying/pasting from other editors
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
fixes PROD-201
The issue was caused by searching the 'name' field instead of the
'title' field.
This also improves performance when loading the posts (roughly as
sketched below):
- Makes sure no relations are loaded
- Only returns the fields we actually need
- Stops using limit=all and replaces it with network-based search
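For reference, the narrower query has roughly this shape (a sketch only; the browse call site, filter syntax, and field list are illustrative, not the exact code):

```js
// Sketch: search on the title field, skip relations, return only the fields
// we need, and page the results instead of using limit=all.
const posts = await api.posts.browse({
    filter: `title:~'${searchTerm}'`, // search 'title', not 'name'
    fields: 'id,title,url,status',    // only what the UI actually needs
    limit: 20                         // paginated, no limit=all
});
```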
refs https://github.com/TryGhost/Ghost/pull/19284
- the previous commit had only partially rolled back usage of the `errorInfo` variable, but it wasn't caught because the line in question has linting disabled
The main changes are:
- Updating the pipeline to allow for doing a background refresh of the
cache
- Removing the use of the EventAwareCacheWrapper for the posts public
cache
### Background refresh
This is just an initial implementation, and tbh it doesn't sit right
with me that the logic for this is in the pipeline - I think this should
sit in the cache implementation itself, and then we call out to it with
something like `cache.get(key, fetchData)`, and then the updates can
happen internally.
The `cache-manager` project actually has a method like this called
`wrap`, but every time I've used it, it hangs, and debugging was a pain,
so I don't really trust it.
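For reference, this is the rough shape I mean by `cache.get(key, fetchData)` with an internal background refresh. It's a sketch under assumed names, not the pipeline code:

```js
// Sketch: stale hits return immediately but trigger a refresh in the background.
class BackgroundRefreshCache {
    constructor({store = new Map(), ttlMs = 60000} = {}) {
        this.store = store;
        this.ttlMs = ttlMs;
    }

    async get(key, fetchData) {
        const entry = this.store.get(key);

        if (!entry) {
            const value = await fetchData();
            this.store.set(key, {value, cachedAt: Date.now()});
            return value;
        }

        if (Date.now() - entry.cachedAt > this.ttlMs) {
            // Stale: kick off a background refresh, serve the cached value now
            fetchData()
                .then((value) => this.store.set(key, {value, cachedAt: Date.now()}))
                .catch(() => {/* keep serving the stale value if the refresh fails */});
        }

        return entry.value;
    }
}
```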
### EventAwareCacheWrapper
This is such a small amount of logic that I don't think it's worth
creating an entire wrapper for it, at least not a class-based one. I
would be happy to refactor this to use a `Proxy` too (sketched below),
so that we don't have to add methods to it each time we want to change
the underlying cache implementation.
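Something like this is what I have in mind (a sketch only; the factory name and the reset wiring are illustrative). Binding the forwarded methods keeps the `this` context on the underlying cache:

```js
// Sketch: forward everything to the underlying cache via a Proxy, so new cache
// methods don't need matching wrapper methods.
function createEventAwareCache(cache, events, {resetOn = []} = {}) {
    for (const eventName of resetOn) {
        events.on(eventName, () => cache.reset());
    }

    return new Proxy(cache, {
        get(target, prop, receiver) {
            const value = Reflect.get(target, prop, receiver);
            // bind functions so `this` still points at the underlying cache
            return typeof value === 'function' ? value.bind(target) : value;
        }
    });
}
```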
no issue
The data generator created an offer for the free product. This caused an
error in the admin UI because it couldn't find the tier for the offer.
This fixes the issue in both the data generator and the admin UI.
no issue
The data generator ran out of memory when trying to generate fake data
for >2M members. This adds some improvements to make sure it doesn't run
out of memory.
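One way to keep memory bounded when generating millions of rows is to work in fixed-size batches; a rough sketch for illustration (the helper and table names are placeholders, not the actual generator code):

```js
// Sketch: insert fake members in fixed-size batches so millions of rows are
// never held in memory at once.
async function generateMembers(knex, totalMembers, generateFakeMember) {
    const BATCH_SIZE = 1000;

    for (let offset = 0; offset < totalMembers; offset += BATCH_SIZE) {
        const end = Math.min(offset + BATCH_SIZE, totalMembers);
        const batch = [];

        for (let i = offset; i < end; i++) {
            batch.push(generateFakeMember(i));
        }

        await knex('members').insert(batch); // flush, so the batch can be GC'd
    }
}
```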
---------
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
no issue
When creating the batches for sending an email, we log a message to
Sentry when there is an unexpected offset of 1% between creating the
email and actually creating the batch recipients. We used a method that
was not mapped in our Sentry proxy.
Location of error: ghost/email-service/lib/BatchSendingService.js:286
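For illustration, the proxy needs to expose whatever method we call on it; a sketch of that mapping (the module shape is assumed, not the actual shared Sentry proxy):

```js
// Sketch: a thin proxy over @sentry/node that only exposes mapped methods.
const Sentry = require('@sentry/node');

module.exports = {
    captureException(err) {
        Sentry.captureException(err);
    },
    // a method has to be mapped here before the batch sending service can
    // call it through the proxy
    captureMessage(message, context) {
        Sentry.captureMessage(message, context);
    }
};
```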
no issue
The members stats API is a lot slower than the normal members count
cache, and we use the members count cache in a lot more places, so we
can cache it more.
no issue
When we open the editor, we fire 4 requests to fetch member counts. This
commit fixes this by replacing those calls with the members count cache
service.
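A sketch of what that looks like from a consumer's side (the service and method names are assumptions, not the exact admin code):

```js
// Sketch: fetch counts through one shared, cached service instead of firing
// separate requests from each spot in the editor.
import Component from '@glimmer/component';
import {inject as service} from '@ember/service';

export default class PublishAudienceOptions extends Component {
    @service membersCountCache;

    async loadCounts() {
        // repeated calls for the same filter are answered from the cache
        this.freeCount = await this.membersCountCache.count('status:free');
        this.paidCount = await this.membersCountCache.count('status:paid');
    }
}
```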
no issue
- the members API endpoint by default adds `order by created_at` to the SQL queries, which creates unnecessary overhead when we only care about a count, because MySQL sorts the table before querying the single member
- specifying an explicit `order by id` overrides the default API behaviour (see the sketch below)
- locally, with 2 million members, the query times drop from >5sec to ~1sec
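For illustration, the change boils down to something like this on the query side (the Ember Data call site is illustrative):

```js
// Sketch: we only need meta.pagination.total, and an explicit order on the
// primary key stops MySQL from sorting the whole table on created_at first.
const result = await this.store.query('member', {
    filter,
    limit: 1,
    order: 'id ASC' // overrides the API's default `order by created_at`
});
const count = result.meta.pagination.total;
```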
no refs
- member_count endpoint was queried multiple times on load
- moved to collectively use the member stats service
- prevented multiple tasks from being queued up to return the count (see the sketch below)
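The de-duplication is essentially sharing a single in-flight request between callers; a minimal sketch (not the actual service code):

```js
// Sketch: concurrent callers get the same pending promise instead of each
// queueing their own count task.
let inflightCount = null;

function fetchMemberCount(fetchCount) {
    if (!inflightCount) {
        inflightCount = fetchCount().finally(() => {
            inflightCount = null;
        });
    }
    return inflightCount;
}
```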
- we want to pass in the schema tables instead of cross-requiring them
from a different package, because cross-requiring means the package
isn't standalone and moving the code structure around breaks the data
generator (sketched below)
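In practice that means injecting the tables at the call site; a sketch (the paths and the package entry point are assumptions):

```js
// Sketch: the data generator receives the schema tables it needs instead of
// requiring them from another package.
const {tables: schemaTables} = require('../../core/server/data/schema');
const DataGenerator = require('@tryghost/data-generator'); // assumed entry point

const generator = new DataGenerator({
    schemaTables // injected, so the package stays standalone
});
```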
ref PROD-233
- Stored in the config files whether Docker is being used
- When running `yarn setup`, any existing Docker container will be
reset. Run with `-y` to skip the confirmation step.
- `yarn setup` will always init the database and generate fake data
- Increased the amount of default generated data to 100,000 members + 500
posts.
- Made lots of performance improvements in the data generator so we can
generate the default data in ~170s
- we don't need to load this if we haven't configured Node profiling to occur
- this might help fix random segfaults we've been seeing in CI, which
only started occurring after this dependency was added
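A sketch of the lazy load (the config access and the exact export from `@sentry/profiling-node` depend on versions, so treat the names as assumptions):

```js
// Sketch: only require the native profiling module when profiling is enabled,
// so it never gets loaded (or has a chance to segfault) otherwise.
const config = require('../shared/config'); // assumed config helper/path
const integrations = [];
const sentryConfig = config.get('sentry') || {};

if (sentryConfig.profiling?.enabled === true) {
    const {ProfilingIntegration} = require('@sentry/profiling-node');
    integrations.push(new ProfilingIntegration());
}
```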
refs
[ARCH-33](https://linear.app/tryghost/issue/ARCH-33/fix-sentry-environment)
To ensure that we are correctly identifying the environment that data is
being sent to Sentry from, we can use the `PRO_ENV` environment variable
if it is available. This will be set to `production` in production and
`staging` in staging. If `PRO_ENV` is not available, we will fall back
to retrieving the environment from config (`env`).
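In code terms the fallback is just the following (sketch; `config` and `sentryConfig` stand in for the real config objects):

```js
// Sketch: prefer PRO_ENV when it's set, otherwise fall back to config's env.
const environment = process.env.PRO_ENV || config.get('env');

Sentry.init({
    dsn: sentryConfig.dsn,
    environment
});
```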
refs ARCH-29
- Added Sentry Profiling to collect more detailed performance data on
the backend.
- This feature is opt-in behind a config. To enable profiling, first
enable tracing with `sentry.tracing.enabled: true`, then set
`sentry.profiling.enabled: true` and `sentry.profiling.sampleRate` to a
decimal number between 0 and 1.
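For example, a site config enabling both (the DSN is a placeholder; the keys are the ones listed above):

```json
{
  "sentry": {
    "dsn": "https://<publicKey>@<host>/<projectId>",
    "tracing": {
      "enabled": true
    },
    "profiling": {
      "enabled": true,
      "sampleRate": 0.1
    }
  }
}
```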
refs ARCH-21
- We currently have NewRelic set up for a few of our largest customers
for monitoring performance, but it is too expensive to enable across all
sites
- Sentry has similar (but simpler) performance monitoring tools to keep
track of response times that are available to us for free, but we just
haven't configured them
- This PR sets up Sentry Performance monitoring for API requests so we
can have one place for monitoring errors + performance and stay on top
of response times more easily.
- Tracing is disabled by default, so there is no additional overhead
unless `sentry.tracing.enabled` is set to `true` in the site's config.
Additionally, `sentry.tracing.sampleRate` should be set to a decimal
value between 0 and 1. This value defaults to 0 to avoid accidentally
blowing through quota, and requires a value to be explicitly set in
order to send the traces to Sentry.
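For example, to opt in on a site (the DSN is a placeholder; the keys match the ones described above):

```json
{
  "sentry": {
    "dsn": "https://<publicKey>@<host>/<projectId>",
    "tracing": {
      "enabled": true,
      "sampleRate": 0.2
    }
  }
}
```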