no issue
- issue reported via the forum https://forum.ghost.org/t/video-embed-break-page-on-mobile/44172
- due to historical issues we check both http/https and non-www/www
variants of a URL when matching an oembed provider, in case our
library's provider list is out of date. However, we checked the `http`
variant first, so a match could rewrite the original URL from `https` to
`http`, leading to potentially broken oembed fetch requests, as was the
case with http://odysee.com URLs
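For illustration, checking the secure variants first means a match can
never downgrade the scheme. A minimal sketch (the helper and the
`matches` call are hypothetical, not Ghost's actual implementation):

```js
// Build candidate URLs for provider matching, trying https before http
// so that a match never rewrites an https URL to http
function getUrlVariants(url) {
    const base = url.replace(/^https?:\/\/(www\.)?/, '');
    return [
        `https://www.${base}`,
        `https://${base}`,
        `http://www.${base}`,
        `http://${base}`
    ];
}

function findProviderMatch(url, providers) {
    for (const candidate of getUrlVariants(url)) {
        const provider = providers.find(p => p.matches(candidate));
        if (provider) {
            return {provider, url: candidate};
        }
    }
    return null;
}
```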
refs https://linear.app/tryghost/issue/BIZ-6/[wip]-update-segment-events
- With the removal of the `integration.added` event, we have no more model events remaining to listen to for our analytics
- Removing the function entirely seems like the simplest and most
straightforward approach
refs https://linear.app/tryghost/issue/BIZ-6/[wip]-update-segment-events
- Removed the `post.published`, `page.published`, and `theme.uploaded`
model event listeners from the segment service, as we're not actively
using those events.
- Updated tests to reflect the changes (from 4 events to 1 model event)
This logic is so simple it isn't worth having the indirection of another class.
This also removes the indirection of wrapped getters/setters, which is useful
because otherwise we need to update the wrapper with new methods each time
the underlying implementation is changed. There was a note about losing
the context of `this`, but I haven't found anywhere that the context is
lost.
closes https://github.com/TryGhost/Product/issues/4247
- bumps `@tryghost/kg-default-transforms` with a fix to our de-nesting
transform, so that `ListNode` is no longer ignored as a badly nested
child node (which can occur through copy/paste from other editors)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
The main changes are:
- Updating the pipeline to allow a background refresh of the
cache
- Removing the use of the EventAwareCacheWrapper for the posts public
cache
### Background refresh
This is just an initial implementation and, to be honest, it doesn't
sit right with me that this logic lives in the pipeline - I think it
should sit in the cache implementation itself, so that we can call out
to it with something like `cache.get(key, fetchData)` and the updates
can happen internally.
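For illustration, a minimal stale-while-revalidate sketch of that shape
(all names are hypothetical; this is not the current implementation):

```js
// The cache owns the refresh logic; callers just pass a fetcher
class BackgroundRefreshCache {
    constructor({refreshAfterMs}) {
        this.refreshAfterMs = refreshAfterMs;
        this.store = new Map();
    }

    async get(key, fetchData) {
        const entry = this.store.get(key);

        if (!entry) {
            // Cache miss: fetch synchronously and store the result
            const value = await fetchData();
            this.store.set(key, {value, storedAt: Date.now()});
            return value;
        }

        if (Date.now() - entry.storedAt > this.refreshAfterMs) {
            // Stale: serve the cached value now, refresh in the background
            fetchData()
                .then(value => this.store.set(key, {value, storedAt: Date.now()}))
                .catch(() => { /* keep serving the stale value on failure */ });
        }

        return entry.value;
    }
}
```

Callers would then do `await cache.get('posts-public', fetchPosts)` and
never deal with refresh timing themselves.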
The `cache-manager` project actually has a method like this called
`wrap` - but every time I've used it, it hangs, and debugging was a pain,
so I don't really trust it.
### EventAwareCacheWrapper
This is such a small amount of logic, I don't think it's worth creating
an entire wrapper for it, at least not a class-based one. I would be
happy to refactor this to use a `Proxy` too, so that we don't have to
add methods to it each time we want to change the underlying cache
implementation.
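For reference, a minimal sketch of the `Proxy` idea (the event name and
wiring are illustrative):

```js
// Forward all property access to the underlying cache, so new cache
// methods never require wrapper changes; only invalidation lives here
function makeEventAwareCache(cache, events) {
    events.on('site.changed', () => cache.reset());

    return new Proxy(cache, {
        get(target, prop) {
            const value = target[prop];
            // Bind methods to the underlying cache so `this` stays intact
            return typeof value === 'function' ? value.bind(target) : value;
        }
    });
}
```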
no issue
When creating the batches for sending an email, we log a message to
Sentry when there is an unexpected offset of more than 1% between the
email's recipient count and the batch recipients actually created. We
used a method that was not mapped in our Sentry proxy.
Location of error: ghost/email-service/lib/BatchSendingService.js:286
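For context, a minimal sketch of that kind of check, assuming the
standard `Sentry.captureMessage` API (variable names and the exact
threshold logic are illustrative, not the code at line 286):

```js
const Sentry = require('@sentry/node');

// Report when batch recipients deviate more than 1% from the
// recipient count stored on the email
if (Math.abs(storedCount - actualCount) > storedCount * 0.01) {
    Sentry.captureMessage(
        `Email recipient count mismatch: expected ${storedCount}, got ${actualCount}`
    );
}
```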
- we want to pass in the schema tables instead of cross-requiring them
from a different package, because the cross-require means the package
isn't standalone and moving the code structure around breaks the data
generator
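As a sketch of the shape of this change (the constructor and option
names are hypothetical):

```js
// Before: a cross-package require that breaks when files move
// const {tables} = require('../../core/server/data/schema');

// After: the schema tables are injected, keeping the package standalone
class DataGenerator {
    constructor({schemaTables}) {
        this.schemaTables = schemaTables;
    }
}

// The caller (e.g. Ghost core) passes its own schema in:
// new DataGenerator({schemaTables: require('./data/schema').tables});
```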
ref PROD-233
- Whether Docker is being used is now stored in the config files
- When running `yarn setup`, any existing Docker container will be
reset. Run with `-y` to skip the confirmation step.
- `yarn setup` will always init the database and generate fake data
- Increased the amount of default generated data to 100,000 members +
500 posts.
- Made lots of performance improvements in the data generator, so we
can generate the default data in ~170s
- we don't need to load this dependency if Node profiling isn't configured
- this might help fix random segfaults we've been seeing in CI, which
only started occurring after this dependency was added
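A minimal sketch of the lazy load, assuming the `@sentry/profiling-node`
integration and Ghost-style config keys (exact wiring is illustrative):

```js
const integrations = [];

// Only require the profiling package when profiling is enabled, so its
// native module is never loaded (and can't segfault) otherwise
if (config.get('sentry:profiling:enabled')) {
    const {ProfilingIntegration} = require('@sentry/profiling-node');
    integrations.push(new ProfilingIntegration());
}
```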
refs
[ARCH-33](https://linear.app/tryghost/issue/ARCH-33/fix-sentry-environment)
To ensure that we correctly identify the environment that data is being
sent to Sentry from, we use the `PRO_ENV` environment variable when it
is available. It is set to `production` in production and `staging` in
staging. If `PRO_ENV` is not set, we fall back to retrieving the
environment from config (`env`).
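In effect (a short sketch, assuming Ghost's nconf-style config
accessor):

```js
// Prefer PRO_ENV when present, otherwise fall back to the configured env
const environment = process.env.PRO_ENV || config.get('env');

Sentry.init({
    dsn: config.get('sentry:dsn'),
    environment
});
```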
refs ARCH-29
- Added Sentry Profiling to collect more detailed performance data on
the backend.
- This feature is opt-in behind a config. To enable profiling, first
enable tracing with `sentry.tracing.enabled: true`, then set
`sentry.profiling.enabled: true` and `sentry.profiling.sampleRate` to a
decimal number between 0 and 1.
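For example, a minimal config sketch enabling both (values are
illustrative, and a real config would also include the `dsn`):

```json
{
    "sentry": {
        "tracing": {
            "enabled": true,
            "sampleRate": 0.1
        },
        "profiling": {
            "enabled": true,
            "sampleRate": 0.5
        }
    }
}
```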
refs ARCH-21
- We currently have NewRelic set up for a few of our largest customers
for monitoring performance, but it is too expensive to enable across all
sites
- Sentry has similar (but simpler) performance monitoring tools for
keeping track of response times, which are available to us for free - we
just haven't configured them
- This PR sets up Sentry performance monitoring for API requests so we
have one place for monitoring errors + performance and can stay on top
of response times more easily
- Tracing is disabled by default, so there is no additional overhead
unless `sentry.tracing.enabled` is set to `true` in the site's config.
Additionally, `sentry.tracing.sampleRate` should be set to a decimal
value between 0 and 1. This value defaults to 0 to avoid accidentally
blowing through quota, and requires a value to be explicitly set in
order to send traces to Sentry.
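A minimal sketch of what the request wiring can look like with Sentry
v7 on an Express app (Ghost's actual integration code may differ):

```js
const Sentry = require('@sentry/node');

Sentry.init({
    dsn: config.get('sentry:dsn'),
    // Defaults to 0 so no traces are sent unless explicitly configured
    tracesSampleRate: config.get('sentry:tracing:enabled')
        ? config.get('sentry:tracing:sampleRate')
        : 0
});

// Creates a transaction per incoming request and records its timing
app.use(Sentry.Handlers.requestHandler());
app.use(Sentry.Handlers.tracingHandler());
```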
fixes PROD-290
- in order to receive webmentions (e.g. recommendations), Ghost sites
expose a /webmentions/receive endpoint. This endpoint is wrongly being
indexed by Google as a regular page, causing indexing errors in Google
Search Console
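One common way to stop a single endpoint from being indexed is an
`X-Robots-Tag: noindex` response header. A minimal Express-style sketch
of that approach (an illustration, not necessarily the exact fix):

```js
// Tell crawlers not to index the webmention endpoint while still serving it
router.use('/webmentions/receive', (req, res, next) => {
    res.set('X-Robots-Tag', 'noindex');
    next();
});
```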