Regression tests are not written to ensure code coverage. In fact, they are barely written at all anymore; instead we write unit and e2e tests. Because of this, coverage is constantly dropping as the codebase grows, which causes significant pain and suffering for developers and slows down development.
refs https://github.com/TryGhost/Team/issues/3269
- fixes continuation of the list sequence when a non-list node separates list nodes, bringing the rendered output in line with what the editor shows
- this will allow us to see which set of tests consumes the most
time in CI
- in order to split apart the commands, I've had to override the
coverage thresholds for integration+regression tests to keep
c8 happy
- also sprinkled some more labels into the workflows to make them
clearer to read
refs https://github.com/TryGhost/Team/issues/3167
- This is scaffolding for collections API. Contains wiring for service wrapper, e2e test, and a browse endpoint
- Adds a basic implementation of the GET /collections endpoint to build upon
- Note: there are no permissions in this version; they will be added in later stages of development along with migrations etc
no issue
- the lexical lib file makes use of `jsdom`, but there was no explicit dependency for it in `package.json`, meaning we were relying on it being pulled in incidentally through another package, which is brittle
These versions use the latest version of @tryghost/errors, which uses
the correct import for @stdlib/utils-copy. This should hopefully stop
missing module errors when running locally.
refs 9d104c8511
- we've seen recurring instances where Ghost will hog memory after image
uploads
- we use `jemalloc` to try and help this, but it still seems to happen
- according to the sharp thread referenced in my commit above, memory
fragmentation can also be helped by reducing the concurrency within
sharp (see the sketch below)
- this is a bit of an experiment and we can revert if it causes issues
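A minimal sketch of what reducing sharp's concurrency could look like; the value and where the call lives are assumptions, not the exact change:

```js
// Hedged sketch: lower libvips' thread pool to reduce memory fragmentation.
// The value (1) and the placement are assumptions, not the exact change made here.
const sharp = require('sharp');

// sharp.concurrency(n) sets how many threads libvips uses for image processing;
// fewer threads means less parallel allocation and less heap fragmentation.
sharp.concurrency(1);
```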
refs 27e4523aec
- we no longer use `oembed-parser`, so we can remove it from
package.json
- also pins the `@extractus/oembed-extractor` package and adds it into
`@tryghost/oembed-service` where it was missing
refs TryGhost/Ghost#16048
- When attempting to embed a YouTube video that has had embedding
disabled by its owner/author, Ghost displayed a generic error message
that didn't indicate the reason for the failed embed.
- This change updates the error message when YouTube (or any provider)
returns 401 Unauthorized to indicate that the owner of the resource has
explicitly disabled embedding (sketched below).
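A hedged sketch of the idea only, not Ghost's actual oembed code; the helper parameter and the message wording are illustrative:

```js
// Hedged sketch: surface a clearer message when a provider rejects embedding.
// fetchOembedData is any function that throws a got-style error on a non-2xx
// response; the message wording here is illustrative, not the shipped copy.
const errors = require('@tryghost/errors');

async function getEmbed(url, fetchOembedData) {
    try {
        return await fetchOembedData(url);
    } catch (err) {
        if (err.response && err.response.statusCode === 401) {
            // The provider explicitly refuses to let this resource be embedded
            throw new errors.ValidationError({
                message: 'Embedding has been disabled by the owner of this resource'
            });
        }
        throw err;
    }
}

module.exports = getEmbed;
```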
- Updates the prepare script in the top level to run prepare on packages, so
that packages can be built when running `yarn`
- Updates the build script in ghost/core to run build on packages, so that
packages are built before being monobundled
- Updates monobundle to be a dependency and use the new TryGhost repo, which
includes some minor fixes and improvements, such as supporting devDeps
- Updates the GitHub workflows to run the build command in the top level
directory rather than ghost/core so that other packages are built, too.
fixes https://github.com/TryGhost/Team/issues/2385
The Sentry version has been locked to v7.11.1 for some time because Sentry still used a legacy Node feature called domains. Due to a bug or change in Node 16+, those domains broke the handling of uncaught promise exceptions, so Ghost crashed when a promise exception wasn't caught. That shouldn't happen, because we have a global uncaught exception handler.
Luckily Sentry switched to AsyncLocalStorage in v7.48.0. This fixes the issue as demonstrated in c0cd62184c
refs TryGhost/Team#2904
### <samp>🤖 Generated by Copilot at b3f5423</samp>
This pull request adds support for multiple formats of snippet content,
especially the `lexical` format, to the Ghost CMS. It modifies the
snippets API, model, and test files to handle the format conversion,
filtering, and serialization of snippets.
no issue
Rough prototype only, current limitations:
- **No persistence**. Docs are in-memory only; YJS state will be lost on server restart, although it could be re-populated by clients if they reconnect without closing their local doc (needs testing/investigation)
- **No tie-in with saved lexical state**. Lexical state is updated in the post model via normal API requests from Admin, which can mean the multiplayer doc and the saved lexical state get out of sync, but there's no detection/indication of that at present. It will also trigger the "someone else is editing" errors because multiplayer doesn't yet override the default post update collision detection
- **New posts don't start in multiplayer**. New posts don't have an ID and so can't have a corresponding YJS doc; after the initial save we don't transition to multiplayer because the React component in Ember doesn't re-render on prop changes yet
- **No tests**. Experimental code just to get something working and help answer questions for what's next
Changes:
- added `lexicalMultiplayer` labs flag
- updated `<KoenigLexicalEditor>` to pass through the required `<KoenigComposer>` props for multiplayer when enabled
- added `lexical-multiplayer` service
- `init()` called during boot, used to set up the `enable()` and `disable()` methods so the flag can be toggled without restarts
- when enabled it adds `upgrade` request handling to the base Ghost server (see the sketch after this list)
- returns 404 if the URL doesn't match `/ghost/api/admin/posts/multiplayer/*`
- returns 401 if a valid session cookie is not present
- if everything is good, hands off to code in `y-websocket.js` that handles YJS doc creation, awareness, keepalive, etc
- uses doc names in the format `${post.id}/${docId}` where `docId` is `main` for the primary document and a GUID for any sub-documents like captions and nested editors in cards
- updated `SettingsBREADService` to check if the `labs` setting has changed and enable/disable the `lexical-multiplayer` service as needed, so the websockets server can be started and shut down when toggling without requiring a restart
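A rough sketch of the upgrade handling described above; the session check and the YJS hand-off are stubs standing in for the real helpers:

```js
// Rough sketch of the flow described above; helper bodies are stand-ins only.
const http = require('http');

const server = http.createServer(); // stand-in for the base Ghost server

async function hasValidSession(req) {
    return Boolean(req.headers.cookie); // placeholder for real session cookie validation
}

function handleYjsUpgrade(req, socket, head) {
    // in the real code this is where y-websocket.js handles doc creation,
    // awareness, keepalive, etc.
}

server.on('upgrade', async (req, socket, head) => {
    // 404 for anything outside the multiplayer namespace
    if (!req.url.startsWith('/ghost/api/admin/posts/multiplayer/')) {
        socket.write('HTTP/1.1 404 Not Found\r\n\r\n');
        return socket.destroy();
    }

    // 401 unless a valid Admin session cookie is present
    if (!(await hasValidSession(req))) {
        socket.write('HTTP/1.1 401 Unauthorized\r\n\r\n');
        return socket.destroy();
    }

    handleYjsUpgrade(req, socket, head);
});
```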
refs https://github.com/TryGhost/Team/issues/2691
- This bump changes the "sentEmailCount" method to a more descriptive "assertSentEmailCount" and adds chaining to this method.
refs TryGhost/Team#2691
- The bump adds the ability to take email HTML/text snapshots with dynamic content. The breaking change is the separate "matchPlaintextSnapshot" method, extracted out of "matchMetadataSnapshot", to handle dynamic content in the "text" part of the sent email (sketched below).
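A hedged sketch of the renamed/extracted assertions; how `emailMockReceiver` is obtained from the test's mocking setup is omitted and assumed:

```js
// Hedged sketch; obtaining `emailMockReceiver` from the mocking setup is omitted.
function assertWelcomeEmail(emailMockReceiver) {
    emailMockReceiver.assertSentEmailCount(1);  // renamed from sentEmailCount, now chainable
    emailMockReceiver.matchHTMLSnapshot();
    emailMockReceiver.matchPlaintextSnapshot(); // extracted out of matchMetadataSnapshot
    emailMockReceiver.matchMetadataSnapshot();  // no longer covers the plaintext body
}

module.exports = assertWelcomeEmail;
```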
- we previously used `@stdlib/utils` instead of the child package
`@stdlib/copy`, which is a lot smaller and contains our only use of
the parent
- this saves 140+MB of dependencies
- we keep ending up with multiple versions of the dependency in our tree,
and it's causing problems when comparing instances
- the workaround I'm implementing for now is to bump the package
everywhere and set a resolution so we only have 1 shared instance
- hopefully we can come up with a better method down the line
refs https://github.com/TryGhost/Team/issues/2845
We needed to update the HTML output of the cards to include images for both light
and dark mode, and then we've used CSS to show/hide them
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
no issue
This change moves the way that we enable developer experiments in the browser-based test suite from being an environment variable into the config file for that environment. Just a tidy-up, no functional change.
- by default, got retries failed requests, which is causing issues in
tests because we've disabled the network with `nock`
- this is causing huge idle time because got pauses before retrying
- this change disables the retries if we're running tests, so things are
more stable
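A minimal sketch of how the retries could be switched off under test; the environment check and the option shape are assumptions, not the exact change:

```js
// Minimal sketch, assuming tests are detected via NODE_ENV; not the exact change.
const got = require('got');

const gotOptions = {};

if (process.env.NODE_ENV && process.env.NODE_ENV.startsWith('test')) {
    gotOptions.retry = 0; // retry: 0 disables got's automatic retry/backoff behaviour
}

// All external requests go through this pre-configured instance
const externalRequest = got.extend(gotOptions);

module.exports = externalRequest;
```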
refs https://github.com/TryGhost/Ghost/issues/15502
- this adds the scaffolding for enabling i18n translations within Ghost
core
- also adds `yarn translate:ghost` as an option to the `i18n` package
- the locale is currently hardcoded to `en` so we don't utilize the
translations until we're ready
fixes https://github.com/TryGhost/Team/issues/2611
The old email flow is no longer used since we introduced the email stability flow. This commit removes the related code and tests. The general test coverage decreased a bit as a result, because the old email flow probably had a high test coverage. The new flow is in separate packages, so it couldn't contribute to a higher test coverage (but it does have 100% unit test coverage).
refs 6460522352
- these were both bumped around the time that Playwright browser tests
started going awry, so they may be the cause of some random failures
refs https://github.com/TryGhost/Toolbox/issues/523
- This is a first-pass media inliner that goes through all posts and inlines media from specified domains
- As a working copy, the inliner looks for image content from Revue and Substack
refs https://github.com/TryGhost/Toolbox/issues/523
- We need to process generic files like .pdf, .md, etc. at the importer "handler" stage.
- The media handler can process more than just media files after a few refactorings. Renaming it to a generic content file handler indicates it can process any type of content file.
- In the future we can substitute the built-in "images" import handler with this generic content file handler.
refs https://github.com/TryGhost/Toolbox/issues/523
- When a zip file is imported into Ghost we need to recognize and process media files with the following extensions:
".mp4", ".webm", ".ogv", ".mp3", ".wav", ".ogg", ".m4a"
- The media files can come from a "media" or "content/media" folder inside the zip file (see the sketch below)
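An illustrative sketch of the matching rules above; the constant and function names are not the importer's real identifiers:

```js
// Illustrative sketch only; names are not the importer's real identifiers.
const path = require('path');

const MEDIA_EXTENSIONS = ['.mp4', '.webm', '.ogv', '.mp3', '.wav', '.ogg', '.m4a'];
const MEDIA_DIRECTORIES = ['media', 'content/media'];

function isMediaFile(filePath) {
    const extension = path.extname(filePath).toLowerCase();
    const directory = path.dirname(filePath);

    // Accept a recognized extension located in a "media" or "content/media" folder
    return MEDIA_EXTENSIONS.includes(extension) &&
        MEDIA_DIRECTORIES.some(dir => directory === dir || directory.endsWith(`/${dir}`));
}

module.exports = isMediaFile;
```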
refs https://github.com/TryGhost/Toolbox/issues/522
- Using a generic resource repository for caching purposes didn't prove to be effective, as the code was moved to API-level caching. There's no need to introduce an extra abstraction level for simple calls like findPage when there's no extra logic to them.
refs https://github.com/TryGhost/Toolbox/issues/522
- When caching responses, the posts cache can become stale within the TTL period and serve stale responses to shared caches.
- Having a full cache reset on the 'site.changed' event keeps cached content evergreen, reducing the risk of caching stale content in shared caches (see the sketch below)
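A minimal sketch of the reset wiring, with stand-ins for Ghost's internal events bus and cache adapter:

```js
// Minimal sketch; `events` and `postsPublicCache` stand in for the real internals.
const EventEmitter = require('events');

const events = new EventEmitter(); // stand-in for Ghost's events bus
const postsPublicCache = new Map(); // stand-in for the cache adapter

events.on('site.changed', () => {
    // Drop everything rather than invalidating selectively, so shared caches
    // can never be handed stale content within the TTL window.
    postsPublicCache.clear();
});
```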
- there's a weird situation when we have mixed versions of the
dependency because different libraries try to compare instances
- this brings the usage up to 1.2.21 so we can fix the build for now
Refs TryGhost/Team#2459
- upgraded got from v9.6.0 to v11.8.6 to support following redirects (and
other fixes)
- got v12+ requires ESM, so we do not want to upgrade further at this
time
- required changes to a few libraries that use externalRequests
- mention discovery service tests updated to test that redirects are followed
refs
https://www.notion.so/ghost/Marketing-Milestone-email-campaigns-1d2c9dee3cfa4029863edb16092ad5c4?pvs=4
- Added a `slack-notifications` repository which handles sending Slack
messages to a URL as defined in our Ghost(Pro) config (also includes a
global switch to disable the feature if needed) and listens to
`MilestoneCreatedEvents`.
- Added a `slack-notification` service which listens to the events on
boot.
- In order to have access to further information, such as the reason why
a milestone email hasn't been sent, or the current ARR or member value
for comparison with the achieved milestone, I added a `meta` object to the
`MilestoneCreatedEvent` which is then accessible to the event
subscriber. This avoids making further requests to the DB, as we need
this information in relation to the event that occurred.
---------
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
refs https://github.com/TryGhost/Team/issues/2561
- added simple socket-io implementation to Ghost server
- added alpha flag for websockets
- added route in admin to test websockets using a simple counter stored in server local memory (refreshes on reboot)
refs https://github.com/TryGhost/Toolbox/issues/522
- The caching rules for Content API resources like Tags / Posts / Pages / Authors are the same, so it makes sense to create a common repository which takes a model as a parameter and performs the queries at that level instead of implementing a repository per model (see the sketch below).
- With this refactor, adding Posts Content API caching will be a much simpler change.
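A hedged sketch of the "repository takes a model" shape described above; the class and method names are illustrative, not the real module's API:

```js
// Hedged sketch; names are illustrative, not the real module's API.
class PublicResourcesRepository {
    /**
     * @param {Object} deps
     * @param {Object} deps.Model - any Bookshelf-style model exposing findPage()
     * @param {Object} deps.cache - a cache adapter with get()/set()
     */
    constructor({Model, cache}) {
        this.Model = Model;
        this.cache = cache;
    }

    async getAll(options = {}) {
        const cacheKey = `getAll:${JSON.stringify(options)}`;

        const cached = await this.cache.get(cacheKey);
        if (cached) {
            return cached;
        }

        const result = await this.Model.findPage(options);
        await this.cache.set(cacheKey, result);

        return result;
    }
}

module.exports = PublicResourcesRepository;
```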
no issue
- The way we're going to implement milestones diverged from the original idea of handling email sending within the milestone-emails package, as we'll be sending events instead and will utilise the StaffService to listen to them and send the emails
- This renames the package as well as the service in core itself and all relevant tests
closes https://github.com/TryGhost/Team/issues/2558
- bumped `kg-lexical` packages so we're working with latest suite of default nodes and renderer
- added a `render()` method directly to our `lexicalLib` object
- allows us to pass through all of Ghost's config for image transforms etc in one place rather than every time we want to render something
refs https://github.com/TryGhost/Toolbox/issues/515
- This implementation allows using a Redis cluster as a caching adapter. The cache adapter can be configured through the same adapter configuration interface as the others. It accepts the following config values (see the example after this list):
- "ttl" - time in SECONDS for a stored entity to live in the Redis cache
- "keyPrefix" - special cache key prefix to use with stored entities
- "host" / "port" / "password" / "clusterConfig" - Redis instance specific configs
- Set test coverage to a non-standard 75% because the code mostly consists of glue code that would require unnecessary Redis server mocking and would be a bad ROI. This module has been used in production for a long time already, so it can be considered pretty stable.
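An illustrative example of what such an adapter configuration could look like, using the keys listed above; the adapter name, nesting, and values are assumptions:

```js
// Illustrative shape only; adapter/cache names and values are assumptions.
const config = {
    adapters: {
        cache: {
            Redis: {
                ttl: 3600,           // seconds a stored entity lives in the Redis cache
                keyPrefix: 'ghost:', // prefix applied to every stored key
                host: '127.0.0.1',
                port: 6379,
                password: 'secret',
                clusterConfig: null  // or the Redis cluster node definitions
            }
        }
    }
};

module.exports = config;
```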
refs https://github.com/TryGhost/Toolbox/issues/515
- There are a lot of repeated, cacheable tag-related queries coming from
{{get}} helpers in themes.
- Having a repository layer deal with a very specific type of query allows
adding extra functionality, like caching, on top of the database query
- This commit is wiring code that adds a default in-memory cache to
all db queries. Note, it lasts forever and has no "reset" listeners. The
production cache is meant to have a short time-to-live (TTL), which removes
the need to keep the cache always fresh.
- Kept the cache key shortened. Without a "context" and other non-model options, the cache key can store more variations of queries. For example, there are no member-specific or integration-specific query results, so having those in the cache key would only partition the cache and use up more memory.
fixes https://github.com/TryGhost/Team/issues/481
This change fixes an issue when multiple images with the same name are
uploaded in parallel. The current system does not guarantee that the
original filename is stored under NAME+`_o`, because the uploads of the
original file and the resized file happen in parallel.
Solution:
- Wait for the storage of the resized image (= the image without the _o
suffix) before storing the original file.
- When that is stored, use the generated file name of the stored image
to build the filename with the _o suffix. This way, the names will always
match and we don't risk the two files ending up with different number suffixes (see the sketch below).
We'll also set the `targetDir` argument when saving the file, to avoid
storing the original file in a different directory (when uploading a
file around midnight both files could be stored in 2023/01 and 2023/02).
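A hedged sketch of the ordering described in the solution above; the helper method names are illustrative, not Ghost's real storage adapter API:

```js
// Hedged sketch; getTargetDir/storeResizedImage/storeOriginalImage are illustrative.
const path = require('path');

async function saveImagePair({resizedImage, originalImage, storage}) {
    // Pin one target directory so both files always land in the same month folder
    const targetDir = storage.getTargetDir();

    // 1. Store the resized image first; the adapter may de-duplicate the name,
    //    e.g. image.jpg becomes image-3.jpg
    const storedFilename = await storage.storeResizedImage(resizedImage, targetDir);

    // 2. Derive the original's name from the name actually used for the resized
    //    file, so image-3.jpg is always paired with image-3_o.jpg
    const {name, ext} = path.parse(storedFilename);
    const originalFilename = `${name}_o${ext}`;

    return storage.storeOriginalImage(originalImage, targetDir, originalFilename);
}

module.exports = saveImagePair;
```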
Some extra optimisations needed with this fix:
- Previously, when uploading image.jpg while it already exists, it would
store the two files under e.g. `image-3.jpg` and `image_o-3.jpg`. Note the
odd positioning of `_o`. This probably caused bugs when uploading
files named `image-3.jpg`, which would store the original in
`image-3_o.jpg`, but that original would never be used by the
handle-image-sizes middleware (it would look for `image_o-3.jpg`). This
fix solves the naming issue and makes it more consistent.
But we need to make sure our middlewares (including handle-image-sizes)
are able to handle both file locations to remain compatible with the
old format. This isn't additional work, because it also fixes the old bug.
- Prevent uploading files that end with `_o`, e.g. by automatically
stripping that suffix from uploaded files, to prevent collisions.
Advantage(s):
- We keep the original file name, which is better for SEO.
- No changes required to the storage adapters.
Downside(s):
- The storage of both files will no longer happen in parallel, but I
expect the performance implications to be minimal.
- Changes to the routing: normalize middleware is removed
refs https://github.com/TryGhost/Toolbox/issues/503
- The Dynamic URL service no longer generates a "url.added" event when only a partial resource update happened, i.e. only non-URL-forming properties were modified. The sitemaps service still needs to know when to update the lastmod ("Last Modified") field associated with a specific URL.
refs https://github.com/TryGhost/Toolbox/issues/501
- this reverts commit 48dda23554
- also includes a resolution for `@elastic/elasticsearch` so we don't
run a version that is potentially problematic - see referenced issue
for context
refs https://github.com/TryGhost/Team/issues/2416
This extends the mock API to use a more formal pattern, moving our
entity code into a separate package and using the service/repository
patterns we've been working toward.
The repository is currently in memory; this allows us to start using
the API without having to make commitments to the database structure.
We've also injected a single fake webmention for testing. I'd expect
the Mention object to change a lot from this initial definition as we
gain more information about the type of data we expect to see.
refs https://github.com/TryGhost/Toolbox/issues/499
refs 6bcc47a0ad
- Using the module directly caused issues with snapshot manager instance initialization (mocha hooks did not apply to the correct instance)
- See the referenced commit for more
refs https://github.com/TryGhost/Toolbox/issues/499
- Outgoing emails have been a weak point of Ghost's stability recently. The concept of an "emailMockReceiver", similar to the "webhookMockReceiver", allows testing side effects like outgoing emails.
- This is a first iteration which should lay the groundwork for testing all outgoing emails in the future
- The change adds a new concept of an "email mock receiver" which works very similarly to the "webhook mock receiver". The email mock receiver exposes two methods to record and verify snapshots (usage sketched below):
- matchHTMLSnapshot - records and verifies only the HTML content of the outgoing email
- matchMetadataSnapshot - records and verifies all the non-HTML properties sent along with the email content, e.g. to address, plaintext, subject, etc.
- What's missing is matching content that is dynamic, like dates and links with JWT tokens
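A hedged usage sketch of the two methods described above; how the receiver and the request agent are obtained is an assumption about the e2e framework, not verified API:

```js
// Hedged usage sketch; how the receiver and agent are obtained is assumed.
async function testSignupEmail({emailMockReceiver, membersAgent}) {
    // Trigger an outgoing email via the normal API surface
    await membersAgent
        .post('/api/send-magic-link/')
        .body({email: 'member@example.com', emailType: 'signup'})
        .expectStatus(201);

    emailMockReceiver.matchHTMLSnapshot();     // snapshot of only the HTML body
    emailMockReceiver.matchMetadataSnapshot(); // to address, subject, plaintext, etc.
}

module.exports = testSignupEmail;
```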
closes https://github.com/TryGhost/Toolbox/issues/497
- The classification of fatal/non-fatal errors has been updated to only be fatal when causing page renders with 5xx or 4xx responses.
- Some of the rules checking Ghost 5.x compatibility have been relaxed to only be "error" with the gscan version bump
- You can find more details on which exact rules were relaxed in the gscan's commit log - https://github.com/TryGhost/gscan/compare/v4.35.1...v4.36.0
refs https://github.com/TryGhost/Toolbox/issues/488
- Node 18 is now LTS so we're adding support for it
- this adds Node 18.12.1 (the latest security release) to our supported
ranges and CI
- this was all getting terribly behind, so I've done several things:
- bumped the majority of `@tryghost/*` packages, except the Lexical ones
- bumped gscan + knex-migrator to remove old `@tryghost/errors` usage
- bumped the lockfile