refs https://github.com/TryGhost/Toolbox/issues/522
- Caching at the repository level was hard to achieve with more complex models like "posts", so that approach was abandoned in favor of API-response-level caching.
- Also removed use of "public-resource-repository" as it was not serving any specific purpose anymore.
no issue
- Caching has been moved down a layer - to the api-framework's "pipeline" - so there's no need to add complexity to the post fetching logic with a repository pattern.
refs https://github.com/TryGhost/Toolbox/issues/522
- When caching responses, the posts cache can become stale within the TTL period and serve stale responses to shared caches.
- A full cache reset on the 'site.changed' event keeps cached content evergreen, reducing the risk of serving stale content from shared caches
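A minimal sketch of the idea, assuming an event emitter comparable to Ghost's server-side events module and an illustrative cache instance name:
```
const EventEmitter = require('events');

// Stand-ins for Ghost's shared events module and the public posts response cache
// (names are illustrative, not the actual wiring)
const events = new EventEmitter();
const postsPublicCache = {reset: () => console.log('posts public cache reset')};

// Any content change that emits 'site.changed' wipes the whole response cache,
// so shared caches are never handed content that outlived its source
events.on('site.changed', () => {
    postsPublicCache.reset();
});

events.emit('site.changed');
```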
refs https://github.com/TryGhost/Toolbox/issues/522
- The main feature of this cache wrapper is being able to "reset" the cache without calling "reset" on the wrapped cache. Being able to invalidate caches without accessing the data is needed to run on caches in a shared environment.
- Cache invalidation happens through a special "reset time" being added to each key when setting or getting a value; when the cache is reset, the reset time is set to a new value, essentially invalidating all previously accessible values.
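A rough sketch of that mechanism (class name and key layout are illustrative, not the actual wrapper):
```
// Every key is namespaced with the last reset time, so reset() only bumps a
// timestamp instead of touching the stored data.
class InvalidationAwareCache {
    constructor(wrappedCache) {
        this.cache = wrappedCache;
        this.resetTime = Date.now();
    }

    _key(key) {
        return `${this.resetTime}:${key}`;
    }

    async get(key) {
        return this.cache.get(this._key(key));
    }

    async set(key, value) {
        return this.cache.set(this._key(key), value);
    }

    reset() {
        // Previously written keys become unreachable; the backing store
        // (e.g. Redis with TTLs) evicts them on its own schedule.
        this.resetTime = Date.now();
    }
}
```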
refs https://github.com/TryGhost/Toolbox/issues/522
- The public posts "browse" endpoint is causing the most strain on instance performance. Caching responses with a small TTL would allow us to reduce the amount of request processing.
refs https://github.com/TryGhost/Toolbox/issues/522
- API-level response caching allows responses to be cached while bypassing the "pipeline" processing
- The main use case for these caches is caching GET requests for expensive Content API requests
- To enable response caching, add a "cache" key with a cache instance as the value. For example, the public posts cache configuration can look like:
```
module.exports = {
    docName: 'posts',
    browse: {
        cache: postsPublicService.api.cache,
        options: [ ...
```
no refs
- spam prevention test was causing subsequent tests to fail randomly
- moving it to the end ensures (for now) that we don't interrupt other tests
- seems to be an issue with awaiting the job service, which does concurrent work
no issue
- we added support for (and a demo of) clicking below the editor to move focus into it in the Koenig repo, but the implementation in Admin was missing
- replaced the commented-out mobiledoc handling in `<GhKoenigEditorLexical>` with the new Koenig-lexical equivalent
No issue
- These specific styles were clashing with the Tailwind classes used in the Lexical editor. As they are not used anywhere in Admin, the simplest solution is to remove them.
- there's a weird situation when we have mixed versions of the
dependency because different libraries try to compare instances
- this brings the usage up to 1.2.21 so we can fix the build for now
refs https://github.com/TryGhost/Toolbox/issues/522
- Having a simpler method signature makes it easier to use in different contexts - needed for changes in the public resource repository
- TL;DR of the changes - reduced the parameter 'frame.options' -> 'options'
no issue
Previously, the number of opened emails was generated incorrectly because the number of delivered emails was reported too high.
Also, the faker date function occasionally fails for dates which are
too close together, so this switches to manually generating a date
between the two.
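A minimal sketch of the manual approach (function name illustrative):
```
// Pick a random timestamp between two dates instead of relying on faker,
// which can fail when the dates are very close together.
function randomDateBetween(start, end) {
    const timestamp = start.getTime() + Math.random() * (end.getTime() - start.getTime());
    return new Date(timestamp);
}

const openedAt = randomDateBetween(new Date('2023-02-01T10:00:00Z'), new Date('2023-02-01T10:00:01Z'));
console.log(openedAt.toISOString());
```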
fixes https://github.com/TryGhost/Team/issues/2562
New event fetching loops:
- Reworked the analytics fetching algorithm. Instead of starting again
where we stopped during the last fetch minus 30 minutes, we now just
continue where we stopped, but with ms precision (because it is no longer
database dependent after the first fetch), and we stop at NOW - 1 minute to
reduce the chance of missing events (see the sketch after this list).
- Apart from that, a 'missing events' fetching loop is introduced. This
fetches events that are older than 30 minutes and processes all events a
second time to make sure we didn't skip any because of storage delays in
the Mailgun API.
- A new scheduled fetching loop that allows us to schedule fetching between a
given start/end date (currently only persisted in memory, so it stops after
a reboot)
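A rough sketch of the window selection under those rules (names illustrative, not the actual service code):
```
// begin: continue exactly where the previous run stopped, with ms precision
// end: stop at NOW - 1 minute so very recent events that Mailgun hasn't
// fully stored yet aren't missed
function getFetchWindow(lastEventTimestamp) {
    const begin = lastEventTimestamp;
    const end = Date.now() - 60 * 1000;
    return {begin, end};
}

const {begin, end} = getFetchWindow(Date.parse('2023-02-20T12:34:56.789Z'));
console.log(begin, end);
```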
UI and endpoint changes:
- New UI to show the state of the analytics 'loops'
- New endpoint to request the analytics loop status
- New endpoint to schedule analytics
- New endpoint to cancel scheduled analytics
- Some number formatting improvements, and introduction of 'opened'
count in debug screen
- Live reload of data in the debug screen
Other changes:
- This also improves the support for maxEvents. We can now stop a
fetching loop after x events without worrying about lost events. This is
used to reduce the fetched events in the missing and scheduled event
loop (e.g. when the main one is fetching lots of events, we skip the
other loops).
- Prevents fetching the same events over and over again if no new events
come in (because we always started at the same begin timestamp). The
code increases the begin timestamp by 1 second when it is safe to do so,
to prevent the API from returning the same events repeatedly (see the sketch after this list).
- Some optimisations in handling the processing results (fewer merges to
reduce CPU usage in cases where we have lots of events).
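A rough sketch of the begin-timestamp handling described above (illustrative only, not the actual service code):
```
// If no new events arrived since the last run, nudging the begin timestamp
// forward by 1 second (when that second is already fully processed) stops
// the API from returning the same batch over and over again.
function nextBeginTimestamp(lastBegin, newestProcessedEventTimestamp, safeToAdvance) {
    if (newestProcessedEventTimestamp > lastBegin) {
        // New events came in; continue from the newest processed event
        return newestProcessedEventTimestamp;
    }
    return safeToAdvance ? lastBegin + 1000 : lastBegin;
}
```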
Testing:
- You can test with lots of events using the new mailgun mocking server
(Toolbox repo `scripts/mailgun-mock-server`). This can also simulate
events that are only returned after x minutes because of storage delays.
no refs.
- The Spark email client injected styles into the new email signup email, which broke the layout. This commit forces those styles so the email renders consistently in all clients.
closes https://github.com/TryGhost/Team/issues/2572
The mentions browse output previously only showed resource info if the
mentioned resource type was a `post`.
Additionally, the `resource_type` column basically defaulted to `post`,
regardless of whether it was a page or in fact a post.
With this change, `resource_type` is now wired in to correctly
determine whether the mentioned URL was a page or a post.
Refs TryGhost/Team#2459
- upgraded got from v9.6.0 to v11.8.6 to support following redirects (and
other fixes); see the sketch below
- got v12+ requires ESM, so we do not want to upgrade further at this
time
- required changes to a few libraries that use externalRequests
- mention discovery service tests updated to test for follow redirects
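A minimal sketch of the behaviour the upgrade unlocks (the real externalRequest wrapper adds more validation than shown here):
```
const got = require('got'); // v11

// got v11 follows redirects by default; the options make that explicit
async function fetchWithRedirects(url) {
    const response = await got(url, {
        followRedirect: true,
        maxRedirects: 10
    });
    return response.body;
}
```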
no issue
- Instead of running the milestone service directly on boot, set a random
timeout of 0-4 days after boot before it runs (see the sketch below)
- Updated tests
- Service is still behind a beta flag
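A minimal sketch of the randomised start (service name and method are illustrative):
```
const milestonesService = {run: () => console.log('milestone run')}; // stand-in

const FOUR_DAYS_MS = 4 * 24 * 60 * 60 * 1000;

// Pick a random delay between 0 and 4 days so instances don't all run the
// milestone checks at the same moment after boot
const delay = Math.floor(Math.random() * FOUR_DAYS_MS);

setTimeout(() => {
    milestonesService.run(); // illustrative call, gated behind the beta flag
}, delay);
```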
refs 0306c397d0
refs https://github.com/TryGhost/Toolbox/issues/522
- 'extraProperties' should have been cleaned up along with the referenced commit. This property does not perform any logic in the current codebase (see ref) and makes it problematic to make "post" resource serialization more generic (for caching purposes).
refs https://github.com/TryGhost/Toolbox/issues/515
- We don't have a good way to test TTL caches without setting up Redis in the environment
- Adding an in-memory cache adapter with TTL allows tests to run on CI without having to install Redis
- Also, a TTL in-memory cache can be a great substitute for Redis-based caches on instances that
have a lot of spare RAM and don't necessarily need to use Redis
- The MemoryTTL cache accepts two parameters, "ttl" and "max":
- ttl - the time in milliseconds to hold a value in the cache
- max - the maximum number of items to keep in the cache
- To use the MemoryTTL cache, specify the following config in the cache section:
```
"adapters": {
"cache": {
"imageSizes": {
"adapter": "MemoryTTL",
"ttl": 3600
}
}
}
```
- The above config would apply the MemoryTTL cache to the imageSizes feature with a TTL of 3600 ms (a behavioural sketch follows below)
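A behavioural sketch of those two parameters (not the actual adapter implementation): values expire after `ttl` milliseconds and the oldest entry is evicted once `max` entries are stored.
```
class MemoryTTLCacheSketch {
    constructor({ttl, max}) {
        this.ttl = ttl;
        this.max = max;
        this.store = new Map();
    }

    set(key, value) {
        // Evict the oldest entry once the cache is full
        if (this.store.size >= this.max && !this.store.has(key)) {
            const oldestKey = this.store.keys().next().value;
            this.store.delete(oldestKey);
        }
        this.store.set(key, {value, expiresAt: Date.now() + this.ttl});
    }

    get(key) {
        const entry = this.store.get(key);
        if (!entry) {
            return undefined;
        }
        // Expired entries are dropped lazily on read
        if (Date.now() > entry.expiresAt) {
            this.store.delete(key);
            return undefined;
        }
        return entry.value;
    }
}
```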
Removed the wrapper class for the email service from coverage, because it only wires up a lot of dependencies, which is hard to cover in a unit test since we would also have to init all of those dependencies. It is already covered by E2E tests.
refs https://github.com/TryGhost/Team/issues/2550
By using cheerio to parse the HTML we can correctly look for elements
which use the target URL as the href attribute, rather than doing a
plaintext search. This is closer to what the spec says.
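A minimal sketch of the cheerio-based check (function and variable names illustrative):
```
const cheerio = require('cheerio');

// Look for an anchor whose href is the mentioned target URL, instead of
// searching the raw HTML as plain text
function mentionsTarget(html, targetUrl) {
    const $ = cheerio.load(html);
    return $(`a[href="${targetUrl}"]`).length > 0;
}

console.log(mentionsTarget('<p><a href="https://example.com/post/">nice post</a></p>', 'https://example.com/post/'));
```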
refs https://github.com/TryGhost/Team/issues/2550
Whilst making multiple requests for the same site isn't ideal, it's a
first step towards verification properties and can be refactored to
improve performance later. With the volume of incoming Webmentions
we've seen so far, this shouldn't be a problem.
refs https://github.com/TryGhost/Team/issues/2548
We need to be able to do this outside of the verify method so that
repository implementations have the ability to create existing
instances without running expensive verification checks!
[Added initial mentions-jobs
service](3656190114)
This is the result of running `cp -r jobs mentions-jobs` in the services
directory.
[Waited for mentions-jobs queue before
shutdown](2bb1a12a89)
This matches the functionality of the existing jobs service where we
will wait
for jobs to complete before closing the process.
[Used mentions-jobs service in the mentions
service](4e4f9fdd00)
This ensures that any delays in the mentions jobs queue do not affect other
parts of the application.
refs
https://www.notion.so/ghost/Marketing-Milestone-email-campaigns-1d2c9dee3cfa4029863edb16092ad5c4?pvs=4
- Added a `slack-notifications` repository which handles sending Slack
messages to a URL as defined in our Ghost(Pro) config (also includes a
global switch to disable the feature if needed) and listens to
`MilestoneCreatedEvents`.
- Added a `slack-notification` service which listens to the events on
boot.
- In order to have access to further information, such as the reason why
a milestone email hasn't been sent, or the current ARR or member value
compared to the achieved milestone, I added a `meta` object to the
`MilestoneCreatedEvent` which then becomes accessible to the event
subscriber. This avoids further requests to the DB, as we need this
information in relation to the event that occurred (a rough sketch follows below).
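A rough sketch of the subscription and the Slack call (the event wiring, config lookup, and payload shape are illustrative assumptions, not the actual repository code):
```
const got = require('got');

// domainEvents and config are stand-ins for the domain event bus and the
// Ghost(Pro) config lookup
function subscribeToMilestones(domainEvents, config) {
    domainEvents.subscribe('MilestoneCreatedEvent', async (event) => {
        if (!config.slackWebhookUrl || config.notificationsDisabled) {
            return; // global switch to turn the feature off
        }

        const {milestone, meta} = event.data;

        await got.post(config.slackWebhookUrl, {
            json: {
                text: `New milestone reached: ${milestone.type} ${milestone.value}`,
                // meta carries e.g. the reason an email wasn't sent, or the current ARR/member value
                attachments: [{text: JSON.stringify(meta)}]
            }
        });
    });
}
```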
---------
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
refs https://github.com/TryGhost/Team/issues/2561
- added a simple socket.io implementation to the Ghost server
- added an alpha flag for websockets
- added a route in Admin to test websockets using a simple counter stored in the server's local memory (resets on reboot)
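A minimal sketch of the alpha setup (event names and the counter handling are illustrative):
```
const http = require('http');
const {Server} = require('socket.io');

const httpServer = http.createServer();
const io = new Server(httpServer);

// Counter lives in the server process memory only, so it resets on reboot
let counter = 0;

io.on('connection', (socket) => {
    // Send the current value to a newly connected admin client
    socket.emit('counter', counter);

    socket.on('increment', () => {
        counter += 1;
        io.emit('counter', counter); // broadcast the new value to every connected client
    });
});

httpServer.listen(0);
```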