refs https://github.com/TryGhost/Toolbox/issues/522
- Using a generic resource repository for caching purposes didn't prove effective, as the code was moved to API-level caching. There's no need to introduce an extra abstraction level for simple calls like findPage when there's no extra logic to them.
refs https://github.com/TryGhost/Toolbox/issues/522
- Caching on a repository level was pretty hard to achieve with more complex models like "posts", so that approach was abandoned in favor of API-response level caching.
- Also removed use of "public-resource-repository" as it was not serving any specific purpose anymore.
no issue
- The caching has been moved down a layer - to the api-framework's "pipeline" - so there's no need to add complexity to the post-fetching logic with a repository pattern.
refs https://github.com/TryGhost/Toolbox/issues/522
- When caching responses, the posts cache can become stale within the TTL period and serve stale responses to shared caches.
- Having a full cache reset on the 'site.changed' event keeps cached content evergreen, reducing the risk of caching stale content in shared caches
refs https://github.com/TryGhost/Toolbox/issues/522
- The main feature of this cache wrapper is being able to "reset" the cache without calling "reset" on the wrapped cache. Being able to invalidate caches without touching the underlying data is needed to run on caches in a shared environment.
- Cache invalidation happens through a special "reset time" value that is added to each key when setting or getting a value; when the cache is reset, the reset time is set to a new value, essentially invalidating all previously accessible values. A minimal sketch of the idea follows.
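A sketch of the wrapping idea, assuming an underlying adapter with async `get`/`set` (class and method names are illustrative, not the actual implementation):
```js
// Illustrative sketch only - not the actual Ghost implementation
class ResettableCache {
    constructor(wrappedCache) {
        this.cache = wrappedCache;
        this.resetTime = Date.now();
    }

    // The reset time is baked into every key, so bumping it orphans all
    // previously written entries without touching the stored data itself
    _prefixedKey(key) {
        return `${this.resetTime}:${key}`;
    }

    async get(key) {
        return this.cache.get(this._prefixedKey(key));
    }

    async set(key, value) {
        return this.cache.set(this._prefixedKey(key), value);
    }

    reset() {
        this.resetTime = Date.now();
    }
}
```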
refs https://github.com/TryGhost/Toolbox/issues/522
- The public posts "browse" endpoint is causing the most strain on instance performance. Caching responses with a small TTL reduces the amount of request processing.
refs https://github.com/TryGhost/Toolbox/issues/522
- API-level response caching allows responses to be cached, bypassing the "pipeline" processing
- The main use case for these caches is caching GET requests for expensive Content API requests
- To enable response caching, add a "cache" key with a cache instance as a value; for example, the posts public cache configuration can look like:
```
module.exports = {
    docName: 'posts',

    browse: {
        cache: postsPublicService.api.cache,
        options: [ ...
```
no refs
- spam prevention test was causing subsequent tests to fail randomly
- moving it to the end ensures (for now) we don't interrupt other tests
- seems to be an issue with awaiting the job service, which does concurrent work
no issue
- we added support for (and a demo of) clicking below the editor to move focus into it in the Koenig repo, but the implementation in Admin was missing
- replaced the commented-out mobiledoc handling in `<GhKoenigEditorLexical>` with the new Koenig-lexical equivalent
No issue
- These specific styles were clashing with the Tailwind classes used in the Lexical editor. As they are not used anywhere in Admin, the simplest solution is to remove them.
- there's a weird situation when we have mixed versions of the
dependency because different libraries try to compare instances
- this brings the usage up to 1.2.21 so we can fix the build for now
refs https://github.com/TryGhost/Toolbox/issues/522
- Having a simpler method signature makes it easier to use in different contexts - needed for changes in the public resource repository
- TLDR of the changes - reduced parameter 'frame.options' -> 'options'
no issue
Previously the number of opened emails was generated incorrectly because the number of delivered emails was reported too high.
Also, the faker date function occasionally fails for dates which are too close together, so this switches to manually generating a date between the two (see the sketch below).
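A minimal sketch of generating a random date strictly between two dates (names hypothetical, not the exact code):
```js
// Illustrative sketch - pick a random timestamp between two dates
function randomDateBetween(start, end) {
    const startMs = start.getTime();
    const endMs = end.getTime();
    return new Date(startMs + Math.random() * (endMs - startMs));
}

// usage (hypothetical names): an opened date must come after the delivered date
const openedAt = randomDateBetween(deliveredAt, new Date());
```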
fixes https://github.com/TryGhost/Team/issues/2562
New event fetching loops:
- Reworked the analytics fetching algorithm. Instead of starting again where we stopped during the last fetch minus 30 minutes, we now just continue where we stopped - but with ms precision (because it's no longer database dependent after the first fetch), and we stop at NOW - 1 minute to reduce the chance of missing events.
- Apart from that, a missing-events fetching loop is introduced. This fetches events that are older than 30 minutes and processes all events a second time to make sure we didn't skip any because of storage delays in the Mailgun API.
- A new scheduled fetching loop, which allows us to schedule fetching between a given start/end date (currently only persisted in memory, so it stops after a reboot)
UI and endpoint changes:
- New UI to show the state of the analytics 'loops'
- New endpoint to request the analytics loop status
- New endpoint to schedule analytics
- New endpoint to cancel scheduled analytics
- Some number formatting improvements, and introduction of 'opened'
count in debug screen
- Live reload of data in the debug screen
Other changes:
- This also improves the support for maxEvents. We can now stop a
fetching loop after x events without worrying about lost events. This is
used to reduce the fetched events in the missing and scheduled event
loop (e.g. when the main one is fetching lots of events, we skip the
other loops).
- Prevents fetching the same events over and over again if no new events come in (because we always started at the same begin timestamp). The code increases the begin timestamp by 1 second if it is safe to do so, to prevent the API from returning the same events repeatedly.
- Some optimisations in handling the processing results (fewer merges to reduce CPU usage in cases where we have lots of events).
Testing:
- You can test with lots of events using the new mailgun mocking server
(Toolbox repo `scripts/mailgun-mock-server`). This can also simulate
events that are only returned after x minutes because of storage delays.
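A minimal sketch of the safe begin-timestamp advance described above (function and parameter names are illustrative, not the actual implementation):
```js
// Illustrative sketch - decide where the next fetch should begin
function nextBegin(lastBegin, events, truncatedByMaxEvents) {
    if (events.length > 0) {
        // continue exactly where we stopped, with ms precision
        return new Date(events[events.length - 1].timestamp);
    }
    if (!truncatedByMaxEvents) {
        // safe: the whole window was empty, so skip 1 second past it to stop
        // the API from returning the same events over and over again
        return new Date(lastBegin.getTime() + 1000);
    }
    return lastBegin;
}
```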
no refs.
- Spark email client injected styles into the new email signup email which broke the layout. This commit forces those styles so the email is consistent in all clients.
closes https://github.com/TryGhost/Team/issues/2572
The mentions browse output previously only showed resource info if the
mentioned resource type was a `post`.
Additionally, the `resource_type` column basically defaulted to `post`
regardless of whether it was a page or in fact a post.
With this change we now have `resource_type` wired in to correctly
determine whether the mentioned URL was a page or a post.
Refs TryGhost/Team#2459
- upgraded got from v9.6.0 to v11.8.6 to support following redirects (and other fixes)
- got v12+ requires ESM, so we do not want to upgrade further at this time
- required changes to a few libraries that use externalRequests
- mention discovery service tests updated to test for following redirects
no issue
- Instead of running the milestone service directly on boot, set a random timeout of 0-4 days after boot before it runs (see the sketch below)
- Updated tests
- Service is still behind a beta flag
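A minimal sketch of the randomized delay, with a hypothetical `milestonesService.run()` entry point:
```js
// Illustrative sketch - delay the milestone run by a random 0-4 days after boot
const FOUR_DAYS_MS = 4 * 24 * 60 * 60 * 1000;
const delay = Math.floor(Math.random() * FOUR_DAYS_MS);

setTimeout(() => {
    milestonesService.run(); // hypothetical entry point
}, delay);
```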
refs 0306c397d0
refs https://github.com/TryGhost/Toolbox/issues/522
- 'extraProperties' should have been cleaned up along with the referenced commit. This property does not perform any logic in the current codebase (see ref) and makes it problematic to make "post" resource serialization more generic (for caching purposes).
refs https://github.com/TryGhost/Toolbox/issues/515
- We don't have a good way to test TTL caches without setting up Redis in the environment
- Adding an in-memory cache adapter with TTL allows tests to run on CI without having to install Redis
- Also, the in-memory TTL cache can be a great substitute for Redis-based caches on instances that
have a lot of spare RAM and don't necessarily need to use Redis
- The MemoryTTL cache accepts two parameters, "TTL" and "max":
  - TTL - time in milliseconds to hold a value in the cache
  - max - the maximum number of items to keep in the cache
- To use MemoryTTL cache specify following config in the cache section:
```
"adapters": {
"cache": {
"imageSizes": {
"adapter": "MemoryTTL",
"ttl": 3600
}
}
}
```
- The above config applies the MemoryTTL cache to the imageSizes feature with a TTL of 3600 ms
Removed the wrapper class for the email service from coverage, because it only wires up a lot of dependencies, which is hard to cover in a unit test since we'd have to init all of those dependencies. It is already covered by E2E tests.
refs https://github.com/TryGhost/Team/issues/2550
By using cheerio to parse the HTML we can correctly look for elements
which use the target URL as the href attribute, rather than doing a
plaintext search. This is closer to what the spec says.
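A minimal sketch of the cheerio-based check, assuming `html` and `targetUrl` strings (illustrative, not the actual implementation):
```js
const cheerio = require('cheerio');

// Illustrative sketch - look for anchors that actually link to the target,
// instead of searching the raw HTML as plain text
function mentionsTarget(html, targetUrl) {
    const $ = cheerio.load(html);
    return $(`a[href="${targetUrl}"]`).length > 0;
}
```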
refs https://github.com/TryGhost/Team/issues/2550
Whilst making multiple requests for the same site isn't ideal, it's a
first step towards verification properties, and can be refactored to
improve performance later. With the current volume of incoming
Webmentions we've seen so far, this shouldn't be a problem.
refs https://github.com/TryGhost/Team/issues/2548
We need to be able to do this outside of the verify method so that
repository implementations have the ability to create existing
instances without running expensive verification checks!
[Added initial mentions-jobs
service](3656190114)
This is the result of running `cp -r jobs mentions-jobs` in the services
directory.
[Waited for mentions-jobs queue before
shutdown](2bb1a12a89)
This matches the functionality of the existing jobs service, where we will wait for jobs to complete before closing the process.
[Used mentions-jobs service in the mentions
service](4e4f9fdd00)
This ensures that any delays in the mentions jobs queue do not affect other parts of the application.
refs
https://www.notion.so/ghost/Marketing-Milestone-email-campaigns-1d2c9dee3cfa4029863edb16092ad5c4?pvs=4
- Added a `slack-notifications` repository which handles sending Slack
messages to a URL as defined in our Ghost(Pro) config (also includes a
global switch to disable the feature if needed) and listens to
`MilestoneCreatedEvents`.
- Added a `slack-notification` service which listens to the events on
boot.
- In order to have access to further information, such as the reason why
a Milestone email hasn't been sent, or the current ARR or Member value
for comparison with the achieved milestone, I added a `meta` object to the
`MilestoneCreatedEvent` which is then accessible to the event
subscriber. This avoids further requests to the DB, as we need
this information in relation to the event that occurred.
---------
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
refs https://github.com/TryGhost/Team/issues/2561
- added a simple socket.io implementation to the Ghost server
- added alpha flag for websockets
- added route in admin to test websockets using a simple counter stored in server local memory (refreshes on reboot)
refs https://github.com/TryGhost/Toolbox/issues/522
- Browse endpoints for Posts and Pages create the most database traffic in the system. These are read-only endpoints that don't have to be fresh 100% of the time. Having an optional cache allows offloading some of the database querying to more efficient storage.
- To enable the cache for Posts/Pages browse endpoints there are two prerequisites:
- set 'hostSettings:postsPublicCache:enabled' to 'true' in the configuration file
- add 'postsPublic' cache adapter in cache configuration
- Example config for adapters with 60s TTL for a cached resource:
```
"adapters": {
"cache": {
"postsPublic": {
"adapter": "Redis",
"ttl": 60,
"keyPrefix": "site_id_here:posts-content-api:"
}
}
},
```
refs https://github.com/TryGhost/Toolbox/issues/522
- The caching rules for Content API resources like Tags / Posts / Pages / Authors are the same, so it makes sense to make a common repository which takes a model as a parameter and performs the queries at that level instead of implementing a repository per model.
- With this refactor, adding Posts Content API caching becomes a much simpler change.
no issue
This should massively increase the speed of importing the large dataset, which is important as the time to import it on Pro is currently >10 minutes.
fixes https://github.com/TryGhost/Ghost/issues/16278
`urlToPath` is not a method that is defined on the StorageBase, and thus is not implemented on any custom storage adapter.
We need this method to prevent uploading a file in two different directories. Currently this is an edge case:
- If you upload a file at midnight across a month boundary, it is possible that we upload one file at `/2023/02/file.png` and the original at `/2023/03/file_o.png`
- To make sure we upload the second file to the same directory, we need the `urlToPath` implementation. The save method sadly only returns the URL, which could be different from the stored file path, so we cannot use that return value.
This fix only uses `urlToPath` if available, to prevent the above issue (see the sketch below).
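A minimal sketch of the guarded call, assuming a storage adapter instance and hypothetical variable names:
```js
const path = require('path');

// Illustrative sketch - only rely on urlToPath when the adapter implements it
let targetDir;
if (typeof storage.urlToPath === 'function') {
    // derive the directory of the already-saved file so both files land together
    targetDir = path.dirname(storage.urlToPath(originalFileUrl));
}

// otherwise fall back to the adapter's default target directory
const savedUrl = await storage.save(file, targetDir);
```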
fixes https://github.com/TryGhost/Team/issues/2433
- Moved all outbound link tagging code to separate OutboundLinkTagger
- Because a site can easily enable/disable this feature, we don't store
the ?refs in the HTML but add them on the fly for now in the Content
API.
closes https://github.com/TryGhost/Team/issues/2551
Rather than blindly passing all data through the API we explicitly include each
new property. This allows us to make changes to the core entities without
affecting the API. The verified property is being added now to give design the
ability to display these mentions differently.
We also needed to include the verified property in the return value of toJSON;
this was missed as part of the original entity changes.
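A minimal sketch of the explicit mapping approach (property names illustrative, based on those mentioned above):
```js
// Illustrative sketch - explicitly whitelist what the API exposes, so core
// entity changes don't leak into the response format
function serializeMention(mention) {
    return {
        source: mention.source.toString(),
        target: mention.target.toString(),
        verified: mention.verified
    };
}
```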
refs https://github.com/TryGhost/Toolbox/issues/520
- The cluster config is taking priority over the local adapter ttl configuration - the priority should be reversed: adapter config first, followed by cluster config
- In addition, if we add nested config merging to the adapter manager we could achieve the same effect by having per-adapter configuration with a "clusterConfig.options.ttl" value
no issue
- With the switch to using a `MilestoneCreatedEvent` we'll decouple the mailing functionality and no longer need `GhostMailer` as a dependency in the package
no issue
- In preparation of using event emitting for Milestone achievements, we needed to add a dedicated `MilestoneCreatedEvent` to the `Milestone` entity.
- The event will be emitted using `DomainEvents` when a new milestone is saved, which will allow us to listen to these events.
no issue
- The way we're going to implement milestones diverged from the original idea of handling email sending within the milestone-emails package, as we'll be sending events instead and will utilise the StaffService to listen to them and send the emails
- This renames the package as well as the service in core itself and all relevant tests
no issue
- For better testability with in-memory repository, refactor the
milestones service to preserve the API instance
- Fetching the information about Stripe live mode from the Stripe service
was causing difficulties when testing. As a workaround we switched to
reading the live mode keys and determining it that way.
---------
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
refs c489343831
- `lexicalLib.lexicalHtmlRenderer` can't be used directly because it's missing required config for nodes to render properly
- updated email serializers to use `lexicalLib.render()` instead
closes https://github.com/TryGhost/Team/issues/2558
- bumped `kg-lexical` packages so we're working with the latest suite of default nodes and renderer
- added a `render()` method directly to our `lexicalLib` object
- allows us to pass through all of Ghost's config for image transforms etc in one place rather than every time we want to render something
no issue
- local development Ghost installs should not default to a locally served `koenig-lexical.umd.js` file but should use the default CDN version
- a locally served file should be set in `config.local.json` when you want to develop against an unreleased version
no issue
- added `requestMethod` to `fileTypes` map
- added pass-through of `formData` options to `upload(file, options)` so `url` property can be passed to map uploaded image file to media file
- fixed `resourceName` for media thumbnail uploads
fixes https://github.com/TryGhost/Team/issues/2557
When a member doesn't have a name and the first_name replacement doesn't have a fallback, we showed %recipient.first_name% instead of an empty string.
no issue
- When we receive an email failure with an empty message, the saving of
the model would fail because of schema validation that requires strings
to be non-empty.
- This adds more logging to the email analytics service to help debug
future issues
- Performance improvement to storing delivered, opened and failed emails
by replacing COALESCE with WHERE X IS NULL (tested; should give a
decent performance boost locally). See the sketch below.
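A minimal sketch of the query change in knex-style syntax (table and column names hypothetical):
```js
// Before (illustrative): COALESCE preserves any existing value inside SQL
// UPDATE email_recipients SET delivered_at = COALESCE(delivered_at, ?) WHERE id IN (...)

// After (illustrative): only update rows that don't have a value yet
await db('email_recipients')
    .whereIn('id', recipientIds)
    .whereNull('delivered_at')
    .update({delivered_at: timestamp});
```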
refs https://github.com/TryGhost/Koenig/pull/491
- `<KoenigComposer>` now takes a `fileUploader` object in place of `imageUploadFunction`
- updated the upload functions in `koenig-lexical-editor.js` to match expected patterns, handle multiple files and file types, and return expected upload progress, result, and error details
closes https://github.com/TryGhost/Team/issues/2548
Rather than use a setter here we've used a verify method which takes the HTML
string and naively validates that the target URL is present. This is so that the
logic of verification is encapsulated in the Mention, and should mean that
erroneous verification doesn't happen.
We could consider, down the line, having the verify method fetch the content
itself, but if we do that we should pass in `got` as a param, so that it's
possible to stub it in tests.
One thing to think about when it comes time to make this as performant as
possible is doing a single fetch of the source document and using it for both
verification and metadata extraction. At that point we should probably
consolidate the two operations, either moving the metadata extraction into
the Mention entity (passing in any necessary deps) OR moving the verification
out to the same layer as metadata extraction.
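A minimal sketch of the naive verify method described above (illustrative; assumes the Mention knows its target URL):
```js
// Illustrative sketch - verification logic encapsulated on the entity
class Mention {
    // ...

    verify(html) {
        // naive containment check: is the target URL present in the source HTML?
        this.verified = html.includes(this.target.toString());
    }
}
```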
closes https://github.com/TryGhost/Team/issues/2552
We send a Webmention for the same URL twice, but change the contents
of the source document, and we check that the source metadata is
updated appropriately.
We should consider extending all of these tests to include featured
images and logos etc...
fixes https://github.com/TryGhost/Team/issues/2542
fixes https://github.com/TryGhost/Team/issues/2543
fixes https://github.com/TryGhost/Team/issues/2544
- Hides incomplete subscriptions
- Shows Past Due subscriptions
- Fixed UI issues with 3+ subscriptions
- Fixed missing complimentary subscription when one subscription was
incomplete/inactive
- Fixed sending a paid subscription started email for incomplete
subscriptions. This change also required us to actually send the email
when the incomplete subscription eventually becomes active. So the
introduction of a new `SubscriptionActivatedEvent` made sense/was
required (because sending a SubscriptionCreatedEvent again would cause
other issues).
refs https://github.com/TryGhost/Toolbox/issues/515
- This implementation allows using a Redis cluster as a caching adapter. The cache adapter can be configured through the same adapter configuration interface as the others. It accepts the following config values (an illustrative example follows below):
 - "ttl" - time in SECONDS for a stored entity to live in the Redis cache
 - "keyPrefix" - special cache key prefix to use with stored entities
 - "host" / "port" / "password" / "clusterConfig" - Redis instance specific configs
- Set test coverage to a non-standard 75% because the code mostly consists of glue code that would require unnecessary Redis server mocking - a bad ROI. This module has been used in production for a long time already, so it can be considered pretty stable.
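An illustrative adapter configuration using these values, following the same JSON convention as the other cache examples in this log (the "somethingPublic" feature key and all values are hypothetical):
```
"adapters": {
    "cache": {
        "somethingPublic": {
            "adapter": "Redis",
            "ttl": 3600,
            "keyPrefix": "site_id_here:",
            "host": "127.0.0.1",
            "port": 6379
        }
    }
}
```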
refs https://github.com/TryGhost/Toolbox/issues/515
- Redis-based caches can be used in hosted environments to store information with high memory impact - when in-memory caches would be too impractical to use
- This is a placeholder package for a cluster-aware Redis cache implementation compatible with Ghost's cache adapter interface (a41d351f16/packages/adapter-base-cache)
refs https://github.com/TryGhost/Toolbox/issues/515
- For caches relying on external storage, e.g. Redis, the get/set operations are usually async. The tags repository should work with these, as caching is expected to be non-in-memory for this data.
refs https://github.com/TryGhost/Toolbox/issues/515
- By moving the cache initialization behind the hostSettings configuration we can limit the experimental feature to only hosted environments with special capacities
- An example configuration to enable Tags caching looks like this:
```
"hostSettings": {
"tagsPublicCache": {
"enabled": true
},
```
In addition, to have the caching backed by Redis or even the in-memory TTL cache, the site configuration should include a cache adapter configuration like this:
```
"adapters": {
"cache": {
"active": "Memory",
"tagsPublic": {
"adapter": "TTL",
"ttl": 60000 // 60 * 1000 minute
},
```
refs https://github.com/TryGhost/Toolbox/issues/515
- There are a lot of repeated tag-related queries coming from
{{get}} helpers in themes that can be cached.
- Having a repository layer deal with a very specific type of query allows
extra functionality, like caching, to be added on top of the database query
- This commit wires in code that adds a default in-memory cache to
all db queries. Note: it lasts forever and has no "reset" listeners. The
production cache is meant to have a short time-to-live (TTL), which removes
the need to keep the cache always fresh. A sketch of the cached query shape follows.
- Kept the cache key short. Without a "context" or other non-model options, the cache key can cover more variations of queries. For example, there are no member-specific or integration-specific query results, so having those in the cache key would only partition the cache and use up more memory.
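A minimal sketch of the repository-level caching described above (class, method, and key names illustrative):
```js
// Illustrative sketch - consult the cache before hitting the database
class TagsPublicRepository {
    constructor({Tag, cache}) {
        this.Tag = Tag;
        this.cache = cache;
    }

    async getAll(options) {
        // shortened key: only model-relevant options, no "context", so
        // equivalent queries share a single cache entry
        const key = JSON.stringify({filter: options.filter, limit: options.limit});

        const cached = await this.cache.get(key);
        if (cached) {
            return cached;
        }

        const result = await this.Tag.findPage(options);
        await this.cache.set(key, result);
        return result;
    }
}
```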
closes https://github.com/TryGhost/Team/issues/2547
Changed the configuration for testing to be a bit more strict by lowering the number of requests it can handle, giving CI enough time to trigger the rate limiter. Before this, CI simply wasn't hitting the API fast enough to trigger it.
Co-authored-by: Ronald Langeveld <hi@ronaldlangeveld.com>
refs https://github.com/TryGhost/Team/issues/2535
At the moment we only update the metadata of the Webmention source, so
that we can capture updated titles, excerpts, etc... when a post is updated.
refs https://github.com/TryGhost/Team/issues/2535
Moving this into a separate method allows us to set the metadata externally from
the Mention entity and keep all of the validation, without duplicating code.
refs https://ghost.slack.com/archives/C04MSE4MKJT/p1675948815531779
- running at a fixed hh:mm every day means a platform with a large number
of Ghost sites will get hammered with DB requests when they all start
up
- this reconfigures the cron to run at a random minute and second
between 0am and 5am, which gives a 6-hour window (see the sketch below)
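A minimal sketch of building such a randomized cron schedule (illustrative, not the exact code):
```js
// Illustrative sketch - random second/minute/hour inside a 6-hour window,
// so sites on the same platform don't all fire at the same moment
const second = Math.floor(Math.random() * 60);
const minute = Math.floor(Math.random() * 60);
const hour = Math.floor(Math.random() * 6); // 0am-5am

const schedule = `${second} ${minute} ${hour} * * *`; // sec min hour day month weekday
```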
refs https://github.com/TryGhost/Team/issues/2534
As we're using soft deletes for mentions we need to store the `deleted` column
as well as enforce a `'deleted:false'` filter on the bookshelf model.
We've also implemented the handling for deleting mentions, where we remove a
mention anytime we receive an update from or to a page which no longer exists.
Co-authored-by: Steve Larson <9larsons@gmail.com>
refs https://github.com/TryGhost/Team/issues/2534
This is so that we can support soft deletes for Mentions.
We need to add the defaults to the model so that writes continue to work.
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
closes https://github.com/TryGhost/Team/issues/2526
- Mention emails can now be toggled inside staff users' profiles, if they
have the webmention flag enabled on their Ghost site.
- Removed the flag dedicated to webmention email notifications; this is
now handled by the `webmention` flag.
- Does not send email notification if `webmention` flag is not enabled.
- Updated tests.
---------
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
refs https://github.com/TryGhost/Team/issues/2482
This moves the processing of Mailgun events to the main thread, using a simple approach where we emit a start event on the worker thread (via the job manager) and listen for it on the main thread. This is needed because, for now, the job manager doesn't support scheduling periodic jobs on the main thread (non-offloaded).
Apart from that, the email processor now uses the email event storage directly instead of emitting events (it is still emitting events for now). This makes sure we await the processing of each event before continuing with the next one.
refs https://github.com/TryGhost/Ghost/issues/16125
The previous fix for incorrect recipient details being shown when
re-sending a failed email introduced another bug that prevented the
"Match post visibility" default recipients setting from working.
- the server always sets `post.emailSegment` to `'all'` for new posts so
the publish flow recipient filter logic that checked for
`post.emailSegment` being present always defaulted to `'all'` rather
than falling back to the selected default recipients setting
- when a post has been published but the email failed it will have its
`newsletter` value set so we can use that as a check for using the
`post.emailSegment` value in place of the default recipients setting
refs https://github.com/TryGhost/Team/issues/2526
- created a migration for a new boolean column in users that determines
if the staff user gets an email when the publication receives a new
mention.
closes https://github.com/TryGhost/Team/issues/2419
- adds a rate limiter implementation to the mentions receiving endpoint.
- Current configuration is `{"minWait": 10, "maxWait": 100, "lifetime": 1000, "freeRetries": 100}`, which is still very open and almost unrestricted.
- currently makes use of database storage to track the limits, but can be relatively easily swapped out for something like Redis should we find this endpoint getting hit too often and maliciously.
no issue
Free and premium newsletters were the other way around in the demo data. This was a good opportunity to stop the email table importer from relying on the newsletter name and to use the order alone.
refs
https://www.notion.so/ghost/Marketing-Milestone-email-campaigns-1d2c9dee3cfa4029863edb16092ad5c4
This adds a milestone entity and in-memory repository in a new
`milestone-emails` package. This also adds a first definition of
milestones and their types, which is held in the default config to avoid
DB changes when, e.g., values change.
This should get everything in place to begin with the service
implementation.
fixes https://github.com/TryGhost/Team/issues/2522
When sending an email for multiple batches at the same time, we now
reuse the same email body for each batch in the same segment. This
reduces the number of database queries and makes sending more
reliable in case of database failures.
The cache is short-lived; after sending the email it is automatically
garbage collected. A sketch of the idea follows.
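A minimal sketch of the short-lived per-segment body reuse (names illustrative, not the actual implementation):
```js
// Illustrative sketch - render the email body once per segment and share it
// across batches; store the promise so concurrent batches also share the work
const bodyCache = new Map();

async function getEmailBody(email, segment) {
    const key = segment ?? 'default';
    if (!bodyCache.has(key)) {
        bodyCache.set(key, renderEmailBody(email, segment)); // hypothetical renderer
    }
    return bodyCache.get(key);
}

// after the whole send completes, drop references so it can be garbage collected
// bodyCache.clear();
```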
refs TryGhost/Team#2508
- added sending service e2e tests
- should job off this sending service for better tests
- and for Ghost to finish processing the job before shutdown
refs https://github.com/TryGhost/Toolbox/issues/515
- The link redirect handler was querying the database on every single frontend request, causing a significant amount of unnecessary traffic
- The optimization is returning early if the incoming URL does not start with the common "r/" prefix (see the sketch below)
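A minimal sketch of the early return (middleware shape illustrative):
```js
// Illustrative sketch - skip the database lookup for non-redirect URLs
function linkRedirectMiddleware(req, res, next) {
    if (!req.path.startsWith('/r/')) {
        return next(); // not a redirect link: no database query needed
    }
    // ...only now look up the redirect in the database
}
```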
fixes https://github.com/TryGhost/Team/issues/2512
The email sending is a crucial flow that should not be interrupted. If
there are database connection issues, we should try to recover from them
automatically by retrying individual database queries and transactions
for a limited amount of time.
This adds a helper method to retry database operations. It limits all
database queries before sending the actual email to a maximum of 10 minutes.
Database operations that happen after sending have a higher retry limit,
because they are more crucial for preventing data loss (e.g. saving that an
email was sent). A sketch of such a helper follows.
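A minimal sketch of such a retry helper, with hypothetical names and backoff values:
```js
// Illustrative sketch - retry a database operation until a time budget runs out
async function retryDb(operation, {maxTimeMs}) {
    const deadline = Date.now() + maxTimeMs;
    let attempt = 0;

    for (;;) {
        try {
            return await operation();
        } catch (err) {
            attempt += 1;
            const backoff = Math.min(1000 * attempt, 10000);
            if (Date.now() + backoff > deadline) {
                throw err; // time budget exceeded: surface the failure
            }
            await new Promise(resolve => setTimeout(resolve, backoff));
        }
    }
}

// usage (illustrative): cap pre-send queries at 10 minutes
// await retryDb(() => models.Email.edit(data, options), {maxTimeMs: 10 * 60 * 1000});
```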
refs 7f1e970a0b
- `koenig-card-callout.hbs` was touched without fixing the lint errors or updating the todo list, causing the lint todos to become out of sync
- fixed the lint error and updated the todo list
refs https://github.com/TryGhost/Team/issues/2503
This is in the MentionController atm as it's considered a presentation
concern. We might want to consider moving this into the MentionsAPI in
future so that we can simplify the controller and even remove it
completely in favour of putting the data-mapping in the endpoint definition.
refs TryGhost/Team#2477
- removed post.edited as it was too inclusive
- changed to post.published, post.published.edited, post.unpublished
- blocked import and internal data from triggering mentions
refs https://github.com/TryGhost/Team/issues/2506
`email` objects no longer contain an `html` field, and the fallback logic in the email preview modal was failing, resulting in a 404 from trying to fetch an email preview using the email id rather than a post id.
- added quick-fix to the preview modal logic to use `data.post_id || data.id` for generating the preview URL (previous logic never expected to reach the fallback when working with an email record)
This is a pretty simple way for us to track which webmentions are sent
by Ghost. Although it's easily spoofed, so are other approaches like
using a header (e.g. User-Agent). If we find that this data is being
spoofed we can look at a different approach.
Because our receiving implementation stores the payload of the
Webmention, we'll be able to know inside Ghost which Mentions
originated from another Ghost installation, which is useful for stats
and gives us the possibility to display that information in the feed.
Longer term we might want to consider storing this data in a separate
column for Mentions, rather than the `payload` column - but that is
outside the scope of this change.
refs https://github.com/TryGhost/Toolbox/issues/497
refs fb7532bf5d
- We downgraded the 'GS090-NO-PRICE-DATA-CURRENCY-CONTEXT' rule in gscan to non-fatal, meaning Ghost should not throw an error but instead render an empty value for the {{price}} helper when price data is empty.
- For example, a legacy syntax like this: '{{price currency=@price.currency}}' should not cause a page render error but return an empty price string.
- The pattern of returning an empty string instead of crashing is used in other helpers like {{img_url}} and {{url}}
closes https://github.com/TryGhost/Team/issues/2420
- Added user roles and permissions for the mentions admin API.
- We only have a `browse` function for our current use case, accessible
by `administrator` and `admin integration`.