no issue
- added `requestMethod` to `fileTypes` map
- added pass-through of `formData` options to `upload(file, options)` so a `url` property can be passed to map an uploaded image file to its media file (see the sketch after this list)
- fixed `resourceName` for media thumbnail uploads
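A minimal sketch of the new pass-through, assuming names like `upload`, `thumbnailFile`, and `mediaFile` (illustrative only, not the exact call site):
```
// Hypothetical usage of the formData pass-through described above
const result = await upload(thumbnailFile, {
    formData: {
        // maps the uploaded image file to its parent media file
        url: mediaFile.url
    }
});
```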
fixes https://github.com/TryGhost/Team/issues/2557
When a member doesn't have a name, and the first_name replacement doesn't have a fallback, we showed %recipient.first_name% instead of an empty string.
no issue
- When we receive an email failure with an empty message, the saving of
the model would fail because of schema validation that requires strings
to be non-empty.
- This adds more logging to the email analytics service to help debug
future issues
- Performance improvement to storing delivered, opened, and failed emails
by replacing COALESCE with WHERE X IS NULL (tested locally; should give a
decent performance boost).
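Roughly, the query change looks like the following knex sketch (table and column names are assumptions based on the description):
```
// Before: COALESCE keeps the first value but still rewrites every matched row
await knex('email_recipients')
    .whereIn('id', recipientIds)
    .update({delivered_at: knex.raw('COALESCE(delivered_at, ?)', [timestamp])});

// After: the WHERE clause skips rows that already have a value,
// so already-processed rows are never touched
await knex('email_recipients')
    .whereIn('id', recipientIds)
    .whereNull('delivered_at')
    .update({delivered_at: timestamp});
```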
refs https://github.com/TryGhost/Koenig/pull/491
- `<KoenigComposer>` now takes a `fileUploader` object in place of `imageUploadFunction`
- updated the upload functions in `koenig-lexical-editor.js` to match expected patterns, handle multiple files and file types, and return expected upload progress, result, and error details
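From the consumer's side the change looks roughly like this (prop names from the commit; the internals of the `fileUploader` object are assumptions):
```
// Before: a single function handled image uploads only
<KoenigComposer imageUploadFunction={uploadImage} />

// After: one fileUploader object bundles upload handling for multiple
// file types and exposes progress, result, and error details
<KoenigComposer fileUploader={fileUploader} />
```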
closes https://github.com/TryGhost/Team/issues/2548
Rather than use a setter here we've used a verify method which takes the HTML
string and naively validates that the target URL is present. This is so that the
logic of verification is encapsulated in the Mention, and should mean that
erroneous verification doesn't happen.
We could consider down the line that the verify method fetches the content
itself, but if we're to do that we should pass in `got` as a param, so that it's
possible to stub in tests.
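A minimal sketch of the shape described, with property names assumed:
```
class Mention {
    // ...

    /**
     * Naive verification: check that the target URL appears in the
     * source document's HTML.
     * @param {string} html - content of the fetched source document
     */
    verify(html) {
        this.verified = typeof html === 'string' && html.includes(this.target.href);
    }
}
```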
One thing to think about when it comes time to making this as performant as
possible is doing a single fetch of the source document and using that for
verification and metadata extraction. At that point we should probably
consolidate both of those operations, either moving the metadata extraction into
the Mention entity (passing in any necessary deps) OR we move the verification
out to the same layer as metadata extraction.
closes https://github.com/TryGhost/Team/issues/2552
We send a Webmention for the same URL twice, but change the contents
of the source document, and we check that the source metadata is
updated appropriately.
We should consider extending all of these tests to include featured
images and logos etc...
fixes https://github.com/TryGhost/Team/issues/2542
fixes https://github.com/TryGhost/Team/issues/2543
fixes https://github.com/TryGhost/Team/issues/2544
- Hides incomplete subscriptions
- Shows Past Due subscriptions
- Fixed UI issues with 3+ subscriptions
- Fixed missing complimentary subscription when one subscription was
incomplete/inactive
- Fixed sending a paid subscription started email for incomplete
subscriptions. This change also required us to actually send the email
when the incomplete subscription eventually becomes active, so the
introduction of a new `SubscriptionActivatedEvent` (sketched below) was
required (sending a `SubscriptionCreatedEvent` again would cause other
issues).
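The new event could look roughly like this, following the shape of Ghost's other member events (the exact data fields are assumptions):
```
class SubscriptionActivatedEvent {
    constructor(data, timestamp) {
        this.data = data;           // e.g. {memberId, tierId, subscriptionId}
        this.timestamp = timestamp;
    }

    static create(data, timestamp = new Date()) {
        return new SubscriptionActivatedEvent(data, timestamp);
    }
}

// Dispatched only when an incomplete subscription becomes active,
// so the "paid subscription started" email fires exactly once.
```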
refs https://github.com/TryGhost/Toolbox/issues/515
- This implementation allows using a Redis cluster as a caching adapter. The cache adapter can be configured through the same adapter configuration interface as the others (see the example after this list). It accepts the following config values:
- "ttl" - time in SECONDS for stored entity to live in Redis cache
- "keyPrefix" - special cache key prefix to use with stored entities
- "host" / "port" / "password" / "clusterConfig" - Redis instance specific configs
- Set test coverage to a non-standard 75% because the code mostly consists of glue code; covering it fully would require unnecessary Redis server mocking for a bad ROI. This module has been used in production for a long time already, so it can be considered pretty stable.
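An illustrative adapter configuration combining those values (all values are examples only; `clusterConfig` can replace `host`/`port` for cluster setups):
```
"adapters": {
    "cache": {
        "active": "Redis",
        "Redis": {
            "ttl": 3600,
            "keyPrefix": "site-cache:",
            "host": "localhost",
            "port": 6379,
            "password": "..."
        }
    }
}
```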
refs https://github.com/TryGhost/Toolbox/issues/515
- Redis-based caches can be used in hosted environments to store information with high memory impact, where in-memory caches would be too impractical to use
- This is a placeholder package for a cluster-aware Redis cache implementation compatible with Ghost's cache adapter interface (a41d351f16/packages/adapter-base-cache)
refs https://github.com/TryGhost/Toolbox/issues/515
- For caches relying on external storage (e.g. Redis), the get/set operations are usually async. The tags repository should work with async caches, as caching is expected to be non-in-memory for this data.
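A sketch of what the async-aware repository flow looks like (class and method names assumed):
```
class TagsPublicRepository {
    constructor({Tag, cache}) {
        this.Tag = Tag;
        this.cache = cache;
    }

    async getAll(options) {
        const key = JSON.stringify(options);
        // get/set are awaited, so a Redis-backed adapter works the
        // same way as an in-memory one
        const cached = await this.cache.get(key);
        if (cached) {
            return cached;
        }

        const tags = await this.Tag.findAll(options);
        await this.cache.set(key, tags);
        return tags;
    }
}
```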
refs https://github.com/TryGhost/Toolbox/issues/515
- By moving the cache initialization behind the `hostSettings` configuration we can limit the experimental feature to hosted environments with special capabilities
- An example configuration to enable Tags caching looks like this:
```
"hostSettings": {
    "tagsPublicCache": {
        "enabled": true
    }
}
```
In addition, to have the caching backed by a Redis backend or an in-memory TTL cache, the site configuration should include a cache adapter configuration like this:
```
"adapters": {
    "cache": {
        "active": "Memory",
        "tagsPublic": {
            "adapter": "TTL",
            "ttl": 60000 // 60 * 1000, i.e. one minute
        }
    }
}
```
refs https://github.com/TryGhost/Toolbox/issues/515
- There are a lot of repeated cacheable tag-related queries coming from
{{get}} helpers in themes that can be cached.
- Having a repository layer deal with a very specific type of query allows
us to add extra functionality, like caching, on top of the database query.
- This commit is wiring code that adds a default in-memory cache to
all db queries. Note, it lasts forever and has no "reset" listeners. The
production cache is meant to have a short time-to-live (TTL), which removes
the need to keep the cache always fresh.
- Kept the cache key short (see the sketch below). Without a "context" and any other non-model options the cache key can serve more variations of queries. For example, there are no member-specific or integration-specific query results, so having those in the cache key would only partition the cache and use up more memory.
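The key-shortening idea, as a sketch (the option names stripped here are assumptions):
```
function buildCacheKey(options) {
    // Strip context/session-specific options: public tag queries return
    // the same rows for every caller, so including them would only
    // partition the cache and waste memory.
    const {context, transacting, ...cacheable} = options;
    return JSON.stringify(cacheable);
}
```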
closes https://github.com/TryGhost/Team/issues/2547
Changed the configuration for testing to be a bit more strict by slowing down the number of requests it can handle, giving CI enough time to kick in the rate limiter. Before this, CI simply wasn't hitting the API fast enough to trigger the rate limiter.
Co-authored-by: Ronald Langeveld <hi@ronaldlangeveld.com>
refs https://github.com/TryGhost/Team/issues/2535
At the moment we only update the metadata of the Webmention source, so
that we can capture updated titles, excerpts, etc. when a post is updated.
refs https://github.com/TryGhost/Team/issues/2535
Moving this into a separate method allows us to set the metadata externally from
the Mention entity and keep all of the validation, without duplicating code (see the sketch below).
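A sketch of the extracted method; the `requireString`/`optionalString`/`optionalURL` helpers and field names are hypothetical:
```
class Mention {
    // ...

    setSourceMetadata(metadata) {
        // all validation lives here, so callers outside the entity
        // can't set unvalidated metadata
        this.sourceTitle = requireString(metadata.sourceTitle);
        this.sourceExcerpt = optionalString(metadata.sourceExcerpt);
        this.sourceFavicon = optionalURL(metadata.sourceFavicon);
    }
}
```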
refs https://ghost.slack.com/archives/C04MSE4MKJT/p1675948815531779
- running at a fixed hh:mm every day means a platform with a large number
of Ghost sites will get hammered with DB requests when they all start
up
- this reconfigures the cron to run at a random minute and second
between 0am and 5am, which gives a 6 hour window
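Roughly, as a sketch (the actual job wiring is omitted):
```
const randomInt = max => Math.floor(Math.random() * max);

// second, minute, and hour are fixed at startup but random per site,
// spreading the daily run across a 6 hour (0am-5am) window
const schedule = `${randomInt(60)} ${randomInt(60)} ${randomInt(6)} * * *`;
```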
refs https://github.com/TryGhost/Team/issues/2534
As we're using soft deletes for mentions we need to store the `deleted` column
as well as enforce a `'deleted:false'` filter on the bookshelf model.
We've also implemented the handling for deleting mentions, where we remove a
mention any time we receive an update from or to a page which no longer exists.
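On the model side this looks roughly like the following sketch (`enforcedFilters` follows the convention of Ghost's bookshelf filter plugin; exact names assumed):
```
const Mention = ghostBookshelf.Model.extend({
    tableName: 'mentions',

    defaults: {
        deleted: false
    },

    enforcedFilters() {
        // soft-deleted mentions are invisible to all API queries
        return 'deleted:false';
    }
});
```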
Co-authored-by: Steve Larson <9larsons@gmail.com>
refs https://github.com/TryGhost/Team/issues/2534
This is so that we can support soft deletes for Mentions.
We need to add the defaults to the model so that writes continue to work.
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
closes https://github.com/TryGhost/Team/issues/2526
- Mention emails can now be toggled inside staff users' profiles, if they
have the webmention flag enabled on their Ghost site.
- Removed the flag dedicated to webmention email notifications; this is
now handled by the `webmention` flag.
- Does not send an email notification if the `webmention` flag is not enabled.
- Updated tests.
---------
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
refs https://github.com/TryGhost/Team/issues/2482
This moves the processing of Mailgun events to the main thread, using a simple approach where we emit a start event on the worker thread (via the job manager) and listen for it on the main thread. This is needed because, for now, the job manager doesn't support scheduling periodic jobs on the main thread (not offloaded).
Apart from that, the email processor now uses the email event storage directly instead of emitting events (it still emits events for now). This makes sure we await the processing of each event before continuing with the next one.
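The mechanism, as a bare worker_threads sketch (Ghost routes this through its job manager; file paths and the `processMailgunEvents` function are illustrative):
```
// jobs/email-analytics.js (worker thread)
const {parentPort} = require('worker_threads');
parentPort.postMessage({event: 'started'}); // tell the main thread to take over

// main thread
const {Worker} = require('worker_threads');
const worker = new Worker('./jobs/email-analytics.js');
worker.on('message', (message) => {
    if (message && message.event === 'started') {
        processMailgunEvents(); // runs on the main thread, not offloaded
    }
});
```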
refs https://github.com/TryGhost/Ghost/issues/16125
The previous fix for incorrect recipient details being shown when
re-sending a failed email introduced another bug that prevented the
"Match post visibility" default recipients setting from working.
- the server always sets `post.emailSegment` to `'all'` for new posts so
the publish flow recipient filter logic that checked for
`post.emailSegment` being present always defaulted to `'all'` rather
than falling back to the selected default recipients setting
- when a post has been published but the email failed it will have its
`newsletter` value set so we can use that as a check for using the
`post.emailSegment` value in place of the default recipients setting
refs https://github.com/TryGhost/Team/issues/2526
- created a migration for a new boolean column on users that determines
whether the staff user gets an email when the publication receives a
new mention.
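Using Ghost's migration utils, the migration is roughly (the column name is an assumption):
```
const {createAddColumnMigration} = require('../../utils');

module.exports = createAddColumnMigration('users', 'mention_notifications', {
    type: 'boolean',
    nullable: false,
    defaultTo: true // staff users are opted in by default
});
```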
closes https://github.com/TryGhost/Team/issues/2419
- adds a rate limiter implementation to the mentions receiving
endpoint.
- The current configuration is `{"minWait": 10, "maxWait": 100, "lifetime": 1000, "freeRetries": 100}`, which is still very open and almost unrestricted.
- It currently uses database storage to track the limits, but this can relatively easily be swapped out for something else, e.g. Redis, should we find this endpoint getting hit too often or maliciously.
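Those options follow the express-brute shape, so the wiring is roughly as below (the store, `app`, and route names are illustrative; Ghost uses a database-backed store):
```
const ExpressBrute = require('express-brute');

const store = new ExpressBrute.MemoryStore(); // swap for a db- or Redis-backed store
const webmentionLimiter = new ExpressBrute(store, {
    freeRetries: 100,
    minWait: 10,
    maxWait: 100,
    lifetime: 1000
});

app.post('/webmentions/receive', webmentionLimiter.prevent, receiveController);
```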
no issue
Free and premium newsletters were the other way around in the demo-data. This was a good opportunity to stop the email table importer from relying on the newsletter name, and use the order alone.
refs https://www.notion.so/ghost/Marketing-Milestone-email-campaigns-1d2c9dee3cfa4029863edb16092ad5c4
This adds a milestone entity and in-memory repository in a new
`milestone-emails` package. It also adds an initial definition of
milestones and their types, which is held in the default config to avoid
DB changes when, e.g., values change.
This should get everything in place to begin the service
implementation.
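A sketch of the entity and repository shape (field and method names assumed):
```
class Milestone {
    constructor({type, value, createdAt = new Date()}) {
        this.type = type;           // e.g. 'arr' or 'members'
        this.value = value;         // the threshold reached, e.g. 1000
        this.createdAt = createdAt;
    }
}

class InMemoryMilestoneRepository {
    #milestones = [];

    async save(milestone) {
        this.#milestones.push(milestone);
    }

    async findLatestByType(type) {
        return this.#milestones.filter(m => m.type === type).pop();
    }
}
```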
fixes https://github.com/TryGhost/Team/issues/2522
When sending an email for multiple batches at the same time, we now
reuse the same email body for each batch in the same segment. This
reduces the number of database queries and makes the sending more
reliable in case of database failures.
The cache is short lived. After sending the email it is automatically
garbage collected.
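The idea, sketched with assumed names; the real cache lives only for the duration of a single send:
```
class EmailBodyCache {
    #bodies = new Map();

    get(segment, render) {
        if (!this.#bodies.has(segment)) {
            // store the promise so concurrent batches of the same
            // segment share a single render/database fetch
            this.#bodies.set(segment, render(segment));
        }
        return this.#bodies.get(segment);
    }
}

// one instance per send; garbage collected when the send finishes
const cache = new EmailBodyCache();
await Promise.all(batches.map(async (batch) => {
    const body = await cache.get(batch.segment, renderEmailBody);
    await sendBatch(batch, body);
}));
```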