- we don't want to include the actual date here because it'll change
- this adds a regex to match the date and replace it with a standard
"date" string, which won't change
no issue
Some things break in some email clients with this new setting. Disabled it for now and moved the required css style to hide the member name row to @media all.
refs https://github.com/TryGhost/Team/issues/2736
If the name is not known for a member, we'll hide the name row in the subscription details in an email. This method is supported in most email clients and requires support for `<style>` in `<head>`.
refs
https://www.notion.so/ghost/Marketing-Milestone-email-campaigns-1d2c9dee3cfa4029863edb16092ad5c4?pvs=4
- Added an email template for milestones, using a configuration file
for different member milestone values, as we're sending different
content for each one
- Implement sending the email to users who have
`milestone-notifications` enabled, currently still behind a flag
Co-authored-by: Peter Zimon <peter.zimon@gmail.com>
refs https://github.com/TryGhost/Team/issues/2754
Previously, we didn't have any fallback for storing the source site title, so it could have been stored as empty if missing. This change ensures the source site title falls back to the site host when not present.
refs TryGhost/Team#2667
- Added a notification check before verifying that the user successfully unsubscribed from all newsletters, which helps make the test more stable
- without this, Node will try and resolve the domain name but local DNS
resolvers can take a while to timeout, which causes the tests to timeout
- `nodemailer-direct-transport` calls `dns.resolveMx`, so if we stub that
function and return an empty array, we can avoid any real DNS lookups
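A minimal sketch of such a stub (assuming sinon, which the test suite already uses elsewhere):

```js
const sinon = require('sinon');
const dns = require('dns');

// nodemailer-direct-transport calls dns.resolveMx(domain, callback);
// yielding an empty array short-circuits any real DNS lookup and its slow timeout.
const resolveMxStub = sinon.stub(dns, 'resolveMx').yields(null, []);

// later, e.g. in afterEach:
// resolveMxStub.restore();
```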
refs https://github.com/TryGhost/Toolbox/issues/523
- The reverted fix did not take into account that the "original path" of the
files would be truncated. This path has to be the full path relative to the root
of the zip so it can later be used during the importer URL substitution logic.
- This reverts commit 831a76505c.
Because there is no guarantee about a daily job running exactly once a
day, we need to store the last time that the email was sent, so that we
can refrain from sending one if it's been less than a day since the
last.
A setting has been used for this as we don't currently have a pattern
for it; we might want to consider moving this to some kind of cache-based
solution in the future. This has been added as a core setting so that
we don't expose it via the API.
The setting is stored as a number so we can store the value as a unix timestamp.
---------
Co-authored-by: Rishabh <zrishabhgarg@gmail.com>
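A rough sketch of the guard this enables; the setting key and accessor below are assumptions:

```js
const ONE_DAY_MS = 24 * 60 * 60 * 1000;

// settingsCache.get('last_email_sent_at') is a hypothetical key storing a unix timestamp (ms)
function hasBeenADaySinceLastEmail(settingsCache) {
    const lastSentAt = Number(settingsCache.get('last_email_sent_at') || 0);
    return Date.now() - lastSentAt >= ONE_DAY_MS;
}
```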
fixes https://github.com/TryGhost/Team/issues/2724
This change also includes new snapshots for email sending (similar for email previews, but this time for the real emails to make sure we catch changes).
fixes https://github.com/TryGhost/Team/issues/2705
- Added showPostTitleSection to newsletter model in admin
- Wired up UI to admin model so it saves to the database
- Implemented showPostTitleSection in newsletter preview and added some
minor temporary css styling
- Implemented showPostTitleSection in newsletter template in backend,
and added some extra CSS styling to fix spacing
no issue
When using `getLazyRelation` on an optional relation that is not set, it
will return a newly created model instead of a model from the database.
- Adds a new require option to `getLazyRelation`, that throws an error
if the relation is not set (off by default to match existing use cases)
- This caused a bug (not visible because we always pass a newsletter id)
in email previews: when the newsletter id was not explicitly set,
it would use `newsletter = (await post.getLazyRelation('newsletter')) ??
(await this.models.Newsletter.getDefaultNewsletter());`, which always
returned the first one, and could return a newly instantiated newsletter
with all properties set to undefined.
- Some page snapshots are altered by this, because the usage of
`getLazyRelation` on a post no longer sets the email relation to some
new model.
refs
https://www.notion.so/ghost/Marketing-Milestone-email-campaigns-1d2c9dee3cfa4029863edb16092ad5c4?pvs=4
- When milestones are activated, we would send out emails to users
that are way above the achieved milestone, as we didn't record
milestones before
- The plan is to implement a 0 milestone, not send an email for
achieving it, and add all achieved milestones in the first run
until a first milestone is stored in the DB, then increment from there.
- This change takes care of two cases:
1. Milestones get enabled and run initially. We don't want to send
emails unless there's already at least one milestone achieved. For that
we add a 0 milestone helper and add an `initial` reason to the meta
object for the milestone event, so we can choose not to ping Slack and
also disable email sending for all milestones achieved in this initial
run.
2. All achieved milestones will be stored in the DB, even when that
means we skip some. This introduces the `skipped` reason which also
doesn't send emails for the skipped milestones, but will do for
correctly achieved milestones (always the highest one).
- Added handling for slack notifications to not attempt sending when
reason is `skipped` or `initial`
no issue
The Stripe Mocker mocks the Stripe API in memory, to make it much easier
to test subscription flows. Currently it is more of a POC to see if it
works well. It probably needs a bit more work to support more scenarios.
- Added new tests for the subscription stats endpoint for 3D secure +
free trial flows using the new Stripe Mocker
- Updated members admin api tests to use Stripe Mocker (+ added new test
for deleting members with Stripe cancellation)
- Some tests called mockStripe at the beginning, but that method did
nothing apart from disabling the network (which is the default now), and then
they mocked Stripe inside the test file... so I've removed those calls
because they conflict with the new mocker that is enabled when calling
mockStripe. We'll need to port those tests over later.
- this cleans up all imports or variables that aren't currently being used
- this really helps keep the tests clean by only allowing what is needed
- I've left `should` as an exemption for now because we need to clean up
how it is used
- this is generally an anti-pattern in tests and leads to flaky
behaviour when tests are run on different machines/loads
- this is currently unused so it is an easy removal
fixes https://github.com/TryGhost/Team/issues/2560
When an email fails, and you reschedule the post, the error dialog was
shown (from the previous try). The retry button on that page allowed you
to retry sending the email immediately, which could be very confusing.
- The email error dialog is no longer shown for scheduled emails
- The email status is no longer polled for scheduled emails
- Retrying an email is not possible via the API if the post status is
not published or sent
- Added some extra snapshot tests
- When retrying an email, we immediately update the email status to
'pending' to have a better API response (instead of still returning
failed).
- Disabled email sending retrying in development (otherwise very hard to
test failed emails if it takes 10 mins before it gives up automatic
retrying)
closes https://github.com/TryGhost/Team/issues/2531
This commit fixes the issue where non-canonical URLs are included in the
XML sitemap, leading to poor SEO for our users' sites. The solution
implemented is to exclude any page or post that specifies a canonical
URL in its metadata from the sitemap.
To achieve this, a condition has been added to the 'addUrl' method,
which checks for the existence of a canonical URL in the metadata of the
resource being added to the sitemap. If a canonical URL is present, the
resource is excluded from the sitemap.
With this fix, our users' sites will have better SEO and improved search
engine visibility.
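Conceptually, the check looks something like the sketch below (the resource shape is assumed, not Ghost's exact sitemap internals):

```js
// Returns false for any resource that declares a canonical URL, so it is
// left out of the sitemap instead of competing with the canonical page.
function shouldIncludeInSitemap(resource) {
    const canonicalUrl = resource && resource.meta && resource.meta.canonical_url;
    return !canonicalUrl;
}
```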
fixes https://github.com/TryGhost/Team/issues/2666
- Somehow occurrences of `&map_` got replaced with `↦`
- Disables escaping &, ', " and other HTML characters when not needed
(escaping is already handled by mobiledoc/lexical)
- Bumps unit test coverage of link replacer to 100%
no issue
- Nock doesn't support multiple calls to enableNetConnect -> only the last one counts. This fixes that issue.
- Some tests interacted directly with nock instead of using the mockManager to restore everything.
https://github.com/TryGhost/Toolbox/issues/523
- During the import process of content files, the files from the root directory were also copied over. This caused chaos in the root of the content folder with files that are only needed for the data import. For example, the CSV files needed for the Revue import were also copied over by the "file importer" even though they do not belong to any content.
- Any content import files - images, media, files - should be in corresponding folders in the imported zip file. The root files in the base zip directory are for data-related imports
fixes https://github.com/TryGhost/Team/issues/2611
The old email flow is no longer used since we introduced the email stability flow. This commit removes the related code and tests. The general test coverage decreased a bit as a result, because the old email flow probably had a high test coverage. The new flow is in separate packages, so it couldn't contribute to a higher test coverage (but it does have 100% unit test coverage).
fixes https://github.com/TryGhost/Team/issues/2683
When sending a newsletter with a replacement that has a fallback, the
replacement only happens in the HTML version of the newsletter. The
plaintext version isn't replaced.
This commit fixes the issue and adds some tests to make sure it doesn't
happen again.
The cause of the issue was that we used the original matched regex text
for the replacement. But that was calculated on the HTML version, so double
quotes were encoded. This change updates the generated 'token' regex to
match both a double quote and the escaped double quote.
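A sketch of the idea (not the actual implementation; the escaping order and helper name are assumptions):

```js
// Build a regex for a replacement token that matches whether the double quotes
// are raw (plaintext version) or HTML-escaped as &quot; (HTML version).
function buildTokenRegex(token) {
    const escaped = token
        .replace(/[.*+?^${}()|[\]\\]/g, '\\$&')  // escape regex metacharacters
        .replace(/"|&quot;/g, '(?:"|&quot;)');   // accept either quote form
    return new RegExp(escaped, 'g');
}
```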
refs https://github.com/TryGhost/Team/issues/2667
Some tests still accessed the internet. Now network access is disabled
by default. This change also introduces two helper methods related to
networking (mocking Slack and Mailgun).
This fixes two unreliable tests:
- Staff service was accessing a Slack test API -> timeout possible
- MentionSendingService was trying to send webmentions for every post
publish/change -> possible timeouts and job issues
refs https://github.com/TryGhost/Toolbox/issues/523
- The test is useful for future iterations of the response format and as a quick reference on which parameters the media inlining endpoint accepts.
refs: https://github.com/TryGhost/Toolbox/issues/389
Instead of logging errors, this will warn when adding a duplicate URL in the test environment.
At the moment, this is happening a lot in the test suite. While we also need to fix the root cause of this so we're not erroring in the product, it's a massive amount of spam in the logs when running the test suite which could prevent us from finding other errors which are causing issues.
refs: https://github.com/TryGhost/Toolbox/issues/389
This removes many error logs when the end-to-end test suite is run with the log-level set to error. Many errors are intentional, so the resolution is typically to stub the error log function and assert that it would have been called.
refs: https://github.com/TryGhost/Toolbox/issues/389
The newsletter fixtures no longer error when accessing the test image, and tests which intentionally error now stub the logging call
refs https://github.com/TryGhost/Toolbox/issues/523
- When migrating or importing ZIP files into Ghost there's often a need to include document files.
- When document files are present in the imported zip file they are now copied across and processed along with the rest of import files: json, images, csvs, etc.
- The importer also searches for use of the document files in the imported "posts" substituting the links with local ones
- The document files importer recognizes document files inside of "files" or "content/files" folders present in the zip.
- The supported file extensions are the same as for the file upload widget:
".pdf",".json",".jsonld",".odp",".ods",".odt",".ppt",".pptx",".rtf",".txt",".xls",".xlsx",".xml"
with the following content-types:
"application/pdf", "application/json", "application/ld+json", "application/vnd.oasis.opendocument.presentation", "application/vnd.oasis.opendocument.spreadsheet", "application/vnd.oasis.opendocument.text", "application/vnd.ms-powerpoint", "application/vnd.openxmlformats-officedocument.presentationml.presentation", "application/rtf", "text/plain", "application/vnd.ms-excel", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", "application/xml", "application/atom+xml"
refs https://github.com/TryGhost/Toolbox/issues/523
refs c2534e3c86/packages/mg-assetscraper/lib/AssetScraper.js (L14-L16)
refs https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types
- The importer needs to process and recognize document files like PDFs, presentations etc. to be able to import them into the site's file storage.
- The handler allows a new root directory "files" to place imported documents
- The handler adds validation and processing for the following file extensions:
".pdf",
".json",
".jsonld",
".odp",
".ods",
".odt",
".ppt",
".pptx",
".rtf",
".txt",
".xls",
".xlsx",
".xml"
- With the following content types:
"application/pdf",
"application/json",
"application/ld+json",
"application/vnd.oasis.opendocument.presentation",
"application/vnd.oasis.opendocument.spreadsheet",
"application/vnd.oasis.opendocument.text",
"application/vnd.ms-powerpoint",
"application/vnd.openxmlformats-officedocument.presentationml.presentation",
"application/rtf",
"text/plain",
"application/vnd.ms-excel",
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"application/xml",
"application/atom+xml"
refs https://github.com/TryGhost/Toolbox/issues/523
- When migrating or importing ZIP files into Ghost there's often a need to include more than just post content and images.
- When media files are present in the imported zip file they are now copied across and processed along with the rest of import files: json, images, csvs, etc.
- The importer also searches for use of the media files in the imported "posts" substituting the links with local ones
- The media importer recognizes media files inside of "media" or "content/media" folders present in the zip. The supported media file extensions are the same as for the media upload widget:
".mp4", ".webm", ".ogv", ".mp3", ".wav", ".ogg", ".m4a"
with the following content-types:
"video/mp4", "video/webm", "video/ogg", "audio/mpeg", "audio/vnd.wav", "audio/wave", "audio/wav", "audio/x-wav", "audio/ogg", "audio/mp4", "audio/x-m4a"
refs https://github.com/TryGhost/Toolbox/issues/523
- This is ground work before introducing a "media" content type importer
- Previous "image" file name was not describing well what the importer was capable of doing
refs https://github.com/TryGhost/Toolbox/issues/523
- We need to be able to use different storage mechanisms when importing different types of content
- Having the storage passed in using constructor DI allows a more flexible storage mechanism in the Images importer (soon to become a generic file importer)
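A minimal sketch of the constructor-DI pattern described (class and option names are illustrative, not the actual importer API):

```js
class ContentFileImporter {
    constructor({store}) {
        // The storage adapter is injected, so the same importer can be reused
        // for images, media and generic files with different storage backends.
        this.store = store;
    }

    async save(file, targetDir) {
        return this.store.save(file, targetDir);
    }
}

// e.g. new ContentFileImporter({store: imagesStorage}) or new ContentFileImporter({store: mediaStorage})
```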
refs https://github.com/TryGhost/Toolbox/issues/523
- The class syntax would allow swapping out the storage mechanism in the importer making it universal to use with other file types like media or generic files.
refs https://github.com/TryGhost/Toolbox/issues/523
- The "importer/index.test.js" test suite is testing more than it should. ImageHandler test suite section is one of the examples of test cases that should live in a separate file.
- Having these tests in different files makes it easier to reason about coverage and extract to it's own packages.
refs https://github.com/TryGhost/Toolbox/issues/523
- When a zip file is imported into Ghost we need to recognize and process media files with the following extensions:
".mp4",".webm", ".ogv", ".mp3", ".wav", ".ogg", ".m4a"
- The media files can come from a "media" or "content/media" folder inside of zip file
closes https://github.com/TryGhost/Ghost/issues/16332
Passing `SafeString` input to the `asset` helper was resulting in an
exception being thrown. This meant that we couldn't combine the `asset`
helper with other helpers which produce a `SafeString`, e.g. the `concat`
helper for string concatenation.
fixes https://github.com/TryGhost/Ghost/issues/16301
Previously, audio/x-m4a was allowed but not audio/mp4. This meant
uploads of m4a files failed in some cases e.g. Firefox on Windows.
refs: https://github.com/TryGhost/Toolbox/issues/389
Calling validate always uses the cache system, so this commit makes sure that the cache system is always initialised correctly by the tests.
no issue
- Reduced the number of different properties by not populating `currentARR` and `currentMembers` fields, but using a `currentValue` instead.
- The type of milestone can still be determined by its `type` property, so we actually don't need two different props here
no issue
- Switches to using the newly added config values throughout the services
- Updated the `shouldSendEmail` fn to check if the actual value is too far from the achieved milestone, as determined by the percentage setting (e.g. 998 members should not accidentally receive an email for achieving 100 members)
no refs
- spam prevention test was causing subsequent tests to fail randomly
- moving it to the end ensures (for now) we don't interrupt other tests
- seems to be an issue with awaiting the job service, which runs jobs concurrently
refs https://github.com/TryGhost/Toolbox/issues/522
- Having a simpler method signature makes it easier to use in different contexts - needed for changes in the public resource repository
- TLDR of the changes - reduced parameter 'frame.options' -> 'options'
fixes https://github.com/TryGhost/Team/issues/2562
New event fetching loops:
- Reworked the analytics fetching algorithm. Instead of starting again
where we stopped during the last fetch minus 30 minutes, we now just
continue where we stopped. But with ms precision (because no longer
database dependent after the first fetch), and we stop at NOW - 1 minute to
reduce the chance of missing events.
- Apart from that, a fetching loop for missing events is introduced. This fetches
events that are older than 30 minutes, and processes all events a
second time to make sure we didn't skip any because of storage delays in
the Mailgun API.
- A new scheduled fetching loop, that allows us to schedule between a
given start/end date (currently only persisted in memory, so stops after
a reboot)
UI and endpoint changes:
- New UI to show the state of the analytics 'loops'
- New endpoint to request the analytics loop status
- New endpoint to schedule analytics
- New endpoint to cancel scheduled analytics
- Some number formatting improvements, and introduction of 'opened'
count in debug screen
- Live reload of data in the debug screen
Other changes:
- This also improves the support for maxEvents. We can now stop a
fetching loop after x events without worrying about lost events. This is
used to reduce the fetched events in the missing and scheduled event
loop (e.g. when the main one is fetching lots of events, we skip the
other loops).
- Prevents fetching the same events over and over again if no new events
come in (because we always started at the same begin timestamp). The
code increases the begin timestamp with 1 second if it is safe to do so,
to prevent the API from returning the same events over and over again.
- Some optimisations in handling the processing results (fewer merges to
reduce CPU usage in cases where we have lots of events).
Testing:
- You can test with lots of events using the new mailgun mocking server
(Toolbox repo `scripts/mailgun-mock-server`). This can also simulate
events that are only returned after x minutes because of storage delays.
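A rough sketch of the window calculation described above (names and structure are assumptions, not the actual service code):

```js
// Continue exactly where the previous run stopped (ms precision), optionally
// advancing 1s when it's safe to avoid refetching the same events, and stop
// one minute before now to reduce the chance of missing late-arriving events.
function getFetchWindow(lastEventTimestamp, {safeToAdvance = false} = {}) {
    const begin = new Date(lastEventTimestamp.getTime() + (safeToAdvance ? 1000 : 0));
    const end = new Date(Date.now() - 60 * 1000);
    return {begin, end};
}
```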
Refs TryGhost/Team#2459
- upgraded got from v9.6.0 to v11.8.6 to support following redirects (and
other fixes)
- got v12+ requires ESM, so we do not want to upgrade further at this
time
- required changes to a few libraries that use externalRequests
- mention discovery service tests updated to test for follow redirects
no issue
- Instead of running milestone service directly on boot, set a random
timeout of 0-4 days to run after boot
- Updated tests
- Service is still behind a beta flag
refs https://github.com/TryGhost/Team/issues/2550
By using cheerio to parse the HTML we can correctly look for elements
which use the target URL as the href attribute, rather than doing a
plaintext search. This is closer to what the spec says.
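A small sketch of the cheerio-based lookup (function and variable names assumed):

```js
const cheerio = require('cheerio');

// True when the source document contains an anchor whose href is exactly the
// target URL — an element-level check rather than a plaintext search.
function sourceLinksToTarget(sourceHtml, targetUrl) {
    const $ = cheerio.load(sourceHtml);
    return $(`a[href="${targetUrl}"]`).length > 0;
}
```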
[Added initial mentions-jobs
service](3656190114)
This is the result of running `cp -r jobs mentions-jobs` in the services
directory.
[Waited for mentions-jobs queue before
shutdown](2bb1a12a89)
This matches the functionality of the existing jobs service where we
will wait
for jobs to complete before closing the process.
[Used mentions-jobs service in the mentions
service](4e4f9fdd00)
This ensures that any delays in the mentions jobs queue do not affect
other parts of the application.
refs
https://www.notion.so/ghost/Marketing-Milestone-email-campaigns-1d2c9dee3cfa4029863edb16092ad5c4?pvs=4
- Added a `slack-notifications` repository which handles sending Slack
messages to a URL as defined in our Ghost(Pro) config (also includes a
global switch to disable the feature if needed) and listens to
`MilestoneCreatedEvents`.
- Added a `slack-notification` service which listens to the events on
boot.
- In order to have access to further information, such as the reason why
a milestone email hasn't been sent, or the current ARR or member value
as a comparison to the achieved milestone, I added a `meta` object to the
`MilestoneCreatedEvent` which is then accessible to the event
subscriber. This avoids further requests to the DB, as we need
this information in relation to the event that occurred.
---------
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
fixes https://github.com/TryGhost/Team/issues/2433
- Moved all outbound link tagging code to a separate OutboundLinkTagger
- Because a site can easily enable/disable this feature, we don't store
the ?refs in the HTML but add them on the fly for now in the Content
API.
closes https://github.com/TryGhost/Team/issues/2551
Rather than blindly passing all data through the API we explicitly include each
new property. This allows us to make changes to the core entities without
affecting the API. The verified property is being added now to give design the
ability to display these mentions differently.
We also needed to include the verified property in the return value of toJSON;
this was missed as part of the original entity changes
no issue
- The way we're going to implement milestones diverged from the original idea of handling email sending within the milestone-emails package, as we'll be sending events instead and will utilise the StaffService to listen to them and send the emails
- This renames the package as well as the service in core itself and all relevant tests
no issue
- For better testability with in-memory repository, refactor the
milestones service to preserve the API instance
- Fetching the information about Stripe live mode from the Stripe service
was causing difficulties when testing. As a workaround we switched to
reading the live mode keys and determining it that way.
---------
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
closes https://github.com/TryGhost/Team/issues/2558
- bumped `kg-lexical` packages so we're working with latest suite of default nodes and renderer
- added a `render()` method directly to our `lexicalLib` object
- allows us to pass through all of Ghost's config for image transforms etc in one place rather than every time we want to render something
no issue
- When we receive an email failure with an empty message, the saving of
the model would fail because of schema validation that requires strings
to be non-empty.
- This adds more logging to the email analytics service to help debug
future issues
- Performance improvement to storing delivered, opened and failed emails
by replacing COALESCE with WHERE X IS NULL (tested and should give a
decent performance boost locally).
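The COALESCE change amounts to something like the following knex sketch (table and column names are assumptions, not Ghost's exact schema):

```js
// Before: UPDATE ... SET delivered_at = COALESCE(delivered_at, ?) touched every matched row.
// After: only rows that don't have a delivered_at yet are updated.
async function storeDeliveredAt(knex, recipientIds, deliveredAt) {
    return knex('email_recipients')
        .whereIn('id', recipientIds)
        .whereNull('delivered_at')
        .update({delivered_at: deliveredAt});
}
```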
closes https://github.com/TryGhost/Team/issues/2552
We send a Webmention for the same URL twice, but change the contents
of the source document, and we check that the source metadata is
updated appropriately.
We should consider extending all of these tests to include featured
images and logos etc...
fixes https://github.com/TryGhost/Team/issues/2542
fixes https://github.com/TryGhost/Team/issues/2543
fixes https://github.com/TryGhost/Team/issues/2544
- Hides incomplete subscriptions
- Shows Past Due subscriptions
- Fixed UI issues with 3+ subscriptions
- Fixed missing complimentary subscription when one subscription was
incomplete/inactive
- Fixed sending a paid subscription started email for incomplete
subscriptions. This change also required us to actually send the email
when the incomplete subscription eventually becomes active. So the
introduction of a new `SubscriptionActivatedEvent` made sense/was
required (because sending a SubscriptionCreatedEvent again would cause
other issues).
closes https://github.com/TryGhost/Team/issues/2547
Changed the configuration for testing to be a bit more strict, by slowing down the number of requests it can handle to give CI enough time to kick in the rate limiter. Before this, CI simply wasn't hitting the API fast enough to trigger the rate limiter.
Co-authored-by: Ronald Langeveld <hi@ronaldlangeveld.com>
refs https://github.com/TryGhost/Team/issues/2534
This is so that we can support soft deletes for Mentions.
We need to add the defaults to the model so that writes continue to work.
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
closes https://github.com/TryGhost/Team/issues/2526
- Mention emails can now be toggled inside staff users' profiles, if they
have the webmention flag enabled on their Ghost site.
- Removed the flag dedicated to webmention email notifications; this is
now handled by the `webmention` flag.
- Does not send an email notification if the `webmention` flag is not enabled.
- Updated tests.
---------
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
refs https://github.com/TryGhost/Team/issues/2526
- created a migration for a new boolean column in users that would
determine if the staff user gets an email when the publication receives a
new mention.
closes https://github.com/TryGhost/Team/issues/2419
- adds a rate limiter implementation to the mentions receiving
endpoint.
- Current configuration is `{"minWait": 10, "maxWait": 100, "lifetime": 1000, "freeRetries": 100}`, which is still very open and almost unrestricted.
- currently makes use of database storage to track the limits, but can be relatively easily swapped out for something else, e.g. Redis, should we find this endpoint getting hit too often and maliciously.
refs TryGhost/Team#2508
- added sending service e2e tests
- we should run this sending service as a job for better tests
- and so that Ghost finishes processing the job before shutdown
refs https://github.com/TryGhost/Team/issues/2503
This is in the MentionController atm as it's considered a presentation
concern. We might want to consider moving this into the MentionsAPI in
future so that we can simplify the controller and even remove it
completely in favour of putting the data-mapping in the endpoint definition.
refs https://github.com/TryGhost/Toolbox/issues/497
refs fb7532bf5d
- We downgraded the 'GS090-NO-PRICE-DATA-CURRENCY-CONTEXT' rule in gscan to non-fatal, meaning Ghost should not be throwing an error but instead render an empty value for {{price}} helper when price data is empty.
- For example, a legacy syntax like this: '{{price currency=@price.currency}}' should not cause a page render error but return an empty price string.
- The pattern of returning an empty string instead of crashing is used in other helpers like {{img_url}} and {{url}}
closes https://github.com/TryGhost/Team/issues/2420
- Added user roles and permissions for the mentions admin API.
- We only have a `browse` function for our current use case, accessible
by `administrator` and `admin integration`.
fixes https://github.com/TryGhost/Team/issues/481
This change fixes an issue when multiple images with the same name are
uploaded in parallel. The current system does not guarantee that the
original filename is stored under NAME+`_o`, because the uploads of the
original file and the resized file happen in parallel.
Solution:
- Wait for the storage of the resized image (= the image without the _o
suffix) before storing the original file.
- When that is stored, use the generated file name of the stored image
to generate the filename with the _o suffix. This way, it will always
match and we don't risk the two files having different number suffixes.
We'll also set the `targetDir` argument when saving the file, to avoid
storing the original file in a different directory (when uploading a
file around midnight both files could be stored in 2023/01 and 2023/02).
Some extra optimisations needed with this fix:
- Previously when uploading image.jpg, while it already exists, it would
store two files named e.g. `image-3.jpg` and `image_o-3.jpg`. Note the
weird positioning of `_o`. This probably caused bugs when uploading
files named `image-3.jpg`, which would store the original in
`image-3_o.jpg`, but this original would never be used by the
handle-image-sizes middleware (it would look for `image_o-3.jpg`). This
fix would solve this weird naming issue, and make it more consistent.
But we need to make sure our middlewares (including handle-image-sizes)
will be able to handle both file locations to remain compatible with the
old format. This isn't additional work, because it would fix the old bug
too.
- Prevent uploading files that end with `_o`, e.g. by automatically
stripping that suffix from uploaded files. To prevent collisions.
Advantage(s):
- We keep the original file name, which is better for SEO.
- No changes required to the storage adapters.
Downside(s):
- The storage of both files will no longer happen in parallel. But I
expect the performance implications to be minimal.
- Changes to the routing: normalize middleware is removed
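A sketch of the described ordering (the storage API shape here is assumed, not the actual adapter interface):

```js
const path = require('path');

async function saveImagePair(storage, resizedFile, originalFile, targetDir) {
    // 1. Store the resized image first; the adapter may rename it (image.jpg -> image-3.jpg).
    const storedPath = await storage.save(resizedFile, targetDir);

    // 2. Derive the matching _o name from whatever name the adapter actually used,
    //    so both files always share the same number suffix (image-3.jpg -> image-3_o.jpg).
    const {dir, name, ext} = path.parse(storedPath);
    const originalName = `${name}_o${ext}`;

    await storage.save({...originalFile, name: originalName}, targetDir);
    return {storedPath, originalPath: path.join(dir, originalName)};
}
```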
no issue
There are a couple of issues with resetting the Ghost instance between
E2E test files:
These issues came to the surface because of new tests written in
https://github.com/TryGhost/Ghost/pull/16117
**1. configUtils.restore does not work correctly**
`config.reset()` is a callback based method. On top of that, it doesn't
really work reliably (https://github.com/indexzero/nconf/issues/93)
What kind of happens is that you first call `config.reset`, but
immediately after you correctly reset the config using the `config.set`
calls. But since `config.reset` is async, that reset will
happen after all those sets, and the end result is that the config isn't reset
correctly.
This mainly caused issues in the new updated images tests, which were
updating the config `imageOptimization.contentImageSizes`, which is a
deeply nested config value. Maybe some references to objects are reused
in nconf that cause this issue?
Wrapping `config.reset()` in a promise does fix the issue.
**2. Adapters cache not reset between tests**
At the start of each test, we set `paths:contentPath` to a nice new
temporary directory. But if a previous test already requests a
localStorage adapter, that adapter would have been created and in the
constructor `paths:contentPath` would have been passed. That same
instance will be reused in the next test run. So it won't read the new
config again. To fix this, we need to reset the adapter instances
between E2E tests.
How was this visible? Test uploads were stored in the actual git
repository, and not in a temporary directory. When writing the new image
upload tests, this also resulted in unreliable test runs because some
image names were already taken (from previous test runs).
**3. Old E2E test Ghost server not stopped**
Sometimes we still need access to the frontend test server using
`getAgentsWithFrontend`. But that does start a new Ghost server which is
actually listening for HTTP traffic. This could result in a fatal error
in tests because the port is already in use. The issue is that old E2E
tests also start a HTTP server, but they don't stop the server. When you
used the old `startGhost` util, it would check if a server was already
running and stop it first. The new `getAgentsWithFrontend` now also has
the same functionality to fix that issue.
refs https://github.com/TryGhost/Team/issues/2419
We use a job queue to ensure that webmentions can be processed outside of
the request/response cycle, but still finish executing if the process is closed.
With this we're able to update the e2e tests to await the processing of the mention
rather than sleeping for arbitrary lengths of time, and we've reintroduced the tests
removed previously
- aa14207b69
- 48e9393159
fixes https://github.com/TryGhost/Team/issues/2484
The flow only sent the email to segments that were targeted in the email
content. But if a part of the email is only visible for `status:free`,
that doesn't mean we don't want to send the email to `status:-free`.
This has been corrected in the new email flow.
closes https://github.com/TryGhost/Team/issues/2429
- sends email notifications to staff users when their site receives a Webmention.
- currently behind a flag that can be toggled in the labs settings.
refs https://github.com/TryGhost/Team/issues/2476
When upgrading from a Complimentary subscription with an expiry, to a paid Subscription of the same Tier, the Member was eventually losing access to the Tier when the complimentary subscription expired, as the `expiry_at` on the mapping was not removed. This change fixes the code by setting expiry as null when a member upgrades their subscription to paid. This also adds 2 migrations to fix any side-effects on existing sites -
- Removed invalid tier expiry date for paid members
- Restored missing tier mapping for paid members
This test is failing because the `sleep` isn't long enough. Removing this test
until we've refactored to use the jobs service, at which point we can remove the
sleep and wait for the job to be complete.
We were incorrectly handling a "no resource found" return value from the
ResourceService, instead of an object with `null` values, we were expecting a
`null` value - so we were considering all URLs to be pointing toward a
resource.
refs https://github.com/TryGhost/Team/issues/2466
Now that we're checking for resources at the URL and rejecting if
there isn't one found, we want to make sure that we can handle pages
which are not a resource.
The idea here is to make a HEAD request to determine whether or not
the page exists. We don't need the full response so HEAD saves us some
bandwidth and we allow both 2xx and 3xx status codes because Ghost has
redirects to add missing trailing slashes, which may not be present in
the URL we're passed.
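Roughly, the check works like this sketch (assuming `got` for the HEAD request; exact option handling in the real implementation may differ):

```js
const got = require('got');

// A page "exists" when the HEAD request answers with a 2xx or 3xx status —
// 3xx is allowed because Ghost redirects to add missing trailing slashes.
async function pageExists(url) {
    try {
        const res = await got.head(url, {followRedirect: false, throwHttpErrors: false});
        return res.statusCode >= 200 && res.statusCode < 400;
    } catch (err) {
        return false;
    }
}
```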
refs https://github.com/TryGhost/Team/issues/2466
The existing implementation was a very basic check to get us to the
first milestone. By checking if the page points to a resource we can
know for sure the URL exists on the site.
refs https://github.com/TryGhost/Toolbox/issues/503
- There was an error thrown due to an empty "model._changed" field
- When attached or detached events (e.g. tag.attached) are sent through, their models do not contain any _changed properties. This was taken into account when checking for route related resource changes
refs https://github.com/TryGhost/Toolbox/issues/503
- The listener was not covered during the quick and dirty implementation. While in the area I did some cleanup of the sitemap manager test
- One of the problems I stumbled upon when adding a test was having multiple instances of SiteManager in the test, which in turn created multiple "subscribe" events and repeated handler executions. Fixed it by having just one site manager instance (a singleton), as that's the pattern used in the main codebase
refs https://github.com/TryGhost/Toolbox/issues/503
- The full URL regeneration process was happening even when only fields unrelated to URL generation were updated (e.g. a 'plaintext' change in a post does not affect the URL of the post). Stopping the "resource updated" event processing early circumvents full URL regeneration inside of DynamicRouting, which can be quite heavy depending on routing configuration
- The URLResourceUpdatedEvent is supposed to be emitted whenever there's an update to the resource already associated with the URL and no url-affecting fields were touched.
no issue
Tests stopped working because the Mailgun mocker stopped working since we moved to the new email flow.
This also fixes a unit test that needed to get updated.
fixes https://github.com/TryGhost/Team/issues/2432
Adds outbound_link_tagging setting (enabled by default and behind
feature flag). If the feature flag is enabled, and the setting is
disabled, we won't add ?ref to links in emails.
This includes new E2E tests for email click tracking, which were also
extended to check outbound link tagging (for both MEGA and the new email
stability flow).
Also fixes a test fixture for the comments_enabled setting.
fixes https://github.com/TryGhost/Team/issues/2461
- Ignores 'edited' links when there is only a one-second difference.
- Make sure we don't set updatedAt when linking a post to a redirect
This allows us to share the implementation with other parts of the codebase, the
specific usecase here being fetching the metadata from webmention sources, for
display in the mentions UI, which will be borrowing a lot of stuff from the
bookmark card.
refs acf0baa8c7
Due to the bump in express-test, we now handle string bodies 'properly'. So they now pass through all the Express middlewares. In the past this failing test did not really pass through the bodyParser.raw middleware,
so the content-type check on the `bodyParser.raw({type: 'application/json'})` middleware was not executed. Now it is, and the test fails because the content-type header was not set to application/json.
refs https://github.com/TryGhost/Team/issues/2400
- we've deemed it useful to start to return `Content-Version` for all
API requests, because it becomes useful to know which version of Ghost
a response has come from in logs
- this should also help us detect Admin<->Ghost API mismatches, which
was the cause of a bug recently (ref'd issue)
refs https://github.com/TryGhost/Toolbox/issues/499
- The mockManager's sentEmailCount is left here to avoid breaking many tests that already depend on this method. With future improvements to email snapshot tests this method should not be used. Instead, emailMockReceiver's own sentEmailCount method should be used directly.
refs https://github.com/TryGhost/Toolbox/issues/499
refs 6bcc47a0ad
- Using module directly caused issues with snapshots manager instance initialization (mocha hooks did not apply to a correct instance)
- See refed commit for more
refs https://github.com/TryGhost/Toolbox/issues/499
- Outgoing emails have been a weak point of Ghost's stability recently. The concept of an "emailMockReceiver", similarly to the "webhookMockReceiver", allows testing side-effects like outgoing emails.
- This is a first iteration which should lay groundwork for testing all outgoing emails in the future
- The change adds a new concept of an "email mock receiver" which is very similar to how the "webhook mock receiver" works. The email mock receiver exposes two methods to record and verify snapshots:
- matchHTMLSnapshot - records and verifies only the HTML content of the outgoing email
- matchMetadataSnapshot - records and verifies all the non-HTML properties sent along with an email's content, e.g.: to address, plaintext, subject, etc.
- What's missing is matching content based on dynamic content like dates, links with JWT tokens, etc.
We've wrapped both changes in a try/catch to make sure this has no
adverse effects. The endpoint currently doesn't exist - we're only
adding this to get an idea of how much traffic we'll expect to see.
Long term we'll want to read the endpoint from the webmention service.
This introduces the new suppressions feature which will automatically
unsubscribe members from newsletters when their email is added to the
suppression list in Mailgun, this is usually due to emails either
permanently bouncing to the address, or the member making a spam
complaint.
Both Members and Admins are able to see that the email has been added to
the list, and Members are able to request their email be removed from
the list via Portal.
Overall this feature should improve delivery rates of newsletters and
improve the rating of the domain you're sending from.
closes https://github.com/TryGhost/Team/issues/2338
If a site has the Free tier hidden from the Portal, and subsequently the Stripe connection is disconnected, this produces a dead-end state where no new members can sign up and the Free tier cannot be reactivated again in Portal settings as it's hidden. This change -
- enables free tier toggle to be always shown on site irrespective of Stripe connection
fixes https://github.com/TryGhost/Team/issues/2339
The email service is now fully covered by tests, and this commit also forces the test coverage to remain 100% after future changes.
closes https://github.com/TryGhost/Team/issues/2011
- Gives publishers the ability to filter members based on which offer they used (redeemed) when they subscribed for a paid membership.
- On the offers page, the redemption count number links to the members page with the filter already applied, making it easy to have insight into which members used the offer / coupon.
We're seeing behaviour from Mailgun where permanent failures with a
5xx error code are not being added to their internal suppression list,
which is resulting in the Ghost list becoming out of sync with
Mailgun.
Rather than adding emails to the suppression list when Mailgun does,
we're instead going to add emails _after_ Mailgun does, by waiting for
an error code which tells us the email is already on the suppression
list.
Those codes are 605 for previous bounces and 607 for previous spam complaints.
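In effect the check becomes something like the sketch below (the event shape follows Mailgun's permanent-failure events, but the exact property access is an assumption):

```js
const SUPPRESSED_CODES = [605, 607]; // 605: previously bounced, 607: previous spam complaint

function shouldAddToSuppressionList(event) {
    const code = event['delivery-status'] && event['delivery-status'].code;
    return event.severity === 'permanent' && SUPPRESSED_CODES.includes(code);
}
```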
fixes https://github.com/TryGhost/Team/issues/2398
There was an error when fetching the existing email recipient failure. It ended up matching all recipient failures. The result was that only one failure was stored in the database.
refs https://github.com/TryGhost/Team/issues/2371
- cleans up and adds comments for portal playwright tests
- updates data test attributes for portal trigger and popup selectors for consistency
- updates data attribute usage for offers
refs https://github.com/TryGhost/Team/issues/2371
- in case all tiers are archived before a new tier is created, the add tier section can be collapsed and will need to be opened first before going through the add tier flow
refs https://github.com/TryGhost/Team/issues/2393
- During boot and loading the active theme, we now cache the result of
the gscan validation. Cache configuration can happen in
`adapters.cache.gscan`
- We now also return non-fatal errors when activating or adding a theme.
- When the `themeErrorsNotification` feature flag is on, we fetch the
active theme (which includes the validation information) when loading
admin
- If the currently active theme has errors, we show an error
notification that can open the error modal
- Added a new endpoint: `/ghost/api/admin/themes/active/` that returns
the result of the last gscan validation of the active theme. If no cache
is available, it will run a new gscan validation.
- Added new permissions for the active action/endpoint (author, editor,
administrator)
refs https://github.com/TryGhost/Toolbox/issues/497
- During the gscan fatal-error downgrade to non-fatal, some of the deprecated helpers were a bit vague to debug, with no information on which exact "resource" was invalid
- Added the resource name to the log for clarity. Should make life easier when debugging potential get helper misuses
- the test was using incorrect test state that was copied over from the add-label test
- also adds a guard for empty newsletters in member filters, as in some cases they might not exist, as found by the test
When Mailgun fails to deliver an email to an address because the
address has already bounced before, it gives us a permanent fail event
with a 605 error code rather than a 5xx one. Because we want to
"backfill" our suppressions data with previously bounced email
addresses, we want to handle this specific error code.
We may update this logic in the future based on new information from
Mailgun with respect to their 6xx error codes and the
meanings/underlying cause of them.
This also moves the tests which check for whether or not emails are
suppressed into their own file so that we do not pollute the event
storage tests, and adds more test cases.
We also fix a leaky sinon stub which we were not resetting in the email
event storage tests
The email_recipient fixtures were using duplicate and mismatched email addresses
rather than having them correctly map to the Members, which is required for testing
email suppressions.
no issue
With the increased usage of DomainEvents, it gets harder to build
reliable tests without having to resort to timeouts. This utility method
allows us to wait for all events to be processed before continuing with
the test.
This change should speed up tests and make them more reliable.
It only adds extra code when running tests and shouldn't impact
production.
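Conceptually the utility works like this sketch (not the actual implementation):

```js
// Track in-flight event handler promises so tests can await them all
// instead of sleeping for an arbitrary amount of time.
const inFlight = new Set();

function trackEvent(promise) {
    inFlight.add(promise);
    promise.finally(() => inFlight.delete(promise)).catch(() => {});
    return promise;
}

async function allSettled() {
    // Loop, because handlers may dispatch further events while we wait.
    while (inFlight.size > 0) {
        await Promise.allSettled([...inFlight]);
    }
}
```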
closes https://github.com/TryGhost/Team/issues/2361
If a free trial tier existed on a site and it's set to 'Invite only' in membership settings, the free trial copy still showed on portal.
- removes free trial copy from portal if site is invite only
- adds playwright test to make sure free trial copy is not shown for invite only sites
There are currently two issues with the suppressions table:
- We have some incorrect rows
- We have missing UNIQUE constraints
We want to completely wipe the tables and start fresh, as well as make
sure that the UNIQUE constraints are added, so we drop the table
completely, and then re-add it, which should result in an empty
suppressions table with all expected constraints.
We've also renamed the `email_address` column to `email` to match our
`users` & `members` tables
fixes https://github.com/TryGhost/Team/issues/2366
refs https://ghost.slack.com/archives/C02G9E68C/p1670232405014209
Problem described in the issue.
In the old MEGA flow:
- The `email_verification_required` check is now repeated inside the job
In the new email service flow:
- The `email_verification_required` is now checked (didn't happen
before)
- When generating the email batch recipients, we only include members
that were created before the email was created. That way it is
impossible to avoid limit checks by inserting new members between
creating an email and sending an email.
- We don't need to repeat the check inside the job because of the above
changes
Improved handling of large imports:
- When checking `email_verification_required`, we now also check if the
import threshold is reached (a new method is introduced in
verificationTrigger specifically for this usage). If it is, we start
the verification process. This is required for long running imports
that only check the verification threshold at the very end.
- This change increases the concurrency of fastq to 3 (refs
https://ghost.slack.com/archives/C02G9E68C/p1670232405014209). So when
running a long import, it is now possible to send emails without having
to wait for the import. Above change makes sure it is not possible to
get around the verification limits.
Refactoring:
- Removed the need to use `updateVerificationTrigger` by making
thresholds getters instead of fixed variables.
- Improved awaiting of members import job in regression test
The MailgunEmailSuppression list was incorrectly adding emails
to the suppression list for permanent failure events which have
an error code outside of the 5xx range.
fixes https://github.com/TryGhost/Team/issues/1996
**Issue**
Our Magic links are valid for 24 hours. After first usage, the token
lives for a further 10 minutes, so that in the case of email servers or
clients that "visit" links, the token can still be used.
The implementation of the 10 minute window uses setTimeout, meaning if
the process is interrupted, the 10 minute window is ignored completely,
and the token will continue to live for the remainder of its 24 hour
validity period. To prevent that, the tokens are cleared on boot at the
moment.
**Solution**
To remove the boot clearing logic, we need to make sure the tokens are
only valid for 10 minutes after first use even during restarts.
This commit adds 3 new fields to the SingleUseToken model:
- updated_at: for storing the last time the token was changed/used. Not
really used atm.
- first_used_at: for storing the first time the token was used
- used_count: for storing the number of times the token has been used
Using these fields:
- A token can only be used 3 times
- A token is only valid for 10 minutes after first use, even if the
server restarts in between
- A token is only valid for 24 hours after creation (not changed)
We now also delete expired tokens in a separate job instead of on boot /
in a timeout.
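Putting the rules together, token validation becomes roughly the following (field names from this commit, thresholds per the text; the function shape is assumed):

```js
const MAX_USES = 3;
const USED_WINDOW_MS = 10 * 60 * 1000;   // 10 minutes after first use
const LIFETIME_MS = 24 * 60 * 60 * 1000; // 24 hours after creation

function isTokenValid(token, now = Date.now()) {
    if (token.used_count >= MAX_USES) {
        return false;
    }
    if (token.first_used_at && now - new Date(token.first_used_at).getTime() > USED_WINDOW_MS) {
        return false;
    }
    return now - new Date(token.created_at).getTime() <= LIFETIME_MS;
}
```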
refs https://github.com/TryGhost/Team/issues/2235
We found some cases which can cause a site to have member emails that have invalid characters like `member@example.com�`. This happened because the `validator` version used by Ghost was not able to catch some specific cases as invalid emails, allowing members to be created with them either via Admin or Importer or direct signup. Portal UI already blocked these emails as invalid. This change:
- updates `@tryghost/validator` to include the latest version of the email validator that catches these invalid cases
- doesn't allow member creation with an invalid email like above
- doesn't allow existing member emails to be edited to an invalid one
refs: https://www.getrevue.co/app/offboard
- Revue is stopping all paid subscriptions on 20th Dec, and shutting down on Jan 18th.
- This update allows Ghost to accept and handle the zip file Revue are providing as an export in Labs > Importer
- It will import posts (as best as we can with the data provided) and subscribers as free members
- At present it doesn't import paid subscribers, as we don't have that info, but you can disconnect Revue from your Stripe account to prevent all your subscriptions being cancelled & there's the option this can be fixed later
- There will be further updates to polish up this tooling - this is just a first pass to try to get something in people's hands
Co-authored-by: Paul Davis <PaulAdamDavis@users.noreply.github.com>
fixes https://github.com/TryGhost/Team/issues/2386
**Issue:**
- When trying to import a member that already exists, and has
'subscribed' set to 'true' in the CSV, the newsletters the member is
subscribed to are reset to the default newsletters.
- When editing a member with the API and setting `subscribed` to true,
the same happens.
**Cause:**
A faulty check for the `status` property of a newsletter.
Fixed and added a new E2E test.
- Now that the importer runs in a job, it seems sensible that we should
do this
- If posts are imported with HTML set, but not mobiledoc, we now convert html -> mobiledoc
- Note: This also converts the mobiledoc -> html so _may_ be lossy
- Without this, imports that only have HTML, not mobiledoc, would have
resulted in empty posts, so lossy > empty
refs https://ghost.slack.com/archives/C02G9E68C/p1670960248186789
This reverts a change that was made here:
f4fdb4fa6c (r93071549),
but it still moved the original code to a new location in the
LastSeenAtUpdater
It includes a new E2E test to make sure timezones are supported
correctly.
- By not using Bookshelf, we no longer fire webhook calls
- By not using the member repository, we don't fetch and update the
member model and the labels relation in a forUpdate transaction, which
caused deadlock issues on the labels/members_labels tables which were
hard to resolve. Until now I was unable to find the other conflicting
transaction that caused this deadlock. Moving to raw knex (instead of
Bookshelf) and only updating the last_updated_at column should remove
the deadlock issue.
This removed the test for the email service wrapper, since it started
failing for an unknown reason and the test didn't make much sense (was
added earlier only to bump test threshold).
- The get helper can sometimes take a long time, and in themes that have many get helpers, the request can take far too long to respond
- This adds a timeout to the get helper, so that the page render doesn't block forever
- This won't abort the request to the DB, but instead just means the page will render sooner, and without the get block
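The timeout amounts to racing the data lookup against a timer, along these lines (a sketch only; the timeout value and names are illustrative):

```js
// Resolve (rather than reject) with null after `ms`, so the page renders
// without the {{#get}} block instead of blocking forever. The underlying
// DB request is not aborted.
function withTimeout(promise, ms) {
    const timer = new Promise((resolve) => {
        setTimeout(() => resolve(null), ms);
    });
    return Promise.race([promise, timer]);
}
```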
refs https://github.com/TryGhost/Team/issues/2371
- test publishes a post with access for a single tier then checks the front-end with no member, member on wrong tier, and member on right tier