no issue
- In order to listen to `DomainEvents` for `MilestoneCreatedEvents`, we need to add a `DomainEvents` listener and handler to the Segment analytics service.
- For better readability, and to be more consistent with how code is currently written in Ghost, I refactored the service index file and split the two types of event listener into separate classes, which is much cleaner and easier to test.
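A minimal sketch of how such a listener could be wired up (import paths, the `subscribe` signature and the handler class are assumptions, not the actual service code):

```js
const DomainEvents = require('@tryghost/domain-events');
const {MilestoneCreatedEvent} = require('@tryghost/milestones');

// Illustrative handler class for the Segment analytics service
class MilestoneCreatedEventHandler {
    constructor({analytics}) {
        this.analytics = analytics;
    }

    subscribe() {
        // Forward every milestone event to Segment as a tracked event
        DomainEvents.subscribe(MilestoneCreatedEvent, (event) => {
            this.analytics.track({
                event: 'Milestone Created',
                properties: event.data // payload shape is illustrative
            });
        });
    }
}

module.exports = MilestoneCreatedEventHandler;
```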
refs https://github.com/TryGhost/Team/issues/3376
- When the Ghost instance is initialized it has to have a set of built-in collections. With these changes Ghost starts with a "featured posts" collection - available to be used right away.
refs https://github.com/TryGhost/Team/issues/3224
When a product has a slug that is a single letter, checking whether a user
has access to view a post associated with that product would cause a 500
error. The underlying cause of this issue is
https://github.com/TryGhost/NQL/issues/20. This fix circumvents the
issue by providing a value that the nql lexer will not error out on.
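Purely as an illustration of the workaround (the actual transform in the fix may differ), quoting the slug values keeps the lexer from choking on a single-character token:

```js
// Hypothetical helper: build the content-gating filter with quoted slugs so a
// one-letter product slug such as "a" is lexed as a string literal.
function buildProductFilter(productSlugs) {
    const values = productSlugs.map(slug => `'${slug}'`).join(',');
    return `product:[${values}]`;
}

buildProductFilter(['a', 'bronze']); // => "product:['a','bronze']"
```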
We want to cache access to Tiers, and it's easier to do that in the
TierRepository. So we update a heavy user of Tiers to use the Tier
service so it can take advantage of caching. The serializers are a big
offender for making calls to fetch Tiers.
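As a rough sketch of the caching idea (class, method and cache key names are assumptions, not the actual TierRepository API):

```js
class CachedTierRepository {
    constructor({repository, cache}) {
        this.repository = repository; // the underlying Tier repository
        this.cache = cache;           // any key/value cache adapter
    }

    async getAll(options) {
        const cached = this.cache.get('tiers');
        if (cached) {
            return cached;
        }
        const tiers = await this.repository.getAll(options);
        this.cache.set('tiers', tiers);
        return tiers;
    }
}
```

The serializers would then resolve Tiers through the Tiers service backed by a repository like this, instead of hitting the database on every request.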
As discussed with the product team we want to enforce kebab-case file names for
all files, with the exception of files which export a single class, in which
case they should be PascalCase and reflect the class which they export.
This will help find classes faster, and should push better naming for them too.
Some files and packages have been excluded from this linting, specifically when
a library or framework depends on the naming of a file for the functionality
e.g. Ember, knex-migrator, adapter-manager
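For illustration only (Ghost's actual lint setup may use a different plugin or a custom rule), an ESLint config in this spirit could look like:

```js
// .eslintrc.js sketch - enforce kebab-case, allow PascalCase for class files.
// Note: checking that a PascalCase file actually exports a matching class would
// still need a custom rule; this only covers the casing itself.
module.exports = {
    plugins: ['unicorn'],
    rules: {
        'unicorn/filename-case': ['error', {
            cases: {
                kebabCase: true,
                pascalCase: true
            }
        }]
    },
    overrides: [{
        // Libraries/frameworks that rely on file naming (e.g. Ember,
        // knex-migrator, adapter-manager) keep their own conventions
        files: ['**/migrations/**', '**/adapters/**'],
        rules: {
            'unicorn/filename-case': 'off'
        }
    }]
};
```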
refs TryGhost/Team#2691
- The bump adds the ability to take snapshots of an email's html/text content containing dynamic values. The breaking change is that a separate "matchPlaintextSnapshot" method has been extracted out of "matchMetadataSnapshot" to handle dynamic content in the "text" part of the sent email.
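A rough usage sketch in an E2E test; the way the mock receiver is obtained here is an assumption, only the matcher names come from the description above:

```js
const {mockManager} = require('./utils/e2e-framework'); // path illustrative

// Assumed helper for getting the email mock receiver
const emailMockReceiver = mockManager.mockMail();

// ...trigger an email send in the test...

emailMockReceiver.matchMetadataSnapshot();   // subject/to/from metadata
emailMockReceiver.matchPlaintextSnapshot();  // "text" part, now snapshotted separately
```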
refs https://github.com/TryGhost/Team/issues/2674
When going to /#/portal/account when not signed in, you are redirected
to the login page. But once signed in, you aren't redirected back to the
account page. This fixes the issue by adding an extra, optional
redirect parameter when requesting a magic token via email.
This new parameter allows overriding the default behaviour of using the
Referer HTTP header, which doesn't include the hash/fragment part of the
URL.
The referrer is already restricted to only allow redirects to the site,
not external URLs.
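A sketch of what a request with the new parameter could look like (endpoint and existing field names per the members API as far as I know; treat the details as illustrative):

```js
// Sign-in request with an explicit redirect back to the account view
async function requestMagicLink() {
    await fetch('/members/api/send-magic-link/', {
        method: 'POST',
        headers: {'Content-Type': 'application/json'},
        body: JSON.stringify({
            email: 'member@example.com',
            emailType: 'signin',
            // Optional override: without it the Referer header is used, which
            // drops the #/portal/account fragment
            redirect: 'https://demo.ghost.io/#/portal/account'
        })
    });
}
```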
- Moves Milestone emails from public beta to GA
refs https://www.notion.so/ghost/Marketing-Milestone-email-campaigns-1d2c9dee3cfa4029863edb16092ad5c4?pvs=4
- Added an email template for milestones, using a configuration file
for the different member milestone values, as we're sending different
content for each one
- Implement sending the email to users who have
`milestone-notifications` enabled, currently still behind a flag
Co-authored-by: Peter Zimon <peter.zimon@gmail.com>
- without this, Node will try to resolve the domain name, but local DNS
resolvers can take a while to time out, which causes the tests to time out
- `nodemailer-direct-transport` calls `dns.resolveMx`, so if we stub that
function and return an empty array, we can avoid any real DNS lookups
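A minimal sketch of that stub, assuming sinon is available in the test setup:

```js
const sinon = require('sinon');
const dns = require('dns');

// nodemailer-direct-transport calls dns.resolveMx(domain, callback); yielding an
// empty MX list avoids any real DNS traffic (and slow local resolver timeouts).
sinon.stub(dns, 'resolveMx').yields(null, []);
```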
- this cleans up all imports or variables that aren't currently being used
- this really helps keep the tests clean by only allowing what is needed
- I've left `should` as an exemption for now because we need to clean up
how it is used
no issue
- Nock doesn't support multiple calls to enableNetConnect -> only the last one counts. This fixes that issue.
- Some tests interacted directly with nock instead of using the mockManager to restore everything.
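To illustrate the nock behaviour being worked around (sketch only, not the fix itself):

```js
const nock = require('nock');

nock.disableNetConnect();
nock.enableNetConnect('127.0.0.1');
nock.enableNetConnect('localhost');
// Only the 'localhost' matcher is active now - each call replaces the previous
// one, so all allowed hosts have to be combined into a single matcher.
```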
fixes https://github.com/TryGhost/Team/issues/2611
The old email flow is no longer used since we introduced the email stability flow. This commit removes the related code and tests. The general test coverage decreased a bit as a result, because the old email flow probably had a high test coverage. The new flow is in separate packages, so it couldn't contribute to a higher test coverage (but it does have 100% unit test coverage).
refs https://github.com/TryGhost/Team/issues/2667
Some tests still accessed the internet. Now network access is disabled
by default. This change also introduces two helper methods related to
networking (mocking Slack and Mailgun).
This fixes two unreliable tests:
- Staff service was accessing a Slack test API -> timeout possible
- MentionSendingService was trying to send webmentions for every post
publish/change -> possible timeouts and job issues
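A hedged usage sketch of the new helpers (names taken from the description above; exact signatures are assumptions):

```js
const {mockManager} = require('./utils/e2e-framework'); // path illustrative

beforeEach(function () {
    mockManager.mockSlack();    // stubs outgoing Slack webhooks
    mockManager.mockMailgun();  // stubs the Mailgun client
});

afterEach(function () {
    mockManager.restore();      // restores all mocks between tests
});
```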
refs: https://github.com/TryGhost/Toolbox/issues/389
Instead of logging errors, this will warn when adding a duplicate URL in the test environment.
At the moment, this is happening a lot in the test suite. While we also need to fix the root cause of this so we're not erroring in the product, it's a massive amount of spam in the logs when running the test suite which could prevent us from finding other errors which are causing issues.
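Conceptually, the change looks something like this (the environment check and message wording are illustrative):

```js
const logging = require('@tryghost/logging');
const errors = require('@tryghost/errors');

function handleDuplicateUrl(url) {
    if (process.env.NODE_ENV && process.env.NODE_ENV.startsWith('test')) {
        // Duplicate URLs are common in the test suite; don't drown out real errors
        logging.warn(`URL already taken: ${url}`);
    } else {
        logging.error(new errors.InternalServerError({
            message: `URL already taken: ${url}`
        }));
    }
}
```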
refs: https://github.com/TryGhost/Toolbox/issues/389
Calling validate always uses the cache system, so this commit makes sure that the cache system is always initialised correctly by the tests.
no issue
- Reduced the number of different properties by not populating separate `currentARR` and `currentMembers` fields, but using a single `currentValue` instead.
- The type of milestone can still be determined by its `type` property, so we actually don't need two different props here
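The resulting milestone shape, roughly (values are made up):

```js
const milestone = {
    type: 'arr',          // 'arr' or 'members' - still tells the two kinds apart
    value: 1000,          // the milestone that was reached
    currentValue: 1052    // replaces the separate currentARR / currentMembers fields
};
```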
refs https://www.notion.so/ghost/Marketing-Milestone-email-campaigns-1d2c9dee3cfa4029863edb16092ad5c4?pvs=4
- Added a `slack-notifications` repository which handles sending Slack
messages to a URL as defined in our Ghost(Pro) config (also includes a
global switch to disable the feature if needed) and listens to
`MilestoneCreatedEvents`.
- Added a `slack-notification` service which listens to the events on
boot.
- In order to have access to further information, such as the reason why
a Milestone email hasn't been sent, or the current ARR or member value
for comparison with the achieved milestone, I added a `meta` object to the
`MilestoneCreatedEvent`, which is then accessible to the event
subscriber. This avoids further requests to the DB, as we need
this information in relation to the event that occurred.
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
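A hedged sketch of dispatching the event with the new `meta` object (the `create` signature and the field names inside `meta` are assumptions based on the description above):

```js
const DomainEvents = require('@tryghost/domain-events');
const {MilestoneCreatedEvent} = require('@tryghost/milestones');

DomainEvents.dispatch(MilestoneCreatedEvent.create({
    milestone: {type: 'members', value: 1000}, // the milestone that was reached
    meta: {
        currentValue: 1052, // current ARR / member count for comparison
        reason: 'import'    // e.g. why no email was sent for this milestone
    }
}));
```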
no issue
- The way we're going to implement milestones diverged from the original idea of handling email sending within the milestone-emails package, as we'll be sending events instead and will utilise the StaffService to listen to them and send the emails
- This renames the package as well as the service in core itself and all relevant tests
fixes https://github.com/TryGhost/Team/issues/2542
fixes https://github.com/TryGhost/Team/issues/2543
fixes https://github.com/TryGhost/Team/issues/2544
- Hides incomplete subscriptions
- Shows Past Due subscriptions
- Fixed UI issues with 3+ subscriptions
- Fixed missing complimentary subscription when one subscription was
incomplete/inactive
- Fixed sending a paid subscription started email for incomplete
subscriptions. This change also required us to actually send the email
when the incomplete subscription eventually becomes active. So the
introduction of a new `SubscriptionActivatedEvent` made sense/was
required (because sending a SubscriptionCreatedEvent again would cause
other issues).
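A sketch of where the new event would be dispatched (the surrounding code and the package location are illustrative; only the event name comes from this change):

```js
const DomainEvents = require('@tryghost/domain-events');
const {SubscriptionActivatedEvent} = require('@tryghost/member-events');

function handleSubscriptionUpdated({member, tier, subscription, previousStatus}) {
    // Only once a previously incomplete subscription becomes active do we emit
    // the event that triggers the "paid subscription started" email
    if (previousStatus === 'incomplete' && subscription.status === 'active') {
        DomainEvents.dispatch(SubscriptionActivatedEvent.create({
            memberId: member.id,
            subscriptionId: subscription.id,
            tierId: tier.id
        }, new Date()));
    }
}
```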
no issue
There are a couple of issues with resetting the Ghost instance between
E2E test files:
These issues came to the surface because of new tests written in
https://github.com/TryGhost/Ghost/pull/16117
**1. configUtils.restore does not work correctly**
`config.reset()` is a callback-based method. On top of that, it doesn't
really work reliably (https://github.com/indexzero/nconf/issues/93).
What happens is that you first call `config.reset`, and immediately
afterwards you correctly reset the config using `config.set` calls. But
since `config.reset` is async, that reset will happen after all those
sets, and the end result is that the config isn't reset correctly.
This mainly caused issues in the new updated images tests, which were
updating the config `imageOptimization.contentImageSizes`, which is a
deeply nested config value. Maybe some references to objects are reused
in nconf that cause this issue?
Wrapping `config.reset()` in a promise does fix the issue.
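The promise wrapper in question is essentially this (sketch; the config require path is illustrative):

```js
const config = require('../../core/shared/config'); // path illustrative

function resetConfig() {
    return new Promise((resolve, reject) => {
        config.reset((err) => {
            if (err) {
                return reject(err);
            }
            resolve();
        });
    });
}

async function restore() {
    await resetConfig();
    // ...only now re-apply the default values with config.set(...)
}
```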
**2. Adapters cache not reset between tests**
At the start of each test, we set `paths:contentPath` to a nice new
temporary directory. But if a previous test already requests a
localStorage adapter, that adapter would have been created and in the
constructor `paths:contentPath` would have been passed. That same
instance will be reused in the next test run. So it won't read the new
config again. To fix this, we need to reset the adapter instances
between E2E tests.
How was this visible? Test uploads were stored in the actual git
repository, and not in a temporary directory. When writing the new image
upload tests, this also resulted in unreliable test runs because some
image names were already taken (from previous test runs).
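The underlying pattern, reduced to a generic illustration (not Ghost's actual adapter-manager code):

```js
const instances = {};

function createAdapter(name, config) {
    // The adapter captures paths.contentPath at construction time
    return {name, contentPath: config.contentPath};
}

function getAdapter(name, config) {
    if (!instances[name]) {
        instances[name] = createAdapter(name, config);
    }
    // Later tests with a fresh contentPath still get this stale instance,
    // so the test utilities need a way to clear `instances` between E2E files
    return instances[name];
}
```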
**3. Old E2E test Ghost server not stopped**
Sometimes we still need access to the frontend test server using
`getAgentsWithFrontend`. But that does start a new Ghost server which is
actually listening for HTTP traffic. This could result in a fatal error
in tests because the port is already in use. The issue is that old E2E
tests also start an HTTP server, but they don't stop it. When you
used the old `startGhost` util, it would check if a server was already
running and stop it first. The new `getAgentsWithFrontend` now also has
the same functionality to fix that issue.
We were incorrectly handling a "no resource found" return value from the
ResourceService: instead of an object with `null` values, we were expecting a
`null` value - so we were considering all URLs to be pointing toward a
resource.
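In other words, the check needs to look inside the returned object (sketch; the accessor name and return shape are assumptions):

```js
async function isResourceBackedURL(resourceService, url) {
    const resource = await resourceService.getByURL(url); // e.g. {type: null, id: null}
    // Previously the code effectively checked `resource !== null`, which is always
    // true here; instead, check the values inside the returned object
    return Object.values(resource).some(value => value !== null);
}
```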
refs https://github.com/TryGhost/Team/issues/2466
Now that we're checking for resources at the URL and rejecting if
there isn't one found, we want to make sure that we can handle pages
which are not a resource.
The idea here is to make a HEAD request to determine whether or not
the page exists. We don't need the full response so HEAD saves us some
bandwidth and we allow both 2xx and 3xx status codes because Ghost has
redirects to add missing trailing slashes, which may not be present in
the URL we're passed.
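A minimal sketch of that check, assuming a fetch-style HTTP client:

```js
async function pageExists(url) {
    // HEAD: we only care whether the page is there, not about its body
    const response = await fetch(url, {method: 'HEAD'});
    // Accept 2xx and 3xx - Ghost redirects URLs that are missing a trailing slash
    return response.status >= 200 && response.status < 400;
}
```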
refs https://github.com/TryGhost/Team/issues/2466
The existing implementation was a very basic check to get us to the
first milestone. By checking if the page points to a resource we can
know for sure the URL exists on the site.
refs https://github.com/TryGhost/Toolbox/issues/503
- There was an error thrown due to an empty "model._changed" field
- When attached or detached events (e.g. tag.attached) are sent through, their models do not contain any _changed properties. This is now taken into account when checking for route-related resource changes
refs https://github.com/TryGhost/Toolbox/issues/503
- The full URL regeneration process was happening even when only fields unrelated to URL generation were updated (e.g. a 'plaintext' change in a post does not affect the URL of the post). Stopping the "resource updated" event processing early circumvents full URL regeneration inside of DynamicRouting, which can be quite heavy depending on the routing configuration
- The URLResourceUpdatedEvent is supposed to be emitted whenever there's an update to a resource already associated with a URL and no url-affecting fields were touched.
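The gist of the early return, with illustrative field names (the actual list of url-affecting fields lives in the URL service):

```js
const URL_AFFECTING_FIELDS = ['slug', 'published_at', 'status', 'author_id']; // illustrative

function shouldRegenerateUrls(model) {
    // Attached/detached events (e.g. tag.attached) have no _changed at all
    const changed = Object.keys(model._changed || {});
    return changed.some(field => URL_AFFECTING_FIELDS.includes(field));
}

// If nothing url-affecting changed, emit URLResourceUpdatedEvent instead of
// kicking off full URL regeneration in dynamic routing.
```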
This allows us to share the implementation with other parts of the codebase, the
specific use case here being fetching the metadata from webmention sources, for
display in the mentions UI, which will be borrowing a lot of stuff from the
bookmark card.
refs https://github.com/TryGhost/Team/issues/2400
- we've deemed it useful to start to return `Content-Version` for all
API requests, because it becomes useful to know which version of Ghost
a response has come from in logs
- this should also help us detect Admin<->Ghost API mismatches, which
was the cause of a bug recently (ref'd issue)
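The middleware amounts to something like this (sketch; the actual implementation may set the header elsewhere):

```js
const ghostVersion = require('@tryghost/version');

// e.g. "v5.33" - lets logs show which Ghost version produced a response
function contentVersion(req, res, next) {
    res.header('Content-Version', `v${ghostVersion.safe}`);
    next();
}

module.exports = contentVersion;
```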
refs https://github.com/TryGhost/Team/issues/2393
- During boot and loading the active theme, we now cache the result of
the gscan validation. Cache configuration can happen in
`adapters.cache.gscan`
- We now also return non-fatal errors when activating or adding a theme.
- When the `themeErrorsNotification` feature flag is on, we fetch the
active theme (which includes the validation information) when loading
admin
- If the currently active theme has errors, we show an error
notification that can open the error modal
- Added a new endpoint: `/ghost/api/admin/themes/active/` that returns
the result of the last gscan validation of the active theme. If no cache
is available, it will run a new gscan validation.
- Added new permissions for the active action/endpoint (author, editor,
administrator)
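A hedged usage sketch of the new endpoint (authentication and the exact response fields are assumptions):

```js
async function fetchActiveThemeValidation(adminApiToken) {
    const response = await fetch('https://example.com/ghost/api/admin/themes/active/', {
        headers: {Authorization: `Ghost ${adminApiToken}`}
    });
    const {themes: [activeTheme]} = await response.json();
    // Validation results from the cached (or freshly run) gscan check
    return {errors: activeTheme.errors, warnings: activeTheme.warnings};
}
```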
no issue
With the increased usage of DomainEvents, it gets harder to build
reliable tests without having to resort to timeouts. This utility method
allows us to wait for all events to be processed before continuing with
the test.
This change should speed up tests and make them more reliable.
It only adds extra code when running tests and shouldn't impact
production.
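Usage sketch in a test (the utility's name is my assumption of what the helper looks like):

```js
const DomainEvents = require('@tryghost/domain-events');

it('processes milestone events before asserting', async function () {
    // ...trigger something that dispatches DomainEvents here...

    // Wait until every dispatched event has been handled - no arbitrary timeouts
    await DomainEvents.allSettled(); // assumed name of the new utility

    // ...assertions on the side effects go here...
});
```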
fixes https://github.com/TryGhost/Team/issues/2366
refs https://ghost.slack.com/archives/C02G9E68C/p1670232405014209
Problem described in the issue.
In the old MEGA flow:
- The `email_verification_required` check is now repeated inside the job
In the new email service flow:
- The `email_verification_required` flag is now checked (this didn't
happen before)
- When generating the email batch recipients, we only include members
that were created before the email was created. That way it is
impossible to avoid limit checks by inserting new members between
creating an email and sending an email.
- We don't need to repeat the check inside the job because of the above
changes
Improved handling of large imports:
- When checking `email_verification_required`, we now also check if the
import threshold is reached (a new method is introduced in
verificationTrigger specifically for this usage). If it is, we start
the verification progress. This is required for long running imports
that only check the verification threshold at the very end.
- This change increases the concurrency of fastq to 3 (refs
https://ghost.slack.com/archives/C02G9E68C/p1670232405014209). So when
running a long import, it is now possible to send emails without having
to wait for the import. Above change makes sure it is not possible to
get around the verification limits.
Refactoring:
- Removed the need to use `updateVerificationTrigger` by making
thresholds getters instead of fixed variables.
- Improved awaiting of members import job in regression test
refs https://ghost.slack.com/archives/C02G9E68C/p1670960248186789
This reverts a change that was made here:
f4fdb4fa6c (r93071549),
but it still moved the original code to a new location in the
LastSeenAtUpdater
It includes a new E2E test to make sure timezones are supported
correctly.
- By not using Bookshelf, we no longer fire webhook calls
- By not using the member repository, we don't fetch and update the
member model and the labels relation in a forUpdate transaction, which
caused deadlock issues on the labels/members_labels tables which were
hard to resolve. Until now I was unable to find the other conflicting
transaction that caused this deadlock. Moving to raw knex (instead of
Bookshelf) and only updating the last_updated_at column should remove
the deadlock issue.
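A sketch of the raw knex update (the column name reflects the members schema to the best of my knowledge):

```js
const moment = require('moment');

async function updateLastSeenAt(knex, memberId) {
    // Raw knex: no Bookshelf model events, no webhooks, no labels relation
    // fetched inside a forUpdate transaction
    await knex('members')
        .where('id', memberId)
        .update({
            last_seen_at: moment.utc().format('YYYY-MM-DD HH:mm:ss')
        });
}
```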
This removes the test for the email service wrapper, since it started
failing for an unknown reason and the test didn't make much sense (it was
added earlier only to bump the test coverage threshold).