no issue
- The new Members API batched import is meant to be a substitute for the current
import, with improved performance while keeping the same behaviour. The current
import processes 1 record at a time using internal API calls and times
out consistently when a large number of members has to be imported (~10k
records without Stripe).
- The new import aims to improve performance and to process >50K
records without timing out, both with and without Stripe-connected
members
- The batched import can be conceptually divided into 3 stages, each with
its own way to improve performance (see the sketch after this list):
  1. labels - can stay at current performance, as the number of
labels is usually small, but could also be improved through batching
  2. member records + member<->labels relations - these can
be performed as batched inserts into the database
  3. Stripe connections - the most challenging bottleneck to solve, because
API requests are slow by nature and have to deal with the rate limits of
Stripe's API itself
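
A minimal sketch of what stage 2 (batched inserts) could look like using knex's `batchInsert`. The table names match the schema mentioned elsewhere in these notes, but the id generation, row shapes, and the 500-row chunk size are illustrative assumptions rather than the final implementation:

```js
const ObjectID = require('bson-objectid');

// Insert members and their member<->label relations in chunks of 500,
// i.e. one multi-row INSERT per chunk instead of one query per record
async function insertMembers(knex, members, labelIdsByName) {
    // pre-generate ids so relation rows can be built without
    // reading the inserted members back out of the database
    const memberRows = members.map(member => ({
        id: ObjectID().toHexString(),
        email: member.email,
        name: member.name
    }));

    const relationRows = memberRows.flatMap((row, i) =>
        (members[i].labels || []).map(label => ({
            id: ObjectID().toHexString(),
            member_id: row.id,
            label_id: labelIdsByName[label]
        }))
    );

    await knex.batchInsert('members', memberRows, 500);
    await knex.batchInsert('members_labels', relationRows, 500);
}
```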
- It's a heavy WIP, with lots of known pitfalls which are marked with
TODOs. These will be solved iteratively over time until the method can be
declared stable
- The new batched import method will be hidden behind the 'enableDeveloperExperiments' flag to
allow early testing
- A simple way to test behaviours without having to do complex interactions, e.g. to generate errors or slow requests
- Makes it easier to test the new shutdown behaviour, among other things
- there can now be quite a big delay between SIGINT/TERM being received and shutdown finishing
- add an extra log message to acknowledge the SIGINT/TERM, to facilitate debugging and for clarity
- stoppable is a dependency that handles closing connections properly, which server.close does not:
- active connections are allowed to complete what they are doing
- idle connections are closed
- no new connections are allowed
- we call stoppable in stop() instead of server.close so that idle connections don't hold the server open
- calling await stop() from shutdown then ensures that we have a consistent experience of stop
- all together, this allows Ghost to shut down gracefully when there are long-running requests
- @TODO: handle graceful shutdown of long-running processes
- @TODO: consider whether we need to send 503s whilst the server is shutting down
- changed method to logStopMessages, as we use start and stop, not start and shutdown
- changed logStopMessages to output the "proper" messages and use this method consistently - the closing connections message isn't really useful
- changed the uptime message to always be output, as I can't see a case where it isn't interesting/useful
- Connection handling is legacy code added in 1438278ce4
- Although we were tracking all connections in memory, we weren't actually closing any because stop isn't called
- This (and restart) were both added as part of the now long-deprecated system for using Ghost directly as an npm module
- If we want to close connections cleanly, we should use a tool to do this (see the stoppable sketch below)
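
A rough sketch of the wiring described above; the port, grace period, and log wording are illustrative, not Ghost's actual values:

```js
const http = require('http');
const stoppable = require('stoppable');

// second argument is the grace period (ms) after which any remaining
// active connections are destroyed
const server = stoppable(http.createServer((req, res) => {
    res.end('ok');
}), 30000);

server.listen(2368);

function stop() {
    // unlike server.close, stoppable's stop() refuses new connections,
    // closes idle keep-alive sockets immediately, and lets in-flight
    // requests finish (up to the grace period)
    return new Promise((resolve, reject) => {
        server.stop(err => (err ? reject(err) : resolve()));
    });
}

['SIGINT', 'SIGTERM'].forEach((signal) => {
    process.on(signal, async () => {
        // acknowledge the signal first - shutdown may take a while
        console.log(`Received ${signal}, beginning shutdown`);
        await stop();
        console.log('Server has stopped');
        process.exit(0);
    });
});
```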
- the announce functions exist for the purpose of communicating with Ghost CLI
- the functions were called announceServerStart and announceServerStopped, which implies they tell Ghost CLI when the server starts and stops
- however, the true intention / purpose of these functions is to:
- either tell Ghost CLI when Ghost has successfully booted (i.e. is ready to serve requests)
- or tell Ghost CLI when the server failed to boot, and report the error so that Ghost CLI can communicate it to the user
- therefore, I've refactored the old functions into one function to make it clearer that they do the same job, but with 2 different states (see the sketch below)
- also added some tests :D
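
A hedged sketch of the shape of that refactor - the message format and the transport helper are assumptions for illustration, not the exact Ghost code, which talks to Ghost CLI over a bootstrap socket:

```js
// hypothetical transport - stands in for the bootstrap socket connection
async function sendToGhostCLI(message) {
    console.log(JSON.stringify(message));
}

let announced = false;

// one function, two states: called with no argument on successful boot,
// or with the boot error on failure
async function announceServerReadiness(error = null) {
    if (announced) {
        return; // only announce once, whether we booted or failed
    }
    announced = true;

    const message = error
        ? {started: false, error: {message: error.message}}
        : {started: true};

    await sendToGhostCLI(message);
}
```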
refs 022a433e56
- noticed I missed adding debug info in one place (not that it's used, just for consistency)
- also made the SIGINT/TERM code slightly more readable
no-issue
- switch from `membersService.api.members.list` to using bookshelf's `Member.findPage()` with the `{paid: true}` filter, to avoid per-member (N+1) queries decorating members with subscriptions and a heavy post-fetch filter via `contentGating`
- add concurrency to the Mailgun API requests in the `bulk-email` service to reduce the overall time spent submitting API requests (see the sketch after this list)
- add debug statements with timing output for easier measurements
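
The concurrency change amounts to keeping a fixed number of batch requests in flight instead of awaiting each one serially. A sketch, assuming the service has already split recipients into Mailgun-sized batches (the `CONCURRENCY` value and function names are illustrative):

```js
const CONCURRENCY = 10;

// sendBatch is whatever function submits one batch to the Mailgun API
async function sendBatches(batches, sendBatch) {
    const results = [];
    for (let i = 0; i < batches.length; i += CONCURRENCY) {
        // up to CONCURRENCY API requests in flight at once
        const chunk = batches.slice(i, i + CONCURRENCY);
        results.push(...await Promise.all(chunk.map(sendBatch)));
    }
    return results;
}
```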
no issue
For large numbers of members we're unable to perform queries for free/paid members in a reasonable time without indexes. Bulk delete is also very slow when looping through bookshelf models and having bookshelf manage deletion of child records; adding `ON DELETE CASCADE` to the foreign key indexes moves child deletion to the DB level and allows for performant bulk delete queries.
- migrations only run on MySQL because SQLite does not support adding constraints to existing tables
- adds foreign key indexes and constraints with 'ON DELETE CASCADE' for members_labels, members_stripe_customers, and members_stripe_customers_subscriptions (sketched below)
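
A minimal sketch of one of these migrations using knex, shown for members_labels only; Ghost's migration wrapper and re-run guards are simplified away here:

```js
module.exports.up = async function up(knex) {
    if (knex.client.config.client !== 'mysql') {
        return; // sqlite can't add constraints to existing tables
    }
    await knex.schema.alterTable('members_labels', (table) => {
        // cascading deletes move child-row cleanup to the DB level
        table.foreign('member_id').references('members.id').onDelete('CASCADE');
        table.foreign('label_id').references('labels.id').onDelete('CASCADE');
    });
};
```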
refs #12100
For performance reasons we want to add foreign key and unique constraints
to the members_stripe_* tables so we can utilise cascading deletes and
joins across the tables when querying.
In order to do this we must first ensure that:
- There are no duplicate entries in the `subscription_id` or `customer_id` columns
- There are no orphaned rows in the subscription or customers tables
If the first is not true, the unique constraint will fail, and if the second is not true,
the foreign key constraint will fail.
As we are only adding the indexes to existing MySQL databases at this point, the
cleanup migrations will also only be run for existing MySQL databases.
The migrations for removing orphaned rows split the deletion into a `SELECT`
followed by a `WHERE IN` to avoid the database "optimising" the query into a
`JOIN`, which ends up taking much longer due to the lack of indexes.
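
The orphan cleanup pattern, sketched with knex (table/column names follow the text above; the exact migration code differs):

```js
async function deleteOrphanedCustomers(knex) {
    // SELECT the orphaned ids first...
    const rows = await knex('members_stripe_customers')
        .select('members_stripe_customers.id')
        .leftJoin('members', 'members.id', 'members_stripe_customers.member_id')
        .whereNull('members.id');
    const orphanIds = rows.map(row => row.id);

    // ...then DELETE with WHERE IN, so the planner can't rewrite the
    // delete into an unindexed JOIN
    if (orphanIds.length) {
        await knex('members_stripe_customers').whereIn('id', orphanIds).del();
    }
}
```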
no-issue
We are in the process of creating migrations to add foreign key constraints
and cascading deletes to the members_stripe_* tables to make listing members
and deleting members faster. As well as the migrations, we need to update the
database schema so that new installations have the correct indexes and constraints.
Co-authored-by: Kevin Ansfield <kevin@lookingsideways.co.uk>
refs 173e3292fa
- The bug was initially introduced in the referenced commit. When a request is made with an `api_key` context, there should always be an `integration` object associated with it - 71c17539d8/core/server/services/permissions/parse-context.js (L36) . The `id` from `context.integration`, not `context.api_key`, has to be assigned to the newly created webhook! (A sketch of the fix follows below.)
- The webhooks API is about to be declared stable in an upcoming release, so no migration will be done
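
A paraphrased sketch of the fix - the surrounding controller frame is approximated for illustration; only the integration_id assignment reflects the actual change:

```js
async function addWebhook(models, frame) {
    if (frame.options.context.api_key) {
        // api_key contexts always carry the associated integration
        // (see parse-context.js), and that is the id the new webhook
        // must be linked to - not the api_key's own id
        frame.data.webhooks[0].integration_id = frame.options.context.integration.id;
    }
    return models.Webhook.add(frame.data.webhooks[0], frame.options);
}
```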