This updates the AdapterCacheRedis instance so that it can update itself when
reading from the cache. For this to work we need to pass a `fetchData`
function to the `get` method.
In the case of a cache miss, we will read the data via the `fetchData`
function, and store it in the cache, before returning the value to the caller.
When coupled with a `refreshAheadFactor` config, we will go a step further and
implement the "Refresh Ahead" caching strategy. This means that we will
refresh the contents of the cache in the background; this happens on a cache
read, and only once the data in the cache has less than a certain fraction of
its TTL left, where the fraction is set as a decimal value between 0 and 1.
e.g.
ttl = 100s
refreshAheadFactor = 0.2
Any read from the cache that happens _after_ 80s will do a background refresh.
Having the code use `async/await` makes it more readable, and extracting the
execution to a separate function makes it easier to run in the background in
the future.
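To make the flow concrete, here's a minimal sketch of the described `get` behaviour, assuming a node-redis style client; the class shape and property names are illustrative, not the adapter's actual internals:
```
// A minimal sketch, not the adapter's actual code: assumes a node-redis style
// client and that values are stored as JSON strings
class AdapterCacheRedis {
    constructor({redisClient, ttl, refreshAheadFactor}) {
        this.redisClient = redisClient;
        this.ttl = ttl; // seconds
        this.refreshAheadFactor = refreshAheadFactor; // decimal between 0 and 1
    }

    async set(key, value) {
        await this.redisClient.set(key, JSON.stringify(value), {EX: this.ttl});
    }

    async get(key, fetchData) {
        const cached = await this.redisClient.get(key);

        if (cached === null) {
            // Cache miss: read via fetchData, store, then return to the caller
            const data = await fetchData();
            await this.set(key, data);
            return data;
        }

        if (this.refreshAheadFactor) {
            const remainingTtl = await this.redisClient.ttl(key);
            // e.g. ttl = 100s, factor = 0.2: refresh once fewer than 20s remain
            if (remainingTtl < this.refreshAheadFactor * this.ttl) {
                // Fire and forget: the caller still gets the cached value immediately
                fetchData().then(data => this.set(key, data)).catch(() => {});
            }
        }

        return JSON.parse(cached);
    }
}
```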
The main changes are:
- Updating the pipeline to allow for doing a background refresh of the
cache
- Removing the use of the EventAwareCacheWrapper for the posts public
cache
### Background refresh
This is just an initial implementation, and tbh it doesn't sit right
with me that the logic for this is in the pipeline - I think it should
sit in the cache implementation itself, so that we call out to it with
something like `cache.get(key, fetchData)` and the updates can
happen internally.
The `cache-manager` project actually has a method like this called
`wrap` - but every time I've used it it hangs, and debugging was a pain,
so I don't really trust it.
### EventAwareCacheWrapper
This is such a small amount of logic, I don't think it's worth creating
an entire wrapper for it, at least not a class-based one. I would be
happy to refactor this to use a `Proxy` too, so that we don't have to
add methods to it each time we wanna change the underlying cache
implementation.
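For illustration, a `Proxy`-based version might look something like the sketch below; the function signature and the `reset` call are assumptions about the wrapper's behaviour, not its actual API:
```
// Hypothetical sketch of a Proxy-based replacement for the wrapper: the event
// wiring happens once, and every property access is forwarded automatically,
// so new methods on the underlying cache need no wrapper changes
function makeEventAwareCache({cache, eventRegistry, resetEvents}) {
    for (const event of resetEvents) {
        eventRegistry.on(event, () => cache.reset());
    }

    return new Proxy(cache, {
        get(target, prop, receiver) {
            // Transparent delegation - no hand-written passthrough methods
            return Reflect.get(target, prop, receiver);
        }
    });
}
```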
refs https://github.com/TryGhost/Arch/issues/83
This allows endpoints to implement their own key generation. With access to
the frame object, they can be smart about key generation and use only the
options and context values that are appropriate.
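As a hypothetical illustration (the `generateCacheKey` hook name and the fields used are assumptions, not the actual API):
```
// Hypothetical endpoint config: `generateCacheKey` and the option fields are
// assumptions for illustration only
module.exports = {
    docName: 'posts',
    browse: {
        cache: postsPublicService.api.cache,
        generateCacheKey(frame) {
            // Use only the options and context values relevant to this endpoint
            return JSON.stringify({
                limit: frame.options.limit,
                page: frame.options.page,
                filter: frame.options.filter
            });
        }
    }
};
```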
refs https://github.com/TryGhost/DevOps/issues/68
- without a name, tools such as New Relic report the function as
`<anonymous>`, which makes it incredibly hard to follow the code flow
- this commit adds a function name to all middleware I can find that
doesn't already have one, which should fill in a lot of those gaps
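For example (an illustrative middleware, not one from this commit):
```
// Illustrative only - not a specific middleware from the codebase
// Before: shows up as `<anonymous>` in New Relic traces
app.use(function (req, res, next) {
    res.set('cache-control', 'no-store');
    next();
});

// After: the name appears in traces and stack traces
app.use(function setCacheControl(req, res, next) {
    res.set('cache-control', 'no-store');
    next();
});
```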
refs https://github.com/TryGhost/Team/issues/3139
refs https://github.com/TryGhost/Team/issues/3140
- Added duplicate post functionality to post list context menu
- Currently only a single post can be duplicated at a time
- Currently only enabled via the `Making it rain` flag
- Added admin API endpoint to copy a post - `POST ghost/api/admin/posts/<post_id>/copy/`
- Added admin API endpoint to copy a page - `POST ghost/api/admin/pages/<page_id>/copy/`
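Calling the new endpoint might look like this; the auth header and response shape are assumptions based on the other Admin API post endpoints:
```
// Illustrative call to the copy endpoint; authentication details depend on
// your Admin API setup, and the response shape is assumed to match the
// other post endpoints
const response = await fetch(`${siteUrl}/ghost/api/admin/posts/${postId}/copy/`, {
    method: 'POST',
    headers: {Authorization: `Ghost ${adminApiToken}`}
});
const {posts: [copiedPost]} = await response.json();
```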
As discussed with the product team we want to enforce kebab-case file names for
all files, with the exception of files which export a single class, in which
case they should be PascalCase and reflect the class which they export.
This will help us find classes faster, and should push better naming for them too.
Some files and packages have been excluded from this linting, specifically when
a library or framework depends on the naming of a file for the functionality
e.g. Ember, knex-migrator, adapter-manager
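One way to express this rule, using `eslint-plugin-unicorn` for illustration (not necessarily the exact lint setup in the repo):
```
// Illustrative lint config with eslint-plugin-unicorn; the repo's actual
// setup may differ
module.exports = {
    plugins: ['unicorn'],
    rules: {
        'unicorn/filename-case': ['error', {
            cases: {
                kebabCase: true, // the default for all files
                pascalCase: true // files which export a single class
            }
        }]
    }
};
```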
- we previously used `@stdlib/utils` instead of the child package
`@stdlib/copy`, which is a lot smaller and contains the only function we
use from the parent
- this saves 140+MB of dependencies
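The change amounts to something like this (the exact import style is shown for illustration):
```
// Before: pulls the whole @stdlib/utils parent package into the tree
const copy = require('@stdlib/utils').copy;

// After: only the child package we actually use
const copy = require('@stdlib/copy');
```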
- we keep ending up with multiple versions of the dependency in our tree,
and it's causing problems when comparing instances
- the workaround I'm implementing for now is to bump the package
everywhere and set a resolution so we only have 1 shared instance
- hopefully we can come up with a better method down the line
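With Yarn resolutions, the workaround looks something like this; the package name and version here are illustrative:
```
// Illustrative package.json entry - the package name and version are examples
{
    "resolutions": {
        "@tryghost/errors": "1.2.21"
    }
}
```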
refs https://github.com/TryGhost/Toolbox/issues/522
- API-level response caching allows responses to be cached, bypassing the "pipeline" processing
- The main use case for these caches is caching GET requests for expensive Content API requests
- To enable response caching, add a "cache" key with a cache instance as the value; for example, the posts public cache configuration can look like:
```
module.exports = {
    docName: 'posts',
    browse: {
        cache: postsPublicService.api.cache,
        options: [ ...
```
- there's a weird situation when we have mixed versions of the
dependency, because different libraries try to compare instances
- this brings the usage up to 1.2.21 so we can fix the build for now
- this was all getting terribly behind, so I've done several things:
- bumped the majority of `@tryghost/*` packages, except the Lexical packages
- bumped gscan + knex-migrator to remove old `@tryghost/errors` usage
- bumped the lockfile
- cleaned up unused dependencies
- added missing dependencies that are used in the code
- this should help us be more explicit about the dependencies a package
uses
- because of how the npm scripts were set up, we were running the full
Admin integration tests during the unit tests phase of CI
- this commit renames the majority of `test` to `test:unit` in the
package.json files, and aliases `test` to `test:unit`
- special packages like Admin have no-op'd `test:unit` scripts so we
don't end up running its tests
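After the rename, a typical package's scripts look something like this (the test runner invocation is illustrative):
```
// Illustrative package.json scripts after the rename
{
    "scripts": {
        "test": "yarn test:unit",
        "test:unit": "mocha './test/**/*.test.js'"
    }
}
```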
- if the API controller endpoint is a function, we early return as we
expect the function to handle the response, but we still ended up
calculating the headers beforehand, only for them to be thrown away
- this commit moves the header fetching code down in the flow so it's
only executed when needed
- this doesn't really have a big effect for us because 99% of our
controllers follow the object pattern
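Roughly, the reordered flow looks like the sketch below; `runPipeline` and `getHeaders` are illustrative names, not the framework's exact code:
```
// Illustrative sketch of the reordering - runPipeline/getHeaders are
// hypothetical names
async function handleRequest(controllerMethod, frame, req, res, next) {
    if (typeof controllerMethod === 'function') {
        // The endpoint handles the response itself, so computing headers
        // up front would be wasted work - we return early
        return controllerMethod(req, res, next);
    }

    const result = await runPipeline(controllerMethod, frame);
    // Header calculation now happens only when we actually send the response
    const headers = await getHeaders(frame, result);
    res.set(headers);
    res.json(result);
}
```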
refs https://github.com/TryGhost/Toolbox/issues/363
- this API framework is standalone and should be pulled out into a
separate package so we can define its boundaries more clearly, and
promote better testing of smaller parts