closes #8781
- when the ownership gets transferred, the id of the new owner is not '1' anymore
- we previously added a database rule which signals whether the blog is set up or not, see 827aa15757 (diff-7a2fe80302d7d6bf67f97cdccef1f71fR542)
- this database rule is based on the owner id being '1', which is wrong when you transfer ownership
- we should keep in mind that the owner id being '1' is only the default Ghost setup, but it can change
- the blog counts as set up if the owner is locked
no issue
- if you upload a huge import file, parallel operations can throw errors, e.g. lock wait timeout exceeded
- this can happen if multiple transactions run in parallel
- there is no need to run:
1. the removal of active tokens on import, because imported users have no active session
2. the rescheduling logic on timezone changes, because importing scheduled posts works out of the box via the model layer (if a published date is detected and it's in the future, the post gets scheduled)
no issue
- if you delete an active user, Ghost logs an error message (Ghost does not crash!)
- but the event logic is not triggered, which means we don't delete the user's tokens
- token deletion happens when suspending a user and when deleting a user
refs #5422
- we can support null titles after this PR if we want
- user model: fix getAuthorRole
- user model: support adding roles by name
- we support this for roles as well; this makes it easier when importing related user roles (because usually the roles already exist in the database and the related IDs are wrong, e.g. roles_users)
- base model: support for null created_at or updated_at values
- post or tag slugs are always safe strings
- allow importing a null slug; no need to crash or to cover this on the import layer
- add new DataImporter logic
- uses class inheritance for easier readability and maintenance
- schema validation (which happens on the model layer) was ignored before
- allow importing unknown user IDs (see https://github.com/TryGhost/Ghost/issues/8365)
- most of the duplication handling happens on model layer (we can use the power of unique fields and errors from the database)
- the import is split into three steps (see the sketch after this list):
- beforeImport
--> prepares the data to import, sorts out relations (roles, tags), detects fields (for LTS)
- doImport
--> does the actual import
- afterImport
--> updates the data after a successful import, e.g. updates all user reference fields such as published_by (compares the imported data with the current state of the database)
- import images: markdown can be null
- show error message when json handler can't parse file
- do not request gravatar if email is null
- return problems/warnings after successful import
- optimise warnings in importer
- do not return warnings for role duplications, they carry no helpful information
- error handler: return context information of error
- we show the affected json entries as one line in the UI
- show warning for: detected duplicated tag
- schema validation: fix valueMustBeBoolean translation
- remove context property from json parse error
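The three steps could be organised roughly like this (a minimal sketch with illustrative class and method names, not the actual importer code; `Model` stands in for the respective Bookshelf model):

```js
// Sketch of the beforeImport / doImport / afterImport structure via class inheritance.
class ImporterBase {
    constructor(options) {
        this.Model = options.Model;
        this.dataToImport = options.dataToImport;
        this.problems = [];
    }

    beforeImport() {
        // prepare the data, sort out relations (roles, tags), detect legacy fields
        return Promise.resolve();
    }

    doImport(options) {
        // the actual inserts happen on the model layer, so unique-field errors from
        // the database surface here and can be collected as warnings
        return Promise.all(this.dataToImport.map((obj) => {
            return this.Model.add(obj, options).catch((err) => {
                this.problems.push({error: err, context: JSON.stringify(obj)});
            });
        }));
    }

    afterImport() {
        // fix up references after a successful import, e.g. published_by, by
        // comparing the imported data with the current state of the database
        return Promise.resolve();
    }
}

class UsersImporter extends ImporterBase {
    beforeImport() {
        // e.g. attach roles by name, because the imported role ids are usually wrong
        return super.beforeImport();
    }
}
```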
* 🙀 change database schema for images
- rename user/post/tag images
- contains all the required changes from the schema change
* Refactor helper/meta data
- rename cover to cover_image
- also rename default settings to match the pattern
- rename image to profile_image for user
- rename image to feature_image for tags/posts
* {{image}} >>> {{img_url}}
- rename
- change the functionality
- attr is required
- e.g. {{img_url feature_image}}
* gscan 1.0.0
- update yarn.lock
* Update casper reference: 1.0-changes
- see 5487b4da8d
closes #5599
If two users edit the same post, it can happen that they override each other's content or post settings. With this change this won't happen anymore.
✨ Update collision for posts
- add a new bookshelf plugin to detect these changes
- use the `changed` object of bookshelf -> we don't have to create our own diff
- compare the client and server updated_at fields (see the sketch below)
- run editing posts in a transaction (see comments in code base)
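A rough sketch of the comparison (hypothetical helper, simplified error handling; the real plugin works with Bookshelf's `changed`/`previous` data):

```js
const moment = require('moment');

// Reject an edit when the client's updated_at no longer matches what is stored on
// the server, i.e. someone else saved the post in the meantime.
function checkForCollision(serverUpdatedAt, clientUpdatedAt) {
    if (!clientUpdatedAt) {
        return; // nothing to compare against
    }

    if (moment(serverUpdatedAt).diff(moment(clientUpdatedAt)) !== 0) {
        throw new Error('Saving failed! Someone else is editing this post.');
    }
}
```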
🙀 update collision for tags
- `updateTags` for adding posts on `onCreated` - happens after the post was inserted
--> it's "okay" to attach the tags afterwards on insert
--> there is no need to add collision for inserting data
--> it's very hard to move the updateTags call to `onCreating`, because the `updateTags` function queries the database to look up the affected post
- `updateTags` while editing posts on `onSaving` - all operations run in a transaction and are rolled back if something gets rejected
- Post model edit: if we push a transaction from outside, take this one
✨ introduce options.forUpdate
- if two queries happen inside a transaction, we have to signal to knex/mysql that we are selecting for an update (see the sketch below)
- otherwise the following case happens:
>> you fetch posts for an update
>> a user request comes in and updates the post (e.g. sets the title to "X")
>> you update the fetched posts, and the title gets overridden with the old one
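To illustrate the fix (a sketch with assumed table and column names, not the actual Ghost code):

```js
const knex = require('knex')({client: 'mysql', connection: {/* connection elided */}});

// Inside a transaction, SELECT ... FOR UPDATE locks the fetched row until commit,
// so a parallel request cannot update the post between our fetch and our update.
function publishPost(postId) {
    return knex.transaction((transacting) => {
        return transacting('posts')
            .where({id: postId})
            .forUpdate() // without this, a concurrent UPDATE can slip in between
            .first()
            .then((post) => {
                return transacting('posts')
                    .where({id: post.id})
                    .update({status: 'published', updated_at: new Date()});
            });
    });
}
```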
use options.forUpdate and protect internal post updates: model listeners
- use a transaction for listener updates
- signal forUpdate
- write a complex test
use options.forUpdate and protect internal post updates: scheduling
- publish endpoint runs in a transaction
- add complex test
- @TODO: right now the scheduling API uses the posts API, therefore we had to extend the options for APIs
>> allowed to pass transactions through it
>> but these are only allowed if defined from outside {opts: [...]}
>> so I think this is fine and not dirty
>> will wait for opinions
>> alternatively we have to re-write the scheduling endpoint to use the models directly
no issue
- client dates are sent in ISO format (moment(..).format())
- server dates are in JS Date format
>> when Bookshelf fetches data from the database, all dates are transformed into JS dates
>> see the `parse` helper function
- Bookshelf updates the model with the client data via Bookshelf's `set` function
- therefore Bookshelf uses a simple `isEqual` function from lodash to detect changes
- comparing .previous(attr) and .get(attr) then returns false
- the consequence is that dates are always marked as "changed"
- internally we use our `hasDateChanged` helper (sketched below) if we have to compare previous/updated dates
- but Bookshelf is not in our control for this case
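A simplified sketch of what a `hasDateChanged`-style check does: normalise both sides through moment so an ISO string and a JS Date compare by value rather than by type.

```js
const moment = require('moment');

function hasDateChanged(previous, current) {
    // treat "both missing" as unchanged, "one missing" as changed
    if (!previous && !current) {
        return false;
    }

    if (!previous || !current) {
        return true;
    }

    // moment accepts both ISO strings and JS Date objects
    return moment(previous).diff(moment(current)) !== 0;
}
```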
no issue
- the UTC offset diff between the current and the previous timezone must be switched
- I have added more tests and more example case descriptions to explain why
no issue
- I don't know whether this never worked, or whether it worked and something changed in Bookshelf
- but this fixes: saving the content (with no change to published_at) of a scheduled post within the 2 minute window
- add a `beforeWrite` option to the hasDateChanged helper, see comment
- use previous values for `beforeWrite` operations
- add a test and fix some other small issues in the scheduler tests
refs #8258
* 🎨 change last_login to last_seen
- rename the column
- a change in Ghost-Admin is required as well
* test utils: revert export examples
* revert line breaks
refs #8111
- Ghost now returns all (active + inactive) users by default
- protect login with the suspended status
- test permissions and add extra protection against suspending myself
- if a user is suspended and tries to activate himself, he won't be able to complete the login to get a new token
* 🛠 bookshelf tarball, bson-objectid
* 🎨 schema changes
- change the id type from auto-increment to string
- add a default fallback for string length 191 (to avoid adding this logic to every single column which uses an ID)
- remove uuid, because ID now represents a global resource identifier
- keep uuid for posts, because we are using it as the preview id
- keep uuid for clients for now - we are using this param for Ghost-Auth
* ✨ base model: generate ObjectId on creating event
- each new resource gets an auto-generated ObjectId
- this logic won't work for attached models yet; that commit comes later (a sketch of the hook follows below)
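A sketch of the hook (the bson-objectid package is the one mentioned above; the knex/bookshelf setup exists only to make the example self-contained):

```js
const ObjectId = require('bson-objectid');
const knex = require('knex')({client: 'sqlite3', connection: {filename: ':memory:'}, useNullAsDefault: true});
const bookshelf = require('bookshelf')(knex);

const Base = bookshelf.Model.extend({
    initialize: function initialize() {
        this.on('creating', this.onCreating, this);
    },

    onCreating: function onCreating(newObj) {
        // every new resource gets an auto-generated ObjectId string as its id
        if (!newObj.id) {
            newObj.set('id', ObjectId.generate());
        }
    }
});
```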
* 🎨 centralised attach method
When attaching models there are two important things to know:
1. To be able to attach an ObjectId, we need to register the `onCreating` event on the fetched model! This is caused by the Bookshelf design in general. We attach the new model onto this target model.
2. We need to manually fetch the target model, because Bookshelf has a weird behaviour (which is known as a bug, see https://github.com/tgriesser/bookshelf/issues/629). The most important property when attaching a model is `parentFk`, the foreign key. This can be null when fetching the model with the option `withRelated`. To ensure quality and consistency, the custom attach wrapper always fetches the target model manually. Fetching the target model (again) costs a little performance, but it also has advantages: we can register the event and directly unregister it again, which keeps the code clean.
Important: please only use the custom attach wrapper in the future.
* 🎨 the token model had overridden the onCreating function because of the created_at field
- we need to ensure that the base onCreating hook gets triggered for ALL models
- if not, they don't get an ObjectId assigned
- in this case: be smart and check if the target model has a created_at field
* 🎨 we don't have a uuid field anymore, remove the usages
- no default uuid creation in models
- I am pretty sure we have some more definitions in our tests (for example in the export json files), but it's too much work to delete them all
* 🎨 do not parse ID to Number
- we had various occurrences of parsing all IDs to numbers
- we don't need this behaviour anymore
- ID is a string
- I will adapt the ID validation in the next commit
* 🎨 change ID regex for validation
- we only allow: an ObjectId, the literal 1 and the literal me as IDs (see the sketch below)
- we need to keep ID 1, because our whole software relies on ID 1 (permissions etc)
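The validation rule boils down to something like this (a sketch, not the exact pattern used in the schema validation):

```js
// an ID must be a 24 character ObjectId hex string, the literal '1' (owner/internal)
// or the literal 'me'
const ID_REGEX = /^([a-f0-9]{24}|1|me)$/;

console.log(ID_REGEX.test('5951f5fca366002ebd5dbef7')); // true (ObjectId)
console.log(ID_REGEX.test('1'));                        // true (owner)
console.log(ID_REGEX.test('me'));                       // true
console.log(ID_REGEX.test('42'));                       // false
```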
* 🎨 owner fixture
- roles: [4] does not work anymore
- 4 means -> static id 4
- this only worked in an auto-increment system (and not even in a system with distributed writes)
- with ObjectId we generate each ID automatically (for static and dynamic resources)
- it would still be possible to define all IDs for static resources, but that means we need to know which IDs are already used, and for consistency we would have to define ObjectIds for these static resources
- so no static IDs anymore, except for: id 1 for the owner and id 0 for external usage (because this is required by our permission system)
- NOTE: please read through the comment in the user model
* 🎨 tests: DataGenerator and test utils
First of all: we need to ensure the tests use ObjectIds. If we don't, we can't ensure that ObjectIds work properly.
This commit brings a lot of dynamic values into all the statically defined IDs.
In one of the next commits, I will adapt all the tests.
* 🚨 remove counter in Notification API
- no need to add a counter
- we simply generate ObjectIds (they are incremental as well)
- our ID validator only allows an ObjectId, 1 and me as IDs
* 🎨 extend contextUser in Base Model
- remove the isNumber check, because IDs are no longer numbers, except for the IDs 0/1
- use existing isExternalUser
- support id 0/1 as string or number
* ✨ Ghost Owner has id 1
- ensure we define this id in the fixtures.json
- doesn't matter if number or string
* 🎨 functional tests adaptions
- use dynamic id's
* 🎨 fix unit tests
* 🎨 integration tests adaptions
* 🎨 change importer utils
- all our export examples (test/fixtures/exports) contain IDs as numbers
- fact: but we ignore them anyway when inserting into the database, see https://github.com/TryGhost/Ghost/blob/master/core/server/data/import/utils.js#L249
- in 0e6ed957cd (diff-70f514a06347c048648be464819503c4L67) I removed parsing IDs to integers
- I realised that this ^ check only existed because userIdToMap was an object key, and object keys are always strings!
- I think this logic is a little bit complicated, but I don't want to refactor it now
- this commit ensures that the id comparison works again when trying to find the user
- I've added more documentation to explain this logic ;)
- plus I renamed an attribute to improve readability
* 🎨 Data-Generator: add more defaults to createUser
- if I use the function DataGenerator.forKnex.createUser I would like to get a full set of defaults
* 🎨 test utils: change/extend function set for functional tests
- functional tests work a bit differently
- they boot Ghost and seed the database
- some functional tests have misused the test setup
- the test setup needs two sections: integration/unit and functional tests
- any functional test is allowed to either add more data or change data in the existing Ghost db
- but what it should not do is add test fixtures like roles or users from our DataGenerator and cross fingers that it will work
- this commit adds a clean method for functional tests to add extra users
* 🎨 functional tests adaptions
- use the helper from the last commit to insert users for functional tests cleanly
- tidy up usage of testUtils.setup or testUtils.doAuth
* 🐛 test utils: reset database before init
- ensure we don't have any leftover data from other tests in the database when starting Ghost
* 🐛 fix test (unrelated to this PR)
- fixes a random failure
- return statement was missing
* 🎨 make changes for invites
refs #7494, refs #7495
This PR is an extracted clean up feature of #7495.
We use static id checks everywhere (userId === 0 or userId === 1).
This PR moves the static values into the Base model (see the sketch at the end of this note).
This makes it 1. way more readable and 2. lets us change the IDs in a central place.
I changed the most important occurrences - no tests are touched (yet!).
The background is: when changing from auto-increment ids (numbers) to ObjectIds (strings) we still need to support the ids 1 and 0, because Ghost relies on these two static ids.
I would like to support using both: 0/1 as string and 0/1 as number.
1 === owner/internal
0 === external
Another important change:
The User model no longer defines the contextUser method, because I couldn't find a reason for it.
I looked in Git history, see 6e48275160
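The idea, sketched (illustrative property and helper names, not necessarily what the Base model exposes):

```js
// keep the two special IDs in one central place and accept both string and number
const BaseModelStatics = {
    internalUser: 1,
    externalUser: 0,

    isInternalUser: function isInternalUser(id) {
        return id === this.internalUser || id === String(this.internalUser);
    },

    isExternalUser: function isExternalUser(id) {
        return id === this.externalUser || id === String(this.externalUser);
    }
};

console.log(BaseModelStatics.isInternalUser(1));   // true
console.log(BaseModelStatics.isInternalUser('1')); // true
console.log(BaseModelStatics.isExternalUser('0')); // true
```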
* 🎨 do not call generateSlug twice for User.setup
* 🎨 call generatePasswordHash onSaving only
- now we can add defaults to User Model
- it was not possible before because the User model's add did the following:
1. validate password length
2. hash password manually
3. call ghostBookshelf.Model.add and THEN bookshelf's defaults fn gets triggered
- call generatePasswordHash in the onSaving hook for all use cases (sketched below)
- add more tests to the user model, yay
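Roughly what moving the hashing into onSaving looks like (a sketch using bcryptjs' sync API for brevity; the helper name is illustrative):

```js
const bcrypt = require('bcryptjs');

// runs for every write path (add and edit), not just User.add
function onSavingUser(user) {
    if (!user.hasChanged('password')) {
        return;
    }

    // the real code would use the async bcrypt API inside the saving promise chain
    const salt = bcrypt.genSaltSync(10);
    user.set('password', bcrypt.hashSync(user.get('password'), salt));
}
```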
refs #7432
- all models implemented their own initialize fn to register events
- we can register all events in the base model
- important: we only listen for an event if the model has defined a hook for it (see the sketch below)
- this is just a small clean up PR
- register more bookshelf events
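The central registration could look roughly like this (helper name is illustrative):

```js
// register Bookshelf events in one place, but only when the concrete model
// actually defines the matching hook (e.g. the 'saving' event maps to onSaving)
function registerEvents(model) {
    const events = ['creating', 'created', 'updating', 'updated',
        'saving', 'saved', 'destroying', 'destroyed', 'fetching'];

    events.forEach(function (event) {
        const hookName = 'on' + event.charAt(0).toUpperCase() + event.slice(1);

        if (typeof model[hookName] === 'function') {
            model.on(event, function () {
                return model[hookName].apply(model, arguments);
            });
        }
    });
}
```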
* 🎨 move heart of fixtures to schema folder and change user model
- add fixtures.json to schema folder
- add fixture utils to schema folder
- keep all the logic!
--> FIXTURE.JSON
- add owner user with roles
--> USER MODEL
- add password as default
- findAll: allow querying inactive users when internal context (defaultFilters)
- findOne: do not remove values from original object!
- add: do not remove values from original object!
* 🔥 remove migrations key from default_settings.json
- this was a temporary invention for an older migration script
- sephiroth keeps all needed information in a migration collection
* 🔥 add code property to errors
- add code property to errors
- IMPORTANT: please share your opinion about that
- this copies how Node does it (errno, code etc.)
- so code specifies a GhostError
* 🎨 change error handling in versioning
- no need to throw specific database errors anymore (this was just a temporary solution)
- now: we are throwing real DatabaseVersionErrors
- specified by a code
- background: the versioning unit has no idea about seeding and population of the database
- it just throws what it knows --> database version does not exist or settings table does not exist
* 🎨 sephiroth optimisations
- added getPath function to get the path to init scripts and migration scripts
- migrationPath is still hardcoded (see TODO)
- tidy up naming: rename database to transacting
* ✨ migration init scripts are now complete
- 1. add tables
- 2. add fixtures
- 3. add default settings
* 🎨 important: make bootup script smaller!
- remove all TODOs except one
- no seeding logic in bootup script anymore 🕵🏻
* ✨ sephiroth: allow params for init command
- param: skip (do not run this script)
- param: only (only run this script)
- very simple way
* 🎨 adapt tests and test env
- do not use migrate.populate anymore
- use sephiroth instead
- jscs/jshint
* 🎨 fix User model status checks
closes #6629
- I hit a case where process.env.NODE_ENV was undefined in the gravatar lookup, and indexOf of undefined crashed my application
- so always use config to read the current env
closes #6165
- internal tags have been in labs for a couple of months, we've fixed some bugs & are ready to ship
- removes all code that tests for the labs flag
- also refactors the various usage of the visibility filter into a single util
- all the tests still pass!!!
- this marks #6165 as closed because I think the remaining UI tasks will be handled as part of a larger piece of work
refs #7116, refs #2001
- Changes the way Ghost errors are implemented to benefit from proper inheritance
- Moves all error definitions into a single file
- Changes the error constructor to take an options object, rather than needing the arguments to be passed in the correct order.
- Provides a wrapper so that any errors that haven't already been converted to GhostErrors get converted before they are displayed.
Summary of changes:
* 🐛 set NODE_ENV in config handler
* ✨ add GhostError implementation (core/server/errors.js)
- register all errors in one file
- inheritance from GhostError
- option pattern (sketched below)
* 🔥 remove all error files
* ✨ wrap all errors into GhostError in case of HTTP
* 🎨 adaptions
- option pattern for errors
- use GhostError when needed
* 🎨 revert debug deletion and add TODO for error id's
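A simplified sketch of the inheritance and options pattern (class names and fields here are illustrative, not the full set Ghost defines):

```js
class GhostError extends Error {
    constructor(options = {}) {
        super(options.message || 'The server has encountered an error.');
        this.statusCode = options.statusCode || 500;
        this.errorType = this.constructor.name;
        this.context = options.context;
        this.help = options.help;
    }
}

class NotFoundError extends GhostError {
    constructor(options = {}) {
        super(Object.assign({statusCode: 404, message: 'Resource not found.'}, options));
    }
}

// options pattern instead of positional arguments:
// new NotFoundError({message: 'Post not found.', context: 'id: 1'})
```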
- 🛠 add bunyan and prettyjson, remove morgan
- ✨ add logging module
- GhostLogger class that handles setup of bunyan
- PrettyStream for stdout
- ✨ config for logging
- @TODO: testing level fatal?
- ✨ log each request via GhostLogger (express middleware)
- @TODO: add errors to output
- 🔥 remove errors.updateActiveTheme
- we can read the value from config
- 🔥 remove 15 helper functions in core/server/errors/index.js
- all these functions get replaced by modules:
1. logging
2. error middleware handling for html/json
3. error creation (which will be part of PR #7477)
- ✨ add express error handler for html/json
- one true error handler for express responses
- still contains some TODOs, but they are not high priority for the first implementation/integration (a sketch follows at the end of this list)
- this middleware only takes responsibility of either rendering html responses or return json error responses
- 🎨 use new express error handler in middleware/index
- 404 and 500 handling
- 🎨 return error instead of error message in permissions/index.js
- the rule for error handling should be: if you call a unit, this unit should return a custom Ghost error
- 🎨 wrap serve static module
- rule: if you call a module/unit, you should always wrap this error
- it's always the same rule
- so the caller never has to worry about what comes back
- it's always a clear error instance
- in this case: we return our notfounderror if serve static does not find the resource
- this avoids having checks everywhere
- 🎨 replace usages of errors/index.js functions and adapt tests
- use logging.error, logging.warn
- make tests green
- remove some usages of logging and throwing api errors -> because when a request is involved, logging happens automatically
- 🐛 return errorDetails to Ghost-Admin
- errorDetails is used for Theme error handling
- 🎨 use a 500 error for the "theme is missing" error in the theme-handler
- 🎨 extend file rotation to 1w
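A rough sketch of the "one true error handler" for express (the logging require path and the HTML/JSON decision rule here are illustrative):

```js
const logging = require('../logging'); // illustrative path

function errorHandler(err, req, res, next) { // eslint-disable-line no-unused-vars
    // logging happens automatically whenever a request is involved
    logging.error(err);

    res.status(err.statusCode || 500);

    // API requests get a JSON error response, everything else an HTML error page
    if (req.path.indexOf('/ghost/api/') === 0 || !req.accepts('html')) {
        return res.json({
            errors: [{
                message: err.message,
                errorType: err.errorType,
                errorDetails: err.errorDetails
            }]
        });
    }

    res.render('error', {message: err.message, code: res.statusCode});
}
```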
closes #6625
- "url" and "author" fields depend on {id, published_at, slug, author_id} to construct post url.
- implemented a generic solution by defining defaultColumnsToFetch() in the base class for models
- findPage() calls defaultColumnsToFetch() before loading models
- results are transformed by filtering out additional properties to return just the requested fields
- Added a test case to check for url and author fields
- Renamed allColumns to requestedColumns and used _.map instead of Promise.map
closes #6932
- new default order of posts: scheduled, draft, published
- invent an orderDefaultRaw fn for each model
- each model is able to create a default raw order query (see the sketch below)
- separate count and fetch queries for fetchPage, because the count query had group/order statements attached
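The kind of raw default order the posts model could generate (a sketch, not copied from the model):

```js
// posts model: return a raw ORDER BY fragment so scheduled posts come first,
// then drafts, then published posts, newest first
function orderDefaultRaw() {
    return 'CASE WHEN posts.status = \'scheduled\' THEN 1 ' +
        'WHEN posts.status = \'draft\' THEN 2 ' +
        'ELSE 3 END ASC,' +
        'posts.published_at DESC,' +
        'posts.updated_at DESC,' +
        'posts.id DESC';
}
```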
refs #6413
- PUT endpoint to publish a post/page for the scheduler
- endpoint to get all scheduled posts (with from/to query params) for the scheduler
- hardcoded permission handling for scheduler client
- fix event bug: unscheduled
- basic structure for scheduling
- post scheduling basics
- offer easy option to change adapter
- integrate the default scheduler adapter
- update scheduled posts when blog TZ changes
- safety check before scheduler can publish a post (not allowed to publish in the future or past)
- add force flag to allow publishing in the past
- invalidate cache header for /schedules/posts/:id
closes #6406
- created listeners.js connector
- merged listeners.js with events.js (in models/base)
- set a post to draft when published_at would be in the past
- reschedule a post when published_at would be in the future (both behaviours are sketched below)
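Both listener behaviours, sketched (simplified; assumes a Bookshelf post model and the internal context):

```js
const moment = require('moment');

// called for every scheduled post whose published_at has to be re-evaluated
function handleScheduledPost(post) {
    if (moment(post.get('published_at')).isBefore(moment())) {
        // published_at would now be in the past: fall back to draft
        return post.save({status: 'draft'}, {context: {internal: true}});
    }

    // published_at is still in the future: saving re-triggers the scheduling logic
    return post.save({status: 'scheduled'}, {context: {internal: true}});
}
```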
issues #6406, #6399
- all dates are stored as UTC with this commit
- use moment.tz.setDefault('UTC') (see the sketch after this list)
- add migration file to recalculate local datetimes to UTC
- store all dates in same format into our three supported databases
- add option to remember migrations inside settings (core)
- support DST offset for migration
- ensure we force UTC in test env
- run whole migration as transaction
- extend: Settings.findOne function
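The core of the storage change is tiny; a sketch using moment-timezone:

```js
const moment = require('moment-timezone');

// force every new moment (and therefore every generated date) onto UTC, so all
// three supported databases store the same value
moment.tz.setDefault('UTC');

console.log(moment('2016-05-30 12:00:00').format()); // ...T12:00:00+00:00
```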
Closes #6625
- Adds a failing test for not returning computed columns as well
as for the bookshelf bug where extra columns passed into a fetch
will result in the model having an extra "quoted" column.
- Filter model attributes when passing into "fetch", but use the entire list of columns for `toJSON`.
refs #6009
- This is a straight rename, no functionality is added
- The dot syntax requires pre/post processing to convert the name
- This PR also includes several updates to the tests, as they weren't being run as part of Travis!
- pass debug: true to the API to get some useful debug output
- does not work in production mode
Note: I have added these lines back in so many times in the past month or so, just to figure out what was happening, that I figured everyone else might find them useful.
TODO: use a proper logging method dependent on env
refs #5614, #5943
- adds a new 'filter' bookshelf plugin which extends the model
- the filter plugin provides handling for merging/combining various filters (enforced, defaults and custom/user-provided); a rough sketch of the merge follows after this list
- the filter plugin also handles the calls to gql
- post processing is also moved to the plugin, to be further refactored/removed in future
- adds tests showing how filter could be abused prior to this commit
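A rough sketch of the merging idea (simplified; the real plugin is more careful about which defaults a custom filter overrides, and hands the result to gql):

```js
// combine enforced, custom (user-provided) and default filters into one filter
// string, e.g. '(page:false)+(tag:photo)'
function mergeFilters(enforced, defaults, custom) {
    const parts = [];

    if (enforced) {
        parts.push(enforced);
    }

    if (custom) {
        parts.push(custom);
    }

    // defaults only kick in when the user has not provided a filter of their own
    if (defaults && !custom) {
        parts.push(defaults);
    }

    return parts.map(function (part) {
        return '(' + part + ')';
    }).join('+');
}

console.log(mergeFilters('page:false', 'status:published', 'tag:photo'));
// => (page:false)+(tag:photo)
```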