mirror of https://github.com/n8n-io/n8n.git synced 2024-10-26 13:29:14 +03:00

Revamp the docs

Reflect support for MariaDB in docs
This commit is contained in:
Tanay Pant 2020-04-15 15:36:15 +02:00
parent 303062623b
commit 50bebe1b8a
18 changed files with 167 additions and 173 deletions


# n8n Documentation
This is the documentation of n8n, a free and open [fair-code](http://faircode.io) licensed node-based Workflow Automation Tool.
It covers everything from setup to usage and development. It is still a work in progress and all contributions are welcome.
## What is n8n?
n8n (pronounced nodemation) helps you to interconnect every app that has an API with any other, to share and manipulate data without a single line of code. It is an easy-to-use, user-friendly, and highly customizable service with an intuitive user interface that lets you design your unique workflows quickly. Hosted on your server and not in the cloud, it keeps your sensitive data secure in your own trusted database.


## Execution Data Manual Runs
n8n creates a random encryption key automatically on the first launch and saves
it in the `~/.n8n` folder. That key is used to encrypt the credentials before
they get saved to the database. It is also possible to overwrite that key and
set it via an environment variable.
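A minimal sketch of that override, assuming the variable n8n reads is `N8N_ENCRYPTION_KEY` (the key value is a placeholder):

```bash
# Overwrite the auto-generated encryption key from ~/.n8n.
# N8N_ENCRYPTION_KEY is an assumed variable name; the value is a placeholder.
export N8N_ENCRYPTION_KEY="my-own-secret-key"
```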
## Execution Data Error/Success
When a workflow gets executed, it will save the result in the database. That's
the case for executions that succeeded and for the ones that failed. The
default behavior can be changed like this:
```bash
export EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
```
Possible values are:
- **all**: Saves all data
- **none**: Does not save anything (recommended if a workflow runs very often and/or processes a lot of data; set up an "Error Workflow" instead)
These settings can also be overwritten on a per workflow basis in the workflow
settings in the Editor UI.
## Execute In Same Process
All workflows get executed in their own separate process. This ensures that all CPU cores
get used and that they do not block each other on CPU-intensive tasks. Additionally, this makes sure that
the crash of one execution does not take down the whole application. The disadvantage, however, is
that it slows down the start time considerably and uses much more memory. So in case the
workflows are not CPU-intensive and have to start very fast, it is possible to run them
all directly in the main process with this setting.
```bash
export EXECUTIONS_PROCESS=main
```
## Exclude Nodes
It is possible to disallow nodes of a specific node type. For example, if you
do not want users to be able to write data to disk with the "n8n-nodes-base.writeBinaryFile"
node or to execute commands with the "n8n-nodes-base.executeCommand" node, you can
set the following:
```bash
export NODES_EXCLUDE="[\"n8n-nodes-base.executeCommand\",\"n8n-nodes-base.writeBinaryFile\"]"
```
## Custom Nodes Location
Every user can add custom nodes that get loaded by n8n on startup. The default
location is in the subfolder `.n8n/custom` of the user who started n8n.
Additional folders can be defined with an environment variable.
```bash
export N8N_CUSTOM_EXTENSIONS="/home/jim/n8n/custom-nodes;/data/n8n/nodes"
```
## Use built-in and external modules in Function-Nodes
For security reasons, importing modules is restricted by default in Function-Nodes.
It is, however, possible to lift that restriction for built-in and external modules by
setting the following environment variables:
- `NODE_FUNCTION_ALLOW_BUILTIN`: For built-in modules
- `NODE_FUNCTION_ALLOW_EXTERNAL`: For external modules sourced from the n8n/node_modules directory. External module support is disabled when this environment variable is not set.
```bash
# Allows usage of all builtin modules
export NODE_FUNCTION_ALLOW_BUILTIN=*
```
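Besides allowing everything with `*`, a comma-separated list can presumably be used to allow only specific modules. A sketch, with illustrative module names:

```bash
# Allow only the listed builtin modules in Function-Nodes
export NODE_FUNCTION_ALLOW_BUILTIN=crypto,fs
# Allow only the listed external modules from n8n/node_modules
export NODE_FUNCTION_ALLOW_EXTERNAL=moment,lodash
```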
## Timezone
The timezone is set by default to "America/New_York". For instance, it is used by the
Cron node to know at what time the workflow should be started. To set a different
default timezone, simply set `GENERIC_TIMEZONE` to the appropriate value. For example,
if you want to set the timezone to Berlin (Germany):
```bash
export GENERIC_TIMEZONE="Europe/Berlin"
```
## User Folder
User-specific data like the encryption key, SQLite database file, and
the ID of the tunnel (if used) gets saved by default in the subfolder
`.n8n` of the user who started n8n. It is possible to overwrite the
user folder via an environment variable.
```bash
export N8N_USER_FOLDER="/home/jim/n8n"
```
## Webhook URL
The webhook URL will normally be created automatically by combining
`N8N_PROTOCOL`, `N8N_HOST` and `N8N_PORT`. However, if n8n runs behind a
reverse proxy, that would not work. That's because n8n runs internally
on port 5678 but is exposed to the web via the reverse proxy on port 443. In
that case, it is important to set the webhook URL manually so that it can be
displayed correctly in the Editor UI and, even more importantly, so that the correct
webhook URLs get registered with the external services.
```bash
export WEBHOOK_TUNNEL_URL="https://n8n.example.com/"
```
## Configuration via file
It is also possible to configure n8n using a configuration file.
It is not necessary to define all values but only the ones that should be
different from the defaults.
If needed, multiple files can also be supplied so that you can, for example, have generic
base settings and some specific ones depending on the environment.
The path to the JSON configuration file to use can be set using the environment
variable `N8N_CONFIG_FILES`.
```bash
# Example (the file path is illustrative)
export N8N_CONFIG_FILES="/folder/my-config.json"
```

# Create Node
It is quite easy to create your own nodes in n8n. Mainly three things have to be defined:
1. Generic information like name, description, image/icon
1. The parameters to display, with which the user can interact
1. The code to run once the node gets executed
To simplify the development process, we created a very basic CLI which creates boilerplate code to get started, builds the node (as they are written in TypeScript), and copies it to the correct location.
## Create the first basic node
1. Install the n8n-node-dev CLI: `npm install -g n8n-node-dev`
1. Create and go into the newly created folder in which you want to keep the code of the node
1. Use CLI to create boilerplate node code: `n8n-node-dev new`
1. Answer the questions (the “Execute” node type is the regular node type that you probably want to create).
It will then create the node in the current folder.
1. Program the node and add the desired functionality
1. Build the node and copy to correct location: `n8n-node-dev build`
That command will build the JavaScript version of the node from the TypeScript code and copy it to the user folder from which custom nodes get read: `~/.n8n/custom/`
1. Restart n8n and refresh the window so that the new node gets displayed
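The steps above can be sketched as a shell session; `DRY_RUN=echo` makes the sketch just print the commands so it is safe to paste (clear it to actually run them):

```bash
DRY_RUN=echo                            # set DRY_RUN="" to really execute
$DRY_RUN npm install -g n8n-node-dev    # step 1: install the CLI
$DRY_RUN n8n-node-dev new               # step 3: generate boilerplate node code
$DRY_RUN n8n-node-dev build             # step 6: build and copy to ~/.n8n/custom/
```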
## Create your own custom n8n-nodes-module
can automatically find the nodes in the module:
- The module has to be installed alongside n8n
An example starter module which contains one node and credentials and implements
the above can be found here:
[https://github.com/n8n-io/n8n-nodes-starter](https://github.com/n8n-io/n8n-nodes-starter)
### Setup to use n8n-nodes-module
To use a custom `n8n-nodes-module`, it simply has to be installed alongside n8n.
For example like this:
```bash
# install the module alongside n8n, then start n8n
n8n
```
### Development/Testing of custom n8n-nodes-module
This works in the same way as for any other npm module.
Execute the following in the folder which contains the code of the custom
`n8n-nodes-module` that should be loaded with n8n:
## Node Development Guidelines
To make sure that everything works correctly and consistently, and that no unnecessary code gets added, it is important to follow these guidelines:
### Do not change incoming data
Never change the incoming data a node receives (which can be queried with `this.getInputData()`), as it gets shared by all nodes. If data has to be added, changed or deleted, it has to be cloned and the new data returned. If that is not done, sibling nodes which execute after the current one will operate on the altered data and process different data than they were supposed to.
It is, however, not always needed to clone all the data. If a node, for example, changes only the binary data but not the JSON data, a new item can be created which reuses the reference to the JSON item.
An example can be seen in the code of the [ReadBinaryFile node](https://github.com/n8n-io/n8n/blob/master/packages/nodes-base/nodes/ReadBinaryFile.node.ts#L69-L83).
### Write nodes in TypeScript
All code of n8n is written in TypeScript, and so should the nodes be. That makes development easier and faster and avoids at least some bugs.
### Use the built-in request library
Some third-party services have their own libraries on npm which make it easier to create an integration. It can be quite tempting to use them. The problem is that with such a library you add not just one dependency but also all the dependencies of its dependencies. This means that more and more code gets added, has to get loaded, and can introduce security vulnerabilities, bugs, and so on. So please use the built-in module, which can be used like this:
```typescript
const response = await this.helpers.request(options);
```
That is simply using the npm package `request-promise-native` under the hood.
### Reuse parameter names
When a node can perform multiple operations, for example editing and deleting some kind of entity, it would need an entity ID for both operations. Do not call the parameters "editId" and "deleteId"; simply call both "id". n8n can handle multiple parameters with the same name without a problem as long as only one is visible. To make sure that is the case, the "displayOptions" can be used. By keeping the same name, the value can be kept if a user switches the operation, for example from "edit" to "delete".
### Create an "Options" parameter
Some nodes may need a lot of options. Add only the very important ones to the top level and for all others, create an "Options" parameter where they can be added if needed. This ensures that the interface stays clean and does not unnecessarily confuse people. A good example of that is the XML node.
### Follow existing parameter naming guidelines
There is not much of a guideline yet, but if your node can do multiple things, call the parameter which sets the behavior either "mode" (like the "Merge" and "XML" nodes) or "operation" (like most other nodes). If these operations can be done on different resources (like "User" or "Order"), create a "resource" parameter (like the "Pipedrive" and "Trello" nodes).
### Node Icons
Check existing node icons as a reference when you create your own ones. The resolution of an icon should be 60x60px and it should be saved as PNG.


# Data Structure
For "basic usage" it is not necessarily needed to understand how the data that
gets passed from one node to another is structured. However, it becomes important if you want to:
- create your own node
- write custom expressions
- use the Function or Function Item node
- get the most out of n8n in general
In n8n, all the data that is passed between nodes is an array of objects. It has the following structure:
```json
[
  {
    "json": {
      "dataKey": "dataValue"
    }
  }
]
```


# Database
By default, n8n uses SQLite to save credentials, past executions, and workflows. However,
n8n also supports MongoDB and PostgresDB.
## Shared Settings
## MongoDB
!> **WARNING**: Use PostgresDB if possible! MongoDB has problems saving large
amounts of data in a document, among other issues. So, support
may be dropped in the future.
To use MongoDB as the database, you can provide the following environment variables like
in the example below:
- `DB_TYPE=mongodb`
- `DB_MONGODB_CONNECTION_URL=<CONNECTION_URL>`
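A sketch with placeholder connection details (host, credentials, and database name are illustrative):

```bash
export DB_TYPE=mongodb
export DB_MONGODB_CONNECTION_URL="mongodb://user:password@localhost:27017/n8n"
# then start n8n as usual
```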
## PostgresDB
To use PostgresDB as the database, you can provide the following environment variables:
- `DB_TYPE=postgresdb`
- `DB_POSTGRESDB_DATABASE` (default: 'n8n')
- `DB_POSTGRESDB_HOST` (default: 'localhost')
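A sketch with placeholder values, using only the variables listed above (further connection variables such as user and password presumably follow the same `DB_POSTGRESDB_*` naming pattern):

```bash
export DB_TYPE=postgresdb
export DB_POSTGRESDB_DATABASE=n8n
export DB_POSTGRESDB_HOST=localhost
```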
## MySQL / MariaDB
The compatibility with MySQL/MariaDB has been tested. Even so, it is advisable to observe the operation of the application with this database, as it is a recently added option. If you spot any problems, feel free to submit a bug report or a pull request.
To use MySQL or MariaDB as the database, you can provide the following environment variables:
- `DB_TYPE=mysqldb` or `DB_TYPE=mariadb`
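A sketch with placeholder values; `DB_MYSQLDB_DATABASE` is an assumed variable name, by analogy with the PostgresDB ones:

```bash
export DB_TYPE=mariadb
# assumed variable name; the database name is a placeholder
export DB_MYSQLDB_DATABASE=n8n
```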
## SQLite
This is the default database that gets used if no other one is defined.
The database file is located at:
`~/.n8n/database.sqlite`
## Other Databases
Currently, only the databases mentioned above are supported. n8n internally uses
[TypeORM](https://typeorm.io), so adding support for the following databases
should not be too much work:
- CockroachDB
- Microsoft SQL
- Oracle
If you cannot use any of the currently supported databases for some reason and
you can code, we'd appreciate your support in the form of a pull request. If not,
you can request support here:
[https://community.n8n.io/c/feature-requests/cli](https://community.n8n.io/c/feature-requests/cli)


Detailed information about how to run n8n in Docker can be found in the README
of the [Docker Image](https://github.com/n8n-io/n8n/blob/master/docker/images/n8n/README.md).
A basic step-by-step example setup of n8n with docker-compose and Let's Encrypt is available on the
[Server Setup](server-setup.md) page.


### An integration exists already but a feature is missing. Can you add it?
Adding new functionality to an existing integration is normally not that complicated. So the chance is
high that we can do that quite fast. Post your feature request in the forum and we'll see
what we can do. [Feature Request](https://community.n8n.io/c/feature-requests/nodes)
## License
### Which license does n8n use?
n8n is [fair-code](http://faircode.io) licensed under [Apache 2.0 with Commons Clause](https://github.com/n8n-io/n8n/blob/master/packages/cli/LICENSE.md)
### Is n8n open-source?
No, according to the definition of the [Open Source Initiative (OSI)](https://opensource.org/osd),
n8n is not open-source. The reason is that the [Commons Clause](https://commonsclause.com), which takes away some rights, got attached to the Apache 2.0 license.
The source code is, however, open, and people and companies can use it for free.
What is not allowed is to make money directly with n8n. So you can, for example, not charge
other people to host or support n8n.
The support part is mainly there because it was already in the license and I am not a lawyer. So to make it simpler, I hereby grant anybody the right to do consulting/support without prior permission as long as it is less than 30,000 USD ($30k) per year.
If you have bigger things planned, simply write an email to [license@n8n.io](mailto:license@n8n.io).
### Why is n8n not open-source but [fair-code](http://faircode.io) licensed instead?
I love open-source and the idea that everybody can use and extend what I wrote, for free. But as much
as money cannot buy you love, love can sadly literally not buy you anything. In particular, it does not pay for rent, food, health insurance, and so on.
And even though people can theoretically contribute to a project, the main drivers which push a project
forward are most of the time very few people, normally the creators or the company behind the project. So to make sure that the project improves and stays alive long term, the Commons Clause got attached. It makes sure that no other person/company can make money directly with n8n, especially not in the way it is planned
to finance further development. For 99.99% of the people it will not make any difference at all, but at
the same time it protects the project.
As n8n itself depends on and uses a lot of other open-source projects, it is only fair and in our interest
to also help them. So it is planned to contribute a certain percentage of revenue/profit every month to these


A connection establishes a link between nodes to route data through the workflow.
## Node
A node is an entry point for retrieving data, a function to process data, or an exit for sending data. Data processing includes filtering, recomposing, and changing data. There can be one or several nodes for your API, service, or app. You can easily connect multiple nodes, which allows you to create simple and complex workflows intuitively.
For example, consider the Google Sheets node. It can be used to retrieve or write data to a Google Sheet.
## Trigger Node
A trigger node is a node that starts a workflow and supplies the initial data. What triggers it depends on the node. It could be the time, a webhook call, or an event from an external service.
For example, consider the Trello trigger node. When a Trello board gets updated, it will trigger a workflow to start using the data from the updated board.
## Workflow
A workflow is a canvas on which you can place and connect nodes. A workflow can be started manually or by trigger nodes. A workflow run ends when all active and connected nodes have processed their data.


The following keyboard shortcuts can currently be used:
- **Tab**: Open the "Node Creator". Type to filter and navigate with the arrow keys. To create the node, press "Enter"
## With node(s) selected
- **ArrowDown**: Select the sibling node below the current one
- **ArrowLeft**: Select node left of the current one


## Types
There are two main node types in n8n: Trigger nodes and Regular nodes.
### Trigger Nodes
The Trigger nodes start a workflow and supply the initial data. A workflow can contain multiple trigger nodes but with each execution, only one of them will execute. This is because the other trigger nodes would not have any input as they are the nodes from which the execution of the workflow starts.
### Regular Nodes
These nodes do the actual work. They can add, remove, and edit the data in the flow as well as request and send data to external APIs. They can do everything possible with Node.js in general.
## Credentials
External services need a way to identify and authenticate users. This data can range from an API key over an email/password combination to a very long multi-line private key, and is saved in n8n as credentials.
To make sure that the data is secure, it gets saved to the database encrypted. A random personal encryption key is used, which gets automatically generated on the first run of n8n and then saved under `~/.n8n/config`.
Nodes in n8n can then request that credential information. As an additional layer of security, credentials can only be accessed by node types which specifically have the right to do so.
## Expressions
With the help of expressions, it is possible to set node parameters dynamically by referencing other data. That can be data from the flow, nodes, the environment, or self-generated data. Expressions are normal text with placeholders (everything between {{...}}) that can execute JavaScript code, which offers access to special variables to access data.
An expression could look like this:
@ -57,20 +57,20 @@ The following special variables are available:
- **$runIndex**: The current run index (first time node gets executed it is 0, second time 1, ...)
- **$workflow**: Returns workflow metadata like: active, id, name
Normally it is not needed to write the JavaScript variables manually as they can be simply selected with the help of the Expression Editor.
Normally it is not needed to write the JavaScript variables manually as they can be selected with the help of the Expression Editor.
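As an illustration, an expression that references the output of a previous node could look like this (the node name "Webhook" and the property "name" are made up for this sketch):

```
My name is: {{$node["Webhook"].json["name"]}}
```

Everything between the `{{...}}` placeholders gets evaluated as JavaScript, so the text around it stays static while the placeholder gets resolved per item.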
## Parameters
Parameters can be set for most nodes in n8n. The values that get set define what exactly a node does.
Parameter values are static by default, and are always the same no matter what data the node processes. However, it is possible to set the values dynamically with the help of an Expression. Using Expressions, it is possible to make the parameter value dependent on other factors like the data of flow or parameters of other nodes.
Parameter values are static by default and are always the same no matter what kind of data the node processes. However, it is possible to set the values dynamically with the help of an Expression. Using Expressions, it is possible to make the parameter value dependent on other factors like the data of the flow or the parameters of other nodes.
More information about it can be found under [Expressions](#expressions).
## Pausing Node
Sometimes when creating and debugging a workflow it is helpful to not execute some specific nodes. To make that as simple as possible and not having to disconnect each node, it is possible to pause them. When a node gets paused the data simple passes through the node without being changed.
Sometimes when creating and debugging a workflow, it is helpful to not execute specific nodes. To do that without disconnecting each node, you can pause them. When a node gets paused, the data passes through the node without being changed.
There are two ways to pause a node. Either pressing the pause button which gets displayed above the node when hovering over it. Or by selecting the node and pressing “d”.
There are two ways to pause a node. You can either press the pause button which gets displayed above the node when hovering over it or select the node and press “d”.
View File
@ -1,22 +1,22 @@
# Nodes
## Function and Function Item Node
## Function and Function Item Nodes
These are the most powerful nodes in n8n. With these, almost everything can be done if you know how to
write JavaScript code. Both nodes work very similarly. They simply give you access to the incoming data
write JavaScript code. Both nodes work very similarly. They give you access to the incoming data
and you can manipulate it.
### Difference between both nodes
The difference is that the code of the Function-Node does get executed only once and it receives the
full items (JSON and binary data) as an array and expects as return value again an array of items. The items
returned can be totally different from the incoming ones. So is it not just possible to remove and edit
existing items it is also possible to add or return totally new ones.
The difference is that the code of the Function node gets executed only once. It receives the
full items (JSON and binary data) as an array and expects an array of items as a return value. The items
returned can be totally different from the incoming ones. So it is not only possible to remove and edit
existing items, but also to add or return totally new ones.
The code of the Function Item-Node on the other does get executed once for every item. It receives
as input one item at a time and also just the JSON data. As a return value, it again expects the JSON data
of one single item. That makes it possible to very easily add, remove and edit JSON properties of items
The code of the Function Item node on the other hand gets executed once for every item. It receives
one item at a time as input, and only its JSON data. As a return value, it expects the JSON data
of one single item. That makes it possible to add, remove and edit JSON properties of items
but it is not possible to add new or remove existing items. Accessing and changing binary data is only
possible via the methods `getBinaryData` and `setBinaryData`.
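The two styles can be sketched outside n8n like this (a minimal mock for illustration; the data and property names are made up, only the shape of `items` mirrors what n8n provides):

```javascript
// Sample input in n8n's item structure (illustrative data).
const items = [{ json: { name: "test1" } }, { json: { name: "test2" } }];

// Function node style: the code runs ONCE and receives all items as an array.
// It may remove, edit, add or return completely new items.
function functionNodeCode(items) {
  return items.map((item, index) => ({ json: { ...item.json, index } }));
}

// Function Item node style: the code runs once PER item and only sees its JSON.
// Properties can be added, removed and edited, but items cannot be added or dropped.
function functionItemNodeCode(json) {
  return { ...json, processed: true };
}

console.log(functionNodeCode(items));
console.log(functionItemNodeCode(items[0].json));
```

The first function could just as well return an array of a different length; the second one always maps exactly one input item to one output item.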
@ -28,9 +28,9 @@ return a promise which resolves accordingly.
#### Variable: items
It contains all the items the node received as input.
It contains all the items that the node received as input.
Information about how the data is structured can be found on the page [Data Structure](data-structure.md)
Information about how the data is structured can be found on the page [Data Structure](data-structure.md).
The data can be accessed and manipulated like this:
@ -64,18 +64,18 @@ return newItems;
With `$item` it is possible to access the data of parent nodes. That can be the item data but also
the parameters. It expects as input an index of the item the data should be returned for. This is
needed because for each item the data returned can be different. This is probably obvious for the
item data itself but maybe less for data like parameters. The reason why it is also needed there is
item data itself but maybe less for data like parameters. The reason why it is also needed there is
that they may contain an expression. Expressions get always executed of the context for an item.
If that would not be the case, for example, the Email Send-Node not would be able to send multiple
If that would not be the case, for example, the Email Send node would not be able to send multiple
emails at once to different people. Instead, the same person would receive multiple emails.
The index is 0 based. So `$item(0)` will return the first item, `$item(1)` the second one, ...
By default will the item of the last run of the node be returned. So if the referenced node did run
3x (its last runIndex is 2) and the current node runs the first time (its runIndex is 0) will the
data of runIndex 2 of the referenced node be returned.
By default the item of the last run of the node will be returned. So if the referenced node ran
3x (its last runIndex is 2) and the current node runs the first time (its runIndex is 0) the
data of runIndex 2 of the referenced node will be returned.
For more information about what data can be accessed via $node check [here](#variable-node).
For more information about what data can be accessed via $node, check [here](#variable-node).
Example:
@ -96,9 +96,9 @@ const channel = $item(9).$node["Slack"].parameter["channel"];
#### Method: $items(nodeName?: string, outputIndex?: number, runIndex?: number)
Gives access to all the items of current or parent nodes. If no parameters get supplied
Gives access to all the items of current or parent nodes. If no parameters get supplied,
it returns all the items of the current node.
If a node-name is given, it returns the items the node did output on it`s first output
If a node-name is given, it returns the items the node output on its first output
(index: 0, most nodes only have one output, exceptions are IF and Switch-Node) on
its last run.
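The signature can be illustrated with a minimal mock (inside n8n the method is provided; the node name and data below are made up, and the mock only covers calls with a node name):

```javascript
// Mocked run data: the IF node has two outputs (true/false branch),
// which makes the outputIndex parameter visible.
const mockRuns = {
  IF: [
    [
      [{ json: { value: 1 } }], // output 0 ("true" branch) of run 0
      [{ json: { value: 2 } }], // output 1 ("false" branch) of run 0
    ],
  ],
};

function $items(nodeName, outputIndex = 0, runIndex) {
  const runs = mockRuns[nodeName];
  // Without an explicit runIndex, the node's last run gets used.
  const run = runIndex === undefined ? runs[runs.length - 1] : runs[runIndex];
  return run[outputIndex];
}

console.log($items("IF"));    // items of output 0 of the node's last run
console.log($items("IF", 1)); // items of the second output instead
```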
@ -184,13 +184,13 @@ return items;
Gives access to the static workflow data.
It is possible to save data directly with the workflow. This data should, however, be very small.
A common use case is to for example to save a timestamp of the last item that got processed from
an RSS-Feed or database. It will always return an object. Properties can then read, deleted or
set on that object. When the workflow execution did succeed n8n will check automatically if data
changed and will save it if necessary.
an RSS-Feed or database. It will always return an object. Properties can then be read, deleted or
set on that object. When the workflow execution succeeds, n8n will check automatically if the data
has changed and will save it if necessary.
There are two types of static data. The "global" and the "node" one. Global static data is the
same in the whole workflow. And every node in the workflow can access it. The node static data
, however, is different for every node and only the node which did set it can retrieve it again.
, however, is different for every node and only the node which set it can retrieve it again.
Example:
@ -210,9 +210,9 @@ staticData.lastExecution = new Date().getTime();
delete staticData.lastExecution;
```
It is important to know that static data can not be read and written when testing via the UI.
The data will there always be empty and changes will not be persisted. Only when a workflow
is active and it gets called by a Trigger or Webhook will the static data be saved.
It is important to know that the static data can not be read and written when testing via the UI.
The data there will always be empty and the changes will not persist. Only when a workflow
is active and gets called by a Trigger or Webhook will the static data be saved.
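The two scopes can be sketched with a mock of `getWorkflowStaticData` (not the real implementation; inside n8n the returned objects get persisted with the workflow, and the property names below are made up):

```javascript
// Illustration only: one object per scope, handed out by reference.
const staticDataStore = { global: {}, node: {} };
function getWorkflowStaticData(type) {
  return staticDataStore[type];
}

// "global": the same object for every node in the workflow.
const globalData = getWorkflowStaticData("global");
globalData.lastExecution = new Date().getTime();

// "node": an object which only the node that set it can read again.
const nodeData = getWorkflowStaticData("node");
nodeData.lastProcessedId = 123;

// Properties can also be deleted again.
delete globalData.lastExecution;
```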
@ -244,4 +244,4 @@ Sets all the binary data (all keys) of the item which gets currently processed.
#### Method: getWorkflowStaticData(type)
As described above for Function-Node.
As described above for the Function node.
View File
@ -3,13 +3,13 @@
## Give n8n a spin
To spin up n8n to have a look you can run:
To spin up n8n, you can run:
```bash
npx n8n
```
It will then download everything which is needed and start n8n.
It will then download everything that is needed and start n8n.
You can then access n8n by opening:
[http://localhost:5678](http://localhost:5678)
@ -17,7 +17,7 @@ You can then access n8n by opening:
## Start with docker
To just play around a little bit you can start n8n with docker.
To play around with n8n, you can also start it using docker:
```bash
docker run -it --rm \
@ -26,7 +26,7 @@ docker run -it --rm \
n8nio/n8n
```
Be aware that all data will be lost once the docker container got removed. To
Be aware that all the data will be lost once the docker container gets removed. To
persist the data, mount the `~/.n8n` folder:
```bash
@ -37,5 +37,5 @@ docker run -it --rm \
n8nio/n8n
```
More information about the Docker setup can be found the README of the
[Docker Image](https://github.com/n8n-io/n8n/blob/master/docker/images/n8n/README.md)
More information about the Docker setup can be found in the README file of the
[Docker Image](https://github.com/n8n-io/n8n/blob/master/docker/images/n8n/README.md).
View File
@ -1,9 +1,9 @@
# Security
By default, n8n can be accessed by everybody. This is OK if you have it only running
locally but if you deploy it on a server which is accessible from the web you have
to make sure that n8n is protected!
Right now we have very basic protection via basic-auth in place. It can be activated
By default, n8n can be accessed by everybody. This is okay if you only have it running
locally but if you deploy it on a server which is accessible from the web, you have
to make sure that n8n is protected.
Right now we have very basic protection in place using basic-auth. It can be activated
by setting the following environment variables:
```bash
View File
@ -1,9 +1,9 @@
# Sensitive Data via File
To avoid passing sensitive information via environment variables "_FILE" may be
To avoid passing sensitive information via environment variables, "_FILE" may be
appended to some environment variables. It will then load the data from a file
with the given name. That makes it possible to load data easily from
Docker- and Kubernetes-Secrets.
Docker and Kubernetes secrets.
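As a sketch, for the basic-auth password this could look as follows (assuming that variable is among those supporting the suffix; the file path is made up and would in practice be a mounted Docker or Kubernetes secret):

```shell
# Store the secret in a file instead of in the environment.
printf 'my-secret-password' > /tmp/n8n-password

# Point n8n at the file via the "_FILE" variant of the variable.
export N8N_BASIC_AUTH_PASSWORD_FILE=/tmp/n8n-password

# n8n would then read the secret from that file on startup.
cat "$N8N_BASIC_AUTH_PASSWORD_FILE"
```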
The following environment variables support file input:
View File
@ -1,11 +1,11 @@
# Server Setup
!> ***Important***: Make sure that you secure your n8n instance like described under [Security](security.md)!
!> ***Important***: Make sure that you secure your n8n instance as described under [Security](security.md).
## Example setup with docker-compose
If you have already installed docker and docker-compose you can directly start with step 4.
If you have already installed docker and docker-compose, then you can directly start with step 4.
### 1. Install Docker
@ -150,9 +150,9 @@ SSL_EMAIL=user@example.com
### 7. Create data folder
Create the folder which is defined as `DATA_FOLDER`. In the example
above it is `/root/n8n/`.
above, it is `/root/n8n/`.
In that folder will the database file from SQLite be saved and also the encryption key.
In that folder, the database file from SQLite as well as the encryption key will be saved.
The folder can be created like this:
```
@ -176,7 +176,7 @@ sudo docker-compose stop
### 9. Done
n8n will now be reachable via the above define subdomain + domain combination.
n8n will now be reachable via the above-defined subdomain + domain combination.
The above example would result in: https://n8n.example.com
n8n will only be reachable via https not via http!
n8n will only be reachable via https and not via http.
View File
@ -22,14 +22,11 @@ n8n start
## Start with tunnel
!> **WARNING**: This is only meant for local development and testing. Should not be used in production!
!> **WARNING**: This is only meant for local development and testing. It should not be used in production!
To be able to use webhooks which all triggers of external services like Github
rely on n8n has to be reachable from the web. To make that easy n8n has a
special tunnel service (uses this code: [https://github.com/localtunnel/localtunnel](https://github.com/localtunnel/localtunnel)) which redirects requests from our servers to your local
n8n instance.
To be able to use webhooks for trigger nodes of external services like GitHub, n8n has to be reachable from the web. To make that easy, n8n has a special tunnel service, which redirects requests from our servers to your local n8n instance (uses this code: [https://github.com/localtunnel/localtunnel](https://github.com/localtunnel/localtunnel)).
To use it simply start n8n with `--tunnel`
To use it, simply start n8n with `--tunnel`:
```bash
n8n start --tunnel
View File
@ -1,7 +1,7 @@
# Start Workflows via CLI
Workflows can not just be started by triggers, webhooks or manually via the
Editor it is also possible to start them directly via the CLI.
Workflows cannot only be started by triggers, webhooks or manually via the
Editor. It is also possible to start them directly via the CLI.
Execute a saved workflow by its ID:
View File
@ -3,12 +3,12 @@
## Activate
Activating a workflow means that the Trigger & Webhook nodes get activated and can trigger a workflow to run. By default are all newly created workflows deactivated. That means that even if a Trigger-Node like the Cron-Node should start a workflow because a predefined time is reached it will not unless the workflow gets activated. It is only possible to activate a workflow which contains a Trigger or a Webhook node.
Activating a workflow means that the Trigger and Webhook nodes get activated and can trigger a workflow to run. By default, all newly created workflows are deactivated. That means that even if a Trigger node like the Cron node should start a workflow because a predefined time is reached, it will not run unless the workflow gets activated. It is only possible to activate a workflow which contains a Trigger or a Webhook node.
## Data Flow
Nodes do not only process one "item" they process multiple ones. So if the Trello-Node is set to "Create-Card" and it has an expression set for "Name" to be set depending on "name"-property it will create a card for each item, always choosing the name-property-value of the current one.
Nodes do not only process one "item", they process multiple ones. So if the Trello node is set to "Create-Card" and has an expression set for "Name" that depends on the "name" property, it will create a card for each item, always using the "name" value of the current one.
This data would, for example, create two boards. One named "test1", the other one named "test2":
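In n8n's item structure, such input could be sketched like this (an illustrative mock of what the node does, not its real implementation):

```javascript
// Two incoming items; the expression for "Name" reads the "name" property
// of whichever item is currently being processed.
const items = [
  { json: { name: "test1" } },
  { json: { name: "test2" } },
];

// Conceptually, the node performs its operation once per item:
const created = items.map((item) => `Created board: ${item.json.name}`);
console.log(created); // ["Created board: test1", "Created board: test2"]
```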
@ -26,7 +26,7 @@ This data would, for example, create two boards. One named "test1" the other one
## Error Workflows
For each workflow, an optional "Error Workflow" can be set. It gets executed in case the execution of the workflow fails. That makes it possible to for example inform the user via Email or Slack if something goes wrong. The same "Error Workflow" can be set on multiple workflows.
For each workflow, an optional "Error Workflow" can be set. It gets executed in case the execution of the workflow fails. That makes it possible to, for instance, inform the user via Email or Slack if something goes wrong. The same "Error Workflow" can be set on multiple workflows.
The only difference between a regular workflow and an "Error Workflow" is that it contains an "Error Trigger" node. So it is important to make sure that this node gets created before setting a workflow as "Error Workflow".
@ -56,29 +56,29 @@ The "Error Trigger" node will trigger in case the execution fails and receives i
```
All information is always present except:
- **execution.id**: Only present when the execution gets saved in the Database
- **execution.url**: Only present when the execution gets saved in the Database
- **execution.retryOf**: Only present when the execution is a retry of a previously failed one
- **execution.id**: Only present when the execution gets saved in the database
- **execution.url**: Only present when the execution gets saved in the database
- **execution.retryOf**: Only present when the execution is a retry of a previously failed execution
### Setting Error Workflow
An "Error Workflow" can be set in the Workflow Settings which can be accessed by pressing the "Workflow" button in the menu on the menu on the left side. The last option is "Settings". In the then appearing window, the "Error Workflow" can be selected via the Dropdown "Error Workflow".
An "Error Workflow" can be set in the Workflow Settings which can be accessed by pressing the "Workflow" button in the menu on the left side. The last option is "Settings". In the window that appears, the "Error Workflow" can be selected via the "Error Workflow" dropdown.
## Share Workflows
All workflows are simple JSON and can so be shared very easily.
All workflows are JSON and can be shared very easily.
There are multiple ways to download a workflow as JSON to then share it with other people via Email, Slack, Skype, Dropbox, …
1. Pressing "Download" under the Workflow button in the menu on the left. It then downloads the workflow as JSON file
1. Selecting the nodes in the editor which should be exported and then copy them (Ctrl + c). The nodes get then saved as JSON in the clipboard and can be pasted wherever desired (Ctrl + v).
1. Press the "Download" button under the Workflow menu in the sidebar on the left. It then downloads the workflow as a JSON file.
1. Select the nodes in the editor which should be exported and then copy them (Ctrl + c). The nodes then get saved as JSON in the clipboard and can be pasted wherever desired (Ctrl + v).
Importing that JSON representation again into n8n is as easy and can also be done in different ways:
1. Pressing "Import from File" or "Import from URL" under the Workflow button in the menu on the left.
1. Copying the JSON workflow to the clipboard (Ctrl + c) and then simply pasting it directly into the editor (Ctrl + v).
1. Press "Import from File" or "Import from URL" under the Workflow menu in the sidebar on the left.
1. Copy the JSON workflow to the clipboard (Ctrl + c) and then simply paste it directly into the editor (Ctrl + v).
## Workflow Settings
@ -88,12 +88,12 @@ On each workflow, it is possible to set some custom settings and overwrite some
### Error Workflow
Workflow to run in case the execution of the current workflow fails. More information in section [Error Workflows](#error-workflows)
Workflow to run in case the execution of the current workflow fails. More information in section [Error Workflows](#error-workflows).
### Timezone
The timezone to use in the current workflow. If not set the global Timezone (by default "New York" gets used). This is for example important for the Cron Trigger node.
The timezone to use in the current workflow. If not set, the global Timezone gets used (which defaults to "New York"). For instance, this is important for the Cron Trigger node.
### Save Data Error Execution