Initial open source release

This commit is contained in:
Denis Redozubov 2020-12-09 16:20:24 +03:00
commit 4939206d33
416 changed files with 33685 additions and 0 deletions

3
.gitattributes vendored Normal file
View File

@ -0,0 +1,3 @@
* linguist-vendored
*.hs linguist-vendored=false
*.rs linguist-vendored=false

63
.github/workflows/build_docker.yaml vendored Normal file
View File

@ -0,0 +1,63 @@
name: Octopod Docker Image
on:
push:
branches:
- master
pull_request:
branches:
- master
- develop
jobs:
build:
name: build-docker
runs-on: ubuntu-latest
timeout-minutes: 600
steps:
- uses: actions/checkout@v2
with:
repo: Aviora/dm
- name: Install Nix
uses: cachix/install-nix-action@v12
- name: Login to Cachix
uses: cachix/cachix-action@v8
with:
name: octopod
signingKey: ${{ secrets.CACHIX_SIGNING_KEY }}
- name: Build Docker Images
run: |
# enable required features (see https://github.com/cachix/install-nix-action/issues/19)
mkdir -p ~/.config/nix
echo "system-features = kvm" >> ~/.config/nix/nix.conf
# build docker images
./build.sh build
- name: Login to DockerHub
id: login-docker-hub
if: github.ref == 'refs/heads/master'
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Push Docker Images to DockerHub
if: github.ref == 'refs/heads/master'
run: |
# push docker images to DockerHub
image_name=`cat octo-docker | docker load | awk '{print $3}'`
docker tag $image_name typeable/octo:latest
docker push typeable/octo:latest
image_name=`cat octopod-server-docker | docker load | awk '{print $3}'`
docker tag $image_name typeable/octopod:latest
docker push typeable/octopod:latest
- name: Clean up
if: always()
continue-on-error: true
run: |
docker logout ${{ steps.login-docker-hub.outputs.registry }}

68
.github/workflows/build_octo_cli.yaml vendored Normal file
View File

@ -0,0 +1,68 @@
name: octo CLI
on:
push:
branches:
- master
pull_request:
branches:
- master
- develop
jobs:
macOS:
name: Create macOS octo CLI Release (Stack)
runs-on: macos-latest
steps:
- uses: actions/checkout@v1
- name: Cache stack dependencies
uses: actions/cache@v2
with:
path: ~/.stack
key: octo-cli-stack-${{ runner.os }}
# TODO: Remove this step once https://github.com/actions/cache/issues/445 is resolved.
- name: Fix macOS cache bug
run: rm -rf ~/.stack/setup-exe-cache
- name: Build
run: stack build octo-cli --local-bin-path out --copy-bins
- uses: actions/upload-artifact@v2
with:
name: octo-cli-macos
path: out/octo
linux:
name: Create Linux octo CLI Release (Nix)
runs-on: ubuntu-latest
timeout-minutes: 600
steps:
- uses: actions/checkout@v1
- uses: cachix/install-nix-action@v12
- uses: cachix/cachix-action@v8
with:
name: octopod
signingKey: "${{ secrets.CACHIX_SIGNING_KEY }}"
- name: Build
run: nix-build nix/octo.nix
- uses: actions/upload-artifact@v2
with:
name: octo-cli-linux
path: result/bin/octo
release:
name: "Release"
if: github.ref == 'refs/heads/master'
runs-on: "ubuntu-latest"
needs: [macOS, linux]
steps:
- uses: actions/download-artifact@v2
- name: Zip
run: |
chmod +x octo-cli-macos/octo
zip octo-cli-macos octo-cli-macos/octo
chmod +x octo-cli-linux/octo
zip octo-cli-linux octo-cli-linux/octo
- uses: "marvinpinto/action-automatic-releases@latest"
with:
repo_token: "${{ secrets.GITHUB_TOKEN }}"
automatic_release_tag: "latest"
prerelease: true
title: "Pre-Release"
files: |
*.zip

View File

@ -0,0 +1,23 @@
name: Documentation
on:
push:
branches:
- master
pull_request:
schedule:
- cron: "0 0 * * *"
jobs:
build:
name: Check Markdown Documentation
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v1
- name: Set up linter
run: |
yarn add remark-cli remark-lint-mdash-style https://github.com/typeable/remark-validate-links#anchors remark-preset-lint-recommended remark-lint-no-dead-urls
- name: Run linter
run: |
yarn run remark -f -u validate-links -u remark-lint-mdash-style -u remark-lint-final-newline -u remark-lint-list-item-bullet-indent -u remark-lint-no-auto-link-without-protocol -u remark-lint-no-blockquote-without-marker -u remark-lint-ordered-list-marker-style -u remark-lint-no-literal-urls -u remark-lint-hard-break-spaces -u remark-lint-no-duplicate-definitions -u remark-lint-no-heading-content-indent -u remark-lint-no-inline-padding -u remark-lint-no-shortcut-reference-image -u remark-lint-no-shortcut-reference-link -u remark-lint-no-undefined-references -u remark-lint-no-unused-definitions -u remark-lint-no-dead-urls docs README.md

9
.gitignore vendored Normal file
View File

@ -0,0 +1,9 @@
.stack-work/
package.yaml
result
dms-docker
*~
dist-newstyle
octopod-config.json
frontend-result
octopod-css/node_modules

318
.stylish-haskell.yaml vendored Normal file
View File

@ -0,0 +1,318 @@
# stylish-haskell configuration file
# ==================================
# The stylish-haskell tool is mainly configured by specifying steps. These steps
# are a list, so they have an order, and one specific step may appear more than
# once (if needed). Each file is processed by these steps in the given order.
steps:
# Format record definitions. This is disabled by default.
#
# You can control the layout of record fields. The only rules that can't be configured
# are these:
#
# - "|" is always aligned with "="
# - "," in fields is always aligned with "{"
# - "}" is likewise always aligned with "{"
#
# - records:
# # How to format equals sign between type constructor and data constructor.
# # Possible values:
# # - "same_line" -- leave "=" AND data constructor on the same line as the type constructor.
# # - "indent N" -- insert a new line and N spaces from the beginning of the next line.
# equals: "same_line"
# # How to format first field of each record constructor.
# # Possible values:
# # - "same_line" -- "{" and first field goes on the same line as the data constructor.
# # - "indent N" -- insert a new line and N spaces from the beginning of the data constructor
# first_field: "indent 2"
# # How many spaces to insert between the column with "," and the beginning of the comment in the next line.
# field_comment: 2
# # # How many spaces to insert before "deriving" clause. Deriving clauses are always on separate lines.
# deriving: 2
# Align the right hand side of some elements. This is quite conservative
# and only applies to statements where each element occupies a single
# line. All default to true.
- simple_align:
cases: false
top_level_patterns: false
records: false
# Import cleanup
- imports:
# There are different ways we can align names and lists.
#
# - global: Align the import names and import list throughout the entire
# file.
#
# - file: Like global, but don't add padding when there are no qualified
# imports in the file.
#
# - group: Only align the imports per group (a group is formed by adjacent
# import lines).
#
# - none: Do not perform any alignment.
#
# Default: global.
align: global
# The following options affect only import list alignment.
#
# List align has following options:
#
# - after_alias: Import list is aligned with end of import including
# 'as' and 'hiding' keywords.
#
# > import qualified Data.List as List (concat, foldl, foldr, head,
# > init, last, length)
#
# - with_alias: Import list is aligned with start of alias or hiding.
#
# > import qualified Data.List as List (concat, foldl, foldr, head,
# > init, last, length)
#
# - with_module_name: Import list is aligned `list_padding` spaces after
# the module name.
#
# > import qualified Data.List as List (concat, foldl, foldr, head,
# init, last, length)
#
# This is mainly intended for use with `pad_module_names: false`.
#
# > import qualified Data.List as List (concat, foldl, foldr, head,
# init, last, length, scanl, scanr, take, drop,
# sort, nub)
#
# - new_line: Import list starts always on new line.
#
# > import qualified Data.List as List
# > (concat, foldl, foldr, head, init, last, length)
#
# Default: after_alias
list_align: after_alias
# Right-pad the module names to align imports in a group:
#
# - true: a little more readable
#
# > import qualified Data.List as List (concat, foldl, foldr,
# > init, last, length)
# > import qualified Data.List.Extra as List (concat, foldl, foldr,
# > init, last, length)
#
# - false: diff-safe
#
# > import qualified Data.List as List (concat, foldl, foldr, init,
# > last, length)
# > import qualified Data.List.Extra as List (concat, foldl, foldr,
# > init, last, length)
#
# Default: true
pad_module_names: false
# Long list align style takes effect when import is too long. This is
# determined by 'columns' setting.
#
# - inline: This option will put as much specs on same line as possible.
#
# - new_line: Import list will start on new line.
#
# - new_line_multiline: Import list will start on new line when it's
# short enough to fit to single line. Otherwise it'll be multiline.
#
# - multiline: One line per import list entry.
# Type with constructor list acts like single import.
#
# > import qualified Data.Map as M
# > ( empty
# > , singleton
# > , ...
# > , delete
# > )
#
# Default: inline
long_list_align: new_line
# Align empty list (importing instances)
#
# Empty list align has following options
#
# - inherit: inherit list_align setting
#
# - right_after: () is right after the module name:
#
# > import Vector.Instances ()
#
# Default: inherit
empty_list_align: right_after
# List padding determines indentation of import list on lines after import.
# This option affects 'long_list_align'.
#
# - <integer>: constant value
#
# - module_name: align under start of module name.
# Useful for 'file' and 'group' align settings.
#
# Default: 4
list_padding: 2
# Separate lists option affects formatting of import list for type
# or class. The only difference is single space between type and list
# of constructors, selectors and class functions.
#
# - true: There is single space between Foldable type and list of it's
# functions.
#
# > import Data.Foldable (Foldable (fold, foldl, foldMap))
#
# - false: There is no space between Foldable type and list of it's
# functions.
#
# > import Data.Foldable (Foldable(fold, foldl, foldMap))
#
# Default: true
separate_lists: false
# Space surround option affects formatting of import lists on a single
# line. The only difference is single space after the initial
# parenthesis and a single space before the terminal parenthesis.
#
# - true: There is single space associated with the enclosing
# parenthesis.
#
# > import Data.Foo ( foo )
#
# - false: There is no space associated with the enclosing parenthesis
#
# > import Data.Foo (foo)
#
# Default: false
space_surround: false
# Language pragmas
- language_pragmas:
# We can generate different styles of language pragma lists.
#
# - vertical: Vertical-spaced language pragmas, one per line.
#
# - compact: A more compact style.
#
# - compact_line: Similar to compact, but wrap each line with
# `{-#LANGUAGE #-}'.
#
# Default: vertical.
style: vertical
# Align affects alignment of closing pragma brackets.
#
# - true: Brackets are aligned in same column.
#
# - false: Brackets are not aligned together. There is only one space
# between actual import and closing bracket.
#
# Default: true
align: false
# stylish-haskell can detect redundancy of some language pragmas. If this
# is set to true, it will remove those redundant pragmas. Default: true.
remove_redundant: true
# Language prefix to be used for pragma declaration, this allows you to
# use other options non case-sensitive like "language" or "Language".
# If a non correct String is provided, it will default to: LANGUAGE.
language_prefix: LANGUAGE
# Replace tabs by spaces. This is disabled by default.
# - tabs:
# # Number of spaces to use for each tab. Default: 8, as specified by the
# # Haskell report.
# spaces: 8
# Remove trailing whitespace
- trailing_whitespace: {}
# Squash multiple spaces between the left and right hand sides of some
# elements into single spaces. Basically, this undoes the effect of
# simple_align but is a bit less conservative.
- squash: {}
# A common setting is the number of columns (parts of) code will be wrapped
# to. Different steps take this into account.
#
# Set this to null to disable all line wrapping.
#
# Default: 80.
columns: 80
# By default, line endings are converted according to the OS. You can override
# preferred format here.
#
# - native: Native newline format. CRLF on Windows, LF on other OSes.
#
# - lf: Convert to LF ("\n").
#
# - crlf: Convert to CRLF ("\r\n").
#
# Default: native.
newline: native
# Sometimes, language extensions are specified in a cabal file or from the
# command line instead of using language pragmas in the file. stylish-haskell
# needs to be aware of these, so it can parse the file correctly.
#
# No language extensions are enabled by default.
language_extensions:
- BangPatterns
- ConstraintKinds
- CPP
- DataKinds
- DefaultSignatures
- DeriveFoldable
- DeriveFunctor
- DeriveGeneric
- DeriveTraversable
- DerivingVia
- DuplicateRecordFields
- ExistentialQuantification
- ExplicitNamespaces
- FlexibleInstances
- FunctionalDependencies
- GADTs
- GeneralizedNewtypeDeriving
- KindSignatures
- LambdaCase
- MultiParamTypeClasses
- MultiWayIf
- NamedFieldPuns
- NoMonadFailDesugaring
- NumDecimals
- OverloadedStrings
- PartialTypeSignatures
- PolyKinds
- QuantifiedConstraints
- QuasiQuotes
- RankNTypes
- RecordWildCards
- RecursiveDo
- RoleAnnotations
- ScopedTypeVariables
- StandaloneDeriving
- TemplateHaskell
- TupleSections
- TypeApplications
- TypeFamilies
- TypeInType
- TypeOperators
- UndecidableInstances
- ViewPatterns
- OverloadedLabels
# Attempt to find the cabal file in ancestors of the current directory, and
# parse options (currently only language extensions) from that.
#
# Default: true
cabal: false

1
Caddyfile vendored Symbolic link
View File

@ -0,0 +1 @@
Caddyfile2

42
Caddyfile1 vendored Normal file
View File

@ -0,0 +1,42 @@
localhost:8000
mime .css text/css
rewrite /static/styles {
r (.*)
to /octopod-css/production/styles/{1}
}
mime .js application/javascript
rewrite /static/vendors {
r (.*)
to /octopod-css/production/vendors/{1}
}
rewrite /static/scripts {
r (.*)
to /octopod-css/production/scripts/{1}
}
rewrite /static/images {
r (.*)
to /octopod-css/production/images/{1}
}
proxy /api localhost:3002 {
transparent
}
mime .json application/json
rewrite /config.json /octopod-config.json
rewrite /ghcjs /frontend-result/bin/frontend.jsexe/index.html
proxy / localhost:3003 {
transparent
websocket
except /octopod-css/ /frontend-result/ /octopod-config.json
}
log / stdout

27
Caddyfile2 vendored Normal file
View File

@ -0,0 +1,27 @@
http://localhost:8000
file_server
@production {
path_regexp production /static/(.*)
}
rewrite @production /octopod-css/production/{http.regexp.production.1}
reverse_proxy /api localhost:3002
rewrite /config.json /octopod-config.json
rewrite /ghcjs /frontend-result/bin/frontend.jsexe/index.html
@3003 {
not path /octopod-css/* /frontend-result/* /octopod-config.json
}
reverse_proxy @3003 localhost:3003
log {
output stdout
format single_field common_log
}

3
ChangeLog.md vendored Normal file
View File

@ -0,0 +1,3 @@
# Changelog for octopod
## Version 1.0

73
Development_guide.md vendored Normal file
View File

@ -0,0 +1,73 @@
# Development guide
## Git flow
`master` contains the latest "release version" only.
All development should be done in the `develop` branch.
Feature PRs target the `develop` branch and are merged with all commits **squashed**. As a result, every commit in the `develop` branch corresponds to exactly one feature or bug fix.
When a release is ready, the `develop` branch is merged into the `master` branch using **rebase and merge**, so every commit in `master` is also a single feature or bug fix. Merging to `master` triggers a CI script that collects all commits since the last merge and creates a new release with a changelog of those commits.
## Building
### Nix Installation
Everything is built with [nix](https://nixos.org). To build the project you will need to install it.
```bash
curl https://nixos.org/nix/install | sh
```
### Nix cache
#### Reflex platform cache
To speed up initial project builds, set up the Reflex Platform binary nix cache by appending the following to `/etc/nix/nix.conf`:
```
binary-caches = https://cache.nixos.org https://nixcache.reflex-frp.org
binary-cache-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= ryantrinkle.com-1:JJiAKaRv9mWgpVAz8dwewnZe0AzzEAzPkagE9SP5NWI=
binary-caches-parallel-connections = 40
```
#### Octopod cache
The Octopod cache will also be useful to speed up builds:
1. Install [Cachix](https://cachix.org):
```bash
nix-env -iA cachix -f https://cachix.org/api/v1/install
```
2. Add cache:
```bash
cachix use octopod
```
## Development
We have written a `Makefile` with common targets used during development.
### Building
- `build-backend` builds a release backend executable.
- `build-octo-cli` builds a release octo CLI executable. NOTE: this is not the octo CLI executable that is used for distribution, but the dependencies are close enough for development purposes.
- `build-frontend` builds the frontend release.
### Development
For development, we have set up `ghcid` commands that rebuild the project every time you make a change. The targets should be self-explanatory:
- `ghcid-backend`
- `ghcid-cli`
- `ghcid-frontend`
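For example, a typical development loop might look like this (a minimal sketch using the `Makefile` targets above; run each command in its own terminal):
```bash
# start a backend development loop: ghcid reloads the cabal REPL
# and reports compile errors on every file change
make ghcid-backend

# in a second terminal, do the same for the frontend
make ghcid-frontend
```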
### Frontend proxy
The frontend should be accessed through a proxy. We have set up [caddy](https://caddyserver.com) configs to ease development. You will need to place an `octopod-config.json` file at the root of the repository containing a [config](../../charts/octopod/templates/octopod-nginx-configmap.yaml#L15-L20). `app_auth` can be an arbitrary string; it will not affect anything when running locally.
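As a rough sketch, you could create the file like this (the values below are placeholders, not working credentials; the field names follow the config template linked above):
```bash
# create a local octopod-config.json at the repository root
# (placeholder values; adjust to match your local proxy setup)
cat > octopod-config.json <<EOF
{
  "app_url": "http://localhost:8000",
  "ws_url": "ws://localhost:8000",
  "app_auth": "Basic cGxhY2Vob2xkZXI="
}
EOF
```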
### Stack
For convenience, the repo currently also contains a `stack.yaml` that can be used for development. It is only used to build the macOS octo CLI release, but it supports building both the octo CLI and the _Octopod Server_ in an environment close enough to the release environment to be useful during development if you prefer stack.
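If you do use stack, a minimal sketch of building the CLI (mirroring the invocation used by the macOS CI workflow) is:
```bash
# build octo-cli with stack and copy the resulting binary to ./out
stack build octo-cli --local-bin-path out --copy-bins
```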

30
LICENSE vendored Normal file
View File

@ -0,0 +1,30 @@
Copyright Typeable LLC (c) 2020
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of Author name here nor the names of other
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

39
Makefile vendored Normal file
View File

@ -0,0 +1,39 @@
.PHONY: build-backend build-octo-cli build-frontend docs backend-docs frontend-docs repl shell shell-ghcjs ghcid-backend ghcid-cli ghcid-frontend push-octopod
build-backend:
nix-build . -A ghc.octopod-backend
build-octo-cli:
nix-build . -A ghc.octo-cli
build-frontend:
nix-build . -A ghcjs.octopod-frontend -o frontend-result
docs: backend-docs frontend-docs
backend-docs:
nix-build . -A ghc.octopod-backend.doc
frontend-docs:
nix-build . -A ghcjs.octopod-frontend.doc
repl:
nix-shell . -A shells.ghc --run "cabal repl lib:octopod-backend"
shell:
nix-shell . -A shells.ghc
shell-ghcjs:
nix-shell . -A shells.ghcjs
ghcid-backend:
nix-shell . -A shells.ghc --run 'ghcid -c "cabal new-repl octopod-backend"'
ghcid-cli:
nix-shell . -A shells.ghc --run 'ghcid -c "cabal new-repl octo-cli"'
ghcid-frontend:
nix-shell . -A shells.ghc --run 'ghcid -c "cabal new-repl octopod-frontend -fdevelopment" --test 'Main.main''
push-octopod:
./build.sh build-and-push latest

53
README.md vendored Normal file
View File

@ -0,0 +1,53 @@
# 🐙 Octopod ![Octopod Docker Image](https://github.com/typeable/octopod/workflows/Octopod%20Docker%20Image/badge.svg?branch=master) ![octo CLI](https://github.com/typeable/octopod/workflows/octo%20CLI/badge.svg?branch=master) ![Documentation](https://github.com/typeable/octopod/workflows/Documentation/badge.svg?branch=master)
_Octopod_ is a fully open-source self-hosted solution for managing multiple deployments in a _Kubernetes_ cluster with a user-friendly web interface. Managing deployments does not require any technical expertise.
We created _Octopod_ because we believe that everything we release should be rigorously tested. However, such rigor greatly [complicates the development workflow](docs/en/PM_case_study.md), leading to longer release cycles. We use _Octopod_ to mitigate the downsides of rigorously testing each feature by deploying every single change we make to a separate staging environment, allowing QA to investigate each feature independently and in parallel.
## 🖥 Demo
<p align="center"><img src="img/demo.gif"></img></p>
## 📑 Documentation
### 🔭 High-level notes
- [🐙 Overview](docs/en/Overview.md)
- [🧑‍🔬 Project management case study](docs/en/PM_case_study.md)
- [🧑‍💻 Technical case study](docs/en/Tech_case_study.md)
### 🛠️ Technical documentation
- [🏗 Technical architecture](docs/en/Technical_architecture.md) [[RU](docs/ru/Technical_architecture.md)]
- [⚙️ Control script guide](docs/en/Control_scripts.md) [[RU](docs/ru/Control_scripts.md)]
- [🔧🐙 Octopod deployment guide](docs/en/Octopod_deployment_guide.md) [[RU](docs/ru/Octopod_deployment_with_K8S.md)]
- [🔧🚀 Helm-based Octopod project setup](docs/en/Helm-based_deployment_guide.md) [[RU](docs/ru/Helm-based_deployment_guide.md)]
- [🐙🎛 octo CLI user guide](docs/en/Octo_user_guide.md) [[RU](docs/ru/Octo_user_guide.md)]
- [🤖 CI integration](docs/en/Integration.md)
- [🔒 Octopod security model](docs/en/Security_model.md) [[RU](docs/ru/Security_model.md)]
## FAQ
### How long does it take to set up _Octopod_?
The longest part of setting up _Octopod_ for your project will probably be writing [_Control Scripts_](docs/en/Control_scripts.md). In total you should be able to get things running in about a day.
### Will _Octopod_ work with my project if it uses X?
Yes. _Octopod_ is project-agnostic. If you can run your project in a Docker container, then you can use _Octopod_ with that project.
### What do I need to know to set up Octopod?
You need to understand the basics of _Kubernetes_ and be familiar with whatever hosting provider you will be using. There is no need to know any special language: you can write [_Control Scripts_](docs/en/Control_scripts.md) in whatever language you like.
### Does _Octopod_ work with my CI?
Yes. If you can run arbitrary executables in your CI, then you will be able to integrate it with _Octopod_. Integration basically consists of calling our _octo CLI_ tool to perform desired actions. You can find more detail in the [CI integration](docs/en/Integration.md) doc.
### How come I can't see the deployment logs in the Octopod web app?
Logs have been excluded from the GUI because we don't have a good security story to accompany this feature yet. Secrets and credentials may leak to the project team using Octopod, and, potentially, not everyone should have access to this data.
### Why Haskell and Rust?
We believe that there is a lot to be gained in programming in general by being able to statically ensure invariants in your code. One of the most practical ways of ensuring invariants is a good static type system. Haskell and Rust are both languages that have very strong type systems. This allows us to move fast without breaking things in the process.
<p align="center"><a href="https://typeable.io"><img src="img/typeable.png" width="177px"></img></a></p>

18
Release_checklist.md vendored Normal file
View File

@ -0,0 +1,18 @@
# Release checklist
1. Merge the `develop` branch into `master`.
2. CI will automatically create a new release in GitHub with _octo CLI_ and update the `latest` tag for both `octo` and `octopod`. Wait for CI to complete.
3. Edit the created release in GitHub to match the version you are releasing.
1. Change the release name to the version being released.
2. Uncheck "This is a pre-release"
4. Push the new release of `octo` and `octopod`. To do this run `./release.sh <version>`.
5. Update the referenced tags in documentation
6. If there were changes to the examples:
1. Build and push the new containers:
1. octopod-web-app-example
2. octopod-helm-example
2. Create a new tag incrementing the integer version number of the tag (see the sketch after this checklist):
1. Pull the image (`docker pull`)
2. Tag it with the new `v<integer>` (`docker tag`)
3. Push the new tag (`docker push`)
3. Update docs where the tags are referenced.

66
build.sh vendored Executable file
View File

@ -0,0 +1,66 @@
#!/usr/bin/env bash
set -e
build_octo_cli_docker_image() {
nix build nixpkgs.octo-cli-container \
-I nixpkgs=nix \
-o "$1"
}
build_octopod_server_docker_image() {
nix build nixpkgs.octopod-server-container \
--arg migrations "$1" \
-I nixpkgs=nix \
-o "$2"
}
push_docker_images() {
outfile=latest-octopod-server-docker
for image_name in $octo_cli_docker $octopod_server_docker; do
image_type=$(echo "$image_name" | cut -d- -f1)
image=$(ls -ls "$image_name" | awk '{print $12}')
echo "size: $(du -sh $image)"
docker load --input "$image" | tee "$outfile"
nixcontainer=$(awk '{print $3}' $outfile)
docker tag "$nixcontainer" "typeable/${image_type}:$1"
docker push "typeable/${image_type}:$1"
echo "Published: ${image_type}:$1"
done
rm $outfile
}
build_docker_images() {
build_octo_cli_docker_image "$octo_cli_docker"
build_octopod_server_docker_image "$migrations" "$octopod_server_docker"
}
export tag=$(git rev-parse HEAD)
export migrations="./migrations"
export octo_cli_docker="octo-docker"
export octopod_server_docker="octopod-server-docker"
case "$1" in
build-and-push)
echo "$1 mode"
if test -z "$2"
then
echo "Please provide a tag to upload to"
exit 1
fi
build_docker_images
push_docker_images $2
;;
build)
echo "$1 mode"
build_docker_images
;;
*)
echo "usage:"
echo " $0 build Builds the docker images."
echo " $0 build-and-push <tag> Builds the docker images and uploads it to Docker Hub under the tag <tag>."
exit 1
;;
esac

6
cabal.project vendored Normal file
View File

@ -0,0 +1,6 @@
packages:
octo-cli/
octopod-api/
octopod-backend/
octopod-common/
octopod-frontend/

22
charts/cert-control/.helmignore vendored Normal file
View File

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

5
charts/cert-control/Chart.yaml vendored Normal file
View File

@ -0,0 +1,5 @@
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: cert-control
version: 0.1.0

View File

@ -0,0 +1,8 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cert-control-clusterrole
rules:
- apiGroups: ["cert-manager.io"]
resources: ["certificates"]
verbs: ["list", "delete", "deletecollection"]

View File

@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ .Values.serviceaccount }}-cert-control-rolebinding
namespace: {{ .Values.namespace }}
roleRef:
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
name: cert-control-clusterrole
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceaccount }}
namespace: {{ .Values.octopod_namespace | default .Values.namespace }}

3
charts/cert-control/values.yaml vendored Normal file
View File

@ -0,0 +1,3 @@
namespace: deployment
octopod_namespace: octopod
serviceaccount: octopod

22
charts/helm-access/.helmignore vendored Normal file
View File

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

5
charts/helm-access/Chart.yaml vendored Normal file
View File

@ -0,0 +1,5 @@
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: helm-access
version: 0.1.0

View File

@ -0,0 +1,11 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: helm-clusterrole
rules:
- apiGroups: [""]
resources: ["pods/portforward"]
verbs: ["create"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["list", "get"]

View File

@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Values.serviceaccount }}-helm-clusterrolebinding
roleRef:
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
name: helm-clusterrole
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceaccount }}
namespace: {{ .Values.namespace }}

2
charts/helm-access/values.yaml vendored Normal file
View File

@ -0,0 +1,2 @@
namespace: octopod
serviceaccount: octopod

5
charts/kubedog-access/Chart.yaml vendored Normal file
View File

@ -0,0 +1,5 @@
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: kubedog-access
version: 0.1.0

View File

@ -0,0 +1,17 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kubedog-clusterrole
rules:
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["list", "watch"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
verbs: ["list", "watch"]
- apiGroups: ["apps"]
resources: ["replicasets"]
verbs: ["list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list"]

View File

@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Values.serviceaccount }}-kubedog-clusterrolebinding
roleRef:
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
name: kubedog-clusterrole
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceaccount }}
namespace: {{ .Values.namespace }}

2
charts/kubedog-access/values.yaml vendored Normal file
View File

@ -0,0 +1,2 @@
namespace: octopod
serviceaccount: octopod

22
charts/octopod-infra/.helmignore vendored Normal file
View File

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

5
charts/octopod-infra/Chart.yaml vendored Normal file
View File

@ -0,0 +1,5 @@
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: octopod-infra
version: 0.1.0

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-postgres-config
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Release.Name }}-postgres
data:
POSTGRES_DB: {{ .Values.postgres_db | default .Release.Name }}
POSTGRES_USER: {{ .Values.postgres_user | default "postgres" }}
POSTGRES_PASSWORD: {{ .Values.postgres_password | default "password" }}

View File

@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-postgres
namespace: {{ .Release.Namespace }}
labels:
name: {{ .Release.Name }}-postgres
spec:
selector:
app: {{ .Release.Name }}-postgres
clusterIP: None
ports:
- port: 5432
name: postgres

View File

@ -0,0 +1,51 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ .Release.Name }}-postgres
namespace: {{ .Release.Namespace }}
spec:
serviceName: {{ .Release.Name }}-postgres
replicas: 1
selector:
matchLabels:
app: {{ .Release.Name }}-postgres
template:
metadata:
labels:
app: {{ .Release.Name }}-postgres
spec:
nodeSelector:
role: {{ .Values.nodeselector }}
terminationGracePeriodSeconds: 120
containers:
- name: postgres
image: postgres:10
envFrom:
- configMapRef:
name: {{ .Release.Name }}-postgres-config
resources:
requests:
cpu: {{ .Values.requests.cpu }}
memory: {{ .Values.requests.memory }}
limits:
cpu: {{ .Values.limits.cpu }}
memory: {{ .Values.limits.memory }}
ports:
- containerPort: 5432
name: postgredb
volumeMounts:
- name: postgredb
mountPath: /var/lib/postgresql/data
subPath: postgres
volumeClaimTemplates:
- metadata:
name: postgredb
labels:
app: {{ .Release.Name }}-postgres
spec:
accessModes:
- "ReadWriteOnce"
storageClassName: {{ .Values.storage_class | default "default" }}
resources:
requests:
storage: {{ .Values.storage_size }}

20
charts/octopod-infra/values.yaml vendored Normal file
View File

@ -0,0 +1,20 @@
global:
image_prefix:
image: default
image_tag:
namespace: octopod
nodeselector: stand
nodeselector: stand
namespace: octopod
postgres_db: octopod
postgres_user: octopod
postgres_password: octopod
storage_size: 1Gi
requests:
cpu: 0.2
memory: 256Mi
limits:
cpu: 0.2
memory: 512Mi

22
charts/octopod/.helmignore vendored Normal file
View File

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

5
charts/octopod/Chart.yaml vendored Normal file
View File

@ -0,0 +1,5 @@
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: octopod
version: 0.1.0

7
charts/octopod/templates/_helpers.tpl vendored Normal file
View File

@ -0,0 +1,7 @@
{{/*
Set dbname
*/}}
{{- define "dbname" -}}
{{- $dbname_release := .Release.Name | replace "." "_" | replace "-" "_" -}}
{{- .Values.dbname | default $dbname_release }}
{{- end -}}

View File

@ -0,0 +1,47 @@
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ .Release.Name }}-clean-archive-cronjob
namespace: {{ .Values.namespace }}
spec:
schedule: "0 */1 * * *"
jobTemplate:
spec:
template:
metadata:
labels:
app: {{ .Release.Name }}
annotations:
checksum/config: "{{ .Values.global.deploy_checksum }}"
spec:
nodeSelector:
role: {{ .Values.nodeselector }}
containers:
- name: octo
image: {{ .Values.global.image_prefix }}/{{ .Values.global.octo_image }}:{{ .Values.global.image_tag }}
command:
- /app/octo
args:
- clean-archive
env:
- name: OCTOPOD_URL
value: https://{{ .Values.power_app_domain }}:443
volumeMounts:
- name: certs
mountPath: /cert.pem
subPath: client_cert.pem
- name: certs
mountPath: /key.pem
subPath: client_key.pem
resources:
requests:
cpu: 0.1
memory: 256Mi
limits:
cpu: 0.1
memory: 512Mi
restartPolicy: Never
volumes:
- name: certs
configMap:
name: octopod-certs

View File

@ -0,0 +1,32 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ .Release.Name }}-app-nginx-ingress
namespace: {{ .Values.namespace }}
annotations:
kubernetes.io/ingress.class: "nginx"
kubernetes.io/tls-acme: "true"
cert-manager.io/issuer: "{{ .Release.Name }}-certs"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-next-upstream: "http_502 error timeout"
nginx.ingress.kubernetes.io/auth-secret: octopod-basic-auth
nginx.ingress.kubernetes.io/auth-secret-type: auth-file
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-origin: "https://{{ .Values.domain }}"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, DELETE, PATCH, OPTIONS"
spec:
tls:
- hosts:
- {{ .Values.app_domain }}
secretName: {{ .Release.Name }}-app-tls
rules:
- host: {{ .Values.app_domain }}
http:
paths:
- path: /
backend:
serviceName: {{ .Release.Name }}
servicePort: 81

View File

@ -0,0 +1,19 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-config
namespace: {{ .Values.namespace }}
data:
PROJECT_NAME: {{ .Values.project_name }}
BASE_DOMAIN: {{ .Values.base_domain }}
NAMESPACE: {{ .Values.target_namespace }}
STATUS_UPDATE_TIMEOUT: "{{ .Values.status_update_timeout }}"
ARCHIVE_RETENTION: "1209600"
CREATION_COMMAND: /utils/create
UPDATE_COMMAND: /utils/update
ARCHIVE_COMMAND: /utils/archive
CHECKING_COMMAND: /utils/check
CLEANUP_COMMAND: /utils/cleanup
ARCHIVE_CHECKING_COMMAND: /utils/archive_check
TAG_CHECKING_COMMAND: /utils/tag_check
INFO_COMMAND: /utils/info

View File

@ -0,0 +1,150 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
namespace: {{ .Values.namespace }}
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
app: {{ .Release.Name }}
template:
metadata:
name: {{ .Release.Name }}
labels:
app: {{ .Release.Name }}
annotations:
checksum/config: "{{ .Values.global.deploy_checksum }}"
spec:
serviceAccountName: {{ .Values.service_account }}
nodeSelector:
role: {{ .Values.nodeselector }}
terminationGracePeriodSeconds: 600
initContainers:
- name: copy-utils
image: {{ .Values.global.utils_image_prefix }}/{{ .Values.global.utils_image }}:{{ .Values.global.utils_image_tag }}
command:
- sh
- -c
- 'cp /utils/* /copy/'
volumeMounts:
- name: utils
mountPath: /copy
- name: init
image: {{ .Values.global.image_prefix }}/{{ .Values.global.image }}:{{ .Values.global.image_tag }}
command:
- sh
- -c
- '/utils/init'
securityContext:
runAsGroup: 1000
runAsUser: 1000
volumeMounts:
- name: home
mountPath: /home/octopod
- name: utils
mountPath: /utils
- name: copy-www
image: {{ .Values.global.image_prefix }}/{{ .Values.global.image }}:{{ .Values.global.image_tag }}
command:
- sh
- -c
- 'cp -a /www/* /copy/'
volumeMounts:
- name: www
mountPath: /copy
containers:
- name: main
image: {{ .Values.global.image_prefix }}/{{ .Values.global.image }}:{{ .Values.global.image_tag }}
ports:
- containerPort: {{ .Values.port }}
protocol: TCP
- containerPort: {{ .Values.ui_port }}
protocol: TCP
args:
- "--port"
- "{{ .Values.port }}"
- "--ui-port"
- "{{ .Values.ui_port }}"
- "--ws-port"
- "{{ .Values.ws_port }}"
- "--db"
- "host='{{ .Values.pg_host }}' port=5432 user='octopod' password='octopod'"
- "--db-pool-size"
- "10"
- "--tls-cert-path"
- "/tls/server_cert.pem"
- "--tls-key-path"
- "/tls/server_key.pem"
- "--tls-store-path"
- "/tls_store"
envFrom:
- configMapRef:
name: {{ .Release.Name }}-config
securityContext:
runAsGroup: 1000
runAsUser: 1000
volumeMounts:
- name: home
mountPath: /home/octopod
- name: utils
mountPath: /utils
- name: certs
mountPath: /tls/server_cert.pem
subPath: server_cert.pem
- name: certs
mountPath: /tls/server_key.pem
subPath: server_key.pem
- name: certs
mountPath: /tls_store/server_cert.pem
subPath: server_cert.pem
resources:
requests:
cpu: 0.2
memory: 256Mi
limits:
cpu: 0.2
memory: 512Mi
readinessProbe:
httpGet:
port: {{ .Values.ui_port }}
path: /api/v1/ping
periodSeconds: 20
livenessProbe:
httpGet:
port: {{ .Values.ui_port }}
path: /api/v1/ping
initialDelaySeconds: 15
periodSeconds: 5
- name: nginx
image: nginx:1.17.5
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/conf.d/app.conf
subPath: app.conf
- name: nginx-config
mountPath: /www/config.json
subPath: config.json
- name: www
mountPath: /www
ports:
- containerPort: 80
protocol: TCP
volumes:
- name: home
emptyDir: {}
- name: utils
emptyDir: {}
- name: www
emptyDir: {}
- name: nginx-config
configMap:
name: {{ .Release.Name }}-nginx-config
- name: certs
configMap:
name: octopod-certs

View File

@ -0,0 +1,34 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ .Release.Name }}-nginx-ingress
namespace: {{ .Values.namespace }}
annotations:
kubernetes.io/ingress.class: "nginx"
kubernetes.io/tls-acme: "true"
cert-manager.io/issuer: "{{ .Release.Name }}-certs"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-next-upstream: "http_502 error timeout"
{{- if .Values.global.auth_url }}
nginx.ingress.kubernetes.io/auth-url: "{{ .Values.global.auth_url }}"
{{- end }}
{{- if .Values.global.auth_signin }}
nginx.ingress.kubernetes.io/auth-signin: "{{ .Values.global.auth_signin }}"
{{- end }}
spec:
tls:
- hosts:
- {{ .Values.domain }}
secretName: {{ .Release.Name }}-tls
rules:
- host: {{ .Values.domain }}
http:
paths:
- path: /
backend:
serviceName: {{ .Release.Name }}
servicePort: 80

View File

@ -0,0 +1,18 @@
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: {{ .Release.Name }}-certs
namespace: {{ .Release.Namespace }}
spec:
acme:
email: {{ .Values.acme_registration_email }}
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: {{ .Release.Name }}-letsencrypt
# ACME HTTP-01 provider configurations
solvers:
# An empty 'selector' means that this solver matches all domains
- selector: {}
http01:
ingress:
class: nginx

View File

@ -0,0 +1,75 @@
{{- if .Values.migrations }}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ .Release.Name }}-migration-job
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-delete-policy": hook-succeeded
spec:
activeDeadlineSeconds: 600
template:
spec:
containers:
- name: copy
image: {{ .Values.global.image_prefix }}/{{ .Values.global.image }}:{{ .Values.global.image_tag }}
command:
- "bash"
- "-ec"
- |
set -ex
# copy migrations
cp -av /migrations/* /mymigrations
# create sqitch.conf
echo '[core]' > /mymigrations/sqitch.conf
echo 'engine = pg' >> /mymigrations/sqitch.conf
echo 'plan_file = sqitch.plan' >> /mymigrations/sqitch.conf
echo 'top_dir = .' >> /mymigrations/sqitch.conf
echo '[engine "pg"]' >> /mymigrations/sqitch.conf
echo ' registry = sqitch' >> /mymigrations/sqitch.conf
echo '[deploy]' >> /mymigrations/sqitch.conf
echo ' verify = true' >> /mymigrations/sqitch.conf
echo '[rebase]' >> /mymigrations/sqitch.conf
echo ' verify = true' >> /mymigrations/sqitch.conf
echo '[target "octopod"]' >> /mymigrations/sqitch.conf
echo 'uri = db:pg://{{ .Values.connections.pg_instance }}/{{ template "dbname" . }}' >> /mymigrations/sqitch.conf
volumeMounts:
- name: migrations
mountPath: /mymigrations
- name: migrations
image: {{ .Values.global.image_prefix }}/{{ .Values.global.image }}:sqitch-v2.0.0
command:
- "bash"
- "-ec"
- |
set -ex
{{- if .Values.seed }}
echo 'check db'
POSTGRESQL_CONN="psql postgresql://{{ .Values.connections.pg_instance }}/postgres"
DBNAME={{ template "dbname" . }}
($POSTGRESQL_CONN -Atc "SELECT count(*) FROM pg_database WHERE lower(datname) = lower('$DBNAME');" | grep 1) || $POSTGRESQL_CONN -Atc "create database $DBNAME;"
{{- end }}
echo 'run migrations...'
cd /migrations && /usr/local/bin/sqitch deploy octopod
{{- if .Values.seed }}
echo 'seed'
DB_CONN="psql postgresql://{{ .Values.connections.pg_instance }}/{{ template "dbname" . }}"
cd /migrations && $DB_CONN -1 -f seeds.sql || echo 'ok'
{{- end }}
volumeMounts:
- name: migrations
mountPath: /migrations
volumes:
- name: migrations
emptyDir: {}
restartPolicy: Never
backoffLimit: 2
{{- end }}

View File

@ -0,0 +1,20 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-nginx-config
namespace: {{ .Values.namespace }}
data:
app.conf: |
server {
listen 80 default_server;
server_name _;
root /www;
index index.html;
error_page 404 =200 /index.html;
}
config.json: |
{
"app_url": "https://{{ .Values.app_domain }}",
"ws_url": "wss://{{ .Values.ws_domain }}",
"app_auth": "Basic {{ .Values.basic_auth_token }}"
}

View File

@ -0,0 +1,25 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ .Release.Name }}-power-app-nginx-ingress
namespace: {{ .Values.namespace }}
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-next-upstream: "http_502 error timeout"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
rules:
- host: {{ .Values.power_app_domain }}
http:
paths:
- path: /
backend:
serviceName: {{ .Release.Name }}
servicePort: 443
tls:
- hosts:
- {{ .Values.power_app_domain }}

View File

@ -0,0 +1,23 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
namespace: {{ .Values.namespace }}
labels:
app: {{ .Release.Name }}
spec:
selector:
app: {{ .Release.Name }}
ports:
- name: octopod-power-app
port: 443
targetPort: {{ .Values.port }}
- name: octopod-ui
port: 80
targetPort: 80
- name: octopod-app
port: 81
targetPort: {{ .Values.ui_port }}
- name: octopod-ws
port: 82
targetPort: {{ .Values.ws_port }}

View File

@ -0,0 +1,26 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ .Release.Name }}-ws-nginx-ingress
namespace: {{ .Values.namespace }}
annotations:
kubernetes.io/ingress.class: "nginx"
kubernetes.io/tls-acme: "true"
cert-manager.io/issuer: "{{ .Release.Name }}-certs"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-next-upstream: "http_502 error timeout"
spec:
tls:
- hosts:
- {{ .Values.ws_domain }}
secretName: {{ .Release.Name }}-ws-tls
rules:
- host: {{ .Values.ws_domain }}
http:
paths:
- path: /
backend:
serviceName: {{ .Release.Name }}
servicePort: 82

38
charts/octopod/values.yaml vendored Normal file
View File

@ -0,0 +1,38 @@
global:
image_prefix:
image: octopod
octo_image: octo
image_tag:
utils_image_prefix:
utils_image:
utils_image_tag:
namespace: octopod
target_namespace: deployment
nodeselector: stand
service_account: octopod
port: 4443
ui_port: 4000
ws_port: 4020
dbname: octopod
seed: false
migrations: true
replicas: 1
domain: octopod.stage.example.com
app_domain: octopod-app.stage.example.com
ws_domain: octopod-ws.stage.example.com
power_app_domain: octopod-power-app.stage.example.com
base_domain: stage.example.com
project_name: Octopod
status_update_timeout: 600
acme_registration_email:
basic_auth_token:
connections:
pg_instance: octopod:octopod@octopod-infra-postgres-0.octopod-infra-postgres.octopod:5432
pg_host: octopod-infra-postgres-0.octopod-infra-postgres.octopod
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: 200m
memory: 512Mi

22
charts/pvc-control/.helmignore vendored Normal file
View File

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

5
charts/pvc-control/Chart.yaml vendored Normal file
View File

@ -0,0 +1,5 @@
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: pvc-control
version: 0.1.0

View File

@ -0,0 +1,8 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pvc-control-clusterrole
rules:
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["list", "delete", "deletecollection"]

View File

@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ .Values.serviceaccount }}-pvc-control-rolebinding
namespace: {{ .Values.namespace }}
roleRef:
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
name: pvc-control-clusterrole
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceaccount }}
namespace: {{ .Values.octopod_namespace | default .Values.namespace }}

3
charts/pvc-control/values.yaml vendored Normal file
View File

@ -0,0 +1,3 @@
namespace: deployment
octopod_namespace: octopod
serviceaccount: octopod

54
default.nix vendored Normal file
View File

@ -0,0 +1,54 @@
{ sources ? import ./nix/sources.nix
, reflex-platform ? sources.reflex-platform
}:
(import reflex-platform { }).project ({ pkgs, ... }: {
useWarp = true;
packages = {
octopod-common = ./octopod-common;
octopod-frontend = ./octopod-frontend;
octopod-backend = ./octopod-backend;
octo-cli = ./octo-cli;
octopod-api = ./octopod-api;
};
overrides = hself: hsuper: {
servant-reflex = hsuper.callCabal2nix "servant-reflex" sources.servant-reflex { };
tabulation = hsuper.callCabal2nix "tabulation" "${sources.obelisk}/lib/tabulation" { };
obelisk-executable-config-lookup = hsuper.callCabal2nix "obelisk-executable-config-lookup" "${sources.obelisk}/lib/executable-config/lookup" { };
obelisk-route = hsuper.callCabal2nix "obelisk-route" "${sources.obelisk}/lib/route" { };
hspec-webdriver = hsuper.callCabal2nix "hspec-webdriver" sources.hspec-webdriver-clone { };
servant = pkgs.haskell.lib.overrideCabal hsuper.servant (old: {
postInstall = "";
});
servant-websockets = hsuper.callHackageDirect
{
pkg = "servant-websockets";
ver = "2.0.0";
sha256 = "01bmwg3ysj8gijcqghykxfsd62sqz1pfby2irpzh5ybwyh285pvg";
} { };
deriving-aeson = hsuper.callHackageDirect
{
pkg = "deriving-aeson";
ver = "0.2.3";
sha256 = "0ckwdi9pr4aqp9psag4mdbx30nygxkkpdf21rg9rfz16cz8079j7";
} { };
table-layout = hsuper.callHackageDirect
{
pkg = "table-layout";
ver = "0.9.0.1";
sha256 = "12nllfnh6b5mjda9qxfy192v0r0sx181w9zc9j70kvjdn7hgrb0y";
} { };
data-default-instances-base = hsuper.callHackageDirect
{
pkg = "data-default-instances-base";
ver = "0.1.0.1";
sha256 = "18basdy4qjn246phw008ll9zbi3rpdn6bh2dk0i81a60gsmyn58q";
} { };
};
shells = {
ghc = [ "octopod-common" "octopod-backend" "octopod-frontend" "octopod-api" "octo-cli" ];
ghcjs = [ "octopod-common" "octopod-frontend" ];
};
})

31
docs/Makefile vendored Normal file
View File

@ -0,0 +1,31 @@
.PHONY: docs mermaid plantuml
docs: mermaid plantuml
mermaid:
# to install mmdc use https://github.com/mermaidjs/mermaid.cli
#
# mmdc does not support stateDiagram-v2 (technical-architecture-deployment-statuses-fsm.mmd),
# use https://mermaid-js.github.io/mermaid-live-editor to render it
for src in `ls diagrams/src/*.mmd | grep -v technical-architecture-deployment-statuses-fsm.mmd`; do \
name=`basename $$src .mmd`; \
mmdc -i $$src -o "diagrams/images/$$name.png" --scale 4 --cssFile style.css; \
done
plantuml:
# to install plantuml go to https://plantuml.com/command-line
#
# plantuml takes output paths relative to input file
for src in `ls diagrams/src/*.puml`; do \
name=`basename $$src .puml`; \
plantuml -I $$src -o "../../diagrams/images" -tpng; \
done

15
docs/README.md vendored Normal file
View File

@ -0,0 +1,15 @@
# 🐙📑 Octopod documentation
## 🔭 High-level notes
- [🐙 Overview](en/Overview.md)
- [🧑‍🔬 Project management case study](en/PM_case_study.md)
- [🧑‍💻 Technical case study](en/Tech_case_study.md)
## 🛠️ Technical documentation
- [🏗 Technical architecture](en/Technical_architecture.md) [[RU](ru/Technical_architecture.md)]
- [⚙️ Control script guide](en/Control_scripts.md) [[RU](ru/Control_scripts.md)]
- [🔧🐙 Octopod deployment guide](en/Octopod_deployment_guide.md) [[RU](ru/Octopod_deployment_with_K8S.md)]
- [🔧🚀 Helm-based Octopod project setup](en/Helm-based_deployment_guide.md) [[RU](ru/Helm-based_deployment_guide.md)]
- [🐙🎛 octo CLI user guide](en/Octo_user_guide.md) [[RU](ru/Octo_user_guide.md)]
- [🤖 CI integration](en/Integration.md)
- [🔒 Octopod security model](en/Security_model.md) [[RU](ru/Security_model.md)]

14 binary image files added (not shown; sizes 7.9 KiB – 331 KiB)

View File

@ -0,0 +1,16 @@
@startuml
top to bottom direction
database "Redis" as OBR
database "Postgres" as OBP
boundary "nginx" as OBN
rectangle "Server" as OBSer
OBSer -down-> OBR
OBSer -down-> OBP
OBN -right-> OBSer
@enduml

View File

@ -0,0 +1,61 @@
@startuml
top to bottom direction
node "Kubernetes cluster" {
boundary "Kube API Server" as K8sAPI
rectangle "Kubernetes" as K8s
K8sAPI -> K8s
cloud "<i>Orange button</i> staging" as OBS #OldLace {
frame "Common infrastructure" #lightcyan {
database "Redis" as OBR #PowderBlue
database "Postgres" as OBP #PowderBlue
boundary "nginx" as OBN #PowderBlue
}
rectangle "Server with an <i>orange button</i>" as OBSer #Wheat
OBSer -down-> OBR
OBSer -down-> OBP
OBN -right-> OBSer
}
cloud "<i>Green button</i> staging" as GBS #technology {
frame "Common infrastructure " #lightcyan {
database "Redis" as GBR #PowderBlue
database "Postgres" as GBP #PowderBlue
boundary "nginx" as GBN #PowderBlue
}
rectangle "Server with a <i>green button</i>" as GBSer #Greenyellow
GBSer -down-> GBR
GBSer -down-> GBP
GBN -right-> GBSer
}
K8s -down-> OBS : create the staging
K8s -down-> GBS : create the staging
}
node Octopod {
boundary "Web UI" as UI
rectangle "Octopod Server" as OctoS
rectangle "Staging control scripts" as SCS
UI -> OctoS
OctoS -> SCS : delegates k8s logic to control script
}
SCS -down-> K8sAPI : set up the stagings
actor Developer
Developer -down-> UI : Specifies the git commit hash to deploy in the web UI
@enduml

View File

@ -0,0 +1,37 @@
sequenceDiagram
participant octo CLI
participant Octopod Server
participant PostgreSQL
participant UI
participant Octopod Server/BgWorker
participant Octopod Server/StatusUpdater
participant ControlScripts
participant KubeAPI
octo CLI->>Octopod Server: archive(name)
Octopod Server->>PostgreSQL: status=ArchivePending
alt name not found
PostgreSQL-->>Octopod Server: error: name not found
Octopod Server-->>octo CLI: error: name not found
else
PostgreSQL-->>Octopod Server: ok
Octopod Server-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Octopod Server->>Octopod Server/BgWorker: archive
Octopod Server-->>octo CLI: done
Octopod Server/BgWorker->>ControlScripts: archive
ControlScripts->>KubeAPI: archive deployment
KubeAPI-->>ControlScripts: done
ControlScripts-->>Octopod Server/BgWorker: done
Octopod Server/BgWorker->>PostgreSQL: write logs
Octopod Server/BgWorker-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Note over Octopod Server/StatusUpdater: wait 5 minutes
loop check deployment status every 30 seconds
Octopod Server/StatusUpdater->>PostgreSQL: status=Archived/ArchivePending
Octopod Server/StatusUpdater-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
end
end

View File

@ -0,0 +1,36 @@
sequenceDiagram
participant UI
participant Octopod Server
participant PostgreSQL
participant Octopod Server/BgWorker
participant Octopod Server/StatusUpdater
participant ControlScripts
participant KubeAPI
UI->>Octopod Server: archive(name)
Octopod Server->>PostgreSQL: status=ArchivePending
alt name not found
PostgreSQL-->>Octopod Server: error: name not found
Octopod Server-->>UI: error: name not found
else
PostgreSQL-->>Octopod Server: ok
Octopod Server-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Octopod Server->>Octopod Server/BgWorker: archive
Octopod Server-->>UI: done
Octopod Server/BgWorker->>ControlScripts: archive
ControlScripts->>KubeAPI: archive deployment
KubeAPI-->>ControlScripts: done
ControlScripts-->>Octopod Server/BgWorker: done
Octopod Server/BgWorker->>PostgreSQL: write logs
Octopod Server/BgWorker-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Note over Octopod Server/StatusUpdater: wait 5 minutes
loop check deployment status every 30 seconds
Octopod Server/StatusUpdater->>PostgreSQL: status=Archived/ArchivePending
Octopod Server/StatusUpdater-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
end
end

View File

@ -0,0 +1,20 @@
sequenceDiagram
participant octo CLI
participant Octopod Server
participant PostgreSQL
participant UI
participant Octopod Server/BgWorker
participant Octopod Server/StatusUpdater
participant ControlScripts
participant KubeAPI
octo CLI->>Octopod Server: cleanup(name)
Octopod Server->>Octopod Server/BgWorker: cleanup
Octopod Server-->>octo CLI: done
Octopod Server/BgWorker->>ControlScripts: cleanup
ControlScripts->>KubeAPI: cleanup deployment
KubeAPI-->>ControlScripts: done
ControlScripts-->>Octopod Server/BgWorker: done
Octopod Server->>PostgreSQL: delete config and logs
Octopod Server/BgWorker-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info

View File

@ -0,0 +1,19 @@
sequenceDiagram
participant UI
participant Octopod Server
participant PostgreSQL
participant Octopod Server/BgWorker
participant Octopod Server/StatusUpdater
participant ControlScripts
participant KubeAPI
UI->>Octopod Server: cleanup(name)
Octopod Server->>Octopod Server/BgWorker: cleanup
Octopod Server-->>UI: done
Octopod Server/BgWorker->>ControlScripts: cleanup
ControlScripts->>KubeAPI: cleanup deployment resources
KubeAPI-->>ControlScripts: done
ControlScripts-->>Octopod Server/BgWorker: done
Octopod Server->>PostgreSQL: delete config and logs
Octopod Server/BgWorker-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info

View File

@ -0,0 +1,41 @@
sequenceDiagram
participant octo CLI
participant Octopod Server
participant PostgreSQL
participant UI
participant Octopod Server/BgWorker
participant Octopod Server/StatusUpdater
participant ControlScripts
participant KubeAPI
octo CLI->>Octopod Server: create(name, tag, [override])
Octopod Server->>PostgreSQL: store config, status=CreatePending
alt name already exists
PostgreSQL-->>Octopod Server: error: deployment already exists
Octopod Server-->>octo CLI: error: deployment already exists
else
alt tag not found
Octopod Server-->>octo CLI: error: tag not found
else
PostgreSQL-->>Octopod Server: ok
Octopod Server-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Octopod Server->>Octopod Server/BgWorker: create
Octopod Server-->>octo CLI: done
Octopod Server/BgWorker->>ControlScripts: create
ControlScripts->>KubeAPI: create deployment
KubeAPI-->>ControlScripts: done
ControlScripts-->>Octopod Server/BgWorker: done
Octopod Server/BgWorker->>PostgreSQL: write logs
Octopod Server/BgWorker-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Note over Octopod Server/StatusUpdater: wait 5 minutes
loop check deployment status every 30 seconds
Octopod Server/StatusUpdater->>PostgreSQL: status=Running/Failure
Octopod Server/StatusUpdater-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
end
end
end

View File

@ -0,0 +1,40 @@
sequenceDiagram
participant UI
participant Octopod Server
participant PostgreSQL
participant Octopod Server/BgWorker
participant Octopod Server/StatusUpdater
participant ControlScripts
participant KubeAPI
UI->>Octopod Server: create(name, tag, [override])
Octopod Server->>PostgreSQL: store config, status=CreatePending
alt name already exists
PostgreSQL-->>Octopod Server: error: deployment already exists
Octopod Server-->>UI: error: deployment already exists
else
alt tag not found
Octopod Server-->>UI: error: tag not found
else
PostgreSQL-->>Octopod Server: ok
Octopod Server-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Octopod Server->>Octopod Server/BgWorker: create
Octopod Server-->>UI: done
Octopod Server/BgWorker->>ControlScripts: create
ControlScripts->>KubeAPI: create deployment
KubeAPI-->>ControlScripts: done
ControlScripts-->>Octopod Server/BgWorker: done
Octopod Server/BgWorker->>PostgreSQL: write logs
Octopod Server/BgWorker-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Note over Octopod Server/StatusUpdater: wait 5 minutes
loop check deployment status every 30 seconds
Octopod Server/StatusUpdater->>PostgreSQL: status=Running/Failure
Octopod Server/StatusUpdater-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
end
end
end

View File

@ -0,0 +1,16 @@
stateDiagram-v2
[*] --> CreatePending: create
Running --> UpdatePending: update
Failure --> UpdatePending: update
Running --> ArchivePending: archive
Failure --> ArchivePending: archive
Archived --> CreatePending: restore
Archived --> [*]: cleanup
Running --> Failure: 30s passed and 'check' said "nok"
Failure --> Running: 30s passed and 'check' said "ok"
CreatePending --> Running: 5m passed and 'check' said "ok"
CreatePending --> Failure: 5m passed and 'check' said "nok"
UpdatePending --> Running: 5m passed and 'check' said "ok"
UpdatePending --> Failure: 5m passed and 'check' said "nok"
ArchivePending --> Archived: 30s passed and 'archive_check' said "ok"
ArchivePending --> ArchivePending: 30s passed and 'archive_check' said "nok"

View File

@ -0,0 +1,41 @@
sequenceDiagram
participant octo CLI
participant Octopod Server
participant PostgreSQL
participant UI
participant Octopod Server/BgWorker
participant Octopod Server/StatusUpdater
participant ControlScripts
participant KubeAPI
octo CLI->>Octopod Server: restore(name)
Octopod Server->>PostgreSQL: status=CreatePending
alt name not found
PostgreSQL-->>Octopod Server: error: name not found
Octopod Server-->>octo CLI: error: name not found
else
alt tag not found
Octopod Server-->>octo CLI: error: tag not found
else
PostgreSQL-->>Octopod Server: ok
Octopod Server-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Octopod Server->>Octopod Server/BgWorker: create
Octopod Server-->>octo CLI: done
Octopod Server/BgWorker->>ControlScripts: create
ControlScripts->>KubeAPI: restore deployment
KubeAPI-->>ControlScripts: done
ControlScripts-->>Octopod Server/BgWorker: done
Octopod Server/BgWorker->>PostgreSQL: write logs
Octopod Server/BgWorker-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Note over Octopod Server/StatusUpdater: wait 5 minutes
loop check deployment status every 30 seconds
Octopod Server/StatusUpdater->>PostgreSQL: status=Running/Failure
Octopod Server/StatusUpdater-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
end
end
end

View File

@ -0,0 +1,40 @@
sequenceDiagram
participant UI
participant Octopod Server
participant PostgreSQL
participant Octopod Server/BgWorker
participant Octopod Server/StatusUpdater
participant ControlScripts
participant KubeAPI
UI->>Octopod Server: restore(name)
Octopod Server->>PostgreSQL: status=CreatePending
alt name not found
PostgreSQL-->>Octopod Server: error: name not found
Octopod Server-->>UI: error: name not found
else
alt tag not found
Octopod Server-->>UI: error: tag not found
else
PostgreSQL-->>Octopod Server: ok
Octopod Server-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Octopod Server->>Octopod Server/BgWorker: create
Octopod Server-->>UI: done
Octopod Server/BgWorker->>ControlScripts: create
ControlScripts->>KubeAPI: restore deployment
KubeAPI-->>ControlScripts: done
ControlScripts-->>Octopod Server/BgWorker: done
Octopod Server/BgWorker->>PostgreSQL: write logs
Octopod Server/BgWorker-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Note over Octopod Server/StatusUpdater: wait 5 minutes
loop check deployment status every 30 seconds
Octopod Server/StatusUpdater->>PostgreSQL: status=Running/Failure
Octopod Server/StatusUpdater-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
end
end
end

View File

@ -0,0 +1,41 @@
sequenceDiagram
participant octo CLI
participant Octopod Server
participant PostgreSQL
participant UI
participant Octopod Server/BgWorker
participant Octopod Server/StatusUpdater
participant ControlScripts
participant KubeAPI
octo CLI->>Octopod Server: create(name, tag, [override])
Octopod Server->>PostgreSQL: store config, status=UpdatePending
alt name not found
PostgreSQL-->>Octopod Server: error: name not found
Octopod Server-->>octo CLI: error: name not found
else
alt tag not found
Octopod Server-->>octo CLI: error: tag not found
else
PostgreSQL-->>Octopod Server: ok
Octopod Server-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Octopod Server->>Octopod Server/BgWorker: update
Octopod Server-->>octo CLI: done
Octopod Server/BgWorker->>ControlScripts: update
ControlScripts->>KubeAPI: upgrade deployment
KubeAPI-->>ControlScripts: done
ControlScripts-->>Octopod Server/BgWorker: done
Octopod Server/BgWorker->>PostgreSQL: write logs
Octopod Server/BgWorker-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Note over Octopod Server/StatusUpdater: wait 5 minutes
loop check deployment status every 30 seconds
Octopod Server/StatusUpdater->>PostgreSQL: status=Running/Failure
Octopod Server/StatusUpdater-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
end
end
end

View File

@ -0,0 +1,40 @@
sequenceDiagram
participant UI
participant Octopod Server
participant PostgreSQL
participant Octopod Server/BgWorker
participant Octopod Server/StatusUpdater
participant ControlScripts
participant KubeAPI
UI->>Octopod Server: create(name, tag, [override])
Octopod Server->>PostgreSQL: store config, status=UpdatePending
alt name not found
PostgreSQL-->>Octopod Server: error: name not found
Octopod Server-->>UI: error: name not found
else
alt tag not found
Octopod Server-->>UI: error: tag not found
else
PostgreSQL-->>Octopod Server: ok
Octopod Server-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Octopod Server->>Octopod Server/BgWorker: update
Octopod Server-->>UI: done
Octopod Server/BgWorker->>ControlScripts: update
ControlScripts->>KubeAPI: upgrade deployment
KubeAPI-->>ControlScripts: done
ControlScripts-->>Octopod Server/BgWorker: done
Octopod Server/BgWorker->>PostgreSQL: write logs
Octopod Server/BgWorker-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
Note over Octopod Server/StatusUpdater: wait 5 minutes
loop check deployment status every 30 seconds
Octopod Server/StatusUpdater->>PostgreSQL: status=Running/Failure
Octopod Server/StatusUpdater-xUI: event FrontendPleaseUpdateEverything
UI->>Octopod Server: get deployments info
Octopod Server-->>UI: deployments info
end
end
end

294
docs/en/Control_scripts.md vendored Normal file
View File

@ -0,0 +1,294 @@
# Control scripts
<details>
<summary>Table of contents</summary>
- [General behavior](#general-behavior)
- [Scripts](#scripts)
- [🔁 init](#-init)
- [Description](#description)
- [Sample implementation](#sample-implementation)
- [✨ create](#-create)
- [Description](#description-1)
- [Execution example](#execution-example)
- [Sample implementation](#sample-implementation-1)
- [🔧 update](#-update)
- [Description](#description-2)
- [Execution example](#execution-example-1)
- [Sample implementation](#sample-implementation-2)
- [🗃 archive](#-archive)
- [Description](#description-3)
- [Execution example](#execution-example-2)
- [Sample implementation](#sample-implementation-3)
- [✅ check](#-check)
- [Description](#description-4)
- [Execution example](#execution-example-3)
- [Sample implementation](#sample-implementation-4)
- [🚮 cleanup](#-cleanup)
- [Description](#description-5)
- [Execution example](#execution-example-4)
- [Sample implementation](#sample-implementation-5)
- [🗃✅ archive_check](#-archive_check)
- [Description](#description-6)
- [Execution example](#execution-example-5)
- [Sample implementation](#sample-implementation-6)
- [🐋✅ tag_check](#-tag_check)
- [Description](#description-7)
- [Execution example](#execution-example-6)
- [👀 info](#-info)
- [Description](#description-8)
- [Execution example](#execution-example-7)
- [Sample implementation](#sample-implementation-7)
</details>
## General behavior
All _control scripts_ receive input as CLI arguments. After executing the required logic they must finish with an _exit code_ of `0` if no errors have occurred and the required actions have all completed. If there was an error and some steps were not executed, the *script* should exit with an *exit code* **distinct from `0`**. Any non-zero exit code will indicate an error.
Everything the _scripts_ write to _stdout_ and _stderr_ will be collected and stored. DevOps engineers can then view these logs from the _octo CLI_, should that be needed.
> *NOTE: Logs from `check`, `archive_check` and `tag_check` are not collected because they are called very often.*
There are four arguments that are passed to **all** *scripts*. The first three arguments come from the [_Kubernetes ConfigMap_][configmap]:
* `--project-name` the name of the project. It is supplied mostly for informational purposes and can be useful for sending notifications if that is necessary.
* `--base-domain` the base domain. It can be useful for generating the URLs of deployments.
* `--namespace` The namespace in which the deployment should be created.
* `--name` The name of the deployment supplied in the _Web UI_. It can be useful for generating the deployment URL.
<a id="star"></a>*NOTE:* If an argument is marked with a ⭐, it means that the argument can be passed any number of times.
## Scripts
### 🔁 init
#### Description
This script is called **once** during the creation of the `Octopod Server` *Kubernetes Pod* to set up the proper environment to execute all other scripts.
It is guaranteed that this script will be called **before** any of the other scripts.
You could, for example, set up access to your *version control system*, *cloud providers*, etc. This can be achieved by saving the configuration into files in the `$HOME` directory.
Unlike all other scripts, this script receives no arguments.
#### Sample implementation
```bash
mkdir $HOME/.ssh
echo -e "Host github.com\nHostname github.com\nPort 22\nUser git\nIdentityFile $HOME/.ssh/deploy.key" > $HOME/.ssh/config
echo "MY_DEPLOY_KEY" > $HOME/.ssh/deploy.key"
```
### ✨ create
#### Description
Creates a new deployment in the _Kubernetes_ cluster.
This script receives the following additional command-line arguments as input:
* `--tag` The _Docker Image tag_ that should be deployed. (In practice you can use some other string that identifies a version of your system to deploy; you will need to process it accordingly in the script.)
* `--app-env-override` [⭐](#star) App-level overrides. These overrides should be passed to the server being deployed. These overrides are specified in the _Web UI_. They are passed in the format of `KEY=VALUE` pairs.
* `--deployment-override` [⭐](#star) Deployment-level overrides. These overrides should be used to set up the deployment environment itself, rather than be passed to the server being deployed. These overrides are specified in the _Web UI_. They are passed in the format of `KEY=VALUE` pairs.
#### Execution example
The script might be called something like this:
```bash
create --project-name "Cactus store" --base-domain "cactus-store.com" --namespace "cactus" --name "orange-button" --tag "c9bbc3fcc69e5aa094bca110c6f79419ab7be77a" --app-env-override "EMAIL_TOKEN=123123" --app-env-override "SECRET_BUTTON_ENABLED=True" --deployment-override "FANCY_DATABASE=True"
```
#### Sample implementation
```bash
helm upgrade --install --namespace "$namespace" "$name" "$deployment_chart" \
--set "global.project-name=$project_name" \
--set "global.base-domain=$base-domain" \
--set "app.tag=$tag" \
--set "app.env.foo=$app_env_override_1" \
--set "app.bar=$deployment_override_1" \
--wait \
--timeout 300
```
### 🔧 update
#### Description
Updates a deployment in _Kubernetes_ to a new *Docker Image tag*.
This script receives the same additional command-line arguments as [`create`](#-create):
* `--tag` The _Docker Image tag_ that should be deployed. (In practice you can use some other string that identifies a version of your system to deploy; you will need to process it accordingly in the script.)
* `--app-env-override` [⭐](#star) App-level overrides. These overrides should be passed to the server being deployed. These overrides are specified in the _Web UI_. They are passed in the format of `KEY=VALUE` pairs.
* `--deployment-override` [⭐](#star) Deployment-level overrides. These overrides should be used to set up the deployment environment itself, rather than be passed to the server being deployed. These overrides are specified in the _Web UI_. They are passed in the format of `KEY=VALUE` pairs.
#### Execution example
The script might be called something like this:
```bash
update --project-name "Cactus store" --base-domain "cactus-store.com" --namespace "cactus" --name "orange-button" --tag "c9bbc3fcc69e5aa094bca110c6f79419ab7be77a" --app-env-override "EMAIL_TOKEN=123123" --app-env-override "SECRET_BUTTON_ENABLED=True" --deployment-override "FANCY_DATABASE=True"
```
#### Sample implementation
```bash
helm upgrade --install --namespace "$namespace" "$name" "$deployment_chart" \
--set "global.project-name=$project_name" \
--set "global.base-domain=$base-domain" \
--set "app.tag=$tag" \
--set "app.env.foo=$app_env_override_1" \
--set "app.bar=$deployment_override_1" \
--wait \
--timeout 300
```
### 🗃 archive
#### Description
"Archives" a deployment. This script should only free the computational resources used by the deployment ― it should remove the _Kubernetes Pods_, but not remove any _Persistent Volumes_ associated with the deployment. It is done this way to provide a period of time in which the user can recover a deployment in the state it was in.
Deleting the _Persistent Volume Claims_ should be done in the [`cleanup`](#-cleanup) script.
This script should in some sense be the inverse of [`create`](#-create) (up to _Persistent Volumes_).
This script receives only [the default command-line arguments](#general-behavior) as input.
#### Execution example
The script might be called something like this:
```bash
archive --project-name "Cactus store" --base-domain "cactus-store.com" --namespace "cactus" --name "orange-button"
```
#### Sample implementation
```bash
helm delete "$name" --purge
```
### ✅ check
#### Description
This script checks the status of the deployment.
If the script exits with `0`, it means that the deployment is healthy and up. If the script exits with a non-zero exit code, it means that the deployment is not healthy or down.
This script receives only [the default command-line arguments](#general-behavior) as input.
#### Execution example
The script might be called something like this:
```bash
check --project-name "Cactus store" --base-domain "cactus-store.com" --namespace "cactus" --name "orange-button"
```
#### Sample implementation
```bash
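# Describe the Kubernetes resources this deployment consists of and let
# kubedog track them; a non-zero exit code means the deployment is not healthy.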
echo "{\"Deployments\": [{\"ResourceName\": \"app-${name}\", \"Namespace\": \"${namespace}\"}], \"StatefulSets\": [{\"ResourceName\": \"db-${name}\", \"Namespace\": \"${namespace}\"}]}" | \
kubedog multitrack -t 3
```
### 🚮 cleanup
#### Description
Cleans up any persistent resources a deployment might have allocated, such as _Persistent Volumes_.
This script will always be called **after** [`archive`](#-archive) has been called on the given deployment.
This script receives only [the default command-line arguments](#general-behavior) as input.
#### Execution example
The script might be called something like this:
```bash
cleanup --project-name "Cactus store" --base-domain "cactus-store.com" --namespace "cactus" --name "orange-button"
```
#### Sample implementation
```bash
kubectl delete pvc -n $namespace -l "app=$name"
```
### 🗃✅ archive_check
#### Description
This script checks that a given deployment really has been archived and is no longer running.
If the script exits with `0`, it means that the deployment has been archived successfully. If the script exits with a non-zero exit code, it means that the deployment has not been archived.
This script receives only [the default command-line arguments](#general-behavior) as input.
#### Execution example
The script might be called something like this:
```bash
archive_check --project-name "Cactus store" --base-domain "cactus-store.com" --namespace "cactus" --name "orange-button"
```
#### Sample implementation
```bash
helm status $name
```
### 🐋✅ tag_check
#### Description
This script is called right before [`create`](#-create) and [`update`](#-update) scripts to check that a given _Docker Image tag_ exists. This can be useful since it can be very easy to make a typo in the _Docker Image tag_ and deployments are typically not instant. Implementing this script would allow the user of the _Web UI_ to instantly get an error specifically about a wrong _Docker Image tag_.
This script receives the following additional command-line arguments as input:
* `--tag` The _Docker Image tag_ that should be checked.
#### Execution example
The script might be called something like this:
```bash
tag_check --project-name "Cactus store" --base-domain "cactus-store.com" --namespace "cactus" --name "orange-button" --tag "c9bbc3fcc69e5aa094bca110c6f79419ab7be77a"
```
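There is no sample implementation for this script in this document. As a rough sketch, a check against a Docker registry could rely on `docker manifest inspect`, which fails when the requested tag does not exist; the registry and image name below are hypothetical placeholders:
```bash
# Exits with 0 if the tag exists in the registry and non-zero otherwise.
docker manifest inspect "registry.example.com/my-app:$tag" > /dev/null
```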
### 👀 info
#### Description
This script returns user-facing metadata about a deployment. Currently, the metadata consists of URLs that are relevant for the deployment. Things like the deployment URL, the URL to view logs, and the database URL.
The script should return the metadata as a two-column CSV table:
```
app,https://foo.example.com
api,https://api.foo.example.com
```
This script receives only [the default command-line arguments](#general-behavior) as input.
#### Execution example
The script might be called something like this:
```bash
info --project-name "Cactus store" --base-domain "cactus-store.com" --namespace "cactus" --name "orange-button"
```
#### Sample implementation
```bash
echo "app,https://${name}.example.com"
echo "api,https://api.${name}.example.com"
```
[configmap]: https://kubernetes.io/docs/concepts/configuration/configmap/

177
docs/en/Helm-based_deployment_guide.md vendored Normal file
View File

@ -0,0 +1,177 @@
# Helm-based deployment guide
<details>
<summary>Table of contents</summary>
- [The web application](#the-web-application)
- [Setting up Octopod](#setting-up-octopod)
- [Control scripts](#control-scripts)
- [A word about TLS](#a-word-about-tls)
- [Deploying Octopod](#deploying-octopod)
- [Testing out the deployment](#testing-out-the-deployment)
- [Setting up _octo CLI_](#setting-up-octo-cli)
- [Setting up certificates](#setting-up-certificates)
- [Setting up the API URL](#setting-up-the-api-url)
- [Creating a deployment](#creating-a-deployment)
- [Adding an override](#adding-an-override)
- [Updating the deployment version](#updating-the-deployment-version)
- [Changing the number of replicas](#changing-the-number-of-replicas)
</details>
In this guide, we will examine a very simple web application and explore setting up _Octopod_ to deploy it.
## The web application
The web application we will be using is a very simple application that serves a single endpoint `/`. The returned HTML markup contains the environment variables the executable has read from the environment. The only variables returned are the ones whose name starts with `APP_ENV`.
![](../images/first.png)
The source code can be found in the [examples/web-app](../../examples/web-app) folder of this repository.
You can also find a second version of the server in the [examples/web-app-v2](../../examples/web-app-v2) folder of this repository. The second version is identical to the first version with the only difference being that it returns the variables as an unordered list.
![](../images/second.png)
We have already built and pushed the two versions of the application into the [typeable/octopod-web-app-example](https://hub.docker.com/repository/docker/typeable/octopod-web-app-example) DockerHub registry under the `v1` and `v2` tags.
## Setting up Octopod
### Control scripts
The only thing you need to do to configure _Octopod_ to work with your application is to write appropriate [_control scripts_](Control_scripts.md) to manipulate your deployments. We have already written the appropriate _control scripts_ for this application. You can find them in the [examples/helm-based-control-scripts](../../examples/helm-based-control-scripts) folder of this repository. The scripts are written in the _Rust_ programming language.
The most interesting of them all is the [create.rs](../../examples/helm-based-control-scripts/src/bin/create.rs) script. The basic order of operations is:
1. Read the passed command-line arguments
2. Clone the repo to get the _charts_ used to deploy the application with _helm_
3. Generate the arguments that should be passed to _helm_
4. Call _helm_ with the downloaded _charts_ and the generated arguments
> 💡 **NOTE:** You might have noticed that there is no `update.rs`. That is because our application is stateless and packaged up into a single _chart_. This allows us to simply reuse the same script for both creating and updating a deployment. If you have a more complicated setup with a database, for example, you will most likely need a distinct implementation for `update`.
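For illustration, here is a rough bash sketch of those four steps. It is only a sketch: the real implementation is the Rust script linked above, the chart path is the one used by this example, and the parsing of command-line arguments is assumed to have already happened:
```bash
#!/usr/bin/env bash
set -euo pipefail

# 1. The command-line arguments (--name, --namespace, --tag, ...) are assumed
#    to have been parsed into the variables used below.

# 2. Clone the repository to get the charts used to deploy the application.
charts_dir=$(mktemp -d)
git clone --depth 1 https://github.com/typeable/octopod.git "$charts_dir"

# 3. Generate the arguments that should be passed to helm.
helm_args=(--install --namespace "$namespace" --set "app.tag=$tag" --wait)

# 4. Call helm with the downloaded charts and the generated arguments.
helm upgrade "$name" "$charts_dir/examples/web-app/charts/web-app" "${helm_args[@]}"
```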
### A word about TLS
If you are deploying Web applications, as we are here, you probably want to use TLS to encrypt your connections to your deployment. The most straightforward way of doing this is generating a separate TLS certificate for every deployment (for every subdomain). [_Cert Manager_][cert-manager] creates TLS certificates through [_Let's Encrypt_][lets-encrypt] and [_Let's Encrypt_][lets-encrypt] has [a limit on the number of certificates][lets-encrypt-rate-limits] you can issue within a given time interval. If you exceed this limit, you will start getting a _too many registrations for this IP_ error. If that is the case, moving the [_Cert Manager_][cert-manager] _Pod_ might help.
### Deploying Octopod
To deploy _Octopod_ you will need to follow the [_Octopod_ deployment guide](Octopod_deployment_guide.md). The only modification will be that you will replace the "Control Scripts Setup" section in the last step with the appropriate values.
These values point to a docker registry where we have already packaged up these _control scripts_ into a _Docker Image_.
```bash
#################################################
# Control Scripts Setup
#
# if you are just testing things out you can paste the values
# from the Helm Deployment Guide example
#################################################
# The name of the registry with control scripts
utils_registry="typeable"
# The name of the image with control scripts
utils_image="octopod-helm-example"
# The tag of the image to use
utils_image_tag="1.0"
```
## Testing out the deployment
### Setting up _octo CLI_
Using the Web UI is fairly straightforward, so we will examine creating deployments with the [_octo CLI_](Octo_user_guide.md).
#### Setting up certificates
You will need to get the paths to `client_cert.pem` and `client_key.pem` generated in the [Creating SSL certificates](Octopod_deployment_guide.md#creating-ssl-certificates) step and place them into `TLS_CERT_PATH` and `TLS_KEY_PATH` environment variables:
```bash
export TLS_CERT_PATH=/tmp/octopod/certs/client_cert.pem
export TLS_KEY_PATH=/tmp/octopod/certs/client_key.pem
```
#### Setting up the API URL
You will also need to set the power API URL (the `power_app_domain` value from the [Installing _Octopod Server_](Octopod_deployment_guide.md#installing-octopod-server) section) as the `OCTOPOD_URL` environment variable:
```bash
export OCTOPOD_URL=<power_app_domain>
```
### Creating a deployment
To create a deployment you can now run:
```bash
$ octo create -n hello-octopod -t v1 -e APP_ENV_KEY1=VALUE1
```
The options are:
- `-n hello-octopod` specifies that the name (subdomain) of the deployment should be `hello-octopod`
- `-t v1` specifies that the version (Docker Image Tag) of the application to deploy should be `v1`
- `-e APP_ENV_KEY1=VALUE1` adds an application-level key-value pair `APP_ENV_KEY1=VALUE1`
> 💡 **NOTE:** For more detail on _octo CLI_ options please see the [octo CLI user guide](Octo_user_guide.md).
This will run the `create` _control script_, which in turn will call `helm`. After waiting a couple of seconds you can visit `http://hello-octopod.<base_domain>` to see the running application:
![](../images/hello-octopod-1.png)
You can also see the deployed pod in the cluster using `kubectl`:
```bash
$ kubectl get pods -n deployment
NAME READY STATUS RESTARTS AGE
app-hello-octopod-8965856-qbwvq 1/1 Running 0 15s
```
### Adding an override
You can modify deployments by adding or removing overrides. To add a new application-level override run:
```bash
$ octo update -n hello-octopod -t v1 -e APP_ENV_KEY2=VALUE2
```
This will run the `update` _control script_ (which is identical to the `create` script in our case), which in turn will call `helm`. After waiting a few seconds you can visit the deployment URL again and see the redeployed version:
![](../images/hello-octopod-2.png)
### Updating the deployment version
You can change the version (_Docker Image Tag_) of your deployment like so:
```bash
$ octo update -n hello-octopod -t v2
```
After waiting a few seconds you can visit the deployment URL again and see the redeployed version:
![](../images/hello-octopod-3.png)
### Changing the number of replicas
You can change the number of replicas of your deployment (this is [essentially implemented in the _charts_ that we use](../../examples/web-app/charts/web-app/templates/deployment.yaml#L7)) like so:
```bash
$ octo update -n hello-octopod -t v2 -o replicas=3
```
`-o replicas=3` adds a deployment-level key-value pair (override) `replicas=3`.
You can verify that the new replicas have been deployed using `kubectl`:
```bash
$ kubectl get pods -n deployment
NAME READY STATUS RESTARTS AGE
app-hello-octopod-8965856-qbwvq 1/1 Running 0 97m
app-hello-octopod-8965856-v585c 1/1 Running 0 15s
app-hello-octopod-8965856-v88md 1/1 Running 0 15s
```
[cert-manager]: https://cert-manager.io/docs
[lets-encrypt]: https://letsencrypt.org
[lets-encrypt-rate-limits]: https://letsencrypt.org/docs/rate-limits

35
docs/en/Integration.md vendored Normal file
View File

@ -0,0 +1,35 @@
# Integration into existing CI/CD pipelines
<details>
<summary>Table of contents</summary>
- [✨ Creating deployments](#-creating-deployments)
- [🚀 Updating deployments](#-updating-deployments)
</details>
You likely already have some form of CI integration with your version control system, such as *GitHub Actions* or *Travis CI*, to run various checks on your code. Most of these services are set up by providing what is essentially just a shell script that is run under specific conditions.
You might want to automate deployments even further ― you might want deployments to be automatically created and updated when developers create and update *Pull Requests*.
_Octopod_ can be interacted with through the _octo CLI_ tool. This tool can be easily called from within a *CI* script.
## ✨ Creating deployments
To create a deployment (given that you have already obtained a *Docker Image* and uploaded it to your _Image Registry_ in one of the previous *CI* steps) you simply need to call _octo CLI_ with the following arguments:
```bash
octo create -n $NAME -t $IMAGE_TAG
```
`$NAME` is the name of the deployment you want to create. You can set it to be the name of the branch for example.
`$IMAGE_TAG` is the _tag_ of the docker image you want to deploy.
## 🚀 Updating deployments
Updating deployments is done using the same arguments, but you need to call the `update` command instead of the `create` command:
```bash
octo update -n $NAME -t $IMAGE_TAG
```
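Putting the two together, a CI step might look roughly like the following sketch. How `$NAME` and `$IMAGE_TAG` are derived (here, hypothetically, from the branch name and commit hash your CI system provides) depends entirely on your setup:
```bash
NAME="$BRANCH_NAME"      # e.g. the pull request branch name
IMAGE_TAG="$COMMIT_SHA"  # e.g. the tag pushed to the registry in a previous CI step

# Create the deployment on the first run, update it on subsequent runs.
if octo list | grep -qx "$NAME"; then
  octo update -n "$NAME" -t "$IMAGE_TAG"
else
  octo create -n "$NAME" -t "$IMAGE_TAG"
fi
```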

251
docs/en/Octo_user_guide.md vendored Normal file
View File

@ -0,0 +1,251 @@
# Octo CLI User Guide
<details>
<summary>Table of contents</summary>
- [Environment variables](#environment-variables)
- [`OCTOPOD_URL`](#octopod_url)
- [`TLS_CERT_PATH` and `TLS_KEY_PATH`](#tls_cert_path-and-tls_key_path)
- [Commands](#commands)
- [create](#create)
- [Description](#description)
- [Options](#options)
- [Usage example](#usage-example)
- [list](#list)
- [Description](#description-1)
- [Options](#options-1)
- [Usage example](#usage-example-1)
- [archive](#archive)
- [Description](#description-2)
- [Options](#options-2)
- [Usage example](#usage-example-2)
- [update](#update)
- [Description](#description-3)
- [Options](#options-3)
- [Usage example](#usage-example-3)
- [info](#info)
- [Description](#description-4)
- [Options](#options-4)
- [Usage example](#usage-example-4)
- [cleanup](#cleanup)
- [Description](#description-5)
- [Options](#options-5)
- [Usage example](#usage-example-5)
- [restore](#restore)
- [Description](#description-6)
- [Options](#options-6)
- [Usage example](#usage-example-6)
- [clean-archive](#clean-archive)
- [Description](#description-7)
- [Options](#options-7)
- [Usage example](#usage-example-7)
- [logs](#logs)
- [Description](#description-8)
- [Options](#options-8)
- [Usage example](#usage-example-8)
</details>
## Environment variables
All commands _octo CLI_ executes require the executable to send authenticated requests to the _Octopod Server_. For this purpose _octo CLI_ needs both a way to reach your particular instance of _Octopod Server_, and a way for _Octopod Server_ to identify that you are allowed to make the given request.
### `OCTOPOD_URL`
> **_NOTE:_** this argument is **required** for _octo CLI_ to function.
`OCTOPOD_URL` is an environment variable _octo CLI_ reads to find your particular _Octopod Server_ installation. For example, it could contain `https://octopod-power-app.example.com:443`.
### `TLS_CERT_PATH` and `TLS_KEY_PATH`
`TLS_CERT_PATH` should contain the path to the TLS certificate you generated when setting up _Octopod Server_ and `TLS_KEY_PATH` should contain the path to the TLS key you generated when setting up _Octopod Server_. These files are used to authenticate the requests to _Octopod Server_.
If these variables are not set, then _octo CLI_ tries to read the certificate from the path `./cert.pem`, and the key from the path `./key.pem`.
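For example, assuming the certificates were generated into `/tmp/octopod/certs` (as in the deployment guide) and the power user API is served from `power.octo.example.com`, a shell session could be configured like this:
```bash
export OCTOPOD_URL=https://power.octo.example.com:443
export TLS_CERT_PATH=/tmp/octopod/certs/client_cert.pem
export TLS_KEY_PATH=/tmp/octopod/certs/client_key.pem
```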
## Commands
> <a id="star"></a>***NOTE:*** If an argument is marked with a ⭐, it means that the argument can be passed any number of times.
### create
#### Description
Creates a new deployment.
#### Options
- `-n,--name ARG` The name of the deployment to create
- `-t,--tag ARG` The _Docker tag_ to deploy
- `-e,--set-app-env-override ARG` [⭐](#star) Set an application-level override. Expects a string in the format `KEY=VALUE`.
- `-o,--set-deployment-override ARG` [⭐](#star) Set a deployment-level override. Expects a string in the format `KEY=VALUE`.
#### Usage example
```bash
$ octo create -n hello-octopod -t ca5fd1fe08389f6422a506a59b68a5272ac37ba6 -e KEY1=VALUE1 -e KEY2=VALUE2
```
### list
#### Description
Gets a list of all deployment names both archived and active.
#### Options
This command does not require any arguments.
#### Usage example
```bash
$ octo list
hello-octopod
foo
bar
```
### archive
#### Description
Archives a given deployment.
#### Options
- `-n,--name ARG` The name of the deployment to archive.
#### Usage example
```bash
$ octo archive -n hello-octopod
```
### update
#### Description
Updates the parameters of a given deployment.
#### Options
- `-n,--name ARG` The name of the deployment to update
- `-t,--tag ARG` The new _Docker tag_ to update the deployment to
- `-e,--set-app-env-override ARG` [⭐](#star) Add a new or replace an existing application-level override. Expects a string in the format `KEY=VALUE`.
- `-E,--unset-app-env-override ARG` [⭐](#star) Removes an existing application-level override.
- `-o,--set-deployment-override ARG` [⭐](#star) Add a new or replace an existing deployment-level override. Expects a string in the format `KEY=VALUE`.
- `-O,--unset-deployment-override` [⭐](#star) Removes an existing deployment-level override.
#### Usage example
```bash
$ octo update -n octopod -t 015f16ecf398fcadaac508c1855ae160af0969c4 -E KEY1 -e KEY2=VALUE22222 -o KEY3=VALUE8
```
### info
#### Description
Gets detailed information about a deployment, including a log of all performed actions and the current parameters.
#### Options
- `-n,--name ARG` The name of the deployment
#### Usage example
```bash
$ octo info -n hello-octopod
Current settings:
tag: v1
application overrides: app=1 (Public)
deployment overrides: dep=2 (Public)
metadata:
app: https://ree.lvh.me
Last logs:
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━┳━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ Created at ┃ Action id ┃ Action ┃ Tag ┃ App overrides ┃ Deployment overrides ┃ Exit code ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━╇━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ 2020-11-02T17:14:03 │ 7 │ create │ v1 │ app=1 (Public) │ dep=2 (Public) │ 1 │
├─────────────────────┼───────────┼────────┼─────┼────────────────┼──────────────────────┼───────────┤
│ 2020-11-02T19:01:02 │ 8 │ update │ v1 │ app=1 (Public) │ dep=2 (Public) │ 1 │
└─────────────────────┴───────────┴────────┴─────┴────────────────┴──────────────────────┴───────────┘
```
### cleanup
#### Description
Frees all resources used by a given archived deployment. It will not succeed if the deployment is not archived. You can not recover the deployment after this command.
#### Options
- `-n,--name ARG` The name of the deployment
#### Usage example
```bash
$ octo cleanup -n hello-octopod
```
### restore
#### Description
Restores a previously archived deployment.
#### Options
- `-n,--name ARG` The name of the deployment
#### Usage example
```bash
$ octo restore -n hello-octopod
```
### clean-archive
#### Description
Calls `octo cleanup` on all deployments that were archived more than two weeks ago. This command is used in a cronjob which is automatically set up when deploying _Octopod Server_.
#### Options
This command does not have any options.
#### Usage example
```bash
$ octo clean-archive
```
### logs
#### Description
Outputs the logs collected while running an action on a deployment. For example when deploying or updating a deployment.
#### Options
- `-a,--action ARG` the id of the action to print logs for
- `-l,--log-type ARG` the types of logs that should be printed. Possible values are: `stdout`, `stderr`, `all`. The default value is `all`.
#### Usage example
```
$ octo logs -a 13
stdout:
stderr:
error: Found argument '--deployment-override' which wasn't expected, or isn't valid in this context
USAGE:
update --app-env-override <app-env-override>... --base-domain <base-domain> --name <name> --namespace <namespace> --project-name <project-name> --tag <tag>
For more information try --help
```

399
docs/en/Octopod_deployment_guide.md vendored Normal file
View File

@ -0,0 +1,399 @@
# Octopod Server deployment guide
<details>
<summary>Table of contents</summary>
- [Installing required utilities](#installing-required-utilities)
- [Setting up your cluster](#setting-up-your-cluster)
- [General utilities](#general-utilities)
- [Tiller (Helm)](#tiller-helm)
- [Cluster access privileges](#cluster-access-privileges)
- [A word about TLS](#a-word-about-tls)
- [Downloading project sources code](#downloading-project-sources-code)
- [Creating required namespaces](#creating-required-namespaces)
- [Creating required _Service Accounts_](#creating-required-service-accounts)
- [Creating the actual service account](#creating-the-actual-service-account)
- [Giving the appropriate _Service Account_ roles](#giving-the-appropriate-service-account-roles)
- [Web UI authentication secrets](#web-ui-authentication-secrets)
- [_octo CLI_ authentication certificates](#octo-cli-authentication-certificates)
- [Creating SSL certificates](#creating-ssl-certificates)
- [Enabling SSL passthrough](#enabling-ssl-passthrough)
- [Setting up DNS](#setting-up-dns)
- [Deploying _Octopod_ on localhost](#deploying-octopod-on-localhost)
- [Installing _Octopod_ infrastructure](#installing-octopod-infrastructure)
- [Installing the appropriate _Storage Class_](#installing-the-appropriate-storage-class)
- [Installing the actual infrastructure](#installing-the-actual-infrastructure)
- [Installing _Octopod Server_](#installing-octopod-server)
</details>
## Installing required utilities
Installing _Octopod Server_ in your cluster will require that you have the following tools installed on your system:
1. [_kubectl_][kubectl]
2. [_helm 2_][helm]
## Setting up your cluster
### General utilities
_Octopod Server_ requires the following utilities to be installed in your cluster:
1. [_Ingress Nginx_][ingress-nginx]
2. [_Cert Manager_][cert-manager]
_Octopod Server_ requires the following minimal resources to function properly: 2 CPU, 2 GB of RAM. Make sure you have sufficient resources in your cluster.
By default _Octopod Server_ will be deployed on nodes with the `role=stand` label. Please make sure you have the appropriate label set in your cluster:
```bash
kubectl label node <your_node> role=stand
```
### Tiller (Helm)
[_Tiller_][tiller] is a cluster-side service used by [_helm 2_][helm] to manage deployments. The easiest way to install it is using the following command:
```bash
helm init
```
#### Cluster access privileges
When installing _Octopod Server_ you might encounter [a problem with cluster access privileges](https://github.com/helm/helm/issues/5100) related to [_Tiller_][tiller].
To give sufficient privileges to [_Tiller_][tiller] you can use the following commands:
```bash
kubectl create -n kube-system serviceaccount tiller
kubectl --namespace kube-system create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl --namespace kube-system patch deploy tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
```
### A word about TLS
To function properly _Octopod_ needs to generate three TLS certificates for the three subdomains it will be using. [_Cert Manager_][cert-manager] creates TLS certificates through [_Let's Encrypt_][lets-encrypt] and [_Let's Encrypt_][lets-encrypt] has [a limit on the number of certificates][lets-encrypt-rate-limits] you can issue within a given time interval. If you exceed this limit, you will start getting a _too many registrations for this IP_ error. If that is the case, moving the [_Cert Manager_][cert-manager] _Pod_ might help.
## Downloading project sources code
To download the source code required to install _Octopod Server_ you will need to clone the git repository:
```bash
git clone https://github.com/typeable/octopod.git /tmp/octopod
```
## Creating required namespaces
_Octopod_ uses the following namespaces in your cluster:
1. `deployment`: as the name would suggest, your deployments will be installed in this namespace
2. `octopod`: this namespace will be used to install the _Octopod_ infrastructure
To create the two namespaces you can use these commands:
```bash
kubectl create namespace deployment
kubectl create namespace octopod
```
## Creating required [_Service Accounts_][kubernetes-service-account]
### Creating the actual service account
_Octopod Server_ requires an `octopod` [_Service Account_][kubernetes-service-account] to function. You can create it using the following command:
```bash
kubectl create -n octopod serviceaccount octopod
```
### Giving the appropriate _Service Account_ roles
1. If you are planning to use [_helm 2_][helm] in your [_Control scripts_](Control_scripts.md) to deploy your deployments, you will need to give appropriate permissions to the `octopod` _Service Account_:
```bash
cd /tmp/octopod/charts
helm install --name octopod-helm-access ./helm-access
```
2. If you are planning to delete [_Persistent Volume Claims_][kubernetes-pvc] in your [_Control scripts_](Control_scripts.md) (might be useful for the `cleanup` script), you will need to give appropriate permissions to the `octopod` _Service Account_:
```bash
cd /tmp/octopod/charts
helm install --name octopod-pvc-control ./pvc-control
```
3. If you are planning to use _Octopod_ to delete unused certificates in your [_Control scripts_](Control_scripts.md) (might be useful for the `cleanup` script), you will need to give appropriate permissions to the `octopod` _Service Account_:
```bash
cd /tmp/octopod/charts
helm install --name octopod-cert-control ./cert-control
```
4. If you are planning to use [_kubedog_][kubedog] to check the state of your deployments in your [_Control scripts_](Control_scripts.md) (might be useful for the `check` script), you will need to give appropriate permissions to the `octopod` _Service Account_:
```bash
cd /tmp/octopod/charts
helm install --name octopod-kubedog-access ./kubedog-access
```
## Web UI authentication secrets
[Authentication](Security_model.md#web-ui-authentication) between _Octopod Server_ and the _Web UI_ is done through _Basic Auth_. This implies that there needs to be a username and password associated with it.
You can generate the username and password, and push them into your cluster using the following command (of course you will want to generate a secure pair):
```bash
username="octopod"
password="password" # Please change it to a more secure password
kubectl create secret generic octopod-basic-auth -n octopod --from-literal=auth=$(htpasswd -bn $username $password)
```
## _octo CLI_ authentication certificates
### Creating SSL certificates
[Authentication](Security_model.md#octo-cli-authentication) between _octo CLI_ and _Octopod Server_ is performed through self-signed SSL certificates.
You can generate the certificates and push them into your cluster using the following commands:
```bash
mkdir certs
(cd certs && \
openssl req -x509 -newkey rsa:4096 -keyout server_key.pem -out server_cert.pem -nodes -subj "/CN=localhost/O=Server" && \
openssl req -newkey rsa:4096 -keyout client_key.pem -out client_csr.pem -nodes -subj "/CN=Client" && \
openssl x509 -req -in client_csr.pem -CA server_cert.pem -CAkey server_key.pem -out client_cert.pem -set_serial 01 -days 3650)
kubectl create configmap octopod-certs -n octopod --from-file=./certs
```
After executing these commands you will find a new `certs` directory containing the certificates used for authentication between _octo CLI_ and _Octopod Server_. `client_key.pem` and `client_cert.pem` should then be [passed to _octo CLI_ through environment variables](Octo_user_guide.md#tls_cert_path-and-tls_key_path).
### Enabling SSL passthrough
Since we use custom self-signed SSL certificates for authentication, we need the certificates used with requests to be passed to the server as-is, without any modification. This is not supported in default [_ingress-nginx_][ingress-nginx] configurations, so you will most likely need to modify the configuration manually.
Enabling SSL passthrough in [_ingress-nginx_][ingress-nginx] can be done by adding the `--enable-ssl-passthrough` command-line argument to the [_ingress-nginx_][ingress-nginx] config in your cluster.
To do this you can execute a command similar to this (you will need to look up the names of the namespace and the deployment in your particular cluster):
```bash
kubectl edit deploy -n ingress-nginx ingress-nginx-controller
```
An editor with a YAML config should open up. You will need to modify it to have, among other things, this parameter:
```yaml
spec:
...
template:
...
spec:
...
containers:
...
- args:
...
- --enable-ssl-passthrough
```
## Setting up DNS
You will need to set up DNS records to point subdomains of your domain to the IP address of your cluster. The DNS records should look something like this:
```
*.octo.example.com A 1.2.3.4
octo.example.com A 1.2.3.4
```
### Deploying _Octopod_ on localhost
If you are deploying locally and don't have a separate domain you are trying to set up, the lvh.me domain can be useful: it is set up to point to `localhost` and you can use it to work with subdomains. Even so, deploying a fully-functional version of _Octopod_ on `localhost` is non-trivial and will require modifying the deployment _Charts_ to disable HTTPS redirects. (This guide does not cover that.)
## Installing _Octopod_ infrastructure
### Installing the appropriate _Storage Class_
Before installing the infrastructure you will first need to make sure you have a [_Storage Class_][kubernetes-storage-classes] named `default` installed in your cluster. You can check installed [_Storage Classes_][kubernetes-storage-classes] with the following command:
```bash
kubectl get storageclass
```
If you do not have it, you will need to install it. Installing the [_Storage Class_][kubernetes-storage-classes] in [_minikube_][minikube] can be done with the following command (you will need to modify it to suit your cluster hosting provider):
```bash
cat <<EOF | kubectl apply -f-
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: default
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF
```
### Installing the actual infrastructure
The only infrastructure _Octopod_ currently requires is _PostgreSQL_. You can install it in your cluster using the following command:
```bash
cd /tmp/octopod/charts
helm upgrade --install octopod-infra ./octopod-infra --namespace octopod\
--wait --timeout 600 --debug
```
## Installing _Octopod Server_
To install _Octopod Server_ in your cluster you will need to customize the variables in the following script and run it:
```bash
cd /tmp/octopod/charts
#################################################
# Octopod Images Setup
#
# you probably don't need to change it
#################################################
registry="typeable"
tag="1.0"
image="octopod"
octo_image="octo"
#################################################
# General Octopod Setup
#################################################
# The name of your project only used to display in the Web UI
project_name="MyProject"
# The email used to register Let's Encrypt SSL certificates
acme_registration_email="certbot@example.com"
#################################################
# Control Scripts Setup
#
# if you are just testing things out you can paste the values
# from the Helm Deployment Guide example
#################################################
# The name of the registry with control scripts
utils_registry="registry_name"
# The name of the image with control scripts
utils_image="utils"
# The tag of the image to use
utils_image_tag="1.0"
#################################################
# Web UI OAuth Authentication
#
# These parameters are passed to ingress-nginx to
# enable authentication for user accessing the
# Web UI.
#
# You can use OAuth2 Proxy to provide OAuth2 authentication.
#
# For more information see the Security Model doc.
#
# You can leave both these variables blank to disable
# authentication in your Web UI altogether.
#################################################
# URL for the OAuth authentication service
auth_url="https://oauth.exmaple.com/oauth2/auth"
# URL for the login page on the OAuth authentication service
auth_signin="https://oauth.exmaple.com/oauth2/start?rd=/redirect/$http_host$request_uri"
#################################################
# Domain Setup
#################################################
# The domain from which the Web UI should be served
domain="octo.example.com"
# The domain from which the user API should be served
# (used by the Web UI)
app_domain="api.octo.example.com"
# The domain from which the WebSocket notification service should be served
# (used by the Web UI)
ws_domain="ws.octo.example.com"
# The domain from which the power user API should be served
# (used by octo CLI)
power_app_domain="power.octo.example.com"
# The domain under which deployment subdomains should be created
base_domain="octo.example.com"
#################################################
# Basic Auth Setup
#
# These parameters should match the ones used in the
# "Web UI authentication secrets" step
#################################################
username="octopod"
password="password"
#################################################
# Other Setup
#################################################
# NOTE: on macOS you will need to replace `sha256sum` with `shasum -a 256`
sha256_sum=$(sha256sum octopod/values.yaml octopod/templates/* | awk '{print $1}' | sha256sum | awk '{print $1}')
base64_of_username_and_password=$(echo -n "$username:$password" | base64)
status_update_timeout=600
#################################################
# Actual installation in the cluster
#################################################
helm upgrade --install octopod ./octopod \
--namespace octopod \
--set "global.deploy_checksum=$sha256_sum" \
--set "global.image_prefix=$registry" \
--set "global.image_tag=$tag" \
--set "global.image=$image" \
--set "global.octo_image=$octo_image" \
--set "global.utils_image_prefix=$utils_registry" \
--set "global.utils_image=$utils_image" \
--set "global.utils_image_tag=$utils_image_tag" \
--set "global.acme_registration_email=$acme_registration_email" \
--set "global.auth_url=$auth_url" \
--set "global.auth_signin=$auth_signin" \
--set "basic_auth_token=$base64_of_username_and_password" \
--set "project_name=$project_name" \
--set "domain=$domain" \
--set "app_domain=$app_domain" \
--set "ws_domain=$ws_domain" \
--set "power_app_domain=$power_app_domain" \
--set "base_domain=$base_domain" \
--set "status_update_timeout=$status_update_timeout" \
--wait --timeout 600 --debug
```
[kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
[helm]: https://v2.helm.sh/docs/using_helm/#installing-helm
[ingress-nginx]: https://kubernetes.github.io/ingress-nginx
[cert-manager]: https://cert-manager.io/docs/
[kubernetes-service-account]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account
[kubernetes-pvc]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims
[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes
[minikube]: https://kubernetes.io/ru/docs/tasks/tools/install-minikube/
[tiller]: https://v2.helm.sh/docs/install/
[kubedog]: https://github.com/werf/kubedog
[lets-encrypt]: https://letsencrypt.org
[lets-encrypt-rate-limits]: https://letsencrypt.org/docs/rate-limits

98
docs/en/Overview.md vendored Normal file
View File

@ -0,0 +1,98 @@
# Octopod overview
<details>
<summary>Table of contents</summary>
- [Intro](#intro)
- [🎯 The aim of Octopod](#-the-aim-of-octopod)
- [🔬 Example](#-example)
- [💽 The server](#-the-server)
- [🎨 Changing button colors](#-changing-button-colors)
- [😖 Why have such a complicated staging deployment?](#-why-have-such-a-complicated-staging-deployment)
- [🛠 The way Octopod is set up](#-the-way-octopod-is-set-up)
- [🎛️ CLI](#-cli)
- [🔒 Authentication in the UI](#-authentication-in-the-ui)
- [🤖 Automatic deployment / CD](#-automatic-deployment--cd)
- [📗 Glossary](#-glossary)
</details>
**NOTE: it is not recommended to use Octopod for managing production environments**
## Intro
_Octopod_ is a tool which implements the multi-staging deployment model (MSDM) on top of _Kubernetes_. _MSDM_ implies that every developed feature needs to be not only deployed in a separate environment for QA testing, but also needs to be updated when bugs found during testing are fixed and the feature is refined.
_Octopod_ exists to reduce the overhead in deploying and maintaining per-feature staging environments. This responsibility may otherwise fall to:
1. **DevOps engineers** this might seem natural since deploying and updating systems in new environments is typically the task of a DevOps engineer who has experience in system administration. However, developers and QA engineers would be blocked for additional periods while DevOps engineers deal with the additional load.
2. **Developers** ― they might take on the responsibility for deploying and maintaining their feature stagings ― this would most likely waste a lot of time since developers might not have the required experience.
The process of deploying and updating stagings is likely to be extremely similar across different developed features ― changing the behavior of a button and updating pricing calculations would probably be identical from the point of view of system administration ― a new version of the system needs to be deployed with the same default configuration (the staging configuration, as opposed to a production configuration).
## 🎯 The aim of Octopod
Octopod aims to factor out this similarity between staging deployments while still allowing a degree of configuration where per-feature setup is required. The result is a simple interface which allows users to manage staging deployments without any system administration expertise or, for that matter, even without deep technical expertise.
## 🔬 Example
### 💽 The server
You are developing a server, which is accessed through [_nginx_](https://www.nginx.com), and the server needs access to a [*Postgres*](https://www.postgresql.org) database and a [*Redis*](https://redis.io) database.
![](../diagrams/images/sample_architecture.png)
### 🎨 Changing button colors
Your server serves HTML to the browser, which displays two buttons. Both buttons currently have the same ugly color as the background, and you have two separate tasks: one task to make the first button orange and another task to make the second button green. (Note that this is a toy example ― imagine that these are two separate complex tasks.)
Now imagine that two different developers each completed one of the tasks, and you are now deploying the new and updated version of your server to a staging environment. You are very surprised when you find that for some reason the background of the whole page suddenly became pink. Every developer says that they did not make the change, and yet it is there. (Here the background color changing to pink denotes an undesirable change, which impacts the product in significant and apparent ways, and was not made intentionally.)
A way to mitigate this situation is to test each feature separately, in its own staging deployment, and verify which change made the page background pink. Ideally, you would check each feature before merging it into the final product (into the `master` branch, for example).
To check each feature before merging would require every developer to build the new version of the server and set up all required services: [_nginx_](https://www.nginx.com), [*Postgres*](https://www.postgresql.org), [*Redis*](https://redis.io). Developers would also have to manage access to the environments they set up ― set up SSL certificates, set up subdomains, make sure databases are not exposed, and make sure the connection between every component is secure and authenticated.
This is **a lot** of overhead just to test the color of a button. Note that most of the described work would be identical across the vast majority of features ― changing the deployment architecture is a relatively rare task in most projects. Databases, load balancing, caching, and proxying would be set up in much the same way for the majority of feature-specific stagings. The server itself is probably also compiled in exactly the same way for most features.
_Octopod_ aims to factor out the common parts of creating and deploying a staging.
If developers were using _Octopod_ to deploy stagings, literally the only thing needed from them would be to specify the git commit hash in the _Web UI_ of _Octopod_. The common infrastructure (shown in **blue**), which is the same across different stagings, would not require any additional setup. The only difference between the two button-feature stagings would be the server itself, which is where the color change was made. And that server is most likely also built in a uniform way, meaning this can be done automatically.
![](../diagrams/images/sample_deployment.png)
### 😖 Why have such a complicated staging deployment?
The purpose of having a staging deployment is to verify the correctness of an implementation of a task as it would behave in a production environment. After all, deploying in a production environment is the only real goal of implementing anything.
Having a staging deployment that is different from a production environment in any significant way can lead to unexpected behavior that was not obvious ― heavy caching of a request can lead to an inconsistent state and break a feature for example.
## 🛠 The way Octopod is set up
To integrate Octopod into your development workflow, a DevOps engineer needs to implement some common staging orchestration logic once for it to be reused by the whole development team. This can be done by implementing [*staging control scripts*](Control_scripts.md) in any programming language which can be executed in the Octopod environment. Statically linked executables don't depend on their environment at all, so languages such as [*Rust*](https://www.rust-lang.org) and [*Go*](https://golang.org) are a good fit.
When stagings are managed through the *Web UI*, *Octopod* executes appropriate _staging control scripts_ behind the scenes to set up your particular environment.
## 🎛️ CLI
For more in-depth control over the staging cluster, we also ship a CLI with _superuser privileges_ which allows a DevOps engineer to examine deployment logs to resolve issues, should they arise.
You can read more about the _octo CLI_ in the [octo CLI user guide](docs/../Octo_user_guide.md).
## 🔒 Authentication in the UI
Authentication to the _web UI_ can be set up through [_Ingress_](https://kubernetes.io/docs/concepts/services-networking/ingress/) which, for example, [supports OAuth authentication](https://kubernetes.github.io/ingress-nginx/examples/auth/oauth-external-auth/). This allows you to set up Octopod so that every developer in your GitHub organization has access to manage stagings without any additional permissions management.
## 🤖 Automatic deployment / CD
It is possible to set up your existing CI to automatically deploy new versions of your feature staging, reducing friction even further. This can be done by using the CLI to update a staging with the same name as the current branch in git for example. The CLI command can be executed straight in your CI script in services like GitHub Actions, Travis CI, Azure Pipelines, etc.
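As a rough sketch, such a CI step could look like the following. The `octo` subcommand and flag names here are assumptions for illustration only; refer to the octo CLI user guide and the integration guide linked below for the actual interface.
```bash
#!/usr/bin/env bash
# Hypothetical CI step: update the staging that corresponds to the current git branch.
# The `octo` subcommand and flag names are assumptions, not the documented interface.
set -euo pipefail

branch="${CI_BRANCH:-$(git rev-parse --abbrev-ref HEAD)}"   # branch name, typically provided by the CI service
tag="$(git rev-parse --short HEAD)"                         # assuming images are tagged with the commit hash

octo update --name "$branch" --tag "$tag"
```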
For more information see the [integration guide](Integration.md).
## 📗 Glossary
- _Octopod_ ― the deployment manager, this very system
- _octo CLI_ ― a command-line client, used to access Octopod with _superuser_ privileges
- _Octopod Server_ ― the server responsible for managing deployments
- _deployment control scripts_ ― scripts used to interact with your specific environment setup
- _web UI_ ― the interface developers, project managers, QA engineers, etc. use to manage stagings.
- <a id="overrides"></a>_overrides_ ― a set of environment variable key-value pairs, which have precedence over the default pairs set up by DevOps engineer. These environment variables are passed to your system during deployment.

122
docs/en/PM_case_study.md vendored Normal file
View File

@ -0,0 +1,122 @@
# Octopod Case Study
A deployment model is a critical part of any IT company and is usually
deeply integrated into multiple development processes. Having a good
deployment model lets successful companies create a well-tuned
development workflow, which is key to fast and reliable delivery of
business features.
Development models may vary significantly from one company to another.
However, the problems companies face when picking the right model
are usually pretty common.
before they hit production and become available for end users. We also
want the team to deliver in a timely manner, which means the ideal
workflow should exclude any blockers that force one team to wait while
another team finishes their work.
## The two models
### The 3-tiered model
A common approach is to have a 3-tier deployment model which implies
having Development, Staging, and Production Environments. Although
there are some variations, this model is used by the majority of
development teams in one form or another.
In such a 3-tier model, the Development server usually pulls the changes
from a master branch where developers merge their changes once they're
done with the implementation in the feature branch. Once the branch is
merged, the feature can be seen and tested on the Development server. If
the change makes the application unstable or does not work as expected,
a developer has to revert it, push the fix and then redeploy. Once the
feature is verified, the new code is integrated into a Staging
environment.
Even though this flow is pretty common, it has a few significant
downsides, which we will examine later in this document.
### The multi-staging model
A more interesting approach is a multi-staging model where a new Staging
server is created for each new feature branch. Staging servers are
accessible by unique URLs, like my-cool-feature.staging.company.com, and
are used for both development and QA verification. Although from the
developer's perspective the process is similar, with this approach each
feature can be implemented and tested independently, without the need
to be merged into the master branch first, which also makes testing a
release candidate on the Pre-Production server an independent process.
## Breaking features
### The 3-tiered model
One issue is that no matter how well developers test their feature, it
may still interact with someone else's code in unexpected ways. This means that a successful
verification of a particular feature on the Development server does not
guarantee that the feature won't unexpectedly break someone else's code.
If a critical part of the application, like authentication, gets broken,
this may block the testing process completely until the problem is
resolved. And to top it off, when this happens, it might not be
immediately clear whose change is to blame. Thus, everyone who merged
recently begins looking into their code, trying to see whether it was
their check-in that broke the server. Of course, this takes time and
blocks the development process as well until the culprit merge is found
and confirmed.
![](../images/break1.png)
### The multi-staging model
With the multi-staging model every deployed staging is separated from a known-to-be-good commit by exactly one feature implementation or bug fix. This means that if at any point we discover that a staging contains a critical bug, we will for sure know which feature broke the project. The "bad" feature can be identified without delaying the merge of "good" branches.
![](../images/break2.png)
## Configurable testing environments
Sometimes features require a special or dangerous environment to be tested in. For example, a feature might require handling a payment error that does not occur in the test payment processing environment, thus the feature will need to be tested in a production payment processing environment.
### The 3-tiered model
When QA is forced to test everything on a single staging deployment, all features that happen to be tested in the same deployment as the said feature will by necessity also have to be tested against the production payment processing environment. This is bad at least because it increases the likelihood of unintentional charges and might complicate the testing of other features.
![](../images/env1.png)
### The multi-staging model
With the multi-staging model this issue is mitigated completely, since every feature is tested in a completely separate environment that can be set up the way the feature requires.
![](../images/env2.png)
## Feature development lifecycle
### The 3-tiered model
Having features deployed and tested in wave-like cycles in tandem with the staging server being occupied by integration tests for a given release can increase the time interval between the developer submitting a feature for testing and getting feedback from QA. This leads to the developer losing the context of the task. Furthermore, an unfortunate series of events can significantly increase the time-to-production of a feature.
![](../images/dev1.png)
### The multi-staging model
With a multi-staging model the problem is lessened significantly. Allowing features to be deployed to staging environments independently reduces both the QA feedback time and the time-to-production of a feature under the same conditions.
> **NOTE:** Relative block widths have been kept consistent with the previous diagram.
![](../images/dev2.png)
## Octopod for the multi-staging model
While the multi-staging model has many upsides, having a dedicated Staging server for each new feature or bugfix usually
requires a more complicated environment and forces developers to spend
more time on deploying each piece of code. Difficulties in managing such
a swarm of servers often lead to introducing orchestration tools like
Kubernetes, which may not be easy to learn for everyone in the team,
especially when containers are built on top of AWS infrastructure which
implies using AWS-specific commands and tools. Thus, even though this
model provides a significant workflow improvement, it requires
developers to have certain DevOps expertise.
To overcome these limitations and let more teams use multi-staging
deployment models, we created Octopod. Octopod is a service
that, once installed and set up, supports your multi-staging development
workflow at a level that does not require deep technical knowledge. Octopod
simplifies the procedure of creating new servers and allows the
implementation of common CD solutions in just a few clicks.

15
docs/en/README.md vendored Normal file
View File

@ -0,0 +1,15 @@
# 🐙📑 Octopod documentation
## 🔭 High-level notes
- [🐙 Overview](Overview.md)
- [🧑‍🔬 Project management case study](PM_case_study.md)
- [🧑‍💻 Technical case study](Tech_case_study.md)
## 🛠️ Technical documentation
- [🏗 Technical architecture](Technical_architecture.md)
- [⚙️ Control script guide](Control_scripts.md)
- [🔧🐙 Octopod deployment guide](Octopod_deployment_guide.md)
- [🔧🚀 Helm-based Octopod project setup](Helm-based_deployment_guide.md)
- [🐙🎛 octo CLI user guide](Octo_user_guide.md)
- [🤖 CI integration](Integration.md)
- [🔒 Octopod security model](Security_model.md)

114
docs/en/Security_model.md vendored Normal file
View File

@ -0,0 +1,114 @@
# Security model
<details>
<summary>Table of contents</summary>
- [Octopod roles](#octopod-roles)
- [Kubernetes role-based access control](#kubernetes-role-based-access-control)
- [Privileges to delete certificates](#privileges-to-delete-certificates)
- [Privileges to delete _Persistent Volumes Claims_](#privileges-to-delete-persistent-volumes-claims)
- [Web UI authentication](#web-ui-authentication)
- [Web UI OAuth](#web-ui-oauth)
- [octo CLI authentication](#octo-cli-authentication)
</details>
## Octopod roles
There are two user roles in _Octopod_:
* _user_
* _admin_
| role | managing deployments | viewing deployment logs |
| :---: | :------------------: | :---------------------: |
| user | ✅ | ❌ |
| admin | ✅ | ✅ |
_Web UI_ users have the _user_ role.
_octo CLI_ users have the _admin_ role.
There is currently no way to give someone access to _octo CLI_ without giving them the _admin_ role since authentication is done through SSL certificates instead of through OAuth.
## Kubernetes role-based access control
_Octopod Server_ is deployed in the `octopod` _Kubernetes_ namespace. Deployments are deployed in the `deployments` namespace.
_Octopod Server_ uses the `octopod` [_Service Account_][kubernetes-service-account].
Freeing resources might require _Octopod Server_ / _control scripts_ to have privileges to delete certificates and [_Persistent Volumes Claims_][kubernetes-pvc]. (This depends on the specifics of the _Kubernetes_ setup and the _control scripts_.)
Access can be configured through [_RBAC_][kubernetes-rbac]:
### Privileges to delete certificates
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cert-control-clusterrole
rules:
  - apiGroups: ["cert-manager.io"]
    resources: ["certificates"]
    verbs: ["list", "delete", "deletecollection"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: octopod-cert-control-rolebinding
  namespace: deployments
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: cert-control-clusterrole
subjects:
  - kind: ServiceAccount
    name: octopod
    namespace: octopod
```
### Privileges to delete _Persistent Volumes Claims_
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pvc-control-clusterrole
rules:
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["list", "delete", "deletecollection"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: octopod-pvc-control-rolebinding
  namespace: deployments
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: pvc-control-clusterrole
subjects:
  - kind: ServiceAccount
    name: octopod
    namespace: octopod
```
## Web UI authentication
Authentication between the _Web UI_ and _Octopod Server_ is done through _Basic Auth_. The _Basic Auth_ token is read by the _Web UI_ after the page is loaded as part of [the config](../../charts/octopod/templates/octopod-nginx-configmap.yaml#L15-L20). By default, everything, including the config, can be accessed without any authentication. For ways of mitigating this, please see the next section.
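For reference, this appears to be the same kind of value as the `basic_auth_token` passed to the Octopod chart during installation: a base64 encoding of `username:password`. A minimal sketch of generating such a token (the credentials below are placeholders):
```bash
# Generate the Basic Auth token from a username and password.
# These credentials are placeholders; use the ones you chose when installing Octopod.
username="octopod"
password="change-me"
basic_auth_token=$(echo -n "$username:$password" | base64)
echo "$basic_auth_token"
```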
## Web UI OAuth
The [_Web UI_](Technical_architecture.md#-web-ui) on its own does not have any authentication whatsoever, meaning that anyone can open it and manage your deployments. Luckily, _Kubernetes_ [can be configured](../../charts/octopod/templates/octopod-ingress.yaml#L15-L21) to authenticate users before they get access to the _Web UI_. It can be set up to authenticate users through [_Ingress_](https://kubernetes.io/docs/concepts/services-networking/ingress/) which [supports external authentication services][kubernetes-ingress-nginx-external-auth]. You can set up [_OAuth2 Proxy_][oauth2-proxy] in your cluster to support numerous OAuth services. For example, if you use GitHub, you can set up [_OAuth2 Proxy_][oauth2-proxy] to use GitHub to automatically grant users access to Octopod when you add them to your organization in GitHub.
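As a sketch, assuming an _OAuth2 Proxy_ instance is already running at a placeholder address `oauth2-proxy.example.com`, the `global.auth_url` and `global.auth_signin` values used in the Octopod installation script appear to be the hook for wiring this up (the URLs below are assumptions about a typical _OAuth2 Proxy_ setup, not a verified configuration):
```bash
# Hypothetical values; substitute the address of your own OAuth2 Proxy deployment.
auth_url="https://oauth2-proxy.example.com/oauth2/auth"
auth_signin="https://oauth2-proxy.example.com/oauth2/start"

helm upgrade --install octopod ./octopod \
  --namespace octopod \
  --set "global.auth_url=$auth_url" \
  --set "global.auth_signin=$auth_signin" \
  --reuse-values
```
Passing `--reuse-values` keeps the rest of the previously installed configuration unchanged.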
## octo CLI authentication
Authentication between _octo CLI_ and _Octopod Server_ is done through an SSL certificate that is generated [when deploying _Octopod_](../en/Octopod_deployment_guide.md#creating-ssl-certificates).
[kubernetes-service-account]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account
[kubernetes-rbac]: https://kubernetes.io/docs/reference/access-authn-authz/rbac
[kubernetes-pvc]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims
[kubernetes-ingress-nginx-external-auth]: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#external-authentication
[oauth2-proxy]: https://oauth2-proxy.github.io/oauth2-proxy

74
docs/en/Tech_case_study.md vendored Normal file
View File

@ -0,0 +1,74 @@
# Octopod technical case study
<details>
<summary>Table of contents</summary>
- [Single staging model](#single-staging-model)
- [The multi-staging model](#the-multi-staging-model)
- [Implications](#implications)
- [Isolation ― separate configurations](#isolation--separate-configurations)
- [Isolation ― more information](#isolation--more-information)
- [Isolation ― independent state](#isolation--independent-state)
- [Isolation ― freedom to experiment](#isolation--freedom-to-experiment)
- [Infrastructure reuse](#infrastructure-reuse)
- [Continuous staging deployment](#continuous-staging-deployment)
</details>
In this case study, we consider developing a system that mostly consists of a server.
## Single staging model
This model implies having a single machine (physical or virtual) which is considered a staging environment. This single machine essentially becomes the contact point between Developers and QA engineers.
This in essence means that at any given point a single build of the service in a single configuration is available to the QA engineers.
## The multi-staging model
This model implies having a separate machine (physical or virtual) for *every* developed feature. Of course, deployments that are no longer needed will be removed.
This means that QA engineers can have access to an arbitrary number of independent builds and configurations of the product.
## Implications
### Isolation ― separate configurations
It is sometimes the case that certain features need a "special" environment to be tested in; for example, testing a feature might require payment errors that only occur in a production payment processing environment. Having a single staging will require either not testing that feature at all, or testing **all** "current" features with a production payment processor. This might prove to be more difficult than necessary, or might lead to other undesirable consequences.
"Current" is used here to denote features that have been completed after the last time QA engineers had the opportunity to test the staging of the given project. It is assumed that the completed features are merged into something analogous to a `master` branch in git.
A way to mitigate the issue might be to have fine-grained control over what features are allowed to be merged into the `master` branch so that when we deploy it, we will have only the one feature which needs the "special" environment, in a sense, batching features based on the environment they require. The main problem with this is that the payment processor is in all likelihood a tiny sliver, a single dimension of the whole space of possible environment setups. Trying to capture and institutionalize the whole breadth of a production environment in your development workflow seems like a futile endeavor.
Having per-feature stagings allows you to have completely separate and independent configurations for every single feature you develop and test at no additional cost.
### Isolation ― more information
Sometimes making a change in one part of the codebase can lead to unexpected behavior in seemingly unrelated parts of the project. For example, an implementation of a feature might break the login form altogether.
With a single staging, after discovering that the newly deployed version of your project breaks the login form, you will need to inspect every single feature implemented since the last deployment of your staging environment to find the offending feature. Depending on your workflow, that can be a large number of features.
With per-feature stagings, you will know precisely which change broke the login form, since every single deployment differs from a known-to-be-stable version by *exactly* one feature implementation. In most cases, if you detect an unexpected breakage on one of the stagings, you know exactly which feature broke it.
### Isolation ― independent state
Some developed features might require an intricate and fragile state to be properly tested; for example, you might want to do something special for your 1000th customer each day. (This is a very contrived example. In reality, there will be numerous more subtle situations.) With a single staging environment, it might be very easy to trigger the desired state accidentally while testing a different feature. Having per-feature stagings implies that every staging has a separate state, so this problem is mitigated.
As a bonus, it reduces the amount of data on the staging ― this might make reproducing found bugs easier for the developer, since the database would have orders of magnitude less data, and only the data relevant to the feature will be present.
### Isolation ― freedom to experiment
Sometimes it can be useful for a business to test out an idea that might end up not being suitable for production at this time for whatever reason. (We are assuming that the feature is such that this becomes obvious only after implementing a prototype.)
When you have a single staging, the only real way to test out an idea is to merge it into the rest of the codebase and deploy it to the staging in the usual way. If the feature is deemed an improvement, all is well: you just continue development. If it is decided that the feature should not be pushed to production, you now need to "unmerge" the changes from the rest of the codebase. Depending on the timeline, other features might have been implemented that rely on that code. In most cases rolling a feature back is a non-trivial task.
With per-feature stagings, you could, in essence, make a branch of your whole product, and experiment with it however you like without fear that the changes will be relied upon by the rest of the team. Rolling back a feature becomes as easy as removing the git branch.
### Infrastructure reuse
When setting up Octopod you will need to implement the exact steps required to set up your whole project infrastructure in a reliable and reproducible way. This clarity, which is otherwise most likely absent due to ad-hoc deployment procedures, allows you to bootstrap new projects quickly. New projects will likely reuse some of the technologies your teams have experience with on other projects; you will be able to easily copy over the already implemented parts of the infrastructure.
### Continuous staging deployment
It is easy to set up continuous deployment procedures with Octopod. (Continuous deployment here refers to the process of automatically deploying branches of your repository when they are updated.) This allows you to optimize your development workflow even further.
A not-so-obvious usage might be to set up a staging to be automatically updated with the `master` branch of your project repository. This has the advantage that anyone can at any time easily test the behavior of the `master` branch.
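As an illustration, a CI job triggered on every push to `master` could keep such a staging up to date with a single CLI call. The `octo` subcommand and flag names below are assumptions; see the octo CLI user guide for the actual interface.
```bash
# Hypothetical CI step run on every push to master: keep the "master" staging current.
# The `octo` subcommand and flag names are assumptions, not the documented interface.
octo update --name master --tag "$(git rev-parse --short HEAD)"
```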

292
docs/en/Technical_architecture.md vendored Normal file
View File

@ -0,0 +1,292 @@
# Technical architecture
<details>
<summary>Table of contents</summary>
- [⚒️ Used tools](#-used-tools)
- [📐 App architecture](#-app-architecture)
- [🖥 Web UI](#-web-ui)
- [🐙 Octopod Server](#-octopod-server)
- [🐘 PostgreSQL](#-postgresql)
- [🎛 octo CLI](#-octo-cli)
- [📑 Control scripts](#-control-scripts)
- [🚮⏲ Clean Archive CronJob](#-clean-archive-cronjob)
- [Kube API Server](#kube-api-server)
- [📦 Octopod Distribution model](#-octopod-distribution-model)
- [Process view](#process-view)
- [✨ Create](#-create)
- [🔧 Update](#-update)
- [🗃 Archive](#-archive)
- [🚮 Cleanup](#-cleanup)
- [🔁 Restore](#-restore)
- [👨‍💻👩‍💻 How we use it](#-how-we-use-it)
- [🗂️ Deployment state transitions](#-deployment-state-transitions)
</details>
## ⚒️ Used tools
The main goal of _Octopod_ is to simplify deployment in [_Kubernetes_][kube].
When developing _Octopod_ we expected that _Octopod_ itself would also be deployed with [_Kubernetes_][kube].
## 📐 App architecture
Users can interact with _Octopod_ through:
1. the [_Web UI_](#-web-ui) (expected to be used by developers, project managers, QA engineers, etc.)
2. the [_octo CLI_](#-octo-cli) (expected to be used by DevOps engineers and programmatically, e. g. on CI)
Interaction between _Octopod_ and [_Kubernetes_][kube] is done entirely through the [_control scripts_](#-control-scripts). This allows _Octopod_ to be adapted for use in practically any deployment setup.
_Octopod_ stores all data about deployments and all performed operations in [_PostgreSQL_](#-postgresql).
![App architecture](../diagrams/images/app-architecture.png)
### 🖥 Web UI
_Web UI_ ― the user interface used to manipulate deployments. It interacts with [_Octopod Server_](#-octopod-server) through HTTP/1.1 requests and receives events from [_Octopod Server_](#-octopod-server) through _Websockets_. Authentication between _Web UI_ and [_Octopod Server_](#-octopod-server) is done through Basic Auth. The Basic Auth token is read from a *JSON* config which is requested when the page is loaded. Access to the config should be configured through [_Ingress_][ingress].
The interface does not contain technical details related to administering deployments ― managing deployments is done in a simple way. The interface is geared towards being used by developers of any level, QA engineers, project managers, and people without a technical background.
### 🐙 Octopod Server
_Octopod Server_ ― the server that processes deployment management requests and delegates [_Kubernetes_][kube]-specific logic to [_control scripts_](#-control-scripts).
The server receives commands from the [_octo CLI_](#-octo-cli) and the [_Web UI_](#-web-ui) through HTTP/1.1 and updates the state of deployments. The server also sends updates to the [_Web UI_](#-web-ui) through _Websockets_. _Octopod Server_ interacts with [_Kube API Server_](#kube-api-server) through the [_control scripts_](#-control-scripts). Settings, deployment states and user action logs are stored in [_PostgreSQL_](#-postgresql).
### 🐘 PostgreSQL
[_PostgreSQL_](https://www.postgresql.org) ― the DBMS used to store settings, deployment states and user action logs.
### 🎛 octo CLI
_octo CLI_ ― a command-line interface used to manage deployments. It sends HTTP/1.1 requests to [_Octopod Server_](#-octopod-server). The requests are [authenticated through SSL certificates](Security_model.md#octo-cli-authentication).
It can perform all actions available in the [_Web UI_](#-web-ui), but also has access to view deployment logs.
The CLI is expected to be used by DevOps engineers, but can also be used if it is necessary to automate deployment management in some way, for example [in CI scripts](Integration.md).
### 📑 Control scripts
_Control scripts_ ― a _Docker Container_ with executables which encapsulates all of the logic of interacting with the [_Kube API Server_](#kube-api-server), cloud providers, deployments, version control, etc.
This is necessary to make _Octopod_ itself independent of any particular deployment setup ― it can be adapted to work with practically any environment.
When the [_Octopod Server_](#-octopod-server) _Pod_ starts, the contents of the *control scripts* container are copied into the _Octopod Server_ container file system. This means that the executables need to be either statically linked or interpreted through _Bash_, since they need to run in the _Octopod Server_ container environment.
These [scripts need to be implemented](Control_scripts.md) to deploy _Octopod_.
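As an illustration only, a minimal _create_ script written in _Bash_ might look roughly like the sketch below. The argument names it parses are assumptions based on the inputs described in this document; the actual interface is defined in the [control scripts documentation](Control_scripts.md).
```bash
#!/bin/bash
# Hypothetical sketch of a "create" control script.
# Argument names are assumptions; see Control_scripts.md for the real interface.
set -e

while [ $# -gt 0 ]; do
  case "$1" in
    --namespace) namespace="$2"; shift 2 ;;
    --name)      name="$2";      shift 2 ;;
    --tag)       tag="$2";       shift 2 ;;
    *)           shift ;;
  esac
done

# Delegate the actual work to helm, as in the examples further down this page.
# "/deployment_chart" is a placeholder path to your project's chart.
helm upgrade --install --namespace "$namespace" "$name" /deployment_chart \
  --set "app.tag=$tag" \
  --wait --timeout 300
```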
### 🚮⏲ Clean Archive CronJob
_Clean Archive CronJob_ ― a CronJob which runs every hour and deletes archived deployments older than 14 days. This is done by calling the [_octo CLI_](#-octo-cli).
It is necessary because "deleting" (archiving) deployments should only free the occupied computational resources ― _Persistent Volumes_ should not be freed when a deployment is archived. This gives us a window of time in which a deployment can be recovered in the state it was in before being archived.
### Kube API Server
[Kube API Server](https://kubernetes.io/docs/concepts/overview/kubernetes-api/) ― an API server in [_Kubernetes_][kube] which should be called from [_control scripts_](#-control-scripts).
## 📦 Octopod Distribution model
[_octo CLI_](#-octo-cli) is distributed as a *statically linked executable*. The prebuilt binaries can be found in the "Releases" tab of the GitHub repository.
[_Octopod Server_](#-octopod-server) and [_Web UI_](#-web-ui) are distributed as a single _Docker Image_. [_Charts_][chart] are used [to deploy](Octopod_deployment_guide.md) it in [_Kubernetes_][kube].
A _Docker Image_ with [_control scripts_](#-control-scripts) should be provided by the user. They are available in [our _Docker Hub_ registry](https://hub.docker.com/orgs/typeable/repositories).
## Process view
Here we provide sequence diagrams for every basic operation that can be performed in _Octopod_. These operations call [_control scripts_](#-control-scripts). On the diagrams, they are labeled as _ControlScripts_.
### ✨ Create
_Create_ creates a new deployment. The main inputs include the name of the deployment, the _Docker Image tag_ and optional overrides. A more detailed description can be found in the [control scripts documentation](Control_scripts.md#-create).
The arguments are forwarded to the [_create_](Control_scripts.md#-create) script which in turn creates the deployment in the _Kubernetes cluster_. It might call something like:
```bash
helm upgrade --install --namespace "$namespace" "$name" "$deployment_chart" \
--set "global.project-name=$project_name" \
--set "global.base-domain=$base-domain" \
--set "app.tag=$tag" \
--set "app.env.foo=$app_env_override_1" \
--set "app.bar=$deployment_override_1" \
--wait \
--timeout 300
```
<details>
<summary>Create via CLI sequence diagram</summary>
![Create](../diagrams/images/technical-architecture-create-via-cli.png)
</details>
<details>
<summary>Create via UI sequence diagram</summary>
![Create](../diagrams/images/technical-architecture-create-via-ui.png)
</details>
### 🔧 Update
_Update_ updates an existing deployment. The main inputs include the name of the deployment, the _Docker Image tag_ and optional overrides. A more detailed description can be found in the [control scripts documentation](Control_scripts.md#-update).
[_Overrides_](Overview.md#overrides) are read from the database and merged with the new changes. All arguments are forwarded to the [_update_](Control_scripts.md#-update) script which in turn updates the specified deployment with the new parameters in the _Kubernetes cluster_. It might call something like:
```bash
helm upgrade --install --namespace "$namespace" "$name" "$deployment_chart" \
--set "global.project-name=$project_name" \
--set "global.base-domain=$base-domain" \
--set "app.tag=$tag" \
--set "app.env.foo=$app_env_override_1" \
--set "app.bar=$deployment_override_1" \
--wait \
--timeout 300
```
<details>
<summary>Update via CLI sequence diagram</summary>
![Update](../diagrams/images/technical-architecture-update-via-cli.png)
</details>
<details>
<summary>Update via UI sequence diagram</summary>
![Update](../diagrams/images/technical-architecture-update-via-ui.png)
</details>
### 🗃 Archive
_Delete_ archives a deployment. It should only free the computational resources (_Pods_). _Persistent Volumes_ should not be deleted ― they are cleared in the [_cleanup_](#-cleanup) process. This operation can be undone with the [_restore_](#-restore) command.
The main argument is the name that identifies the deployment. A more detailed description can be found in the [control scripts documentation](Control_scripts.md#-archive).
The arguments are forwarded to the [_delete_](Control_scripts.md#-archive) script which in turn frees the computational resources. It might call something like:
```bash
helm delete "$name" --purge
```
<details>
<summary>Archive via CLI sequence diagram</summary>
![Archive](../diagrams/images/technical-architecture-archive-via-cli.png)
</details>
<details>
<summary>Archive via UI sequence diagram</summary>
![Archive](../diagrams/images/technical-architecture-archive-via-ui.png)
</details>
### 🚮 Cleanup
_Cleanup_ releases **all** resources captured by the deployment.
The main argument is the name that identifies the deployment. A more detailed description can be found in the [control scripts documentation](Control_scripts.md#-cleanup). It can only be called after [_archive_](#-archive) has been executed.
The arguments are forwarded to the [_cleanup_](Control_scripts.md#-cleanup) script which in turn frees all resources captured by the given deployment. It might call something like:
```bash
kubectl delete pvc -n "$namespace" "$name-postgres-pvc"
kubectl delete certificate -n "$namespace" "$name-postgres-cert"
```
<details>
<summary>Cleanup via CLI sequence diagram</summary>
![Cleanup](../diagrams/images/technical-architecture-cleanup-via-cli.png)
</details>
<details>
<summary>Cleanup via UI sequence diagram</summary>
![Cleanup](../diagrams/images/technical-architecture-cleanup-via-ui.png)
</details>
### 🔁 Restore
_restore_ restores an archived deployment to the state it was last in. It calls the same _script_ that is called in [_create_](#-create).
The main argument is the name that identifies the deployment. A more detailed description can be found in the [control scripts documentation](Control_scripts.md#-create). It can only be called after [_archive_](#-archive) has been executed.
All necessary setup information is read from the database: [_overrides_](Overview.md#overrides) and the _Docker Image tag_. The arguments are forwarded to the [_create_](Control_scripts.md#-create) script which in turn recreates the deployment. It might call something like:
```bash
helm upgrade --install --namespace "$namespace" "$name" "$deployment_chart" \
--set "global.project-name=$project_name" \
--set "global.base-domain=$base-domain" \
--set "app.tag=$tag" \
--set "app.env.foo=$app_env_override_1" \
--set "app.bar=$deployment_override_1" \
--wait \
--timeout 300
```
<details>
<summary>Restore via CLI sequence diagram</summary>
![Restore](../diagrams/images/technical-architecture-restore-via-cli.png)
</details>
<details>
<summary>Restore via UI sequence diagram</summary>
![Restore](../diagrams/images/technical-architecture-restore-via-ui.png)
</details>
## 👨‍💻👩‍💻 How we use it
We deploy several separate [_Kubernetes_][kube] clusters:
- We have separate clusters for every product we deploy
- We also separate _production_ and _staging_ clusters
That makes two clusters per product.
So we get a cluster matrix similar to the following table, where each cell is a separate cluster:
| | Staging (Has _Octopod_ installed) | Production (_Octopod_ not installed) |
| ------------------- | --------------------------------- | ------------------------------------ |
| **Cactus shop** | 🟩 🐙 Cluster *A* | 🟨 Cluster *B* |
| **Pottery service** | 🟦 🐙 Cluster *C* | 🟪 Cluster *D* |
| ... | ... | ... |
Every color depicts a separate cluster. A 🐙 indicates that _Octopod_ is installed in that cluster.
Every _staging_ cluster has a separate _Octopod_ installation with separate interfaces to manage the deployments.
## 🗂️ Deployment state transitions
A deployment can exist in one of six states:
1. *Running*
2. *Failure*
3. *CreatePending*
4. *UpdatePending*
5. *DeletePending*
6. *Archived*
_Running_, _Failure_, _Archived_ states are "permanent", meaning the deployment is not in the process of executing a command.
*CreatePending*, *UpdatePending*, *DeletePending* states are temporary, meaning the deployment is currently in the process of executing a deployment command.
![Deployment Statuses](../diagrams/images/technical-architecture-deployment-states-fsm.png)
<!-- [kubectl]: https://kubernetes.io/docs/reference/kubectl/ -->
<!-- [helm]: https://helm.sh -->
<!-- [kubedog]: https://github.com/werf/kubedog -->
[kube]: https://kubernetes.io
[chart]: https://helm.sh/docs/topics/charts/
[ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/

BIN
docs/images/break1.png vendored Normal file

Binary file not shown.


BIN
docs/images/break2.png vendored Normal file

Binary file not shown.


Some files were not shown because too many files have changed in this diff.