diff --git a/nixos/hardware/index.html b/nixos/hardware/index.html
index debfc9f..567c2fb 100644
--- a/nixos/hardware/index.html
+++ b/nixos/hardware/index.html
@@ -1244,7 +1244,9 @@

Enables cloud-init but turns off the non-working DHCP.

nixosModules.hardware-hetzner-cloud

Hardware configuration for https://www.hetzner.com/cloud instances.

-The main difference here is that cloud-init is enabled.
+The main difference here is that:
+1. cloud-init is enabled.
+2. the QEMU guest agent is running, to allow password resets to function.

nixosModules.hardware-hetzner-cloud-arm

Hardware configuration for https://www.hetzner.com/cloud ARM instances.

The main difference from nixosModules.hardware-hetzner-cloud is that it uses systemd-boot by default.

diff --git a/search/search_index.json b/search/search_index.json
index ed3faf2..9924bc3 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":"

Welcome!

SrvOS is a collection of NixOS modules that are optimized for servers. It includes many lessons that we have gained over the years while deploying servers for our customers. Since we like to share, we hope that this project will be useful to you.

To get started, read the introductory tutorial, then check the User Guide for more information.

"},{"location":"faq/","title":"FAQ","text":"

Some questions and answers that haven't been integrated into the documentation yet.

"},{"location":"faq/#what-version-of-nixos-should-i-use","title":"What version of NixOS should I use?","text":"

SrvOS is currently tested against nixos-unstable and the latest NixOS release. SrvOS itself is automatically updated and tested against the latest versions of these channels once a week.

If you want to make sure to use a tested version, use the \"follows\" mechanism of Nix flakes to pull the same nixpkgs version as SrvOS:

{\n  inputs.srvos.url = \"github:nix-community/srvos\";\n  # Use the version of nixpkgs that has been tested to work with SrvOS\n  inputs.nixpkgs.follows = \"srvos/nixpkgs\";\n}\n
"},{"location":"getting_started/","title":"Getting Started with SrvOS","text":"

This project is designed to work in combination with the Linux distribution NixOS, or with nix-darwin on macOS.

In this documentation, we expect the reader to already be familiar with the base operating system, and we introduce how to compose it with our own extensions.

For NixOS, continue reading here; for nix-darwin/macOS, read this.

"},{"location":"github_actions_runner/","title":"GitHub Actions Runner","text":"

GitHub Actions runners are processes that execute the automated jobs you specify in your GitHub Actions workflows. These runners can be hosted on GitHub's infrastructure or on your own. Self-hosted runners run for your project only and are available at no additional cost.

This article looks at how to install a GitHub runner in your own NixOS infrastructure, making sure the environment is scalable and secure.

We have built a NixOS module that installs one or more self-hosted GitHub Actions runners, along with a cachix watch-store service, with the most secure defaults.

NOTE: if you intend to run NixOS VM tests, make sure your hosting provider supports nested virtualization or use bare-metal hosts; otherwise your tests will take a long time to execute.

"},{"location":"github_actions_runner/#authentication","title":"Authentication","text":"

To use a self-hosted GitHub Actions runner, you will need to register the runner with your GitHub account or organization. There are three different ways a self-hosted runner can register itself on GitHub:

In this document, we will describe the most secure option: how to connect using a new GitHub App in your organization.

To ensure that you have complete control over the permissions that the app requires, you should create your own GitHub Application.

First, go to the settings page of your organization: https://github.com/organizations/<YOUR ORGANIZATION>/settings/apps

Once the app is created, its settings page will be presented. Scroll to the Private keys section and click the button labeled Generate a private key. Save the generated PEM-encoded private key securely; you will need it when you configure the CI. You should also save the generated GitHub App ID.

Once created, you should also limit the usage of this GitHub App to your CI hosts' public IPs (IPv4 and IPv6).

The application can now be installed in your organization:

You can now use the NixOS role to install and configure the self-hosted GitHub runner on your NixOS CI host.

If someone else is configuring the runner for you, you will need to provide them with the generated PEM-encoded private key and the GitHub App ID.

You can find more information in the Official GitHub App creation documentation.

"},{"location":"github_actions_runner/#using-the-nixos-module","title":"Using the NixOS module","text":"

The module has been created as a role. Roles are used to define the specific purpose of a node, making it easy to manage and scale your infrastructure.

The following options must be configured:

url: the full URL of your organization or repository. This URL has to match the location where you installed the GitHub App.

count: the number of runners you want to start on the host.

githubApp.id: the ID of the GitHub App that was created.

githubApp.login: the name of your organization / user where the GitHub App was registered.

githubApp.privateKeyFile: the path to the file containing the GitHub App's generated PEM-encoded private key. This file should be present on the host and deployed as a secret (using sops-nix or agenix).

cachix.cacheName: the name of your cachix organization.

cachix.tokenFile: the path to the file containing your cachix token. This file should also be present on the host and deployed as a secret (using sops-nix or agenix).

Example of a module configuring 12 GitHub runners:

roles.github-actions-runner = {\n  url = \"https://github.com/<YOUR ORGANIZATION>\";\n  count = 12;\n  name = \"github-runner\";\n  githubApp = {\n    id = \"<YOUR GENERATED APP ID>\";\n    login = \"<YOUR ORGANIZATION>\";\n    privateKeyFile = config.age.secrets.github-app-runner-private-key.path;\n  };\n  cachix.cacheName = \"<YOUR CACHIX ORGANIZATION>\";\n  cachix.tokenFile = config.age.secrets.cachixToken.path;\n};\n
"},{"location":"github_actions_runner/#scaling","title":"Scaling","text":"

There are multiple ways to scale your GitHub runners, such as increasing the number of hosts or increasing the number of services on a single host. All services are completely isolated from each other, so there is no real distinction between the two approaches. Base your decision on the compute and memory your project needs.

You now have a fully functional self-hosted runner running on your NixOS infrastructure. If you need any further assistance in managing or improving your CI workflows with Nix, don't hesitate to contact us. Our team of experts is here to help you optimize your CI/CD pipelines and streamline your development process.

"},{"location":"help/","title":"Getting help","text":""},{"location":"help/#bugs","title":"Bugs","text":"

If you find a bug, feel free to create a new GitHub issue.

"},{"location":"help/#feature-development","title":"Feature development","text":"

For dedicated help or priority support, we are also available. Here is the best place to contact us: https://numtide.com/contact/.

"},{"location":"user_guide/","title":"User guide","text":"

This part of the documentation provides reference material for day-to-day users. Use the navigation menu to jump around.

"},{"location":"darwin/getting_started/","title":"Using SrvOS with nix-darwin","text":""},{"location":"darwin/getting_started/#finding-your-way-around","title":"Finding your way around","text":"

This project exports four big categories of modules which are useful for defining a server configuration:

"},{"location":"darwin/getting_started/#example","title":"Example","text":"

Combining all of those together, here is what your flake.nix might look like to deploy a GitHub Actions runner on a macOS machine:

{\n  description = \"My machines flakes\";\n  inputs = {\n    srvos.url = \"github:nix-community/srvos/darwin-support\";\n    # Use the version of nixpkgs that has been tested to work with SrvOS\n    # Alternatively we also support the latest nixos release and unstable\n    nixpkgs.follows = \"srvos/nixpkgs\";\n    nix-darwin.url = \"github:LnL7/nix-darwin\";\n    nix-darwin.inputs.nixpkgs.follows = \"srvos/nixpkgs\";\n  };\n  outputs = { srvos, nix-darwin, ... }: {\n    darwinConfigurations.myHost = nix-darwin.lib.darwinSystem {\n      modules = [\n        # This machine is a server (i.e. CI runner)\n        srvos.darwinModules.server\n        # If a machine is a workstation or laptop, use this instead\n        # srvos.darwinModules.desktop\n\n        # Configured with extra terminfos\n        srvos.darwinModules.mixins-terminfo\n        # Finally add your configuration here\n        ./myHost.nix\n      ];\n    };\n  };\n}\n
"},{"location":"darwin/getting_started/#continue","title":"Continue","text":"

Now that we have gone over the high-level details, you should have an idea of how to use this project.

To dig further, take a look at the User guide.

"},{"location":"darwin/mixins/","title":"Configuration mixins","text":"

Config extensions for a given machine.

One or more can be included per Darwin configuration.

"},{"location":"darwin/mixins/#darwimodulesmixins-telegraf","title":"darwiModules.mixins-telegraf","text":"

Enables a generic telegraf configuration. See nixosModules.mixins-prometheus for monitoring rules targeting this telegraf configuration.

"},{"location":"darwin/mixins/#darwinmodulesmixins-terminfo","title":"darwinModules.mixins-terminfo","text":"

Extends the terminfo database with commonly used terminal emulators. Terminfo is used by terminal applications to infer the features supported by the terminal. This is useful when connecting to a server via SSH.

"},{"location":"darwin/mixins/#darwinmodulesmixins-nix-experimental","title":"darwinModules.mixins-nix-experimental","text":"

Enables all experimental features in Nix that are known to be safe to use (i.e. they are only used when explicitly requested in a build).

"},{"location":"darwin/mixins/#darwinmodulesmixins-trusted-nix-caches","title":"darwinModules.mixins-trusted-nix-caches","text":"

Adds the common list of public Nix binary caches that we trust.

"},{"location":"darwin/type/","title":"Machine type","text":"

These high-level modules are used to define the type of machine.

We expect only one of those to be imported per Darwin configuration.

"},{"location":"darwin/type/#common-darwinmodulescommon","title":"Common (darwinModules.common)","text":"

Use this module if you are unsure whether your Darwin configuration will be used on a server or a desktop.

"},{"location":"darwin/type/#server-darwinmodulesserver","title":"Server (darwinModules.server)","text":"

Use this for headless systems that are remotely managed via SSH.

"},{"location":"darwin/type/#desktop-darwinmodulesdesktop","title":"Desktop (darwinModules.desktop)","text":"

Despite this project being about servers, we wanted to dogfood the common module.

"},{"location":"installation/hetzner_cloud/","title":"Hetzner Cloud installation","text":"

\u26a0\ufe0f Only works with VMs that have more than 2GB of RAM.

\u26a0\ufe0f This document reflects more of an ideal than reality right now.

  1. Create the VM in Hetzner Cloud, note its IPv4 and IPv6 addresses, and set the SSH public key.
  2. Create a new NixOS configuration in your flake:
{\n  inputs.nixos-anywhere.url = \"github:nix-community/nixos-anywhere\";\n  inputs.srvos.url = \"github:nix-community/srvos\";\n  inputs.disko.url = \"github:nix-community/disko\";\n  # Use the version of nixpkgs that has been tested to work with SrvOS\n  inputs.nixpkgs.follows = \"srvos/nixpkgs\";\n\n  outputs = { self, nixos-anywhere, srvos, disko, nixpkgs }: {\n    nixosConfigurations.my-host = nixpkgs.lib.nixosSystem {\n      system = \"x86_64-linux\";\n      modules = [{\n        imports = [\n          srvos.nixosModules.hardware-hetzner-cloud\n          srvos.nixosModules.server\n\n          # disko and the srvos disk layout module are used together\n          disko.nixosModules.disko\n          srvos.diskoModules.disk-layout-single-v1\n        ];\n        networking.hostName = \"my-host\";\n        # FIXME: Hetzner Cloud doesn't provide us with that configuration\n        systemd.network.networks.\"10-uplink\".networkConfig.Address = \"2a01:4f9:c010:52fd::1/128\";\n      }];\n    };\n    # TODO other $systems\n    devShells.x86_64-linux.default = with nixpkgs.legacyPackages.x86_64-linux; mkShellNoCC {\n      packages = [\n        # TODO: add nixos-rebuild as a package\n        nixos-anywhere.packages.x86_64-linux.default\n      ];\n    };\n  };\n}\n
  3. Update the hostname and IPv6 address in the config.

  4. Bootstrap the NixOS deployment:

    $ nix develop\n$ nixos-anywhere --flake .#my-host root@<ip>\n

\ud83c\udf89

  5. Pick a NixOS deployment tool of your choice, e.g.:
$ nixos-rebuild switch --flake .#my-host --target-host root@<ip>\n
"},{"location":"nixos/getting_started/","title":"Using SrvOS on NixOS","text":""},{"location":"nixos/getting_started/#finding-your-way-around","title":"Finding your way around","text":"

This project exports four big categories of NixOS modules which are useful for defining a server configuration:

"},{"location":"nixos/getting_started/#example","title":"Example","text":"

Combining all of those together, here is what your flake.nix might look like to deploy a GitHub Actions runner on Hetzner:

{\n  description = \"My machines flakes\";\n  inputs = {\n    srvos.url = \"github:nix-community/srvos\";\n    # Use the version of nixpkgs that has been tested to work with SrvOS\n    # Alternatively we also support the latest nixos release and unstable\n    nixpkgs.follows = \"srvos/nixpkgs\";\n  };\n  outputs = { self, nixpkgs, srvos }: {\n    nixosConfigurations.myHost = nixpkgs.lib.nixosSystem {\n      system = \"x86_64-linux\";\n      modules = [\n        # This machine is a server\n        srvos.nixosModules.server\n        # Deployed on the AMD Hetzner hardware\n        srvos.nixosModules.hardware-hetzner-amd\n        # Configured with extra terminfos\n        srvos.nixosModules.mixins-terminfo\n        # And designed to run the GitHub Actions runners\n        srvos.nixosModules.roles-github-actions-runner\n        # Finally add your configuration here\n        ./myHost.nix\n      ];\n    };\n  };\n}\n
"},{"location":"nixos/getting_started/#continue","title":"Continue","text":"

Now that we have gone over the high-level details, you should have an idea of how to use this project.

To dig further, take a look at the User guide.

"},{"location":"nixos/hardware/","title":"Machine hardware","text":"

Hardware modules are used to configure NixOS for well-known hardware.

We expect only one hardware module to be imported per NixOS configuration.

Here is some of the hardware that is supported:

"},{"location":"nixos/hardware/#nixosmoduleshardware-amazon","title":"nixosModules.hardware-amazon","text":"

Hardware configuration for https://aws.amazon.com/ec2 instances.

The main difference here is that the default userdata service is replaced by cloud-init.

"},{"location":"nixos/hardware/#nixosmoduleshardware-digitalocean-droplet","title":"nixosModules.hardware-digitalocean-droplet","text":"

Hardware configuration for https://www.digitalocean.com/ instances.

Enables cloud-init but turns off the non-working DHCP.

"},{"location":"nixos/hardware/#nixosmoduleshardware-hetzner-cloud","title":"nixosModules.hardware-hetzner-cloud","text":"

Hardware configuration for https://www.hetzner.com/cloud instances.

The main difference here is that cloud-init is enabled.

"},{"location":"nixos/hardware/#nixosmoduleshardware-hetzner-cloud-arm","title":"nixosModules.hardware-hetzner-cloud-arm","text":"

Hardware configuration for https://www.hetzner.com/cloud ARM instances.

The main difference from nixosModules.hardware-hetzner-cloud is that it uses systemd-boot by default.

"},{"location":"nixos/hardware/#nixosmoduleshardware-hetzner-online-amd","title":"nixosModules.hardware-hetzner-online-amd","text":"

Hardware configuration for https://www.hetzner.com/dedicated-rootserver bare-metal AMD servers.

Introduces some workarounds for the particular IPv6 configuration that Hetzner has.

"},{"location":"nixos/hardware/#nixosmoduleshardware-hetzner-online-intel","title":"nixosModules.hardware-hetzner-online-intel","text":"

Hardware configuration for https://www.hetzner.com/dedicated-rootserver bare-metal Intel servers.

Introduces some workarounds for the particular IPv6 configuration that Hetzner has.

"},{"location":"nixos/hardware/#nixosmoduleshardware-hetzner-online-ex101","title":"nixosModules.hardware-hetzner-online-ex101","text":"

Hardware configuration for https://www.hetzner.com/de/dedicated-rootserver/ex101 bare-metal Intel Core i9-13900 servers.

Introduces some workarounds for crashes under load.

"},{"location":"nixos/mixins/","title":"Configuration mixins","text":"

Config extensions for a given machine.

One or more can be included per NixOS configuration.

"},{"location":"nixos/mixins/#nixosmodulesmixins-cloud-init","title":"nixosModules.mixins-cloud-init","text":"

Enables cloud-init.

"},{"location":"nixos/mixins/#nixosmodulesmixins-systemd-boot","title":"nixosModules.mixins-systemd-boot","text":"

Configures systemd-boot as the bootloader.

"},{"location":"nixos/mixins/#nixosmodulesmixins-telegraf","title":"nixosModules.mixins-telegraf","text":"

Enables a generic telegraf configuration. See nixosModules.mixins-prometheus for monitoring rules targeting this telegraf configuration.

"},{"location":"nixos/mixins/#nixosmodulesmixins-terminfo","title":"nixosModules.mixins-terminfo","text":"

Extends the terminfo database with commonly used terminal emulators. Terminfo is used by terminal applications to infer the features supported by the terminal. This is useful when connecting to a server via SSH.

"},{"location":"nixos/mixins/#nixosmodulesmixins-prometheus","title":"nixosModules.mixins-prometheus","text":"

Enables a Prometheus instance and configures it with a set of alert rules targeting our nixosModules.mixins-telegraf module.

"},{"location":"nixos/mixins/#nixosmodulesmixins-nginx","title":"nixosModules.mixins-nginx","text":"

Configures Nginx with recommended settings. This is quite useful when using nginx as a reverse proxy to other services on the machine.

"},{"location":"nixos/mixins/#nixosmodulesmixins-nix-experimental","title":"nixosModules.mixins-nix-experimental","text":"

Enables all experimental features in Nix that are known to be safe to use (i.e. they are only used when explicitly requested in a build). This, for example, unlocks the use of containers in the nix sandbox.

"},{"location":"nixos/mixins/#nixosmodulesmixins-trusted-nix-caches","title":"nixosModules.mixins-trusted-nix-caches","text":"

Adds the common list of public Nix binary caches that we trust.

"},{"location":"nixos/mixins/#nixosmodulesmixins-mdns","title":"nixosModules.mixins-mdns","text":"

Enables mDNS support in systemd-networkd. Becomes a no-op if avahi is enabled on the same machine.

"},{"location":"nixos/role/","title":"Machine role","text":"

Roles are special types of NixOS modules that are designed to take over a machine's configuration.

We assume that only one role is assigned per machine.

By making this assumption, we are able to make deeper changes to the machine configuration without having to worry about potential conflicts with other roles.

"},{"location":"nixos/role/#github-actions-runner-nixosconfigurationroles-github-actions-runner","title":"GitHub Actions runner (nixosConfiguration.roles-github-actions-runner)","text":"

Dedicates the machine to running a cluster of GitHub Actions runners.

"},{"location":"nixos/role/#nix-remote-builder-nixosconfigurationroles-nix-remote-builder","title":"Nix Remote builder (nixosConfiguration.roles-nix-remote-builder)","text":"

Dedicates the machine to acting as a remote builder for Nix. The main use case we have is adding more build capacity to the GitHub Actions runners, in a star topology.

"},{"location":"nixos/type/","title":"Machine type","text":"

These high-level modules are used to define the type of machine.

We expect only one of those to be imported per NixOS configuration.

"},{"location":"nixos/type/#common-nixosmodulescommon","title":"Common (nixosModules.common)","text":"

Use this module if you are unsure whether your NixOS configuration will be used on a server or a desktop.

"},{"location":"nixos/type/#server-nixosmodulesserver","title":"Server (nixosModules.server)","text":"

Use this for headless systems that are remotely managed via SSH.

"},{"location":"nixos/type/#desktop-nixosmodulesdesktop","title":"Desktop (nixosModules.desktop)","text":"

Despite this project being about servers, we wanted to dogfood the common module.

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":"

Welcome!

SrvOS is a collection of NixOS modules that are optimized for servers. It includes many lessons that we have gained over the years while deploying servers for our customers. Since we like to share, we hope that this project will be useful to you.

To get started, read the introductory tutorial, then check the User Guide for more information.

"},{"location":"faq/","title":"FAQ","text":"

Some questions and answers that haven't been integrated into the documentation yet.

"},{"location":"faq/#what-version-of-nixos-should-i-use","title":"What version of NixOS should I use?","text":"

SrvOS is currently tested against nixos-unstable and the latest NixOS release. SrvOS itself is automatically updated and tested against the latest versions of these channels once a week.

If you want to make sure to use a tested version, use the \"follows\" mechanism of Nix flakes to pull the same nixpkgs version as SrvOS:

{\n  inputs.srvos.url = \"github:nix-community/srvos\";\n  # Use the version of nixpkgs that has been tested to work with SrvOS\n  inputs.nixpkgs.follows = \"srvos/nixpkgs\";\n}\n
"},{"location":"getting_started/","title":"Getting Started with SrvOS","text":"

This project is designed to work in combination with the Linux distribution NixOS, or with nix-darwin on macOS.

In this documentation, we expect the reader to already be familiar with the base operating system, and we introduce how to compose it with our own extensions.

For NixOS, continue reading here; for nix-darwin/macOS, read this.

"},{"location":"github_actions_runner/","title":"GitHub Actions Runner","text":"

GitHub Actions runners are processes that execute the automated jobs you specify in your GitHub Actions workflows. These runners can be hosted on GitHub's infrastructure or on your own. Self-hosted runners run for your project only and are available at no additional cost.

This article looks at how to install a GitHub runner in your own NixOS infrastructure, making sure the environment is scalable and secure.

We have built a NixOS module that installs one or more self-hosted GitHub Actions runners, along with a cachix watch-store service, with the most secure defaults.

NOTE: if you intend to run NixOS VM tests, make sure your hosting provider supports nested virtualization or use bare-metal hosts; otherwise your tests will take a long time to execute.
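
A quick way to sanity-check a prospective CI host for hardware virtualization support (a sketch, assuming a Linux host with an Intel CPU; on AMD the kernel module is kvm_amd instead):

$ ls /dev/kvm\n$ cat /sys/module/kvm_intel/parameters/nested\n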

"},{"location":"github_actions_runner/#authentication","title":"Authentication","text":"

To use a self-hosted GitHub Actions runner, you will need to register the runner with your GitHub account or organization. There are three different ways a self-hosted runner can register itself on GitHub:

In this document, we will describe the most secure option: how to connect using a new GitHub App in your organization.

To ensure that you have complete control over the permissions that the app requires, you should create your own GitHub Application.

First, go to the settings page of your organization: https://github.com/organizations/<YOUR ORGANIZATION>/settings/apps

Once the app is created, its settings page will be presented. Scroll to the Private keys section and click the button labeled Generate a private key. Save the generated PEM-encoded private key securely; you will need it when you configure the CI. You should also save the generated GitHub App ID.

Once created, you should also limit the usage of this GitHub App to your CI hosts' public IPs (IPv4 and IPv6).

The application can now be installed in your organization:

You can now use the NixOS role to install and configure the self-hosted GitHub runner on your NixOS CI host.

If someone else is configuring the runner for you, you will need to provide them with the generated PEM-encoded private key and the GitHub App ID.

You can find more information in the Official GitHub App creation documentation.

"},{"location":"github_actions_runner/#using-the-nixos-module","title":"Using the NixOS module","text":"

The module has been created as a role. Roles are used to define the specific purpose of a node, making it easy to manage and scale your infrastructure.

The following options must be configured:

url: the full URL of your organization or repository. This URL has to match the location where you installed the GitHub App.

count: the number of runners you want to start on the host.

githubApp.id: the ID of the GitHub App that was created.

githubApp.login: the name of your organization / user where the GitHub App was registered.

githubApp.privateKeyFile: the path to the file containing the GitHub App's generated PEM-encoded private key. This file should be present on the host and deployed as a secret (using sops-nix or agenix).

cachix.cacheName: the name of your cachix organization.

cachix.tokenFile: the path to the file containing your cachix token. This file should also be present on the host and deployed as a secret (using sops-nix or agenix).
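
For illustration, here is a minimal agenix sketch that provides those two secret files. The secret names match the example below, but the .age paths are hypothetical and must match your own secrets setup:

{\n  age.secrets.github-app-runner-private-key.file = ./secrets/github-app-runner-private-key.age;\n  age.secrets.cachixToken.file = ./secrets/cachix-token.age;\n}\n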

Example of a module configuring 12 GitHub runners:

roles.github-actions-runner = {\n  url = \"https://github.com/<YOUR ORGANIZATION>\";\n  count = 12;\n  name = \"github-runner\";\n  githubApp = {\n    id = \"<YOUR GENERATED APP ID>\";\n    login = \"<YOUR ORGANIZATION>\";\n    privateKeyFile = config.age.secrets.github-app-runner-private-key.path;\n  };\n  cachix.cacheName = \"<YOUR CACHIX ORGANIZATION>\";\n  cachix.tokenFile = config.age.secrets.cachixToken.path;\n};\n
"},{"location":"github_actions_runner/#scaling","title":"Scaling","text":"

There are multiple ways to scale your GitHub runners, such as increasing the number of hosts or increasing the number of services on a single host. All services are completely isolated from each other, so there is no real distinction between the two approaches. Base your decision on the compute and memory your project needs. A sketch of the scale-out approach follows.
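
As a sketch of scaling out, the same role module can be reused across several hosts (the host names are hypothetical, and the flake inputs are assumed to be the ones from the getting-started example):

let\n  # Shared role module; githubApp and cachix settings elided for brevity.\n  runnerRole = {\n    roles.github-actions-runner = {\n      url = \"https://github.com/<YOUR ORGANIZATION>\";\n      count = 12;\n    };\n  };\n  mkHost = extra: nixpkgs.lib.nixosSystem {\n    system = \"x86_64-linux\";\n    modules = [\n      srvos.nixosModules.server\n      srvos.nixosModules.roles-github-actions-runner\n      runnerRole\n      extra\n    ];\n  };\nin {\n  nixosConfigurations.ci-01 = mkHost ./ci-01.nix;\n  nixosConfigurations.ci-02 = mkHost ./ci-02.nix;\n}\n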

You now have a fully functional self-hosted runner running on your NixOS infrastructure. If you need any further assistance in managing or improving your CI workflows with Nix, don't hesitate to contact us. Our team of experts is here to help you optimize your CI/CD pipelines and streamline your development process.

"},{"location":"help/","title":"Getting help","text":""},{"location":"help/#bugs","title":"Bugs","text":"

If you find a bug, feel free to create a new GitHub issue.

"},{"location":"help/#feature-development","title":"Feature development","text":"

For dedicated help or priority support, we are also available. Here is the best place to contact us: https://numtide.com/contact/.

"},{"location":"user_guide/","title":"User guide","text":"

This part of the documentation provides reference material for day-to-day users. Use the navigation menu to jump around.

"},{"location":"darwin/getting_started/","title":"Using SrvOS with nix-darwin","text":""},{"location":"darwin/getting_started/#finding-your-way-around","title":"Finding your way around","text":"

This project exports four big categories of modules which are useful for defining a server configuration:

"},{"location":"darwin/getting_started/#example","title":"Example","text":"

Combining all of those together, here is what your flake.nix might look like to deploy a GitHub Actions runner on a macOS machine:

{\n  description = \"My machines flakes\";\n  inputs = {\n    srvos.url = \"github:nix-community/srvos/darwin-support\";\n    # Use the version of nixpkgs that has been tested to work with SrvOS\n    # Alternatively we also support the latest nixos release and unstable\n    nixpkgs.follows = \"srvos/nixpkgs\";\n    nix-darwin.url = \"github:LnL7/nix-darwin\";\n    nix-darwin.inputs.nixpkgs.follows = \"srvos/nixpkgs\";\n  };\n  outputs = { srvos, nix-darwin, ... }: {\n    darwinConfigurations.myHost = nix-darwin.lib.darwinSystem {\n      modules = [\n        # This machine is a server (i.e. CI runner)\n        srvos.darwinModules.server\n        # If a machine is a workstation or laptop, use this instead\n        # srvos.darwinModules.desktop\n\n        # Configured with extra terminfos\n        srvos.darwinModules.mixins-terminfo\n        # Finally add your configuration here\n        ./myHost.nix\n      ];\n    };\n  };\n}\n
"},{"location":"darwin/getting_started/#continue","title":"Continue","text":"

Now that we have gone over the high-level details, you should have an idea of how to use this project.

To dig further, take a look at the User guide.

"},{"location":"darwin/mixins/","title":"Configuration mixins","text":"

Config extensions for a given machine.

One or more can be included per Darwin configuration.

"},{"location":"darwin/mixins/#darwimodulesmixins-telegraf","title":"darwiModules.mixins-telegraf","text":"

Enables a generic telegraf configuration. See nixosModules.mixins-prometheus for monitoring rules targeting this telegraf configuration.

"},{"location":"darwin/mixins/#darwinmodulesmixins-terminfo","title":"darwinModules.mixins-terminfo","text":"

Extends the terminfo database with commonly used terminal emulators. Terminfo is used by terminal applications to infer the features supported by the terminal. This is useful when connecting to a server via SSH.

"},{"location":"darwin/mixins/#darwinmodulesmixins-nix-experimental","title":"darwinModules.mixins-nix-experimental","text":"

Enables all experimental features in Nix that are known to be safe to use (i.e. they are only used when explicitly requested in a build).

"},{"location":"darwin/mixins/#darwinmodulesmixins-trusted-nix-caches","title":"darwinModules.mixins-trusted-nix-caches","text":"

Adds the common list of public Nix binary caches that we trust.

"},{"location":"darwin/type/","title":"Machine type","text":"

These high-level modules are used to define the type of machine.

We expect only one of those to be imported per Darwin configuration.

"},{"location":"darwin/type/#common-darwinmodulescommon","title":"Common (darwinModules.common)","text":"

Use this module if you are unsure whether your Darwin configuration will be used on a server or a desktop.

"},{"location":"darwin/type/#server-darwinmodulesserver","title":"Server (darwinModules.server)","text":"

Use this for headless systems that are remotely managed via SSH.

"},{"location":"darwin/type/#desktop-darwinmodulesdesktop","title":"Desktop (darwinModules.desktop)","text":"

Despite this project being about servers, we wanted to dogfood the common module.

"},{"location":"installation/hetzner_cloud/","title":"Hetzner Cloud installation","text":"

\u26a0\ufe0f Only works with VMs that have more than 2GB of RAM.

\u26a0\ufe0f This document reflects more of an ideal than reality right now.

  1. Create the VM in Hetzner Cloud, note its IPv4 and IPv6 addresses, and set the SSH public key.
  2. Create a new NixOS configuration in your flake:
{\n  inputs.nixos-anywhere.url = \"github:nix-community/nixos-anywhere\";\n  inputs.srvos.url = \"github:nix-community/srvos\";\n  inputs.disko.url = \"github:nix-community/disko\";\n  # Use the version of nixpkgs that has been tested to work with SrvOS\n  inputs.nixpkgs.follows = \"srvos/nixpkgs\";\n\n  outputs = { self, nixos-anywhere, srvos, disko, nixpkgs }: {\n    nixosConfigurations.my-host = nixpkgs.lib.nixosSystem {\n      system = \"x86_64-linux\";\n      modules = [{\n        imports = [\n          srvos.nixosModules.hardware-hetzner-cloud\n          srvos.nixosModules.server\n\n          # disko and the srvos disk layout module are used together\n          disko.nixosModules.disko\n          srvos.diskoModules.disk-layout-single-v1\n        ];\n        networking.hostName = \"my-host\";\n        # FIXME: Hetzner Cloud doesn't provide us with that configuration\n        systemd.network.networks.\"10-uplink\".networkConfig.Address = \"2a01:4f9:c010:52fd::1/128\";\n      }];\n    };\n    # TODO other $systems\n    devShells.x86_64-linux.default = with nixpkgs.legacyPackages.x86_64-linux; mkShellNoCC {\n      packages = [\n        # TODO: add nixos-rebuild as a package\n        nixos-anywhere.packages.x86_64-linux.default\n      ];\n    };\n  };\n}\n
  3. Update the hostname and IPv6 address in the config.

  4. Bootstrap the NixOS deployment:

    $ nix develop\n$ nixos-anywhere --flake .#my-host root@<ip>\n

\ud83c\udf89

  5. Pick a NixOS deployment tool of your choice, e.g.:
$ nixos-rebuild switch --flake .#my-host --target-host root@<ip>\n
"},{"location":"nixos/getting_started/","title":"Using SrvOS on NixOS","text":""},{"location":"nixos/getting_started/#finding-your-way-around","title":"Finding your way around","text":"

This project exports four big categories of NixOS modules which are useful for defining a server configuration:

"},{"location":"nixos/getting_started/#example","title":"Example","text":"

Combining all of those together, here is what your flake.nix might look like to deploy a GitHub Actions runner on Hetzner:

{\n  description = \"My machines flakes\";\n  inputs = {\n    srvos.url = \"github:nix-community/srvos\";\n    # Use the version of nixpkgs that has been tested to work with SrvOS\n    # Alternatively we also support the latest nixos release and unstable\n    nixpkgs.follows = \"srvos/nixpkgs\";\n  };\n  outputs = { self, nixpkgs, srvos }: {\n    nixosConfigurations.myHost = nixpkgs.lib.nixosSystem {\n      system = \"x86_64-linux\";\n      modules = [\n        # This machine is a server\n        srvos.nixosModules.server\n        # Deployed on the AMD Hetzner hardware\n        srvos.nixosModules.hardware-hetzner-amd\n        # Configured with extra terminfos\n        srvos.nixosModules.mixins-terminfo\n        # And designed to run the GitHub Actions runners\n        srvos.nixosModules.roles-github-actions-runner\n        # Finally add your configuration here\n        ./myHost.nix\n      ];\n    };\n  };\n}\n
"},{"location":"nixos/getting_started/#continue","title":"Continue","text":"

Now that we have gone over the high-level details, you should have an idea of how to use this project.

To dig further, take a look at the User guide.

"},{"location":"nixos/hardware/","title":"Machine hardware","text":"

Hardware modules are used to configure NixOS for well-known hardware.

We expect only one hardware module to be imported per NixOS configuration.

Here is some of the hardware that is supported:

"},{"location":"nixos/hardware/#nixosmoduleshardware-amazon","title":"nixosModules.hardware-amazon","text":"

Hardware configuration for https://aws.amazon.com/ec2 instances.

The main difference here is that the default userdata service is replaced by cloud-init.

"},{"location":"nixos/hardware/#nixosmoduleshardware-digitalocean-droplet","title":"nixosModules.hardware-digitalocean-droplet","text":"

Hardware configuration for https://www.digitalocean.com/ instances.

Enables cloud-init but turns off the non-working DHCP.

"},{"location":"nixos/hardware/#nixosmoduleshardware-hetzner-cloud","title":"nixosModules.hardware-hetzner-cloud","text":"

Hardware configuration for https://www.hetzner.com/cloud instances.

The main difference here is that: 1. cloud-init is enabled. 2. the QEMU guest agent is running, to allow password resets to function.

"},{"location":"nixos/hardware/#nixosmoduleshardware-hetzner-cloud-arm","title":"nixosModules.hardware-hetzner-cloud-arm","text":"

Hardware configuration for https://www.hetzner.com/cloud ARM instances.

The main difference from nixosModules.hardware-hetzner-cloud is that it uses systemd-boot by default.
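
A minimal sketch of wiring this module into a flake, assuming an aarch64-linux instance and the flake inputs from the getting-started example:

nixosConfigurations.my-arm-host = nixpkgs.lib.nixosSystem {\n  # Hetzner Cloud ARM instances are aarch64\n  system = \"aarch64-linux\";\n  modules = [\n    srvos.nixosModules.server\n    srvos.nixosModules.hardware-hetzner-cloud-arm\n    ./my-arm-host.nix\n  ];\n};\n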

"},{"location":"nixos/hardware/#nixosmoduleshardware-hetzner-online-amd","title":"nixosModules.hardware-hetzner-online-amd","text":"

Hardware configuration for https://www.hetzner.com/dedicated-rootserver bare-metal AMD servers.

Introduces some workarounds for the particular IPv6 configuration that Hetzner has.
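
The host's static IPv6 address still has to be supplied by you. The following sketch carries over the systemd-networkd pattern from the Hetzner Cloud installation example; whether the network unit is also named 10-uplink here is an assumption, and the address is a placeholder:

{\n  imports = [ srvos.nixosModules.hardware-hetzner-online-amd ];\n  # Hetzner does not announce the static IPv6 prefix automatically,\n  # so it is configured explicitly (placeholder address):\n  systemd.network.networks.\"10-uplink\".networkConfig.Address = \"2a01:4f8:xxxx:xxxx::1/64\";\n}\n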

"},{"location":"nixos/hardware/#nixosmoduleshardware-hetzner-online-intel","title":"nixosModules.hardware-hetzner-online-intel","text":"

Hardware configuration for https://www.hetzner.com/dedicated-rootserver bare-metal Intel servers.

Introduces some workarounds for the particular IPv6 configuration that Hetzner has.

"},{"location":"nixos/hardware/#nixosmoduleshardware-hetzner-online-ex101","title":"nixosModules.hardware-hetzner-online-ex101","text":"

Hardware configuration for https://www.hetzner.com/de/dedicated-rootserver/ex101 bare-metal Intel Core i9-13900 servers.

Introduces some workarounds for crashes under load.

"},{"location":"nixos/mixins/","title":"Configuration mixins","text":"

Config extensions for a given machine.

One or more can be included per NixOS configuration.

"},{"location":"nixos/mixins/#nixosmodulesmixins-cloud-init","title":"nixosModules.mixins-cloud-init","text":"

Enables cloud-init.

"},{"location":"nixos/mixins/#nixosmodulesmixins-systemd-boot","title":"nixosModules.mixins-systemd-boot","text":"

Configures systemd-boot as the bootloader.

"},{"location":"nixos/mixins/#nixosmodulesmixins-telegraf","title":"nixosModules.mixins-telegraf","text":"

Enables a generic telegraf configuration. See nixosModules.mixins-prometheus for monitoring rules targeting this telegraf configuration.

"},{"location":"nixos/mixins/#nixosmodulesmixins-terminfo","title":"nixosModules.mixins-terminfo","text":"

Extends the terminfo database with commonly used terminal emulators. Terminfo is used by terminal applications to infer the features supported by the terminal. This is useful when connecting to a server via SSH.
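
What this roughly amounts to, expressed with a stock NixOS option (a sketch; the mixin's actual implementation may differ):

{\n  # Install terminfo entries for common terminal emulators system-wide.\n  environment.enableAllTerminfo = true;\n}\n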

"},{"location":"nixos/mixins/#nixosmodulesmixins-prometheus","title":"nixosModules.mixins-prometheus","text":"

Enables a Prometheus instance and configures it with a set of alert rules targeting our nixosModules.mixins-telegraf module.

"},{"location":"nixos/mixins/#nixosmodulesmixins-nginx","title":"nixosModules.mixins-nginx","text":"

Configures Nginx with recommended settings. This is quite useful when using nginx as a reverse proxy to other services on the machine.
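
For example, on top of this mixin a reverse-proxied service could be declared like this (hostname and port are placeholders; enableACME additionally requires an ACME email and accepted terms to be configured):

services.nginx.virtualHosts.\"example.com\" = {\n  enableACME = true;  # fetch a Let's Encrypt certificate\n  forceSSL = true;    # redirect HTTP to HTTPS\n  locations.\"/\".proxyPass = \"http://127.0.0.1:8080\";\n};\n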

"},{"location":"nixos/mixins/#nixosmodulesmixins-nix-experimental","title":"nixosModules.mixins-nix-experimental","text":"

Enables all experimental features in Nix that are known to be safe to use (i.e. they are only used when explicitly requested in a build). This, for example, unlocks the use of containers in the nix sandbox.

"},{"location":"nixos/mixins/#nixosmodulesmixins-trusted-nix-caches","title":"nixosModules.mixins-trusted-nix-caches","text":"

Adds the common list of public Nix binary caches that we trust.
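
The effect is comparable to declaring extra substituters by hand; a hand-rolled sketch (the cache shown is illustrative, the mixin's actual list may differ, and the public key is a placeholder to copy from the cache's own page):

{\n  nix.settings = {\n    extra-substituters = [ \"https://nix-community.cachix.org\" ];\n    extra-trusted-public-keys = [ \"nix-community.cachix.org-1:<public key>\" ];\n  };\n}\n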

"},{"location":"nixos/mixins/#nixosmodulesmixins-mdns","title":"nixosModules.mixins-mdns","text":"

Enables mDNS support in systemd-networkd. Becomes a no-op if avahi is enabled on the same machine.

"},{"location":"nixos/role/","title":"Machine role","text":"

Roles are special types of NixOS modules that are designed to take over a machine's configuration.

We assume that only one role is assigned per machine.

By making this assumption, we are able to make deeper changes to the machine configuration without having to worry about potential conflicts with other roles.

"},{"location":"nixos/role/#github-actions-runner-nixosconfigurationroles-github-actions-runner","title":"GitHub Actions runner (nixosConfiguration.roles-github-actions-runner)","text":"

Dedicates the machine to running a cluster of GitHub Actions runners.

"},{"location":"nixos/role/#nix-remote-builder-nixosconfigurationroles-nix-remote-builder","title":"Nix Remote builder (nixosConfiguration.roles-nix-remote-builder)","text":"

Dedicates the machine to acting as a remote builder for Nix. The main use case we have is adding more build capacity to the GitHub Actions runners, in a star topology.
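
On the consuming side (for example, the CI host at the center of the star), the builders are then registered through Nix's distributed-builds support. A minimal sketch with a placeholder host name, user, and key path:

{\n  nix.distributedBuilds = true;\n  nix.buildMachines = [{\n    hostName = \"builder1.example.com\";  # placeholder\n    system = \"x86_64-linux\";\n    sshUser = \"builder\";                # placeholder\n    sshKey = \"/etc/nix/builder_ed25519\";\n    maxJobs = 8;\n    protocol = \"ssh-ng\";\n  }];\n}\n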

"},{"location":"nixos/type/","title":"Machine type","text":"

These high-level modules are used to define the type of machine.

We expect only one of those to be imported per NixOS configuration.

"},{"location":"nixos/type/#common-nixosmodulescommon","title":"Common (nixosModules.common)","text":"

Use this module if you are unsure whether your NixOS configuration will be used on a server or a desktop.

"},{"location":"nixos/type/#server-nixosmodulesserver","title":"Server (nixosModules.server)","text":"

Use this for headless systems that are remotely managed via SSH.

"},{"location":"nixos/type/#desktop-nixosmodulesdesktop","title":"Desktop (nixosModules.desktop)","text":"

Despite this project being about servers, we wanted to dogfood the common module.

"}]} \ No newline at end of file