nix-serve-ng

nix-serve-ng is a faster, more reliable, drop-in replacement for nix-serve.

Quick start

There are three main approaches you can use to configure a NixOS system to replace the old nix-serve with nix-serve-ng:

  • A: Set services.nix-serve.package = pkgs.nix-serve-ng; in your NixOS configuration
    • nix-serve-ng is packaged in nixpkgs already
    • There is no need to consume this repository directly
  • B: Include nix-serve-ng.nixosModules.default in your NixOS configuration
    • Here, nix-serve-ng refers to this repository pulled in as a flake input
    • Requires consuming this repository / this flake
    • Overlays pkgs.nix-serve with pkgs.nix-serve-ng (see the overlay sketch after this list)
  • C: Like B but not requiring a flake

We recommend approach A. Only use B or C if you need a bleeding-edge upstream version of the project.
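
For reference, "overlaying pkgs.nix-serve" (as done by the module in approach B) amounts to roughly the sketch below. This is only an illustration of the mechanism, here reusing the nix-serve-ng package from nixpkgs; the actual module substitutes the package built from this repository:

{
  # NixOS module fragment: make every reference to pkgs.nix-serve
  # resolve to nix-serve-ng instead.
  nixpkgs.overlays = [
    (final: prev: { nix-serve = final.nix-serve-ng; })
  ];
}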

Variant A:

The code snippet below shows a flake.nix.

{ 
  inputs.nixpkgs.url = "github:NixOS/nixpkgs";

  outputs = { nixpkgs, ... }: {
    nixosConfigurations.default = nixpkgs.lib.nixosSystem {
      modules = [
        /* ... */
        ({ pkgs, ... }: {
          services.nix-serve.enable = true;
          services.nix-serve.package = pkgs.nix-serve-ng;
          /* ... */
        })
        /* ... */
      ];
    };
  };
}

Variant B:

The code snippet below shows a flake.nix.

{ 
  inputs.nixpkgs.url = "github:NixOS/nixpkgs";
  inputs.nix-serve-ng.url = "github:aristanetworks/nix-serve-ng";

  outputs = { nixpkgs, nix-serve-ng, ... }: {
    nixosConfigurations.default = nixpkgs.lib.nixosSystem {
      modules = [
        nix-serve-ng.nixosModules.default
        /* ... */
        { 
          services.nix-serve.enable = true;
          /* ... */
        }
        /* ... */
      ];
    };
  };
}

Variant C:

The code snippet below shows a NixOS module file. It pins a specific revision of this repository with builtins.fetchTarball and imports it; the repository's default.nix is expected to expose the flake outputs (including nixosModules.default) for non-flake use.

{ config, pkgs, lib, ... }:

let 
  nix-serve-ng-src = builtins.fetchTarball {
    # Replace the URL and hash with whatever you actually need
    url    = "https://github.com/aristanetworks/nix-serve-ng/archive/1937593598bb1285b41804f25cd6f9ddd4d5f1cb.tar.gz";

    sha256 = "1lqd207gbx1wjbhky33d2r8xi6avfbx4v0kpsvn84zaanifdgz2g";
  };

  nix-serve-ng = import nix-serve-ng-src;
in
{ 
  /* ... */
  imports = [ nix-serve-ng.nixosModules.default ];
  
  config = {
    services.nix-serve.enable = true;
  };
  /* ... */
}

Motivation

Our requirements for this project were:

  • Improve reliability

    … since nix-serve would intermittently hang and require restarts

  • Improve efficiency

    … since nix-serve was doing some obviously inefficient things which we felt we could improve upon

  • Be backwards-compatible

    Our replacement would need to be a drop-in replacement for the original nix-serve, supporting the same command-line options and even sharing the same executable name

    The only exception is logging: we provide more detailed logging than before

Did we satisfy those requirements?

Results

  • Reliability

    We have test-driven this internally under heavy load, with stable memory usage and without any failures, but it's probably premature to declare victory.

    In particular, we have not done the following things:

    • Memory leak detection

      In other words, we haven't put our nix-serve through, say, valgrind

    • Exploit detection

      In other words, we haven't attempted to crash or sabotage the service with maliciously crafted payloads

  • Performance

    We have improved significantly on efficiency, not only compared to nix-serve but also compared to other nix-serve rewrites. We are more efficient than:

    • The original nix-serve

    • eris - A Perl rewrite of nix-serve

    • harmonia - A Rust rewrite of nix-serve

    See the Benchmarks section below for more details

  • Backwards-compatibility

    We have excellent backwards-compatibility, so in the vast majority of cases, you can simply replace pkgs.nix-serve with pkgs.nix-serve-ng and make no other changes.

    • Our executable shares the same name (nix-serve) as the original program

    • We support most of the original command-line options

      The options that we're aware of that we do not currently support fall into two categories:

      • Useless options which are only relevant to starman:

        Upon request, we can still parse and ignore the following irrelevant options for extra backwards compatibility:

        • --workers

          We do not use worker subprocesses like starman does. Instead, we use warp, which internally uses Haskell green threads to service a much larger number of requests with less overhead and a lower footprint when idle.

        • --preload-app

          This optimization is meaningless for a compiled Haskell executable.

        • --disable-proctitle

      • Useful options

        We might accept requests to support the following options, but we might explore other alternatives first before supporting them:

        • --max-requests

          warp itself is unlikely to be a bottleneck to servicing a large number of requests, but there may still be Nix-specific or disk-specific reasons to cap the number of requests.

        • --disable-keepalive

        • --keepalive-timeout

        • --read-timeout

        • --user

        • --group

        • --pid

        • --error-log

    Because of this backwards compatibility, you only need to replace the old nix-serve executable with the nix-serve executable built by this package (which is what the included NixOS module does).

    You don't need to define or use any new NixOS options. You continue to use the old services.nix-serve options hierarchy to configure the upgraded service.
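
    For example, a configuration along the lines of the sketch below keeps working unchanged. The option names are the ones already provided by the stock services.nix-serve module in nixpkgs; the bindAddress, port, and secretKeyFile values are purely illustrative:

    { pkgs, ... }:

    {
      services.nix-serve = {
        enable        = true;
        package       = pkgs.nix-serve-ng;  # the only change needed
        bindAddress   = "0.0.0.0";
        port          = 5000;
        secretKeyFile = "/var/secrets/nix-serve-key.pem";
      };
    }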

Benchmarks

The test environment is a large server machine:

  • CPU: 24 × Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
  • RAM: 384 GB (24 × 16 GB @ 2133 MT/s)
  • Disk (/nix/store): ≈4 TB SSD

Legend:

  • Fetch present NAR info ×10: Time to fetch the NAR info for 10 different files that are present
  • Fetch absent NAR info ×1: Time to fetch the NAR info for a single file that is absent
  • Fetch empty NAR ×10: Time to fetch the NAR for the same empty file 10 times
  • Fetch 10 MB NAR ×10: Time to fetch the NAR for the same 10 MB file 10 times

Raw numbers:

| Benchmark                  | nix-serve       | eris             | harmonia        | nix-serve-ng     |
|----------------------------|-----------------|------------------|-----------------|------------------|
| Fetch present NAR info ×10 | 2.09 ms ± 66 μs | 41.5 ms ± 426 μs | 1.57 ms ± 91 μs | 1.32 ms ± 33 μs  |
| Fetch absent NAR info ×1   | 212 μs ± 18 μs  | 3.42 ms ± 113 μs | 139 μs ± 11 μs  | 115 μs ± 6.2 μs  |
| Fetch empty NAR ×10        | 164 ms ± 8.5 ms | 246 ms ± 20 ms   | 279 ms ± 10 ms  | 5.16 ms ± 368 μs |
| Fetch 10 MB NAR ×10        | 291 ms ± 8.7 ms | 453 ms ± 19 ms   | 487 ms ± 41 ms  | 86.9 ms ± 3.0 ms |

Speedups (compared to nix-serve):

| Benchmark                  | nix-serve | eris | harmonia | nix-serve-ng |
|----------------------------|-----------|------|----------|--------------|
| Fetch present NAR info ×10 | 1.0       | 0.05 | 1.33     | 1.58         |
| Fetch absent NAR info ×1   | 1.0       | 0.06 | 1.53     | 1.84         |
| Fetch empty NAR ×10        | 1.0       | 0.67 | 0.59     | 31.80        |
| Fetch 10 MB NAR ×10        | 1.0       | 0.64 | 0.60     | 3.35         |

We can summarize nix-serve-ng's performance like this:

  • Time to handle a NAR info request: ≈ 100 μs
  • Time to serve a NAR: ≈ 500 μs + 800 μs / MB
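
As a rough sanity check, this model predicts that a single 10 MB NAR fetch takes about 500 μs + 10 × 800 μs ≈ 8.5 ms, or roughly 85 ms for ten fetches, which is consistent with the 86.9 ms ± 3.0 ms measured above.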

You can reproduce these benchmarks using the benchmark suite. See the instructions in ./benchmark/Main.hs for running your own benchmarks.

Caveats:

  • We haven't used any of these services' tuning options, including:
    • Tuning garbage collection (for nix-serve-ng)
    • Tuning concurrency/parallelism/workers
  • We haven't benchmarked memory utilization