Merge branch 'master' into ckb-update-and-cleanup

This commit is contained in:
Kier Davis 2018-11-06 00:46:39 +00:00
commit 3b7984dd51
No known key found for this signature in database
GPG Key ID: 66378DA35FF9F0FA
1098 changed files with 28726 additions and 18393 deletions

.github/CODEOWNERS vendored
View File

@ -12,7 +12,7 @@
# Libraries
/lib @edolstra @nbp
/lib/systems @nbp @ericson2314
/lib/systems @nbp @ericson2314 @matthewbauer
/lib/generators.nix @edolstra @nbp @Profpatsch
/lib/debug.nix @edolstra @nbp @Profpatsch
@ -20,9 +20,11 @@
/default.nix @nbp
/pkgs/top-level/default.nix @nbp @Ericson2314
/pkgs/top-level/impure.nix @nbp @Ericson2314
/pkgs/top-level/stage.nix @nbp @Ericson2314
/pkgs/stdenv/generic @Ericson2314
/pkgs/stdenv/cross @Ericson2314
/pkgs/top-level/stage.nix @nbp @Ericson2314 @matthewbauer
/pkgs/top-level/splice.nix @Ericson2314 @matthewbauer
/pkgs/top-level/release-cross.nix @Ericson2314 @matthewbauer
/pkgs/stdenv/generic @Ericson2314 @matthewbauer
/pkgs/stdenv/cross @Ericson2314 @matthewbauer
/pkgs/build-support/cc-wrapper @Ericson2314 @orivej
/pkgs/build-support/bintools-wrapper @Ericson2314 @orivej
/pkgs/build-support/setup-hooks @Ericson2314
@ -74,6 +76,14 @@
/pkgs/stdenv/darwin @NixOS/darwin-maintainers
/pkgs/os-specific/darwin @NixOS/darwin-maintainers
# C compilers
/pkgs/development/compilers/gcc @matthewbauer
/pkgs/development/compilers/llvm @matthewbauer
# Compatibility stuff
/pkgs/top-level/unix-tools.nix @matthewbauer
/pkgs/development/tools/xcbuild @matthewbauer
# Beam-related (Erlang, Elixir, LFE, etc)
/pkgs/development/beam-modules @gleber
/pkgs/development/interpreters/erlang @gleber
@ -97,3 +107,13 @@
/pkgs/desktops/plasma-5 @ttuegel
/pkgs/development/libraries/kde-frameworks @ttuegel
/pkgs/development/libraries/qt-5 @ttuegel
# PostgreSQL and related stuff
/pkgs/servers/sql/postgresql @thoughtpolice
/nixos/modules/services/databases/postgresql.xml @thoughtpolice
/nixos/modules/services/databases/postgresql.nix @thoughtpolice
/nixos/tests/postgresql.nix @thoughtpolice
# Dhall
/pkgs/development/dhall-modules @Gabriel439 @Profpatsch
/pkgs/development/interpreters/dhall @Gabriel439 @Profpatsch

View File

@ -842,9 +842,12 @@ src = fetchFromGitHub {
owner = "NixOS";
repo = "nix";
rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
sha256 = "04yri911rj9j19qqqn6m82266fl05pz98inasni0vxr1cf1gdgv9";
sha256 = "1i2yxndxb6yc9l6c99pypbd92lfq5aac4klq7y2v93c9qvx2cgpc";
}
</programlisting>
Find the value to put as <literal>sha256</literal> by running
<literal>nix run -f '&lt;nixpkgs&gt;' nix-prefetch-github -c nix-prefetch-github --rev 1f795f9f44607cc5bec70d1300150bfefcef2aae NixOS nix</literal>
or <literal>nix-prefetch-url --unpack https://github.com/NixOS/nix/archive/1f795f9f44607cc5bec70d1300150bfefcef2aae.tar.gz</literal>.
</para>
</listitem>
</itemizedlist>

View File

@ -6,8 +6,14 @@
<para>
Sometimes one wants to override parts of <literal>nixpkgs</literal>, e.g.
derivation attributes, the results of derivations or even the whole package
set.
derivation attributes or the results of derivations.
</para>
<para>
These functions are used to make changes to packages, returning only single
packages. <link xlink:href="#chap-overlays">Overlays</link>, on the other
hand, can be used to combine the overridden packages across the entire
package set of Nixpkgs.
</para>
<section xml:id="sec-pkg-override">
@ -25,6 +31,9 @@
<para>
Example usages:
<programlisting>pkgs.foo.override { arg1 = val1; arg2 = val2; ... }</programlisting>
<!-- TODO: move below programlisting to a new section about extending and overlays
and reference it
-->
<programlisting>
import pkgs.path { overlays = [ (self: super: {
foo = super.foo.override { barSupport = true ; };
@ -86,11 +95,11 @@ helloWithDebug = pkgs.hello.overrideAttrs (oldAttrs: rec {
in this case, as it overrides only the attributes of the final derivation.
It is for this reason that <varname>overrideAttrs</varname> should be
preferred in (almost) all cases to <varname>overrideDerivation</varname>,
i.e. to allow using <varname>sdenv.mkDerivation</varname> to process input
i.e. to allow using <varname>stdenv.mkDerivation</varname> to process input
arguments, as well as the fact that it is easier to use (you can use the
same attribute names you see in your Nix code, instead of the ones
generated (e.g. <varname>buildInputs</varname> vs
<varname>nativeBuildInputs</varname>, and involves less typing.
<varname>nativeBuildInputs</varname>), and it involves less typing).
</para>
</note>
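<para>
For instance, appending a patch with <varname>overrideAttrs</varname>
might look like the following sketch (the package and
<filename>./my.patch</filename> are illustrative):
<programlisting>
helloWithPatch = pkgs.hello.overrideAttrs (oldAttrs: {
  # ./my.patch is a hypothetical local patch file
  patches = (oldAttrs.patches or []) ++ [ ./my.patch ];
});
</programlisting>
</para>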
</section>

View File

@ -12,8 +12,8 @@
<para>
Some extensions (plugins) might require OCaml and sometimes other OCaml
packages. The <literal>coq.ocamlPackages</literal> attribute can be used
to depend on the same package set Coq was built against.
packages. The <literal>coq.ocamlPackages</literal> attribute can be used to
depend on the same package set Coq was built against.
</para>
<para>

View File

@ -483,8 +483,8 @@ and in this case the `python35` interpreter is automatically used.
### Interpreters
Versions 2.7, 3.4, 3.5, 3.6 and 3.7 of the CPython interpreter are available as
respectively `python27`, `python34`, `python35` and `python36`. The PyPy interpreter
Versions 2.7, 3.5, 3.6 and 3.7 of the CPython interpreter are available as
respectively `python27`, `python35`, `python36` and `python37`. The PyPy interpreter
is available as `pypy`. The aliases `python2` and `python3` correspond to respectively `python27` and
`python35`. The default interpreter, `python`, maps to `python2`.
The Nix expressions for the interpreters can be found in
@ -507,7 +507,7 @@ Each interpreter has the following attributes:
- `buildEnv`. Function to build python interpreter environments with extra packages bundled together. See section *python.buildEnv function* for usage and documentation.
- `withPackages`. Simpler interface to `buildEnv`. See section *python.withPackages function* for usage and documentation.
- `sitePackages`. Alias for `lib/${libPrefix}/site-packages`.
- `executable`. Name of the interpreter executable, e.g. `python3.4`.
- `executable`. Name of the interpreter executable, e.g. `python3.7`.
- `pkgs`. Set of Python packages for that specific interpreter. The package set can be modified by overriding the interpreter and passing `packageOverrides`.
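For instance, `withPackages` can be used to build an interpreter with extra packages on its search path (a minimal sketch; the chosen package is illustrative):
```
with import <nixpkgs> {};

# python3 with the requests library available (illustrative choice)
python3.withPackages (ps: [ ps.requests ])
```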
### Building packages and applications
@ -529,7 +529,6 @@ attribute set is created for each available Python interpreter. The available
sets are
* `pkgs.python27Packages`
* `pkgs.python34Packages`
* `pkgs.python35Packages`
* `pkgs.python36Packages`
* `pkgs.python37Packages`
@ -670,7 +669,7 @@ python3Packages.buildPythonApplication rec {
sha256 = "035w8gqql36zlan0xjrzz9j4lh9hs0qrsgnbyw07qs7lnkvbdv9x";
};
propagatedBuildInputs = with python3Packages; [ tornado_4 pythondaemon ];
propagatedBuildInputs = with python3Packages; [ tornado_4 python-daemon ];
meta = with lib; {
...
@ -837,7 +836,7 @@ community to help save time. No tool is preferred at the moment.
### Deterministic builds
Python 2.7, 3.5 and 3.6 are now built deterministically and 3.4 mostly.
The Python interpreters are now built deterministically.
Minor modifications had to be made to the interpreters in order to generate
deterministic bytecode. This has security implications and is relevant for
those using Python in a `nix-shell`.

View File

@ -23,6 +23,7 @@ Adding custom .vimrc lines can be done using the following code:
```
vim_configurable.customize {
# `name` specifies the name of the executable and package
name = "vim-with-plugins";
vimrcConfig.customRC = ''
@ -31,6 +32,8 @@ vim_configurable.customize {
}
```
This configuration is used when vim is invoked with the command specified as name, in this case `vim-with-plugins`.
For Neovim the `configure` argument can be overridden to achieve the same:
```
@ -83,6 +86,7 @@ The resulting package can be added to `packageOverrides` in `~/.nixpkgs/config.n
{
packageOverrides = pkgs: with pkgs; {
myVim = vim_configurable.customize {
# `name` specifies the name of the executable and package
name = "vim-with-plugins";
# add here code from the example section
};

View File

@ -17,91 +17,122 @@
<title>Installing overlays</title>
<para>
The list of overlays is determined as follows.
The list of overlays can be set either explicitly in a Nix expression, or
through <literal>&lt;nixpkgs-overlays></literal> or user configuration
files.
</para>
<para>
If the <varname>overlays</varname> argument is not provided explicitly, we
look for overlays in a path. The path is determined as follows:
<orderedlist>
<listitem>
<para>
First, if an <varname>overlays</varname> argument to the nixpkgs function
itself is given, then that is used.
</para>
<para>
This can be passed explicitly when importing nixpkgs, for example
<literal>import &lt;nixpkgs> { overlays = [ overlay1 overlay2 ];
}</literal>.
</para>
</listitem>
<listitem>
<para>
Otherwise, if the Nix path entry <literal>&lt;nixpkgs-overlays></literal>
exists, we look for overlays at that path, as described below.
</para>
<para>
See the section on <literal>NIX_PATH</literal> in the Nix manual for more
details on how to set a value for
<literal>&lt;nixpkgs-overlays>.</literal>
</para>
</listitem>
<listitem>
<para>
If one of <filename>~/.config/nixpkgs/overlays.nix</filename> and
<filename>~/.config/nixpkgs/overlays/</filename> exists, then we look for
overlays at that path, as described below. It is an error if both exist.
</para>
</listitem>
</orderedlist>
</para>
<section xml:id="sec-overlays-argument">
<title>Set overlays in NixOS or Nix expressions</title>
<para>
If we are looking for overlays at a path, then there are two cases:
<itemizedlist>
<listitem>
<para>
If the path is a file, then the file is imported as a Nix expression and
used as the list of overlays.
</para>
</listitem>
<listitem>
<para>
If the path is a directory, then we take the content of the directory,
order it lexicographically, and attempt to interpret each as an overlay
by:
<itemizedlist>
<listitem>
<para>
Importing the file, if it is a <literal>.nix</literal> file.
</para>
</listitem>
<listitem>
<para>
Importing a top-level <filename>default.nix</filename> file, if it is
a directory.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
</para>
<para>
On a NixOS system the value of the <literal>nixpkgs.overlays</literal>
option, if present, is passed to the system Nixpkgs directly as an
argument. Note that this does not affect the overlays for non-NixOS
operations (e.g. <literal>nix-env</literal>), which are
<link xlink:href="#sec-overlays-lookup">looked</link> up independently.
</para>
<para>
On a NixOS system the value of the <literal>nixpkgs.overlays</literal>
option, if present, is passed to the system Nixpkgs directly as an argument.
Note that this does not affect the overlays for non-NixOS operations (e.g.
<literal>nix-env</literal>), which are looked up independently.
</para>
<para>
The list of overlays can be passed explicitly when importing nixpkgs, for
example <literal>import &lt;nixpkgs> { overlays = [ overlay1 overlay2 ];
}</literal>.
</para>
<para>
The <filename>overlays.nix</filename> option therefore provides a convenient
way to use the same overlays for a NixOS system configuration and user
configuration: the same file can be used as
<filename>overlays.nix</filename> and imported as the value of
<literal>nixpkgs.overlays</literal>.
</para>
<para>
Further overlays can be added by calling the <literal>pkgs.extend</literal>
or <literal>pkgs.appendOverlays</literal> functions, although it is often
preferable to avoid these, because they recompute the Nixpkgs fixpoint,
which is somewhat expensive to do.
</para>
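<para>
For instance (a minimal sketch; the added attribute is illustrative):
<programlisting>
let
  pkgs = import &lt;nixpkgs> { };
  extended = pkgs.extend (self: super: {
    # illustrative attribute added by the overlay
    myHello = super.hello;
  });
in extended.myHello
</programlisting>
</para>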
</section>
<section xml:id="sec-overlays-lookup">
<title>Install overlays via configuration lookup</title>
<para>
The list of overlays is determined as follows.
</para>
<para>
<orderedlist>
<listitem>
<para>
First, if an
<link xlink:href="#sec-overlays-argument"><varname>overlays</varname>
argument</link> to the nixpkgs function itself is given, then that is
used and no path lookup will be performed.
</para>
</listitem>
<listitem>
<para>
Otherwise, if the Nix path entry
<literal>&lt;nixpkgs-overlays></literal> exists, we look for overlays at
that path, as described below.
</para>
<para>
See the section on <literal>NIX_PATH</literal> in the Nix manual for
more details on how to set a value for
<literal>&lt;nixpkgs-overlays>.</literal>
</para>
</listitem>
<listitem>
<para>
If one of <filename>~/.config/nixpkgs/overlays.nix</filename> and
<filename>~/.config/nixpkgs/overlays/</filename> exists, then we look
for overlays at that path, as described below. It is an error if both
exist.
</para>
</listitem>
</orderedlist>
</para>
<para>
If we are looking for overlays at a path, then there are two cases:
<itemizedlist>
<listitem>
<para>
If the path is a file, then the file is imported as a Nix expression and
used as the list of overlays.
</para>
</listitem>
<listitem>
<para>
If the path is a directory, then we take the content of the directory,
order it lexicographically, and attempt to interpret each as an overlay
by:
<itemizedlist>
<listitem>
<para>
Importing the file, if it is a <literal>.nix</literal> file.
</para>
</listitem>
<listitem>
<para>
Importing a top-level <filename>default.nix</filename> file, if it is
a directory.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
</para>
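<para>
For instance, an <filename>overlays.nix</filename> found this way could
contain the following sketch (the overlay body is illustrative):
<programlisting>
[
  (self: super: {
    # illustrative override provided by this overlay
    myHello = super.hello;
  })
]
</programlisting>
</para>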
<para>
Because overlays that are set in NixOS configuration do not affect
non-NixOS operations such as <literal>nix-env</literal>, the
<filename>overlays.nix</filename> option provides a convenient way to use
the same overlays for a NixOS system configuration and user configuration:
the same file can be used as <filename>overlays.nix</filename> and imported
as the value of <literal>nixpkgs.overlays</literal>.
</para>
<!-- TODO: Example of sharing overlays between NixOS configuration
and configuration lookup. Also reference the example
from the sec-overlays-argument paragraph about NixOS.
-->
</section>
</section>
<!--============================================================-->
<section xml:id="sec-overlays-definition">

View File

@ -681,10 +681,10 @@ overrides = self: super: rec {
</para>
<para>
The python plugin allows the addition of extra libraries. For instance, the
<literal>inotify.py</literal> script in weechat-scripts requires D-Bus or
libnotify, and the <literal>fish.py</literal> script requires pycrypto. To
use these scripts, use the <literal>python</literal> plugin's
The python and perl plugins allow the addition of extra libraries. For
instance, the <literal>inotify.py</literal> script in weechat-scripts
requires D-Bus or libnotify, and the <literal>fish.py</literal> script
requires pycrypto. To use these scripts, use the plugin's
<literal>withPackages</literal> attribute:
<programlisting>weechat.override { configure = {availablePlugins, ...}: {
plugins = with availablePlugins; [

View File

@ -372,7 +372,7 @@ let f(h, h + 1, i) = i + h
They are programs/libraries used at build time that furthermore produce
programs/libraries also used at build time. If the dependency doesn't
care about the target platform (i.e. isn't a compiler or similar tool),
put it in <varname>nativeBuildInputs</varname>instead. The most common
put it in <varname>nativeBuildInputs</varname> instead. The most common
use for this is <literal>buildPackages.stdenv.cc</literal>, the default C
compiler for this role. That example crops up more than one might think
in old commonly used C libraries.
@ -2099,13 +2099,13 @@ someVar=$(stripHash $name)
</para>
<para>
In order to alleviate this burden, the <firstterm>setup
hook></firstterm>mechanism was written, where any package can include a
shell script that [by convention rather than enforcement by Nix], any
downstream reverse-dependency will source as part of its build process. That
allows the downstream dependency to merely specify its dependencies, and
lets those dependencies effectively initialize themselves. No boilerplate
mirroring the list of dependencies is needed.
In order to alleviate this burden, the <firstterm>setup hook</firstterm>
mechanism was written, where any package can include a shell script that [by
convention rather than enforcement by Nix], any downstream
reverse-dependency will source as part of its build process. That allows the
downstream dependency to merely specify its dependencies, and lets those
dependencies effectively initialize themselves. No boilerplate mirroring the
list of dependencies is needed.
</para>
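<para>
As a rough sketch (assuming a local <filename>setup-hook.sh</filename>;
the names are illustrative), a package can declare such a script through
the <varname>setupHook</varname> attribute:
<programlisting>
stdenv.mkDerivation {
  name = "my-tool";
  # ...
  # sourced by downstream builds that depend on this package
  setupHook = ./setup-hook.sh;
}
</programlisting>
</para>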
<para>
@ -2445,6 +2445,28 @@ addEnvHooks "$hostOffset" myBashFunction
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
breakpointHook
</term>
<listitem>
<para>
This hook will make a build pause instead of stopping when a failure
happens. It prevents Nix from cleaning up the build environment
immediately and allows the user to attach to a build environment using the
<command>cntr</command> command. On build error it will print the
instructions that are necessary for <command>cntr</command>. Installing
cntr and running the command will provide shell access to the build
sandbox of the failed build. The sandbox filesystem is mounted at
<filename>/var/lib/cntr</filename>. All commands and files of the system
are still accessible within the shell. To execute commands from the
sandbox use the <command>cntr exec</command> subcommand. Note that
<command>cntr</command> also needs to be executed on the machine that is
doing the build, which might not be the case when remote builders are
enabled. <command>cntr</command> is only supported on Linux-based platforms.
</para>
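<para>
A rough sketch of enabling the hook for a package (assuming
<varname>breakpointHook</varname> is in scope; the package is
illustrative):
<programlisting>
stdenv.mkDerivation {
  name = "example";
  # ...
  # pause instead of failing, so the sandbox can be entered with cntr
  nativeBuildInputs = [ breakpointHook ];
}
</programlisting>
</para>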
</listitem>
</varlistentry>
</variablelist>
</para>
</section>

View File

@ -23,27 +23,54 @@ rec {
# -- TRACING --
/* Trace msg, but only if pred is true.
/* Conditionally trace the supplied message, based on a predicate.
Type: traceIf :: bool -> string -> a -> a
Example:
traceIf true "hello" 3
trace: hello
=> 3
*/
traceIf = pred: msg: x: if pred then trace msg x else x;
traceIf =
# Predicate to check
pred:
# Message that should be traced
msg:
# Value to return
x: if pred then trace msg x else x;
/* Trace the value and also return it.
/* Trace the supplied value after applying a function to it, and
return the original value.
Type: traceValFn :: (a -> b) -> a -> a
Example:
traceValFn (v: "mystring ${v}") "foo"
trace: mystring foo
=> "foo"
*/
traceValFn = f: x: trace (f x) x;
traceValFn =
# Function to apply
f:
# Value to trace and return
x: trace (f x) x;
/* Trace the supplied value and return it.
Type: traceVal :: a -> a
Example:
traceVal 42
# trace: 42
=> 42
*/
traceVal = traceValFn id;
/* `builtins.trace`, but the value is `builtins.deepSeq`ed first.
Type: traceSeq :: a -> b -> b
Example:
trace { a.b.c = 3; } null
trace: { a = <CODE>; }
@ -52,7 +79,11 @@ rec {
trace: { a = { b = { c = 3; }; }; }
=> null
*/
traceSeq = x: y: trace (builtins.deepSeq x x) y;
traceSeq =
# The value to trace
x:
# The value to return
y: trace (builtins.deepSeq x x) y;
/* Like `traceSeq`, but only evaluate down to depth n.
This is very useful because lots of `traceSeq` usages
@ -76,27 +107,49 @@ rec {
in trace (generators.toPretty { allowPrettyValues = true; }
(modify depth snip x)) y;
/* A combination of `traceVal` and `traceSeq` */
traceValSeqFn = f: v: traceValFn f (builtins.deepSeq v v);
/* A combination of `traceVal` and `traceSeq` that applies a
provided function to the value to be traced after `deepSeq`ing
it.
*/
traceValSeqFn =
# Function to apply
f:
# Value to trace
v: traceValFn f (builtins.deepSeq v v);
/* A combination of `traceVal` and `traceSeq`. */
traceValSeq = traceValSeqFn id;
/* A combination of `traceVal` and `traceSeqN` that applies a
provided function to the value to be traced. */
traceValSeqNFn =
# Function to apply
f:
depth:
# Value to trace
v: traceSeqN depth (f v) v;
/* A combination of `traceVal` and `traceSeqN`. */
traceValSeqNFn = f: depth: v: traceSeqN depth (f v) v;
traceValSeqN = traceValSeqNFn id;
# -- TESTING --
/* Evaluate a set of tests. A test is an attribute set {expr,
expected}, denoting an expression and its expected result. The
result is a list of failed tests, each represented as {name,
expected, actual}, denoting the attribute name of the failing
test and its expected and actual results. Used for regression
testing of the functions in lib; see tests.nix for an example.
Only tests having names starting with "test" are run.
Add attr { tests = ["testName"]; } to run these test only
/* Evaluate a set of tests. A test is an attribute set `{expr,
expected}`, denoting an expression and its expected result. The
result is a list of failed tests, each represented as `{name,
expected, actual}`, denoting the attribute name of the failing
test and its expected and actual results.
Used for regression testing of the functions in lib; see
tests.nix for an example. Only tests having names starting with
"test" are run.
Add attr { tests = ["testName"]; } to run these tests only.
*/
runTests = tests: lib.concatLists (lib.attrValues (lib.mapAttrs (name: test:
runTests =
# Tests to run
tests: lib.concatLists (lib.attrValues (lib.mapAttrs (name: test:
let testsToRun = if tests ? tests then tests.tests else [];
in if (substring 0 4 name == "test" || elem name testsToRun)
&& ((testsToRun == []) || elem name tests.tests)
@ -105,8 +158,11 @@ rec {
then [ { inherit name; expected = test.expected; result = test.expr; } ]
else [] ) tests));
# create a test assuming that list elements are true
# usage: { testX = allTrue [ true ]; }
/* Create a test assuming that list elements are `true`.
Example:
{ testX = allTrue [ true ]; }
*/
testAllTrue = expr: { inherit expr; expected = map (x: true) expr; };
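# A rough usage sketch of runTests (illustrative; a fully passing suite
# yields an empty list of failures):
#   runTests { testSum = { expr = 1 + 1; expected = 2; }; }
#   => [ ]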

View File

@ -309,6 +309,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "GNU General Public License v2.0 only";
};
gpl2Classpath = {
spdxId = "GPL-2.0-with-classpath-exception";
fullName = "GNU General Public License v2.0 only (with Classpath exception)";
url = https://fedoraproject.org/wiki/Licensing/GPL_Classpath_Exception;
};
gpl2ClasspathPlus = {
fullName = "GNU General Public License v2.0 or later (with Classpath exception)";
url = https://fedoraproject.org/wiki/Licensing/GPL_Classpath_Exception;
@ -394,6 +400,10 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
free = false;
};
jasper = spdx {
spdxId = "JasPer-2.0";
fullName = "JasPer License";
};
lgpl2 = spdx {
spdxId = "LGPL-2.0";

View File

@ -1,4 +1,5 @@
# General list operations.
{ lib }:
with lib.trivial;
let
@ -8,21 +9,23 @@ rec {
inherit (builtins) head tail length isList elemAt concatLists filter elem genList;
/* Create a list consisting of a single element. `singleton x' is
sometimes more convenient with respect to indentation than `[x]'
/* Create a list consisting of a single element. `singleton x` is
sometimes more convenient with respect to indentation than `[x]`
when x spans multiple lines.
Type: singleton :: a -> [a]
Example:
singleton "foo"
=> [ "foo" ]
*/
singleton = x: [x];
/* right fold a binary function `op' between successive elements of
`list' with `nul' as the starting value, i.e.,
`foldr op nul [x_1 x_2 ... x_n] == op x_1 (op x_2 ... (op x_n nul))'.
Type:
foldr :: (a -> b -> b) -> b -> [a] -> b
/* right fold a binary function `op` between successive elements of
`list` with `nul` as the starting value, i.e.,
`foldr op nul [x_1 x_2 ... x_n] == op x_1 (op x_2 ... (op x_n nul))`.
Type: foldr :: (a -> b -> b) -> b -> [a] -> b
Example:
concat = foldr (a: b: a + b) "z"
@ -42,16 +45,15 @@ rec {
else op (elemAt list n) (fold' (n + 1));
in fold' 0;
/* `fold' is an alias of `foldr' for historic reasons */
/* `fold` is an alias of `foldr` for historic reasons */
# FIXME(Profpatsch): deprecate?
fold = foldr;
/* left fold, like `foldr', but from the left:
/* left fold, like `foldr`, but from the left:
`foldl op nul [x_1 x_2 ... x_n] == op (... (op (op nul x_1) x_2) ... x_n)`.
Type:
foldl :: (b -> a -> b) -> b -> [a] -> b
Type: foldl :: (b -> a -> b) -> b -> [a] -> b
Example:
lconcat = foldl (a: b: a + b) "z"
@ -70,16 +72,20 @@ rec {
else op (foldl' (n - 1)) (elemAt list n);
in foldl' (length list - 1);
/* Strict version of `foldl'.
/* Strict version of `foldl`.
The difference is that evaluation is forced upon access. Usually used
with small whole results (in contrast with lazily-generated lists or large
lists where only a part is consumed.)
Type: foldl' :: (b -> a -> b) -> b -> [a] -> b
*/
foldl' = builtins.foldl' or foldl;
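# A rough usage sketch (illustrative):
#   foldl' (acc: x: acc + x) 0 [ 1 2 3 ]
#   => 6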
/* Map with index starting from 0
Type: imap0 :: (int -> a -> b) -> [a] -> [b]
Example:
imap0 (i: v: "${v}-${toString i}") ["a" "b"]
=> [ "a-0" "b-1" ]
@ -88,6 +94,8 @@ rec {
/* Map with index starting from 1
Type: imap1 :: (int -> a -> b) -> [a] -> [b]
Example:
imap1 (i: v: "${v}-${toString i}") ["a" "b"]
=> [ "a-1" "b-2" ]
@ -96,6 +104,8 @@ rec {
/* Map and concatenate the result.
Type: concatMap :: (a -> [b]) -> [a] -> [b]
Example:
concatMap (x: [x] ++ ["z"]) ["a" "b"]
=> [ "a" "z" "b" "z" ]
@ -118,15 +128,21 @@ rec {
/* Remove elements equal to 'e' from a list. Useful for buildInputs.
Type: remove :: a -> [a] -> [a]
Example:
remove 3 [ 1 3 4 3 ]
=> [ 1 4 ]
*/
remove = e: filter (x: x != e);
remove =
# Element to remove from the list
e: filter (x: x != e);
/* Find the sole element in the list matching the specified
predicate, returns `default' if no such element exists, or
`multiple' if there are multiple matching elements.
predicate, returns `default` if no such element exists, or
`multiple` if there are multiple matching elements.
Type: findSingle :: (a -> bool) -> a -> a -> [a] -> a
Example:
findSingle (x: x == 3) "none" "multiple" [ 1 3 3 ]
@ -136,14 +152,24 @@ rec {
findSingle (x: x == 3) "none" "multiple" [ 1 9 ]
=> "none"
*/
findSingle = pred: default: multiple: list:
findSingle =
# Predicate
pred:
# Default value to return if element was not found.
default:
# Default value to return if more than one element was found
multiple:
# Input list
list:
let found = filter pred list; len = length found;
in if len == 0 then default
else if len != 1 then multiple
else head found;
/* Find the first element in the list matching the specified
predicate or returns `default' if no such element exists.
predicate or return `default` if no such element exists.
Type: findFirst :: (a -> bool) -> a -> [a] -> a
Example:
findFirst (x: x > 3) 7 [ 1 6 4 ]
@ -151,12 +177,20 @@ rec {
findFirst (x: x > 9) 7 [ 1 6 4 ]
=> 7
*/
findFirst = pred: default: list:
findFirst =
# Predicate
pred:
# Default value to return
default:
# Input list
list:
let found = filter pred list;
in if found == [] then default else head found;
/* Return true iff function `pred' returns true for at least element
of `list'.
/* Return true if function `pred` returns true for at least one
element of `list`.
Type: any :: (a -> bool) -> [a] -> bool
Example:
any isString [ 1 "a" { } ]
@ -166,8 +200,10 @@ rec {
*/
any = builtins.any or (pred: foldr (x: y: if pred x then true else y) false);
/* Return true iff function `pred' returns true for all elements of
`list'.
/* Return true if function `pred` returns true for all elements of
`list`.
Type: all :: (a -> bool) -> [a] -> bool
Example:
all (x: x < 3) [ 1 2 ]
@ -177,19 +213,25 @@ rec {
*/
all = builtins.all or (pred: foldr (x: y: if pred x then y else false) true);
/* Count how many times function `pred' returns true for the elements
of `list'.
/* Count how many elements of `list` match the supplied predicate
function.
Type: count :: (a -> bool) -> [a] -> int
Example:
count (x: x == 3) [ 3 2 3 4 6 ]
=> 2
*/
count = pred: foldl' (c: x: if pred x then c + 1 else c) 0;
count =
# Predicate
pred: foldl' (c: x: if pred x then c + 1 else c) 0;
/* Return a singleton list or an empty list, depending on a boolean
value. Useful when building lists with optional elements
(e.g. `++ optional (system == "i686-linux") flashplayer').
Type: optional :: bool -> a -> [a]
Example:
optional true "foo"
=> [ "foo" ]
@ -200,13 +242,19 @@ rec {
/* Return a list or an empty list, depending on a boolean value.
Type: optionals :: bool -> [a] -> [a]
Example:
optionals true [ 2 3 ]
=> [ 2 3 ]
optionals false [ 2 3 ]
=> [ ]
*/
optionals = cond: elems: if cond then elems else [];
optionals =
# Condition
cond:
# List to return if condition is true
elems: if cond then elems else [];
/* If argument is a list, return it; else, wrap it in a singleton
@ -223,20 +271,28 @@ rec {
/* Return a list of integers from `first' up to and including `last'.
Type: range :: int -> int -> [int]
Example:
range 2 4
=> [ 2 3 4 ]
range 3 2
=> [ ]
*/
range = first: last:
range =
# First integer in the range
first:
# Last integer in the range
last:
if first > last then
[]
else
genList (n: first + n) (last - first + 1);
/* Splits the elements of a list in two lists, `right' and
`wrong', depending on the evaluation of a predicate.
/* Splits the elements of a list in two lists, `right` and
`wrong`, depending on the evaluation of a predicate.
Type: (a -> bool) -> [a] -> { right :: [a], wrong :: [a] }
Example:
partition (x: x > 2) [ 5 1 2 3 4 ]
@ -252,7 +308,7 @@ rec {
/* Splits the elements of a list into many lists, using the return value of a predicate.
Predicate should return a string which becomes keys of attrset `groupBy' returns.
`groupBy'' allows to customise the combining function and initial value
`groupBy'` allows to customise the combining function and initial value
Example:
groupBy (x: boolToString (x > 2)) [ 5 1 2 3 4 ]
@ -268,10 +324,6 @@ rec {
xfce = [ { name = "xfce"; script = "xfce4-session &"; } ];
}
groupBy' allows to customise the combining function and initial value
Example:
groupBy' builtins.add 0 (x: boolToString (x > 2)) [ 5 1 2 3 4 ]
=> { true = 12; false = 3; }
*/
@ -289,17 +341,27 @@ rec {
the merging stops at the shortest. How both lists are merged is defined
by the first argument.
Type: zipListsWith :: (a -> b -> c) -> [a] -> [b] -> [c]
Example:
zipListsWith (a: b: a + b) ["h" "l"] ["e" "o"]
=> ["he" "lo"]
*/
zipListsWith = f: fst: snd:
zipListsWith =
# Function to zip elements of both lists
f:
# First list
fst:
# Second list
snd:
genList
(n: f (elemAt fst n) (elemAt snd n)) (min (length fst) (length snd));
/* Merges two lists of the same size together. If the sizes aren't the same
the merging stops at the shortest.
Type: zipLists :: [a] -> [b] -> [{ fst :: a, snd :: b}]
Example:
zipLists [ 1 2 ] [ "a" "b" ]
=> [ { fst = 1; snd = "a"; } { fst = 2; snd = "b"; } ]
@ -308,6 +370,8 @@ rec {
/* Reverse the order of the elements of a list.
Type: reverseList :: [a] -> [a]
Example:
reverseList [ "b" "o" "j" ]
@ -321,8 +385,7 @@ rec {
`before a b == true` means that `b` depends on `a` (there's an
edge from `b` to `a`).
Examples:
Example:
listDfs true hasPrefix [ "/home/user" "other" "/" "/home" ]
== { minimal = "/"; # minimal element
visited = [ "/home/user" ]; # seen elements (in reverse order)
@ -336,7 +399,6 @@ rec {
rest = [ "/home" "other" ]; # everything else
*/
listDfs = stopOnCycles: before: list:
let
dfs' = us: visited: rest:
@ -361,7 +423,7 @@ rec {
`before a b == true` means that `b` should be after `a`
in the result.
Examples:
Example:
toposort hasPrefix [ "/home/user" "other" "/" "/home" ]
== { result = [ "/" "/home" "/home/user" "other" ]; }
@ -376,7 +438,6 @@ rec {
toposort (a: b: a < b) [ 3 2 1 ] == { result = [ 1 2 3 ]; }
*/
toposort = before: list:
let
dfsthis = listDfs true before list;
@ -467,26 +528,38 @@ rec {
/* Return the first (at most) N elements of a list.
Type: take :: int -> [a] -> [a]
Example:
take 2 [ "a" "b" "c" "d" ]
=> [ "a" "b" ]
take 2 [ ]
=> [ ]
*/
take = count: sublist 0 count;
take =
# Number of elements to take
count: sublist 0 count;
/* Remove the first (at most) N elements of a list.
Type: drop :: int -> [a] -> [a]
Example:
drop 2 [ "a" "b" "c" "d" ]
=> [ "c" "d" ]
drop 2 [ ]
=> [ ]
*/
drop = count: list: sublist count (length list) list;
drop =
# Number of elements to drop
count:
# Input list
list: sublist count (length list) list;
/* Return a list consisting of at most count elements of list,
starting at index start.
/* Return a list consisting of at most `count` elements of `list`,
starting at index `start`.
Type: sublist :: int -> int -> [a] -> [a]
Example:
sublist 1 3 [ "a" "b" "c" "d" "e" ]
@ -494,7 +567,13 @@ rec {
sublist 1 3 [ ]
=> [ ]
*/
sublist = start: count: list:
sublist =
# Index at which to start the sublist
start:
# Number of elements to take
count:
# Input list
list:
let len = length list; in
genList
(n: elemAt list (n + start))
@ -504,6 +583,10 @@ rec {
/* Return the last element of a list.
This function throws an error if the list is empty.
Type: last :: [a] -> a
Example:
last [ 1 2 3 ]
=> 3
@ -512,7 +595,11 @@ rec {
assert lib.assertMsg (list != []) "lists.last: list must not be empty!";
elemAt list (length list - 1);
/* Return all elements but the last
/* Return all elements but the last.
This function throws an error if the list is empty.
Type: init :: [a] -> [a]
Example:
init [ 1 2 3 ]
@ -523,7 +610,7 @@ rec {
take (length list - 1) list;
/* return the image of the cross product of some lists by a function
/* Return the image of the cross product of some lists by a function.
Example:
crossLists (x:y: "${toString x}${toString y}") [[1 2] [3 4]]
@ -534,8 +621,9 @@ rec {
/* Remove duplicate elements from the list. O(n^2) complexity.
Example:
Type: unique :: [a] -> [a]
Example:
unique [ 3 2 3 4 ]
=> [ 3 2 4 ]
*/

View File

@ -8,61 +8,72 @@ with lib.strings;
rec {
# Returns true when the given argument is an option
#
# Examples:
# isOption 1 // => false
# isOption (mkOption {}) // => true
/* Returns true when the given argument is an option
Type: isOption :: a -> bool
Example:
isOption 1 // => false
isOption (mkOption {}) // => true
*/
isOption = lib.isType "option";
# Creates an Option attribute set. mkOption accepts an attribute set with the following keys:
#
# default: Default value used when no definition is given in the configuration.
# defaultText: Textual representation of the default, for in the manual.
# example: Example value used in the manual.
# description: String describing the option.
# type: Option type, providing type-checking and value merging.
# apply: Function that converts the option value to something else.
# internal: Whether the option is for NixOS developers only.
# visible: Whether the option shows up in the manual.
# readOnly: Whether the option can be set only once
# options: Obsolete, used by types.optionSet.
#
# All keys default to `null` when not given.
#
# Examples:
# mkOption { } // => { _type = "option"; }
# mkOption { defaultText = "foo"; } // => { _type = "option"; defaultText = "foo"; }
/* Creates an Option attribute set. mkOption accepts an attribute set with the following keys:
All keys default to `null` when not given.
Example:
mkOption { } // => { _type = "option"; }
mkOption { defaultText = "foo"; } // => { _type = "option"; defaultText = "foo"; }
*/
mkOption =
{ default ? null # Default value used when no definition is given in the configuration.
, defaultText ? null # Textual representation of the default, for in the manual.
, example ? null # Example value used in the manual.
, description ? null # String describing the option.
, relatedPackages ? null # Related packages used in the manual (see `genRelatedPackages` in ../nixos/doc/manual/default.nix).
, type ? null # Option type, providing type-checking and value merging.
, apply ? null # Function that converts the option value to something else.
, internal ? null # Whether the option is for NixOS developers only.
, visible ? null # Whether the option shows up in the manual.
, readOnly ? null # Whether the option can be set only once
, options ? null # Obsolete, used by types.optionSet.
{
# Default value used when no definition is given in the configuration.
default ? null,
# Textual representation of the default, for the manual.
defaultText ? null,
# Example value used in the manual.
example ? null,
# String describing the option.
description ? null,
# Related packages used in the manual (see `genRelatedPackages` in ../nixos/doc/manual/default.nix).
relatedPackages ? null,
# Option type, providing type-checking and value merging.
type ? null,
# Function that converts the option value to something else.
apply ? null,
# Whether the option is for NixOS developers only.
internal ? null,
# Whether the option shows up in the manual.
visible ? null,
# Whether the option can be set only once
readOnly ? null,
# Obsolete, used by types.optionSet.
options ? null
} @ attrs:
attrs // { _type = "option"; };
# Creates a Option attribute set for a boolean value option i.e an option to be toggled on or off:
#
# Examples:
# mkEnableOption "foo" // => { _type = "option"; default = false; description = "Whether to enable foo."; example = true; type = { ... }; }
mkEnableOption = name: mkOption {
/* Creates an Option attribute set for a boolean value option, i.e. an
option to be toggled on or off:
Example:
mkEnableOption "foo"
=> { _type = "option"; default = false; description = "Whether to enable foo."; example = true; type = { ... }; }
*/
mkEnableOption =
# Name for the created option
name: mkOption {
default = false;
example = true;
description = "Whether to enable ${name}.";
type = lib.types.bool;
};
# This option accept anything, but it does not produce any result. This
# is useful for sharing a module across different module sets without
# having to implement similar features as long as the value of the options
# are not expected.
/* This option accepts anything, but it does not produce any result.
This is useful for sharing a module across different module sets
without having to implement similar features as long as the
values of the options are not accessed. */
mkSinkUndeclaredOptions = attrs: mkOption ({
internal = true;
visible = false;
@ -102,18 +113,24 @@ rec {
else
val) (head defs).value defs;
# Extracts values of all "value" keys of the given list
#
# Examples:
# getValues [ { value = 1; } { value = 2; } ] // => [ 1 2 ]
# getValues [ ] // => [ ]
/* Extracts values of all "value" keys of the given list.
Type: getValues :: [ { value :: a } ] -> [a]
Example:
getValues [ { value = 1; } { value = 2; } ] // => [ 1 2 ]
getValues [ ] // => [ ]
*/
getValues = map (x: x.value);
# Extracts values of all "file" keys of the given list
#
# Examples:
# getFiles [ { file = "file1"; } { file = "file2"; } ] // => [ "file1" "file2" ]
# getFiles [ ] // => [ ]
/* Extracts values of all "file" keys of the given list
Type: getFiles :: [ { file :: a } ] -> [a]
Example:
getFiles [ { file = "file1"; } { file = "file2"; } ] // => [ "file1" "file2" ]
getFiles [ ] // => [ ]
*/
getFiles = map (x: x.file);
# Generate documentation template from the list of option declaration like
@ -146,10 +163,13 @@ rec {
/* This function recursively removes all derivation attributes from
`x' except for the `name' attribute. This is to make the
generation of `options.xml' much more efficient: the XML
representation of derivations is very large (on the order of
megabytes) and is not actually used by the manual generator. */
`x` except for the `name` attribute.
This is to make the generation of `options.xml` much more
efficient: the XML representation of derivations is very large
(on the order of megabytes) and is not actually used by the
manual generator.
*/
scrubOptionValue = x:
if isDerivation x then
{ type = "derivation"; drvPath = x.name; outPath = x.name; name = x.name; }
@ -158,20 +178,21 @@ rec {
else x;
/* For use in the example option attribute. It causes the given
text to be included verbatim in documentation. This is necessary
for example values that are not simple values, e.g.,
functions. */
/* For use in the `example` option attribute. It causes the given
text to be included verbatim in documentation. This is necessary
for example values that are not simple values, e.g., functions.
*/
literalExample = text: { _type = "literalExample"; inherit text; };
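# A rough usage sketch (illustrative) inside an option declaration:
#   example = literalExample "pkgs.hello";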
# Helper functions.
/* Helper functions. */
/* Convert an option, described as a list of the option parts, into a
safe, human readable version.
# Convert an option, described as a list of the option parts in to a
# safe, human readable version. ie:
#
# (showOption ["foo" "bar" "baz"]) == "foo.bar.baz"
# (showOption ["foo" "bar.baz" "tux"]) == "foo.\"bar.baz\".tux"
Example:
(showOption ["foo" "bar" "baz"]) == "foo.bar.baz"
(showOption ["foo" "bar.baz" "tux"]) == "foo.\"bar.baz\".tux"
*/
showOption = parts: let
escapeOptionPart = part:
let

View File

@ -12,6 +12,8 @@ rec {
/* Concatenate a list of strings.
Type: concatStrings :: [string] -> string
Example:
concatStrings ["foo" "bar"]
=> "foobar"
@ -20,15 +22,19 @@ rec {
/* Map a function over a list and concatenate the resulting strings.
Type: concatMapStrings :: (a -> string) -> [a] -> string
Example:
concatMapStrings (x: "a" + x) ["foo" "bar"]
=> "afooabar"
*/
concatMapStrings = f: list: concatStrings (map f list);
/* Like `concatMapStrings' except that the f functions also gets the
/* Like `concatMapStrings` except that the f functions also gets the
position as a parameter.
Type: concatImapStrings :: (int -> a -> string) -> [a] -> string
Example:
concatImapStrings (pos: x: "${toString pos}-${x}") ["foo" "bar"]
=> "1-foo2-bar"
@ -37,17 +43,25 @@ rec {
/* Place an element between each element of a list
Type: intersperse :: a -> [a] -> [a]
Example:
intersperse "/" ["usr" "local" "bin"]
=> ["usr" "/" "local" "/" "bin"].
*/
intersperse = separator: list:
intersperse =
# Separator to add between elements
separator:
# Input list
list:
if list == [] || length list == 1
then list
else tail (lib.concatMap (x: [separator x]) list);
/* Concatenate a list of strings with a separator between each element
Type: concatStringsSep :: string -> [string] -> string
Example:
concatStringsSep "/" ["usr" "local" "bin"]
=> "usr/local/bin"
@ -55,43 +69,77 @@ rec {
concatStringsSep = builtins.concatStringsSep or (separator: list:
concatStrings (intersperse separator list));
/* First maps over the list and then concatenates it.
/* Maps a function over a list of strings and then concatenates the
result with the specified separator interspersed between
elements.
Type: concatMapStringsSep :: string -> (string -> string) -> [string] -> string
Example:
concatMapStringsSep "-" (x: toUpper x) ["foo" "bar" "baz"]
=> "FOO-BAR-BAZ"
*/
concatMapStringsSep = sep: f: list: concatStringsSep sep (map f list);
concatMapStringsSep =
# Separator to add between elements
sep:
# Function to map over the list
f:
# List of input strings
list: concatStringsSep sep (map f list);
/* First imaps over the list and then concatenates it.
/* Same as `concatMapStringsSep`, but the mapping function
additionally receives the position of its argument.
Type: concatMapStringsSep :: string -> (int -> string -> string) -> [string] -> string
Example:
concatImapStringsSep "-" (pos: x: toString (x / pos)) [ 6 6 6 ]
=> "6-3-2"
*/
concatImapStringsSep = sep: f: list: concatStringsSep sep (lib.imap1 f list);
concatImapStringsSep =
# Separator to add between elements
sep:
# Function that receives elements and their positions
f:
# List of input strings
list: concatStringsSep sep (lib.imap1 f list);
/* Construct a Unix-style search path consisting of each `subDir"
directory of the given list of packages.
/* Construct a Unix-style, colon-separated search path consisting of
the given `subDir` appended to each of the given paths.
Type: makeSearchPath :: string -> [string] -> string
Example:
makeSearchPath "bin" ["/root" "/usr" "/usr/local"]
=> "/root/bin:/usr/bin:/usr/local/bin"
makeSearchPath "bin" ["/"]
=> "//bin"
makeSearchPath "bin" [""]
=> "/bin"
*/
makeSearchPath = subDir: packages:
concatStringsSep ":" (map (path: path + "/" + subDir) (builtins.filter (x: x != null) packages));
makeSearchPath =
# Directory name to append
subDir:
# List of base paths
paths:
concatStringsSep ":" (map (path: path + "/" + subDir) (builtins.filter (x: x != null) paths));
/* Construct a Unix-style search path, using given package output.
If no output is found, fallback to `.out` and then to the default.
/* Construct a Unix-style search path by appending the given
`subDir` to the specified `output` of each of the packages. If no
output by the given name is found, fallback to `.out` and then to
the default.
Type: string -> string -> [package] -> string
Example:
makeSearchPathOutput "dev" "bin" [ pkgs.openssl pkgs.zlib ]
=> "/nix/store/9rz8gxhzf8sw4kf2j2f1grr49w8zx5vj-openssl-1.0.1r-dev/bin:/nix/store/wwh7mhwh269sfjkm6k5665b5kgp7jrk2-zlib-1.2.8/bin"
*/
makeSearchPathOutput = output: subDir: pkgs: makeSearchPath subDir (map (lib.getOutput output) pkgs);
makeSearchPathOutput =
# Package output to use
output:
# Directory name to append
subDir:
# List of packages
pkgs: makeSearchPath subDir (map (lib.getOutput output) pkgs);
/* Construct a library search path (such as RPATH) containing the
libraries for a set of packages
@ -117,13 +165,12 @@ rec {
/* Construct a perl search path (such as $PERL5LIB)
FIXME(zimbatm): this should be moved in perl-specific code
Example:
pkgs = import <nixpkgs> { }
makePerlPath [ pkgs.perlPackages.libnet ]
=> "/nix/store/n0m1fk9c960d8wlrs62sncnadygqqc6y-perl-Net-SMTP-1.25/lib/perl5/site_perl"
*/
# FIXME(zimbatm): this should be moved in perl-specific code
makePerlPath = makeSearchPathOutput "lib" "lib/perl5/site_perl";
/* Construct a perl search path recursively including all dependencies (such as $PERL5LIB)
@ -138,34 +185,51 @@ rec {
/* Depending on the boolean `cond', return either the given string
or the empty string. Useful to concatenate against a bigger string.
Type: optionalString :: bool -> string -> string
Example:
optionalString true "some-string"
=> "some-string"
optionalString false "some-string"
=> ""
*/
optionalString = cond: string: if cond then string else "";
optionalString =
# Condition
cond:
# String to return if condition is true
string: if cond then string else "";
/* Determine whether a string has given prefix.
Type: hasPrefix :: string -> string -> bool
Example:
hasPrefix "foo" "foobar"
=> true
hasPrefix "foo" "barfoo"
=> false
*/
hasPrefix = pref: str:
substring 0 (stringLength pref) str == pref;
hasPrefix =
# Prefix to check for
pref:
# Input string
str: substring 0 (stringLength pref) str == pref;
/* Determine whether a string has given suffix.
Type: hasSuffix :: string -> string -> bool
Example:
hasSuffix "foo" "foobar"
=> false
hasSuffix "foo" "barfoo"
=> true
*/
hasSuffix = suffix: content:
hasSuffix =
# Suffix to check for
suffix:
# Input string
content:
let
lenContent = stringLength content;
lenSuffix = stringLength suffix;
@ -180,6 +244,8 @@ rec {
Also note that Nix treats strings as a list of bytes and thus doesn't
handle unicode.
Type: stringToCharacters :: string -> [string]
Example:
stringToCharacters ""
=> [ ]
@ -194,18 +260,25 @@ rec {
/* Manipulate a string character by character and replace them by
strings before concatenating the results.
Type: stringAsChars :: (string -> string) -> string -> string
Example:
stringAsChars (x: if x == "a" then "i" else x) "nax"
=> "nix"
*/
stringAsChars = f: s:
concatStrings (
stringAsChars =
# Function to map over each individual character
f:
# Input string
s: concatStrings (
map f (stringToCharacters s)
);
/* Escape occurrence of the elements of list in string by
/* Escape occurrence of the elements of `list` in `string` by
prefixing it with a backslash.
Type: escape :: [string] -> string -> string
Example:
escape ["(" ")"] "(foo)"
=> "\\(foo\\)"
@ -214,6 +287,8 @@ rec {
/* Quote string to be used safely within the Bourne shell.
Type: escapeShellArg :: string -> string
Example:
escapeShellArg "esc'ape\nme"
=> "'esc'\\''ape\nme'"
@ -222,6 +297,8 @@ rec {
/* Quote all arguments to be safely passed to the Bourne shell.
Type: escapeShellArgs :: [string] -> string
Example:
escapeShellArgs ["one" "two three" "four'five"]
=> "'one' 'two three' 'four'\\''five'"
@ -230,13 +307,15 @@ rec {
/* Turn a string into a Nix expression representing that string
Type: string -> string
Example:
escapeNixString "hello\${}\n"
=> "\"hello\\\${}\\n\""
*/
escapeNixString = s: escape ["$"] (builtins.toJSON s);
/* Obsolete - use replaceStrings instead. */
# Obsolete - use replaceStrings instead.
replaceChars = builtins.replaceStrings or (
del: new: s:
let
@ -256,6 +335,8 @@ rec {
/* Converts an ASCII string to lower-case.
Type: toLower :: string -> string
Example:
toLower "HOME"
=> "home"
@ -264,6 +345,8 @@ rec {
/* Converts an ASCII string to upper-case.
Type: toUpper :: string -> string
Example:
toUpper "home"
=> "HOME"
@ -273,7 +356,7 @@ rec {
/* Appends string context from another string. This is an implementation
detail of Nix.
Strings in Nix carry an invisible `context' which is a list of strings
Strings in Nix carry an invisible `context` which is a list of strings
representing store paths. If the string is later used in a derivation
attribute, the derivation will properly populate the inputDrvs and
inputSrcs.
@ -319,8 +402,9 @@ rec {
in
recurse 0 0;
/* Return the suffix of the second argument if the first argument matches
its prefix.
/* Return a string without the specified prefix, if the prefix matches.
Type: string -> string -> string
Example:
removePrefix "foo." "foo.bar.baz"
@ -328,18 +412,23 @@ rec {
removePrefix "xxx" "foo.bar.baz"
=> "foo.bar.baz"
*/
removePrefix = pre: s:
removePrefix =
# Prefix to remove if it matches
prefix:
# Input string
str:
let
preLen = stringLength pre;
sLen = stringLength s;
preLen = stringLength prefix;
sLen = stringLength str;
in
if hasPrefix pre s then
substring preLen (sLen - preLen) s
if hasPrefix prefix str then
substring preLen (sLen - preLen) str
else
s;
str;
/* Return the prefix of the second argument if the first argument matches
its suffix.
/* Return a string without the specified suffix, if the suffix matches.
Type: string -> string -> string
Example:
removeSuffix "front" "homefront"
@ -347,17 +436,21 @@ rec {
removeSuffix "xxx" "homefront"
=> "homefront"
*/
removeSuffix = suf: s:
removeSuffix =
# Suffix to remove if it matches
suffix:
# Input string
str:
let
sufLen = stringLength suf;
sLen = stringLength s;
sufLen = stringLength suffix;
sLen = stringLength str;
in
if sufLen <= sLen && suf == substring (sLen - sufLen) sufLen s then
substring 0 (sLen - sufLen) s
if sufLen <= sLen && suffix == substring (sLen - sufLen) sufLen str then
substring 0 (sLen - sufLen) str
else
s;
str;
/* Return true iff string v1 denotes a version older than v2.
/* Return true if string v1 denotes a version older than v2.
Example:
versionOlder "1.1" "1.2"
@ -367,7 +460,7 @@ rec {
*/
versionOlder = v1: v2: builtins.compareVersions v2 v1 == 1;
/* Return true iff string v1 denotes a version equal to or newer than v2.
/* Return true if string v1 denotes a version equal to or newer than v2.
Example:
versionAtLeast "1.1" "1.0"
@ -459,6 +552,11 @@ rec {
/* Create a fixed width string with additional prefix to match
required width.
This function will fail if the input string is longer than the
requested length.
Type: fixedWidthString :: int -> string -> string
Example:
fixedWidthString 5 "0" (toString 15)
=> "00015"
@ -502,12 +600,16 @@ rec {
=> false
*/
isStorePath = x:
isCoercibleToString x
&& builtins.substring 0 1 (toString x) == "/"
&& dirOf x == builtins.storeDir;
if isCoercibleToString x then
let str = toString x; in
builtins.substring 0 1 str == "/"
&& dirOf str == builtins.storeDir
else
false;
/* Convert string to int
Obviously, it is a bit hacky to use fromJSON that way.
/* Parse a string as an int.
Type: string -> int
Example:
toInt "1337"
@ -517,17 +619,18 @@ rec {
toInt "3.14"
=> error: floating point JSON numbers are not supported
*/
# Obviously, it is a bit hacky to use fromJSON this way.
toInt = str:
let may_be_int = builtins.fromJSON str; in
if builtins.isInt may_be_int
then may_be_int
else throw "Could not convert ${str} to int.";
/* Read a list of paths from `file', relative to the `rootPath'. Lines
beginning with `#' are treated as comments and ignored. Whitespace
is significant.
/* Read a list of paths from `file`, relative to the `rootPath`.
Lines beginning with `#` are treated as comments and ignored.
Whitespace is significant.
NOTE: this function is not performant and should be avoided
NOTE: This function is not performant and should be avoided.
Example:
readPathsFromFile /prefix
@ -549,6 +652,8 @@ rec {
/* Read the contents of a file removing the trailing \n
Type: fileContents :: path -> string
Example:
$ echo "1.0" > ./version

View File

@ -32,6 +32,7 @@ rec {
else if final.isUClibc then "uclibc"
else if final.isAndroid then "bionic"
else if final.isLinux /* default */ then "glibc"
else if final.isAvr then "avrlibc"
# TODO(@Ericson2314) think more about other operating systems
else "native/impure";
extensions = {

View File

@ -99,6 +99,34 @@ rec {
riscv64 = riscv "64";
riscv32 = riscv "32";
avr = {
config = "avr";
};
arm-embedded = {
config = "arm-none-eabi";
libc = "newlib";
};
aarch64-embedded = {
config = "aarch64-none-elf";
libc = "newlib";
};
ppc-embedded = {
config = "powerpc-none-eabi";
libc = "newlib";
};
i686-embedded = {
config = "i686-elf";
libc = "newlib";
};
x86_64-embedded = {
config = "x86_64-elf";
libc = "newlib";
};
#
# Darwin

View File

@ -19,6 +19,7 @@ rec {
isRiscV = { cpu = { family = "riscv"; }; };
isSparc = { cpu = { family = "sparc"; }; };
isWasm = { cpu = { family = "wasm"; }; };
isAvr = { cpu = { family = "avr"; }; };
is32bit = { cpu = { bits = 32; }; };
is64bit = { cpu = { bits = 64; }; };

View File

@ -101,6 +101,8 @@ rec {
wasm32 = { bits = 32; significantByte = littleEndian; family = "wasm"; };
wasm64 = { bits = 64; significantByte = littleEndian; family = "wasm"; };
avr = { bits = 8; family = "avr"; };
};
################################################################################
@ -117,6 +119,7 @@ rec {
apple = {};
pc = {};
none = {};
unknown = {};
};
@ -200,6 +203,7 @@ rec {
cygnus = {};
msvc = {};
eabi = {};
elf = {};
androideabi = {};
android = {
@ -255,9 +259,16 @@ rec {
setType "system" components;
mkSkeletonFromList = l: {
"1" = if elemAt l 0 == "avr"
then { cpu = elemAt l 0; kernel = "none"; abi = "unknown"; }
else throw "Target specification with 1 components is ambiguous";
"2" = # We only do 2-part hacks for things Nix already supports
if elemAt l 1 == "cygwin"
then { cpu = elemAt l 0; kernel = "windows"; abi = "cygnus"; }
else if (elemAt l 1 == "eabi")
then { cpu = elemAt l 0; vendor = "none"; kernel = "none"; abi = elemAt l 1; }
else if (elemAt l 1 == "elf")
then { cpu = elemAt l 0; vendor = "none"; kernel = "none"; abi = elemAt l 1; }
else { cpu = elemAt l 0; kernel = elemAt l 1; };
"3" = # Awkwards hacks, beware!
if elemAt l 1 == "apple"
@ -268,6 +279,10 @@ rec {
then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = "windows"; abi = "gnu"; }
else if hasPrefix "netbsd" (elemAt l 2)
then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = elemAt l 2; }
else if (elemAt l 2 == "eabi")
then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = "none"; abi = elemAt l 2; }
else if (elemAt l 2 == "elf")
then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = "none"; abi = elemAt l 2; }
else throw "Target specification with 3 components is ambiguous";
"4" = { cpu = elemAt l 0; vendor = elemAt l 1; kernel = elemAt l 2; abi = elemAt l 3; };
}.${toString (length l)}

View File

@ -471,6 +471,7 @@ rec {
"x86_64-linux" = pc64;
"armv5tel-linux" = sheevaplug;
"armv6l-linux" = raspberrypi;
"armv7a-linux" = armv7l-hf-multiplatform;
"armv7l-linux" = armv7l-hf-multiplatform;
"aarch64-linux" = aarch64-multiplatform;
"mipsel-linux" = fuloong2f_n32;

View File

@ -112,7 +112,7 @@ runTests {
storePathAppendix = isStorePath
"${goodPath}/bin/python";
nonAbsolute = isStorePath (concatStrings (tail (stringToCharacters goodPath)));
asPath = isStorePath goodPath;
asPath = isStorePath (/. + goodPath);
otherPath = isStorePath "/something/else";
otherVals = {
attrset = isStorePath {};

View File

@ -9,23 +9,37 @@ rec {
Type: id :: a -> a
*/
id = x: x;
id =
# The value to return
x: x;
/* The constant function
Ignores the second argument.
Or: Construct a function that always returns a static value.
Ignores the second argument. If called with only one argument,
constructs a function that always returns a static value.
Type: const :: a -> b -> a
Example:
let f = const 5; in f 10
=> 5
*/
const = x: y: x;
const =
# Value to return
x:
# Value to ignore
y: x;
## Named versions corresponding to some builtin operators.
/* Concatenate two lists */
/* Concatenate two lists
Type: concat :: [a] -> [a] -> [a]
Example:
concat [ 1 2 ] [ 3 4 ]
=> [ 1 2 3 4 ]
*/
concat = x: y: x ++ y;
/* boolean or */
@ -53,27 +67,40 @@ rec {
bitNot = builtins.sub (-1);
/* Convert a boolean to a string.
Note that toString on a bool returns "1" and "".
This function uses the strings "true" and "false" to represent
boolean values. Calling `toString` on a bool instead returns "1"
and "" (sic!).
Type: boolToString :: bool -> string
*/
boolToString = b: if b then "true" else "false";
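# A rough usage sketch (illustrative):
#   boolToString (1 < 2)
#   => "true"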
/* Merge two attribute sets shallowly, right side trumps left
mergeAttrs :: attrs -> attrs -> attrs
Example:
mergeAttrs { a = 1; b = 2; } { b = 3; c = 4; }
=> { a = 1; b = 3; c = 4; }
*/
mergeAttrs = x: y: x // y;
mergeAttrs =
# Left attribute set
x:
# Right attribute set (higher precedence for equal keys)
y: x // y;
/* Flip the order of the arguments of a binary function.
Type: flip :: (a -> b -> c) -> (b -> a -> c)
Example:
flip concat [1] [2]
=> [ 2 1 ]
*/
flip = f: a: b: f b a;
/* Apply function if argument is non-null.
/* Apply function if the supplied argument is non-null.
Example:
mapNullable (x: x+1) null
@ -81,7 +108,11 @@ rec {
mapNullable (x: x+1) 22
=> 23
*/
mapNullable = f: a: if isNull a then a else f a;
mapNullable =
# Function to call
f:
# Argument to check for null before passing it to `f`
a: if isNull a then a else f a;
# Pull in some builtins not included elsewhere.
inherit (builtins)
@ -92,21 +123,27 @@ rec {
## nixpkgs version strings
# The current full nixpkgs version number.
/* Returns the current full nixpkgs version number. */
version = release + versionSuffix;
# The current nixpkgs version number as string.
/* Returns the current nixpkgs release number as a string. */
release = lib.strings.fileContents ../.version;
# The current nixpkgs version suffix as string.
/* Returns the current nixpkgs version suffix as a string. */
versionSuffix =
let suffixFile = ../.version-suffix;
in if pathExists suffixFile
then lib.strings.fileContents suffixFile
else "pre-git";
# Attempt to get the revision nixpkgs is from
revisionWithDefault = default:
/* Attempts to return the current revision of nixpkgs and
returns the supplied default value otherwise.
Type: revisionWithDefault :: string -> string
*/
revisionWithDefault =
# Default value to return if the revision cannot be determined
default:
let
revisionFile = "${toString ./..}/.git-revision";
gitRepo = "${toString ./..}/.git";
@ -117,14 +154,20 @@ rec {
nixpkgsVersion = builtins.trace "`lib.nixpkgsVersion` is deprecated, use `lib.version` instead!" version;
# Whether we're being called by nix-shell.
/* Determine whether the function is being called from inside a Nix
shell.
Type: inNixShell :: bool
*/
inNixShell = builtins.getEnv "IN_NIX_SHELL" != "";
## Integer operations
# Return minimum/maximum of two numbers.
/* Return minimum of two numbers. */
min = x: y: if x < y then x else y;
/* Return maximum of two numbers. */
max = x: y: if x > y then x else y;
/* Integer modulus
@ -158,8 +201,9 @@ rec {
second subtype, compare elements of a single subtype with `yes`
and `no` respectively.
Example:
Type: (a -> bool) -> (a -> a -> int) -> (a -> a -> int) -> (a -> a -> int)
Example:
let cmp = splitByAndCompare (hasPrefix "foo") compare compare; in
cmp "a" "z" => -1
@ -170,31 +214,44 @@ rec {
# while
compare "fooa" "a" => 1
*/
splitByAndCompare = p: yes: no: a: b:
splitByAndCompare =
# Predicate
p:
# Comparison function if predicate holds for both values
yes:
# Comparison function if predicate holds for neither value
no:
# First value to compare
a:
# Second value to compare
b:
if p a
then if p b then yes a b else -1
else if p b then 1 else no a b;
/* Reads a JSON file. */
/* Reads a JSON file.
Type: importJSON :: path -> any
*/
importJSON = path:
builtins.fromJSON (builtins.readFile path);
## Warnings
/* See https://github.com/NixOS/nix/issues/749. Eventually we'd like these
to expand to Nix builtins that carry metadata so that Nix can filter out
the INFO messages without parsing the message string.
# See https://github.com/NixOS/nix/issues/749. Eventually we'd like these
# to expand to Nix builtins that carry metadata so that Nix can filter out
# the INFO messages without parsing the message string.
#
# Usage:
# {
# foo = lib.warn "foo is deprecated" oldFoo;
# }
#
# TODO: figure out a clever way to integrate location information from
# something like __unsafeGetAttrPos.
Usage:
{
foo = lib.warn "foo is deprecated" oldFoo;
}
TODO: figure out a clever way to integrate location information from
something like __unsafeGetAttrPos.
*/
warn = msg: builtins.trace "WARNING: ${msg}";
info = msg: builtins.trace "INFO: ${msg}";

View File

@ -143,6 +143,11 @@
github = "ahmedtd";
name = "Taahir Ahmed";
};
ahuzik = {
email = "ales.guzik@gmail.com";
github = "alesguzik";
name = "Ales Huzik";
};
aij = {
email = "aij+git@mrph.org";
github = "aij";
@ -401,6 +406,11 @@
github = "AveryLychee";
name = "Avery Lychee";
};
averelld = {
email = "averell+nixos@rxd4.com";
github = "averelld";
name = "averelld";
};
avnik = {
email = "avn@avnik.info";
github = "avnik";
@ -495,6 +505,11 @@
github = "bennofs";
name = "Benno Fünfstück";
};
benpye = {
email = "ben@curlybracket.co.uk";
github = "benpye";
name = "Ben Pye";
};
benwbooth = {
email = "benwbooth@gmail.com";
github = "benwbooth";
@ -2967,6 +2982,11 @@
github = "nequissimus";
name = "Tim Steinbach";
};
nikitavoloboev = {
email = "nikita.voloboev@gmail.com";
github = "nikitavoloboev";
name = "Nikita Voloboev";
};
nfjinjing = {
email = "nfjinjing@gmail.com";
github = "nfjinjing";
@ -3641,6 +3661,11 @@
github = "roosemberth";
name = "Roosembert (Roosemberth) Palacios";
};
royneary = {
email = "christian@ulrich.earth";
github = "royneary";
name = "Christian Ulrich";
};
rprospero = {
email = "rprospero+nix@gmail.com";
github = "rprospero";
@ -3890,6 +3915,11 @@
github = "sjagoe";
name = "Simon Jagoe";
};
sjau = {
email = "nixos@sjau.ch";
github = "sjau";
name = "Stephan Jau";
};
sjmackenzie = {
email = "setori88@gmail.com";
github = "sjmackenzie";
@ -4138,6 +4168,11 @@
github = "taku0";
name = "Takuo Yonezawa";
};
talyz = {
email = "kim.lindberger@gmail.com";
github = "talyz";
name = "Kim Lindberger";
};
tari = {
email = "peter@taricorp.net";
github = "tari";
@ -4613,6 +4648,11 @@
github = "wucke13";
name = "Wucke";
};
wykurz = {
email = "wykurz@gmail.com";
github = "wykurz";
name = "Mateusz Wykurz";
};
wyvie = {
email = "elijahrum@gmail.com";
github = "wyvie";

View File

@ -15,7 +15,7 @@ containers.database =
{ config =
{ config, pkgs, ... }:
{ <xref linkend="opt-services.postgresql.enable"/> = true;
<xref linkend="opt-services.postgresql.package"/> = pkgs.postgresql96;
<xref linkend="opt-services.postgresql.package"/> = pkgs.postgresql_9_6;
};
};
</programlisting>

View File

@ -197,10 +197,10 @@ swapDevices = [ { device = "/dev/disk/by-label/swap"; } ];
pkgs.emacs
];
<xref linkend="opt-services.postgresql.package"/> = pkgs.postgresql90;
<xref linkend="opt-services.postgresql.package"/> = pkgs.postgresql_10;
</programlisting>
The latter option definition changes the default PostgreSQL package used
by NixOS's PostgreSQL service to 9.0. For more information on packages,
by NixOS's PostgreSQL service to 10.x. For more information on packages,
including how to add new ones, see <xref linkend="sec-custom-packages"/>.
</para>
</listitem>

View File

@ -34,13 +34,4 @@
Similarly, UDP port ranges can be opened through
<xref linkend="opt-networking.firewall.allowedUDPPortRanges"/>.
</para>
<para>
Also of interest is
<programlisting>
<xref linkend="opt-networking.firewall.allowPing"/> = true;
</programlisting>
to allow the machine to respond to ping requests. (ICMPv6 pings are always
allowed.)
</para>
</section>

View File

@ -637,6 +637,11 @@ $ nix-instantiate -E '(import &lt;nixpkgsunstable&gt; {}).gitFull'
anyways for clarity.
</para>
</listitem>
<listitem>
<para>
Groups <literal>kvm</literal> and <literal>render</literal> are now introduced, as systemd requires them.
</para>
</listitem>
</itemizedlist>
</section>

View File

@ -97,6 +97,16 @@
start org.nixos.nix-daemon</command>.
</para>
</listitem>
<listitem>
<para>
The Syncthing state and configuration data have been moved from
<varname>services.syncthing.dataDir</varname> to the newly defined
<varname>services.syncthing.configDir</varname>, which defaults to
<literal>/var/lib/syncthing/.config/syncthing</literal>.
This change makes it possible to share synced directories using ACLs
without Syncthing resetting the permissions on every start.
</para>
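<para>
  As an illustration, the new option can also be set explicitly; the value
  shown here is simply its stated default:
<programlisting>
services.syncthing.configDir = "/var/lib/syncthing/.config/syncthing";
</programlisting>
</para>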
</listitem>
</itemizedlist>
</listitem>
<listitem>
@ -137,6 +147,56 @@
make sure to update your configuration if you want to keep <literal>proglodyte-wasm</literal>
</para>
</listitem>
<listitem>
<para>
OpenSMTPD has been upgraded to version 6.4.0p1. This release makes
backwards-incompatible changes to the configuration file format. See
<command>man smtpd.conf</command> for more information on the new file
format.
</para>
</listitem>
<listitem>
<para>
The versioned <varname>postgresql</varname> packages have been renamed to
use underscore number separators. For example, <varname>postgresql96</varname>
has been renamed to <varname>postgresql_9_6</varname>.
</para>
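<para>
  For instance, a configuration that previously referenced
  <varname>postgresql96</varname> would now read:
<programlisting>
<xref linkend="opt-services.postgresql.package"/> = pkgs.postgresql_9_6;
</programlisting>
</para>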
</listitem>
<listitem>
<para>
Package <literal>consul-ui</literal> and passthrough <literal>consul.ui</literal> have been removed.
The package <literal>consul</literal> now uses upstream releases that vendor the UI into the binary.
See <link xlink:href="https://github.com/NixOS/nixpkgs/pull/48714#issuecomment-433454834">#48714</link>
for details.
</para>
</listitem>
<listitem>
<para>
Slurm introduces the new option
<literal>services.slurm.stateSaveLocation</literal>,
which is now set to <literal>/var/spool/slurm</literal> by default
(instead of <literal>/var/spool</literal>).
Make sure to move all files to the new directory or to set the option accordingly.
</para>
<para>
The slurmctld now runs as user <literal>slurm</literal> instead of <literal>root</literal>.
If you want to keep slurmctld running as <literal>root</literal>, set
<literal>services.slurm.user = "root"</literal>.
</para>
<para>
The options <literal>services.slurm.nodeName</literal> and
<literal>services.slurm.partitionName</literal> are now lists of
strings to correctly reflect the fact that each of these
options can occur more than once in the configuration.
</para>
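<para>
  A minimal sketch of the new list-valued options, reusing the example
  values from the module documentation:
<programlisting>
services.slurm.nodeName = [ "linux[1-32] CPUs=1 State=UNKNOWN" ];
services.slurm.partitionName = [ "debug Nodes=linux[1-32] Default=YES MaxTime=INFINITE State=UP" ];
</programlisting>
</para>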
</listitem>
<listitem>
<para>
The <literal>solr</literal> package has been upgraded from 4.10.3 to 7.5.0 and has undergone
some major changes. The <literal>services.solr</literal> module has been updated to reflect
these changes. Please review <link xlink:href="http://lucene.apache.org/solr/">the Solr documentation</link> carefully before upgrading.
</para>
</listitem>
<listitem>
<para>
Package <literal>ckb</literal> is renamed to <literal>ckb-next</literal>,
@ -162,6 +222,15 @@
Matomo version.
</para>
</listitem>
<listitem>
<para>
The deprecated <literal>truecrypt</literal> package has been removed
and the <literal>truecrypt</literal> attribute is now an alias for
<literal>veracrypt</literal>. VeraCrypt is backward-compatible with
TrueCrypt volumes. Note that <literal>cryptsetup</literal> also
supports loading TrueCrypt volumes.
</para>
</listitem>
</itemizedlist>
</section>
</section>

View File

@ -53,7 +53,8 @@ in rec {
inherit prefix check;
modules = modules ++ extraModules ++ baseModules ++ [ pkgsModule ];
args = extraArgs;
specialArgs = { modulesPath = ../modules; } // specialArgs;
specialArgs =
{ modulesPath = builtins.toString ../modules; } // specialArgs;
}) config options;
# These are the extra arguments passed to every module. In

View File

@ -1,3 +1,7 @@
/* Build a channel tarball. These contain, in addition to the nixpkgs
* expressions themselves, files that indicate the version of nixpkgs
* that they represent.
*/
{ pkgs, nixpkgs, version, versionSuffix }:
pkgs.releaseTools.makeSourceTarball {

View File

@ -16,6 +16,13 @@ let
resolvconfOptions = cfg.resolvconfOptions
++ optional cfg.dnsSingleRequest "single-request"
++ optional cfg.dnsExtensionMechanism "edns0";
localhostMapped4 = cfg.hosts ? "127.0.0.1" && elem "localhost" cfg.hosts."127.0.0.1";
localhostMapped6 = cfg.hosts ? "::1" && elem "localhost" cfg.hosts."::1";
localhostMultiple = any (elem "localhost") (attrValues (removeAttrs cfg.hosts [ "127.0.0.1" "::1" ]));
in
{
@ -23,8 +30,7 @@ in
options = {
networking.hosts = lib.mkOption {
type = types.attrsOf ( types.listOf types.str );
default = {};
type = types.attrsOf (types.listOf types.str);
example = literalExample ''
{
"127.0.0.1" = [ "foo.bar.baz" ];
@ -192,6 +198,29 @@ in
config = {
assertions = [{
assertion = localhostMapped4;
message = ''`networking.hosts` doesn't map "127.0.0.1" to "localhost"'';
} {
assertion = !cfg.enableIPv6 || localhostMapped6;
message = ''`networking.hosts` doesn't map "::1" to "localhost"'';
} {
assertion = !localhostMultiple;
message = ''
`networking.hosts` maps "localhost" to something other than "127.0.0.1"
or "::1". This will break some applications. Please use
`networking.extraHosts` if you really want to add such a mapping.
'';
}];
networking.hosts = {
"127.0.0.1" = [ "localhost" ];
} // optionalAttrs (cfg.hostName != "") {
"127.0.1.1" = [ cfg.hostName ];
} // optionalAttrs cfg.enableIPv6 {
"::1" = [ "localhost" ];
};
environment.etc =
{ # /etc/services: TCP/UDP port assignments.
"services".source = pkgs.iana-etc + "/etc/services";
@ -199,29 +228,14 @@ in
# /etc/protocols: IP protocol numbers.
"protocols".source = pkgs.iana-etc + "/etc/protocols";
# /etc/rpc: RPC program numbers.
"rpc".source = pkgs.glibc.out + "/etc/rpc";
# /etc/hosts: Hostname-to-IP mappings.
"hosts".text =
let oneToString = set : ip : ip + " " + concatStringsSep " " ( getAttr ip set );
allToString = set : concatMapStringsSep "\n" ( oneToString set ) ( attrNames set );
userLocalHosts = optionalString
( builtins.hasAttr "127.0.0.1" cfg.hosts )
( concatStringsSep " " ( remove "localhost" cfg.hosts."127.0.0.1" ));
userLocalHosts6 = optionalString
( builtins.hasAttr "::1" cfg.hosts )
( concatStringsSep " " ( remove "localhost" cfg.hosts."::1" ));
otherHosts = allToString ( removeAttrs cfg.hosts [ "127.0.0.1" "::1" ]);
in
''
127.0.0.1 ${userLocalHosts} localhost
${optionalString cfg.enableIPv6 ''
::1 ${userLocalHosts6} localhost
''}
${otherHosts}
${cfg.extraHosts}
'';
"hosts".text = let
oneToString = set: ip: ip + " " + concatStringsSep " " set.${ip};
allToString = set: concatMapStringsSep "\n" (oneToString set) (attrNames set);
in ''
${allToString cfg.hosts}
${cfg.extraHosts}
'';
# /etc/host.conf: resolver configuration file
"host.conf".text = cfg.hostConf;
@ -251,6 +265,9 @@ in
"resolv.conf".source = "${pkgs.systemd}/lib/systemd/resolv.conf";
} // optionalAttrs (config.services.resolved.enable && dnsmasqResolve) {
"dnsmasq-resolv.conf".source = "/run/systemd/resolve/resolv.conf";
} // optionalAttrs (pkgs.stdenv.hostPlatform.libc == "glibc") {
# /etc/rpc: RPC program numbers.
"rpc".source = pkgs.glibc.out + "/etc/rpc";
};
networking.proxy.envVars =
@ -296,4 +313,4 @@ in
};
}
}

View File

@ -1,4 +1,4 @@
{ config, lib, pkgs, pkgs_i686, ... }:
{ config, lib, pkgs, ... }:
with pkgs;
with lib;
@ -19,7 +19,7 @@ let
# Forces 32bit pulseaudio and alsaPlugins to be built/supported for apps
# using 32bit alsa on 64bit linux.
enable32BitAlsaPlugins = cfg.support32Bit && stdenv.isx86_64 && (pkgs_i686.alsaLib != null && pkgs_i686.libpulseaudio != null);
enable32BitAlsaPlugins = cfg.support32Bit && stdenv.isx86_64 && (pkgs.pkgsi686Linux.alsaLib != null && pkgs.pkgsi686Linux.libpulseaudio != null);
myConfigFile =
@ -63,7 +63,7 @@ let
pcm_type.pulse {
libs.native = ${pkgs.alsaPlugins}/lib/alsa-lib/libasound_module_pcm_pulse.so ;
${lib.optionalString enable32BitAlsaPlugins
"libs.32Bit = ${pkgs_i686.alsaPlugins}/lib/alsa-lib/libasound_module_pcm_pulse.so ;"}
"libs.32Bit = ${pkgs.pkgsi686Linux.alsaPlugins}/lib/alsa-lib/libasound_module_pcm_pulse.so ;"}
}
pcm.!default {
type pulse
@ -72,7 +72,7 @@ let
ctl_type.pulse {
libs.native = ${pkgs.alsaPlugins}/lib/alsa-lib/libasound_module_ctl_pulse.so ;
${lib.optionalString enable32BitAlsaPlugins
"libs.32Bit = ${pkgs_i686.alsaPlugins}/lib/alsa-lib/libasound_module_ctl_pulse.so ;"}
"libs.32Bit = ${pkgs.pkgsi686Linux.alsaPlugins}/lib/alsa-lib/libasound_module_ctl_pulse.so ;"}
}
ctl.!default {
type pulse

View File

@ -19,7 +19,9 @@ let
pkgs.diffutils
pkgs.findutils
pkgs.gawk
pkgs.glibc # for ldd, getent
pkgs.stdenv.cc.libc
pkgs.getent
pkgs.getconf
pkgs.gnugrep
pkgs.gnupatch
pkgs.gnused

View File

@ -266,7 +266,7 @@ let
(mkIf config.isNormalUser {
group = mkDefault "users";
createHome = mkDefault true;
home = mkDefault "/home/${name}";
home = mkDefault "/home/${config.name}";
useDefaultShell = mkDefault true;
isSystemUser = mkDefault false;
})

View File

@ -1,4 +1,4 @@
{ config, lib, pkgs, pkgs_i686, ... }:
{ config, lib, pkgs, ... }:
with lib;
@ -148,7 +148,7 @@ in
[ "/run/opengl-driver/share" ] ++ optional cfg.driSupport32Bit "/run/opengl-driver-32/share";
hardware.opengl.package = mkDefault (makePackage pkgs);
hardware.opengl.package32 = mkDefault (makePackage pkgs_i686);
hardware.opengl.package32 = mkDefault (makePackage pkgs.pkgsi686Linux);
boot.extraModulePackages = optional (elem "virtualbox" videoDrivers) kernelPackages.virtualboxGuestAdditions;
};

View File

@ -1,6 +1,6 @@
# This module provides the proprietary AMDGPU-PRO drivers.
{ config, lib, pkgs, pkgs_i686, ... }:
{ config, lib, pkgs, ... }:
with lib;
@ -11,7 +11,7 @@ let
enabled = elem "amdgpu-pro" drivers;
package = config.boot.kernelPackages.amdgpu-pro;
package32 = pkgs_i686.linuxPackages.amdgpu-pro.override { libsOnly = true; kernel = null; };
package32 = pkgs.pkgsi686Linux.linuxPackages.amdgpu-pro.override { libsOnly = true; kernel = null; };
opengl = config.hardware.opengl;

View File

@ -1,6 +1,6 @@
# This module provides the proprietary ATI X11 / OpenGL drivers.
{ config, lib, pkgs_i686, ... }:
{ config, lib, pkgs, ... }:
with lib;
@ -24,7 +24,7 @@ in
{ name = "fglrx"; modules = [ ati_x11 ]; libPath = [ "${ati_x11}/lib" ]; };
hardware.opengl.package = ati_x11;
hardware.opengl.package32 = pkgs_i686.linuxPackages.ati_drivers_x11.override { libsOnly = true; kernel = null; };
hardware.opengl.package32 = pkgs.pkgsi686Linux.linuxPackages.ati_drivers_x11.override { libsOnly = true; kernel = null; };
environment.systemPackages = [ ati_x11 ];

View File

@ -1,6 +1,6 @@
# This module provides the proprietary NVIDIA X11 / OpenGL drivers.
{ stdenv, config, lib, pkgs, pkgs_i686, ... }:
{ stdenv, config, lib, pkgs, ... }:
with lib;
@ -25,7 +25,7 @@ let
nvidia_x11 = nvidiaForKernel config.boot.kernelPackages;
nvidia_libs32 =
if versionOlder nvidia_x11.version "391" then
((nvidiaForKernel pkgs_i686.linuxPackages).override { libsOnly = true; kernel = null; }).out
((nvidiaForKernel pkgs.pkgsi686Linux.linuxPackages).override { libsOnly = true; kernel = null; }).out
else
(nvidiaForKernel config.boot.kernelPackages).lib32;

View File

@ -7,4 +7,6 @@
imports =
[ ./installation-cd-base.nix
];
fonts.fontconfig.enable = false;
}

View File

@ -22,4 +22,42 @@ with lib;
powerManagement.enable = false;
system.stateVersion = mkDefault "18.03";
installer.cloneConfigExtra = ''
# Let demo build as a trusted user.
# nix.trustedUsers = [ "demo" ];
# Mount a VirtualBox shared folder.
# This is configurable in the VirtualBox menu at
# Machine / Settings / Shared Folders.
# fileSystems."/mnt" = {
# fsType = "vboxsf";
# device = "nameofdevicetomount";
# options = [ "rw" ];
# };
# By default, the NixOS VirtualBox demo image includes SDDM and Plasma.
# If you prefer another desktop manager or display manager, you may want
# to disable the default.
# services.xserver.desktopManager.plasma5.enable = lib.mkForce false;
# services.xserver.displayManager.sddm.enable = lib.mkForce false;
# Enable GDM/GNOME by uncommenting the two lines above and the two lines below.
# services.xserver.displayManager.gdm.enable = true;
# services.xserver.desktopManager.gnome3.enable = true;
# Set your time zone.
# time.timeZone = "Europe/Amsterdam";
# List packages installed in system profile. To search, run:
# \$ nix search wget
# environment.systemPackages = with pkgs; [
# wget vim
# ];
# Enable the OpenSSH daemon.
# services.openssh.enable = true;
system.stateVersion = mkDefault "18.03";
'';
}

View File

@ -331,6 +331,9 @@
zeronet = 304;
lirc = 305;
lidarr = 306;
slurm = 307;
kapacitor = 308;
solr = 309;
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
@ -385,7 +388,7 @@
virtuoso = 44;
#rtkit = 45; # unused
dovecot2 = 46;
#dovenull = 47; # unused
dovenull2 = 47;
prayer = 49;
mpd = 50;
clamav = 51;
@ -622,6 +625,9 @@
zeronet = 304;
lirc = 305;
lidarr = 306;
slurm = 307;
kapacitor = 308;
solr = 309;
# When adding a gid, make sure it doesn't match an existing
# uid. Users and groups with the same name should have equal

View File

@ -208,7 +208,6 @@ in
config = {
_module.args = {
pkgs = cfg.pkgs;
pkgs_i686 = cfg.pkgs.pkgsi686Linux;
};
};
}

View File

@ -108,7 +108,6 @@
./programs/oblogout.nix
./programs/plotinus.nix
./programs/qt5ct.nix
./programs/rootston.nix
./programs/screen.nix
./programs/sedutil.nix
./programs/slock.nix
@ -121,11 +120,13 @@
./programs/sysdig.nix
./programs/systemtap.nix
./programs/sway.nix
./programs/sway-beta.nix
./programs/thefuck.nix
./programs/tmux.nix
./programs/udevil.nix
./programs/venus.nix
./programs/vim.nix
./programs/wavemon.nix
./programs/way-cooler.nix
./programs/wireshark.nix
./programs/xfs_quota.nix
@ -234,6 +235,7 @@
./services/desktops/dleyna-server.nix
./services/desktops/flatpak.nix
./services/desktops/geoclue2.nix
./services/desktops/gsignond.nix
./services/desktops/pipewire.nix
./services/desktops/gnome3/at-spi2-core.nix
./services/desktops/gnome3/chrome-gnome-shell.nix
@ -431,6 +433,7 @@
./services/monitoring/hdaps.nix
./services/monitoring/heapster.nix
./services/monitoring/incron.nix
./services/monitoring/kapacitor.nix
./services/monitoring/longview.nix
./services/monitoring/monit.nix
./services/monitoring/munin.nix
@ -504,6 +507,7 @@
./services/networking/dnsmasq.nix
./services/networking/ejabberd.nix
./services/networking/epmd.nix
./services/networking/eternal-terminal.nix
./services/networking/fakeroute.nix
./services/networking/ferm.nix
./services/networking/firefox/sync-server.nix

View File

@ -7,7 +7,7 @@
# Include some utilities that are useful for installing or repairing
# the system.
environment.systemPackages = [
pkgs.w3m-nox # needed for the manual anyway
pkgs.w3m-nographics # needed for the manual anyway
pkgs.testdisk # useful for repairing boot problems
pkgs.ms-sys # for writing Microsoft boot sectors / MBRs
pkgs.efibootmgr
@ -19,6 +19,9 @@
pkgs.cryptsetup # needed for dm-crypt volumes
pkgs.mkpasswd # for generating password files
# Some text editors.
pkgs.vim
# Some networking tools.
pkgs.fuse
pkgs.fuse3

View File

@ -48,6 +48,8 @@ let
{
imports = [ ${toString config.installer.cloneConfigIncludes} ];
${config.installer.cloneConfigExtra}
}
'';
@ -73,6 +75,13 @@ in
'';
};
installer.cloneConfigExtra = mkOption {
default = "";
description = ''
Extra text to include in the cloned configuration.nix shipped with this
installer.
'';
};
};
config = {

View File

@ -63,7 +63,7 @@ with lib;
# Tell the Nix evaluator to garbage collect more aggressively.
# This is desirable in memory-constrained environments that don't
# (yet) have swap set up.
environment.variables.GC_INITIAL_HEAP_SIZE = "100000";
environment.variables.GC_INITIAL_HEAP_SIZE = "1M";
# Make the installer more likely to succeed in low memory
# environments. The kernel's overcommit heuristics bite us
@ -87,9 +87,6 @@ with lib;
# console less cumbersome if the machine has a public IP.
networking.firewall.logRefusedConnections = mkDefault false;
environment.systemPackages = [ pkgs.vim ];
# Allow the user to log in as root without a password.
users.users.root.initialHashedPassword = "";
};

View File

@ -16,7 +16,7 @@ let
# programmable completion. If we do, enable all modules installed in
# the system and user profile in obsolete /etc/bash_completion.d/
# directories. Bash loads completions in all
# $XDG_DATA_DIRS/share/bash-completion/completions/
# $XDG_DATA_DIRS/bash-completion/completions/
# on demand, so they do not need to be sourced here.
if shopt -q progcomp &>/dev/null; then
. "${pkgs.bash-completion}/etc/profile.d/bash_completion.sh"

View File

@ -1,103 +0,0 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.programs.rootston;
rootstonWrapped = pkgs.writeScriptBin "rootston" ''
#! ${pkgs.runtimeShell}
if [[ "$#" -ge 1 ]]; then
exec ${pkgs.rootston}/bin/rootston "$@"
else
${cfg.extraSessionCommands}
exec ${pkgs.rootston}/bin/rootston -C ${cfg.configFile}
fi
'';
in {
options.programs.rootston = {
enable = mkEnableOption ''
rootston, the reference compositor for wlroots. The purpose of rootston
is to test and demonstrate the features of wlroots (if you want a real
Wayland compositor you should e.g. use Sway instead). You can manually
start the compositor by running "rootston" from a terminal'';
extraSessionCommands = mkOption {
type = types.lines;
default = "";
example = ''
# Define a keymap (US QWERTY is the default)
export XKB_DEFAULT_LAYOUT=de,us
export XKB_DEFAULT_VARIANT=nodeadkeys
export XKB_DEFAULT_OPTIONS=grp:alt_shift_toggle,caps:escape
'';
description = ''
Shell commands executed just before rootston is started.
'';
};
extraPackages = mkOption {
type = with types; listOf package;
default = with pkgs; [
westonLite xwayland rofi
];
defaultText = literalExample ''
with pkgs; [
westonLite xwayland rofi
]
'';
example = literalExample "[ ]";
description = ''
Extra packages to be installed system wide.
'';
};
config = mkOption {
type = types.str;
default = ''
[keyboard]
meta-key = Logo
# Sway/i3 like Keybindings
# Maps key combinations with commands to execute
# Commands include:
# - "exit" to stop the compositor
# - "exec" to execute a shell command
# - "close" to close the current view
# - "next_window" to cycle through windows
[bindings]
Logo+Shift+e = exit
Logo+q = close
Logo+m = maximize
Alt+Tab = next_window
Logo+Return = exec weston-terminal
Logo+d = exec rofi -show run
'';
description = ''
Default configuration for rootston (used when called without any
parameters).
'';
};
configFile = mkOption {
type = types.path;
default = "/etc/rootston.ini";
example = literalExample "${pkgs.rootston}/etc/rootston.ini";
description = ''
Path to the default rootston configuration file (the "config" option
will have no effect if you change the path).
'';
};
};
config = mkIf cfg.enable {
environment.etc."rootston.ini".text = cfg.config;
environment.systemPackages = [ rootstonWrapped ] ++ cfg.extraPackages;
hardware.opengl.enable = mkDefault true;
fonts.enableDefaultFonts = mkDefault true;
programs.dconf.enable = mkDefault true;
};
meta.maintainers = with lib.maintainers; [ primeos gnidorah ];
}

View File

@ -13,7 +13,7 @@ with lib;
# Set up the per-user profile.
mkdir -m 0755 -p "$NIX_USER_PROFILE_DIR"
if [ "$(stat --printf '%u' "$NIX_USER_PROFILE_DIR")" != "$(id -u)" ]; then
echo "WARNING: bad ownership on $NIX_USER_PROFILE_DIR, should be $(id -u)" >&2
echo "WARNING: the per-user profile dir $NIX_USER_PROFILE_DIR should belong to user id $(id -u)" >&2
fi
if [ -w "$HOME" ]; then
@ -35,7 +35,7 @@ with lib;
NIX_USER_GCROOTS_DIR="/nix/var/nix/gcroots/per-user/$USER"
mkdir -m 0755 -p "$NIX_USER_GCROOTS_DIR"
if [ "$(stat --printf '%u' "$NIX_USER_GCROOTS_DIR")" != "$(id -u)" ]; then
echo "WARNING: bad ownership on $NIX_USER_GCROOTS_DIR, should be $(id -u)" >&2
echo "WARNING: the per-user gcroots dir $NIX_USER_GCROOTS_DIR should belong to user id $(id -u)" >&2
fi
# Set up a default Nix expression from which to install stuff.

View File

@ -0,0 +1,79 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.programs.sway-beta;
swayPackage = cfg.package;
swayWrapped = pkgs.writeShellScriptBin "sway" ''
${cfg.extraSessionCommands}
exec ${pkgs.dbus.dbus-launch} --exit-with-session ${swayPackage}/bin/sway
'';
swayJoined = pkgs.symlinkJoin {
name = "sway-joined";
paths = [ swayWrapped swayPackage ];
};
in {
options.programs.sway-beta = {
enable = mkEnableOption ''
Sway, the i3-compatible tiling Wayland compositor. This module will be removed after the final release of Sway 1.0
'';
package = mkOption {
type = types.package;
default = pkgs.sway-beta;
defaultText = "pkgs.sway-beta";
description = ''
The package to be used for `sway`.
'';
};
extraSessionCommands = mkOption {
type = types.lines;
default = "";
example = ''
export SDL_VIDEODRIVER=wayland
# needs qt5.qtwayland in systemPackages
export QT_QPA_PLATFORM=wayland
export QT_WAYLAND_DISABLE_WINDOWDECORATION="1"
# Fix for some Java AWT applications (e.g. Android Studio),
# use this if they aren't displayed properly:
export _JAVA_AWT_WM_NONREPARENTING=1
'';
description = ''
Shell commands executed just before Sway is started.
'';
};
extraPackages = mkOption {
type = with types; listOf package;
default = with pkgs; [
xwayland rxvt_unicode dmenu
];
defaultText = literalExample ''
with pkgs; [ xwayland rxvt_unicode dmenu ];
'';
example = literalExample ''
with pkgs; [
xwayland
i3status i3status-rust
termite rofi light
]
'';
description = ''
Extra packages to be installed system wide.
'';
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ swayJoined ] ++ cfg.extraPackages;
security.pam.services.swaylock = {};
hardware.opengl.enable = mkDefault true;
fonts.enableDefaultFonts = mkDefault true;
programs.dconf.enable = mkDefault true;
};
meta.maintainers = with lib.maintainers; [ gnidorah primeos colemickens ];
}

View File

@ -0,0 +1,28 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.programs.wavemon;
in {
options = {
programs.wavemon = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to add wavemon to the global environment and configure a
setcap wrapper for it.
'';
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = with pkgs; [ wavemon ];
security.wrappers.wavemon = {
source = "${pkgs.wavemon}/bin/wavemon";
capabilities = "cap_net_admin+ep";
};
};
}

View File

@ -28,7 +28,10 @@ with lib;
(config:
let enabled = getAttrFromPath [ "services" "printing" "gutenprint" ] config;
in if enabled then [ pkgs.gutenprint ] else [ ]))
(mkRenamedOptionModule [ "services" "ddclient" "domain" ] [ "services" "ddclient" "domains" ])
(mkChangedOptionModule [ "services" "ddclient" "domain" ] [ "services" "ddclient" "domains" ]
(config:
let value = getAttrFromPath [ "services" "ddclient" "domain" ] config;
in if value != "" then [ value ] else []))
(mkRemovedOptionModule [ "services" "ddclient" "homeDir" ] "")
(mkRenamedOptionModule [ "services" "elasticsearch" "host" ] [ "services" "elasticsearch" "listenAddress" ])
(mkRenamedOptionModule [ "services" "graphite" "api" "host" ] [ "services" "graphite" "api" "listenAddress" ])

View File

@ -28,7 +28,7 @@ with lib;
capability setuid,
network inet raw,
${pkgs.glibc.out}/lib/*.so mr,
${pkgs.stdenv.cc.libc.out}/lib/*.so mr,
${pkgs.libcap.lib}/lib/libcap.so* mr,
${pkgs.attr.out}/lib/libattr.so* mr,

View File

@ -170,4 +170,6 @@ in {
'';
}) cfg.params;
};
meta.maintainers = with lib.maintainers; [ ekleog ];
}

View File

@ -20,7 +20,6 @@ with lib;
KERNEL=="random", TAG+="systemd"
SUBSYSTEM=="cpu", ENV{MODALIAS}=="cpu:type:x86,*feature:*009E*", TAG+="systemd", ENV{SYSTEMD_WANTS}+="rngd.service"
KERNEL=="hw_random", TAG+="systemd", ENV{SYSTEMD_WANTS}+="rngd.service"
${if config.services.tcsd.enable then "" else ''KERNEL=="tpm0", TAG+="systemd", ENV{SYSTEMD_WANTS}+="rngd.service"''}
'';
systemd.services.rngd = {
@ -30,8 +29,7 @@ with lib;
description = "Hardware RNG Entropy Gatherer Daemon";
serviceConfig.ExecStart = "${pkgs.rng_tools}/sbin/rngd -f -v" +
(if config.services.tcsd.enable then " --no-tpm=1" else "");
serviceConfig.ExecStart = "${pkgs.rng-tools}/sbin/rngd -f";
};
};
}

View File

@ -53,6 +53,9 @@ in
Type = "notify";
NotifyAccess = "all";
};
restartTriggers = [
config.environment.etc."salt/master".source
];
};
};

View File

@ -15,7 +15,6 @@ let
# Default is in /etc/salt/pki/minion
pki_dir = "/var/lib/salt/pki/minion";
} cfg.configuration;
configDir = pkgs.writeTextDir "minion" (builtins.toJSON fullConfig);
in
@ -28,15 +27,24 @@ in
default = {};
description = ''
Salt minion configuration as Nix attribute set.
See <link xlink:href="https://docs.saltstack.com/en/latest/ref/configuration/minion.html"/>
for details.
See <link xlink:href="https://docs.saltstack.com/en/latest/ref/configuration/minion.html"/>
for details.
'';
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = with pkgs; [ salt ];
environment = {
# Set this up in /etc/salt/minion so `salt-call`, etc. work.
# The alternatives are
# - passing --config-dir to all salt commands, not just the minion unit,
# - setting a global environment variable.
etc."salt/minion".source = pkgs.writeText "minion" (
builtins.toJSON fullConfig
);
systemPackages = with pkgs; [ salt ];
};
systemd.services.salt-minion = {
description = "Salt Minion";
wantedBy = [ "multi-user.target" ];
@ -45,11 +53,14 @@ in
utillinux
];
serviceConfig = {
ExecStart = "${pkgs.salt}/bin/salt-minion --config-dir=${configDir}";
ExecStart = "${pkgs.salt}/bin/salt-minion";
LimitNOFILE = 8192;
Type = "notify";
NotifyAccess = "all";
};
restartTriggers = [
config.environment.etc."salt/minion".source
];
};
};
}

View File

@ -6,13 +6,18 @@ let
cfg = config.services.slurm;
# configuration file can be generated by http://slurm.schedmd.com/configurator.html
defaultUser = "slurm";
configFile = pkgs.writeTextDir "slurm.conf"
''
ClusterName=${cfg.clusterName}
StateSaveLocation=${cfg.stateSaveLocation}
SlurmUser=${cfg.user}
${optionalString (cfg.controlMachine != null) ''controlMachine=${cfg.controlMachine}''}
${optionalString (cfg.controlAddr != null) ''controlAddr=${cfg.controlAddr}''}
${optionalString (cfg.nodeName != null) ''nodeName=${cfg.nodeName}''}
${optionalString (cfg.partitionName != null) ''partitionName=${cfg.partitionName}''}
${toString (map (x: "NodeName=${x}\n") cfg.nodeName)}
${toString (map (x: "PartitionName=${x}\n") cfg.partitionName)}
PlugStackConfig=${plugStackConfig}
ProctrackType=${cfg.procTrackType}
${cfg.extraConfig}
@ -24,12 +29,19 @@ let
${cfg.extraPlugstackConfig}
'';
cgroupConfig = pkgs.writeTextDir "cgroup.conf"
''
${cfg.extraCgroupConfig}
'';
slurmdbdConf = pkgs.writeTextDir "slurmdbd.conf"
''
DbdHost=${cfg.dbdserver.dbdHost}
SlurmUser=${cfg.user}
StorageType=accounting_storage/mysql
${cfg.dbdserver.extraConfig}
'';
# slurm expects some additional config files to be
# in the same directory as slurm.conf
etcSlurm = pkgs.symlinkJoin {
@ -43,6 +55,8 @@ in
###### interface
meta.maintainers = [ maintainers.markuskowa ];
options = {
services.slurm = {
@ -60,6 +74,27 @@ in
};
};
dbdserver = {
enable = mkEnableOption "SlurmDBD service";
dbdHost = mkOption {
type = types.str;
default = config.networking.hostName;
description = ''
Hostname of the machine where <literal>slurmdbd</literal>
is running (i.e. name returned by <literal>hostname -s</literal>).
'';
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = ''
Extra configuration for <literal>slurmdbd.conf</literal>
'';
};
};
client = {
enable = mkEnableOption "slurm client daemon";
};
@ -116,9 +151,9 @@ in
};
nodeName = mkOption {
type = types.nullOr types.str;
default = null;
example = "linux[1-32] CPUs=1 State=UNKNOWN";
type = types.listOf types.str;
default = [];
example = literalExample ''[ "linux[1-32] CPUs=1 State=UNKNOWN" ];'';
description = ''
Name that SLURM uses to refer to a node (or base partition for BlueGene
systems). Typically this would be the string that "/bin/hostname -s"
@ -127,9 +162,9 @@ in
};
partitionName = mkOption {
type = types.nullOr types.str;
default = null;
example = "debug Nodes=linux[1-32] Default=YES MaxTime=INFINITE State=UP";
type = types.listOf types.str;
default = [];
example = literalExample ''[ "debug Nodes=linux[1-32] Default=YES MaxTime=INFINITE State=UP" ];'';
description = ''
Name by which the partition may be referenced. Note that now you have
to write the partition's parameters after the name.
@ -150,7 +185,7 @@ in
};
procTrackType = mkOption {
type = types.string;
type = types.str;
default = "proctrack/linuxproc";
description = ''
Plugin to be used for process tracking on a job step basis.
@ -159,6 +194,25 @@ in
'';
};
stateSaveLocation = mkOption {
type = types.str;
default = "/var/spool/slurmctld";
description = ''
Directory into which the Slurm controller, slurmctld, saves its state.
'';
};
user = mkOption {
type = types.str;
default = defaultUser;
description = ''
Set this option when you want to run the slurmctld daemon
as a user other than the default slurm user "slurm".
Note that the UID of this user needs to be the same
on all nodes.
'';
};
extraConfig = mkOption {
default = "";
type = types.lines;
@ -184,6 +238,8 @@ in
used when <literal>procTrackType=proctrack/cgroup</literal>.
'';
};
};
};
@ -220,12 +276,24 @@ in
'';
};
in mkIf (cfg.enableStools || cfg.client.enable || cfg.server.enable) {
in mkIf ( cfg.enableStools ||
cfg.client.enable ||
cfg.server.enable ||
cfg.dbdserver.enable ) {
environment.systemPackages = [ wrappedSlurm ];
services.munge.enable = mkDefault true;
# use a static uid as default to ensure it is the same on all nodes
users.users.slurm = mkIf (cfg.user == defaultUser) {
name = defaultUser;
group = "slurm";
uid = config.ids.uids.slurm;
};
users.groups.slurm.gid = config.ids.uids.slurm;
systemd.services.slurmd = mkIf (cfg.client.enable) {
path = with pkgs; [ wrappedSlurm coreutils ]
++ lib.optional cfg.enableSrunX11 slurm-spank-x11;
@ -261,6 +329,29 @@ in
PIDFile = "/run/slurmctld.pid";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
};
preStart = ''
mkdir -p ${cfg.stateSaveLocation}
chown -R ${cfg.user}:slurm ${cfg.stateSaveLocation}
'';
};
systemd.services.slurmdbd = mkIf (cfg.dbdserver.enable) {
path = with pkgs; [ wrappedSlurm munge coreutils ];
wantedBy = [ "multi-user.target" ];
after = [ "network.target" "munged.service" "mysql.service" ];
requires = [ "munged.service" "mysql.service" ];
# slurm strips the last component off the path
environment.SLURM_CONF = "${slurmdbdConf}/slurm.conf";
serviceConfig = {
Type = "forking";
ExecStart = "${cfg.package}/bin/slurmdbd";
PIDFile = "/run/slurmdbd.pid";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
};
};
};

View File

@ -55,7 +55,7 @@ in
package = mkOption {
type = types.package;
example = literalExample "pkgs.postgresql96";
example = literalExample "pkgs.postgresql_9_6";
description = ''
PostgreSQL package to use.
'';
@ -118,7 +118,7 @@ in
extraPlugins = mkOption {
type = types.listOf types.path;
default = [];
example = literalExample "[ (pkgs.postgis.override { postgresql = pkgs.postgresql94; }) ]";
example = literalExample "[ (pkgs.postgis.override { postgresql = pkgs.postgresql_9_4; }) ]";
description = ''
When this list contains elements, a new store path is created.
PostgreSQL and the elements are symlinked into it. Then pg_config,
@ -167,9 +167,9 @@ in
# Note: when changing the default, make it conditional on
# system.stateVersion to maintain compatibility with existing
# systems!
mkDefault (if versionAtLeast config.system.stateVersion "17.09" then pkgs.postgresql96
else if versionAtLeast config.system.stateVersion "16.03" then pkgs.postgresql95
else pkgs.postgresql94);
mkDefault (if versionAtLeast config.system.stateVersion "17.09" then pkgs.postgresql_9_6
else if versionAtLeast config.system.stateVersion "16.03" then pkgs.postgresql_9_5
else pkgs.postgresql_9_4);
services.postgresql.dataDir =
mkDefault (if versionAtLeast config.system.stateVersion "17.09" then "/var/lib/postgresql/${config.services.postgresql.package.psqlSchema}"
@ -271,5 +271,5 @@ in
};
meta.doc = ./postgresql.xml;
meta.maintainers = with lib.maintainers; [ thoughtpolice ];
}

View File

@ -27,12 +27,12 @@
<filename>configuration.nix</filename>:
<programlisting>
<xref linkend="opt-services.postgresql.enable"/> = true;
<xref linkend="opt-services.postgresql.package"/> = pkgs.postgresql94;
<xref linkend="opt-services.postgresql.package"/> = pkgs.postgresql_9_4;
</programlisting>
Note that you are required to specify the desired version of PostgreSQL
(e.g. <literal>pkgs.postgresql94</literal>). Since upgrading your PostgreSQL
version requires a database dump and reload (see below), NixOS cannot
provide a default value for
(e.g. <literal>pkgs.postgresql_9_4</literal>). Since upgrading your
PostgreSQL version requires a database dump and reload (see below), NixOS
cannot provide a default value for
<xref linkend="opt-services.postgresql.package"/> such as the most recent
release of PostgreSQL.
</para>

View File

@ -0,0 +1,43 @@
# Accounts-SSO gSignOn daemon
{ config, lib, pkgs, ... }:
with lib;
let
package = pkgs.gsignond.override { plugins = config.services.gsignond.plugins; };
in
{
###### interface
options = {
services.gsignond = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable the gSignOn daemon, a DBus service
which performs user authentication on behalf of its clients.
'';
};
plugins = mkOption {
type = types.listOf types.package;
default = [];
description = ''
What plugins to use with the gSignOn daemon.
'';
};
};
};
###### implementation
config = mkIf config.services.gsignond.enable {
environment.etc."gsignond.conf".source = "${package}/etc/gsignond.conf";
services.dbus.packages = [ package ];
};
}

View File

@ -27,13 +27,13 @@ in {
destination = "/etc/udev/rules.d/51-trezor.rules";
text = ''
# TREZOR v1 (One)
SUBSYSTEM=="usb", ATTR{idVendor}=="534c", ATTR{idProduct}=="0001", MODE="0666", GROUP="dialout", TAG+="uaccess", TAG+="udev-acl", SYMLINK+="trezor%n"
KERNEL=="hidraw*", ATTRS{idVendor}=="534c", ATTRS{idProduct}=="0001", MODE="0666", GROUP="dialout", TAG+="uaccess", TAG+="udev-acl"
SUBSYSTEM=="usb", ATTR{idVendor}=="534c", ATTR{idProduct}=="0001", MODE="0660", GROUP="trezord", TAG+="uaccess", SYMLINK+="trezor%n"
KERNEL=="hidraw*", ATTRS{idVendor}=="534c", ATTRS{idProduct}=="0001", MODE="0660", GROUP="trezord", TAG+="uaccess"
# TREZOR v2 (T)
SUBSYSTEM=="usb", ATTR{idVendor}=="1209", ATTR{idProduct}=="53c0", MODE="0661", GROUP="dialout", TAG+="uaccess", TAG+="udev-acl", SYMLINK+="trezor%n"
SUBSYSTEM=="usb", ATTR{idVendor}=="1209", ATTR{idProduct}=="53c1", MODE="0666", GROUP="dialout", TAG+="uaccess", TAG+="udev-acl", SYMLINK+="trezor%n"
KERNEL=="hidraw*", ATTRS{idVendor}=="1209", ATTRS{idProduct}=="53c1", MODE="0666", GROUP="dialout", TAG+="uaccess", TAG+="udev-acl"
SUBSYSTEM=="usb", ATTR{idVendor}=="1209", ATTR{idProduct}=="53c0", MODE="0660", GROUP="trezord", TAG+="uaccess", SYMLINK+="trezor%n"
SUBSYSTEM=="usb", ATTR{idVendor}=="1209", ATTR{idProduct}=="53c1", MODE="0660", GROUP="trezord", TAG+="uaccess", SYMLINK+="trezor%n"
KERNEL=="hidraw*", ATTRS{idVendor}=="1209", ATTRS{idProduct}=="53c1", MODE="0660", GROUP="trezord", TAG+="uaccess"
'';
});

View File

@ -56,6 +56,32 @@ in
{ Type = "dbus";
BusName = "org.freedesktop.UPower";
ExecStart = "@${cfg.package}/libexec/upowerd upowerd";
Restart = "on-failure";
# Upstream lockdown:
# Filesystem lockdown
ProtectSystem = "strict";
# Needed by keyboard backlight support
ProtectKernelTunables = false;
ProtectControlGroups = true;
ReadWritePaths = "/var/lib/upower";
ProtectHome = true;
PrivateTmp = true;
# Network
# PrivateNetwork=true would block udev's netlink socket
RestrictAddressFamilies = "AF_UNIX AF_NETLINK";
# Execute Mappings
MemoryDenyWriteExecute = true;
# Modules
ProtectKernelModules = true;
# Real-time
RestrictRealtime = true;
# Privilege escalation
NoNewPrivileges = true;
};
};

View File

@ -176,4 +176,6 @@ in
}
) cfg.instances);
};
meta.maintainers = with lib.maintainers; [ ekleog ];
}

View File

@ -115,4 +115,6 @@ in
};
};
};
meta.maintainers = with lib.maintainers; [ ekleog ];
}

View File

@ -311,7 +311,7 @@ in
{ name = "dovenull";
uid = config.ids.uids.dovenull2;
description = "Dovecot user for untrusted logins";
group = cfg.group;
group = "dovenull";
}
] ++ optional (cfg.user == "dovecot2")
{ name = "dovecot2";
@ -332,6 +332,10 @@ in
}
++ optional (cfg.createMailUser && cfg.mailGroup != null)
{ name = cfg.mailGroup;
}
++ singleton
{ name = "dovenull";
gid = config.ids.gids.dovenull2;
};
environment.etc."dovecot/modules".source = modulesDir;

View File

@ -127,11 +127,15 @@ let
options {
pidfile = "$RUNDIR/rspamd.pid";
.include "$CONFDIR/options.inc"
.include(try=true; priority=1,duplicate=merge) "$LOCAL_CONFDIR/local.d/options.inc"
.include(try=true; priority=10) "$LOCAL_CONFDIR/override.d/options.inc"
}
logging {
type = "syslog";
.include "$CONFDIR/logging.inc"
.include(try=true; priority=1,duplicate=merge) "$LOCAL_CONFDIR/local.d/logging.inc"
.include(try=true; priority=10) "$LOCAL_CONFDIR/override.d/logging.inc"
}
${concatStringsSep "\n" (mapAttrsToList (name: value: ''
@ -149,6 +153,41 @@ let
${cfg.extraConfig}
'';
rspamdDir = pkgs.linkFarm "etc-rspamd-dir" (
(mapAttrsToList (name: file: { name = "local.d/${name}"; path = file.source; }) cfg.locals) ++
(mapAttrsToList (name: file: { name = "override.d/${name}"; path = file.source; }) cfg.overrides) ++
(optional (cfg.localLuaRules != null) { name = "rspamd.local.lua"; path = cfg.localLuaRules; }) ++
[ { name = "rspamd.conf"; path = rspamdConfFile; } ]
);
configFileModule = prefix: { name, config, ... }: {
options = {
enable = mkOption {
type = types.bool;
default = true;
description = ''
Whether this file ${prefix} should be generated. This
option allows specific ${prefix} files to be disabled.
'';
};
text = mkOption {
default = null;
type = types.nullOr types.lines;
description = "Text of the file.";
};
source = mkOption {
type = types.path;
description = "Path of the source file.";
};
};
config = {
source = mkIf (config.text != null) (
let name' = "rspamd-${prefix}-" + baseNameOf name;
in mkDefault (pkgs.writeText name' config.text));
};
};
in
{
@ -167,6 +206,41 @@ in
description = "Whether to run the rspamd daemon in debug mode.";
};
locals = mkOption {
type = with types; loaOf (submodule (configFileModule "locals"));
default = {};
description = ''
Local configuration files, written into <filename>/etc/rspamd/local.d/{name}</filename>.
'';
example = literalExample ''
{ "redis.conf".source = "/nix/store/.../etc/dir/redis.conf";
"arc.conf".text = "allow_envfrom_empty = true;";
}
'';
};
overrides = mkOption {
type = with types; loaOf (submodule (configFileModule "overrides"));
default = {};
description = ''
Overridden configuration files, written into <filename>/etc/rspamd/override.d/{name}</filename>.
'';
example = literalExample ''
{ "redis.conf".source = "/nix/store/.../etc/dir/redis.conf";
"arc.conf".text = "allow_envfrom_empty = true;";
}
'';
};
localLuaRules = mkOption {
default = null;
type = types.nullOr types.path;
description = ''
Path of file to link to <filename>/etc/rspamd/rspamd.local.lua</filename> for local
rules written in Lua
'';
};
workers = mkOption {
type = with types; attrsOf (submodule workerOpts);
description = ''
@ -242,16 +316,17 @@ in
gid = config.ids.gids.rspamd;
};
environment.etc."rspamd.conf".source = rspamdConfFile;
environment.etc."rspamd".source = rspamdDir;
systemd.services.rspamd = {
description = "Rspamd Service";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
restartTriggers = [ rspamdDir ];
serviceConfig = {
ExecStart = "${pkgs.rspamd}/bin/rspamd ${optionalString cfg.debug "-d"} --user=${cfg.user} --group=${cfg.group} --pid=/run/rspamd.pid -c ${rspamdConfFile} -f";
ExecStart = "${pkgs.rspamd}/bin/rspamd ${optionalString cfg.debug "-d"} --user=${cfg.user} --group=${cfg.group} --pid=/run/rspamd.pid -c /etc/rspamd/rspamd.conf -f";
Restart = "always";
RuntimeDirectory = "rspamd";
PrivateTmp = true;

View File

@ -14,15 +14,16 @@ let
pathUrlQuote = url: replaceStrings ["/"] ["%2F"] url;
pgSuperUser = config.services.postgresql.superUser;
databaseYml = ''
production:
adapter: postgresql
database: ${cfg.databaseName}
host: ${cfg.databaseHost}
password: ${cfg.databasePassword}
username: ${cfg.databaseUsername}
encoding: utf8
'';
databaseConfig = {
production = {
adapter = "postgresql";
database = cfg.databaseName;
host = cfg.databaseHost;
password = cfg.databasePassword;
username = cfg.databaseUsername;
encoding = "utf8";
};
};
gitalyToml = pkgs.writeText "gitaly.toml" ''
socket_path = "${lib.escape ["\""] gitalySocket}"
@ -45,35 +46,31 @@ let
'') gitlabConfig.production.repositories.storages))}
'';
gitlabShellYml = ''
user: ${cfg.user}
gitlab_url: "http+unix://${pathUrlQuote gitlabSocket}"
http_settings:
self_signed_cert: false
repos_path: "${cfg.statePath}/repositories"
secret_file: "${cfg.statePath}/config/gitlab_shell_secret"
log_file: "${cfg.statePath}/log/gitlab-shell.log"
custom_hooks_dir: "${cfg.statePath}/custom_hooks"
redis:
bin: ${pkgs.redis}/bin/redis-cli
host: 127.0.0.1
port: 6379
database: 0
namespace: resque:gitlab
'';
gitlabShellConfig = {
user = cfg.user;
gitlab_url = "http+unix://${pathUrlQuote gitlabSocket}";
http_settings.self_signed_cert = false;
repos_path = "${cfg.statePath}/repositories";
secret_file = "${cfg.statePath}/config/gitlab_shell_secret";
log_file = "${cfg.statePath}/log/gitlab-shell.log";
custom_hooks_dir = "${cfg.statePath}/custom_hooks";
redis = {
bin = "${pkgs.redis}/bin/redis-cli";
host = "127.0.0.1";
port = 6379;
database = 0;
namespace = "resque:gitlab";
};
};
redisYml = ''
production:
url: redis://localhost:6379/
'';
redisConfig.production.url = "redis://localhost:6379/";
secretsYml = ''
production:
secret_key_base: ${cfg.secrets.secret}
otp_key_base: ${cfg.secrets.otp}
db_key_base: ${cfg.secrets.db}
openid_connect_signing_key: ${builtins.toJSON cfg.secrets.jws}
'';
secretsConfig.production = {
secret_key_base = cfg.secrets.secret;
otp_key_base = cfg.secrets.otp;
db_key_base = cfg.secrets.db;
openid_connect_signing_key = cfg.secrets.jws;
};
gitlabConfig = {
# These are the default settings from config/gitlab.example.yml
@ -115,12 +112,8 @@ let
upload_pack = true;
receive_pack = true;
};
workhorse = {
secret_file = "${cfg.statePath}/.gitlab_workhorse_secret";
};
git = {
bin_path = "git";
};
workhorse.secret_file = "${cfg.statePath}/.gitlab_workhorse_secret";
git.bin_path = "git";
monitoring = {
ip_whitelist = [ "127.0.0.0/8" "::1/128" ];
sidekiq_exporter = {
@ -138,7 +131,7 @@ let
HOME = "${cfg.statePath}/home";
UNICORN_PATH = "${cfg.statePath}/";
GITLAB_PATH = "${cfg.packages.gitlab}/share/gitlab/";
GITLAB_STATE_PATH = "${cfg.statePath}";
GITLAB_STATE_PATH = cfg.statePath;
GITLAB_UPLOADS_PATH = "${cfg.statePath}/uploads";
SCHEMA = "${cfg.statePath}/db/schema.rb";
GITLAB_LOG_PATH = "${cfg.statePath}/log";
@ -146,13 +139,11 @@ let
GITLAB_SHELL_CONFIG_PATH = "${cfg.statePath}/shell/config.yml";
GITLAB_SHELL_SECRET_PATH = "${cfg.statePath}/config/gitlab_shell_secret";
GITLAB_SHELL_HOOKS_PATH = "${cfg.statePath}/shell/hooks";
GITLAB_REDIS_CONFIG_FILE = pkgs.writeText "gitlab-redis.yml" redisYml;
GITLAB_REDIS_CONFIG_FILE = pkgs.writeText "redis.yml" (builtins.toJSON redisConfig);
prometheus_multiproc_dir = "/run/gitlab";
RAILS_ENV = "production";
};
unicornConfig = builtins.readFile ./defaultUnicornConfig.rb;
gitlab-rake = pkgs.stdenv.mkDerivation rec {
name = "gitlab-rake";
buildInputs = [ pkgs.makeWrapper ];
@ -162,7 +153,6 @@ let
mkdir -p $out/bin
makeWrapper ${cfg.packages.gitlab.rubyEnv}/bin/rake $out/bin/gitlab-rake \
${concatStrings (mapAttrsToList (name: value: "--set ${name} '${value}' ") gitlabEnv)} \
--set GITLAB_CONFIG_PATH '${cfg.statePath}/config' \
--set PATH '${lib.makeBinPath [ pkgs.nodejs pkgs.gzip pkgs.git pkgs.gnutar config.services.postgresql.package pkgs.coreutils pkgs.procps ]}:$PATH' \
--set RAKEOPT '-f ${cfg.packages.gitlab}/share/gitlab/Rakefile' \
--run 'cd ${cfg.packages.gitlab}/share/gitlab'
@ -306,7 +296,6 @@ in {
initialRootPassword = mkOption {
type = types.str;
default = "UseNixOS!";
description = ''
Initial password of the root account if this is a new install.
'';
@ -461,10 +450,30 @@ in {
}
];
systemd.tmpfiles.rules = [
"d /run/gitlab 0755 ${cfg.user} ${cfg.group} -"
"d ${gitlabEnv.HOME} 0750 ${cfg.user} ${cfg.group} -"
"d ${cfg.backupPath} 0750 ${cfg.user} ${cfg.group} -"
"d ${cfg.statePath}/builds 0750 ${cfg.user} ${cfg.group} -"
"d ${cfg.statePath}/config 0750 ${cfg.user} ${cfg.group} -"
"d ${cfg.statePath}/db 0750 ${cfg.user} ${cfg.group} -"
"d ${cfg.statePath}/log 0750 ${cfg.user} ${cfg.group} -"
"d ${cfg.statePath}/repositories 2770 ${cfg.user} ${cfg.group} -"
"d ${cfg.statePath}/shell 0750 ${cfg.user} ${cfg.group} -"
"d ${cfg.statePath}/tmp/pids 0750 ${cfg.user} ${cfg.group} -"
"d ${cfg.statePath}/tmp/sockets 0750 ${cfg.user} ${cfg.group} -"
"d ${cfg.statePath}/uploads 0700 ${cfg.user} ${cfg.group} -"
"d ${cfg.statePath}/custom_hooks/pre-receive.d 0700 ${cfg.user} ${cfg.group} -"
"d ${cfg.statePath}/custom_hooks/post-receive.d 0700 ${cfg.user} ${cfg.group} -"
"d ${cfg.statePath}/custom_hooks/update.d 0700 ${cfg.user} ${cfg.group} -"
"d ${gitlabConfig.production.shared.path}/artifacts 0750 ${cfg.user} ${cfg.group} -"
"d ${gitlabConfig.production.shared.path}/lfs-objects 0750 ${cfg.user} ${cfg.group} -"
"d ${gitlabConfig.production.shared.path}/pages 0750 ${cfg.user} ${cfg.group} -"
];
systemd.services.gitlab-sidekiq = {
after = [ "network.target" "redis.service" ];
after = [ "network.target" "redis.service" "gitlab.service" ];
wantedBy = [ "multi-user.target" ];
partOf = [ "gitlab.service" ];
environment = gitlabEnv;
path = with pkgs; [
config.services.postgresql.package
@ -486,10 +495,8 @@ in {
};
systemd.services.gitaly = {
after = [ "network.target" "gitlab.service" ];
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
environment.HOME = gitlabEnv.HOME;
environment.GITLAB_SHELL_CONFIG_PATH = gitlabEnv.GITLAB_SHELL_CONFIG_PATH;
path = with pkgs; [ gitAndTools.git cfg.packages.gitaly.rubyEnv cfg.packages.gitaly.rubyEnv.wrappedRuby ];
serviceConfig = {
Type = "simple";
@ -505,8 +512,6 @@ in {
systemd.services.gitlab-workhorse = {
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
environment.HOME = gitlabEnv.HOME;
environment.GITLAB_SHELL_CONFIG_PATH = gitlabEnv.GITLAB_SHELL_CONFIG_PATH;
path = with pkgs; [
gitAndTools.git
gnutar
@ -514,10 +519,6 @@ in {
openssh
gitlab-workhorse
];
preStart = ''
mkdir -p /run/gitlab
chown ${cfg.user}:${cfg.group} /run/gitlab
'';
serviceConfig = {
PermissionsStartOnly = true; # preStart must be run as root
Type = "simple";
@ -538,7 +539,7 @@ in {
};
systemd.services.gitlab = {
after = [ "network.target" "postgresql.service" "redis.service" ];
after = [ "gitlab-workhorse.service" "gitaly.service" "network.target" "postgresql.service" "redis.service" ];
requires = [ "gitlab-sidekiq.service" ];
wantedBy = [ "multi-user.target" ];
environment = gitlabEnv;
@ -551,102 +552,76 @@ in {
gnupg
];
preStart = ''
mkdir -p ${cfg.backupPath}
mkdir -p ${cfg.statePath}/builds
mkdir -p ${cfg.statePath}/repositories
mkdir -p ${gitlabConfig.production.shared.path}/artifacts
mkdir -p ${gitlabConfig.production.shared.path}/lfs-objects
mkdir -p ${gitlabConfig.production.shared.path}/pages
mkdir -p ${cfg.statePath}/log
mkdir -p ${cfg.statePath}/tmp/pids
mkdir -p ${cfg.statePath}/tmp/sockets
mkdir -p ${cfg.statePath}/shell
mkdir -p ${cfg.statePath}/db
mkdir -p ${cfg.statePath}/uploads
mkdir -p ${cfg.statePath}/custom_hooks/pre-receive.d
mkdir -p ${cfg.statePath}/custom_hooks/post-receive.d
mkdir -p ${cfg.statePath}/custom_hooks/update.d
rm -rf ${cfg.statePath}/config ${cfg.statePath}/shell/hooks
mkdir -p ${cfg.statePath}/config
${pkgs.openssl}/bin/openssl rand -hex 32 > ${cfg.statePath}/config/gitlab_shell_secret
mkdir -p /run/gitlab
mkdir -p ${cfg.statePath}/log
[ -d /run/gitlab/log ] || ln -sf ${cfg.statePath}/log /run/gitlab/log
[ -d /run/gitlab/tmp ] || ln -sf ${cfg.statePath}/tmp /run/gitlab/tmp
[ -d /run/gitlab/uploads ] || ln -sf ${cfg.statePath}/uploads /run/gitlab/uploads
ln -sf $GITLAB_SHELL_CONFIG_PATH /run/gitlab/shell-config.yml
chown -R ${cfg.user}:${cfg.group} /run/gitlab
# Prepare home directory
mkdir -p ${gitlabEnv.HOME}/.ssh
touch ${gitlabEnv.HOME}/.ssh/authorized_keys
chown -R ${cfg.user}:${cfg.group} ${gitlabEnv.HOME}/
cp -rf ${cfg.packages.gitlab}/share/gitlab/db/* ${cfg.statePath}/db
cp -rf ${cfg.packages.gitlab}/share/gitlab/config.dist/* ${cfg.statePath}/config
${optionalString cfg.smtp.enable ''
ln -sf ${smtpSettings} ${cfg.statePath}/config/initializers/smtp_settings.rb
''}
ln -sf ${cfg.statePath}/config /run/gitlab/config
rm -rf ${cfg.statePath}/config
mkdir ${cfg.statePath}/config
if [ -e ${cfg.statePath}/lib ]; then
rm ${cfg.statePath}/lib
fi
ln -sf ${pkgs.gitlab}/share/gitlab/lib ${cfg.statePath}/lib
ln -sf ${cfg.packages.gitlab}/share/gitlab/lib ${cfg.statePath}/lib
[ -L /run/gitlab/config ] || ln -sf ${cfg.statePath}/config /run/gitlab/config
[ -L /run/gitlab/log ] || ln -sf ${cfg.statePath}/log /run/gitlab/log
[ -L /run/gitlab/tmp ] || ln -sf ${cfg.statePath}/tmp /run/gitlab/tmp
[ -L /run/gitlab/uploads ] || ln -sf ${cfg.statePath}/uploads /run/gitlab/uploads
${optionalString cfg.smtp.enable ''
ln -sf ${smtpSettings} ${cfg.statePath}/config/initializers/smtp_settings.rb
''}
cp ${cfg.packages.gitlab}/share/gitlab/VERSION ${cfg.statePath}/VERSION
cp -rf ${cfg.packages.gitlab}/share/gitlab/config.dist/* ${cfg.statePath}/config
${pkgs.openssl}/bin/openssl rand -hex 32 > ${cfg.statePath}/config/gitlab_shell_secret
# JSON is a subset of YAML
ln -fs ${pkgs.writeText "gitlab.yml" (builtins.toJSON gitlabConfig)} ${cfg.statePath}/config/gitlab.yml
ln -fs ${pkgs.writeText "database.yml" databaseYml} ${cfg.statePath}/config/database.yml
ln -fs ${pkgs.writeText "secrets.yml" secretsYml} ${cfg.statePath}/config/secrets.yml
ln -fs ${pkgs.writeText "unicorn.rb" unicornConfig} ${cfg.statePath}/config/unicorn.rb
ln -sf ${pkgs.writeText "gitlab.yml" (builtins.toJSON gitlabConfig)} ${cfg.statePath}/config/gitlab.yml
ln -sf ${pkgs.writeText "database.yml" (builtins.toJSON databaseConfig)} ${cfg.statePath}/config/database.yml
ln -sf ${pkgs.writeText "secrets.yml" (builtins.toJSON secretsConfig)} ${cfg.statePath}/config/secrets.yml
ln -sf ${./defaultUnicornConfig.rb} ${cfg.statePath}/config/unicorn.rb
# Install the shell required to push repositories
ln -sf ${pkgs.writeText "config.yml" (builtins.toJSON gitlabShellConfig)} /run/gitlab/shell-config.yml
[ -L ${cfg.statePath}/shell/hooks ] || ln -sf ${cfg.packages.gitlab-shell}/hooks ${cfg.statePath}/shell/hooks
${cfg.packages.gitlab-shell}/bin/install
chown -R ${cfg.user}:${cfg.group} ${cfg.statePath}/
chmod -R ug+rwX,o-rwx+X ${cfg.statePath}/
chown -R ${cfg.user}:${cfg.group} /run/gitlab
# Install the shell required to push repositories
ln -fs ${pkgs.writeText "config.yml" gitlabShellYml} "$GITLAB_SHELL_CONFIG_PATH"
ln -fs ${cfg.packages.gitlab-shell}/hooks "$GITLAB_SHELL_HOOKS_PATH"
${cfg.packages.gitlab-shell}/bin/install
if [ "${cfg.databaseHost}" = "127.0.0.1" ]; then
if ! test -e "${cfg.statePath}/db-created"; then
if ! test -e "${cfg.statePath}/db-created"; then
if [ "${cfg.databaseHost}" = "127.0.0.1" ]; then
${pkgs.sudo}/bin/sudo -u ${pgSuperUser} psql postgres -c "CREATE ROLE ${cfg.databaseUsername} WITH LOGIN NOCREATEDB NOCREATEROLE ENCRYPTED PASSWORD '${cfg.databasePassword}'"
${pkgs.sudo}/bin/sudo -u ${pgSuperUser} ${config.services.postgresql.package}/bin/createdb --owner ${cfg.databaseUsername} ${cfg.databaseName}
touch "${cfg.statePath}/db-created"
# enable required pg_trgm extension for gitlab
${pkgs.sudo}/bin/sudo -u ${pgSuperUser} psql ${cfg.databaseName} -c "CREATE EXTENSION IF NOT EXISTS pg_trgm"
fi
# enable required pg_trgm extension for gitlab
${pkgs.sudo}/bin/sudo -u ${pgSuperUser} psql ${cfg.databaseName} -c "CREATE EXTENSION IF NOT EXISTS pg_trgm"
${pkgs.sudo}/bin/sudo -u ${cfg.user} -H ${gitlab-rake}/bin/gitlab-rake db:schema:load
touch "${cfg.statePath}/db-created"
fi
# Always do the db migrations just to be sure the database is up-to-date
${gitlab-rake}/bin/gitlab-rake db:migrate RAILS_ENV=production
${pkgs.sudo}/bin/sudo -u ${cfg.user} -H ${gitlab-rake}/bin/gitlab-rake db:migrate
# The gitlab:setup task is horribly broken somehow, the db:migrate
# task above and the db:seed_fu below will do the same for setting
# up the initial database
if ! test -e "${cfg.statePath}/db-seeded"; then
${gitlab-rake}/bin/gitlab-rake db:seed_fu RAILS_ENV=production \
${pkgs.sudo}/bin/sudo -u ${cfg.user} ${gitlab-rake}/bin/gitlab-rake db:seed_fu \
GITLAB_ROOT_PASSWORD='${cfg.initialRootPassword}' GITLAB_ROOT_EMAIL='${cfg.initialRootEmail}'
touch "${cfg.statePath}/db-seeded"
fi
# The gitlab:shell:setup regenerates the authorized_keys file so that
# the store path to the gitlab-shell in it gets updated
${pkgs.sudo}/bin/sudo -u ${cfg.user} force=yes ${gitlab-rake}/bin/gitlab-rake gitlab:shell:setup RAILS_ENV=production
${pkgs.sudo}/bin/sudo -u ${cfg.user} -H force=yes ${gitlab-rake}/bin/gitlab-rake gitlab:shell:setup
# The gitlab:shell:create_hooks task seems broken for fixing links
# so we instead delete all the hooks and create them anew
rm -f ${cfg.statePath}/repositories/**/*.git/hooks
${gitlab-rake}/bin/gitlab-rake gitlab:shell:create_hooks RAILS_ENV=production
${pkgs.sudo}/bin/sudo -u ${cfg.user} -H ${gitlab-rake}/bin/gitlab-rake gitlab:shell:create_hooks
${pkgs.sudo}/bin/sudo -u ${cfg.user} -H ${pkgs.git}/bin/git config --global core.autocrlf "input"
# Change permissions in the last step because some of the
# intermediary scripts like to create directories as root.
chown -R ${cfg.user}:${cfg.group} ${cfg.statePath}
chmod -R ug+rwX,o-rwx+X ${cfg.statePath}
chmod -R u+rwX,go-rwx+X ${gitlabEnv.HOME}
chmod -R ug+rwX,o-rwx ${cfg.statePath}/repositories
chmod -R ug-s ${cfg.statePath}/repositories
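The rewritten preStart above regenerates the GitLab config, secrets and database state under cfg.statePath on every start. A minimal sketch of a configuration exercising the options referenced in this hunk (statePath, databasePassword, initialRootPassword, initialRootEmail, smtp.enable); the paths and password values are placeholders, not defaults to copy:

{ config, pkgs, ... }:
{
  services.gitlab = {
    enable = true;
    # preStart above creates config, log, tmp, uploads, shell, db, ... under here
    statePath = "/var/gitlab/state";
    # consumed by the db-created step when databaseHost is 127.0.0.1
    databasePassword = "placeholder-db-password";
    # consumed by the db-seeded step (gitlab-rake db:seed_fu)
    initialRootEmail = "admin@example.org";
    initialRootPassword = "placeholder-root-password";
    smtp.enable = false;
  };
}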

View File

@ -157,6 +157,7 @@ in {
Restart = "on-failure";
ProtectSystem = "strict";
ReadWritePaths = "${cfg.configDir}";
KillSignal = "SIGINT";
PrivateTmp = true;
RemoveIPC = true;
};

View File

@ -7,7 +7,7 @@ let
ddConf = {
dd_url = "https://app.datadoghq.com";
skip_ssl_validation = "no";
skip_ssl_validation = false;
confd_path = "/etc/datadog-agent/conf.d";
additional_checksd = "/etc/datadog-agent/checks.d";
use_dogstatsd = true;
@ -16,6 +16,7 @@ let
// optionalAttrs (cfg.hostname != null) { inherit (cfg) hostname; }
// optionalAttrs (cfg.tags != null ) { tags = concatStringsSep ", " cfg.tags; }
// optionalAttrs (cfg.enableLiveProcessCollection) { process_config = { enabled = "true"; }; }
// optionalAttrs (cfg.enableTraceAgent) { apm_config = { enabled = true; }; }
// cfg.extraConfig;
# Generate Datadog configuration files for each configured checks.
@ -132,6 +133,15 @@ in {
default = false;
type = types.bool;
};
enableTraceAgent = mkOption {
description = ''
Whether to enable the trace agent.
'';
default = false;
type = types.bool;
};
checks = mkOption {
description = ''
Configuration for all Datadog checks. Keys of this attribute
@ -244,6 +254,16 @@ in {
${pkgs.datadog-process-agent}/bin/agent --config /etc/datadog-agent/datadog.yaml
'';
});
datadog-trace-agent = lib.mkIf cfg.enableTraceAgent (makeService {
description = "Datadog Trace Agent";
path = [ ];
script = ''
export DD_API_KEY=$(head -n 1 ${cfg.apiKeyFile})
${pkgs.datadog-trace-agent}/bin/trace-agent -config /etc/datadog-agent/datadog.yaml
'';
});
};
environment.etc = etcfiles;
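A sketch of enabling the new trace agent next to the existing live-process collection, assuming the attribute path services.datadog-agent for the module whose options appear in this hunk (apiKeyFile, tags, enableLiveProcessCollection, enableTraceAgent); the key file location and tag are placeholders:

{ config, pkgs, ... }:
{
  services.datadog-agent = {
    enable = true;
    # read at start time to export DD_API_KEY (placeholder path)
    apiKeyFile = "/run/keys/datadog-api-key";
    tags = [ "env:staging" ];
    enableLiveProcessCollection = true;
    # new option above; starts datadog-trace-agent and sets apm_config.enabled = true
    enableTraceAgent = true;
  };
}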

View File

@ -0,0 +1,154 @@
{ options, config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.kapacitor;
kapacitorConf = pkgs.writeTextFile {
name = "kapacitord.conf";
text = ''
hostname="${config.networking.hostName}"
data_dir="${cfg.dataDir}"
[http]
bind-address = "${cfg.bind}:${toString cfg.port}"
log-enabled = false
auth-enabled = false
[task]
dir = "${cfg.dataDir}/tasks"
snapshot-interval = "${cfg.taskSnapshotInterval}"
[replay]
dir = "${cfg.dataDir}/replay"
[storage]
boltdb = "${cfg.dataDir}/kapacitor.db"
${optionalString (cfg.loadDirectory != null) ''
[load]
enabled = true
dir = "${cfg.loadDirectory}"
''}
${optionalString (cfg.defaultDatabase.enable) ''
[[influxdb]]
name = "default"
enabled = true
default = true
urls = [ "${cfg.defaultDatabase.url}" ]
username = "${cfg.defaultDatabase.username}"
password = "${cfg.defaultDatabase.password}"
''}
${cfg.extraConfig}
'';
};
in
{
options.services.kapacitor = {
enable = mkEnableOption "kapacitor";
dataDir = mkOption {
type = types.path;
example = "/var/lib/kapacitor";
default = "/var/lib/kapacitor";
description = "Location where Kapacitor stores its state";
};
port = mkOption {
type = types.int;
default = 9092;
description = "Port of Kapacitor";
};
bind = mkOption {
type = types.str;
default = "";
example = literalExample "0.0.0.0";
description = "Address to bind to. The default is to bind to all addresses";
};
extraConfig = mkOption {
description = "These lines go into kapacitord.conf verbatim.";
default = "";
type = types.lines;
};
user = mkOption {
type = types.str;
default = "kapacitor";
description = "User account under which Kapacitor runs";
};
group = mkOption {
type = types.str;
default = "kapacitor";
description = "Group under which Kapacitor runs";
};
taskSnapshotInterval = mkOption {
type = types.str;
description = "Specifies how often to snapshot the task state (in InfluxDB time units)";
default = "1m0s";
example = "1m0s";
};
loadDirectory = mkOption {
type = types.nullOr types.path;
description = "Directory where to load services from, such as tasks, templates and handlers (or null to disable service loading on startup)";
default = null;
};
defaultDatabase = {
enable = mkEnableOption "kapacitor.defaultDatabase";
url = mkOption {
description = "The URL to an InfluxDB server that serves as the default database";
example = "http://localhost:8086";
type = types.string;
};
username = mkOption {
description = "The username to connect to the remote InfluxDB server";
type = types.string;
};
password = mkOption {
description = "The password to connect to the remote InfluxDB server";
type = types.string;
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.kapacitor ];
systemd.services.kapacitor = {
description = "Kapacitor Real-Time Stream Processing Engine";
wantedBy = [ "multi-user.target" ];
after = [ "networking.target" ];
serviceConfig = {
ExecStart = "${pkgs.kapacitor}/bin/kapacitord -config ${kapacitorConf}";
User = "kapacitor";
Group = "kapacitor";
PermissionsStartOnly = true;
};
preStart = ''
mkdir -p ${cfg.dataDir}
chown ${cfg.user}:${cfg.group} ${cfg.dataDir}
'';
};
users.users.kapacitor = {
uid = config.ids.uids.kapacitor;
description = "Kapacitor user";
home = cfg.dataDir;
};
users.groups.kapacitor = {
gid = config.ids.gids.kapacitor;
};
};
}
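A minimal sketch of enabling the new Kapacitor module with its defaultDatabase subsection pointing at a local InfluxDB; the URL and credentials are placeholders:

{ config, pkgs, ... }:
{
  services.kapacitor = {
    enable = true;
    # tasks, replay data and kapacitor.db end up under dataDir
    dataDir = "/var/lib/kapacitor";
    port = 9092;
    defaultDatabase = {
      enable = true;
      url = "http://localhost:8086";      # local InfluxDB (placeholder)
      username = "kapacitor";             # placeholder credentials
      password = "placeholder-password";
    };
  };
}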

View File

@ -10,6 +10,13 @@ let
# Get a submodule without any embedded metadata:
_filter = x: filterAttrs (k: v: k != "_module") x;
# a wrapper that verifies that the configuration is valid
promtoolCheck = what: name: file: pkgs.runCommand "${name}-${what}-checked"
{ buildInputs = [ cfg.package ]; } ''
ln -s ${file} $out
promtool ${what} $out
'';
# Pretty-print JSON to a file
writePrettyJSON = name: x:
pkgs.runCommand name { } ''
@ -19,18 +26,19 @@ let
# This becomes the main config file
promConfig = {
global = cfg.globalConfig;
rule_files = cfg.ruleFiles ++ [
rule_files = map (promtoolCheck "check-rules" "rules") (cfg.ruleFiles ++ [
(pkgs.writeText "prometheus.rules" (concatStringsSep "\n" cfg.rules))
];
]);
scrape_configs = cfg.scrapeConfigs;
};
generatedPrometheusYml = writePrettyJSON "prometheus.yml" promConfig;
prometheusYml =
if cfg.configText != null then
prometheusYml = let
yml = if cfg.configText != null then
pkgs.writeText "prometheus.yml" cfg.configText
else generatedPrometheusYml;
else generatedPrometheusYml;
in promtoolCheck "check-config" "prometheus.yml" yml;
cmdlineArgs = cfg.extraFlags ++ [
"-storage.local.path=${cfg.dataDir}/metrics"
@ -376,6 +384,15 @@ in {
'';
};
package = mkOption {
type = types.package;
default = pkgs.prometheus;
defaultText = "pkgs.prometheus";
description = ''
The prometheus package that should be used.
'';
};
listenAddress = mkOption {
type = types.str;
default = "0.0.0.0:9090";
@ -495,7 +512,7 @@ in {
after = [ "network.target" ];
script = ''
#!/bin/sh
exec ${pkgs.prometheus}/bin/prometheus \
exec ${cfg.package}/bin/prometheus \
${concatStringsSep " \\\n " cmdlineArgs}
'';
serviceConfig = {
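With the promtoolCheck wrapper above, the generated prometheus.yml and every rule file are validated at build time before the service can start. A sketch of a configuration exercising the new package option and the checked ruleFiles; the local rule file path and scrape target are illustrative:

{ config, pkgs, ... }:
{
  services.prometheus = {
    enable = true;
    # overridable now that the hunk above introduces the `package` option
    package = pkgs.prometheus;
    # each entry is passed through `promtool check-rules` by promtoolCheck
    ruleFiles = [ ./alert.rules ];   # illustrative local file
    scrapeConfigs = [{
      job_name = "node";
      static_configs = [{ targets = [ "localhost:9100" ]; }];
    }];
  };
}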

View File

@ -33,7 +33,7 @@ let
purple_plugin_path =
lib.concatMapStringsSep ":"
(plugin: "${plugin}/lib/pidgin/")
(plugin: "${plugin}/lib/pidgin/:${plugin}/lib/purple-2/")
cfg.libpurple_plugins
;

View File

@ -93,6 +93,8 @@ in
services.timesyncd.enable = mkForce false;
systemd.services.systemd-timedated.environment = { SYSTEMD_TIMEDATED_NTP_SERVICES = "chronyd.service"; };
systemd.services.chronyd =
{ description = "chrony NTP daemon";

View File

@ -6,9 +6,10 @@ let
dataDir = "/var/lib/consul";
cfg = config.services.consul;
configOptions = { data_dir = dataDir; } //
(if cfg.webUi then { ui_dir = "${cfg.package.ui}"; } else { }) //
cfg.extraConfig;
configOptions = {
data_dir = dataDir;
ui = cfg.webUi;
} // cfg.extraConfig;
configFiles = [ "/etc/consul.json" "/etc/consul-addrs.json" ]
++ cfg.extraConfigFiles;
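The change above replaces the removed ui_dir setting with Consul's boolean ui option, so the web UI no longer depends on a separate package output. A sketch of a single-node setup; the extraConfig contents are illustrative:

{ config, pkgs, ... }:
{
  services.consul = {
    enable = true;
    # now rendered as "ui": true in /etc/consul.json instead of ui_dir
    webUi = true;
    extraConfig = {
      server = true;
      bootstrap_expect = 1;   # single-node example only
    };
  };
}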

View File

@ -182,9 +182,10 @@ with lib;
serviceConfig = rec {
DynamicUser = true;
RuntimeDirectory = StateDirectory;
RuntimeDirectoryMode = "0750";
StateDirectory = builtins.baseNameOf dataDir;
Type = "oneshot";
ExecStartPre = "!${lib.getBin pkgs.coreutils}/bin/install -m666 ${cfg.configFile} /run/${RuntimeDirectory}/ddclient.conf";
ExecStartPre = "!${lib.getBin pkgs.coreutils}/bin/install -m660 ${cfg.configFile} /run/${RuntimeDirectory}/ddclient.conf";
ExecStart = "${lib.getBin pkgs.ddclient}/bin/ddclient -file /run/${RuntimeDirectory}/ddclient.conf";
};
};

View File

@ -0,0 +1,89 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.eternal-terminal;
in
{
###### interface
options = {
services.eternal-terminal = {
enable = mkEnableOption "Eternal Terminal server";
port = mkOption {
default = 2022;
type = types.int;
description = ''
The port the server should listen on. Will use the server's default (2022) if not specified.
'';
};
verbosity = mkOption {
default = 0;
type = types.enum (lib.range 0 9);
description = ''
The verbosity level (0-9).
'';
};
silent = mkOption {
default = false;
type = types.bool;
description = ''
If enabled, disables all logging.
'';
};
logSize = mkOption {
default = 20971520;
type = types.int;
description = ''
The maximum log size.
'';
};
};
};
###### implementation
config = mkIf cfg.enable {
# We need to ensure the et package is fully installed because
# the (remote) et client runs the `etterminal` binary when it
# connects.
environment.systemPackages = [ pkgs.eternal-terminal ];
systemd.services = {
eternal-terminal = {
description = "Eternal Terminal server.";
wantedBy = [ "multi-user.target" ];
after = [ "syslog.target" "network.target" ];
serviceConfig = {
Type = "forking";
ExecStart = "${pkgs.eternal-terminal}/bin/etserver --daemon --cfgfile=${pkgs.writeText "et.cfg" ''
; et.cfg : Config file for Eternal Terminal
;
[Networking]
port = ${toString cfg.port}
[Debug]
verbose = ${toString cfg.verbosity}
silent = ${if cfg.silent then "1" else "0"}
logsize = ${toString cfg.logSize}
''}";
Restart = "on-failure";
KillMode = "process";
};
};
};
};
}
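A sketch of enabling the new Eternal Terminal module; opening the TCP port in the firewall is an assumption about a typical deployment and not something the module above does itself:

{ config, pkgs, ... }:
{
  services.eternal-terminal = {
    enable = true;
    port = 2022;          # matches the module default
    verbosity = 0;
    logSize = 20971520;
  };
  # assumption: remote et clients need to reach the listening port
  networking.firewall.allowedTCPPorts = [ 2022 ];
}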

View File

@ -67,6 +67,8 @@ in
environment.systemPackages = [ pkgs.ntp ];
services.timesyncd.enable = mkForce false;
systemd.services.systemd-timedated.environment = { SYSTEMD_TIMEDATED_NTP_SERVICES = "ntpd.service"; };
users.users = singleton
{ name = ntpUser;
uid = config.ids.uids.ntp;

View File

@ -267,4 +267,6 @@ in
"ip46tables -t nat -D OUTPUT -p tcp ${redCond block} -j ${chain} 2>/dev/null || true"
) cfg.redsocks;
};
meta.maintainers = with lib.maintainers; [ ekleog ];
}

View File

@ -248,6 +248,14 @@ in {
</itemizedlist>
'';
ppk_id = mkOptionalStrParam ''
String identifying the Postquantum Preshared Key (PPK) to be used.
'';
ppk_required = mkYesNoParam no ''
Whether a Postquantum Preshared Key (PPK) is required for this connection.
'';
keyingtries = mkIntParam 1 ''
Number of retransmission sequences to perform during initial
connect. Instead of giving up initiation after the first retransmission
@ -922,6 +930,36 @@ in {
<literal>0xffffffff</literal>.
'';
set_mark_in = mkStrParam "0/0x00000000" ''
Netfilter mark applied to packets after the inbound IPsec SA processed
them. This way it's not necessary to mark packets via Netfilter before
decryption or right afterwards to match policies or process them
differently (e.g. via policy routing).
An additional mask may be appended to the mark, separated by
<literal>/</literal>. The default mask if omitted is 0xffffffff. The
special value <literal>%same</literal> uses the value (but not the mask)
from <option>mark_in</option> as mark value, which can be fixed,
<literal>%unique</literal> or <literal>%unique-dir</literal>.
Setting marks in XFRM input requires Linux 4.19 or higher.
'';
set_mark_out = mkStrParam "0/0x00000000" ''
Netfilter mark applied to packets after the outbound IPsec SA processed
them. This allows processing ESP packets differently than the original
traffic (e.g. via policy routing).
An additional mask may be appended to the mark, separated by
<literal>/</literal>. The default mask if omitted is 0xffffffff. The
special value <literal>%same</literal> uses the value (but not the mask)
from <option>mark_out</option> as mark value, which can be fixed,
<literal>%unique_</literal> or <literal>%unique-dir</literal>.
Setting marks in XFRM output is supported since Linux 4.14. Setting a
mask requires at least Linux 4.19.
'';
tfc_padding = mkParamOfType (with lib.types; either int (enum ["mtu"])) 0 ''
Pads ESP packets with additional data to have a consistent ESP packet
size for improved Traffic Flow Confidentiality. The padding defines the
@ -946,6 +984,33 @@ in {
supported, but the installation does not fail otherwise.
'';
copy_df = mkYesNoParam yes ''
Whether to copy the DF bit to the outer IPv4 header in tunnel mode. This
effectively disables Path MTU discovery (PMTUD). Controlling this
behavior is not supported by all kernel interfaces.
'';
copy_ecn = mkYesNoParam yes ''
Whether to copy the ECN (Explicit Congestion Notification) header field
to/from the outer IP header in tunnel mode. Controlling this behavior is
not supported by all kernel interfaces.
'';
copy_dscp = mkEnumParam [ "out" "in" "yes" "no" ] "out" ''
Whether to copy the DSCP (Differentiated Services Field Codepoint)
header field to/from the outer IP header in tunnel mode. The value
<literal>out</literal> only copies the field from the inner to the outer
header, the value <literal>in</literal> does the opposite and only
copies the field from the outer to the inner header when decapsulating,
the value <literal>yes</literal> copies the field in both directions,
and the value <literal>no</literal> disables copying the field
altogether. Setting this to <literal>yes</literal> or
<literal>in</literal> could allow an attacker to adversely affect other
traffic at the receiver, which is why the default is
<literal>out</literal>. Controlling this behavior is not supported by
all kernel interfaces.
'';
start_action = mkEnumParam ["none" "trap" "start"] "none" ''
Action to perform after loading the configuration.
<itemizedlist>
@ -1060,6 +1125,24 @@ in {
defined in a unique section having the <literal>ike</literal> prefix.
'';
ppk = mkPrefixedAttrsOfParams {
secret = mkOptionalStrParam ''
Value of the PPK. It may either be an ASCII string, a hex encoded string
if it has a <literal>0x</literal> prefix or a Base64 encoded string if
it has a <literal>0s</literal> prefix in its value. Should have at least
256 bits of entropy for 128-bit security.
'';
id = mkPrefixedAttrsOfParam (mkOptionalStrParam "") ''
PPK identity the PPK belongs to. Multiple unique identities may be
specified, each having an <literal>id</literal> prefix, if a secret is
shared between multiple peers.
'';
} ''
Postquantum Preshared Key (PPK) section for a specific secret. Each PPK is
defined in a unique section having the <literal>ppk</literal> prefix.
'';
private = mkPrefixedAttrsOfParams {
file = mkOptionalStrParam ''
File name in the private folder for which this passphrase should be used.
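The new ppk_id/ppk_required connection parameters and the ppk secrets section map onto the corresponding swanctl.conf sections. A rough sketch of how they might be wired together, assuming the module exposes them under services.strongswan-swanctl.connections and .secrets as the surrounding parameters suggest; the connection name, PPK identity and secret value are placeholders:

{ config, pkgs, ... }:
{
  services.strongswan-swanctl = {
    enable = true;
    connections.gw = {
      # request a Postquantum Preshared Key for this connection (placeholder id)
      ppk_id = "ppk-gw";
      ppk_required = true;
    };
    secrets.ppk.gw = {
      # 0x prefix marks a hex-encoded secret; the value is a placeholder
      secret = "0x4f2c...";
      id.main = "ppk-gw";
    };
  };
}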

View File

@ -62,9 +62,21 @@ in {
dataDir = mkOption {
type = types.path;
default = "/var/lib/syncthing";
description = ''
Path where synced directories will exist.
'';
};
configDir = mkOption {
type = types.path;
description = ''
Path where the settings and keys will exist.
'';
default =
let
nixos = config.system.stateVersion;
cond = versionAtLeast nixos "19.03";
in cfg.dataDir + (optionalString cond "/.config/syncthing");
};
openDefaultPorts = mkOption {
@ -144,7 +156,7 @@ in {
${cfg.package}/bin/syncthing \
-no-browser \
-gui-address=${cfg.guiAddress} \
-home=${cfg.dataDir}
-home=${cfg.configDir}
'';
};
};
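The new configDir option splits keys and settings out of dataDir, defaulting to dataDir plus /.config/syncthing on stateVersion 19.03 or later. A sketch pinning both paths explicitly so the layout does not depend on stateVersion; the paths shown are illustrative:

{ config, pkgs, ... }:
{
  services.syncthing = {
    enable = true;
    # synced folders live here
    dataDir = "/var/lib/syncthing";
    # keys and config.xml live here; passed to syncthing via -home= above
    configDir = "/var/lib/syncthing/.config/syncthing";
    openDefaultPorts = true;
  };
}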

View File

@ -39,7 +39,8 @@ in
systemd.services.zerotierone = {
description = "ZeroTierOne";
path = [ cfg.package ];
after = [ "network.target" ];
bindsTo = [ "network-online.target" ];
after = [ "network-online.target" ];
wantedBy = [ "multi-user.target" ];
preStart = ''
mkdir -p /var/lib/zerotier-one/networks.d

View File

@ -6,142 +6,105 @@ let
cfg = config.services.solr;
# Assemble all jars needed for solr
solrJars = pkgs.stdenv.mkDerivation {
name = "solr-jars";
src = pkgs.fetchurl {
url = http://archive.apache.org/dist/tomcat/tomcat-5/v5.5.36/bin/apache-tomcat-5.5.36.tar.gz;
sha256 = "01mzvh53wrs1p2ym765jwd00gl6kn8f9k3nhdrnhdqr8dhimfb2p";
};
installPhase = ''
mkdir -p $out/lib
cp common/lib/*.jar $out/lib/
ln -s ${pkgs.ant}/lib/ant/lib/ant.jar $out/lib/
ln -s ${cfg.solrPackage}/lib/ext/* $out/lib/
ln -s ${pkgs.jdk.home}/lib/tools.jar $out/lib/
'' + optionalString (cfg.extraJars != []) ''
for f in ${concatStringsSep " " cfg.extraJars}; do
cp $f $out/lib
done
'';
};
in {
in
{
options = {
services.solr = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Enables the solr service.
'';
};
enable = mkEnableOption "Enables the solr service.";
javaPackage = mkOption {
type = types.package;
default = pkgs.jre;
defaultText = "pkgs.jre";
description = ''
Which Java derivation to use for running solr.
'';
};
solrPackage = mkOption {
package = mkOption {
type = types.package;
default = pkgs.solr;
defaultText = "pkgs.solr";
description = ''
Which solr derivation to use for running solr.
'';
description = "Which Solr package to use.";
};
extraJars = mkOption {
type = types.listOf types.path;
default = [];
description = ''
List of paths pointing to jars. Jars are copied to commonLibFolder to be available to java/solr.
'';
port = mkOption {
type = types.int;
default = 8983;
description = "Port on which Solr is ran.";
};
log4jConfiguration = mkOption {
type = types.lines;
default = ''
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
'';
description = ''
Contents of the <literal>log4j.properties</literal> used. By default,
everything is logged to stdout (picked up by systemd) with level INFO.
'';
};
user = mkOption {
type = types.str;
description = ''
          The user that should run the solr process and own
          the working directories.
'';
};
group = mkOption {
type = types.str;
description = ''
The group that will own the working directory.
'';
};
solrHome = mkOption {
type = types.str;
description = ''
          The solr home directory. It is your own responsibility to
          make sure this directory contains a working solr configuration
          and is writable by the user running the solr service.
          Otherwise, solr will not start properly.
'';
stateDir = mkOption {
type = types.path;
default = "/var/lib/solr";
description = "The solr home directory containing config, data, and logging files.";
};
extraJavaOptions = mkOption {
type = types.listOf types.str;
default = [];
description = ''
Extra command line options given to the java process running
solr.
'';
description = "Extra command line options given to the java process running Solr.";
};
extraWinstoneOptions = mkOption {
type = types.listOf types.str;
default = [];
description = ''
          Extra command line options given to Winstone, which is
          the servlet container hosting solr.
'';
user = mkOption {
type = types.str;
default = "solr";
description = "User under which Solr is ran.";
};
group = mkOption {
type = types.str;
default = "solr";
description = "Group under which Solr is ran.";
};
};
};
config = mkIf cfg.enable {
services.winstone.solr = {
serviceName = "solr";
inherit (cfg) user group javaPackage;
warFile = "${cfg.solrPackage}/lib/solr.war";
extraOptions = [
"--commonLibFolder=${solrJars}/lib"
"--useJasper"
] ++ cfg.extraWinstoneOptions;
extraJavaOptions = [
"-Dsolr.solr.home=${cfg.solrHome}"
"-Dlog4j.configuration=file://${pkgs.writeText "log4j.properties" cfg.log4jConfiguration}"
] ++ cfg.extraJavaOptions;
environment.systemPackages = [ cfg.package ];
systemd.services.solr = {
after = [ "network.target" "remote-fs.target" "nss-lookup.target" "systemd-journald-dev-log.socket" ];
wantedBy = [ "multi-user.target" ];
environment = {
SOLR_HOME = "${cfg.stateDir}/data";
LOG4J_PROPS = "${cfg.stateDir}/log4j2.xml";
SOLR_LOGS_DIR = "${cfg.stateDir}/logs";
SOLR_PORT = "${toString cfg.port}";
};
path = with pkgs; [
gawk
procps
];
preStart = ''
mkdir -p "${cfg.stateDir}/data";
mkdir -p "${cfg.stateDir}/logs";
if ! test -e "${cfg.stateDir}/data/solr.xml"; then
install -D -m0640 ${cfg.package}/server/solr/solr.xml "${cfg.stateDir}/data/solr.xml"
install -D -m0640 ${cfg.package}/server/solr/zoo.cfg "${cfg.stateDir}/data/zoo.cfg"
fi
if ! test -e "${cfg.stateDir}/log4j2.xml"; then
install -D -m0640 ${cfg.package}/server/resources/log4j2.xml "${cfg.stateDir}/log4j2.xml"
fi
'';
serviceConfig = {
User = cfg.user;
Group = cfg.group;
ExecStart="${cfg.package}/bin/solr start -f -a \"${concatStringsSep " " cfg.extraJavaOptions}\"";
ExecStop="${cfg.package}/bin/solr stop";
};
};
users.users = optionalAttrs (cfg.user == "solr") (singleton
{ name = "solr";
group = cfg.group;
home = cfg.stateDir;
createHome = true;
uid = config.ids.uids.solr;
});
users.groups = optionalAttrs (cfg.group == "solr") (singleton
{ name = "solr";
gid = config.ids.gids.solr;
});
};
}
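The rewritten module runs the bundled `solr start -f` instead of deploying solr.war into Winstone. A minimal sketch of the new interface; stateDir is the default and is only shown to point out where preStart installs solr.xml, zoo.cfg and log4j2.xml on first boot:

{ config, pkgs, ... }:
{
  services.solr = {
    enable = true;
    package = pkgs.solr;
    port = 8983;
    # populated by preStart with solr.xml, zoo.cfg and log4j2.xml if missing
    stateDir = "/var/lib/solr";
    extraJavaOptions = [ "-Xmx2g" ];   # illustrative heap setting
  };
}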

View File

@ -70,7 +70,15 @@ in {
'';
};
nginx.enable = mkEnableOption "nginx vhost management";
nginx.enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable nginx virtual host management.
Further nginx configuration can be done by adapting <literal>services.nginx.virtualHosts.&lt;name&gt;</literal>.
See <xref linkend="opt-services.nginx.virtualHosts"/> for further information.
'';
};
webfinger = mkOption {
type = types.bool;
@ -184,7 +192,7 @@ in {
type = types.nullOr types.str;
default = null;
description = ''
Database password. Use <literal>adminpassFile</literal> to avoid this
Admin password. Use <literal>adminpassFile</literal> to avoid this
being world-readable in the <literal>/nix/store</literal>.
'';
};

View File

@ -46,7 +46,7 @@ let
configFile = pkgs.writeText "nginx.conf" ''
user ${cfg.user} ${cfg.group};
error_log stderr;
error_log ${cfg.logError};
daemon off;
${cfg.config}
@ -341,6 +341,35 @@ in
";
};
logError = mkOption {
default = "stderr";
description = "
Configures logging.
The first parameter defines a file that will store the log. The
special value stderr selects the standard error file. Logging to
syslog can be configured by specifying the syslog: prefix.
The second parameter determines the level of logging, and can be
one of the following: debug, info, notice, warn, error, crit,
alert, or emerg. Log levels above are listed in the order of
increasing severity. Setting a certain log level will cause all
messages of the specified and more severe log levels to be logged.
If this parameter is omitted then error is used.
";
};
preStart = mkOption {
type = types.lines;
default = ''
test -d ${cfg.stateDir}/logs || mkdir -m 750 -p ${cfg.stateDir}/logs
test `stat -c %a ${cfg.stateDir}` = "750" || chmod 750 ${cfg.stateDir}
test `stat -c %a ${cfg.stateDir}/logs` = "750" || chmod 750 ${cfg.stateDir}/logs
chown -R ${cfg.user}:${cfg.group} ${cfg.stateDir}
'';
description = "
Shell commands executed before the service's nginx is started.
";
};
config = mkOption {
default = "";
description = "
@ -608,9 +637,7 @@ in
stopIfChanged = false;
preStart =
''
mkdir -p ${cfg.stateDir}/logs
chmod 700 ${cfg.stateDir}
chown -R ${cfg.user}:${cfg.group} ${cfg.stateDir}
${cfg.preStart}
${cfg.package}/bin/nginx -c ${configFile} -p ${cfg.stateDir} -t
'';
serviceConfig = {
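The new logError option replaces the hard-coded `error_log stderr;`, and preStart is now overridable. A sketch showing a custom error log target and level, following the description above (first parameter: a file, stderr, or a syslog: prefix; second parameter: the level):

{ config, pkgs, ... }:
{
  services.nginx = {
    enable = true;
    # journald already captures stderr; raise the level from the implicit "error"
    logError = "stderr info";
    # alternatively log to a file, e.g. "/var/log/nginx/error.log warn"
  };
}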

View File

@ -31,10 +31,26 @@ in
'';
};
purifyOnStart = mkOption {
type = types.bool;
default = false;
description = ''
On startup, the `baseDir` directory is populated with various files,
subdirectories and symlinks. If this option is enabled, these items
(except for the `logs` and `work` subdirectories) are first removed.
This prevents interference from remainders of an old configuration
(libraries, webapps, etc.), so it's recommended to enable this option.
'';
};
baseDir = mkOption {
type = lib.types.path;
default = "/var/tomcat";
description = "Location where Tomcat stores configuration files, webapplications and logfiles";
description = ''
Location where Tomcat stores configuration files, web applications
and logfiles. Note that it is partially cleared on each service startup
if `purifyOnStart` is enabled.
'';
};
logDirs = mkOption {
@ -197,6 +213,15 @@ in
after = [ "network.target" ];
preStart = ''
${lib.optionalString cfg.purifyOnStart ''
# Delete most directories/symlinks we create from the existing base directory,
# to get rid of remainders of an old configuration.
# The list of directories to delete is taken from the "mkdir" command below,
# excluding "logs" (because logs are valuable) and "work" (because normally
# session files are there), and additionally including "bin".
rm -rf ${cfg.baseDir}/{conf,virtualhosts,temp,lib,shared/lib,webapps,bin}
''}
# Create the base directory
mkdir -p \
${cfg.baseDir}/{conf,virtualhosts,logs,temp,lib,shared/lib,webapps,work}
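A sketch of opting in to the new purifyOnStart behaviour; baseDir is the default value and is repeated here only to show which tree gets partially cleared:

{ config, pkgs, ... }:
{
  services.tomcat = {
    enable = true;
    # conf, virtualhosts, temp, lib, shared/lib, webapps and bin under baseDir are
    # removed by preStart before being repopulated; logs and work are kept
    purifyOnStart = true;
    baseDir = "/var/tomcat";
  };
}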

View File

@ -22,7 +22,7 @@ let
# This wrapper ensures that we actually get themes
makeWrapper ${pkgs.lightdm_gtk_greeter}/sbin/lightdm-gtk-greeter \
$out/greeter \
--prefix PATH : "${pkgs.glibc.bin}/bin" \
--prefix PATH : "${lib.getBin pkgs.stdenv.cc.libc}/bin" \
--set GDK_PIXBUF_MODULE_FILE "${pkgs.librsvg.out}/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache" \
--set GTK_PATH "${theme}:${pkgs.gtk3.out}" \
--set GTK_EXE_PREFIX "${theme}" \

View File

@ -21,7 +21,8 @@ let
[ coreutils
gnugrep
findutils
glibc # needed for getent
getent
stdenv.cc.libc # nscd in update-users-groups.pl
shadow
nettools # needed for hostname
utillinux # needed for mount and mountpoint

View File

@ -147,7 +147,7 @@ let
${config.boot.initrd.extraUtilsCommands}
# Copy ld manually since it isn't detected correctly
cp -pv ${pkgs.glibc.out}/lib/ld*.so.? $out/lib
cp -pv ${pkgs.stdenv.cc.libc.out}/lib/ld*.so.? $out/lib
# Copy all of the needed libraries
find $out/bin $out/lib -type f | while read BIN; do

View File

@ -112,6 +112,7 @@ in {
environment.etc."systemd/nspawn".source = generateUnits "nspawn" units [] [];
systemd.targets."multi-user".wants = [ "machines.target "];
};
}

View File

@ -387,7 +387,7 @@ let
logindHandlerType = types.enum [
"ignore" "poweroff" "reboot" "halt" "kexec" "suspend"
"hibernate" "hybrid-sleep" "lock"
"hibernate" "hybrid-sleep" "suspend-then-hibernate" "lock"
];
in
@ -587,6 +587,15 @@ in
'';
};
services.journald.forwardToSyslog = mkOption {
default = config.services.rsyslogd.enable || config.services.syslog-ng.enable;
defaultText = "config.services.rsyslogd.enable || config.services.syslog-ng.enable";
type = types.bool;
description = ''
Whether to forward log messages to syslog.
'';
};
services.logind.extraConfig = mkOption {
default = "";
type = types.lines;
@ -754,6 +763,9 @@ in
ForwardToConsole=yes
TTYPath=${config.services.journald.console}
''}
${optionalString (config.services.journald.forwardToSyslog) ''
ForwardToSyslog=yes
''}
${config.services.journald.extraConfig}
'';

View File

@ -53,7 +53,7 @@ let cfg = config.ec2; in
# Mount all formatted ephemeral disks and activate all swap devices.
# We cannot do this with the fileSystems and swapDevices options
# because the set of devices is dependent on the instance type
# (e.g. "m1.large" has one ephemeral filesystem and one swap device,
# (e.g. "m1.small" has one ephemeral filesystem and one swap device,
# while "m1.large" has two ephemeral filesystems and no swap
# devices). Also, put /tmp and /var on /disk0, since it has a lot
# more space than the root device. Similarly, "move" /nix to /disk0

View File

@ -243,6 +243,9 @@ let
Restart = "on-failure";
Slice = "machine.slice";
Delegate = true;
# Hack: we don't want to kill systemd-nspawn, since we call
# "machinectl poweroff" in preStop to shut down the
# container cleanly. But systemd requires sending a signal
@ -606,7 +609,7 @@ in
{ config =
{ config, pkgs, ... }:
{ services.postgresql.enable = true;
services.postgresql.package = pkgs.postgresql96;
services.postgresql.package = pkgs.postgresql_9_6;
system.stateVersion = "17.03";
};
@ -657,6 +660,8 @@ in
serviceConfig = serviceDirectives dummyConfig;
};
in {
systemd.targets."multi-user".wants = [ "machines.target" ];
systemd.services = listToAttrs (filter (x: x.value != null) (
# The generic container template used by imperative containers
[{ name = "container@"; value = unit; }]
@ -680,7 +685,7 @@ in
} // (
if config.autoStart then
{
wantedBy = [ "multi-user.target" ];
wantedBy = [ "machines.target" ];
wants = [ "network.target" ];
after = [ "network.target" ];
restartTriggers = [ config.path ];

View File

@ -0,0 +1,135 @@
{ config, lib, pkgs, ... }:
with lib;
with builtins;
let
cfg = config.virtualisation;
sanitizeImageName = image: replaceStrings ["/"] ["-"] image.imageName;
hash = drv: head (split "-" (baseNameOf drv.outPath));
# The label of an ext4 FS is limited to 16 bytes
labelFromImage = image: substring 0 16 (hash image);
# The Docker image is loaded and some files from /var/lib/docker/
# are written into a qcow image.
preload = image: pkgs.vmTools.runInLinuxVM (
pkgs.runCommand "docker-preload-image-${sanitizeImageName image}" {
buildInputs = with pkgs; [ docker e2fsprogs utillinux curl kmod ];
preVM = pkgs.vmTools.createEmptyImage {
size = cfg.dockerPreloader.qcowSize;
fullName = "docker-deamon-image.qcow2";
};
}
''
mkfs.ext4 /dev/vda
e2label /dev/vda ${labelFromImage image}
mkdir -p /var/lib/docker
mount -t ext4 /dev/vda /var/lib/docker
modprobe overlay
# from https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount
mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
cd /sys/fs/cgroup
for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
mkdir -p $sys
if ! mountpoint -q $sys; then
if ! mount -n -t cgroup -o $sys cgroup $sys; then
rmdir $sys || true
fi
fi
done
dockerd -H tcp://127.0.0.1:5555 -H unix:///var/run/docker.sock &
until $(curl --output /dev/null --silent --connect-timeout 2 http://127.0.0.1:5555); do
printf '.'
sleep 1
done
docker load -i ${image}
kill %1
find /var/lib/docker/ -maxdepth 1 -mindepth 1 -not -name "image" -not -name "overlay2" | xargs rm -rf
'');
preloadedImages = map preload cfg.dockerPreloader.images;
in
{
options.virtualisation.dockerPreloader = {
images = mkOption {
default = [ ];
type = types.listOf types.package;
description =
''
A list of Docker images to preload (in the /var/lib/docker directory).
'';
};
qcowSize = mkOption {
default = 1024;
type = types.int;
description =
''
The size (MB) of qcow files.
'';
};
};
config = {
assertions = [{
      # If docker.storageDriver is null, Docker chooses the storage
# driver. So, in this case, we cannot be sure overlay2 is used.
assertion = cfg.dockerPreloader.images == []
|| cfg.docker.storageDriver == "overlay2"
|| cfg.docker.storageDriver == "overlay"
|| cfg.docker.storageDriver == null;
message = "The Docker image Preloader only works with overlay2 storage driver!";
}];
virtualisation.qemu.options =
map (path: "-drive if=virtio,file=${path}/disk-image.qcow2,readonly,media=cdrom,format=qcow2")
preloadedImages;
# All attached QCOW files are mounted and their contents are linked
    # to /var/lib/docker/ in order to make the images available.
systemd.services.docker-preloader = {
description = "Preloaded Docker images";
wantedBy = ["docker.service"];
after = ["network.target"];
path = with pkgs; [ mount rsync jq ];
script = ''
mkdir -p /var/lib/docker/overlay2/l /var/lib/docker/image/overlay2
echo '{}' > /tmp/repositories.json
for i in ${concatStringsSep " " (map labelFromImage cfg.dockerPreloader.images)}; do
mkdir -p /mnt/docker-images/$i
# The ext4 label is limited to 16 bytes
mount /dev/disk/by-label/$(echo $i | cut -c1-16) -o ro,noload /mnt/docker-images/$i
find /mnt/docker-images/$i/overlay2/ -maxdepth 1 -mindepth 1 -not -name l\
-exec ln -s '{}' /var/lib/docker/overlay2/ \;
cp -P /mnt/docker-images/$i/overlay2/l/* /var/lib/docker/overlay2/l/
rsync -a /mnt/docker-images/$i/image/ /var/lib/docker/image/
# Accumulate image definitions
cp /tmp/repositories.json /tmp/repositories.json.tmp
jq -s '.[0] * .[1]' \
/tmp/repositories.json.tmp \
/mnt/docker-images/$i/image/overlay2/repositories.json \
> /tmp/repositories.json
done
mv /tmp/repositories.json /var/lib/docker/image/overlay2/repositories.json
'';
serviceConfig = {
Type = "oneshot";
};
};
};
}
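A sketch of preloading a locally built image into a test VM through the new virtualisation.dockerPreloader options; the dockerTools.buildImage arguments are illustrative and the storage driver is pinned so the module's assertion above holds:

{ config, pkgs, ... }:
let
  # illustrative image; any derivation producing a `docker load`-able tarball works
  redisImage = pkgs.dockerTools.buildImage {
    name = "redis";
    tag = "latest";
    contents = pkgs.redis;
  };
in
{
  virtualisation.docker.enable = true;
  # the assertion above requires overlay/overlay2 (or an unset storage driver)
  virtualisation.docker.storageDriver = "overlay2";
  virtualisation.dockerPreloader = {
    images = [ redisImage ];
    qcowSize = 2048;   # MB per preloaded image disk
  };
}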

View File

@ -144,7 +144,6 @@ in
path = with pkgs; [ iproute ];
serviceConfig = {
ExecStart = "${gce}/bin/google_network_daemon --debug";
Type = "oneshot";
};
};

View File

@ -196,6 +196,8 @@ in {
wantedBy = [ "multi-user.target" ];
path = with pkgs; [ coreutils libvirt gawk ];
restartIfChanged = false;
environment.ON_SHUTDOWN = "${cfg.onShutdown}";
};
systemd.sockets.virtlogd = {

View File

@ -1,4 +1,4 @@
{ config, lib, pkgs, pkgs_i686, ... }:
{ config, lib, pkgs, ... }:
with lib;
@ -64,7 +64,7 @@ in
};
hardware.opengl.package = prl-tools;
hardware.opengl.package32 = pkgs_i686.linuxPackages.prl-tools.override { libsOnly = true; kernel = null; };
hardware.opengl.package32 = pkgs.pkgsi686Linux.linuxPackages.prl-tools.override { libsOnly = true; kernel = null; };
services.udev.packages = [ prl-tools ];

View File

@ -185,7 +185,10 @@ let
in
{
imports = [ ../profiles/qemu-guest.nix ];
imports = [
../profiles/qemu-guest.nix
./docker-preloader.nix
];
options = {

View File

@ -12,7 +12,7 @@ in {
virtualbox = {
baseImageSize = mkOption {
type = types.int;
default = 10 * 1024;
default = 50 * 1024;
description = ''
The size of the VirtualBox base image in MiB.
'';
@ -61,7 +61,7 @@ in {
export HOME=$PWD
export PATH=${pkgs.virtualbox}/bin:$PATH
echo "creating VirtualBox pass-through disk wrapper (no copying invovled)..."
echo "creating VirtualBox pass-through disk wrapper (no copying involved)..."
VBoxManage internalcommands createrawvmdk -filename disk.vmdk -rawdisk $diskImage
echo "creating VirtualBox VM..."
@ -72,9 +72,9 @@ in {
--memory ${toString cfg.memorySize} --acpi on --vram 32 \
${optionalString (pkgs.stdenv.hostPlatform.system == "i686-linux") "--pae on"} \
--nictype1 virtio --nic1 nat \
--audiocontroller ac97 --audio alsa \
--audiocontroller ac97 --audio alsa --audioout on \
--rtcuseutc on \
--usb on --mouse usbtablet
--usb on --usbehci on --mouse usbtablet
VBoxManage storagectl "$vmName" --name SATA --add sata --portcount 4 --bootable on --hostiocache on
VBoxManage storageattach "$vmName" --storagectl SATA --port 0 --device 0 --type hdd \
--medium disk.vmdk
@ -82,7 +82,7 @@ in {
echo "exporting VirtualBox VM..."
mkdir -p $out
fn="$out/${cfg.vmFileName}"
VBoxManage export "$vmName" --output "$fn"
VBoxManage export "$vmName" --output "$fn" --options manifest
rm -v $diskImage

View File

@ -283,6 +283,7 @@ in rec {
tests.docker-tools = callTestOnMatchingSystems ["x86_64-linux"] tests/docker-tools.nix {};
tests.docker-tools-overlay = callTestOnMatchingSystems ["x86_64-linux"] tests/docker-tools-overlay.nix {};
tests.docker-edge = callTestOnMatchingSystems ["x86_64-linux"] tests/docker-edge.nix {};
tests.docker-preloader = callTestOnMatchingSystems ["x86_64-linux"] tests/docker-preloader.nix {};
tests.docker-registry = callTest tests/docker-registry.nix {};
tests.dovecot = callTest tests/dovecot.nix {};
tests.dnscrypt-proxy = callTestOnMatchingSystems ["x86_64-linux"] tests/dnscrypt-proxy.nix {};
@ -300,7 +301,7 @@ in rec {
tests.fsck = callTest tests/fsck.nix {};
tests.fwupd = callTest tests/fwupd.nix {};
tests.gdk-pixbuf = callTest tests/gdk-pixbuf.nix {};
#tests.gitlab = callTest tests/gitlab.nix {};
tests.gitlab = callTest tests/gitlab.nix {};
tests.gitolite = callTest tests/gitolite.nix {};
tests.gjs = callTest tests/gjs.nix {};
tests.gocd-agent = callTest tests/gocd-agent.nix {};
@ -399,6 +400,7 @@ in rec {
tests.radicale = callTest tests/radicale.nix {};
tests.redmine = callTest tests/redmine.nix {};
tests.rspamd = callSubTests tests/rspamd.nix {};
tests.rsyslogd = callSubTests tests/rsyslogd.nix {};
tests.runInMachine = callTest tests/run-in-machine.nix {};
tests.rxe = callTest tests/rxe.nix {};
tests.samba = callTest tests/samba.nix {};
@ -408,6 +410,7 @@ in rec {
tests.slurm = callTest tests/slurm.nix {};
tests.smokeping = callTest tests/smokeping.nix {};
tests.snapper = callTest tests/snapper.nix {};
tests.solr = callTest tests/solr.nix {};
#tests.statsd = callTest tests/statsd.nix {}; # statsd is broken: #45946
tests.strongswan-swanctl = callTest tests/strongswan-swanctl.nix {};
tests.sudo = callTest tests/sudo.nix {};
@ -467,7 +470,7 @@ in rec {
{ services.httpd.enable = true;
services.httpd.adminAddr = "foo@example.org";
services.postgresql.enable = true;
services.postgresql.package = pkgs.postgresql93;
services.postgresql.package = pkgs.postgresql_9_3;
environment.systemPackages = [ pkgs.php ];
});
};

View File

@ -10,9 +10,8 @@ import ./make-test.nix ({pkgs, ...}: rec {
emptyDiskImages = [ 20480 20480 ];
vlans = [ 1 ];
};
networking = {
firewall.allowPing = true;
useDHCP = false;
interfaces.eth1.ipv4.addresses = pkgs.lib.mkOverride 0 [
{ address = "192.168.1.1"; prefixLength = 24; }
@ -54,7 +53,7 @@ import ./make-test.nix ({pkgs, ...}: rec {
};
};
};
testScript = { ... }: ''
startAll;
@ -83,7 +82,7 @@ import ./make-test.nix ({pkgs, ...}: rec {
# Can't check ceph status until a mon is up
$aio->succeed("ceph -s | grep 'mon: 1 daemons'");
# Start the ceph-mgr daemon, it has no deps and hardly any setup
$aio->mustSucceed(
"ceph auth get-or-create mgr.aio mon 'allow profile mgr' osd 'allow *' mds 'allow *' > /var/lib/ceph/mgr/ceph-aio/keyring",

Some files were not shown because too many files have changed in this diff.