Merge commit '703a9f93c1254f7bdf0350ca0462de0d78033c62' into gcc-simplify-flags

John Ericson 2017-12-05 17:58:16 -05:00
commit 43d5c5d6db
1287 changed files with 31668 additions and 79990 deletions

.github/CODEOWNERS

@ -80,3 +80,7 @@
# Eclipse
/pkgs/applications/editors/eclipse @rycee
# https://github.com/NixOS/nixpkgs/issues/31401
/lib/maintainers.nix @ghost
/lib/licenses.nix @ghost


@ -12,11 +12,11 @@ under the terms of [COPYING](../COPYING), which is an MIT-like license.
## Submitting changes
* Format the commits in the following way:
* Format the commit messages in the following way:
```
(pkg-name | nixos/<module>): (from -> to | init at version | refactor | etc)
(Motivation for change. Additional information.)
```
@ -25,10 +25,10 @@ under the terms of [COPYING](../COPYING), which is an MIT-like license.
* nginx: init at 2.0.1
* firefox: 54.0.1 -> 55.0
* nixos/hydra: add bazBaz option
Dual baz behavior is needed to do foo.
* nixos/nginx: refactor config generation
The old config generation system used impure shell scripts and could break in specific circumstances (see #1234).
* `meta.description` should:


@ -8,7 +8,5 @@
## Technical details
* System: (NixOS: `nixos-version`, Ubuntu/Fedora: `lsb_release -a`, ...)
* Nix version: (run `nix-env --version`)
* Nixpkgs version: (run `nix-instantiate --eval '<nixpkgs>' -A lib.nixpkgsVersion`)
* Sandboxing enabled: (run `grep build-use-sandbox /etc/nix/nix.conf`)
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the
results.


@ -1,35 +0,0 @@
language: nix
sudo: true
# 'sudo: false' == containers that start fast, but only get 4G ram;
# 'sudo: true' == VMs that start slow, but with 8G
# ..as per: https://docs.travis-ci.com/user/ci-environment/#Virtualization-environments
# Nixpkgs PR tests OOM with 4G: https://github.com/NixOS/nixpkgs/issues/24200
matrix:
include:
- os: linux
sudo: required
script:
- ./maintainers/scripts/travis-nox-review-pr.sh nixpkgs-verify nixpkgs-manual nixpkgs-tarball nixpkgs-unstable
- ./maintainers/scripts/travis-nox-review-pr.sh nixos-options nixos-manual
env:
- BUILD_TYPE="Test Nixpkgs evaluation & NixOS manual build"
- os: linux
sudo: required
dist: trusty
before_script:
- sudo mount -o remount,exec,size=2G,mode=755 /run/user
script: ./maintainers/scripts/travis-nox-review-pr.sh nox pr
env:
- BUILD_TYPE="Build affected packages (Linux)"
- os: osx
osx_image: xcode7.3
script: ./maintainers/scripts/travis-nox-review-pr.sh nox pr
env:
- BUILD_TYPE="Build affected packages (macOS)"
env:
global:
- GITHUB_TOKEN=5edaaf1017f691ed34e7f80878f8f5fbd071603f
notifications:
email: false


@ -1,10 +1,9 @@
[<img src="http://nixos.org/logo/nixos-hires.png" width="500px" alt="logo" />](https://nixos.org/nixos)
[<img src="https://nixos.org/logo/nixos-hires.png" width="500px" alt="logo" />](https://nixos.org/nixos)
[![Build Status](https://travis-ci.org/NixOS/nixpkgs.svg?branch=master)](https://travis-ci.org/NixOS/nixpkgs)
[![Code Triagers Badge](https://www.codetriage.com/nixos/nixpkgs/badges/users.svg)](https://www.codetriage.com/nixos/nixpkgs)
Nixpkgs is a collection of packages for the [Nix](https://nixos.org/nix/) package
manager. It is periodically built and tested by the [hydra](http://hydra.nixos.org/)
manager. It is periodically built and tested by the [Hydra](https://hydra.nixos.org/)
build daemon as so-called channels. To get channel information via git, add
[nixpkgs-channels](https://github.com/NixOS/nixpkgs-channels.git) as a remote:
@ -23,7 +22,7 @@ release and `nixos-unstable` for the latest successful build of master:
For pull-requests, please rebase onto nixpkgs `master`.
[NixOS](https://nixos.org/nixos/) linux distribution source code is located inside
[NixOS](https://nixos.org/nixos/) Linux distribution source code is located inside
`nixos/` folder.
* [NixOS installation instructions](https://nixos.org/nixos/manual/#ch-installation)


@ -661,8 +661,6 @@ src = fetchFromGitHub {
</section>
<section xml:id="sec-patches"><title>Patches</title>
<para>Only patches that are unique to <literal>nixpkgs</literal> should be
included in <literal>nixpkgs</literal> source.</para>
<para>Patches available online should be retrieved using
<literal>fetchpatch</literal>.</para>
<para>
@ -676,5 +674,30 @@ patches = [
];
</programlisting>
</para>
<para>Otherwise, you can add a <literal>.patch</literal> file to the
<literal>nixpkgs</literal> repository. In the interest of keeping our
maintenance burden to a minimum, only patches that are unique
to <literal>nixpkgs</literal> should be added in this way.</para>
<para><programlisting>
patches = [ ./0001-changes.patch ];
</programlisting></para>
<para>If you do need to do create this sort of patch file,
one way to do so is with git:
<orderedlist>
<listitem><para>Move to the root directory of the source code
you're patching.<screen>
$ cd the/program/source</screen></para></listitem>
<listitem><para>If a git repository is not already present,
create one and stage all of the source files.<screen>
$ git init
$ git add .</screen></para></listitem>
<listitem><para>Edit some files to make whatever changes need
to be included in the patch.</para></listitem>
<listitem><para>Use git to create a diff, and pipe the output
to a patch file:<screen>
$ git diff > nixpkgs/pkgs/the/package/0001-changes.patch</screen>
</para></listitem>
</orderedlist></para>
</section>
</chapter>


@ -11,7 +11,7 @@
For example, a typical use of cross compilation is to compile programs for embedded devices.
These devices often don't have the computing power and memory to compile their own programs.
One might think that cross-compilation is a fairly niche concern, but there are advantages to being rigorous about distinguishing build-time vs run-time environments even when one is developing and deploying on the same machine.
Nixpkgs is increasingly adopting this opinion in that packages should be written with cross-compilation in mind, and nixpkgs should evaluate in a similar way (by minimizing cross-compilation-specific special cases) whether or not one is cross-compiling.
Nixpkgs is increasingly adopting the opinion that packages should be written with cross-compilation in mind, and nixpkgs should evaluate in a similar way (by minimizing cross-compilation-specific special cases) whether or not one is cross-compiling.
</para>
<para>
@ -30,11 +30,11 @@
<section>
<title>Platform parameters</title>
<para>
The three GNU Autoconf platforms, <wordasword>build</wordasword>, <wordasword>host</wordasword>, and <wordasword>target</wordasword>, are historically the result of much confusion.
<link xlink:href="https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html" /> clears this up somewhat but there is more to be said.
An important advice to get out the way is, unless you are packaging a compiler or other build tool, just worry about the build and host platforms.
Dealing with just two platforms usually better matches people's preconceptions, and in this case is completely correct.
Nixpkgs follows the <link xlink:href="https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html">common historical convention of GNU autoconf</link> of distinguishing between 3 types of platform: <wordasword>build</wordasword>, <wordasword>host</wordasword>, and <wordasword>target</wordasword>.
In summary, <wordasword>build</wordasword> is the platform on which a package is being built, <wordasword>host</wordasword> is the platform on which it is to run. The third attribute, <wordasword>target</wordasword>, is relevant only for certain specific compilers and build tools.
</para>
<para>
In Nixpkgs, these three platforms are defined as attribute sets under the names <literal>buildPlatform</literal>, <literal>hostPlatform</literal>, and <literal>targetPlatform</literal>.
All three are always defined as attributes in the standard environment, and at the top level. That means one can get at them just like a dependency in a function that is imported with <literal>callPackage</literal>:
@ -52,7 +52,7 @@
<varlistentry>
<term><varname>hostPlatform</varname></term>
<listitem><para>
The "host platform" is the platform on which a package is run.
The "host platform" is the platform on which a package will be run.
This is the simplest platform to understand, but also the one with the worst name.
</para></listitem>
</varlistentry>
@ -60,22 +60,24 @@
<term><varname>targetPlatform</varname></term>
<listitem>
<para>
The "target platform" is black sheep.
The other two intrinsically apply to all compiled software—or any build process with a notion of "build-time" followed by "run-time".
The target platform only applies to programming tools, and even then only is a good for for some of them.
Briefly, GCC, Binutils, GHC, and certain other tools are written in such a way such that a single build can only compile code for a single platform.
Thus, when building them, one must think ahead about which platforms they wish to use the tool to produce machine code for, and build binaries for each.
The "target platform" attribute is, unlike the other two attributes, not actually fundamental to the process of building software.
Instead, it is only relevant for compatibility with building certain specific compilers and build tools.
It can be safely ignored for all other packages.
</para>
<para>
There is no fundamental need to think about the target ahead of time like this.
LLVM, for example, was designed from the beginning with cross-compilation in mind, and so a normal LLVM binary will support every architecture that LLVM supports.
If the tool supports modular or pluggable backends, one might imagine specifying a <emphasis>set</emphasis> of target platforms / backends one wishes to support, rather than a single one.
The build process of certain compilers is written in such a way that the compiler resulting from a single build can itself only produce binaries for a single platform.
The task of specifying this single "target platform" is thus pushed to the build time of the compiler.
The root cause of this mistake is often that the compiler (which will be run on the host) and the standard library/runtime (which will be run on the target) are built by a single build process.
</para>
<para>
The biggest reason for mess, if there is one, is that many compilers have the bad habit a build process that builds the compiler and standard library/runtime together.
Then the specifying target platform is essential, because it determines the host platform of the standard library/runtime.
Nixpkgs tries to avoid this where possible too, but still, because the concept of a target platform is so ingrained now in Autoconf and other tools, it is best to support it as is.
Tools like LLVM that don't need up-front target platforms can safely ignore it like normal packages, and it will do no harm.
There is no fundamental need to think about a single target ahead of time like this.
If the tool supports modular or pluggable backends, both the need to specify the target at build time and the constraint of having only a single target disappear.
An example of such a tool is LLVM.
</para>
<para>
Although the existence of a "target platform" is arguably a historical mistake, it is a common one: examples of tools that suffer from it are GCC, Binutils, GHC and Autoconf.
Nixpkgs tries to avoid sharing in the mistake where possible.
Still, because the concept of a target platform is so ingrained, it is best to support it as is.
</para>
</listitem>
</varlistentry>
@ -155,14 +157,22 @@
<section>
<title>Specifying Dependencies</title>
<para>
As mentioned in the introduction to this chapter, one can think about a build time vs run time distinction whether cross-compiling or not.
In the case of cross-compilation, this corresponds with whether a derivation running on the native or foreign platform is produced.
An interesting thing to think about is how this corresponds with the three Autoconf platforms.
In the run-time case, the depending and depended-on package simply have matching build, host, and target platforms.
But in the build-time case, one can imagine "sliding" the platforms one over.
The depended-on package's host and target platforms (respectively) become the depending package's build and host platforms.
This is the most important guiding principle behind cross-compilation with Nixpkgs, and will be called the <wordasword>sliding window principle</wordasword>.
In this section we explore the relationship between runtime and build-time dependencies and the three Autoconf platforms.
</para>
<para>
A runtime dependency between two packages implies that both the host and target platforms of those packages match.
This follows directly from the meaning of "host platform" and "runtime dependency":
the package dependency exists while both packages are running on a single host platform.
</para>
<para>
A build time dependency, however, implies a shift in platforms between the depending package and the depended-on package.
The meaning of a build time dependency is that to build the depending package we need to be able to run the depended-on package.
The depending package's build platform is therefore equal to the depended-on package's host platform.
Analogously, the depending package's host platform is equal to the depended-on package's target platform.
</para>
<para>
In this manner, given the three platforms for one package, we can determine the three platforms for all its transitive dependencies.
This is the most important guiding principle behind cross-compilation with Nixpkgs, and will be called the <wordasword>sliding window principle</wordasword>.
</para>
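<para>
A minimal sketch of the sliding window (illustrative only; strings stand in for the platform attribute sets that Nixpkgs actually uses, and the values are hypothetical):
<programlisting>
rec {
  # The depending package: built on x86_64-linux, will run on aarch64-linux.
  depending = {
    buildPlatform = "x86_64-linux";
    hostPlatform  = "aarch64-linux";
  };
  # A build time dependency such as its compiler slides one platform over.
  compiler = {
    buildPlatform  = "x86_64-linux";          # itself built natively on the same machine
    hostPlatform   = depending.buildPlatform; # must run where the depending package is built
    targetPlatform = depending.hostPlatform;  # emits code for where the depending package runs
  };
  # A runtime dependency, by contrast, simply shares the depending package's platforms.
}
</programlisting>
</para>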
<para>
Some examples will probably make this clearer.


@ -48,7 +48,7 @@ trouble with packages like `3dmodels` and `4Blocks`, because these names are
invalid identifiers in the Nix language. The issue of how to deal with these
rare corner cases is currently unresolved.)
Haskell packages who's Nix name (second column) begins with a `haskell-` prefix
Haskell packages whose Nix name (second column) begins with a `haskell-` prefix
are packages that provide a library whereas packages without that prefix
provide just executables. Libraries may provide executables too, though: the
package `haskell-pandoc`, for example, installs both a library and an


@ -530,7 +530,6 @@ Based on the packages defined in `pkgs/top-level/python-packages.nix` an
attribute set is created for each available Python interpreter. The available
sets are
* `pkgs.python26Packages`
* `pkgs.python27Packages`
* `pkgs.python34Packages`
* `pkgs.python35Packages`
@ -540,7 +539,7 @@ sets are
and the aliases
* `pkgs.python2Packages` pointing to `pkgs.python27Packages`
* `pkgs.python3Packages` pointing to `pkgs.python35Packages`
* `pkgs.python3Packages` pointing to `pkgs.python36Packages`
* `pkgs.pythonPackages` pointing to `pkgs.python2Packages`
#### `buildPythonPackage` function


@ -9,11 +9,12 @@ date: 2017-03-05
To install the rust compiler and cargo put
```
rust
rustc
cargo
```
into the `environment.systemPackages` or bring them into
scope with `nix-shell -p rust`.
scope with `nix-shell -p rustc cargo`.
For daily builds (beta and nightly) use either rustup from
nixpkgs or use the [Rust nightlies
@ -24,9 +25,7 @@ overlay](#using-the-rust-nightlies-overlay).
Rust applications are packaged by using the `buildRustPackage` helper from `rustPlatform`:
```
with rustPlatform;
buildRustPackage rec {
rustPlatform.buildRustPackage rec {
name = "ripgrep-${version}";
version = "0.4.0";
@ -40,9 +39,9 @@ buildRustPackage rec {
cargoSha256 = "0q68qyl2h6i0qsz82z840myxlnjay8p1w5z7hfyr8fqp7wgwa9cx";
meta = with stdenv.lib; {
description = "A utility that combines the usability of The Silver Searcher with the raw speed of grep";
description = "A fast line-oriented regex search tool, similar to ag and ack";
homepage = https://github.com/BurntSushi/ripgrep;
license = with licenses; [ unlicense ];
license = licenses.unlicense;
maintainers = [ maintainers.tailhook ];
platforms = platforms.all;
};


@ -68,7 +68,7 @@
<varlistentry><term><varname>
$outputDevdoc</varname></term><listitem><para>
is for <emphasis>developer</emphasis> documentation. Currently we count gtk-doc in there. It goes to <varname>devdoc</varname> or is removed (!) by default. This is because e.g. gtk-doc tends to be rather large and completely unused by nixpkgs users.
is for <emphasis>developer</emphasis> documentation. Currently we count gtk-doc and devhelp books in there. It goes to <varname>devdoc</varname> or is removed (!) by default. This is because e.g. gtk-doc tends to be rather large and completely unused by nixpkgs users.
</para></listitem></varlistentry>
<varlistentry><term><varname>


@ -231,7 +231,8 @@ genericBuild
<listitem><para>
Like <varname>nativeBuildInputs</varname>, but these dependencies are <emphasis>propagated</emphasis>:
that is, the dependencies listed here are added to the <varname>nativeBuildInputs</varname> of any package that uses <emphasis>this</emphasis> package as a dependency.
So if package Y has <literal>propagatedBuildInputs = [X]</literal>, and package Z has <literal>buildInputs = [Y]</literal>, then package X will appear in Zs build environment automatically.
So if package Y has <literal>propagatedNativeBuildInputs = [X]</literal>, and package Z has <literal>nativeBuildInputs = [Y]</literal>,
then package X will appear in Z's build environment automatically.
</para></listitem>
</varlistentry>
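<para>
A minimal sketch of that propagation, using hypothetical packages X, Y, and Z (schematic only; real derivations would also need sources and build phases, and <varname>stdenv</varname> is assumed to be in scope):
<programlisting>
let
  X = stdenv.mkDerivation { name = "X"; };
  Y = stdenv.mkDerivation {
    name = "Y";
    propagatedNativeBuildInputs = [ X ];   # X is handed on to Y's dependents
  };
  Z = stdenv.mkDerivation {
    name = "Z";
    nativeBuildInputs = [ Y ];             # pulling in Y also puts X into Z's build environment
  };
in Z
</programlisting>
</para>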


@ -41,7 +41,6 @@ let
generators = callLibs ./generators.nix;
misc = callLibs ./deprecated.nix;
# domain-specific
sandbox = callLibs ./sandbox.nix;
fetchers = callLibs ./fetchers.nix;
# Eval-time filesystem handling


@ -22,11 +22,15 @@ rec {
* character sep. If sep appears in k, it is escaped.
* Helper for syntaxes with different separators.
*
* mkKeyValueDefault ":" "f:oo" "bar"
* mkValueString specifies how values should be formatted.
*
* mkKeyValueDefault {} ":" "f:oo" "bar"
* > "f\:oo:bar"
*/
mkKeyValueDefault = sep: k: v:
"${libStr.escape [sep] k}${sep}${toString v}";
mkKeyValueDefault = {
mkValueString ? toString
}: sep: k: v:
"${libStr.escape [sep] k}${sep}${mkValueString v}";
/* Generate a key-value-style config file from an attrset.
@ -34,7 +38,7 @@ rec {
* mkKeyValue is the same as in toINI.
*/
toKeyValue = {
mkKeyValue ? mkKeyValueDefault "="
mkKeyValue ? mkKeyValueDefault {} "="
}: attrs:
let mkLine = k: v: mkKeyValue k v + "\n";
in libStr.concatStrings (libAttr.mapAttrsToList mkLine attrs);
@ -64,7 +68,7 @@ rec {
# apply transformations (e.g. escapes) to section names
mkSectionName ? (name: libStr.escape [ "[" "]" ] name),
# format a setting line from key and value
mkKeyValue ? mkKeyValueDefault "="
mkKeyValue ? mkKeyValueDefault {} "="
}: attrsOfAttrs:
let
# map function to string for each key val
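A short sketch of the reworked `mkKeyValueDefault` interface shown above (the first call mirrors the doc comment and the test further below; the second, with a custom `mkValueString`, is illustrative):
```
# Default behaviour: values rendered with toString, separator escaped in the key.
lib.generators.mkKeyValueDefault {} ":" "f:oo" "bar"
# => "f\:oo:bar"

# Custom value rendering, e.g. quoting every value:
lib.generators.mkKeyValueDefault { mkValueString = v: ''"${toString v}"''; } "=" "answer" 42
# => "answer=\"42\""
```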


@ -15,7 +15,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
afl21 = spdx {
spdxId = "AFL-2.1";
fullName = "Academic Free License";
fullName = "Academic Free License v2.1";
};
afl3 = spdx {
spdxId = "AFL-3.0";
fullName = "Academic Free License v3.0";
};
agpl3 = spdx {
@ -426,6 +431,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "Notion modified LGPL";
};
nposl3 = spdx {
spdxId = "NPOSL-3.0";
fullName = "Non-Profit Open Software License 3.0";
};
ofl = spdx {
spdxId = "OFL-1.1";
fullName = "SIL Open Font License 1.1";
@ -441,6 +451,16 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "OpenSSL License";
};
osl21 = spdx {
spdxId = "OSL-2.1";
fullName = "Open Software License 2.1";
};
osl3 = spdx {
spdxId = "OSL-3.0";
fullName = "Open Software License 3.0";
};
php301 = spdx {
spdxId = "PHP-3.01";
fullName = "PHP License v3.01";


@ -62,11 +62,12 @@
asppsa = "Alastair Pharo <asppsa@gmail.com>";
astsmtl = "Alexander Tsamutali <astsmtl@yandex.ru>";
asymmetric = "Lorenzo Manacorda <lorenzo@mailbox.org>";
aszlig = "aszlig <aszlig@redmoonstudios.org>";
aszlig = "aszlig <aszlig@nix.build>";
auntie = "Jonathan Glines <auntieNeo@gmail.com>";
avnik = "Alexander V. Nikolaev <avn@avnik.info>";
aycanirican = "Aycan iRiCAN <iricanaycan@gmail.com>";
bachp = "Pascal Bach <pascal.bach@nextrem.ch>";
backuitist = "Bruno Bieth";
badi = "Badi' Abdul-Wahid <abdulwahidc@gmail.com>";
balajisivaraman = "Balaji Sivaraman <sivaraman.balaji@gmail.com>";
barrucadu = "Michael Walker <mike@barrucadu.co.uk>";
@ -111,6 +112,7 @@
changlinli = "Changlin Li <mail@changlinli.com>";
chaoflow = "Florian Friesdorf <flo@chaoflow.net>";
chattered = "Phil Scott <me@philscotted.com>";
ChengCat = "Yucheng Zhang <yu@cheng.cat>";
choochootrain = "Hurshal Patel <hurshal@imap.cc>";
chpatrick = "Patrick Chilton <chpatrick@gmail.com>";
chris-martin = "Chris Martin <ch.martin@gmail.com>";
@ -118,6 +120,7 @@
chrisrosset = "Christopher Rosset <chris@rosset.org.uk>";
christopherpoole = "Christopher Mark Poole <mail@christopherpoole.net>";
ciil = "Simon Lackerbauer <simon@lackerbauer.com>";
ck3d = "Christian Kögler <ck3d@gmx.de>";
ckampka = "Christian Kampka <christian@kampka.net>";
ckauhaus = "Christian Kauhaus <christian@kauhaus.de>";
cko = "Christine Koppelt <christine.koppelt@gmail.com>";
@ -144,6 +147,7 @@
DamienCassou = "Damien Cassou <damien@cassou.me>";
danbst = "Danylo Hlynskyi <abcz2.uprola@gmail.com>";
dancek = "Hannu Hartikainen <hannu.hartikainen@gmail.com>";
danharaj = "Dan Haraj <dan@obsidian.systems>";
danielfullmer = "Daniel Fullmer <danielrf12@gmail.com>";
dasuxullebt = "Christoph-Simon Senjak <christoph.senjak@googlemail.com>";
david50407 = "David Kuo <me@davy.tw>";
@ -218,6 +222,7 @@
falsifian = "James Cook <james.cook@utoronto.ca>";
fare = "Francois-Rene Rideau <fahree@gmail.com>";
fgaz = "Francesco Gazzetta <francygazz@gmail.com>";
FireyFly = "Jonas Höglund <nix@firefly.nu>";
flokli = "Florian Klink <flokli@flokli.de>";
florianjacob = "Florian Jacob <projects+nixos@florianjacob.de>";
flosse = "Markus Kohlhase <mail@markus-kohlhase.de>";
@ -282,6 +287,7 @@
infinisil = "Silvan Mosberger <infinisil@icloud.com>";
ironpinguin = "Michele Catalano <michele@catalano.de>";
ivan-tkatchev = "Ivan Tkatchev <tkatchev@gmail.com>";
ixmatus = "Parnell Springmeyer <parnell@digitalmentat.com>";
j-keck = "Jürgen Keck <jhyphenkeck@gmail.com>";
jagajaga = "Arseniy Seroka <ars.seroka@gmail.com>";
jammerful = "jammerful <jammerful@gmail.com>";
@ -325,6 +331,7 @@
kaiha = "Kai Harries <kai.harries@gmail.com>";
kamilchm = "Kamil Chmielewski <kamil.chm@gmail.com>";
kampfschlaefer = "Arnold Krille <arnold@arnoldarts.de>";
karolchmist = "karolchmist <info+nix@chmist.com>";
kentjames = "James Kent <jameschristopherkent@gmail.com>";
kevincox = "Kevin Cox <kevincox@kevincox.ca>";
khumba = "Bryan Gardiner <bog@khumba.net>";
@ -409,11 +416,13 @@
michelk = "Michel Kuhlmann <michel@kuhlmanns.info>";
midchildan = "midchildan <midchildan+nix@gmail.com>";
mikefaille = "Michaël Faille <michael@faille.io>";
mikoim = "Eshin Kunishima <ek@esh.ink>";
miltador = "Vasiliy Solovey <miltador@yandex.ua>";
mimadrid = "Miguel Madrid <mimadrid@ucm.es>";
mirdhyn = "Merlin Gaillard <mirdhyn@gmail.com>";
mirrexagon = "Andrew Abbott <mirrexagon@mirrexagon.com>";
mjanczyk = "Marcin Janczyk <m@dragonvr.pl>";
mjp = "Mike Playle <mike@mythik.co.uk>"; # github = "MikePlayle";
mlieberman85 = "Michael Lieberman <mlieberman85@gmail.com>";
modulistic = "Pablo Costa <modulistic@gmail.com>";
mog = "Matthew O'Gorman <mog-lists@rldn.net>";
@ -481,6 +490,7 @@
patternspandemic = "Brad Christensen <patternspandemic@live.com>";
pawelpacana = "Paweł Pacana <pawel.pacana@gmail.com>";
pbogdan = "Piotr Bogdan <ppbogdan@gmail.com>";
pcarrier = "Pierre Carrier <pc@rrier.ca>";
periklis = "theopompos@gmail.com";
pesterhazy = "Paulus Esterhazy <pesterhazy@gmail.com>";
peterhoeg = "Peter Hoeg <peter@hoeg.com>";
@ -491,6 +501,7 @@
Phlogistique = "Noé Rubinstein <noe.rubinstein@gmail.com>";
phreedom = "Evgeny Egorochkin <phreedom@yandex.ru>";
phunehehe = "Hoang Xuan Phu <phunehehe@gmail.com>";
pierrechevalier83 = "Pierre Chevalier <pierrechevalier83@gmail.com>";
pierrer = "Pierre Radermecker <pierrer@pi3r.be>";
pierron = "Nicolas B. Pierron <nixos@nbp.name>";
piotr = "Piotr Pietraszkiewicz <ppietrasa@gmail.com>";
@ -633,6 +644,7 @@
theuni = "Christian Theune <ct@flyingcircus.io>";
ThomasMader = "Thomas Mader <thomas.mader@gmail.com>";
thoughtpolice = "Austin Seipp <aseipp@pobox.com>";
thpham = "Thomas Pham <thomas.pham@ithings.ch>";
timbertson = "Tim Cuthbertson <tim@gfxmonk.net>";
timokau = "Timo Kaufmann <timokau@zoho.com>";
titanous = "Jonathan Rudenberg <jonathan@titanous.com>";
@ -690,6 +702,7 @@
womfoo = "Kranium Gikos Mendoza <kranium@gikos.net>";
wscott = "Wayne Scott <wsc9tt@gmail.com>";
wyvie = "Elijah Rum <elijahrum@gmail.com>";
xaverdh = "Dominik Xaver Hörl <hoe.dom@gmx.de>";
xnwdd = "Guillermo NWDD <nwdd+nixos@no.team>";
xvapx = "Marti Serra <marti.serra.coscollano@gmail.com>";
xwvvvvwx = "David Terry <davidterry@posteo.de>";
@ -699,6 +712,7 @@
ylwghst = "Burim Augustin Berisa <ylwghst@onionmail.info>";
yochai = "Yochai <yochai@titat.info>";
yorickvp = "Yorick van Pelt <yorickvanpelt@gmail.com>";
yrashk = "Yurii Rashkovskii <yrashk@gmail.com>";
yuriaisaka = "Yuri Aisaka <yuri.aisaka+nix@gmail.com>";
yurrriq = "Eric Bailey <eric@ericb.me>";
z77z = "Marco Maggesi <maggesi@math.unifi.it>";


@ -1,48 +0,0 @@
{ lib }:
with lib.strings;
/* Helpers for creating lisp S-exprs for the Apple sandbox
lib.sandbox.allowFileRead [ "/usr/bin/file" ];
# => "(allow file-read* (literal \"/usr/bin/file\"))";
lib.sandbox.allowFileRead {
literal = [ "/usr/bin/file" ];
subpath = [ "/usr/lib/system" ];
}
# => "(allow file-read* (literal \"/usr/bin/file\") (subpath \"/usr/lib/system\"))"
*/
let
sexp = tokens: "(" + builtins.concatStringsSep " " tokens + ")";
generateFileList = files:
if builtins.isList files
then concatMapStringsSep " " (x: sexp [ "literal" ''"${x}"'' ]) files
else if builtins.isString files
then generateFileList [ files ]
else concatStringsSep " " (
(map (x: sexp [ "literal" ''"${x}"'' ]) (files.literal or [])) ++
(map (x: sexp [ "subpath" ''"${x}"'' ]) (files.subpath or []))
);
applyToFiles = f: act: files: f "${act} ${generateFileList files}";
genActions = actionName: let
action = feature: sexp [ actionName feature ];
self = {
"${actionName}" = action;
"${actionName}File" = applyToFiles action "file*";
"${actionName}FileRead" = applyToFiles action "file-read*";
"${actionName}FileReadMetadata" = applyToFiles action "file-read-metadata";
"${actionName}DirectoryList" = self."${actionName}FileReadMetadata";
"${actionName}FileWrite" = applyToFiles action "file-write*";
"${actionName}FileWriteMetadata" = applyToFiles action "file-write-metadata";
};
in self;
in
genActions "allow" // genActions "deny" // {
importProfile = derivation: ''
(import "${derivation}")
'';
}


@ -201,7 +201,7 @@ runTests {
# in alphabetical order
testMkKeyValueDefault = {
expr = generators.mkKeyValueDefault ":" "f:oo" "bar";
expr = generators.mkKeyValueDefault {} ":" "f:oo" "bar";
expected = ''f\:oo:bar'';
};


@ -1,82 +0,0 @@
#! /usr/bin/env bash
set -e
while test -n "$1"; do
# tell Travis to use folding
echo -en "travis_fold:start:$1\r"
case $1 in
nixpkgs-verify)
echo "=== Verifying that nixpkgs evaluates..."
nix-env --file $TRAVIS_BUILD_DIR --query --available --json > /dev/null
;;
nixos-options)
echo "=== Checking NixOS options"
nix-build $TRAVIS_BUILD_DIR/nixos/release.nix --attr options --show-trace
;;
nixos-manual)
echo "=== Checking NixOS manuals"
nix-build $TRAVIS_BUILD_DIR/nixos/release.nix --attr manual --show-trace
;;
nixpkgs-manual)
echo "=== Checking nixpkgs manuals"
nix-build $TRAVIS_BUILD_DIR/pkgs/top-level/release.nix --attr manual --show-trace
;;
nixpkgs-tarball)
echo "=== Checking nixpkgs tarball creation"
nix-build $TRAVIS_BUILD_DIR/pkgs/top-level/release.nix --attr tarball --show-trace
;;
nixpkgs-unstable)
echo "=== Checking nixpkgs unstable job"
nix-instantiate $TRAVIS_BUILD_DIR/pkgs/top-level/release.nix --attr unstable --show-trace
;;
nixpkgs-lint)
echo "=== Checking nixpkgs lint"
nix-shell --packages nixpkgs-lint --run "nixpkgs-lint -f $TRAVIS_BUILD_DIR"
;;
nox)
echo "=== Fetching Nox from binary cache"
# build nox (+ a basic nix-shell env) silently so it's not in the log
nix-shell -p nox stdenv --command true
;;
pr)
if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
echo "=== No pull request found"
else
echo "=== Building pull request #$TRAVIS_PULL_REQUEST"
token=""
if [ -n "$GITHUB_TOKEN" ]; then
token="--token $GITHUB_TOKEN"
fi
nix-shell --packages nox --run "nox-review pr --slug $TRAVIS_REPO_SLUG $token $TRAVIS_PULL_REQUEST"
fi
;;
*)
echo "Skipping unknown option $1"
;;
esac
echo -en "travis_fold:end:$1\r"
shift
done


@ -12,7 +12,7 @@ management. In the declarative style, users are specified in
states that a user account named <literal>alice</literal> shall exist:
<programlisting>
users.extraUsers.alice =
users.users.alice =
{ isNormalUser = true;
home = "/home/alice";
description = "Alice Foobar";
@ -34,7 +34,7 @@ to set a password, which is retained across invocations of
<para>If you set users.mutableUsers to false, then the contents of /etc/passwd
and /etc/group will be congruent to your NixOS configuration. For instance,
if you remove a user from users.extraUsers and run nixos-rebuild, the user
if you remove a user from users.users and run nixos-rebuild, the user
account will cease to exist. Also, imperative commands for managing users
and groups, such as useradd, are no longer available. Passwords may still be
assigned by setting the user's <literal>hashedPassword</literal> option. A
@ -54,7 +54,7 @@ to the user specification.</para>
group named <literal>students</literal> shall exist:
<programlisting>
users.extraGroups.students.gid = 1000;
users.groups.students.gid = 1000;
</programlisting>
As with users, the group ID (gid) is optional and will be assigned
@ -68,8 +68,8 @@ account named <literal>alice</literal>:
<screen>
# useradd -m alice</screen>
To make all nix tools available to this new user use `su - USER` which
opens a login shell (==shell that loads the profile) for given user.
To make all nix tools available to this new user use `su - USER` which
opens a login shell (==shell that loads the profile) for given user.
This will create the ~/.nix-defexpr symlink. So run:
<screen>


@ -106,13 +106,43 @@ let
xmllint --xinclude --noxincludenode \
--output ./man-pages-combined.xml ./man-pages.xml
xmllint --debug --noout --nonet \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
manual-combined.xml
xmllint --debug --noout --nonet \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
man-pages-combined.xml
# outputs the context of an xmllint error output
# LEN lines around the failing line are printed
function context {
# length of context
local LEN=6
# lines to print before error line
local BEFORE=4
# xmllint output lines are:
# file.xml:1234: there was an error on line 1234
while IFS=':' read -r file line rest; do
echo
if [[ -n "$rest" ]]; then
echo "$file:$line:$rest"
local FROM=$(($line>$BEFORE ? $line - $BEFORE : 1))
# number lines & filter context
nl --body-numbering=a "$file" | sed -n "$FROM,+$LEN p"
else
if [[ -n "$line" ]]; then
echo "$file:$line"
else
echo "$file"
fi
fi
done
}
function lintrng {
xmllint --debug --noout --nonet \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
"$1" \
2>&1 | context 1>&2
# ^ redirect assumes xmllint doesn't print to stdout
}
lintrng manual-combined.xml
lintrng man-pages-combined.xml
mkdir $out
cp manual-combined.xml $out/


@ -262,8 +262,18 @@ startAll;
<literal>waitForWindow(qr/Terminal/)</literal>.</para></listitem>
</varlistentry>
<varlistentry>
<term><methodname>copyFileFromHost</methodname></term>
<listitem><para>Copies a file from host to machine, e.g.,
<literal>copyFileFromHost("myfile", "/etc/my/important/file")</literal>.</para>
<para>The first argument is the file on the host. The file needs to be
accessible while building the nix derivation. The second argument is
the location of the file on the machine.</para>
</listitem>
</varlistentry>
</variablelist>
</para>
</section>
</section>


@ -72,6 +72,29 @@ following incompatible changes:</para>
<option>services.pgmanage</option>.
</para>
</listitem>
<listitem>
<para>
<emphasis role="strong">
The OpenSSH service no longer enables support for DSA keys by default,
which could cause a system lockout. Update your keys or, as a less favorable option,
re-enable DSA support manually.
</emphasis>
</para>
<para>
DSA support was
<link xlink:href="https://www.openssh.com/legacy.html">deprecated in OpenSSH 7.0</link>,
due to it being too weak. To re-enable support, add
<literal>PubkeyAcceptedKeyTypes +ssh-dss</literal> to the end of your
<option>services.openssh.extraConfig</option>.
</para>
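<para>
A minimal sketch of that workaround in <filename>configuration.nix</filename> (only if you cannot rotate the affected keys yet):
<programlisting>
services.openssh.extraConfig = ''
  PubkeyAcceptedKeyTypes +ssh-dss
'';
</programlisting>
</para>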
<para>
After updating the keys to be stronger, anyone still on a pre-17.03
version is safe to jump to 17.03, as vetted
<link xlink:href="https://search.nix.gsc.io/?q=stateVersion">here</link>.
</para>
</listitem>
</itemizedlist>
</section>
@ -86,8 +109,26 @@ following incompatible changes:</para>
<itemizedlist>
<listitem>
<para>
ZNC option <option>services.znc.mutable</option> now defaults to <literal>true</literal>.
That means that old configuration is not overwritten by default when update to the znc options are made.
ZNC option <option>services.znc.mutable</option> now defaults to
<literal>true</literal>. That means that old configuration is not
overwritten by default when updates to the znc options are made.
</para>
</listitem>
<listitem>
<para>
The option <option>networking.wireless.networks.&lt;name&gt;.auth</option>
has been added for wireless networks with WPA-Enterprise authentication.
There is also a new <option>extraConfig</option> option to directly
configure <literal>wpa_supplicant</literal> and <option>hidden</option>
to connect to hidden networks.
</para>
</listitem>
<listitem>
<para>
The option <option>services.xserver.desktopManager.default</option> is now <literal>none</literal> by default.
An assertion failure is thrown if both the WM's and DM's defaults are <literal>none</literal>.
To explicitly run a plain X session without any DM or WM, the newly introduced option <option>services.xserver.plainX</option>
must be set to true.
</para>
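<para>
For example, a plain X session without a desktop manager or window manager would now need (sketch):
<programlisting>
services.xserver.plainX = true;
</programlisting>
</para>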
</listitem>
</itemizedlist>


@ -33,18 +33,23 @@
, name ? "nixos-disk-image"
, # Disk image format, one of qcow2, vpc, raw.
, # Disk image format, one of qcow2, qcow2-compressed, vpc, raw.
format ? "raw"
}:
with lib;
let
extensions = {
let format' = format; in let
format = if (format' == "qcow2-compressed") then "qcow2" else format';
compress = optionalString (format' == "qcow2-compressed") "-c";
filename = "nixos." + {
qcow2 = "qcow2";
vpc = "vhd";
raw = "img";
};
}.${format};
nixpkgs = cleanSource pkgs.path;
@ -125,7 +130,7 @@ let
fakeroot nixos-prepare-root $root ${channelSources} ${config.system.build.toplevel} closure
echo "copying staging root to image..."
cptofs ${pkgs.lib.optionalString partitioned "-P 1"} -t ${fsType} -i $diskImage $root/* /
cptofs ${optionalString partitioned "-P 1"} -t ${fsType} -i $diskImage $root/* /
'';
in pkgs.vmTools.runInLinuxVM (
pkgs.runCommand name
@ -134,12 +139,11 @@ in pkgs.vmTools.runInLinuxVM (
exportReferencesGraph = [ "closure" metaClosure ];
postVM = ''
${if format == "raw" then ''
mv $diskImage $out/nixos.img
diskImage=$out/nixos.img
mv $diskImage $out/${filename}
'' else ''
${pkgs.qemu}/bin/qemu-img convert -f raw -O ${format} $diskImage $out/nixos.${extensions.${format}}
diskImage=$out/nixos.${extensions.${format}}
${pkgs.qemu}/bin/qemu-img convert -f raw -O ${format} ${compress} $diskImage $out/${filename}
''}
diskImage=$out/${filename}
${postVM}
'';
memSize = 1024;
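Assuming this is the <nixpkgs/nixos/lib/make-disk-image.nix> helper, the new format value could be requested roughly as follows (a sketch; every argument except `format` is an assumption about the call site):
```
# Sketch: build a compressed qcow2 image instead of the default raw image.
import <nixpkgs/nixos/lib/make-disk-image.nix> {
  inherit pkgs lib config;        # assumed to be available at the call site
  format = "qcow2-compressed";    # converted with `qemu-img convert -c` as shown above
}
```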


@ -146,6 +146,7 @@ sub start {
($self->{allowReboot} ? "" : "-no-reboot ") .
"-monitor unix:./monitor -chardev socket,id=shell,path=./shell " .
"-device virtio-serial -device virtconsole,chardev=shell " .
"-device virtio-rng-pci " .
($showGraphics ? "-serial stdio" : "-nographic") . " " . ($ENV{QEMU_OPTS} || "");
chdir $self->{stateDir} or die;
exec $self->{startCommand};


@ -113,8 +113,7 @@ rec {
--add-flags "''${vms[*]}" \
${lib.optionalString enableOCR
"--prefix PATH : '${ocrProg}/bin:${imagemagick}/bin'"} \
--run "testScript=\"\$(cat $out/test-script)\"" \
--set testScript '$testScript' \
--run "export testScript=\"\$(cat $out/test-script)\"" \
--set VLANS '${toString vlans}'
ln -s ${testDriver}/bin/nixos-test-driver $out/bin/nixos-run-vms
wrapProgram $out/bin/nixos-run-vms \


@ -52,6 +52,8 @@ let
</fontconfig>
'';
localConf = pkgs.writeText "fc-local.conf" cfg.localConf;
# The configuration to be included in /etc/fonts/
penultimateConf = pkgs.runCommand "font-penultimate-conf" {} ''
support_folder=$out/etc/fonts/conf.d
@ -107,6 +109,12 @@ let
$latest_folder/51-local.conf \
--replace local.conf /etc/fonts/${latestVersion}/local.conf
# local.conf (indirect priority 51)
${optionalString (cfg.localConf != "") ''
ln -s ${localConf} $out/etc/fonts/local.conf
ln -s ${localConf} $out/etc/fonts/${latestVersion}/local.conf
''}
ln -s ${defaultFontsConf} $support_folder/52-default-fonts.conf
ln -s ${defaultFontsConf} $latest_folder/52-default-fonts.conf


@ -45,7 +45,7 @@ let
uid = ids.uids.pulseaudio;
gid = ids.gids.pulseaudio;
stateDir = "/var/run/pulse";
stateDir = "/run/pulse";
# Create pulse/client.conf even if PulseAudio is disabled so
# that we can disable the autospawn feature in programs that
@ -219,6 +219,12 @@ in {
{ target = "pulse/daemon.conf";
source = writeText "daemon.conf" (lib.generators.toKeyValue {} cfg.daemon.config); }
{ target = "openal/alsoft.conf";
source = writeText "alsoft.conf" "drivers=pulse"; }
{ target = "libao.conf";
source = writeText "libao.conf" "default_driver=pulse"; }
];
# Allow PulseAudio to get realtime priority using rtkit.


@ -582,13 +582,15 @@ in {
{
environment = {
etc = mapAttrs' (name: { packages, ... }: {
name = "per-user-pkgs/${name}";
value.source = pkgs.symlinkJoin {
name = "per-user-pkgs.${name}";
name = "profiles/per-user/${name}";
value.source = pkgs.buildEnv {
name = "user-environment";
paths = packages;
inherit (config.environment) pathsToLink extraOutputsToInstall;
inherit (config.system.path) ignoreCollisions postBuild;
};
}) (filterAttrs (_: { packages, ... }: packages != []) cfg.users);
profiles = ["/etc/per-user-pkgs/$LOGNAME"];
profiles = ["/etc/profiles/per-user/$USER"];
};
}
];


@ -40,16 +40,6 @@ in
sdImage = {
populateBootCommands = let
# Contains a couple of fixes for booting a Linux kernel, will hopefully appear upstream soon.
patchedUboot = pkgs.ubootRaspberryPi3_64bit.overrideAttrs (oldAttrs: {
src = pkgs.fetchFromGitHub {
owner = "dezgeg";
repo = "u-boot";
rev = "baab53ec244fe44def01948a0f10e67342d401e6";
sha256 = "0r5j2pc42ws3w3im0a9c6bh01czz5kapqrqp0ik9ra823cw73lxr";
};
});
configTxt = pkgs.writeText "config.txt" ''
kernel=u-boot-rpi3.bin
arm_control=0x200
@ -57,7 +47,7 @@ in
'';
in ''
(cd ${pkgs.raspberrypifw}/share/raspberrypi/boot && cp bootcode.bin fixup*.dat start*.elf $NIX_BUILD_TOP/boot/)
cp ${patchedUboot}/u-boot.bin boot/u-boot-rpi3.bin
cp ${pkgs.ubootRaspberryPi3_64bit}/u-boot.bin boot/u-boot-rpi3.bin
cp ${configTxt} boot/config.txt
${extlinux-conf-builder} -t 3 -c ${config.system.build.toplevel} -d ./boot
'';


@ -291,7 +291,7 @@ if test "$(evalOpt "_type" 2> /dev/null)" = '"option"'; then
fi
echo "Description:"
echo
eval printf $(evalOpt "description")
echo $(evalOpt "description")
echo $desc;


@ -300,6 +300,7 @@
kanboard = 281;
pykms = 282;
kodi = 283;
restya-board = 284;
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
@ -568,6 +569,7 @@
kanboard = 281;
pykms = 282;
kodi = 283;
restya-board = 284;
# When adding a gid, make sure it doesn't match an existing
# uid. Users and groups with the same name should have equal


@ -193,6 +193,8 @@
./services/databases/stanchion.nix
./services/databases/virtuoso.nix
./services/desktops/accountsservice.nix
./services/desktops/dleyna-renderer.nix
./services/desktops/dleyna-server.nix
./services/desktops/geoclue2.nix
./services/desktops/gnome3/at-spi2-core.nix
./services/desktops/gnome3/evolution-data-server.nix
@ -208,6 +210,7 @@
./services/desktops/gnome3/seahorse.nix
./services/desktops/gnome3/sushi.nix
./services/desktops/gnome3/tracker.nix
./services/desktops/gnome3/tracker-miners.nix
./services/desktops/profile-sync-daemon.nix
./services/desktops/telepathy.nix
./services/development/hoogle.nix
@ -224,6 +227,7 @@
./services/hardware/bluetooth.nix
./services/hardware/brltty.nix
./services/hardware/freefall.nix
./services/hardware/fwupd.nix
./services/hardware/illum.nix
./services/hardware/interception-tools.nix
./services/hardware/irqbalance.nix
@ -482,6 +486,7 @@
./services/networking/networkmanager.nix
./services/networking/nftables.nix
./services/networking/ngircd.nix
./services/networking/nghttpx/default.nix
./services/networking/nix-serve.nix
./services/networking/nntp-proxy.nix
./services/networking/nsd.nix
@ -605,6 +610,7 @@
./services/web-apps/pgpkeyserver-lite.nix
./services/web-apps/piwik.nix
./services/web-apps/pump.io.nix
./services/web-apps/restya-board.nix
./services/web-apps/tt-rss.nix
./services/web-apps/selfoss.nix
./services/web-apps/quassel-webserver.nix
@ -639,6 +645,7 @@
./services/x11/display-managers/sddm.nix
./services/x11/display-managers/slim.nix
./services/x11/display-managers/xpra.nix
./services/x11/fractalart.nix
./services/x11/hardware/libinput.nix
./services/x11/hardware/multitouch.nix
./services/x11/hardware/synaptics.nix


@ -4,7 +4,7 @@
{ config, pkgs, ... }:
{
boot.initrd.availableKernelModules = [ "virtio_net" "virtio_pci" "virtio_blk" "virtio_scsi" "9p" "9pnet_virtio" ];
boot.initrd.availableKernelModules = [ "virtio_net" "virtio_pci" "virtio_mmio" "virtio_blk" "virtio_scsi" "9p" "9pnet_virtio" ];
boot.initrd.kernelModules = [ "virtio_balloon" "virtio_console" "virtio_rng" ];
boot.initrd.postDeviceCommands =


@ -197,8 +197,9 @@ in
fi
'';
# Configuration for readline in bash.
environment.etc."inputrc".source = ./inputrc;
# Configuration for readline in bash. We use "option default"
# priority to allow user override using both .text and .source.
environment.etc."inputrc".source = mkOptionDefault ./inputrc;
users.defaultUserShell = mkDefault pkgs.bashInteractive;
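A sketch of the user-side override this now permits (contents illustrative):
```
# In a machine's configuration.nix: with the lower "option default" priority
# above, this definition now wins instead of clashing with the bundled file.
environment.etc."inputrc".text = ''
  set editing-mode vi
'';
```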


@ -8,8 +8,11 @@ let
swayWrapped = pkgs.writeScriptBin "sway" ''
#! ${pkgs.stdenv.shell}
if [ "$1" != "" ]; then
sway-setcap "$@"
exit
fi
${cfg.extraSessionCommands}
PATH="${sway}/bin:$PATH"
exec ${pkgs.dbus.dbus-launch} --exit-with-session sway-setcap
'';
swayJoined = pkgs.symlinkJoin {


@ -89,8 +89,8 @@ in
description = ''
Enable zsh-autosuggestions
'';
type = types.bool;
};
};
};


@ -48,6 +48,10 @@ with lib;
(mkRemovedOptionModule [ "services" "rmilter" "bindInetSockets" ] "Use services.rmilter.bindSocket.* instead")
(mkRemovedOptionModule [ "services" "rmilter" "bindUnixSockets" ] "Use services.rmilter.bindSocket.* instead")
# Xsession script
(mkRenamedOptionModule [ "services" "xserver" "displayManager" "job" "logsXsession" ] [ "services" "xserver" "displayManager" "job" "logToFile" ])
(mkRenamedOptionModule [ "services" "xserver" "displayManager" "logToJournal" ] [ "services" "xserver" "displayManager" "job" "logToJournal" ])
# Old Grub-related options.
(mkRenamedOptionModule [ "boot" "initrd" "extraKernelModules" ] [ "boot" "initrd" "kernelModules" ])
(mkRenamedOptionModule [ "boot" "extraKernelParams" ] [ "boot" "kernelParams" ])


@ -41,7 +41,7 @@ let
type = types.bool;
description = ''
If set, users listed in
<filename>~/.yubico/u2f_keys</filename> are able to log in
<filename>~/.config/Yubico/u2f_keys</filename> are able to log in
with the associated U2F key.
'';
};


@ -4,14 +4,6 @@ with lib;
let
cfg = config.services.buildkite-agent;
configFile = pkgs.writeText "buildkite-agent.cfg"
''
token="${cfg.token}"
name="${cfg.name}"
meta-data="${cfg.meta-data}"
hooks-path="${cfg.package}/share/hooks"
build-path="${cfg.dataDir}"
'';
in
{
@ -39,10 +31,13 @@ in
type = types.listOf types.package;
};
token = mkOption {
type = types.str;
tokenPath = mkOption {
type = types.path;
description = ''
The token from your Buildkite "Agents" page.
A run-time path to the token file, which is supposed to be provisioned
outside of the Nix store.
'';
};
@ -62,16 +57,22 @@ in
};
openssh =
{ privateKey = mkOption {
type = types.str;
{ privateKeyPath = mkOption {
type = types.path;
description = ''
Private agent key.
A run-time path to the key file, which is supposed to be provisioned
outside of the Nix store.
'';
};
publicKey = mkOption {
type = types.str;
publicKeyPath = mkOption {
type = types.path;
description = ''
Public agent key.
A run-time path to the key file, which is supposed to be provisioned
outside of the Nix store.
'';
};
};
@ -84,11 +85,15 @@ in
home = cfg.dataDir;
createHome = true;
description = "Buildkite agent user";
extraGroups = [ "keys" ];
};
environment.systemPackages = [ cfg.package ];
systemd.services.buildkite-agent =
let copy = x: target: perms:
"cp -f ${x} ${target}; ${pkgs.coreutils}/bin/chmod ${toString perms} ${target}; ";
in
{ description = "Buildkite Agent";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
@ -97,18 +102,26 @@ in
HOME = cfg.dataDir;
NIX_REMOTE = "daemon";
};
## NB: maximum care is taken so that secrets (ssh keys and the CI token)
## don't end up in the Nix store.
preStart = ''
${pkgs.coreutils}/bin/mkdir -m 0700 -p ${cfg.dataDir}/.ssh
${pkgs.coreutils}/bin/mkdir -m 0700 -p ${cfg.dataDir}/.ssh
${copy (toString cfg.openssh.privateKeyPath) "${cfg.dataDir}/.ssh/id_rsa" 600}
${copy (toString cfg.openssh.publicKeyPath) "${cfg.dataDir}/.ssh/id_rsa.pub" 600}
echo "${cfg.openssh.privateKey}" > ${cfg.dataDir}/.ssh/id_rsa
${pkgs.coreutils}/bin/chmod 600 ${cfg.dataDir}/.ssh/id_rsa
echo "${cfg.openssh.publicKey}" > ${cfg.dataDir}/.ssh/id_rsa.pub
${pkgs.coreutils}/bin/chmod 600 ${cfg.dataDir}/.ssh/id_rsa.pub
'';
cat > "${cfg.dataDir}/buildkite-agent.cfg" <<EOF
token="$(cat ${toString cfg.tokenPath})"
name="${cfg.name}"
meta-data="${cfg.meta-data}"
hooks-path="${pkgs.buildkite-agent}/share/hooks"
build-path="${cfg.dataDir}/builds"
bootstrap-script="${pkgs.buildkite-agent}/share/bootstrap.sh"
EOF
'';
serviceConfig =
{ ExecStart = "${pkgs.buildkite-agent}/bin/buildkite-agent start --config ${configFile}";
{ ExecStart = "${pkgs.buildkite-agent}/bin/buildkite-agent start --config /var/lib/buildkite-agent/buildkite-agent.cfg";
User = "buildkite-agent";
RestartSec = 5;
Restart = "on-failure";
@ -116,4 +129,9 @@ in
};
};
};
imports = [
(mkRenamedOptionModule [ "services" "buildkite-agent" "token" ] [ "services" "buildkite-agent" "tokenPath" ])
(mkRenamedOptionModule [ "services" "buildkite-agent" "openssh" "privateKey" ] [ "services" "buildkite-agent" "openssh" "privateKeyPath" ])
(mkRenamedOptionModule [ "services" "buildkite-agent" "openssh" "publicKey" ] [ "services" "buildkite-agent" "openssh" "publicKeyPath" ])
];
}
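A sketch of the reworked options (paths are placeholders for files provisioned outside the Nix store, as the descriptions above require; the enable option is assumed and not shown in this diff):
```
services.buildkite-agent = {
  enable = true;                                          # assumed option
  tokenPath = "/run/keys/buildkite-agent-token";          # placeholder path
  openssh.privateKeyPath = "/run/keys/buildkite-ssh";     # placeholder path
  openssh.publicKeyPath  = "/run/keys/buildkite-ssh.pub"; # placeholder path
};
```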


@ -28,6 +28,7 @@ let
serverEnv = env //
{ HYDRA_TRACKER = cfg.tracker;
XDG_CACHE_HOME = "${baseDir}/www/.cache";
COLUMNS = "80";
PGPASSFILE = "${baseDir}/pgpass-www"; # grrr
} // (optionalAttrs cfg.debugServer { DBIC_TRACE = "1"; });
@ -225,14 +226,14 @@ in
services.hydra.extraConfig =
''
using_frontend_proxy 1
base_uri ${cfg.hydraURL}
notification_sender ${cfg.notificationSender}
max_servers 25
using_frontend_proxy = 1
base_uri = ${cfg.hydraURL}
notification_sender = ${cfg.notificationSender}
max_servers = 25
${optionalString (cfg.logo != null) ''
hydra_logo ${cfg.logo}
hydra_logo = ${cfg.logo}
''}
gc_roots_dir ${cfg.gcRootsDir}
gc_roots_dir = ${cfg.gcRootsDir}
use-substitutes = ${if cfg.useSubstitutes then "1" else "0"}
'';


@ -7,6 +7,11 @@ let
cfg = config.services.mysql;
mysql = cfg.package;
isMariaDB =
let
pName = _p: (builtins.parseDrvName (_p.name)).name;
in pName mysql == pName pkgs.mariadb;
atLeast55 = versionAtLeast mysql.mysqlVersion "5.5";
@ -59,7 +64,7 @@ in
type = types.package;
example = literalExample "pkgs.mysql";
description = "
Which MySQL derivation to use.
Which MySQL derivation to use. MariaDB packages are supported too.
";
};
@ -360,7 +365,7 @@ in
${concatMapStrings (user:
''
( echo "CREATE USER IF NOT EXISTS '${user.name}'@'localhost' IDENTIFIED WITH ${if mysql == pkgs.mariadb then "unix_socket" else "auth_socket"};"
( echo "CREATE USER IF NOT EXISTS '${user.name}'@'localhost' IDENTIFIED WITH ${if isMariaDB then "unix_socket" else "auth_socket"};"
${concatStringsSep "\n" (mapAttrsToList (database: permission: ''
echo "GRANT ${permission} ON ${database} TO '${user.name}'@'localhost';"
'') user.ensurePermissions)}


@ -153,7 +153,7 @@ in
default= if versionAtLeast config.system.stateVersion "17.09" then "postgres" else "root";
internal = true;
description = ''
NixOS traditionally used `root` as superuser, most other distros use `postgres`.
NixOS traditionally used 'root' as superuser, most other distros use 'postgres'.
From 17.09 we also try to follow this standard. Internal since changing this value
would lead to breakage while setting up databases.
'';


@ -0,0 +1,28 @@
# dleyna-renderer service.
{ config, lib, pkgs, ... }:
with lib;
{
###### interface
options = {
services.dleyna-renderer = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable dleyna-renderer service, a DBus service
for handling DLNA renderers.
'';
};
};
};
###### implementation
config = mkIf config.services.dleyna-renderer.enable {
environment.systemPackages = [ pkgs.dleyna-renderer ];
services.dbus.packages = [ pkgs.dleyna-renderer ];
};
}


@ -0,0 +1,28 @@
# dleyna-server service.
{ config, lib, pkgs, ... }:
with lib;
{
###### interface
options = {
services.dleyna-server = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable dleyna-server service, a DBus service
for handling DLNA servers.
'';
};
};
};
###### implementation
config = mkIf config.services.dleyna-server.enable {
environment.systemPackages = [ pkgs.dleyna-server ];
services.dbus.packages = [ pkgs.dleyna-server ];
};
}


@ -0,0 +1,41 @@
# Tracker Miners daemons.
{ config, pkgs, lib, ... }:
with lib;
{
###### interface
options = {
services.gnome3.tracker-miners = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable Tracker miners, indexing services for Tracker
search engine and metadata storage system.
'';
};
};
};
###### implementation
config = mkIf config.services.gnome3.tracker-miners.enable {
environment.systemPackages = [ pkgs.gnome3.tracker-miners ];
services.dbus.packages = [ pkgs.gnome3.tracker-miners ];
systemd.packages = [ pkgs.gnome3.tracker-miners ];
};
}


@ -6,7 +6,7 @@ let
cfg = config.services.factorio;
factorio = pkgs.factorio-headless;
name = "Factorio";
stateDir = "/var/lib/factorio";
stateDir = cfg.stateDir;
mkSavePath = name: "${stateDir}/saves/${name}.zip";
configFile = pkgs.writeText "factorio.conf" ''
use-system-read-write-data-directories=true
@ -25,7 +25,7 @@ let
password = cfg.password;
token = cfg.token;
game_password = cfg.game-password;
require_user_verification = true;
require_user_verification = cfg.requireUserVerification;
max_upload_in_kilobytes_per_second = 0;
minimum_latency_in_ticks = 0;
ignore_player_limit_for_returning_players = false;
@ -80,6 +80,15 @@ in
customizations.
'';
};
stateDir = mkOption {
type = types.path;
default = "/var/lib/factorio";
description = ''
The server's data directory.
The configuration and map will be stored here.
'';
};
mods = mkOption {
type = types.listOf types.package;
default = [];
@ -148,6 +157,13 @@ in
Game password.
'';
};
requireUserVerification = mkOption {
type = types.bool;
default = true;
description = ''
When set to true, the server will only allow clients that have a valid factorio.com account.
'';
};
autosave-interval = mkOption {
type = types.nullOr types.int;
default = null;


@ -0,0 +1,90 @@
# fwupd daemon.
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.fwupd;
originalEtc =
let
isRegular = v: v == "regular";
listFiles = d: builtins.attrNames (filterAttrs (const isRegular) (builtins.readDir d));
copiedDirs = [ "fwupd/remotes.d" "pki/fwupd" "pki/fwupd-metadata" ];
originalFiles = concatMap (d: map (f: "${d}/${f}") (listFiles "${pkgs.fwupd}/etc/${d}")) copiedDirs;
mkEtcFile = n: nameValuePair n { source = "${pkgs.fwupd}/etc/${n}"; };
in listToAttrs (map mkEtcFile originalFiles);
extraTrustedKeys =
let
mkName = p: "pki/fwupd/${baseNameOf (toString p)}";
mkEtcFile = p: nameValuePair (mkName p) { source = p; };
in listToAttrs (map mkEtcFile cfg.extraTrustedKeys);
in {
###### interface
options = {
services.fwupd = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable fwupd, a DBus service that allows
applications to update firmware.
'';
};
blacklistDevices = mkOption {
type = types.listOf types.string;
default = [];
example = [ "2082b5e0-7a64-478a-b1b2-e3404fab6dad" ];
description = ''
Allow blacklisting specific devices by their GUID
'';
};
blacklistPlugins = mkOption {
type = types.listOf types.string;
default = [];
example = [ "udev" ];
description = ''
Allow blacklisting specific plugins
'';
};
extraTrustedKeys = mkOption {
type = types.listOf types.path;
default = [];
example = literalExample "[ /etc/nixos/fwupd/myfirmware.pem ]";
description = ''
Installing a public key allows firmware signed with a matching private key to be recognized as trusted, which may require less authentication to install than for untrusted files. By default trusted firmware can be upgraded (but not downgraded) without the user or administrator password. Only very few keys are installed by default.
'';
};
};
};
###### implementation
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.fwupd ];
environment.etc = {
"fwupd/daemon.conf" = {
source = pkgs.writeText "daemon.conf" ''
[fwupd]
BlacklistDevices=${lib.concatStringsSep ";" cfg.blacklistDevices}
BlacklistPlugins=${lib.concatStringsSep ";" cfg.blacklistPlugins}
'';
};
} // originalEtc // extraTrustedKeys;
services.dbus.packages = [ pkgs.fwupd ];
services.udev.packages = [ pkgs.fwupd ];
systemd.packages = [ pkgs.fwupd ];
systemd.tmpfiles.rules = [
"d /var/lib/fwupd 0755 root root -"
];
};
}
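A sketch of enabling the new module, reusing the option examples above:
```
services.fwupd = {
  enable = true;
  blacklistPlugins = [ "udev" ];                           # example value from the option above
  extraTrustedKeys = [ /etc/nixos/fwupd/myfirmware.pem ];  # example value from the option above
};
```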


@ -62,14 +62,14 @@ let
}
'')
''
(optionalString (cfg.mailboxes != []) ''
protocol imap {
namespace inbox {
inbox=yes
${concatStringsSep "\n" (map mailboxConfig cfg.mailboxes)}
}
}
''
'')
(optionalString cfg.enableQuota ''
mail_plugins = $mail_plugins quota


@ -44,7 +44,6 @@ database: {
}
event_cache_size: "${cfg.event_cache_size}"
verbose: ${cfg.verbose}
log_file: "/var/log/matrix-synapse/homeserver.log"
log_config: "${logConfigFile}"
rc_messages_per_second: ${cfg.rc_messages_per_second}
rc_message_burst_count: ${cfg.rc_message_burst_count}
@ -53,8 +52,8 @@ federation_rc_sleep_limit: ${cfg.federation_rc_sleep_limit}
federation_rc_sleep_delay: ${cfg.federation_rc_sleep_delay}
federation_rc_reject_limit: ${cfg.federation_rc_reject_limit}
federation_rc_concurrent: ${cfg.federation_rc_concurrent}
media_store_path: "/var/lib/matrix-synapse/media"
uploads_path: "/var/lib/matrix-synapse/uploads"
media_store_path: "${cfg.dataDir}/media"
uploads_path: "${cfg.dataDir}/uploads"
max_upload_size: "${cfg.max_upload_size}"
max_image_pixels: "${cfg.max_image_pixels}"
dynamic_thumbnails: ${boolToString cfg.dynamic_thumbnails}
@ -86,7 +85,7 @@ ${optionalString (cfg.macaroon_secret_key != null) ''
expire_access_token: ${boolToString cfg.expire_access_token}
enable_metrics: ${boolToString cfg.enable_metrics}
report_stats: ${boolToString cfg.report_stats}
signing_key_path: "/var/lib/matrix-synapse/homeserver.signing.key"
signing_key_path: "${cfg.dataDir}/homeserver.signing.key"
key_refresh_interval: "${cfg.key_refresh_interval}"
perspectives:
servers: {
@ -348,7 +347,7 @@ in {
database_args = mkOption {
type = types.attrs;
default = {
database = "/var/lib/matrix-synapse/homeserver.db";
database = "${cfg.dataDir}/homeserver.db";
};
description = ''
Arguments to pass to the engine.
@ -586,6 +585,14 @@ in {
A yaml python logging config file
'';
};
dataDir = mkOption {
type = types.str;
default = "/var/lib/matrix-synapse";
description = ''
The directory where matrix-synapse stores its stateful data such as
certificates, media and uploads.
'';
};
};
};
@ -593,7 +600,7 @@ in {
users.extraUsers = [
{ name = "matrix-synapse";
group = "matrix-synapse";
home = "/var/lib/matrix-synapse";
home = cfg.dataDir;
createHome = true;
shell = "${pkgs.bash}/bin/bash";
uid = config.ids.uids.matrix-synapse;
@ -611,16 +618,16 @@ in {
preStart = ''
${cfg.package}/bin/homeserver \
--config-path ${configFile} \
--keys-directory /var/lib/matrix-synapse \
--keys-directory ${cfg.dataDir} \
--generate-keys
'';
serviceConfig = {
Type = "simple";
User = "matrix-synapse";
Group = "matrix-synapse";
WorkingDirectory = "/var/lib/matrix-synapse";
WorkingDirectory = cfg.dataDir;
PermissionsStartOnly = true;
ExecStart = "${cfg.package}/bin/homeserver --config-path ${configFile} --keys-directory /var/lib/matrix-synapse";
ExecStart = "${cfg.package}/bin/homeserver --config-path ${configFile} --keys-directory ${cfg.dataDir}";
Restart = "on-failure";
};
};
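
A hedged usage sketch for the new `dataDir` option introduced above; `services.matrix-synapse.enable` is assumed from the surrounding module, and the path is illustrative:

```nix
{
  services.matrix-synapse = {
    enable = true;                      # assumed enable option of the surrounding module
    dataDir = "/data/matrix-synapse";   # hypothetical path; defaults to /var/lib/matrix-synapse
  };
}
```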

View File

@ -55,9 +55,6 @@ in {
description = "Fusion Inventory Agent";
wantedBy = [ "multi-user.target" ];
environment = {
OPTIONS = "--no-category=software";
};
serviceConfig = {
ExecStart = "${pkgs.fusionInventory}/bin/fusioninventory-agent --conf-file=${configFile} --daemon --no-fork";
};

View File

@ -111,7 +111,7 @@ in {
type = mkOption {
description = "Database type.";
default = "sqlite3";
type = types.enum ["mysql" "sqlite3" "postgresql"];
type = types.enum ["mysql" "sqlite3" "postgres"];
};
host = mkOption {

View File

@ -17,40 +17,6 @@ let
nodeCfg = config.services.munin-node;
cronCfg = config.services.munin-cron;
muninPlugins = pkgs.stdenv.mkDerivation {
name = "munin-available-plugins";
buildCommand = ''
mkdir -p $out
cp --preserve=mode ${pkgs.munin}/lib/plugins/* $out/
for file in $out/*; do
case "$file" in
*/plugin.sh|*/plugins.history)
chmod +x "$file"
continue;;
esac
# read magic markers from the file
family=$(sed -nr 's/.*#%#\s+family\s*=\s*(\S+)\s*/\1/p' $file)
cap=$(sed -nr 's/.*#%#\s+capabilities\s*=\s*(.+)/\1/p' $file)
wrapProgram $file \
--set PATH "/run/wrappers/bin:/run/current-system/sw/bin" \
--set MUNIN_LIBDIR "${pkgs.munin}/lib" \
--set MUNIN_PLUGSTATE "/var/run/munin"
# munin uses markers to tell munin-node-configure what a plugin can do
echo "#%# family=$family" >> $file
echo "#%# capabilities=$cap" >> $file
done
# NOTE: we disable diskstats because the plugin seems to fail and it hangs HTML generation (100% CPU + memory leak)
rm -f $out/diskstats
'';
buildInputs = [ pkgs.makeWrapper ];
};
muninConf = pkgs.writeText "munin.conf"
''
dbdir /var/lib/munin
@ -83,6 +49,29 @@ let
${nodeCfg.extraConfig}
'';
pluginConf = pkgs.writeText "munin-plugin-conf"
''
[hddtemp_smartctl]
user root
group root
[meminfo]
user root
group root
[ipmi*]
user root
group root
'';
pluginConfDir = pkgs.stdenv.mkDerivation {
name = "munin-plugin-conf.d";
buildCommand = ''
mkdir $out
ln -s ${pluginConf} $out/nixos-config
'';
};
in
{
@ -179,17 +168,22 @@ in
description = "Munin Node";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
path = [ pkgs.munin ];
path = with pkgs; [ munin smartmontools "/run/current-system/sw" "/run/wrappers" ];
environment.MUNIN_LIBDIR = "${pkgs.munin}/lib";
environment.MUNIN_PLUGSTATE = "/var/run/munin";
environment.MUNIN_LOGDIR = "/var/log/munin";
preStart = ''
echo "updating munin plugins..."
mkdir -p /etc/munin/plugins
rm -rf /etc/munin/plugins/*
PATH="/run/wrappers/bin:/run/current-system/sw/bin" ${pkgs.munin}/sbin/munin-node-configure --shell --families contrib,auto,manual --config ${nodeConf} --libdir=${muninPlugins} --servicedir=/etc/munin/plugins 2>/dev/null | ${pkgs.bash}/bin/bash
${pkgs.munin}/bin/munin-node-configure --suggest --shell --families contrib,auto,manual --config ${nodeConf} --libdir=${pkgs.munin}/lib/plugins --servicedir=/etc/munin/plugins --sconfdir=${pluginConfDir} 2>/dev/null | ${pkgs.bash}/bin/bash
# NOTE: we disable diskstats because the plugin seems to fail and it hangs HTML generation (100% CPU + memory leak)
rm /etc/munin/plugins/diskstats || true
'';
serviceConfig = {
ExecStart = "${pkgs.munin}/sbin/munin-node --config ${nodeConf} --servicedir /etc/munin/plugins/";
ExecStart = "${pkgs.munin}/sbin/munin-node --config ${nodeConf} --servicedir /etc/munin/plugins/ --sconfdir=${pluginConfDir}";
};
};

View File

@ -66,6 +66,16 @@ let
How frequently to evaluate rules by default.
'';
};
external_labels = mkOption {
type = types.attrsOf types.str;
description = ''
The labels to add to any time series or alerts when
communicating with external systems (federation, remote
storage, Alertmanager).
'';
default = {};
};
};
};
@ -100,6 +110,29 @@ let
The HTTP resource path on which to fetch metrics from targets.
'';
};
honor_labels = mkOption {
type = types.bool;
default = false;
description = ''
Controls how Prometheus handles conflicts between labels
that are already present in scraped data and labels that
Prometheus would attach server-side ("job" and "instance"
labels, manually configured target labels, and labels
generated by service discovery implementations).
If honor_labels is set to "true", label conflicts are
resolved by keeping label values from the scraped data and
ignoring the conflicting server-side labels.
If honor_labels is set to "false", label conflicts are
resolved by renaming conflicting labels in the scraped data
to "exported_&lt;original-label&gt;" (for example
"exported_instance", "exported_job") and then attaching
server-side labels. This is useful for use cases such as
federation, where all labels specified in the target should
be preserved.
'';
};
scheme = mkOption {
type = types.enum ["http" "https"];
default = "http";

View File

@ -17,7 +17,7 @@ let
#! ${pkgs.stdenv.shell}
${optionalString nm.enable ''
{
cat << EOF
${pkgs.coreutils}/bin/cat << EOF
From: smartd on ${host} <root>
To: undisclosed-recipients:;
Subject: SMART error on $SMARTD_DEVICESTRING: $SMARTD_FAILTYPE
@ -30,7 +30,7 @@ let
''}
${optionalString nw.enable ''
{
cat << EOF
${pkgs.coreutils}/bin/cat << EOF
Problem detected with disk: $SMARTD_DEVICESTRING
Warning message from smartd is:
@ -41,7 +41,7 @@ let
${optionalString nx.enable ''
export DISPLAY=${nx.display}
{
cat << EOF
${pkgs.coreutils}/bin/cat << EOF
Problem detected with disk: $SMARTD_DEVICESTRING
Warning message from smartd is:

View File

@ -39,8 +39,6 @@ let
# NB: migration must be performed prior to pre-start, else we get the failure message!
preStart = ''
ipfs repo fsck # workaround for BUG #4212 (https://github.com/ipfs/go-ipfs/issues/4214)
ipfs --local config Addresses.API ${cfg.apiAddress}
ipfs --local config Addresses.Gateway ${cfg.gatewayAddress}
'' + optionalString cfg.autoMount ''
ipfs --local config Mounts.FuseAllowOther --json true
ipfs --local config Mounts.IPFS ${cfg.ipfsMountDir}
@ -56,7 +54,11 @@ let
EOF
ipfs --local config --json "${concatStringsSep "." path}" "$value"
'')
cfg.extraConfig)
({ Addresses.API = cfg.apiAddress;
Addresses.Gateway = cfg.gatewayAddress;
Addresses.Swarm = cfg.swarmAddress;
} //
cfg.extraConfig))
);
serviceConfig = {
ExecStart = "${wrapped}/bin/ipfs daemon ${ipfsFlags}";
@ -140,6 +142,12 @@ in {
description = "Where IPFS exposes its API to";
};
swarmAddress = mkOption {
type = types.listOf types.str;
default = [ "/ip4/0.0.0.0/tcp/4001" "/ip6/::/tcp/4001" ];
description = "Where IPFS listens for incoming p2p connections";
};
enableGC = mkOption {
type = types.bool;
default = false;
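
A brief sketch of the new `swarmAddress` option; `services.ipfs.enable` is assumed from the surrounding module and the multiaddr is illustrative:

```nix
{
  services.ipfs = {
    enable = true;                                # assumed enable option
    swarmAddress = [ "/ip4/0.0.0.0/tcp/4001" ];   # listen for incoming p2p connections on IPv4 only
  };
}
```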

View File

@ -6,8 +6,10 @@ let
cfg = config.services.babeld;
conditionalBoolToString = value: if (isBool value) then (boolToString value) else (toString value);
paramsString = params:
concatMapStringsSep "" (name: "${name} ${boolToString (getAttr name params)}")
concatMapStringsSep " " (name: "${name} ${conditionalBoolToString (getAttr name params)}")
(attrNames params);
interfaceConfig = name:
@ -49,7 +51,7 @@ in
type = types.nullOr (types.attrsOf types.unspecified);
example =
{
wired = true;
type = "tunnel";
"split-horizon" = true;
};
};
@ -63,7 +65,7 @@ in
type = types.attrsOf (types.attrsOf types.unspecified);
example =
{ enp0s2 =
{ wired = true;
{ type = "wired";
"hello-interval" = 5;
"split-horizon" = "auto";
};
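
A sketch of the corrected parameter rendering in use. The option is assumed to be `services.babeld.interfaces` (its name lies outside the hunk); the interface name and values are illustrative:

```nix
{
  services.babeld = {
    enable = true;                  # assumed enable option
    interfaces.enp0s2 = {           # assumed option path; mirrors the example above
      type = "wired";
      "hello-interval" = 5;
      "split-horizon" = "auto";     # strings and booleans are now both rendered correctly
    };
  };
}
```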

View File

@ -125,6 +125,9 @@ let
ip46tables -t raw -N nixos-fw-rpfilter 2> /dev/null || true
ip46tables -t raw -A nixos-fw-rpfilter -m rpfilter ${optionalString (cfg.checkReversePath == "loose") "--loose"} -j RETURN
# Allows this host to act as a DHCPv4 client without first having to use APIPA
iptables -t raw -A nixos-fw-rpfilter -p udp --sport 67 --dport 68 -j RETURN
# Allows this host to act as a DHCPv4 server
iptables -t raw -A nixos-fw-rpfilter -s 0.0.0.0 -d 255.255.255.255 -p udp --sport 68 --dport 67 -j RETURN

View File

@ -0,0 +1,131 @@
{ lib, ...}:
{ options = {
proto = lib.mkOption {
type = lib.types.enum [ "h2" "http/1.1" ];
default = "http/1.1";
description = ''
This option configures the protocol the backend server expects
to use.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-b
for more detail.
'';
};
tls = lib.mkOption {
type = lib.types.bool;
default = false;
description = ''
This option determines whether nghttpx will negotiate its
connection with a backend server using TLS or not. The burden
is on the backend server to provide the TLS certificate!
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-b
for more detail.
'';
};
sni = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = ''
Override the TLS SNI field value. This value (in nghttpx)
defaults to the host value of the backend configuration.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-b
for more detail.
'';
};
fall = lib.mkOption {
type = lib.types.int;
default = 0;
description = ''
If nghttpx cannot connect to the backend N times in a row, the
backend is assumed to be offline and is excluded from load
balancing. If N is 0 the backend is never excluded from load
balancing.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-b
for more detail.
'';
};
rise = lib.mkOption {
type = lib.types.int;
default = 0;
description = ''
If the backend is excluded from load balancing, nghttpx will
periodically attempt to make a connection to the backend. If
the connection is successful N times in a row the backend is
re-included in load balancing. If N is 0 a backend is never
reconsidered for load balancing once it falls.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-b
for more detail.
'';
};
affinity = lib.mkOption {
type = lib.types.enum [ "ip" "none" ];
default = "none";
description = ''
If "ip" is given, client IP based session affinity is
enabled. If "none" is given, session affinity is disabled.
Session affinity is enabled (by nghttpx) per-backend
pattern. If at least one backend has a non-"none" affinity,
then session affinity is enabled for all backend servers
sharing the same pattern.
It is advised to set affinity on all backends explicitly if
session affinity is desired. The session affinity may break if
one of the backends becomes unreachable, or backend settings are
reloaded or replaced via the API.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-b
for more detail.
'';
};
dns = lib.mkOption {
type = lib.types.bool;
default = false;
description = ''
By default, name resolution of a backend's host name is done at
start-up or on configuration reload. If "dns" is true, name
resolution instead takes place dynamically, which is useful if a
backend's address changes frequently; resolution at start-up or
configuration reload is then skipped.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-b
for more detail.
'';
};
redirect-if-not-tls = lib.mkOption {
type = lib.types.bool;
default = false;
description = ''
If true, a backend match requires that the frontend connection be
TLS-encrypted. If it is not, nghttpx responds to the request with
a 308 status code and the https URI the client should use instead
in the Location header.
The port number in the redirect URI is 443 by default and can
be changed using the 'services.nghttpx.redirect-https-port'
option.
If at least one backend has "redirect-if-not-tls" set to true,
this feature is enabled for all backend servers with the same
pattern. It is advised to set the "redirect-if-not-tls" parameter
on all backends explicitly if this feature is desired.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-b
for more detail.
'';
};
};
}

View File

@ -0,0 +1,50 @@
{ lib, ... }:
{ options = {
server = lib.mkOption {
type =
lib.types.either
(lib.types.submodule (import ./server-options.nix))
(lib.types.path);
example = {
host = "127.0.0.1";
port = 8888;
};
default = {
host = "127.0.0.1";
port = 80;
};
description = ''
Backend server location specified as either a host:port pair
or a unix domain socket.
'';
};
patterns = lib.mkOption {
type = lib.types.listOf lib.types.str;
example = [
"*.host.net/v1/"
"host.org/v2/mypath"
"/somepath"
];
default = [];
description = ''
List of nghttpx backend patterns.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-b
for more information on the pattern syntax and nghttpx's behavior.
'';
};
params = lib.mkOption {
type = lib.types.nullOr (lib.types.submodule (import ./backend-params-submodule.nix));
example = {
proto = "h2";
tls = true;
};
default = null;
description = ''
Parameters to configure a backend.
'';
};
};
}

View File

@ -0,0 +1,117 @@
{config, pkgs, lib, ...}:
let
cfg = config.services.nghttpx;
# renderHost :: Either ServerOptions Path -> String
renderHost = server:
if builtins.isString server
then "unix://${server}"
else "${server.host},${builtins.toString server.port}";
# Filter out submodule parameters whose value is null or false, or
# whose key is _module.
#
# filterParams :: ParamsSubmodule -> ParamsSubmodule
filterParams = p:
lib.filterAttrs
(n: v: ("_module" != n) && (null != v) && (false != v))
(lib.optionalAttrs (null != p) p);
# renderBackend :: BackendSubmodule -> String
renderBackend = backend:
let
host = renderHost backend.server;
patterns = lib.concatStringsSep ":" backend.patterns;
# Render a set of backend parameters. This is somewhat
# complicated because nghttpx backend patterns can be entirely
# omitted and the params may be given as a mixed collection of
# 'key=val' pairs or atoms (e.g. 'proto=h2;tls').
params =
lib.mapAttrsToList
(n: v:
if builtins.isBool v
then n
else if builtins.isString v
then "${n}=${v}"
else "${n}=${builtins.toString v}")
(filterParams backend.params);
# NB: params are delimited by a ";", which is the same delimiter
# used to separate the host;[pattern];[params] sections of a backend.
sections =
builtins.filter (e: "" != e) ([
host
patterns
]++params);
formattedSections = lib.concatStringsSep ";" sections;
in
"backend=${formattedSections}";
# renderFrontend :: FrontendSubmodule -> String
renderFrontend = frontend:
let
host = renderHost frontend.server;
params0 =
lib.mapAttrsToList
(n: v: if builtins.isBool v then n else v)
(filterParams frontend.params);
# NB: nghttpx doesn't accept a "tls" atom; you must omit "no-tls"
# to get the default behavior of turning on TLS.
params1 = lib.remove "tls" params0;
sections = [ host] ++ params1;
formattedSections = lib.concatStringsSep ";" sections;
in
"frontend=${formattedSections}";
configurationFile = pkgs.writeText "nghttpx.conf" ''
${lib.optionalString (null != cfg.tls) ("private-key-file="+cfg.tls.key)}
${lib.optionalString (null != cfg.tls) ("certificate-file="+cfg.tls.crt)}
user=nghttpx
${lib.concatMapStringsSep "\n" renderFrontend cfg.frontends}
${lib.concatMapStringsSep "\n" renderBackend cfg.backends}
backlog=${builtins.toString cfg.backlog}
backend-address-family=${cfg.backend-address-family}
workers=${builtins.toString cfg.workers}
rlimit-nofile=${builtins.toString cfg.rlimit-nofile}
${lib.optionalString cfg.single-thread "single-thread=yes"}
${lib.optionalString cfg.single-process "single-process=yes"}
${cfg.extraConfig}
'';
in
{ imports = [
./nghttpx-options.nix
];
config = lib.mkIf cfg.enable {
users.groups.nghttpx = { };
users.users.nghttpx = {
group = config.users.groups.nghttpx.name;
};
systemd.services = {
nghttpx = {
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
script = ''
${pkgs.nghttp2}/bin/nghttpx --conf=${configurationFile}
'';
serviceConfig = {
Restart = "on-failure";
RestartSec = 60;
};
};
};
};
}

View File

@ -0,0 +1,64 @@
{ lib, ...}:
{ options = {
tls = lib.mkOption {
type = lib.types.enum [ "tls" "no-tls" ];
default = "tls";
description = ''
Enable or disable TLS. If set to "tls" (the default), the key
and certificate must be configured for nghttpx.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-f
for more detail.
'';
};
sni-fwd = lib.mkOption {
type = lib.types.bool;
default = false;
description = ''
When performing a match to select a backend server, SNI host
name received from the client is used instead of the request
host. See --backend option about the pattern match.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-f
for more detail.
'';
};
api = lib.mkOption {
type = lib.types.bool;
default = false;
description = ''
Enable API access for this frontend. This allows nghttpx to be
modified dynamically at run-time, so this feature is disabled by
default and should be turned on with care.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-f
for more detail.
'';
};
healthmon = lib.mkOption {
type = lib.types.bool;
default = false;
description = ''
Make this frontend a health monitor endpoint. Any request
received on this frontend is responded to with a 200 OK.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-f
for more detail.
'';
};
proxyproto = lib.mkOption {
type = lib.types.bool;
default = false;
description = ''
Accept PROXY protocol version 1 on frontend connection.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-f
for more detail.
'';
};
};
}

View File

@ -0,0 +1,36 @@
{ lib, ... }:
{ options = {
server = lib.mkOption {
type =
lib.types.either
(lib.types.submodule (import ./server-options.nix))
(lib.types.path);
example = {
host = "127.0.0.1";
port = 8888;
};
default = {
host = "127.0.0.1";
port = 80;
};
description = ''
Frontend server interface binding specification as either a
host:port pair or a unix domain socket.
NB: a host of "*" listens on all interfaces and includes IPv6
addresses.
'';
};
params = lib.mkOption {
type = lib.types.nullOr (lib.types.submodule (import ./frontend-params-submodule.nix));
example = {
tls = "tls";
};
default = null;
description = ''
Parameters to configure a frontend.
'';
};
};
}

View File

@ -0,0 +1,142 @@
{ config, lib, ... }:
{ options.services.nghttpx = {
enable = lib.mkEnableOption "nghttpx";
frontends = lib.mkOption {
type = lib.types.listOf (lib.types.submodule (import ./frontend-submodule.nix));
description = ''
A list of frontend listener specifications.
'';
example = [
{ server = {
host = "*";
port = 80;
};
params = {
tls = "no-tls";
};
}
];
};
backends = lib.mkOption {
type = lib.types.listOf (lib.types.submodule (import ./backend-submodule.nix));
description = ''
A list of backend specifications.
'';
example = [
{ server = {
host = "172.16.0.22";
port = 8443;
};
patterns = [ "/" ];
params = {
proto = "http/1.1";
redirect-if-not-tls = true;
};
}
];
};
tls = lib.mkOption {
type = lib.types.nullOr (lib.types.submodule (import ./tls-submodule.nix));
default = null;
description = ''
TLS certificate and key paths. Note that this does not by itself
enable TLS for a frontend listener; to do so, a frontend
specification must set <literal>params.tls</literal> to <literal>"tls"</literal> (the default).
'';
example = {
key = "/etc/ssl/keys/server.key";
crt = "/etc/ssl/certs/server.crt";
};
};
extraConfig = lib.mkOption {
type = lib.types.lines;
default = "";
description = ''
Extra configuration options to be appended to the generated
configuration file.
'';
};
single-process = lib.mkOption {
type = lib.types.bool;
default = false;
description = ''
Run this program in single-process mode for debugging
purposes. Without this option, nghttpx creates at least two
processes: a master and a worker process. If this option is
used, master and worker are unified into a single
process. nghttpx still spawns an additional process if neverbleed
is used. In single-process mode, the signal handling
feature is disabled.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx--single-process
'';
};
backlog = lib.mkOption {
type = lib.types.int;
default = 65536;
description = ''
Listen backlog size.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx--backlog
'';
};
backend-address-family = lib.mkOption {
type = lib.types.enum [
"auto"
"IPv4"
"IPv6"
];
default = "auto";
description = ''
Specify the address family of backend connections. If "auto" is
given, both IPv4 and IPv6 are considered. If "IPv4" is given,
only IPv4 addresses are considered. If "IPv6" is given, only
IPv6 addresses are considered.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx--backend-address-family
'';
};
workers = lib.mkOption {
type = lib.types.int;
default = 1;
description = ''
Set the number of worker threads.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx-n
'';
};
single-thread = lib.mkOption {
type = lib.types.bool;
default = false;
description = ''
Run everything in one thread inside the worker process. This
feature is provided for a better debugging experience, or for
platforms which lack thread support. If threading is
disabled, this option is always enabled.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx--single-thread
'';
};
rlimit-nofile = lib.mkOption {
type = lib.types.int;
default = 0;
description = ''
Set maximum number of open files (RLIMIT_NOFILE) to &lt;N&gt;. If 0
is given, nghttpx does not set the limit.
Please see https://nghttp2.org/documentation/nghttpx.1.html#cmdoption-nghttpx--rlimit-nofile
'';
};
};
}
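
Pulling the submodules above together, a hedged end-to-end sketch of an nghttpx reverse-proxy configuration; hosts, ports, and certificate paths are illustrative:

```nix
{
  services.nghttpx = {
    enable = true;
    frontends = [
      { server = { host = "*"; port = 443; };
        params = { tls = "tls"; };            # terminate TLS on this listener
      }
    ];
    backends = [
      { server = { host = "127.0.0.1"; port = 8080; };   # hypothetical upstream
        patterns = [ "/" ];
        params = { proto = "http/1.1"; redirect-if-not-tls = true; };
      }
    ];
    tls = {
      key = "/etc/ssl/keys/server.key";
      crt = "/etc/ssl/certs/server.crt";
    };
  };
}
```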

View File

@ -0,0 +1,18 @@
{ lib, ... }:
{ options = {
host = lib.mkOption {
type = lib.types.str;
example = "127.0.0.1";
description = ''
Server host address.
'';
};
port = lib.mkOption {
type = lib.types.int;
example = 5088;
description = ''
Server host port.
'';
};
};
}

View File

@ -0,0 +1,21 @@
{lib, ...}:
{ options = {
key = lib.mkOption {
type = lib.types.str;
example = "/etc/ssl/keys/mykeyfile.key";
default = "/etc/ssl/keys/server.key";
description = ''
Path to the TLS key file.
'';
};
crt = lib.mkOption {
type = lib.types.str;
example = "/etc/ssl/certs/mycert.crt";
default = "/etc/ssl/certs/server.crt";
description = ''
Path to the TLS certificate file.
'';
};
};
}

View File

@ -54,8 +54,6 @@ let
));
in listToAttrs (map mkAuthKeyFile usersWithKeys);
supportOldHostKeys = !versionAtLeast config.system.stateVersion "15.07";
in
{
@ -191,9 +189,6 @@ in
default =
[ { type = "rsa"; bits = 4096; path = "/etc/ssh/ssh_host_rsa_key"; }
{ type = "ed25519"; path = "/etc/ssh/ssh_host_ed25519_key"; }
] ++ optionals supportOldHostKeys
[ { type = "dsa"; path = "/etc/ssh/ssh_host_dsa_key"; }
{ type = "ecdsa"; bits = 521; path = "/etc/ssh/ssh_host_ecdsa_key"; }
];
description = ''
NixOS can automatically generate SSH host keys. This option
@ -363,14 +358,21 @@ in
HostKey ${k.path}
'')}
# Allow DSA client keys for now. (These were deprecated
# in OpenSSH 7.0.)
PubkeyAcceptedKeyTypes +ssh-dss
### Recommended settings from both:
# https://stribika.github.io/2015/01/04/secure-secure-shell.html
# and
# https://wiki.mozilla.org/Security/Guidelines/OpenSSH#Modern_.28OpenSSH_6.7.2B.29
# Re-enable DSA host keys for now.
${optionalString supportOldHostKeys ''
HostKeyAlgorithms +ssh-dss
''}
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
# LogLevel VERBOSE logs user's key fingerprint on login.
# Needed to have a clear audit track of which key was used to log in.
LogLevel VERBOSE
# Use kernel sandbox mechanisms where possible in unprivileged processes.
UsePrivilegeSeparation sandbox
'';
assertions = [{ assertion = if cfg.forwardX11 then cfgc.setXAuthLocation else true;

View File

@ -76,8 +76,9 @@ in
};
};
config = mkIf cfg.updater.enable or cfg.daemon.enable {
config = mkIf (cfg.updater.enable || cfg.daemon.enable) {
environment.systemPackages = [ pkg ];
users.extraUsers = singleton {
name = clamavUser;
uid = config.ids.uids.clamav;
@ -94,7 +95,7 @@ in
environment.etc."clamav/freshclam.conf".source = freshclamConfigFile;
environment.etc."clamav/clamd.conf".source = clamdConfigFile;
systemd.services.clamav-daemon = mkIf cfg.daemon.enable {
systemd.services.clamav-daemon = optionalAttrs cfg.daemon.enable {
description = "ClamAV daemon (clamd)";
after = mkIf cfg.updater.enable [ "clamav-freshclam.service" ];
requires = mkIf cfg.updater.enable [ "clamav-freshclam.service" ];
@ -115,7 +116,7 @@ in
};
};
systemd.timers.clamav-freshclam = mkIf cfg.updater.enable {
systemd.timers.clamav-freshclam = optionalAttrs cfg.updater.enable {
description = "Timer for ClamAV virus database updater (freshclam)";
wantedBy = [ "timers.target" ];
timerConfig = {
@ -124,7 +125,7 @@ in
};
};
systemd.services.clamav-freshclam = mkIf cfg.updater.enable {
systemd.services.clamav-freshclam = optionalAttrs cfg.updater.enable {
description = "ClamAV virus database updater (freshclam)";
restartTriggers = [ freshclamConfigFile ];
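
A short sketch of the daemon/updater combination that the corrected `mkIf` condition above is meant to support:

```nix
{
  services.clamav = {
    daemon.enable  = true;   # run clamd
    updater.enable = true;   # run freshclam on a timer
  };
}
```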

View File

@ -0,0 +1,384 @@
{ config, lib, pkgs, ... }:
with lib;
# TODO: are these php-packages needed?
#imagick
#php-geoip -> php.ini: extension = geoip.so
#expat
let
cfg = config.services.restya-board;
runDir = "/run/restya-board";
poolName = "restya-board";
phpfpmSocketName = "/var/run/phpfpm/${poolName}.sock";
in
{
###### interface
options = {
services.restya-board = {
enable = mkEnableOption "restya-board";
dataDir = mkOption {
type = types.path;
default = "/var/lib/restya-board";
example = "/var/lib/restya-board";
description = ''
Directory in which the application stores its data.
'';
};
user = mkOption {
type = types.str;
default = "restya-board";
example = "restya-board";
description = ''
User account under which the web-application runs.
'';
};
group = mkOption {
type = types.str;
default = "nginx";
example = "nginx";
description = ''
Group account under which the web-application runs.
'';
};
virtualHost = {
serverName = mkOption {
type = types.str;
default = "restya.board";
description = ''
Name of the nginx virtualhost to use.
'';
};
listenHost = mkOption {
type = types.str;
default = "localhost";
description = ''
Listen address for the virtualhost to use.
'';
};
listenPort = mkOption {
type = types.int;
default = 3000;
description = ''
Listen port for the virtualhost to use.
'';
};
};
database = {
host = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Host of the database. Leave 'null' to use a local PostgreSQL database.
A local PostgreSQL database is initialized automatically.
'';
};
port = mkOption {
type = types.nullOr types.int;
default = 5432;
description = ''
The database's port.
'';
};
name = mkOption {
type = types.str;
default = "restya_board";
description = ''
Name of the database. The database must exist.
'';
};
user = mkOption {
type = types.str;
default = "restya_board";
description = ''
The database user. The user must exist and have access to
the specified database.
'';
};
passwordFile = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The database user's password. 'null' if no password is set.
'';
};
};
email = {
server = mkOption {
type = types.nullOr types.str;
default = null;
example = "localhost";
description = ''
Hostname of the SMTP server used to send outgoing mail. Set to null to use the system MTA.
'';
};
port = mkOption {
type = types.int;
default = 25;
description = ''
Port used to connect to SMTP server.
'';
};
login = mkOption {
type = types.str;
default = "";
description = ''
SMTP authentication login used when sending outgoing mail.
'';
};
password = mkOption {
type = types.str;
default = "";
description = ''
SMTP authentication password used when sending outgoing mail.
ATTENTION: The password is stored world-readable in the nix-store!
'';
};
};
timezone = mkOption {
type = types.lines;
default = "GMT";
description = ''
Timezone the web-app runs in.
'';
};
};
};
###### implementation
config = mkIf cfg.enable {
services.phpfpm.poolConfigs = {
"${poolName}" = ''
listen = "${phpfpmSocketName}";
listen.owner = nginx
listen.group = nginx
listen.mode = 0600
user = ${cfg.user}
group = ${cfg.group}
pm = dynamic
pm.max_children = 75
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.max_requests = 500
catch_workers_output = 1
'';
};
services.phpfpm.phpOptions = ''
date.timezone = "CET"
${optionalString (!isNull cfg.email.server) ''
SMTP = ${cfg.email.server}
smtp_port = ${toString cfg.email.port}
auth_username = ${cfg.email.login}
auth_password = ${cfg.email.password}
''}
'';
services.nginx.enable = true;
services.nginx.virtualHosts."${cfg.virtualHost.serverName}" = {
listen = [ { addr = cfg.virtualHost.listenHost; port = cfg.virtualHost.listenPort; } ];
serverName = cfg.virtualHost.serverName;
root = runDir;
extraConfig = ''
index index.html index.php;
gzip on;
gzip_disable "msie6";
gzip_comp_level 6;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_proxied any;
gzip_types text/plain application/xml text/css text/js text/xml application/x-javascript text/javascript application/json application/xml+rss;
client_max_body_size 300M;
rewrite ^/oauth/authorize$ /server/php/authorize.php last;
rewrite ^/oauth_callback/([a-zA-Z0-9_\.]*)/([a-zA-Z0-9_\.]*)$ /server/php/oauth_callback.php?plugin=$1&code=$2 last;
rewrite ^/download/([0-9]*)/([a-zA-Z0-9_\.]*)$ /server/php/download.php?id=$1&hash=$2 last;
rewrite ^/ical/([0-9]*)/([0-9]*)/([a-z0-9]*).ics$ /server/php/ical.php?board_id=$1&user_id=$2&hash=$3 last;
rewrite ^/api/(.*)$ /server/php/R/r.php?_url=$1&$args last;
rewrite ^/api_explorer/api-docs/$ /client/api_explorer/api-docs/index.php last;
'';
locations."/".root = "${runDir}/client";
locations."~ \.php$" = {
tryFiles = "$uri =404";
extraConfig = ''
include ${pkgs.nginx}/conf/fastcgi_params;
fastcgi_pass unix:${phpfpmSocketName};
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PHP_VALUE "upload_max_filesize=9G \n post_max_size=9G \n max_execution_time=200 \n max_input_time=200 \n memory_limit=256M";
'';
};
locations."~* \.(css|js|less|html|ttf|woff|jpg|jpeg|gif|png|bmp|ico)" = {
root = "${runDir}/client";
extraConfig = ''
if (-f $request_filename) {
break;
}
rewrite ^/img/([a-zA-Z_]*)/([a-zA-Z_]*)/([a-zA-Z0-9_\.]*)$ /server/php/image.php?size=$1&model=$2&filename=$3 last;
add_header Cache-Control public;
add_header Cache-Control must-revalidate;
expires 7d;
'';
};
};
systemd.services.restya-board-init = {
description = "Restya board initialization";
serviceConfig.Type = "oneshot";
serviceConfig.RemainAfterExit = true;
wantedBy = [ "multi-user.target" ];
requires = [ "postgresql.service" ];
after = [ "network.target" "postgresql.service" ];
script = ''
rm -rf "${runDir}"
mkdir -m 750 -p "${runDir}"
cp -r "${pkgs.restya-board}/"* "${runDir}"
sed -i "s/@restya.com/@${cfg.virtualHost.serverName}/g" "${runDir}/sql/restyaboard_with_empty_data.sql"
rm -rf "${runDir}/media"
rm -rf "${runDir}/client/img"
chmod -R 0750 "${runDir}"
sed -i "s@^php@${config.services.phpfpm.phpPackage}/bin/php@" "${runDir}/server/php/shell/"*.sh
${if (isNull cfg.database.host) then ''
sed -i "s/^.*'R_DB_HOST'.*$/define('R_DB_HOST', 'localhost');/g" "${runDir}/server/php/config.inc.php"
sed -i "s/^.*'R_DB_PASSWORD'.*$/define('R_DB_PASSWORD', 'restya');/g" "${runDir}/server/php/config.inc.php"
'' else ''
sed -i "s/^.*'R_DB_HOST'.*$/define('R_DB_HOST', '${cfg.database.host}');/g" "${runDir}/server/php/config.inc.php"
sed -i "s/^.*'R_DB_PASSWORD'.*$/define('R_DB_PASSWORD', '$(<${cfg.database.dbPassFile})');/g" "${runDir}/server/php/config.inc.php"
''}
sed -i "s/^.*'R_DB_PORT'.*$/define('R_DB_PORT', '${toString cfg.database.port}');/g" "${runDir}/server/php/config.inc.php"
sed -i "s/^.*'R_DB_NAME'.*$/define('R_DB_NAME', '${cfg.database.name}');/g" "${runDir}/server/php/config.inc.php"
sed -i "s/^.*'R_DB_USER'.*$/define('R_DB_USER', '${cfg.database.user}');/g" "${runDir}/server/php/config.inc.php"
chmod 0400 "${runDir}/server/php/config.inc.php"
ln -sf "${cfg.dataDir}/media" "${runDir}/media"
ln -sf "${cfg.dataDir}/client/img" "${runDir}/client/img"
chmod g+w "${runDir}/tmp/cache"
chown -R "${cfg.user}"."${cfg.group}" "${runDir}"
mkdir -m 0750 -p "${cfg.dataDir}"
mkdir -m 0750 -p "${cfg.dataDir}/media"
mkdir -m 0750 -p "${cfg.dataDir}/client/img"
cp -r "${pkgs.restya-board}/media/"* "${cfg.dataDir}/media"
cp -r "${pkgs.restya-board}/client/img/"* "${cfg.dataDir}/client/img"
chown "${cfg.user}"."${cfg.group}" "${cfg.dataDir}"
chown -R "${cfg.user}"."${cfg.group}" "${cfg.dataDir}/media"
chown -R "${cfg.user}"."${cfg.group}" "${cfg.dataDir}/client/img"
${optionalString (isNull cfg.database.host) ''
if ! [ -e "${cfg.dataDir}/.db-initialized" ]; then
${pkgs.sudo}/bin/sudo -u ${config.services.postgresql.superUser} \
${config.services.postgresql.package}/bin/psql -U ${config.services.postgresql.superUser} \
-c "CREATE USER ${cfg.database.user} WITH ENCRYPTED PASSWORD 'restya'"
${pkgs.sudo}/bin/sudo -u ${config.services.postgresql.superUser} \
${config.services.postgresql.package}/bin/psql -U ${config.services.postgresql.superUser} \
-c "CREATE DATABASE ${cfg.database.name} OWNER ${cfg.database.user} ENCODING 'UTF8' TEMPLATE template0"
${pkgs.sudo}/bin/sudo -u ${cfg.user} \
${config.services.postgresql.package}/bin/psql -U ${cfg.database.user} \
-d ${cfg.database.name} -f "${runDir}/sql/restyaboard_with_empty_data.sql"
touch "${cfg.dataDir}/.db-initialized"
fi
''}
'';
};
systemd.timers.restya-board = {
description = "restya-board scripts for e.g. email notification";
wantedBy = [ "timers.target" ];
after = [ "restya-board-init.service" ];
requires = [ "restya-board-init.service" ];
timerConfig = {
OnUnitInactiveSec = "60s";
Unit = "restya-board-timers.service";
};
};
systemd.services.restya-board-timers = {
description = "restya-board scripts for e.g. email notification";
serviceConfig.Type = "oneshot";
serviceConfig.User = cfg.user;
after = [ "restya-board-init.service" ];
requires = [ "restya-board-init.service" ];
script = ''
/bin/sh ${runDir}/server/php/shell/instant_email_notification.sh 2> /dev/null || true
/bin/sh ${runDir}/server/php/shell/periodic_email_notification.sh 2> /dev/null || true
/bin/sh ${runDir}/server/php/shell/imap.sh 2> /dev/null || true
/bin/sh ${runDir}/server/php/shell/webhook.sh 2> /dev/null || true
/bin/sh ${runDir}/server/php/shell/card_due_notification.sh 2> /dev/null || true
'';
};
users.extraUsers.restya-board = {
isSystemUser = true;
createHome = false;
home = runDir;
group = "restya-board";
};
users.extraGroups.restya-board = {};
services.postgresql.enable = mkIf (isNull cfg.database.host) true;
services.postgresql.identMap = optionalString (isNull cfg.database.host)
''
restya-board-users restya-board restya_board
'';
services.postgresql.authentication = optionalString (isNull cfg.database.host)
''
local restya_board all ident map=restya-board-users
'';
};
}
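
A minimal, hedged sketch for enabling the module defined above with a local PostgreSQL database (the default when `database.host` is null); the server name is illustrative:

```nix
{
  services.restya-board = {
    enable = true;
    virtualHost = {
      serverName = "board.example.org";   # hypothetical host name
      listenHost = "0.0.0.0";
      listenPort = 3000;
    };
  };
}
```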

View File

@ -188,8 +188,7 @@ let
/* date format to be used while writing to the owncloud logfile */
'logdateformat' => 'F d, Y H:i:s',
/* timezone used while writing to the owncloud logfile (default: UTC) */
'logtimezone' => '${serverInfo.fullConfig.time.timeZone}',
${tzSetting}
/* Append all database queries and parameters to the log file.
(watch out, this option can increase the size of your log file)*/
@ -339,6 +338,31 @@ let
'';
tzSetting = let tz = serverInfo.fullConfig.time.timeZone; in optionalString (!isNull tz) ''
/* timezone used while writing to the owncloud logfile (default: UTC) */
'logtimezone' => '${tz}',
'';
postgresql = serverInfo.fullConfig.services.postgresql.package;
setupDb = pkgs.writeScript "setup-owncloud-db" ''
#!${pkgs.stdenv.shell}
PATH="${postgresql}/bin"
createuser --no-superuser --no-createdb --no-createrole "${config.dbUser}" || true
createdb "${config.dbName}" -O "${config.dbUser}" || true
psql -U postgres -d postgres -c "alter user ${config.dbUser} with password '${config.dbPassword}';" || true
QUERY="CREATE TABLE appconfig
( appid VARCHAR( 255 ) NOT NULL
, configkey VARCHAR( 255 ) NOT NULL
, configvalue VARCHAR( 255 ) NOT NULL
);
GRANT ALL ON appconfig TO ${config.dbUser};
ALTER TABLE appconfig OWNER TO ${config.dbUser};"
psql -h "/tmp" -U postgres -d ${config.dbName} -Atw -c "$QUERY" || true
'';
in
rec {
@ -353,7 +377,7 @@ rec {
''}
<Directory ${config.package}>
${builtins.readFile "${config.package}/.htaccess"}
Include ${config.package}/.htaccess
</Directory>
'';
@ -373,7 +397,7 @@ rec {
defaultText = "pkgs.owncloud70";
example = literalExample "pkgs.owncloud70";
description = ''
PostgreSQL package to use.
ownCloud package to use.
'';
};
@ -574,13 +598,7 @@ rec {
chmod -R o-rwx ${config.dataDir}
chown -R wwwrun:wwwrun ${config.dataDir}
${pkgs.postgresql}/bin/createuser -s -r postgres
${pkgs.postgresql}/bin/createuser --no-superuser --no-createdb --no-createrole "${config.dbUser}" || true
${pkgs.postgresql}/bin/createdb "${config.dbName}" -O "${config.dbUser}" || true
${pkgs.sudo}/bin/sudo -u postgres ${pkgs.postgresql}/bin/psql -U postgres -d postgres -c "alter user ${config.dbUser} with password '${config.dbPassword}';" || true
QUERY="CREATE TABLE appconfig (appid VARCHAR( 255 ) NOT NULL ,configkey VARCHAR( 255 ) NOT NULL ,configvalue VARCHAR( 255 ) NOT NULL); GRANT ALL ON appconfig TO ${config.dbUser}; ALTER TABLE appconfig OWNER TO ${config.dbUser};"
${pkgs.sudo}/bin/sudo -u postgres ${pkgs.postgresql}/bin/psql -h "/tmp" -U postgres -d ${config.dbName} -Atw -c "$QUERY" || true
${pkgs.sudo}/bin/sudo -u postgres ${setupDb}
fi
if [ -e ${config.package}/config/ca-bundle.crt ]; then
@ -591,7 +609,11 @@ rec {
chown wwwrun:wwwrun ${config.dataDir}/owncloud.log || true
QUERY="INSERT INTO groups (gid) values('admin'); INSERT INTO users (uid,password) values('${config.adminUser}','${builtins.hashString "sha1" config.adminPassword}'); INSERT INTO group_user (gid,uid) values('admin','${config.adminUser}');"
${pkgs.sudo}/bin/sudo -u postgres ${pkgs.postgresql}/bin/psql -h "/tmp" -U postgres -d ${config.dbName} -Atw -c "$QUERY" || true
QUERY="INSERT INTO groups (gid) values('admin');
INSERT INTO users (uid,password)
values('${config.adminUser}','${builtins.hashString "sha1" config.adminPassword}');
INSERT INTO group_user (gid,uid)
values('admin','${config.adminUser}');"
${pkgs.sudo}/bin/sudo -u postgres ${postgresql}/bin/psql -h "/tmp" -U postgres -d ${config.dbName} -Atw -c "$QUERY" || true
'';
}

View File

@ -113,7 +113,6 @@ in
tasksDirectory = mkOption {
type = types.path;
default = "${inginious}/lib/python2.7/site-packages/inginious/tasks";
example = "/var/lib/INGInious/tasks";
description = ''
Path to the tasks folder.
@ -218,6 +217,7 @@ in
# Common
{
services.lighttpd.inginious.tasksDirectory = mkDefault "${inginious}/lib/python2.7/site-packages/inginious/tasks";
# To access inginious tools (like inginious-test-task)
environment.systemPackages = [ inginious ];

View File

@ -181,10 +181,10 @@ in {
};
backend = mkOption {
type = types.enum [ "glx" "xrender" ];
type = types.enum [ "glx" "xrender" "xr_glx_hybrid" ];
default = "xrender";
description = ''
Backend to use: <literal>glx</literal> or <literal>xrender</literal>.
Backend to use: <literal>glx</literal>, <literal>xrender</literal> or <literal>xr_glx_hybrid</literal>.
'';
};
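
A one-line sketch of selecting the newly allowed backend; `services.compton.enable` is assumed from the surrounding module:

```nix
{
  services.compton = {
    enable = true;              # assumed enable option
    backend = "xr_glx_hybrid";  # newly accepted value
  };
}
```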

View File

@ -87,11 +87,11 @@ in
default = mkOption {
type = types.str;
default = "";
example = "none";
default = "none";
example = "plasma5";
description = "Default desktop manager loaded if none have been chosen.";
apply = defaultDM:
if defaultDM == "" && cfg.session.list != [] then
if defaultDM == "none" && cfg.session.list != [] then
(head cfg.session.list).name
else if any (w: w.name == defaultDM) cfg.session.list then
defaultDM
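
A sketch of the behavior after this change, assuming the option is `services.xserver.desktopManager.default` (the full path lies outside the hunk):

```nix
{
  # Explicitly pick a desktop manager; leaving this at the new default "none"
  # falls back to the first enabled session, as the apply function above shows.
  services.xserver.desktopManager.default = "plasma5";
}
```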

View File

@ -61,7 +61,7 @@ in
'';
}];
security.wrappers.e_freqset.source = "${e.enlightenment.out}/bin/e_freqset";
security.wrappers = (import (builtins.toPath "${e.enlightenment}/e-wrappers.nix")).security.wrappers;
environment.etc = singleton
{ source = xcfg.xkbDir;

View File

@ -94,6 +94,8 @@ in {
services.udisks2.enable = true;
services.accounts-daemon.enable = true;
services.geoclue2.enable = mkDefault true;
services.dleyna-renderer.enable = mkDefault true;
services.dleyna-server.enable = mkDefault true;
services.gnome3.at-spi2-core.enable = true;
services.gnome3.evolution-data-server.enable = true;
services.gnome3.gnome-disks.enable = mkDefault true;
@ -106,6 +108,7 @@ in {
services.gnome3.seahorse.enable = mkDefault true;
services.gnome3.sushi.enable = mkDefault true;
services.gnome3.tracker.enable = mkDefault true;
services.gnome3.tracker-miners.enable = mkDefault true;
hardware.pulseaudio.enable = mkDefault true;
services.telepathy.enable = mkDefault true;
networking.networkmanager.enable = mkDefault true;
@ -150,7 +153,7 @@ in {
export XDG_DATA_DIRS=$XDG_DATA_DIRS''${XDG_DATA_DIRS:+:}${mimeAppsList}/share
# Override gsettings-desktop-schema
export XDG_DATA_DIRS=${nixos-gsettings-desktop-schemas}/share/gsettings-schemas/nixos-gsettings-overrides''${XDG_DATA_DIRS:+:}$XDG_DATA_DIRS
export NIX_GSETTINGS_OVERRIDES_DIR=${nixos-gsettings-desktop-schemas}/share/gsettings-schemas/nixos-gsettings-overrides/glib-2.0/schemas
# Let nautilus find extensions
export NAUTILUS_EXTENSION_DIR=${config.system.path}/lib/nautilus/extensions-3.0/

View File

@ -195,7 +195,12 @@ in
boot.plymouth = {
theme = mkDefault "breeze";
themePackages = mkDefault [ pkgs.breeze-plymouth ];
themePackages = mkDefault [
(pkgs.breeze-plymouth.override {
nixosBranding = true;
nixosVersion = config.system.nixosRelease;
})
];
};
security.pam.services.kde = { allowNullPassword = true; };

View File

@ -59,12 +59,6 @@ let
# Now it should be safe to assume that the script was called with the
# expected parameters.
${optionalString cfg.displayManager.logToJournal ''
if [ -z "$_DID_SYSTEMD_CAT" ]; then
_DID_SYSTEMD_CAT=1 exec ${config.systemd.package}/bin/systemd-cat -t xsession -- "$0" "$@"
fi
''}
. /etc/profile
cd "$HOME"
@ -72,16 +66,23 @@ let
sessionType="$1"
if [ "$sessionType" = default ]; then sessionType=""; fi
${optionalString (!cfg.displayManager.job.logsXsession && !cfg.displayManager.logToJournal) ''
exec > ~/.xsession-errors 2>&1
''}
${optionalString cfg.startDbusSession ''
if test -z "$DBUS_SESSION_BUS_ADDRESS"; then
exec ${pkgs.dbus.dbus-launch} --exit-with-session "$0" "$sessionType"
fi
''}
${optionalString cfg.displayManager.job.logToJournal ''
if [ -z "$_DID_SYSTEMD_CAT" ]; then
export _DID_SYSTEMD_CAT=1
exec ${config.systemd.package}/bin/systemd-cat -t xsession "$0" "$sessionType"
fi
''}
${optionalString cfg.displayManager.job.logToFile ''
exec &> >(tee ~/.xsession-errors)
''}
# Start PulseAudio if enabled.
${optionalString (config.hardware.pulseaudio.enable) ''
${optionalString (!config.hardware.pulseaudio.systemWide)
@ -306,26 +307,24 @@ in
description = "Additional environment variables needed by the display manager.";
};
logsXsession = mkOption {
logToFile = mkOption {
type = types.bool;
default = false;
description = ''
Whether the display manager redirects the
output of the session script to
<filename>~/.xsession-errors</filename>.
Whether the display manager redirects the output of the
session script to <filename>~/.xsession-errors</filename>.
'';
};
};
logToJournal = mkOption {
type = types.bool;
default = true;
description = ''
Whether the display manager redirects the output of the
session script to the systemd journal.
'';
};
logToJournal = mkOption {
type = types.bool;
default = true;
description = ''
By default, the stdout/stderr of sessions is written
to <filename>~/.xsession-errors</filename>. When this option
is enabled, it will instead be written to the journal.
'';
};
};
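
A sketch of the renamed logging options; both paths follow the `services.xserver.displayManager.job` attribute used elsewhere in this diff:

```nix
{
  # Write session output to ~/.xsession-errors instead of the systemd journal.
  services.xserver.displayManager.job.logToFile = true;
  services.xserver.displayManager.job.logToJournal = false;
}
```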

View File

@ -122,11 +122,8 @@ in
"rc-local.service"
"systemd-machined.service"
"systemd-user-sessions.service"
"getty@tty1.service"
];
systemd.services."getty@tty1".enable = false;
systemd.services.display-manager.conflicts = [ "getty@tty1.service" ];
systemd.services.display-manager.serviceConfig = {
# Restart = "always"; - already defined in xserver.nix
KillMode = "mixed";

View File

@ -190,7 +190,7 @@ in
services.xserver.displayManager.slim.enable = false;
services.xserver.displayManager.job = {
logsXsession = true;
logToFile = true;
# lightdm relaunches itself via just `lightdm`, so needs to be on the PATH
execCmd = ''

View File

@ -205,7 +205,7 @@ in
services.xserver.displayManager.slim.enable = false;
services.xserver.displayManager.job = {
logsXsession = true;
logToFile = true;
environment = {
# Load themes from system environment

View File

@ -220,7 +220,7 @@ in
'';
services.xserver.displayManager.job = {
logsXsession = true;
logToFile = true;
execCmd = ''
${optionalString (cfg.pulseaudio)

View File

@ -0,0 +1,36 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.fractalart;
in {
options.services.fractalart = {
enable = mkOption {
type = types.bool;
default = false;
example = true;
description = "Enable FractalArt for generating colorful wallpapers on login";
};
width = mkOption {
type = types.nullOr types.int;
default = null;
example = 1920;
description = "Screen width";
};
height = mkOption {
type = types.nullOr types.int;
default = null;
example = 1080;
description = "Screen height";
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.haskellPackages.FractalArt ];
services.xserver.displayManager.sessionCommands =
"${pkgs.haskellPackages.FractalArt}/bin/FractalArt --no-bg -f .background-image"
+ optionalString (cfg.width != null) " -w ${toString cfg.width}"
+ optionalString (cfg.height != null) " -h ${toString cfg.height}";
};
}
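
A usage sketch for the module above; width and height are optional and are simply omitted from the command line when left at null:

```nix
{
  services.fractalart = {
    enable = true;
    width  = 1920;   # optional; passed as -w when set
    height = 1080;   # optional; passed as -h when set
  };
}
```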

View File

@ -198,6 +198,13 @@ in {
environment.systemPackages = [ pkgs.xorg.xf86inputlibinput ];
environment.etc = [
(let cfgPath = "X11/xorg.conf.d/40-libinput.conf"; in {
source = pkgs.xorg.xf86inputlibinput.out + "/share/" + cfgPath;
target = cfgPath;
})
];
services.udev.packages = [ pkgs.libinput ];
services.xserver.config =

View File

@ -61,7 +61,9 @@ in
example = "wmii";
description = "Default window manager loaded if none have been chosen.";
apply = defaultWM:
if any (w: w.name == defaultWM) cfg.session then
if defaultWM == "none" && cfg.session != [] then
(head cfg.session).name
else if any (w: w.name == defaultWM) cfg.session then
defaultWM
else
throw "Default window manager (${defaultWM}) not found.";

View File

@ -0,0 +1,25 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.xserver.windowManager.evilwm;
in
{
###### interface
options = {
services.xserver.windowManager.evilwm.enable = mkEnableOption "evilwm";
};
###### implementation
config = mkIf cfg.enable {
services.xserver.windowManager.session = singleton {
name = "evilwm";
start = ''
${pkgs.evilwm}/bin/evilwm &
waitPID=$!
'';
};
environment.systemPackages = [ pkgs.evilwm ];
};
}
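
Enabling the window manager defined above is a single option:

```nix
{
  services.xserver.windowManager.evilwm.enable = true;
}
```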

View File

@ -161,6 +161,15 @@ in
'';
};
plainX = mkOption {
type = types.bool;
default = false;
description = ''
Whether the X11 session can be plain (without a desktop or window
manager), in which case the Xsession script is used as a fallback.
'';
};
autorun = mkOption {
type = types.bool;
default = true;
@ -552,6 +561,11 @@ in
+ "${toString (length primaryHeads)} heads set to primary: "
+ concatMapStringsSep ", " (x: x.output) primaryHeads;
})
{ assertion = cfg.desktopManager.default == "none" && cfg.windowManager.default == "none" -> cfg.plainX;
message = "Either the desktop manager or the window manager shouldn't be `none`! "
+ "To explicitly allow this, you can also set `services.xserver.plainX` to `true`. "
+ "The `default` value looks for enabled WMs/DMs and select the first one.";
}
];
environment.etc =
@ -686,7 +700,6 @@ in
Section "InputClass"
Identifier "Keyboard catchall"
MatchIsKeyboard "on"
Option "XkbRules" "base"
Option "XkbModel" "${cfg.xkbModel}"
Option "XkbLayout" "${cfg.layout}"
Option "XkbOptions" "${cfg.xkbOptions}"

View File

@ -16,6 +16,10 @@ my $reloadListFile = "/run/systemd/reload-list";
my $action = shift @ARGV;
if ("@localeArchive@" ne "") {
$ENV{LOCALE_ARCHIVE} = "@localeArchive@";
}
if (!defined $action || ($action ne "switch" && $action ne "boot" && $action ne "test" && $action ne "dry-activate")) {
print STDERR <<EOF;
Usage: $0 [switch|boot|test]
@ -65,7 +69,8 @@ $SIG{PIPE} = "IGNORE";
sub getActiveUnits {
# FIXME: use D-Bus or whatever to query this, since parsing the
# output of list-units is likely to break.
my $lines = `LANG= systemctl list-units --full --no-legend`;
# Use the current version of the systemctl binary before the daemon is re-executed.
my $lines = `LANG= /run/current-system/sw/bin/systemctl list-units --full --no-legend`;
my $res = {};
foreach my $line (split '\n', $lines) {
chomp $line;
@ -262,7 +267,8 @@ while (my ($unit, $state) = each %{$activePrev}) {
sub pathToUnitName {
my ($path) = @_;
open my $cmd, "-|", "@systemd@/bin/systemd-escape", "--suffix=mount", "-p", $path
# Use the current version of the systemd-escape binary (from the running system) before the daemon is re-executed.
open my $cmd, "-|", "/run/current-system/sw/bin/systemd-escape", "--suffix=mount", "-p", $path
or die "Unable to escape $path!\n";
my $escaped = join "", <$cmd>;
chomp $escaped;
@ -364,7 +370,8 @@ syslog(LOG_NOTICE, "switching to system configuration $out");
if (scalar (keys %unitsToStop) > 0) {
print STDERR "stopping the following units: ", join(", ", @unitsToStopFiltered), "\n"
if scalar @unitsToStopFiltered;
system("systemctl", "stop", "--", sort(keys %unitsToStop)); # FIXME: ignore errors?
# Use the current version of the systemctl binary before the daemon is re-executed.
system("/run/current-system/sw/bin/systemctl", "stop", "--", sort(keys %unitsToStop)); # FIXME: ignore errors?
}
print STDERR "NOT restarting the following changed units: ", join(", ", sort(keys %unitsToSkip)), "\n"

View File

@ -26,7 +26,6 @@ let
cloner false config.nesting.children
++ cloner true config.nesting.clone;
systemBuilder =
let
kernelPath = "${config.boot.kernelPackages.kernel}/" +
@ -83,6 +82,7 @@ let
done
mkdir $out/bin
export localeArchive="${config.i18n.glibcLocales}/lib/locale/locale-archive"
substituteAll ${./switch-to-configuration.pl} $out/bin/switch-to-configuration
chmod +x $out/bin/switch-to-configuration

View File

@ -5,7 +5,13 @@
with lib;
let kernel = config.boot.kernelPackages.kernel; in
let
kernel = config.boot.kernelPackages.kernel;
# FIXME: figure out a common place for this instead of copy pasting
serialDevice = if pkgs.stdenv.isi686 || pkgs.stdenv.isx86_64 then "ttyS0"
else if pkgs.stdenv.isArm || pkgs.stdenv.isAarch64 then "ttyAMA0"
else throw "Unknown QEMU serial device for system '${pkgs.stdenv.system}'";
in
{
@ -22,8 +28,8 @@ let kernel = config.boot.kernelPackages.kernel; in
systemd.services.backdoor =
{ wantedBy = [ "multi-user.target" ];
requires = [ "dev-hvc0.device" "dev-ttyS0.device" ];
after = [ "dev-hvc0.device" "dev-ttyS0.device" ];
requires = [ "dev-hvc0.device" "dev-${serialDevice}.device" ];
after = [ "dev-hvc0.device" "dev-${serialDevice}.device" ];
script =
''
export USER=root
@ -40,7 +46,7 @@ let kernel = config.boot.kernelPackages.kernel; in
cd /tmp
exec < /dev/hvc0 > /dev/hvc0
while ! exec 2> /dev/ttyS0; do sleep 0.1; done
while ! exec 2> /dev/${serialDevice}; do sleep 0.1; done
echo "connecting to host..." >&2
stty -F /dev/hvc0 raw -echo # prevent nl -> cr/nl conversion
echo
@ -49,10 +55,10 @@ let kernel = config.boot.kernelPackages.kernel; in
serviceConfig.KillSignal = "SIGHUP";
};
# Prevent agetty from being instantiated on ttyS0, since it
# interferes with the backdoor (writes to ttyS0 will randomly fail
# Prevent agetty from being instantiated on ${serialDevice}, since it
# interferes with the backdoor (writes to ${serialDevice} will randomly fail
# with EIO). Likewise for hvc0.
systemd.services."serial-getty@ttyS0".enable = false;
systemd.services."serial-getty@${serialDevice}".enable = false;
systemd.services."serial-getty@hvc0".enable = false;
boot.initrd.preDeviceCommands =
@ -88,7 +94,7 @@ let kernel = config.boot.kernelPackages.kernel; in
# Panic if an error occurs in stage 1 (rather than waiting for
# user intervention).
boot.kernelParams =
[ "console=ttyS0" "panic=1" "boot.panic_on_fail" ];
[ "console=${serialDevice}" "panic=1" "boot.panic_on_fail" ];
# `xwininfo' is used by the test driver to query open windows.
environment.systemPackages = [ pkgs.xorg.xwininfo ];

View File

@ -27,7 +27,6 @@ in
};
config = mkIf config.hardware.parallels.enable {
services.xserver = {
drivers = singleton
{ name = "prlvideo"; modules = [ prl-tools ]; libPath = [ prl-tools ]; };
@ -55,7 +54,7 @@ in
boot.extraModulePackages = [ prl-tools ];
boot.kernelModules = [ "prl_tg" "prl_eth" "prl_fs" "prl_fs_freeze" "acpi_memhotplug" ];
boot.kernelModules = [ "prl_tg" "prl_eth" "prl_fs" "prl_fs_freeze" ];
services.timesyncd.enable = false;
@ -89,5 +88,50 @@ in
};
};
systemd.user.services = {
prlcc = {
description = "Parallels Control Center";
wantedBy = [ "graphical-session.target" ];
serviceConfig = {
ExecStart = "${prl-tools}/bin/prlcc";
};
};
prldnd = {
description = "Parallels Control Center";
wantedBy = [ "graphical-session.target" ];
serviceConfig = {
ExecStart = "${prl-tools}/bin/prldnd";
};
};
prl_wmouse_d = {
description = "Parallels Walking Mouse Daemon";
wantedBy = [ "graphical-session.target" ];
serviceConfig = {
ExecStart = "${prl-tools}/bin/prl_wmouse_d";
};
};
prlcp = {
description = "Parallels CopyPaste Tool";
wantedBy = [ "graphical-session.target" ];
serviceConfig = {
ExecStart = "${prl-tools}/bin/prlcp";
};
};
prlsga = {
description = "Parallels Shared Guest Applications Tool";
wantedBy = [ "graphical-session.target" ];
serviceConfig = {
ExecStart = "${prl-tools}/bin/prlsga";
};
};
prlshprof = {
description = "Parallels Shared Profile Tool";
wantedBy = [ "graphical-session.target" ];
serviceConfig = {
ExecStart = "${prl-tools}/bin/prlshprof";
};
};
};
};
}

View File

@ -14,6 +14,17 @@ with lib;
let
qemu = config.system.build.qemu or pkgs.qemu_test;
qemuKvm = {
"i686-linux" = "${qemu}/bin/qemu-kvm";
"x86_64-linux" = "${qemu}/bin/qemu-kvm -cpu kvm64";
"armv7l-linux" = "${qemu}/bin/qemu-system-arm -enable-kvm -machine virt -cpu host";
"aarch64-linux" = "${qemu}/bin/qemu-system-aarch64 -enable-kvm -machine virt -cpu host";
}.${pkgs.stdenv.system};
# FIXME: figure out a common place for this instead of copy pasting
serialDevice = if pkgs.stdenv.isi686 || pkgs.stdenv.isx86_64 then "ttyS0"
else if pkgs.stdenv.isArm || pkgs.stdenv.isAarch64 then "ttyAMA0"
else throw "Unknown QEMU serial device for system '${pkgs.stdenv.system}'";
vmName =
if config.networking.hostName == ""
@ -23,7 +34,7 @@ let
cfg = config.virtualisation;
qemuGraphics = if cfg.graphics then "" else "-nographic";
kernelConsole = if cfg.graphics then "" else "console=ttyS0";
kernelConsole = if cfg.graphics then "" else "console=${serialDevice}";
ttys = [ "tty1" "tty2" "tty3" "tty4" "tty5" "tty6" ];
# Shell script to start the VM.
@ -72,11 +83,10 @@ let
'')}
# Start QEMU.
exec ${qemu}/bin/qemu-kvm \
exec ${qemuKvm} \
-name ${vmName} \
-m ${toString config.virtualisation.memorySize} \
-smp ${toString config.virtualisation.cores} \
${optionalString (pkgs.stdenv.system == "x86_64-linux") "-cpu kvm64"} \
${concatStringsSep " " config.virtualisation.qemu.networkingOptions} \
-virtfs local,path=/nix/store,security_model=none,mount_tag=store \
-virtfs local,path=$TMPDIR/xchg,security_model=none,mount_tag=xchg \
@ -434,7 +444,9 @@ in
virtualisation.pathsInNixDB = [ config.system.build.toplevel ];
virtualisation.qemu.options = [ "-vga std" "-usbdevice tablet" ];
# FIXME: Figure out how to make this work on non-x86
virtualisation.qemu.options =
mkIf (pkgs.stdenv.isi686 || pkgs.stdenv.isx86_64) [ "-vga std" "-usbdevice tablet" ];
# Mount the host filesystem via 9P, and bind-mount the Nix store
# of the host into our own filesystem. We use mkVMOverride to

View File

@ -95,6 +95,7 @@ in rec {
#(all nixos.tests.lightdm)
(all nixos.tests.login)
(all nixos.tests.misc)
(all nixos.tests.mutableUsers)
(all nixos.tests.nat.firewall)
(all nixos.tests.nat.standalone)
(all nixos.tests.networking.scripted.loopback)
@ -109,11 +110,13 @@ in rec {
(all nixos.tests.nfs3)
(all nixos.tests.nfs4)
(all nixos.tests.openssh)
(all nixos.tests.php-pcre)
(all nixos.tests.printing)
(all nixos.tests.proxy)
(all nixos.tests.sddm.default)
(all nixos.tests.simple)
(all nixos.tests.slim)
(all nixos.tests.switchTest)
(all nixos.tests.udisks2)
(all nixos.tests.xfce)

View File

@ -40,6 +40,7 @@ in rec {
nat
nfs3
openssh
php-pcre
proxy
simple;
installer = {

View File

@ -235,6 +235,7 @@ in rec {
tests.containers-tmpfs = callTest tests/containers-tmpfs.nix {};
tests.containers-hosts = callTest tests/containers-hosts.nix {};
tests.containers-macvlans = callTest tests/containers-macvlans.nix {};
tests.couchdb = callTest tests/couchdb.nix {};
tests.docker = hydraJob (import tests/docker.nix { system = "x86_64-linux"; });
tests.docker-edge = hydraJob (import tests/docker-edge.nix { system = "x86_64-linux"; });
tests.dovecot = callTest tests/dovecot.nix {};
@ -262,7 +263,7 @@ in rec {
tests.hibernate = callTest tests/hibernate.nix {};
tests.hound = callTest tests/hound.nix {};
tests.i3wm = callTest tests/i3wm.nix {};
tests.initrd-network-ssh = callTest tests/initrd-network-ssh.nix {};
tests.initrd-network-ssh = callTest tests/initrd-network-ssh {};
tests.installer = callSubTests tests/installer.nix {};
tests.influxdb = callTest tests/influxdb.nix {};
tests.ipv6 = callTest tests/ipv6.nix {};
@ -290,6 +291,7 @@ in rec {
tests.mongodb = callTest tests/mongodb.nix {};
tests.mumble = callTest tests/mumble.nix {};
tests.munin = callTest tests/munin.nix {};
tests.mutableUsers = callTest tests/mutable-users.nix {};
tests.mysql = callTest tests/mysql.nix {};
tests.mysqlBackup = callTest tests/mysql-backup.nix {};
tests.mysqlReplication = callTest tests/mysql-replication.nix {};
@ -303,12 +305,15 @@ in rec {
tests.nfs3 = callTest tests/nfs.nix { version = 3; };
tests.nfs4 = callTest tests/nfs.nix { version = 4; };
tests.nginx = callTest tests/nginx.nix { };
tests.nghttpx = callTest tests/nghttpx.nix { };
tests.leaps = callTest tests/leaps.nix { };
tests.nsd = callTest tests/nsd.nix {};
tests.openssh = callTest tests/openssh.nix {};
tests.owncloud = callTest tests/owncloud.nix {};
tests.pam-oath-login = callTest tests/pam-oath-login.nix {};
#tests.panamax = hydraJob (import tests/panamax.nix { system = "x86_64-linux"; });
tests.peerflix = callTest tests/peerflix.nix {};
tests.php-pcre = callTest tests/php-pcre.nix {};
tests.postgresql = callSubTests tests/postgresql.nix {};
tests.pgmanage = callTest tests/pgmanage.nix {};
tests.postgis = callTest tests/postgis.nix {};
@ -327,6 +332,7 @@ in rec {
tests.slim = callTest tests/slim.nix {};
tests.smokeping = callTest tests/smokeping.nix {};
tests.snapper = callTest tests/snapper.nix {};
tests.switchTest = callTest tests/switch-test.nix {};
tests.taskserver = callTest tests/taskserver.nix {};
tests.tomcat = callTest tests/tomcat.nix {};
tests.udisks2 = callTest tests/udisks2.nix {};

View File

@ -21,11 +21,16 @@ import ./make-test.nix ({ pkgs, ... }: {
# the boot process kills any kthread by accident, like what happened in
# issue #15226.
kcanary = compileKernelModule "kcanary" ''
#include <linux/version.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/signal.h>
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0)
#include <linux/sched/signal.h>
#endif
struct task_struct *canaryTask;

View File

@ -228,12 +228,12 @@ let
# Retrieved via:
# curl -s -I https://acme-v01.api.letsencrypt.org/terms \
# | sed -ne 's/^[Ll]ocation: *//p'
tosUrl = "https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf";
tosUrl = "https://letsencrypt.org/documents/2017.11.15-LE-SA-v1.2.pdf";
tosPath = builtins.head (builtins.match "https?://[^/]+(.*)" tosUrl);
tosFile = pkgs.fetchurl {
url = tosUrl;
sha256 = "08b2gacdz23mzji2pjr1pwnk82a84rzvr36isif7mmi9kydl6wv3";
sha256 = "0yvyckqzj0b1xi61sypcha82nanizzlm8yqy828h2jbza7cxi26c";
};
resolver = let
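When the Let's Encrypt subscriber agreement changes again, `tosUrl` and the fixed-output hash have to be refreshed together. A hedged sketch of that update, restating the `fetchurl` from the hunk above; the `nix-prefetch-url` step is the usual way to obtain such a sha256 but is an assumption here, not something this test prescribes:

```nix
# Illustrative only. Re-derive the current ToS URL as in the comment above:
#   curl -s -I https://acme-v01.api.letsencrypt.org/terms | sed -ne 's/^[Ll]ocation: *//p'
# then hash the document (assumption: nix-prefetch-url supplies the hash):
#   nix-prefetch-url "https://letsencrypt.org/documents/2017.11.15-LE-SA-v1.2.pdf"
tosFile = pkgs.fetchurl {
  url = tosUrl;  # must match the Location header returned above
  sha256 = "0yvyckqzj0b1xi61sypcha82nanizzlm8yqy828h2jbza7cxi26c";  # nix-prefetch-url output
};
```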

View File

@ -69,6 +69,12 @@ import ./make-test.nix ({ pkgs, ...} : {
$machine->succeed("ping -n -c 1 $ip6");
$machine->succeed("curl --fail http://[$ip6]/ > /dev/null");
# Check that nixos-container show-ip works in case of an ipv4 address with
# subnetmask in CIDR notation.
my $result = $machine->succeed("nixos-container show-ip webserver");
chomp $result;
$result eq $ip or die;
# Stop the container.
$machine->succeed("nixos-container stop webserver");
$machine->fail("curl --fail --connect-timeout 2 http://$ip/ > /dev/null");

56
nixos/tests/couchdb.nix Normal file
View File

@ -0,0 +1,56 @@
import ./make-test.nix ({ pkgs, lib, ...}:
with lib;
{
name = "couchdb";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ fpletz ];
};
nodes = {
couchdb1 =
{ pkgs, config, ... }:
{ environment.systemPackages = with pkgs; [ jq ];
services.couchdb.enable = true;
};
couchdb2 =
{ pkgs, config, ... }:
{ environment.systemPackages = with pkgs; [ jq ];
services.couchdb.enable = true;
services.couchdb.package = pkgs.couchdb2;
};
};
testScript = let
curlJqCheck = action: path: jqexpr: result:
pkgs.writeScript "curl-jq-check-${action}-${path}.sh" ''
RESULT=$(curl -X ${action} http://127.0.0.1:5984/${path} | jq -r '${jqexpr}')
echo $RESULT >&2
if [ "$RESULT" != "${result}" ]; then
exit 1
fi
'';
in ''
startAll;
$couchdb1->waitForUnit("couchdb.service");
$couchdb1->waitUntilSucceeds("${curlJqCheck "GET" "" ".couchdb" "Welcome"}");
$couchdb1->waitUntilSucceeds("${curlJqCheck "GET" "_all_dbs" ". | length" "2"}");
$couchdb1->succeed("${curlJqCheck "PUT" "foo" ".ok" "true"}");
$couchdb1->succeed("${curlJqCheck "GET" "_all_dbs" ". | length" "3"}");
$couchdb1->succeed("${curlJqCheck "DELETE" "foo" ".ok" "true"}");
$couchdb1->succeed("${curlJqCheck "GET" "_all_dbs" ". | length" "2"}");
$couchdb2->waitForUnit("couchdb.service");
$couchdb2->waitUntilSucceeds("${curlJqCheck "GET" "" ".couchdb" "Welcome"}");
$couchdb2->waitUntilSucceeds("${curlJqCheck "GET" "_all_dbs" ". | length" "0"}");
$couchdb2->succeed("${curlJqCheck "PUT" "foo" ".ok" "true"}");
$couchdb2->succeed("${curlJqCheck "GET" "_all_dbs" ". | length" "1"}");
$couchdb2->succeed("${curlJqCheck "DELETE" "foo" ".ok" "true"}");
$couchdb2->succeed("${curlJqCheck "GET" "_all_dbs" ". | length" "0"}");
'';
})
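For readability, here is roughly what one of the scripts generated by `curlJqCheck` in the new CouchDB test looks like once the interpolations are applied; the expansion below corresponds to the `"_all_dbs"` check against `couchdb1`:

```nix
# Expansion of: curlJqCheck "GET" "_all_dbs" ". | length" "2"
pkgs.writeScript "curl-jq-check-GET-_all_dbs.sh" ''
  RESULT=$(curl -X GET http://127.0.0.1:5984/_all_dbs | jq -r '. | length')
  echo $RESULT >&2
  if [ "$RESULT" != "2" ]; then
    exit 1
  fi
''
```

The resulting store script is run on the node via `waitUntilSucceeds` or `succeed`, so any jq result that does not match the expected value fails the test.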

View File

@ -1,19 +1,6 @@
import ./make-test.nix ({ pkgs, lib, ... }:
import ../make-test.nix ({ pkgs, lib, ... }:
let
keys = pkgs.runCommand "gen-keys" {
outputs = [ "out" "dbPub" "dbPriv" "sshPub" "sshPriv" ];
buildInputs = with pkgs; [ dropbear openssh ];
}
''
touch $out
dropbearkey -t rsa -f $dbPriv -s 4096 | sed -n 2p > $dbPub
ssh-keygen -q -t rsa -b 4096 -N "" -f client
mv client $sshPriv
mv client.pub $sshPub
'';
in {
{
name = "initrd-network-ssh";
meta = with lib.maintainers; {
maintainers = [ willibutz ];
@ -32,9 +19,9 @@ in {
enable = true;
ssh = {
enable = true;
authorizedKeys = [ "${readFile keys.sshPub}" ];
authorizedKeys = [ "${readFile ./openssh.pub}" ];
port = 22;
hostRSAKey = keys.dbPriv;
hostRSAKey = ./dropbear.priv;
};
};
boot.initrd.preLVMCommands = ''
@ -56,7 +43,7 @@ in {
"${toString (head (splitString " " (
toString (elemAt (splitString "\n" config.networking.extraHosts) 2)
)))} "
"${readFile keys.dbPub}"
"${readFile ./dropbear.pub}"
];
};
};
@ -65,7 +52,7 @@ in {
testScript = ''
startAll;
$client->waitForUnit("network.target");
$client->copyFileFromHost("${keys.sshPriv}","/etc/sshKey");
$client->copyFileFromHost("${./openssh.priv}","/etc/sshKey");
$client->succeed("chmod 0600 /etc/sshKey");
$client->waitUntilSucceeds("ping -c 1 server");
$client->succeed("ssh -i /etc/sshKey -o UserKnownHostsFile=/etc/knownHosts server 'touch /fnord'");

Binary file not shown.

Some files were not shown because too many files have changed in this diff.