Merge commit staging+systemd into closure-size
Many non-conflict problems weren't (fully) resolved in this commit yet.
commit 5227fb1dd5
CONTRIBUTING.md | 12 (new file)
@@ -0,0 +1,12 @@
# How to contribute

## Opening issues

* Make sure you have a [GitHub account](https://github.com/signup/free)
* [Submit an issue](https://github.com/NixOS/nixpkgs/issues) - assuming one does not already exist.
* Clearly describe the issue, including steps to reproduce when it is a bug.
* Include information about which version of nixpkgs and Nix you are using (nixos-version or git revision).

## Submitting changes

See the nixpkgs manual for details on how to [Submit changes to nixpkgs](http://hydra.nixos.org/job/nixpkgs/trunk/manual/latest/download-by-type/doc/manual#chap-submitting-changes).
README.md | 36
@@ -1,12 +1,31 @@
[<img src="http://nixos.org/logo/nixos-hires.png" width="500px" alt="logo" />
](https://nixos.org/nixos)
[<img src="http://nixos.org/logo/nixos-hires.png" width="500px" alt="logo" />](https://nixos.org/nixos)

[![Build Status](https://travis-ci.org/NixOS/nixpkgs.svg?branch=master)](https://travis-ci.org/NixOS/nixpkgs) [![Issue Stats](http://www.issuestats.com/github/nixos/nixpkgs/badge/pr)](http://www.issuestats.com/github/nixos/nixpkgs) [![Issue Stats](http://www.issuestats.com/github/nixos/nixpkgs/badge/issue)](http://www.issuestats.com/github/nixos/nixpkgs)
[![Build Status](https://travis-ci.org/NixOS/nixpkgs.svg?branch=master)](https://travis-ci.org/NixOS/nixpkgs)
[![Issue Stats](http://www.issuestats.com/github/nixos/nixpkgs/badge/pr)](http://www.issuestats.com/github/nixos/nixpkgs)
[![Issue Stats](http://www.issuestats.com/github/nixos/nixpkgs/badge/issue)](http://www.issuestats.com/github/nixos/nixpkgs)

Nixpkgs is a collection of packages for [Nix](https://nixos.org/nix/) package
manager.
Nixpkgs is a collection of packages for the [Nix](https://nixos.org/nix/) package
manager. It is periodically built and tested by the [hydra](http://hydra.nixos.org/)
build daemon as so-called channels. To get channel information via git, add
[nixpkgs-channels](https://github.com/NixOS/nixpkgs-channels.git) as a remote:

[NixOS](https://nixos.org/nixos/) linux distribution source code is located inside `nixos/` folder.
```
% git remote add channels git://github.com/NixOS/nixpkgs-channels.git
```

For stability and maximum binary package support, it is recommended to maintain
custom changes on top of one of the channels, e.g. `nixos-14.12` for the latest
release and `nixos-unstable` for the latest successful build of master:

```
% git remote update channels
% git rebase channels/nixos-14.12
```

For pull-requests, please rebase onto nixpkgs `master`.

[NixOS](https://nixos.org/nixos/) linux distribution source code is located inside
`nixos/` folder.

* [NixOS installation instructions](https://nixos.org/nixos/manual/#ch-installation)
* [Documentation (Nix Expression Language chapter)](https://nixos.org/nix/manual/#ch-expression-language)
@@ -14,13 +33,12 @@ manager.
* [Manual (NixOS)](https://nixos.org/nixos/manual/)
* [Continuous package builds for unstable/master](https://hydra.nixos.org/jobset/nixos/trunk-combined)
* [Continuous package builds for 14.12 release](https://hydra.nixos.org/jobset/nixos/release-14.12)
* [Continuous package builds for 15.09 release](https://hydra.nixos.org/jobset/nixos/release-15.09)
* [Tests for unstable/master](https://hydra.nixos.org/job/nixos/trunk-combined/tested#tabs-constituents)
* [Tests for 14.12 release](https://hydra.nixos.org/job/nixos/release-14.12/tested#tabs-constituents)
* [Tests for 15.09 release](https://hydra.nixos.org/job/nixos/release-15.09/tested#tabs-constituents)

Communication:

* [Mailing list](http://lists.science.uu.nl/mailman/listinfo/nix-dev)
* [IRC - #nixos on freenode.net](irc://irc.freenode.net/#nixos)

---
[![Throughput Graph](https://graphs.waffle.io/nixos/nixpkgs/throughput.svg)](https://waffle.io/nixos/nixpkgs/metrics)
@@ -5,7 +5,7 @@
<title>Coding conventions</title>

<section><title>Syntax</title>
<section xml:id="sec-syntax"><title>Syntax</title>

<itemizedlist>

@@ -169,8 +169,8 @@ stdenv.mkDerivation { ...
args: with args; <replaceable>...</replaceable>
</programlisting>

or

<programlisting>
{ stdenv, fetchurl, perl, ... }: <replaceable>...</replaceable>
</programlisting>
@@ -207,7 +207,7 @@ args.stdenv.mkDerivation (args // {
</section>

<section><title>Package naming</title>
<section xml:id="sec-package-naming"><title>Package naming</title>

<para>In Nixpkgs, there are generally three different names associated with a package:

@@ -256,6 +256,12 @@ bound to the variable name <varname>e2fsprogs</varname> in
a package named <literal>hello-svn</literal> by
<command>nix-env</command>.</para></listitem>

<listitem><para>If a package is fetched from a git commit, then
the version part of the name <emphasis>must</emphasis> be the date of that
(fetched) commit. The date must be in <literal>"YYYY-MM-DD"</literal> format.
Also add <literal>"git"</literal> to the name - e.g.,
<literal>"pkgname-git-2014-09-23"</literal>.</para></listitem>

<listitem><para>Dashes in the package name should be preserved
in new variable names, rather than converted to underscores
(which was the convention up to around 2013 and most names
@@ -286,7 +292,7 @@ dashes between words — not in camel case. For instance, it should be
<filename>allPackages.nix</filename> or
<filename>AllPackages.nix</filename>.</para>

<section><title>Hierarchy</title>
<section xml:id="sec-hierarchy"><title>Hierarchy</title>

<para>Each package should be stored in its own directory somewhere in
the <filename>pkgs/</filename> tree, i.e. in
@@ -445,12 +451,17 @@ splitting up an existing category.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>If it’s a <emphasis>desktop environment</emphasis>
(including <emphasis>window managers</emphasis>):</term>
<term>If it’s a <emphasis>desktop environment</emphasis>:</term>
<listitem>
<para><filename>desktops</filename> (e.g. <filename>kde</filename>, <filename>gnome</filename>, <filename>enlightenment</filename>)</para>
</listitem>
</varlistentry>
<varlistentry>
<term>If it’s a <emphasis>window manager</emphasis>:</term>
<listitem>
<para><filename>applications/window-managers</filename> (e.g. <filename>awesome</filename>, <filename>compiz</filename>, <filename>stumpwm</filename>)</para>
</listitem>
</varlistentry>
<varlistentry>
<term>If it’s an <emphasis>application</emphasis>:</term>
<listitem>
@@ -598,6 +609,57 @@ evaluate correctly.</para>
</section>

</section>

<section xml:id="sec-sources"><title>Fetching Sources</title>
<para>There are multiple ways to fetch a package source in nixpkgs. The
general guideline is that you should package sources with a high degree of
availability. Right now there is only one fetcher which has mirroring
support, and that is <literal>fetchurl</literal>. Note that you should also
prefer protocols which have a corresponding proxy environment variable.
</para>
<para>You can find many source fetch helpers in <literal>pkgs/build-support/fetch*</literal>.
</para>
<para>In the file <literal>pkgs/top-level/all-packages.nix</literal> you can
find fetch helpers; these have names of the form
<literal>fetchFrom*</literal>. The intention of these is to provide
snapshot fetches, but using the same API as some of the version-controlled
fetchers from <literal>pkgs/build-support/</literal>. As an example, going
from bad to good:
<itemizedlist>
<listitem>
<para>Uses <literal>git://</literal> which won't be proxied.
<programlisting>
src = fetchgit {
  url = "git://github.com/NixOS/nix.git";
  rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
  sha256 = "1cw5fszffl5pkpa6s6wjnkiv6lm5k618s32sp60kvmvpy7a2v9kg";
}
</programlisting>
</para>
</listitem>
<listitem>
<para>This is ok, but an archive fetch will still be faster.
<programlisting>
src = fetchgit {
  url = "https://github.com/NixOS/nix.git";
  rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
  sha256 = "1cw5fszffl5pkpa6s6wjnkiv6lm5k618s32sp60kvmvpy7a2v9kg";
}
</programlisting>
</para>
</listitem>
<listitem>
<para>Fetches a snapshot archive and you get the rev you want.
<programlisting>
src = fetchFromGitHub {
  owner = "NixOS";
  repo = "nix";
  rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
  sha256 = "04yri911rj9j19qqqn6m82266fl05pz98inasni0vxr1cf1gdgv9";
}
</programlisting>
</para>
</listitem>
</itemizedlist>
</para>
</section>
</chapter>
@@ -2,18 +2,19 @@
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-contributing">

<title>Contributing</title>
<title>Contributing to this documentation</title>

<para>If you make modifications to the manual, it's important to build the manual before contributing:</para>
<para>The DocBook sources of the Nixpkgs manual are in the <filename
xlink:href="https://github.com/NixOS/nixpkgs/tree/master/doc">doc</filename>
subdirectory of the Nixpkgs repository. If you make modifications to
the manual, it's important to build it before committing. You can do that as follows:

<orderedlist>
<screen>
$ cd /path/to/nixpkgs
$ nix-build doc
</screen>

<listitem><para><command>$ git clone git://github.com/NixOS/nixpkgs.git</command></para></listitem>

<listitem><para><command>$ nix-build -A manual nixpkgs/pkgs/top-level/release.nix</command></para></listitem>

<listitem><para>Inside the built derivation you shall see <literal>manual/index.html</literal> file.</para></listitem>

</orderedlist>
If the build succeeds, the manual will be in
<filename>./result/share/doc/nixpkgs/manual.html</filename>.</para>

</chapter>
@@ -6,7 +6,7 @@ stdenv.mkDerivation {

  sources = sourceFilesBySuffices ./. [".xml"];

  buildInputs = [ libxml2 libxslt ];
  buildInputs = [ pandoc libxml2 libxslt ];

  xsltFlags = ''
    --param section.autolabel 1
@@ -19,7 +19,23 @@ stdenv.mkDerivation {
  '';

  buildCommand = ''
    ln -s $sources/*.xml . # */
    # Convert the Markdown guide to DocBook and wrap it in a DocBook 5 chapter
    # element (pandoc emits DocBook 4 tags, which the sed calls rewrite).
    {
      echo "<chapter xmlns=\"http://docbook.org/ns/docbook\""
      echo "         xmlns:xlink=\"http://www.w3.org/1999/xlink\""
      echo "         xml:id=\"users-guide-to-the-haskell-infrastructure\">"
      echo ""
      echo "<title>User's Guide to the Haskell Infrastructure</title>"
      echo ""
      pandoc ${./haskell-users-guide.md} -w docbook | \
        sed -e 's|<ulink url=|<link xlink:href=|' \
            -e 's|</ulink>|</link>|' \
            -e 's|<sect. id=|<section xml:id=|' \
            -e 's|</sect[0-9]>|</section>|'
      echo ""
      echo "</chapter>"
    } >haskell-users-guide.xml

    ln -s "$sources/"*.xml .

    echo ${nixpkgsVersion} > .version

@@ -36,6 +52,9 @@ stdenv.mkDerivation {

    cp ${./style.css} $dst/style.css

    mkdir -p $dst/images/callouts
    cp "${docbook5_xsl}/xml/xsl/docbook/images/callouts/"*.gif $dst/images/callouts/

    mkdir -p $out/nix-support
    echo "doc manual $dst manual.html" >> $out/nix-support/hydra-build-products
  '';
doc/functions.xml | 273 (new file)
@@ -0,0 +1,273 @@
<chapter xmlns="http://docbook.org/ns/docbook"
         xmlns:xlink="http://www.w3.org/1999/xlink"
         xml:id="chap-functions">

<title>Functions reference</title>

<para>
The nixpkgs repository has several utility functions to manipulate Nix expressions.
</para>

<section xml:id="sec-pkgs-overridePackages">
<title>pkgs.overridePackages</title>

<para>
This function inside the nixpkgs expression (<varname>pkgs</varname>)
can be used to override the set of packages itself.
</para>
<para>
Warning: this function is expensive and must not be used from within
the nixpkgs repository.
</para>
<para>
Example usage:

<programlisting>let
  pkgs = import <nixpkgs> {};
  newpkgs = pkgs.overridePackages (self: super: {
    foo = super.foo.override { ... };
  });
in ...</programlisting>
</para>

<para>
The resulting <varname>newpkgs</varname> will have the new <varname>foo</varname>
expression, and all other expressions depending on <varname>foo</varname> will also
use the new <varname>foo</varname> expression.
</para>

<para>
The behavior of this function is similar to <link
linkend="sec-modify-via-packageOverrides">config.packageOverrides</link>.
</para>

<para>
The <varname>self</varname> parameter refers to the final package set with the
applied overrides. Using this parameter may lead to infinite recursion if it is
not used with care.
</para>

<para>
The <varname>super</varname> parameter refers to the old package set.
It's equivalent to <varname>pkgs</varname> in the above example.
</para>
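
<para>
A minimal sketch of how <varname>self</varname> and <varname>super</varname> interact
(the package names <varname>foo</varname> and <varname>bar</varname> and the
<varname>enableBar</varname> argument are hypothetical, not taken from nixpkgs):
<programlisting>newpkgs = pkgs.overridePackages (self: super: {
  foo = super.foo.override { enableBar = true; };
  # bar picks up the overridden foo because it is taken from self,
  # the final (fixed-point) package set:
  bar = super.bar.override { foo = self.foo; };
});</programlisting>
</para>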

</section>

<section xml:id="sec-pkg-override">
<title><pkg>.override</title>

<para>
The function <varname>override</varname> is usually available for all the
derivations in the nixpkgs expression (<varname>pkgs</varname>).
</para>
<para>
It is used to override the arguments passed to a function.
</para>
<para>
Example usages:

<programlisting>pkgs.foo.override { arg1 = val1; arg2 = val2; ... }</programlisting>
<programlisting>pkgs.overridePackages (self: super: {
  foo = super.foo.override { barSupport = true; };
})</programlisting>
<programlisting>mypkg = pkgs.callPackage ./mypkg.nix {
  mydep = pkgs.mydep.override { ... };
};</programlisting>
</para>

<para>
In the first example, <varname>pkgs.foo</varname> is the result of a function call
with some default arguments, usually a derivation.
Using <varname>pkgs.foo.override</varname> will call the same function with
the given new arguments.
</para>

</section>

<section xml:id="sec-pkg-overrideDerivation">
<title><pkg>.overrideDerivation</title>

<para>
The function <varname>overrideDerivation</varname> is usually available for all the
derivations in the nixpkgs expression (<varname>pkgs</varname>).
</para>
<para>
It is used to create a new derivation by overriding the attributes of
the original derivation according to the given function.
</para>

<para>
Example usage:

<programlisting>mySed = pkgs.gnused.overrideDerivation (oldAttrs: {
  name = "sed-4.2.2-pre";
  src = fetchurl {
    url = ftp://alpha.gnu.org/gnu/sed/sed-4.2.2-pre.tar.bz2;
    sha256 = "11nq06d131y4wmf3drm0yk502d2xc6n5qy82cg88rb9nqd2lj41k";
  };
  patches = [];
});</programlisting>
</para>

<para>
In the above example, the name, src and patches of the derivation
will be overridden, while all other attributes will be retained from the
original derivation.
</para>

<para>
The argument <varname>oldAttrs</varname> is used to refer to the attribute set of
the original derivation.
</para>
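
<para>
For example, existing attributes can be extended instead of replaced by
referring to <varname>oldAttrs</varname> (a sketch; the patch file name is
hypothetical):
<programlisting>myGnused = pkgs.gnused.overrideDerivation (oldAttrs: {
  # keep whatever patches the original derivation carried and append one more
  patches = (oldAttrs.patches or []) ++ [ ./fix-build.patch ];
});</programlisting>
</para>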

</section>

<section xml:id="sec-lib-makeOverridable">
<title>lib.makeOverridable</title>

<para>
The function <varname>lib.makeOverridable</varname> is used to make the result
of a function easily customizable. This utility only makes sense for functions
that accept an argument set and return an attribute set.
</para>

<para>
Example usage:

<programlisting>f = { a, b }: { result = a + b; }
c = lib.makeOverridable f { a = 1; b = 2; }</programlisting>
</para>

<para>
The variable <varname>c</varname> is the value of the <varname>f</varname> function
applied with some default arguments. Hence the value of <varname>c.result</varname>
is <literal>3</literal>, in this example.
</para>

<para>
The variable <varname>c</varname> however also has some additional functions, like
<link linkend="sec-pkg-override">c.override</link> which can be used to
override the default arguments. In this example the value of
<varname>(c.override { a = 4; }).result</varname> is 6.
</para>
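
<para>
Continuing the example above, the arithmetic works out as follows:
<programlisting>c.result                          # 1 + 2 = 3
(c.override { a = 4; }).result    # 4 + 2 = 6
(c.override { b = 10; }).result   # 1 + 10 = 11</programlisting>
</para>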

</section>

<section xml:id="sec-fhs-environments">
<title>buildFHSChrootEnv/buildFHSUserEnv</title>

<para>
<function>buildFHSChrootEnv</function> and
<function>buildFHSUserEnv</function> provide a way to build and run
FHS-compatible lightweight sandboxes. They get their own isolated root with
a bind-mounted <filename>/nix/store</filename>, so their footprint in terms of disk
space needed is quite small. This allows one to run software which is hard or
unfeasible to patch for NixOS -- 3rd-party source trees with FHS assumptions,
games distributed as tarballs, software with integrity checking and/or external
self-updated binaries.
</para>

<para>
<function>buildFHSChrootEnv</function> allows one to create persistent
environments, which can be constructed, deconstructed and entered by
multiple users at once. A downside is that it requires
<literal>root</literal> access for both those who create and destroy and
those who enter it. It can be useful to create environments for daemons that
one can enter and observe.
</para>

<para>
<function>buildFHSUserEnv</function> uses the Linux namespaces feature to create
temporary lightweight environments which are destroyed after all child
processes exit. It does not require root access, and can be useful to create
sandboxes and wrap applications.
</para>

<para>
Those functions both rely on <function>buildFHSEnv</function>, which creates
an actual directory structure given a list of necessary packages and extra
build commands.
<function>buildFHSChrootEnv</function> and <function>buildFHSUserEnv</function>
both accept the following arguments, which are passed on to
<function>buildFHSEnv</function>:
</para>

<variablelist>
<varlistentry>
<term><literal>name</literal></term>

<listitem><para>Environment name.</para></listitem>
</varlistentry>

<varlistentry>
<term><literal>targetPkgs</literal></term>

<listitem><para>Packages to be installed for the main host's architecture
(i.e. x86_64 on x86_64 installations).</para></listitem>
</varlistentry>

<varlistentry>
<term><literal>multiPkgs</literal></term>

<listitem><para>Packages to be installed for all architectures supported by
a host (i.e. i686 and x86_64 on x86_64 installations).</para></listitem>
</varlistentry>

<varlistentry>
<term><literal>extraBuildCommands</literal></term>

<listitem><para>Additional commands to be executed for finalizing the
directory structure.</para></listitem>
</varlistentry>

<varlistentry>
<term><literal>extraBuildCommandsMulti</literal></term>

<listitem><para>Like <literal>extraBuildCommands</literal>, but
executed only on multilib architectures.</para></listitem>
</varlistentry>
</variablelist>

<para>
Additionally, <function>buildFHSUserEnv</function> accepts a
<literal>runScript</literal> parameter, which is the command that will be
executed inside the sandbox and passed all the command line arguments. It
defaults to <literal>bash</literal>.
One can create a simple environment using a <literal>shell.nix</literal>
like this:
</para>

<programlisting><![CDATA[
{ pkgs ? import <nixpkgs> {} }:

(pkgs.buildFHSUserEnv {
  name = "simple-x11-env";
  targetPkgs = pkgs: (with pkgs;
    [ udev
      alsaLib
    ]) ++ (with pkgs.xorg;
    [ libX11
      libXcursor
      libXrandr
    ]);
  multiPkgs = pkgs: (with pkgs;
    [ udev
      alsaLib
    ]);
  runScript = "bash";
}).env
]]></programlisting>

<para>
Running <literal>nix-shell</literal> would then drop you into a shell with
these libraries and binaries available. You can use this to run
closed-source applications which expect FHS structure without hassles:
simply change <literal>runScript</literal> to the application path,
e.g. <filename>./bin/start.sh</filename> -- relative paths are supported.
</para>
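
<para>
For instance, in the sketch above that would mean replacing the default with
something like the following (the script path is just the example from the
previous sentence):
<programlisting>runScript = "./bin/start.sh";</programlisting>
</para>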

</section>

</chapter>
doc/haskell-users-guide.md | 698 (new file)
@@ -0,0 +1,698 @@
---
title: User's Guide for Haskell in Nixpkgs
author: Peter Simons
date: 2015-06-01
---

# How to install Haskell packages

Nixpkgs distributes build instructions for all Haskell packages registered on
[Hackage](http://hackage.haskell.org/), but strangely enough normal Nix package
lookups don't seem to discover any of them, except for the default version of ghc, cabal-install, and stack:

    $ nix-env -i alex
    error: selector ‘alex’ matches no derivations
    $ nix-env -qa ghc
    ghc-7.10.2

The Haskell package set is not registered in the top-level namespace because it
is *huge*. If all Haskell packages were visible to these commands, then
name-based search/install operations would be much slower than they are now. We
avoided that by keeping all Haskell-related packages in a separate attribute
set called `haskellPackages`, which the following command will list:

    $ nix-env -f "<nixpkgs>" -qaP -A haskellPackages
    haskellPackages.a50         a50-0.5
    haskellPackages.abacate     haskell-abacate-0.0.0.0
    haskellPackages.abcBridge   haskell-abcBridge-0.12
    haskellPackages.afv         afv-0.1.1
    haskellPackages.alex        alex-3.1.4
    haskellPackages.Allure      Allure-0.4.101.1
    haskellPackages.alms        alms-0.6.7
    [... some 8000 entries omitted ...]

To install any of those packages into your profile, refer to them by their
attribute path (first column):

    $ nix-env -f "<nixpkgs>" -iA haskellPackages.Allure ...

The attribute path of any Haskell package corresponds to the name of that
particular package on Hackage: the package `cabal-install` has the attribute
`haskellPackages.cabal-install`, and so on. (Actually, this convention causes
trouble with packages like `3dmodels` and `4Blocks`, because these names are
invalid identifiers in the Nix language. The issue of how to deal with these
rare corner cases is currently unresolved.)

Haskell packages whose Nix name (second column) begins with a `haskell-` prefix
are packages that provide a library, whereas packages without that prefix
provide just executables. Libraries may provide executables too, though: the
package `haskell-pandoc`, for example, installs both a library and an
application. You can install and use Haskell executables just like any other
program in Nixpkgs, but using Haskell libraries for development is a bit
trickier and we'll address that subject in great detail in section [How to
create a development environment].

Attribute paths are deterministic inside of Nixpkgs, but the path necessary to
reach Nixpkgs varies from system to system. We dodged that problem by giving
`nix-env` an explicit `-f "<nixpkgs>"` parameter, but if you call `nix-env`
without that flag, then chances are the invocation fails:

    $ nix-env -iA haskellPackages.cabal-install
    error: attribute ‘haskellPackages’ in selection path
           ‘haskellPackages.cabal-install’ not found

On NixOS, for example, Nixpkgs does *not* exist in the top-level namespace by
default. To figure out the proper attribute path, it's easiest to query for the
path of a well-known Nixpkgs package, i.e.:

    $ nix-env -qaP coreutils
    nixos.coreutils  coreutils-8.23

If your system responds like that (most NixOS installations will), then the
attribute path to `haskellPackages` is `nixos.haskellPackages`. Thus, if you
want to use `nix-env` without giving an explicit `-f` flag, then that's the way
to do it:

    $ nix-env -qaP -A nixos.haskellPackages
    $ nix-env -iA nixos.haskellPackages.cabal-install

Our current default compiler is GHC 7.10.x and the `haskellPackages` set
contains packages built with that particular version. Nixpkgs contains the
latest major release of every GHC since 6.10.4, however, and there is a whole
family of package sets available that defines Hackage packages built with each
of those compilers, too:

    $ nix-env -f "<nixpkgs>" -qaP -A haskell.packages.ghc6123
    $ nix-env -f "<nixpkgs>" -qaP -A haskell.packages.ghc763

The name `haskellPackages` is really just a synonym for
`haskell.packages.ghc7102`, because we prefer that package set internally and
recommend it to our users as their default choice, but ultimately you are free
to compile your Haskell packages with any GHC version you please. The following
command displays the complete list of available compilers:

    $ nix-env -f "<nixpkgs>" -qaP -A haskell.compiler
    haskell.compiler.ghc6104        ghc-6.10.4
    haskell.compiler.ghc6123        ghc-6.12.3
    haskell.compiler.ghc704         ghc-7.0.4
    haskell.compiler.ghc722         ghc-7.2.2
    haskell.compiler.ghc742         ghc-7.4.2
    haskell.compiler.ghc763         ghc-7.6.3
    haskell.compiler.ghc784         ghc-7.8.4
    haskell.compiler.ghc7102        ghc-7.10.2
    haskell.compiler.ghcHEAD        ghc-7.11.20150402
    haskell.compiler.ghcNokinds     ghc-nokinds-7.11.20150704
    haskell.compiler.ghcjs          ghcjs-0.1.0
    haskell.compiler.jhc            jhc-0.8.2
    haskell.compiler.uhc            uhc-1.1.9.0

We have no package sets for `jhc` or `uhc` yet, unfortunately, but for every
version of GHC listed above, there exists a package set based on that compiler.
Also, the attributes `haskell.compiler.ghcXYC` and
`haskell.packages.ghcXYC.ghc` are synonymous for the sake of convenience.

# How to create a development environment

## How to install a compiler

A simple development environment consists of a Haskell compiler and the tool
`cabal-install`, and we saw in section [How to install Haskell packages] how
you can install those programs into your user profile:

    $ nix-env -f "<nixpkgs>" -iA haskellPackages.ghc haskellPackages.cabal-install

Instead of the default package set `haskellPackages`, you can also use the more
precise name `haskell.compiler.ghc7102`, which has the advantage that it refers
to the same GHC version regardless of what Nixpkgs considers "default" at any
given time.

Once you've made those tools available in `$PATH`, it's possible to build
Hackage packages the same way people without access to Nix do it all the time:

    $ cabal get lens-4.11 && cd lens-4.11
    $ cabal install -j --dependencies-only
    $ cabal configure
    $ cabal build

If you enjoy working with Cabal sandboxes, then that's entirely possible too:
just execute the command

    $ cabal sandbox init

before installing the required dependencies.

The `nix-shell` utility makes it easy to switch to a different compiler
version; just enter the Nix shell environment with the command

    $ nix-shell -p haskell.compiler.ghc784

to bring GHC 7.8.4 into `$PATH`. Re-running `cabal configure` switches your
build to use that compiler instead. If you're working on a project that doesn't
depend on any additional system libraries outside of GHC, then it's sufficient
even to run the `cabal configure` command inside of the shell:

    $ nix-shell -p haskell.compiler.ghc784 --command "cabal configure"

Afterwards, all other commands like `cabal build` work just fine in any shell
environment, because the configure phase recorded the absolute paths to all
required tools like GHC in its build configuration inside of the `dist/`
directory. Please note, however, that `nix-collect-garbage` can break such an
environment because the Nix store paths created by `nix-shell` aren't "alive"
anymore once `nix-shell` has terminated. If you find that your Haskell builds
no longer work after garbage collection, then you'll have to re-run `cabal
configure` inside of a new `nix-shell` environment.

## How to install a compiler with libraries

GHC expects to find all installed libraries inside of its own `lib` directory.
This approach works fine on traditional Unix systems, but it doesn't work for
Nix, because GHC's store path is immutable once it's built. We cannot install
additional libraries into that location. As a consequence, our copies of GHC
don't know any packages except their own core libraries, like `base`,
`containers`, `Cabal`, etc.

We can register additional libraries to GHC, however, using a special build
function called `ghcWithPackages`. That function expects one argument: a
function that maps from an attribute set of Haskell packages to a list of
packages, which determines the libraries known to that particular version of
GHC. For example, the Nix expression `ghcWithPackages (pkgs: [pkgs.mtl])`
generates a copy of GHC that has the `mtl` library registered in addition to
its normal core packages:

    $ nix-shell -p "haskellPackages.ghcWithPackages (pkgs: [pkgs.mtl])"

    [nix-shell:~]$ ghc-pkg list mtl
    /nix/store/zy79...-ghc-7.10.2/lib/ghc-7.10.2/package.conf.d:
        mtl-2.2.1

This function allows users to define their own development environment by means
of an override. After adding the following snippet to `~/.nixpkgs/config.nix`,

    {
      packageOverrides = super: let self = super.pkgs; in
      {
        myHaskellEnv = self.haskell.packages.ghc7102.ghcWithPackages
                         (haskellPackages: with haskellPackages; [
                           # libraries
                           arrows async cgi criterion
                           # tools
                           cabal-install haskintex
                         ]);
      };
    }

it's possible to install that compiler with `nix-env -f "<nixpkgs>" -iA
myHaskellEnv`. If you'd like to switch that development environment to a
different version of GHC, just replace the `ghc7102` bit in the previous
definition with the appropriate name. Of course, it's also possible to define
any number of these development environments! (You can't install two of them
into the same profile at the same time, though, because that would result in
file conflicts.)
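
For instance, a sketch of the same override pinned to GHC 7.8.4 (keeping the
example package selection from above) would be:

    myHaskellEnv = self.haskell.packages.ghc784.ghcWithPackages
                     (haskellPackages: with haskellPackages; [
                       arrows async cgi criterion cabal-install haskintex
                     ]);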

The generated `ghc` program is a wrapper script that re-directs the real
GHC executable to use a new `lib` directory --- one that we specifically
constructed to contain all those packages the user requested:

    $ cat $(type -p ghc)
    #! /nix/store/xlxj...-bash-4.3-p33/bin/bash -e
    export NIX_GHC=/nix/store/19sm...-ghc-7.10.2/bin/ghc
    export NIX_GHCPKG=/nix/store/19sm...-ghc-7.10.2/bin/ghc-pkg
    export NIX_GHC_DOCDIR=/nix/store/19sm...-ghc-7.10.2/share/doc/ghc/html
    export NIX_GHC_LIBDIR=/nix/store/19sm...-ghc-7.10.2/lib/ghc-7.10.2
    exec /nix/store/j50p...-ghc-7.10.2/bin/ghc "-B$NIX_GHC_LIBDIR" "$@"

The variables `$NIX_GHC`, `$NIX_GHCPKG`, etc. point to the *new* store path
`ghcWithPackages` constructed specifically for this environment. The last line
of the wrapper script then executes the real `ghc`, but passes the path to the
new `lib` directory using GHC's `-B` flag.

The purpose of those environment variables is to work around an impurity in the
popular [ghc-paths](http://hackage.haskell.org/package/ghc-paths) library. That
library promises to give its users access to GHC's installation paths. Only,
the library can't possibly know that path when it's compiled, because the path
GHC considers its own is determined only much later, when the user configures
it through `ghcWithPackages`. So we [patched
ghc-paths](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/haskell-modules/ghc-paths-nix.patch)
to return the paths found in those environment variables at run-time rather
than trying to guess them at compile-time.

To make sure that mechanism works properly all the time, we recommend that you
set those variables to meaningful values in your shell environment, too, i.e.
by adding the following code to your `~/.bashrc`:

    if type >/dev/null 2>&1 -p ghc; then
      eval "$(egrep ^export "$(type -p ghc)")"
    fi

If you are certain that you'll use only one GHC environment which is located in
your user profile, then you can use the following code, too, which has the
advantage that it doesn't contain any paths from the Nix store, i.e. those
settings always remain valid even if a `nix-env -u` operation updates the GHC
environment in your profile:

    if [ -e ~/.nix-profile/bin/ghc ]; then
      export NIX_GHC="$HOME/.nix-profile/bin/ghc"
      export NIX_GHCPKG="$HOME/.nix-profile/bin/ghc-pkg"
      export NIX_GHC_DOCDIR="$HOME/.nix-profile/share/doc/ghc/html"
      export NIX_GHC_LIBDIR="$HOME/.nix-profile/lib/ghc-$($NIX_GHC --numeric-version)"
    fi

## How to install a compiler with libraries, hoogle and documentation indexes

If you plan to use your environment for interactive programming, not just
compiling random Haskell code, you might want to replace `ghcWithPackages` in
all the listings above with `ghcWithHoogle`.

This environment generator not only produces an environment with GHC and all
the specified libraries, but also generates `hoogle` and `haddock` indexes
for all the packages, and provides a wrapper script around the `hoogle` binary
that uses all those things. A precise name for this thing would be
"`ghcWithPackagesAndHoogleAndDocumentationIndexes`", which is, regrettably, too
long and scary.

For example, installing the following environment

    {
      packageOverrides = super: let self = super.pkgs; in
      {
        myHaskellEnv = self.haskellPackages.ghcWithHoogle
                         (haskellPackages: with haskellPackages; [
                           # libraries
                           arrows async cgi criterion
                           # tools
                           cabal-install haskintex
                         ]);
      };
    }

allows one to browse a module documentation index [not too dissimilar to
this](https://downloads.haskell.org/~ghc/latest/docs/html/libraries/index.html)
for all the specified packages and their dependencies by directing a browser of
choice to `~/.nix-profile/share/doc/hoogle/index.html` (or
`/run/current-system/sw/share/doc/hoogle/index.html` in case you put it in
`environment.systemPackages` in NixOS).

After you've marveled enough at that, try adding the following to your
`~/.ghc/ghci.conf`:

    :def hoogle \s -> return $ ":! hoogle search -cl --count=15 \"" ++ s ++ "\""
    :def doc \s -> return $ ":! hoogle search -cl --info \"" ++ s ++ "\""

and test it by typing into `ghci`:

    :hoogle a -> a
    :doc a -> a

Be sure to note the links to `haddock` files in the output. With any modern and
properly configured terminal emulator you can just click those links to
navigate there.

Finally, you can run

    hoogle server -p 8080

and navigate to http://localhost:8080/ for your own local
[Hoogle](https://www.haskell.org/hoogle/). Note, however, that Firefox and
possibly other browsers disallow navigation from `http:` to `file:` URIs for
security reasons, which might be quite an inconvenience. See [this
page](http://kb.mozillazine.org/Links_to_local_pages_do_not_work) for
workarounds.


## How to create ad hoc environments for `nix-shell`

The easiest way to create an ad hoc development environment is to run
`nix-shell` with the appropriate GHC environment given on the command-line:

    nix-shell -p "haskellPackages.ghcWithPackages (pkgs: with pkgs; [mtl pandoc])"

For more sophisticated use-cases, however, it's more convenient to save the
desired configuration in a file called `shell.nix` that looks like this:

    { nixpkgs ? import <nixpkgs> {}, compiler ? "ghc7102" }:
    let
      inherit (nixpkgs) pkgs;
      ghc = pkgs.haskell.packages.${compiler}.ghcWithPackages (ps: with ps; [
              monad-par mtl
            ]);
    in
    pkgs.stdenv.mkDerivation {
      name = "my-haskell-env-0";
      buildInputs = [ ghc ];
      shellHook = "eval $(egrep ^export ${ghc}/bin/ghc)";
    }

Now run `nix-shell` --- or even `nix-shell --pure` --- to enter a shell
environment that has the appropriate compiler in `$PATH`. If you use `--pure`,
then add all other packages that your development environment needs into the
`buildInputs` attribute. If you'd like to switch to a different compiler
version, then pass an appropriate `compiler` argument to the expression, i.e.
`nix-shell --argstr compiler ghc784`.

If you need such an environment because you'd like to compile a Hackage package
outside of Nix --- i.e. because you're hacking on the latest version from Git
---, then the package set provides suitable nix-shell environments for you
already! Every Haskell package has an `env` attribute that provides a shell
environment suitable for compiling that particular package. If you'd like to
hack the `lens` library, for example, then you just have to check out the
source code and enter the appropriate environment:

    $ cabal get lens-4.11 && cd lens-4.11
    Downloading lens-4.11...
    Unpacking to lens-4.11/

    $ nix-shell "<nixpkgs>" -A haskellPackages.lens.env
    [nix-shell:/tmp/lens-4.11]$

At this point, you can run `cabal configure`, `cabal build`, and all the other
development commands. Note that you need `cabal-install` installed in your
`$PATH` already to use it here --- the `nix-shell` environment does not provide
it.

# How to create Nix builds for your own private Haskell packages

If your own Haskell packages have build instructions for Cabal, then you can
convert those automatically into build instructions for Nix using the
`cabal2nix` utility, which you can install into your profile by running
`nix-env -i cabal2nix`.

## How to build a stand-alone project

For example, let's assume that you're working on a private project called
`foo`. To generate a Nix build expression for it, change into the project's
top-level directory and run the command:

    $ cabal2nix . >foo.nix

Then write the following snippet into a file called `default.nix`:

    { nixpkgs ? import <nixpkgs> {}, compiler ? "ghc7102" }:
    nixpkgs.pkgs.haskell.packages.${compiler}.callPackage ./foo.nix { }

Finally, store the following code in a file called `shell.nix`:

    { nixpkgs ? import <nixpkgs> {}, compiler ? "ghc7102" }:
    (import ./default.nix { inherit nixpkgs compiler; }).env

At this point, you can run `nix-build` to have Nix compile your project and
install it into a Nix store path. The local directory will contain a symlink
called `result` after `nix-build` returns that points into that location. Of
course, passing the flag `--argstr compiler ghc763` allows switching the build
to any version of GHC currently supported.

Furthermore, you can call `nix-shell` to enter an interactive development
environment in which you can use `cabal configure` and `cabal build` to develop
your code. That environment will automatically contain a proper GHC derivation
with all the required libraries registered as well as all the system-level
libraries your package might need.

If your package does not depend on any system-level libraries, then it's
sufficient to run

    $ nix-shell --command "cabal configure"

once to set up your build. `cabal-install` determines the absolute paths to all
resources required for the build and writes them into a config file in the
`dist/` directory. Once that's done, you can run `cabal build` and any other
command for that project even outside of the `nix-shell` environment. This
feature is particularly nice for those of us who like to edit their code with
an IDE, like Emacs' `haskell-mode`, because it's not necessary to start Emacs
inside of nix-shell just to make it find out the necessary settings for
building the project; `cabal-install` has already done that for us.

If you want to do some quick-and-dirty hacking and don't want to bother setting
up a `default.nix` and `shell.nix` file manually, then you can use the
`--shell` flag offered by `cabal2nix` to have it generate a stand-alone
`nix-shell` environment for you. With that feature, running

    $ cabal2nix --shell . >shell.nix
    $ nix-shell --command "cabal configure"

is usually enough to set up a build environment for any given Haskell package.
You can even use that generated file to run `nix-build`, too:

    $ nix-build shell.nix

## How to build projects that depend on each other

If you have multiple private Haskell packages that depend on each other, then
you'll have to register those packages in the Nixpkgs set to make them visible
for the dependency resolution performed by `callPackage`. First of all, change
into each of your projects' top-level directories and generate a `default.nix`
file with `cabal2nix`:

    $ cd ~/src/foo && cabal2nix . >default.nix
    $ cd ~/src/bar && cabal2nix . >default.nix

Then edit your `~/.nixpkgs/config.nix` file to register those builds in the
default Haskell package set:

    {
      packageOverrides = super: let self = super.pkgs; in
      {
        haskellPackages = super.haskellPackages.override {
          overrides = self: super: {
            foo = self.callPackage ../src/foo {};
            bar = self.callPackage ../src/bar {};
          };
        };
      };
    }

Once that's accomplished, `nix-env -f "<nixpkgs>" -qA haskellPackages` will
show your packages like any other package from Hackage, and you can build them

    $ nix-build "<nixpkgs>" -A haskellPackages.foo

or enter an interactive shell environment suitable for building them:

    $ nix-shell "<nixpkgs>" -A haskellPackages.bar.env

# Miscellaneous Topics

## How to build with profiling enabled

Every Haskell package set takes a function called `overrides` that you can use
to manipulate the package set as much as you please. One useful application of
this feature is to replace the default `mkDerivation` function with one that
enables library profiling for all packages. To accomplish that, add the
following snippet to your `~/.nixpkgs/config.nix` file:

    {
      packageOverrides = super: let self = super.pkgs; in
      {
        profiledHaskellPackages = self.haskellPackages.override {
          overrides = self: super: {
            mkDerivation = args: super.mkDerivation (args // {
              enableLibraryProfiling = true;
            });
          };
        };
      };
    }

Then, replace instances of `haskellPackages` in the `cabal2nix`-generated
`default.nix` or `shell.nix` files with `profiledHaskellPackages`.
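
As a rough sketch, the hand-written `default.nix` from the stand-alone project
section above (with the hypothetical package `foo`) would then become:

    { nixpkgs ? import <nixpkgs> {} }:
    nixpkgs.pkgs.profiledHaskellPackages.callPackage ./foo.nix { }

Note that the `compiler` argument is dropped here, because the
`profiledHaskellPackages` set defined above is always based on the default
compiler.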

## How to override package versions in a compiler-specific package set

Nixpkgs provides the latest version of
[`ghc-events`](http://hackage.haskell.org/package/ghc-events), which is 0.4.4.0
at the time of this writing. This is fine for users of GHC 7.10.x, but GHC
7.8.4 cannot compile that binary. Now, one way to solve that problem is to
register an older version of `ghc-events` in the 7.8.x-specific package set.
The first step is to generate Nix build instructions with `cabal2nix`:

    $ cabal2nix cabal://ghc-events-0.4.3.0 >~/.nixpkgs/ghc-events-0.4.3.0.nix

Then add the override in `~/.nixpkgs/config.nix`:

    {
      packageOverrides = super: let self = super.pkgs; in
      {
        haskell = super.haskell // {
          packages = super.haskell.packages // {
            ghc784 = super.haskell.packages.ghc784.override {
              overrides = self: super: {
                ghc-events = self.callPackage ./ghc-events-0.4.3.0.nix {};
              };
            };
          };
        };
      };
    }

This code is a little crazy, no doubt, but it's necessary because the intuitive
version

    haskell.packages.ghc784 = super.haskell.packages.ghc784.override {
      overrides = self: super: {
        ghc-events = self.callPackage ./ghc-events-0.4.3.0.nix {};
      };
    };

doesn't do what we want it to: that code replaces the `haskell` package set in
Nixpkgs with one that contains only one entry, `packages`, which contains only
one entry, `ghc784`. This override loses the `haskell.compiler` set, and it
loses the `haskell.packages.ghcXYZ` sets for all compilers but GHC 7.8.4. To
avoid that problem, we have to perform the convoluted little dance from above,
iterating over each step in the hierarchy.

Once it's accomplished, however, we can install a variant of `ghc-events`
that's compiled with GHC 7.8.4:

    nix-env -f "<nixpkgs>" -iA haskell.packages.ghc784.ghc-events

Unfortunately, it turns out that this build fails again while executing the
test suite! Apparently, the release archive on Hackage is missing some data
files that the test suite requires, so we cannot run it. We can disable the
test suite by re-generating the Nix expression with the `--no-check` flag:

    $ cabal2nix --no-check cabal://ghc-events-0.4.3.0 >~/.nixpkgs/ghc-events-0.4.3.0.nix

Now the build succeeds.

Of course, in the concrete example of `ghc-events` this whole exercise is not
an ideal solution, because `ghc-events` can analyze the output emitted by any
version of GHC later than 6.12 regardless of the compiler version that was used
to build the `ghc-events` executable, so strictly speaking there's no reason to
prefer one built with GHC 7.8.x in the first place. However, for users who
cannot use GHC 7.10.x at all for some reason, the approach of downgrading to an
older version might be useful.

## How to recover from GHC's infamous non-deterministic library ID bug

GHC and distributed build farms don't get along well:

    https://ghc.haskell.org/trac/ghc/ticket/4012

When you see an error like this one

    package foo-0.7.1.0 is broken due to missing package
    text-1.2.0.4-98506efb1b9ada233bb5c2b2db516d91

then you have to download and re-install `foo` and all its dependents from
scratch:

    # nix-store -q --referrers /nix/store/*-haskell-text-1.2.0.4 \
        | xargs -L 1 nix-store --repair-path --option binary-caches http://hydra.nixos.org

If you're using additional Hydra servers other than `hydra.nixos.org`, then it
might be necessary to purge the local caches that store data from those
machines to disable these binary channels for the duration of the previous
command, i.e. by running:

    rm /nix/var/nix/binary-cache-v3.sqlite
    rm /nix/var/nix/manifests/*
    rm /nix/var/nix/channel-cache/*

## Builds on Darwin fail with `math.h` not found

Users of GHC on Darwin have occasionally reported that builds fail, because the
compiler complains about a missing include file:

    fatal error: 'math.h' file not found

The issue has been discussed at length in [ticket
6390](https://github.com/NixOS/nixpkgs/issues/6390), and so far no good
solution has been proposed. As a work-around, users who run into this problem
can configure the environment variables

    export NIX_CFLAGS_COMPILE="-idirafter /usr/include"
    export NIX_CFLAGS_LINK="-L/usr/lib"

in their `~/.bashrc` file to avoid the compiler error.

## Using Stack together with Nix

    --  While building package zlib-0.5.4.2 using:
    runhaskell -package=Cabal-1.22.4.0 -clear-package-db [... lots of flags ...]
    Process exited with code: ExitFailure 1
    Logs have been written to: /home/foo/src/stack-ide/.stack-work/logs/zlib-0.5.4.2.log

    Configuring zlib-0.5.4.2...
    Setup.hs: Missing dependency on a foreign library:
    * Missing (or bad) header file: zlib.h
    This problem can usually be solved by installing the system package that
    provides this library (you may need the "-dev" version). If the library is
    already installed but in a non-standard location then you can use the flags
    --extra-include-dirs= and --extra-lib-dirs= to specify where it is.
    If the header file does exist, it may contain errors that are caught by the C
    compiler at the preprocessing stage. In this case you can re-run configure
    with the verbosity flag -v3 to see the error messages.

When you run the build inside of the nix-shell environment, the system
is configured to find libz.so without any special flags -- the compiler
and linker "just know" how to find it. Consequently, Cabal won't record
any search paths for libz.so in the package description, which means
that the package works fine inside of nix-shell, but once you leave the
shell the shared object can no longer be found. That issue is by no
means specific to Stack: you'll have that problem with any other
Haskell package that's built inside of nix-shell but run outside of that
environment.
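
For reference, a minimal `shell.nix` sketch that makes zlib visible to such a
build inside of nix-shell could look like this (the exact package selection is
an assumption, not part of the report above):

    { nixpkgs ? import <nixpkgs> {} }:
    nixpkgs.pkgs.stdenv.mkDerivation {
      name = "zlib-dev-env";
      # having zlib in buildInputs is what lets the compiler and linker
      # "just know" where zlib.h and libz.so live inside the shell
      buildInputs = with nixpkgs.pkgs;
        [ zlib haskellPackages.ghc haskellPackages.cabal-install ];
    }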

I suppose we could try to remedy the issue by wrapping `stack` or
`cabal` with a script that tries to find those kinds of implicit search
paths and makes them explicit on the "cabal configure" command line. I
don't think anyone is working on that subject yet, though, because the
problem doesn't seem so bad in practice.

You can remedy that issue in several ways. First of all, run

    $ nix-build --no-out-link "<nixpkgs>" -A zlib
    /nix/store/alsvwzkiw4b7ip38l4nlfjijdvg3fvzn-zlib-1.2.8

to find out the store path of the system's zlib library. Now, you can

1) add that path (plus a "/lib" suffix) to your $LD_LIBRARY_PATH
   environment variable to make sure your system linker finds libz.so
   automatically. It's no pretty solution, but it will work.

2) As a variant of (1), you can also install any number of system
   libraries into your user's profile (or some other profile) and point
   $LD_LIBRARY_PATH to that profile instead, so that you don't have to
   list dozens of those store paths all over the place.

3) The solution I prefer is to call stack with an appropriate
   --extra-lib-dirs flag like so:

       $ stack --extra-lib-dirs=/nix/store/alsvwzkiw4b7ip38l4nlfjijdvg3fvzn-zlib-1.2.8/lib build

   Typically, you'll need --extra-include-dirs as well. It's possible
   to add those flags to the project's "stack.yaml" or your user's
   global "~/.stack/global/stack.yaml" file so that you don't have to
   specify them manually every time.

The same thing applies to `cabal configure`, of course, if you're
building with `cabal-install` instead of Stack.


# Other resources

- The YouTube video [Nix Loves Haskell](https://www.youtube.com/watch?v=BsBhi_r-OeE)
  provides an introduction into Haskell NG aimed at beginners. The slides are
  available at http://cryp.to/nixos-meetup-3-slides.pdf and also -- in a form
  ready for cut & paste -- at
  https://github.com/NixOS/cabal2nix/blob/master/doc/nixos-meetup-3-slides.md.

- Another YouTube video is [Escaping Cabal Hell with Nix](https://www.youtube.com/watch?v=mQd3s57n_2Y),
  which discusses the subject of Haskell development with Nix but also provides
  a basic introduction to Nix as well, i.e. it's suitable for viewers with
  almost no prior Nix experience.

- Oliver Charles wrote a very nice [Tutorial how to develop Haskell packages with Nix](http://wiki.ocharles.org.uk/Nix).

- The *Journey into the Haskell NG infrastructure* series of postings
  describes the new Haskell infrastructure in great detail:

    - [Part 1](http://lists.science.uu.nl/pipermail/nix-dev/2015-January/015591.html)
      explains the differences between the old and the new code and gives
      instructions on how to migrate to the new setup.

    - [Part 2](http://lists.science.uu.nl/pipermail/nix-dev/2015-January/015608.html)
      looks in-depth at how to tweak and configure your setup by means of
      overrides.

    - [Part 3](http://lists.science.uu.nl/pipermail/nix-dev/2015-April/016912.html)
      describes the infrastructure that keeps the Haskell package set in Nixpkgs
      up-to-date.
@ -1,3 +1,4 @@
|
||||
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xml:id="chap-language-support">
|
||||
@ -13,7 +14,7 @@ in Nixpkgs to easily build packages for other programming languages,
|
||||
such as Perl or Haskell. These are described in this chapter.</para>
|
||||
|
||||
|
||||
<section xml:id="ssec-language-perl"><title>Perl</title>
|
||||
<section xml:id="sec-language-perl"><title>Perl</title>
|
||||
|
||||
<para>Nixpkgs provides a function <varname>buildPerlPackage</varname>,
|
||||
a generic package builder function for any Perl package that has a
|
||||
@ -151,7 +152,7 @@ ClassC3Componentised = buildPerlPackage rec {
|
||||
|
||||
</para>
|
||||
|
||||
<section><title>Generation from CPAN</title>
|
||||
<section xml:id="ssec-generation-from-CPAN"><title>Generation from CPAN</title>
|
||||
|
||||
<para>Nix expressions for Perl packages can be generated (almost)
|
||||
automatically from CPAN. This is done by the program
|
||||
@ -191,7 +192,7 @@ you need it.</para>
|
||||
</section>
|
||||
|
||||
|
||||
<section xml:id="python"><title>Python</title>
|
||||
<section xml:id="sec-python"><title>Python</title>
|
||||
|
||||
<para>
|
||||
Currently supported interpreters are <varname>python26</varname>, <varname>python27</varname>,
|
||||
@ -245,44 +246,44 @@ are provided with all modules included.</para>
|
||||
Name of the folder in <literal>${python}/lib/</literal> for the corresponding interpreter.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>interpreter</varname></term>
|
||||
<listitem><para>
|
||||
Alias for <literal>${python}/bin/${executable}</literal>.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>buildEnv</varname></term>
|
||||
<listitem><para>
|
||||
Function to build python interpreter environments with extra packages bundled together.
|
||||
See <xref linkend="python-build-env" /> for usage and documentation.
|
||||
See <xref linkend="ssec-python-build-env" /> for usage and documentation.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>sitePackages</varname></term>
|
||||
<listitem><para>
|
||||
Alias for <literal>lib/${libPrefix}/site-packages</literal>.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>executable</varname></term>
|
||||
<listitem><para>
|
||||
Name of the interpreter executable, e.g. <literal>python3.4</literal>.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
</variablelist>
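<para>
As an illustration (a hypothetical package, not one from Nixpkgs), these
attributes can be used inside a derivation, e.g. to run the interpreter
during the build and to install a file into the interpreter's
site-packages directory:

<programlisting language="nix">
{ stdenv, python }:

stdenv.mkDerivation {
  name = "python-attrs-example-0.1";
  unpackPhase = "true";  # no sources are needed for this sketch
  installPhase = ''
    ${python.interpreter} -c 'print("hello from the build")'
    mkdir -p $out/${python.sitePackages}
    touch $out/${python.sitePackages}/example.py
  '';
}
</programlisting>
</para>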
|
||||
<section xml:id="build-python-package"><title><varname>buildPythonPackage</varname> function</title>
|
||||
|
||||
<section xml:id="ssec-build-python-package"><title><varname>buildPythonPackage</varname> function</title>
|
||||
|
||||
<para>
|
||||
The function is implemented in <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/python-modules/generic/default.nix">
|
||||
<filename>pkgs/development/python-modules/generic/default.nix</filename></link>.
|
||||
Example usage:
|
||||
|
||||
|
||||
<programlisting language="nix">
|
||||
twisted = buildPythonPackage {
|
||||
name = "twisted-8.1.0";
|
||||
@ -308,27 +309,27 @@ twisted = buildPythonPackage {
|
||||
<varname>python27Packages</varname>, <varname>python32Packages</varname>, <varname>python33Packages</varname>,
|
||||
<varname>python34Packages</varname> and <varname>pypyPackages</varname>.
|
||||
</para>
|
||||
|
||||
|
||||
<para>
|
||||
<function>buildPythonPackage</function> mainly does four things:
|
||||
|
||||
|
||||
<orderedlist>
|
||||
<listitem><para>
|
||||
In the <varname>configurePhase</varname>, it patches
|
||||
<literal>setup.py</literal> to always include setuptools before
|
||||
distutils, so that the setuptools monkeypatching machinery takes effect.
|
||||
</para></listitem>
|
||||
|
||||
|
||||
<listitem><para>
|
||||
In the <varname>buildPhase</varname>, it calls
|
||||
In the <varname>buildPhase</varname>, it calls
|
||||
<literal>${python.interpreter} setup.py build ...</literal>
|
||||
</para></listitem>
|
||||
|
||||
|
||||
<listitem><para>
|
||||
In the <varname>installPhase</varname>, it calls
|
||||
In the <varname>installPhase</varname>, it calls
|
||||
<literal>${python.interpreter} setup.py install ...</literal>
|
||||
</para></listitem>
|
||||
|
||||
|
||||
<listitem><para>
|
||||
In the <varname>postFixup</varname> phase, <literal>wrapPythonPrograms</literal>
|
||||
bash function is called to wrap all programs in <filename>$out/bin/*</filename>
|
||||
@ -337,23 +338,30 @@ twisted = buildPythonPackage {
|
||||
</para></listitem>
|
||||
</orderedlist>
|
||||
</para>
|
||||
|
||||
<para>By default <varname>doCheck = true</varname> is set and tests are run with
|
||||
|
||||
<para>By default <varname>doCheck = true</varname> is set and tests are run with
|
||||
<literal>${python.interpreter} setup.py test</literal> command in <varname>checkPhase</varname>.</para>
|
||||
|
||||
<para><varname>propagatedBuildInputs</varname> packages are propagated to the user environment.</para>
|
||||
|
||||
|
||||
<para>
|
||||
As in Perl, dependencies on other Python packages can be specified in the
|
||||
<varname>buildInputs</varname> and
|
||||
<varname>propagatedBuildInputs</varname> attributes. If something is
|
||||
exclusively a build-time dependency, use
|
||||
<varname>buildInputs</varname>; if it’s (also) a runtime dependency,
|
||||
use <varname>propagatedBuildInputs</varname>.
|
||||
</para>
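<para>
For example (a hypothetical package, written as it would appear in
<filename>pkgs/top-level/python-packages.nix</filename>, where
<varname>self</varname> and <varname>pkgs</varname> are in scope; the
name, URL and hash are placeholders), a library that needs
<literal>nose</literal> only to run its test suite but imports
<literal>requests2</literal> at runtime could be written as:

<programlisting language="nix">
bar = buildPythonPackage rec {
  name = "bar-0.2";

  src = pkgs.fetchurl {
    url = "mirror://pypi/b/bar/${name}.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };

  # needed only to build and run the test suite
  buildInputs = with self; [ nose ];

  # imported at runtime, hence propagated to the user environment
  propagatedBuildInputs = with self; [ requests2 ];
};
</programlisting>
</para>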
|
||||
|
||||
<para>
|
||||
By default <varname>meta.platforms</varname> is set to the same value
|
||||
as the interpreter unless overridden.
|
||||
</para>
|
||||
|
||||
|
||||
<variablelist>
|
||||
<title>
|
||||
<varname>buildPythonPackage</varname> parameters
|
||||
(all parameters from <varname>mkDerivation</varname> function are still supported)
|
||||
</title>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>namePrefix</varname></term>
|
||||
<listitem><para>
|
||||
@ -363,7 +371,7 @@ twisted = buildPythonPackage {
|
||||
if you're packaging an application or a command line tool.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>disabled</varname></term>
|
||||
<listitem><para>
|
||||
@ -373,21 +381,21 @@ twisted = buildPythonPackage {
|
||||
for examples.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>setupPyInstallFlags</varname></term>
|
||||
<listitem><para>
|
||||
List of flags passed to <command>setup.py install</command> command.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>setupPyBuildFlags</varname></term>
|
||||
<listitem><para>
|
||||
List of flags passed to <command>setup.py build</command> command.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>pythonPath</varname></term>
|
||||
<listitem><para>
|
||||
@ -396,21 +404,21 @@ twisted = buildPythonPackage {
|
||||
(contrary to <varname>propagatedBuildInputs</varname>).
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>preShellHook</varname></term>
|
||||
<listitem><para>
|
||||
Hook to execute commands before <varname>shellHook</varname>.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>postShellHook</varname></term>
|
||||
<listitem><para>
|
||||
Hook to execute commands after <varname>shellHook</varname>.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>distutilsExtraCfg</varname></term>
|
||||
<listitem><para>
|
||||
@ -419,15 +427,29 @@ twisted = buildPythonPackage {
|
||||
configuration).
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>makeWrapperArgs</varname></term>
|
||||
<listitem><para>
|
||||
A list of strings. Arguments to be passed to
|
||||
<varname>makeWrapper</varname>, which wraps generated binaries. By
|
||||
default, the arguments to <varname>makeWrapper</varname> set
|
||||
<varname>PATH</varname> and <varname>PYTHONPATH</varname> environment
|
||||
variables before calling the binary. Additional arguments here can
|
||||
allow a developer to set environment variables which will be
|
||||
available when the binary is run. For example,
|
||||
<varname>makeWrapperArgs = ["--set FOO BAR" "--set BAZ QUX"]</varname>.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
</variablelist>
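<para>
A hypothetical package combining several of the parameters above (the
name, hash and flags are placeholders, not a real PyPI package):

<programlisting language="nix">
foo = buildPythonPackage rec {
  name = "foo-1.0";

  src = pkgs.fetchurl {
    url = "mirror://pypi/f/foo/${name}.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };

  # do not build this package on Python 3 interpreters
  disabled = isPy3k;

  # extra flags for `setup.py build`
  setupPyBuildFlags = [ "--some-build-flag" ];

  # export FOO_HOME in the wrapper around the installed executables
  makeWrapperArgs = [ "--set FOO_HOME $out/share/foo" ];
};
</programlisting>
</para>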
|
||||
|
||||
|
||||
</section>
|
||||
|
||||
<section xml:id="python-build-env"><title><function>python.buildEnv</function> function</title>
|
||||
<section xml:id="ssec-python-build-env"><title><function>python.buildEnv</function> function</title>
|
||||
<para>
|
||||
Create Python environments using low-level <function>pkgs.buildEnv</function> function. Example <filename>default.nix</filename>:
|
||||
|
||||
|
||||
<programlisting language="nix">
|
||||
<![CDATA[with import <nixpkgs> {};
|
||||
|
||||
@ -436,31 +458,52 @@ python.buildEnv.override {
|
||||
ignoreCollisions = true;
|
||||
}]]>
|
||||
</programlisting>
|
||||
|
||||
|
||||
Running <command>nix-build</command> will create
|
||||
<filename>/nix/store/cf1xhjwzmdki7fasgr4kz6di72ykicl5-python-2.7.8-env</filename>
|
||||
with wrapped binaries in <filename>bin/</filename>.
|
||||
</para>
|
||||
|
||||
|
||||
<para>
|
||||
You can also use the <varname>env</varname> attribute to create local
|
||||
environments with needed packages installed (somewhat comparable to
|
||||
<literal>virtualenv</literal>). For example, with the following
|
||||
<filename>shell.nix</filename>:
|
||||
|
||||
<programlisting language="nix">
|
||||
<![CDATA[with import <nixpkgs> {};
|
||||
|
||||
(python3.buildEnv.override {
|
||||
extraLibs = with python3Packages;
|
||||
[ numpy
|
||||
requests
|
||||
];
|
||||
}).env]]>
|
||||
</programlisting>
|
||||
|
||||
Running <command>nix-shell</command> will drop you into a shell where
|
||||
<command>python</command> will have the specified packages in its path.
|
||||
</para>
|
||||
|
||||
<variablelist>
|
||||
<title>
|
||||
<function>python.buildEnv</function> arguments
|
||||
</title>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>extraLibs</varname></term>
|
||||
<listitem><para>
|
||||
List of packages installed inside the environment.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>postBuild</varname></term>
|
||||
<listitem><para>
|
||||
Shell command executed after the environment is built.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>ignoreCollisions</varname></term>
|
||||
<listitem><para>
|
||||
@ -470,7 +513,7 @@ python.buildEnv.override {
|
||||
</variablelist>
|
||||
</section>
|
||||
|
||||
<section xml:id="python-tools"><title>Tools</title>
|
||||
<section xml:id="ssec-python-tools"><title>Tools</title>
|
||||
|
||||
<para>Packages inside nixpkgs are written by hand. However many tools
|
||||
exist in community to help save time. No tool is preferred at the moment.
|
||||
@ -497,20 +540,20 @@ exist in community to help save time. No tool is preferred at the moment.
|
||||
|
||||
</section>
|
||||
|
||||
<section xml:id="python-development"><title>Development</title>
|
||||
<section xml:id="ssec-python-development"><title>Development</title>
|
||||
|
||||
<para>
|
||||
To develop Python packages, <function>buildPythonPackage</function> has
|
||||
additional logic inside <varname>shellPhase</varname> to run
|
||||
<command>${python.interpreter} setup.py develop</command> for the package.
|
||||
</para>
|
||||
|
||||
|
||||
<warning><para><varname>shellPhase</varname> is executed only if <filename>setup.py</filename>
|
||||
exists.</para></warning>
|
||||
|
||||
|
||||
<para>
|
||||
Given a <filename>default.nix</filename>:
|
||||
|
||||
|
||||
<programlisting language="nix">
|
||||
<![CDATA[with import <nixpkgs> {};
|
||||
|
||||
@ -522,18 +565,18 @@ buildPythonPackage {
|
||||
src = ./.;
|
||||
}]]>
|
||||
</programlisting>
|
||||
|
||||
|
||||
Running <command>nix-shell</command> with no arguments should give you
|
||||
the environment in which the package would be built with
|
||||
<command>nix-build</command>.
|
||||
</para>
|
||||
|
||||
|
||||
<para>
|
||||
Shortcut to set up environments with C headers/libraries and Python packages:
|
||||
|
||||
|
||||
<programlisting language="bash">$ nix-shell -p pythonPackages.pyramid zlib libjpeg git</programlisting>
|
||||
</para>
|
||||
|
||||
|
||||
<note><para>
|
||||
There is a boolean value <varname>lib.inNixShell</varname> set to
|
||||
<varname>true</varname> if nix-shell is invoked.
|
||||
@ -541,7 +584,7 @@ buildPythonPackage {
|
||||
|
||||
</section>
|
||||
|
||||
<section xml:id="python-faq"><title>FAQ</title>
|
||||
<section xml:id="ssec-python-faq"><title>FAQ</title>
|
||||
|
||||
<variablelist>
|
||||
|
||||
@ -562,18 +605,18 @@ buildPythonPackage {
|
||||
Known bug in setuptools <varname>install_data</varname> does not respect --prefix</link>. Example of
|
||||
such a package using the feature is <filename>pkgs/tools/X11/xpra/default.nix</filename>. As a workaround,
|
||||
install it as an extra <varname>preInstall</varname> step:
|
||||
|
||||
|
||||
<programlisting>${python.interpreter} setup.py install_data --install-dir=$out --root=$out
|
||||
sed -i '/ = data_files/d' setup.py</programlisting>
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
|
||||
<varlistentry>
|
||||
<term>Rationale of non-existent global site-packages</term>
|
||||
<listitem><para>
|
||||
There is no need to have global site-packages in Nix. Each package has an isolated
|
||||
dependency tree, and installing any Python package will only populate <varname>$PATH</varname>
|
||||
inside user environment. See <xref linkend="python-build-env" /> to create self-contained
|
||||
inside user environment. See <xref linkend="ssec-python-build-env" /> to create self-contained
|
||||
interpreter with a set of packages.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
@ -583,7 +626,7 @@ sed -i '/ = data_files/d' setup.py</programlisting>
|
||||
</section>
|
||||
|
||||
|
||||
<section xml:id="python-contrib"><title>Contributing guidelines</title>
|
||||
<section xml:id="ssec-python-contrib"><title>Contributing guidelines</title>
|
||||
<para>
|
||||
The following rules should be respected:
|
||||
</para>
|
||||
@ -611,12 +654,12 @@ sed -i '/ = data_files/d' setup.py</programlisting>
|
||||
</section>
|
||||
|
||||
|
||||
<section xml:id="ssec-language-ruby"><title>Ruby</title>
|
||||
<section xml:id="sec-language-ruby"><title>Ruby</title>
|
||||
<para>There is currently support for bundling applications that are packaged as Ruby gems. The utility "bundix" allows you to write a <filename>Gemfile</filename>, let bundler create a <filename>Gemfile.lock</filename>, and then convert
|
||||
this into a nix expression that contains all Gem dependencies automatically.</para>
|
||||
|
||||
<para>For example, to package sensu, we did:</para>
|
||||
|
||||
|
||||
<screen>
|
||||
<![CDATA[$ cd pkgs/servers/monitoring
|
||||
$ mkdir sensu
|
||||
@ -624,7 +667,7 @@ $ cat > Gemfile
|
||||
source 'https://rubygems.org'
|
||||
gem 'sensu'
|
||||
$ bundler package --path /tmp/vendor/bundle
|
||||
$ $(nix-build '&nixpkgs>' -A bundix)/bin/bundix
|
||||
$ $(nix-build '<nixpkgs>' -A bundix)/bin/bundix
|
||||
$ cat > default.nix
|
||||
{ lib, bundlerEnv, ruby }:
|
||||
|
||||
@ -652,7 +695,7 @@ and scalable.";
|
||||
|
||||
</section>
|
||||
|
||||
<section xml:id="ssec-language-go"><title>Go</title>
|
||||
<section xml:id="sec-language-go"><title>Go</title>
|
||||
|
||||
<para>The function <varname>buildGoPackage</varname> builds
|
||||
standard Go packages.
|
||||
@ -662,20 +705,19 @@ standard Go packages.
|
||||
<programlisting>
|
||||
net = buildGoPackage rec {
|
||||
name = "go.net-${rev}";
|
||||
goPackagePath = "code.google.com/p/go.net"; <co xml:id='ex-buildGoPackage-1' />
|
||||
goPackagePath = "golang.org/x/net"; <co xml:id='ex-buildGoPackage-1' />
|
||||
subPackages = [ "ipv4" "ipv6" ]; <co xml:id='ex-buildGoPackage-2' />
|
||||
rev = "28ff664507e4";
|
||||
src = fetchhg {
|
||||
rev = "e0403b4e005";
|
||||
src = fetchFromGitHub {
|
||||
inherit rev;
|
||||
url = "https://${goPackagePath}";
|
||||
sha256 = "1lkz4c9pyz3yz2yz18hiycvlfhgy3jxp68bs7mv7bcfpaj729qav";
|
||||
owner = "golang";
|
||||
repo = "net";
|
||||
sha256 = "1g7cjzw4g4301a3yqpbk8n1d4s97sfby2aysl275x04g0zh8jxqp";
|
||||
};
|
||||
renameImports = [ <co xml:id='ex-buildGoPackage-3' />
|
||||
"code.google.com/p/go.crypto golang.org/x/crypto"
|
||||
"code.google.com/p/goprotobuf github.com/golang/protobuf"
|
||||
];
|
||||
goPackageAliases = [ "code.google.com/p/go.net" ]; <co xml:id='ex-buildGoPackage-3' />
|
||||
propagatedBuildInputs = [ goPackages.text ]; <co xml:id='ex-buildGoPackage-4' />
|
||||
buildFlags = "--tags release"; <co xml:id='ex-buildGoPackage-5' />
|
||||
disabled = isGo13;<co xml:id='ex-buildGoPackage-6' />
|
||||
};
|
||||
</programlisting>
|
||||
</example>
|
||||
@ -703,17 +745,18 @@ the following arguments are of special significance to the function:
|
||||
</para>
|
||||
</callout>
|
||||
|
||||
<callout arearefs='ex-buildGoPackage-4'>
|
||||
<callout arearefs='ex-buildGoPackage-3'>
|
||||
<para>
|
||||
<varname>renameImports</varname> is a list of import paths to be renamed before
|
||||
building the package. The path to be renamed can be a regular expression.
|
||||
<varname>goPackageAliases</varname> is a list of alternative import paths
|
||||
that are valid for this library.
|
||||
Packages that depend on this library will automatically rename
|
||||
import paths that match any of the aliases to <literal>goPackagePath</literal>.
|
||||
</para>
|
||||
<para>
|
||||
In this example imports will be renamed from
|
||||
<literal>code.google.com/p/go.crypto</literal> to
|
||||
<literal>golang.org/x/crypto</literal> and from
|
||||
<literal>code.google.com/p/goprotobuf</literal> to
|
||||
<literal>github.com/golang/protobuf</literal>.
|
||||
<literal>code.google.com/p/go.net</literal> to
|
||||
<literal>golang.org/x/net</literal> in every package that depends on the
|
||||
<literal>go.net</literal> library.
|
||||
</para>
|
||||
</callout>
|
||||
|
||||
@ -732,6 +775,18 @@ the following arguments are of special significance to the function:
|
||||
</para>
|
||||
</callout>
|
||||
|
||||
<callout arearefs='ex-buildGoPackage-6'>
|
||||
<para>
|
||||
If <varname>disabled</varname> is <literal>true</literal>,
|
||||
nix will refuse to build this package.
|
||||
</para>
|
||||
<para>
|
||||
In this example the package will not be built for Go 1.3. <literal>isGo13</literal>
|
||||
is a utility function that returns <literal>true</literal> if the Go compiler used to build the
|
||||
package has version 1.3.x.
|
||||
</para>
|
||||
</callout>
|
||||
|
||||
</calloutlist>
|
||||
|
||||
</para>
|
||||
@ -761,7 +816,7 @@ done
|
||||
</section>
|
||||
|
||||
|
||||
<section xml:id="ssec-language-java"><title>Java</title>
|
||||
<section xml:id="sec-language-java"><title>Java</title>
|
||||
|
||||
<para>Ant-based Java packages are typically built from source as follows:
|
||||
|
||||
@ -842,7 +897,7 @@ Runtime) instead of the OpenJRE.</para>
|
||||
</section>
|
||||
|
||||
|
||||
<section xml:id="ssec-language-lua"><title>Lua</title>
|
||||
<section xml:id="sec-language-lua"><title>Lua</title>
|
||||
|
||||
<para>
|
||||
Lua packages are built by the <varname>buildLuaPackage</varname> function. This function is
|
||||
@ -850,7 +905,7 @@ Runtime) instead of the OpenJRE.</para>
|
||||
in <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/lua-modules/generic/default.nix">
|
||||
<filename>pkgs/development/lua-modules/generic/default.nix</filename></link>
|
||||
and works similarly to <varname>buildPerlPackage</varname>. (See
|
||||
<xref linkend="ssec-language-perl"/> for details.)
|
||||
<xref linkend="sec-language-perl"/> for details.)
|
||||
</para>
|
||||
|
||||
<para>
|
||||
@ -864,7 +919,7 @@ fileSystem = buildLuaPackage {
|
||||
src = fetchurl {
|
||||
url = "https://github.com/keplerproject/luafilesystem/archive/v1_6_2.tar.gz";
|
||||
sha256 = "1n8qdwa20ypbrny99vhkmx8q04zd2jjycdb5196xdhgvqzk10abz";
|
||||
};
|
||||
};
|
||||
meta = {
|
||||
homepage = "https://github.com/keplerproject/luafilesystem";
|
||||
hydraPlatforms = stdenv.lib.platforms.linux;
|
||||
@ -875,7 +930,7 @@ fileSystem = buildLuaPackage {
|
||||
</para>
|
||||
|
||||
<para>
|
||||
More complicated packages should, however, be placed in a separate file in
|
||||
More complicated packages should, however, be placed in a separate file in
|
||||
<link
|
||||
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/lua-modules"><filename>pkgs/development/lua-modules</filename></link>.
|
||||
</para>
|
||||
@ -889,7 +944,7 @@ fileSystem = buildLuaPackage {
|
||||
|
||||
</section>
|
||||
|
||||
<section xml:id="ssec-language-coq"><title>Coq</title>
|
||||
<section xml:id="sec-language-coq"><title>Coq</title>
|
||||
<para>
|
||||
Coq libraries should be installed in
|
||||
<literal>$(out)/lib/coq/${coq.coq-version}/user-contrib/</literal>.
|
||||
@ -926,6 +981,72 @@ stdenv.mkDerivation {
|
||||
</programlisting>
|
||||
</section>
|
||||
|
||||
<section xml:id="sec-language-qt"><title>Qt</title>
|
||||
|
||||
<para>The information in this section applies to Qt 5.5 and later.</para>
|
||||
|
||||
<para>Qt is an application development toolkit for C++. Although it is
|
||||
not a distinct programming language, there are special considerations
|
||||
for packaging Qt-based programs and libraries. A small set of tools
|
||||
and conventions has grown out of these considerations.</para>
|
||||
|
||||
<section xml:id="ssec-qt-libraries"><title>Libraries</title>
|
||||
|
||||
<para>Packages that provide libraries should be listed in
|
||||
<varname>qt5LibsFun</varname> so that the library is built with each
|
||||
Qt version. A set of packages is provided for each version of Qt; for
|
||||
example, <varname>qt5Libs</varname> always provides libraries built
|
||||
with the latest version, <varname>qt55Libs</varname> provides
|
||||
libraries built with Qt 5.5, and so on. To avoid version conflicts, no
|
||||
top-level attributes are created for these packages.</para>
|
||||
|
||||
</section>
|
||||
|
||||
<section xml:id="ssec-qt-programs"><title>Programs</title>
|
||||
|
||||
<para>Application packages do not need to be built with every Qt
|
||||
version. To ensure consistency between the package's dependencies,
|
||||
call the package with <literal>qt5Libs.callPackage</literal> instead
|
||||
of the usual <literal>callPackage</literal>. An older version may be
|
||||
selected in case of incompatibility. For example, to build with Qt
|
||||
5.5, call the package with
|
||||
<literal>qt55Libs.callPackage</literal>.</para>
|
||||
|
||||
<para>Several environment variables must be set at runtime for Qt
|
||||
applications to function correctly, including:</para>
|
||||
|
||||
<itemizedlist>
|
||||
<listitem><para><envar>QT_PLUGIN_PATH</envar></para></listitem>
|
||||
<listitem><para><envar>QML_IMPORT_PATH</envar></para></listitem>
|
||||
<listitem><para><envar>QML2_IMPORT_PATH</envar></para></listitem>
|
||||
<listitem><para><envar>XDG_DATA_DIRS</envar></para></listitem>
|
||||
</itemizedlist>
|
||||
|
||||
<para>To ensure that these are set correctly, the program must be wrapped by
|
||||
invoking <literal>wrapQtProgram <replaceable>program</replaceable></literal>
|
||||
during installation (for example, during
|
||||
<literal>fixupPhase</literal>). <literal>wrapQtProgram</literal>
|
||||
accepts the same options as <literal>makeWrapper</literal>.
|
||||
</para>
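<para>
For instance, a hypothetical Qt application (the name, source URL, hash
and dependency attributes are placeholders) could be packaged roughly as
follows and then called with <literal>qt5Libs.callPackage</literal>. The
sketch assumes that the <literal>wrapQtProgram</literal> shell function
is available in the build environment via the Qt setup hook, as
described above:

<programlisting language="nix">
{ stdenv, fetchurl, qtbase }:

stdenv.mkDerivation rec {
  name = "myqtapp-1.0";

  src = fetchurl {
    url = "http://example.org/releases/${name}.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };

  buildInputs = [ qtbase ];

  # wrap the installed program so that the Qt environment variables
  # listed above are set at runtime
  postFixup = ''
    wrapQtProgram $out/bin/myqtapp
  '';
}
</programlisting>
</para>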
|
||||
|
||||
</section>
|
||||
|
||||
<section xml:id="ssec-qt-kde"><title>KDE</title>
|
||||
|
||||
<para>Many of the considerations above also apply to KDE packages,
|
||||
especially the need to set the correct environment variables at
|
||||
runtime. To ensure that this is done, invoke <literal>wrapKDEProgram
|
||||
<replaceable>program</replaceable></literal> during
|
||||
installation. <literal>wrapKDEProgram</literal> also generates a
|
||||
<literal>ksycoca</literal> database so that required data and services
|
||||
can be found. Like its Qt counterpart,
|
||||
<literal>wrapKDEProgram</literal> accepts the same options as
|
||||
<literal>makeWrapper</literal>.</para>
|
||||
|
||||
</section>
|
||||
|
||||
</section>
|
||||
|
||||
<!--
|
||||
<section><title>Haskell</title>
|
||||
|
||||
|
@ -13,10 +13,13 @@
|
||||
<xi:include href="quick-start.xml" />
|
||||
<xi:include href="stdenv.xml" />
|
||||
<xi:include href="packageconfig.xml" />
|
||||
<xi:include href="functions.xml" />
|
||||
<xi:include href="meta.xml" />
|
||||
<xi:include href="language-support.xml" />
|
||||
<xi:include href="package-notes.xml" />
|
||||
<xi:include href="coding-conventions.xml" />
|
||||
<xi:include href="submitting-changes.xml" />
|
||||
<xi:include href="haskell-users-guide.xml" />
|
||||
<xi:include href="contributing.xml" />
|
||||
|
||||
</book>
|
||||
|
44
doc/meta.xml
44
doc/meta.xml
@ -61,7 +61,7 @@ $ nix-env -qa hello --meta --json
|
||||
"i686-openbsd",
|
||||
"x86_64-openbsd"
|
||||
],
|
||||
"position": "/home/user/dev/nixpkgs/pkgs/applications/misc/hello/ex-2/default.nix:14"
|
||||
"position": "/home/user/dev/nixpkgs/pkgs/applications/misc/hello/default.nix:14"
|
||||
},
|
||||
"name": "hello-2.9",
|
||||
"system": "x86_64-linux"
|
||||
@ -82,7 +82,8 @@ hello-2.3 A program that produces a familiar, friendly greeting
|
||||
</para>
|
||||
|
||||
|
||||
<section><title>Standard meta-attributes</title>
|
||||
<section xml:id="sec-standard-meta-attributes"><title>Standard
|
||||
meta-attributes</title>
|
||||
|
||||
<para>It is expected that each meta-attribute is one of the following:</para>
|
||||
|
||||
@ -137,12 +138,39 @@ hello-2.3 A program that produces a familiar, friendly greeting
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>license</varname></term>
|
||||
<listitem><para>The license for the package. One from the
|
||||
attribute set defined in <link
|
||||
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/lib/licenses.nix">
|
||||
<filename>nixpkgs/lib/licenses.nix</filename></link>. Example:
|
||||
<literal>stdenv.lib.licenses.gpl3</literal>. For details, see
|
||||
<xref linkend='sec-meta-license'/>.</para></listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The license, or licenses, for the package. One from the attribute set
|
||||
defined in <link
|
||||
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/lib/licenses.nix">
|
||||
<filename>nixpkgs/lib/licenses.nix</filename></link>. At this moment
|
||||
using both a list of licenses and a single license is valid. If the
|
||||
license field is in the form of a list representation, then it means
|
||||
that parts of the package are licensed differently. Each license
|
||||
should preferably be referenced by their attribute. The non-list
|
||||
attribute value can also be a space delimited string representation of
|
||||
the contained attribute shortNames or spdxIds. The following are all valid
|
||||
examples:
|
||||
<itemizedlist>
|
||||
<listitem><para>Single license referenced by attribute (preferred)
|
||||
<literal>stdenv.lib.licenses.gpl3</literal>.
|
||||
</para></listitem>
|
||||
<listitem><para>Single license referenced by its attribute shortName (frowned upon)
|
||||
<literal>"gpl3"</literal>.
|
||||
</para></listitem>
|
||||
<listitem><para>Single license referenced by its attribute spdxId (frowned upon)
|
||||
<literal>"GPL-3.0"</literal>.
|
||||
</para></listitem>
|
||||
<listitem><para>Multiple licenses referenced by attribute (preferred)
|
||||
<literal>with stdenv.lib.licenses; [ asl20 free ofl ]</literal>.
|
||||
</para></listitem>
|
||||
<listitem><para>Multiple licenses referenced as a space delimited string of attribute shortNames (frowned upon)
|
||||
<literal>"asl20 free ofl"</literal>.
|
||||
</para></listitem>
|
||||
</itemizedlist>
|
||||
For details, see <xref linkend='sec-meta-license'/>.
|
||||
</para>
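<para>
Putting it together, a package's <varname>meta</varname> section might
look like this (illustrative values):

<programlisting language="nix">
meta = with stdenv.lib; {
  description = "A program that produces a familiar, friendly greeting";
  license = licenses.gpl3;
  # or, if parts of the package are licensed differently:
  # license = with licenses; [ asl20 free ofl ];
  platforms = platforms.linux;
};
</programlisting>
</para>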
|
||||
</listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
|
@ -141,7 +141,7 @@ $ make menuconfig ARCH=<replaceable>arch</replaceable></screen>
|
||||
|
||||
<!--============================================================-->
|
||||
|
||||
<section>
|
||||
<section xml:id="sec-xorg">
|
||||
|
||||
<title>X.org</title>
|
||||
|
||||
@ -219,5 +219,151 @@ you should modify
|
||||
</section>
|
||||
-->
|
||||
|
||||
<!--============================================================-->
|
||||
|
||||
<section xml:id="sec-eclipse">
|
||||
|
||||
<title>Eclipse</title>
|
||||
|
||||
<para>
|
||||
The Nix expressions related to the Eclipse platform and IDE are in
|
||||
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/editors/eclipse"><filename>pkgs/applications/editors/eclipse</filename></link>.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Nixpkgs provides a number of packages that will install Eclipse in
|
||||
its various forms, these range from the bare-bones Eclipse
|
||||
Platform to the more fully featured Eclipse SDK or Scala-IDE
|
||||
packages and multiple version are often available. It is possible
|
||||
to list available Eclipse packages by issuing the command:
|
||||
|
||||
<screen>
|
||||
$ nix-env -f '<nixpkgs>' -qaP -A eclipses --description
|
||||
</screen>
|
||||
|
||||
Once an Eclipse variant is installed, it can be run using the
|
||||
<command>eclipse</command> command, as expected. From within
|
||||
Eclipse it is then possible to install plugins in the usual manner
|
||||
by either manually specifying an Eclipse update site or by
|
||||
installing the Marketplace Client plugin and using it to discover
|
||||
and install other plugins. This installation method provides an
|
||||
Eclipse installation that closely resembles a manually installed
|
||||
Eclipse.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
If you prefer to install plugins in a more declarative manner then
|
||||
Nixpkgs also offers a number of Eclipse plugins that can be
|
||||
installed in an <emphasis>Eclipse environment</emphasis>. This
|
||||
type of environment is created using the function
|
||||
<varname>eclipseWithPlugins</varname> found inside the
|
||||
<varname>nixpkgs.eclipses</varname> attribute set. This function
|
||||
takes as argument <literal>{ eclipse, plugins ? [], jvmArgs ? []
|
||||
}</literal> where <varname>eclipse</varname> is a one of the
|
||||
Eclipse packages described above, <varname>plugins</varname> is a
|
||||
list of plugin derivations, and <varname>jvmArgs</varname> is a
|
||||
list of arguments given to the JVM running the Eclipse. For
|
||||
example, say you wish to install the latest Eclipse Platform with
|
||||
the popular Eclipse Color Theme plugin and also allow Eclipse to
|
||||
use more RAM. You could then add
|
||||
|
||||
<screen>
|
||||
packageOverrides = pkgs: {
|
||||
myEclipse = with pkgs.eclipses; eclipseWithPlugins {
|
||||
eclipse = eclipse-platform;
|
||||
jvmArgs = [ "-Xmx2048m" ];
|
||||
plugins = [ plugins.color-theme ];
|
||||
};
|
||||
}
|
||||
</screen>
|
||||
|
||||
to your Nixpkgs configuration
|
||||
(<filename>~/.nixpkgs/config.nix</filename>) and install it by
|
||||
running <command>nix-env -f '<nixpkgs>' -iA
|
||||
myEclipse</command> and afterward run Eclipse as usual. It is
|
||||
possible to find out which plugins are available for installation
|
||||
using <varname>eclipseWithPlugins</varname> by running
|
||||
|
||||
<screen>
|
||||
$ nix-env -f '<nixpkgs>' -qaP -A eclipses.plugins --description
|
||||
</screen>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
If there is a need to install plugins that are not available in
|
||||
Nixpkgs then it may be possible to define these plugins outside
|
||||
Nixpkgs using the <varname>buildEclipseUpdateSite</varname> and
|
||||
<varname>buildEclipsePlugin</varname> functions found in the
|
||||
<varname>nixpkgs.eclipses.plugins</varname> attribute set. Use the
|
||||
<varname>buildEclipseUpdateSite</varname> function to install a
|
||||
plugin distributed as an Eclipse update site. This function takes
|
||||
<literal>{ name, src }</literal> as argument where
|
||||
<literal>src</literal> indicates the Eclipse update site archive.
|
||||
All Eclipse features and plugins within the downloaded update site
|
||||
will be installed. When an update site archive is not available
|
||||
then the <varname>buildEclipsePlugin</varname> function can be
|
||||
used to install a plugin that consists of a pair of feature and
|
||||
plugin JARs. This function takes an argument <literal>{ name,
|
||||
srcFeature, srcPlugin }</literal> where
|
||||
<literal>srcFeature</literal> and <literal>srcPlugin</literal> are
|
||||
the feature and plugin JARs, respectively.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Expanding the previous example with two plugins using the above
|
||||
functions we have
|
||||
<screen>
|
||||
packageOverrides = pkgs: {
|
||||
myEclipse = with pkgs.eclipses; eclipseWithPlugins {
|
||||
eclipse = eclipse-platform;
|
||||
jvmArgs = [ "-Xmx2048m" ];
|
||||
plugins = [
|
||||
plugins.color-theme
|
||||
(plugins.buildEclipsePlugin {
|
||||
name = "myplugin1-1.0";
|
||||
srcFeature = fetchurl {
|
||||
url = "http://…/features/myplugin1.jar";
|
||||
sha256 = "123…";
|
||||
};
|
||||
srcPlugin = fetchurl {
|
||||
url = "http://…/plugins/myplugin1.jar";
|
||||
sha256 = "123…";
|
||||
};
|
||||
});
|
||||
(plugins.buildEclipseUpdateSite {
|
||||
name = "myplugin2-1.0";
|
||||
src = fetchurl {
|
||||
stripRoot = false;
|
||||
url = "http://…/myplugin2.zip";
|
||||
sha256 = "123…";
|
||||
};
|
||||
});
|
||||
];
|
||||
};
|
||||
}
|
||||
</screen>
|
||||
</para>
|
||||
|
||||
</section>
|
||||
|
||||
<section xml:id="sec-elm">
|
||||
|
||||
<title>Elm</title>
|
||||
|
||||
<para>
|
||||
The Nix expressions for Elm reside in
|
||||
<filename>pkgs/development/compilers/elm</filename>. They are generated
|
||||
automatically by the <command>update-elm.rb</command> script. One should
|
||||
specify versions of Elm packages inside the script, clear the
|
||||
<filename>packages</filename> directory and run the script from inside it.
|
||||
<literal>elm-reactor</literal> is special because it also has Elm package
|
||||
dependencies. The process is not automated very much for now -- you should
|
||||
get the <literal>elm-reactor</literal> source tree (e.g. with
|
||||
<command>nix-shell</command>) and run <command>elm2nix.rb</command> inside
|
||||
it. Place the resulting <filename>package.nix</filename> file into
|
||||
<filename>packages/elm-reactor-elm.nix</filename>.
|
||||
</para>
|
||||
|
||||
</section>
|
||||
|
||||
</chapter>
|
||||
|
@ -67,7 +67,8 @@
|
||||
<filename>lib/licenses.nix</filename> of the nix package tree.
|
||||
</para>
|
||||
|
||||
<section><title>Modify packages via <literal>packageOverrides</literal></title>
|
||||
<section xml:id="sec-modify-via-packageOverrides"><title>Modify
|
||||
packages via <literal>packageOverrides</literal></title>
|
||||
|
||||
<para>
|
||||
|
||||
|
@ -55,18 +55,18 @@ $ git add pkgs/development/libraries/libfoo/default.nix</screen>
|
||||
<itemizedlist>
|
||||
|
||||
<listitem>
|
||||
<para>GNU cpio: <link
|
||||
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/archivers/cpio/default.nix"><filename>pkgs/tools/archivers/cpio/default.nix</filename></link>.
|
||||
The simplest possible package. The generic builder in
|
||||
<varname>stdenv</varname> does everything for you. It has
|
||||
no dependencies beyond <varname>stdenv</varname>.</para>
|
||||
<para>GNU Hello: <link
|
||||
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/misc/hello/default.nix"><filename>pkgs/applications/misc/hello/default.nix</filename></link>.
|
||||
Trivial package, which specifies some <varname>meta</varname>
|
||||
attributes, which is good practice.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>GNU Hello: <link
|
||||
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/misc/hello/ex-2/default.nix"><filename>pkgs/applications/misc/hello/ex-2/default.nix</filename></link>.
|
||||
Also trivial, but it specifies some <varname>meta</varname>
|
||||
attributes which is good practice.</para>
|
||||
<para>GNU cpio: <link
|
||||
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/archivers/cpio/default.nix"><filename>pkgs/tools/archivers/cpio/default.nix</filename></link>.
|
||||
Also a simple package. The generic builder in
|
||||
<varname>stdenv</varname> does everything for you. It has
|
||||
no dependencies beyond <varname>stdenv</varname>.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
|
@ -15,7 +15,8 @@ environment does everything automatically. If
|
||||
can easily customise or override the various build phases.</para>
|
||||
|
||||
|
||||
<section><title>Using <literal>stdenv</literal></title>
|
||||
<section xml:id="sec-using-stdenv"><title>Using
|
||||
<literal>stdenv</literal></title>
|
||||
|
||||
<para>To build a package with the standard environment, you use the
|
||||
function <varname>stdenv.mkDerivation</varname>, instead of the
|
||||
@ -58,7 +59,7 @@ build. To make this easier, the standard environment breaks the
|
||||
package build into a number of <emphasis>phases</emphasis>, all of
|
||||
which can be overridden or modified individually: unpacking the
|
||||
sources, applying patches, configuring, building, and installing.
|
||||
(There are some others; see <xref linkend="ssec-stdenv-phases"/>.)
|
||||
(There are some others; see <xref linkend="sec-stdenv-phases"/>.)
|
||||
For instance, a package that doesn’t supply a makefile but instead has
|
||||
to be compiled “manually” could be handled like this:
|
||||
|
||||
@ -124,7 +125,8 @@ genericBuild
|
||||
</section>
|
||||
|
||||
|
||||
<section><title>Tools provided by <literal>stdenv</literal></title>
|
||||
<section xml:id="sec-tools-of-stdenv"><title>Tools provided by
|
||||
<literal>stdenv</literal></title>
|
||||
|
||||
<para>The standard environment provides the following packages:
|
||||
|
||||
@ -225,7 +227,7 @@ genericBuild
|
||||
</section>
|
||||
|
||||
|
||||
<section xml:id="ssec-stdenv-phases"><title>Phases</title>
|
||||
<section xml:id="sec-stdenv-phases"><title>Phases</title>
|
||||
|
||||
<para>The generic builder has a number of <emphasis>phases</emphasis>.
|
||||
Package builds are split into phases to make it easier to override
|
||||
@ -243,7 +245,8 @@ is convenient to override a phase from the derivation, while the
|
||||
latter is convenient from a build script.</para>
|
||||
|
||||
|
||||
<section><title>Controlling phases</title>
|
||||
<section xml:id="ssec-controlling-phases"><title>Controlling
|
||||
phases</title>
|
||||
|
||||
<para>There are a number of variables that control what phases are
|
||||
executed and in what order:
|
||||
@ -327,7 +330,7 @@ executed and in what order:
|
||||
</section>
|
||||
|
||||
|
||||
<section><title>The unpack phase</title>
|
||||
<section xml:id="ssec-unpack-phase"><title>The unpack phase</title>
|
||||
|
||||
<para>The unpack phase is responsible for unpacking the source code of
|
||||
the package. The default implementation of
|
||||
@ -434,7 +437,7 @@ Additional file types can be supported by setting the
|
||||
</section>
|
||||
|
||||
|
||||
<section><title>The patch phase</title>
|
||||
<section xml:id="ssec-patch-phase"><title>The patch phase</title>
|
||||
|
||||
<para>The patch phase applies the list of patches defined in the
|
||||
<varname>patches</varname> variable.</para>
|
||||
@ -477,7 +480,7 @@ Additional file types can be supported by setting the
|
||||
</section>
|
||||
|
||||
|
||||
<section><title>The configure phase</title>
|
||||
<section xml:id="ssec-configure-phase"><title>The configure phase</title>
|
||||
|
||||
<para>The configure phase prepares the source tree for building. The
|
||||
default <function>configurePhase</function> runs
|
||||
@ -513,8 +516,8 @@ script) if it exists.</para>
|
||||
<term><varname>dontAddPrefix</varname></term>
|
||||
<listitem><para>By default, the flag
|
||||
<literal>--prefix=$prefix</literal> is added to the configure
|
||||
flags. If this is undesirable, set this variable to a non-empty
|
||||
value.</para></listitem>
|
||||
flags. If this is undesirable, set this variable to
|
||||
true.</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
@ -530,8 +533,7 @@ script) if it exists.</para>
|
||||
<listitem><para>By default, the flag
|
||||
<literal>--disable-dependency-tracking</literal> is added to the
|
||||
configure flags to speed up Automake-based builds. If this is
|
||||
undesirable, set this variable to a non-empty
|
||||
value.</para></listitem>
|
||||
undesirable, set this variable to true.</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
@ -544,7 +546,16 @@ script) if it exists.</para>
|
||||
variables in the Libtool script to prevent Libtool from using
|
||||
libraries in <filename>/usr/lib</filename> and
|
||||
such.</para></footnote>. If this is undesirable, set this
|
||||
variable to a non-empty value.</para></listitem>
|
||||
variable to true.</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>dontDisableStatic</varname></term>
|
||||
<listitem><para>By default, when the configure script has
|
||||
<option>--enable-static</option>, the option
|
||||
<option>--disable-static</option> is added to the configure flags.</para>
|
||||
<para>If this is undesirable, set this variable to
|
||||
true.</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
@ -565,7 +576,7 @@ script) if it exists.</para>
|
||||
</section>
|
||||
|
||||
|
||||
<section><title>The build phase</title>
|
||||
<section xml:id="build-phase"><title>The build phase</title>
|
||||
|
||||
<para>The build phase is responsible for actually building the package
|
||||
(e.g. compiling it). The default <function>buildPhase</function>
|
||||
@ -649,7 +660,7 @@ called, respectively.</para>
|
||||
</section>
|
||||
|
||||
|
||||
<section><title>The check phase</title>
|
||||
<section xml:id="ssec-check-phase"><title>The check phase</title>
|
||||
|
||||
<para>The check phase checks whether the package was built correctly
|
||||
by running its test suite. The default
|
||||
@ -709,7 +720,7 @@ doCheck = true;</programlisting>
|
||||
</section>
|
||||
|
||||
|
||||
<section><title>The install phase</title>
|
||||
<section xml:id="ssec-install-phase"><title>The install phase</title>
|
||||
|
||||
<para>The install phase is responsible for installing the package in
|
||||
the Nix store under <envar>out</envar>. The default
|
||||
@ -764,7 +775,7 @@ installTargets = "install-bin install-doc";</programlisting>
|
||||
</section>
|
||||
|
||||
|
||||
<section><title>The fixup phase</title>
|
||||
<section xml:id="ssec-fixup-phase"><title>The fixup phase</title>
|
||||
|
||||
<para>The fixup phase performs some (Nix-specific) post-processing
|
||||
actions on the files installed under <filename>$out</filename> by the
|
||||
@ -805,6 +816,12 @@ following:
|
||||
stripped. By default, they are.</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>dontMoveSbin</varname></term>
|
||||
<listitem><para>If set, files in <filename>$out/sbin</filename> are not moved
|
||||
to <filename>$out/bin</filename>. By default, they are.</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>stripAllList</varname></term>
|
||||
<listitem><para>List of directories to search for libraries and
|
||||
@ -882,12 +899,41 @@ following:
|
||||
phase.</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>separateDebugInfo</varname></term>
|
||||
<listitem><para>If set to <literal>true</literal>, the standard
|
||||
environment will enable debug information in C/C++ builds. After
|
||||
installation, the debug information will be separated from the
|
||||
executables and stored in the output named
|
||||
<literal>debug</literal>. (This output is enabled automatically;
|
||||
you don’t need to set the <varname>outputs</varname> attribute
|
||||
explicitly.) To be precise, the debug information is stored in
|
||||
<filename><replaceable>debug</replaceable>/lib/debug/.build-id/<replaceable>XX</replaceable>/<replaceable>YYYY…</replaceable></filename>,
|
||||
where <replaceable>XXYYYY…</replaceable> is the <replaceable>build
|
||||
ID</replaceable> of the binary — a SHA-1 hash of the contents of
|
||||
the binary. Debuggers like GDB use the build ID to look up the
|
||||
separated debug information.</para>
|
||||
|
||||
<para>For example, with GDB, you can add
|
||||
|
||||
<programlisting>
|
||||
set debug-file-directory ~/.nix-profile/lib/debug
|
||||
</programlisting>
|
||||
|
||||
to <filename>~/.gdbinit</filename>. GDB will then be able to find
|
||||
debug information installed via <literal>nix-env
|
||||
-i</literal>.</para>
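
          <para>A minimal usage sketch (the package itself is a
          placeholder):

<programlisting language="nix">
stdenv.mkDerivation {
  name = "myprogram-1.0";
  src = ./.;
  # build with debug info and move it into the automatically added
  # "debug" output after installation
  separateDebugInfo = true;
}
</programlisting>
          </para>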
|
||||
|
||||
</listitem>
|
||||
</varlistentry>
|
||||
|
||||
</variablelist>
|
||||
|
||||
</section>
|
||||
|
||||
|
||||
<section><title>The distribution phase</title>
|
||||
<section xml:id="ssec-distribution-phase"><title>The distribution
|
||||
phase</title>
|
||||
|
||||
<para>The distribution phase is intended to produce a source
|
||||
distribution of the package. The default
|
||||
@ -1158,7 +1204,7 @@ echo @foo@
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term>Qt</term>
|
||||
<term>Qt 4</term>
|
||||
<listitem><para>Sets the <envar>QTDIR</envar> environment variable
|
||||
to Qt’s path.</para></listitem>
|
||||
</varlistentry>
|
||||
@ -1191,7 +1237,7 @@ echo @foo@
|
||||
</section>
|
||||
|
||||
|
||||
<section><title>Purity in Nixpkgs</title>
|
||||
<section xml:id="sec-purity-in-nixpkgs"><title>Purity in Nixpkgs</title>
|
||||
|
||||
<para>[measures taken to prevent dependencies on packages outside the
|
||||
store, and what you can do to prevent them]</para>
|
||||
|
283
doc/submitting-changes.xml
Normal file
283
doc/submitting-changes.xml
Normal file
@ -0,0 +1,283 @@
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xml:id="chap-submitting-changes">
|
||||
|
||||
<title>Submitting changes</title>
|
||||
|
||||
<section>
|
||||
<title>Making patches</title>
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>Read <link xlink:href="https://nixos.org/nixpkgs/manual/">Manual (How to write packages for Nix)</link>.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Fork the repository on GitHub.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Create a branch for your future fix.
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>You can make a branch from a commit of your local <command>nixos-version</command>. That will help you avoid additional local compilation, because you will receive packages from the binary cache.
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>For example: <command>nixos-version</command> returns <command>15.05.git.0998212 (Dingo)</command>. So you can do:</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
|
||||
<screen>
|
||||
$ git checkout 0998212
|
||||
$ git checkout -b 'fix/pkg-name-update'
|
||||
</screen>
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Please avoid working directly on the <command>master</command> branch.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Make commits of logical units.
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>If you removed packages, made some major NixOS changes, etc., write about them in <command>nixos/doc/manual/release-notes/rl-unstable.xml</command>.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Check for unnecessary whitespace with <command>git diff --check</command> before committing.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Format the commit message in the following way:</para>
|
||||
<programlisting>
|
||||
(pkg-name | service-name): (from -> to | init at version | refactor | etc)
|
||||
Additional information.
|
||||
</programlisting>
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>Examples:
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
<command>nginx: init at 2.0.1</command>
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
<command>firefox: 3.0 -> 3.1.1</command>
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
<command>hydra service: add bazBaz option</command>
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
<command>nginx service: refactor config generation</command>
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Test your changes. If you work with
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>nixpkgs:
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>update pkg ->
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
<command>nix-env -i pkg-name -f <path to your local nixpkgs folder></command>
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>add pkg ->
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>Make sure it's in <command>pkgs/top-level/all-packages.nix</command>
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
<command>nix-env -i pkg-name -f <path to your local nixpkgs folder></command>
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
<emphasis>If you don't want to install the pkg in your profile</emphasis>.
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
<command>nix-build -A pkg-attribute-name <path to your local nixpkgs folder>/default.nix</command> and check results in the folder <command>result</command>. It will appear in the same directory where you did <command>nix-build</command>.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>If you did <command>nix-env -i pkg-name</command> you can do <command>nix-env -e pkg-name</command> to uninstall it from your system.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>NixOS and its modules:
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>You can add the new module to your NixOS configuration file (usually it's <command>/etc/nixos/configuration.nix</command>).
|
||||
And do <command>sudo nixos-rebuild test -I nixpkgs=<path to your local nixpkgs folder> --fast</command>.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>If you have commits like <command>pkg-name: oh, forgot to insert whitespace</command>, squash them. Use <command>git rebase -i</command>.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Rebase your branch against the current <command>master</command>.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>Submitting changes</title>
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>Push your changes to your fork of nixpkgs.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Create pull request:
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>Write the title in the format <command>(pkg-name | service): improvement</command>.
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>If you update the pkg, write versions <command>from -> to</command>.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Mention in a comment whether you have tested your patch. Do not rely too much on <command>TravisCI</command>.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>If you make an improvement, write about your motivation.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Notify maintainers of the package. For example add to the message: <command>cc @jagajaga @domenkozar</command>.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>Hotfixing pull requests</title>
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>Make the appropriate changes in your branch.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Don't create additional commits, do
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><command>git rebase -i</command></para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<command>git push --force</command> to your branch.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
</itemizedlist>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>Commit policy</title>
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>Commits must be sufficiently tested before being merged, both for the master and staging branches.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Hydra builds for master and staging should not be used as a testing platform; it's a build farm for changes that have already been tested.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Master should only see non-breaking commits that do not cause mass rebuilds.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Staging should only see non-breaking mass-rebuild commits. That means it's not to be used for testing, and changes must have been well tested already. <link xlink:href="http://comments.gmane.org/gmane.linux.distributions.nixos/13447">Read policy here</link>.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>If staging is already in a broken state, please refrain from adding extra new breakages. Stabilize it for a few days, merge into master, then resume development on staging. <link xlink:href="http://hydra.nixos.org/jobset/nixpkgs/staging#tabs-evaluations">Keep an eye on the staging evaluations here</link>.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>When changing the bootloader installation process, extra care must be taken. Grub installations cannot be rolled back, hence changes may break people's installations forever. For any non-trivial change to the bootloader please file a PR asking for review, especially from @edolstra.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
|
||||
</section>
|
||||
</chapter>
|
||||
|
@ -6,7 +6,6 @@ with {
|
||||
inherit (import ./default.nix) fold;
|
||||
inherit (import ./strings.nix) concatStringsSep;
|
||||
inherit (import ./lists.nix) concatMap concatLists all deepSeqList;
|
||||
inherit (import ./misc.nix) maybeAttr;
|
||||
};
|
||||
|
||||
rec {
|
||||
@ -76,9 +75,29 @@ rec {
|
||||
=> { foo = 1; }
|
||||
*/
|
||||
filterAttrs = pred: set:
|
||||
listToAttrs (fold (n: ys: let v = set.${n}; in if pred n v then [(nameValuePair n v)] ++ ys else ys) [] (attrNames set));
|
||||
listToAttrs (concatMap (name: let v = set.${name}; in if pred name v then [(nameValuePair name v)] else []) (attrNames set));
|
||||
|
||||
|
||||
/* Filter an attribute set recursively by removing all attributes for
|
||||
which the given predicate returns false.
|
||||
|
||||
Example:
|
||||
filterAttrsRecursive (n: v: v != null) { foo = { bar = null; }; }
|
||||
=> { foo = {}; }
|
||||
*/
|
||||
filterAttrsRecursive = pred: set:
|
||||
listToAttrs (
|
||||
concatMap (name:
|
||||
let v = set.${name}; in
|
||||
if pred name v then [
|
||||
(nameValuePair name (
|
||||
if isAttrs v then filterAttrsRecursive pred v
|
||||
else v
|
||||
))
|
||||
] else []
|
||||
) (attrNames set)
|
||||
);
|
||||
|
||||
/* foldAttrs: apply fold functions to values grouped by key. Eg accumulate values as list:
|
||||
foldAttrs (n: a: [n] ++ a) [] [{ a = 2; } { a = 3; }]
|
||||
=> { a = [ 2 3 ]; }
|
||||
@ -86,7 +105,7 @@ rec {
|
||||
foldAttrs = op: nul: list_of_attrs:
|
||||
fold (n: a:
|
||||
fold (name: o:
|
||||
o // (listToAttrs [{inherit name; value = op n.${name} (maybeAttr name nul a); }])
|
||||
o // (listToAttrs [{inherit name; value = op n.${name} (a.${name} or nul); }])
|
||||
) a (attrNames n)
|
||||
) {} list_of_attrs;
|
||||
|
||||
@ -222,6 +241,16 @@ rec {
|
||||
isDerivation = x: isAttrs x && x ? type && x.type == "derivation";
|
||||
|
||||
|
||||
/* Convert a store path to a fake derivation. */
|
||||
toDerivation = path:
|
||||
let path' = builtins.storePath path; in
|
||||
{ type = "derivation";
|
||||
name = builtins.unsafeDiscardStringContext (builtins.substring 33 (-1) (baseNameOf path'));
|
||||
outPath = path';
|
||||
outputs = [ "out" ];
|
||||
};
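A rough sketch of the resulting fake derivation, assuming a schematic store path (the hash is abbreviated here, and builtins.storePath requires the path to actually exist in the store):

    toDerivation /nix/store/<hash>-hello-2.10
    => { type = "derivation"; name = "hello-2.10"; outPath = "/nix/store/<hash>-hello-2.10"; outputs = [ "out" ]; }

The substring 33 (-1) call strips the 32-character hash plus the following dash from the base name, leaving just "hello-2.10".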
|
||||
|
||||
|
||||
/* If the Boolean `cond' is true, return the attribute set `as',
|
||||
otherwise an empty attribute set. */
|
||||
optionalAttrs = cond: as: if cond then as else {};
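For illustration (made-up values):

    optionalAttrs true  { foo = 1; }   => { foo = 1; }
    optionalAttrs false { foo = 1; }   => { }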
|
||||
|
@ -1,6 +1,8 @@
|
||||
let
|
||||
|
||||
lib = import ./default.nix;
|
||||
inherit (builtins) attrNames isFunction;
|
||||
|
||||
in
|
||||
|
||||
rec {
|
||||
@ -49,10 +51,6 @@ rec {
|
||||
else { }));
|
||||
|
||||
|
||||
# usage: (you can use override multiple times)
|
||||
# let d = makeOverridable stdenv.mkDerivation { name = ..; buildInputs; }
|
||||
# noBuildInputs = d.override { buildInputs = []; }
|
||||
# additionalBuildInputs = d.override ( args : args // { buildInputs = args.buildInputs ++ [ additional ]; } )
|
||||
makeOverridable = f: origArgs:
|
||||
let
|
||||
ff = f origArgs;
|
||||
@ -60,24 +58,16 @@ rec {
|
||||
in
|
||||
if builtins.isAttrs ff then (ff //
|
||||
{ override = newArgs: makeOverridable f (overrideWith newArgs);
|
||||
deepOverride = newArgs:
|
||||
makeOverridable f (lib.overrideExisting (lib.mapAttrs (deepOverrider newArgs) origArgs) newArgs);
|
||||
overrideDerivation = fdrv:
|
||||
makeOverridable (args: overrideDerivation (f args) fdrv) origArgs;
|
||||
})
|
||||
else if builtins.isFunction ff then
|
||||
{ override = newArgs: makeOverridable f (overrideWith newArgs);
|
||||
__functor = self: ff;
|
||||
deepOverride = throw "deepOverride not yet supported for functors";
|
||||
overrideDerivation = throw "overrideDerivation not yet supported for functors";
|
||||
}
|
||||
else ff;
|
||||
|
||||
deepOverrider = newArgs: name: x: if builtins.isAttrs x then (
|
||||
if x ? deepOverride then (x.deepOverride newArgs) else
|
||||
if x ? override then (x.override newArgs) else
|
||||
x) else x;
|
||||
|
||||
|
||||
/* Call the package function in the file `fn' with the required
|
||||
arguments automatically. The function is called with the
|
||||
@ -102,12 +92,28 @@ rec {
|
||||
*/
|
||||
callPackageWith = autoArgs: fn: args:
|
||||
let
|
||||
f = if builtins.isFunction fn then fn else import fn;
|
||||
f = if builtins.isFunction fn then fn else import fn;
|
||||
auto = builtins.intersectAttrs (builtins.functionArgs f) autoArgs;
|
||||
in makeOverridable f (auto // args);
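A minimal, self-contained sketch of the intended behaviour (the names autoArgs, greeting and name are invented for the example): arguments the function declares are filled in from autoArgs, and explicit arguments are merged on top.

    let
      autoArgs = { greeting = "hello"; unused = true; };
      f = { greeting, name }: greeting + " " + name;
    in callPackageWith autoArgs f { name = "world"; }
    => "hello world"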
|
||||
|
||||
|
||||
/* Add attributes to each output of a derivation without changing the derivation itself */
|
||||
/* Like callPackage, but for a function that returns an attribute
|
||||
set of derivations. The override function is added to the
|
||||
individual attributes. */
|
||||
callPackagesWith = autoArgs: fn: args:
|
||||
let
|
||||
f = if builtins.isFunction fn then fn else import fn;
|
||||
auto = builtins.intersectAttrs (builtins.functionArgs f) autoArgs;
|
||||
finalArgs = auto // args;
|
||||
pkgs = f finalArgs;
|
||||
mkAttrOverridable = name: pkg: pkg // {
|
||||
override = newArgs: mkAttrOverridable name (f (finalArgs // newArgs)).${name};
|
||||
};
|
||||
in lib.mapAttrs mkAttrOverridable pkgs;
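A small sketch (invented values) showing that each attribute gets its own override, which re-invokes the whole function with updated arguments:

    let
      f = { x }: { a = { value = x; }; b = { value = x + 1; }; };
      ps = callPackagesWith { x = 1; } f { };
    in [ ps.a.value (ps.b.override { x = 10; }).value ]
    => [ 1 11 ]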
|
||||
|
||||
|
||||
/* Add attributes to each output of a derivation without changing
|
||||
the derivation itself. */
|
||||
addPassthru = drv: passthru:
|
||||
let
|
||||
outputs = drv.outputs or [ "out" ];
|
||||
@ -158,4 +164,23 @@ rec {
|
||||
drv' = (lib.head outputsList).value;
|
||||
in lib.deepSeq drv' drv';
|
||||
|
||||
/* Make a set of packages with a common scope. All packages called
|
||||
with the provided `callPackage' will be evaluated with the same
|
||||
arguments. Any package in the set may depend on any other. The
|
||||
`override' function allows subsequent modification of the package
|
||||
set in a consistent way, i.e. all packages in the set will be
|
||||
called with the overridden packages. The package sets may be
|
||||
hierarchical: the packages in the set are called with the scope
|
||||
provided by `newScope' and the set provides a `newScope' attribute
|
||||
which can form the parent scope for later package sets. */
|
||||
makeScope = newScope: f:
|
||||
let self = f self // {
|
||||
newScope = scope: newScope (self // scope);
|
||||
callPackage = self.newScope {};
|
||||
override = g: makeScope newScope (self_:
|
||||
let super = f self_;
|
||||
in super // g super self_);
|
||||
};
|
||||
in self;
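A condensed sketch of how a scope might be built and overridden; newScope is deliberately simplified here (in nixpkgs it also mixes in the surrounding package set), and all names are illustrative:

    let
      newScope = extra: callPackageWith extra;
      s = makeScope newScope (self: {
        greeting = "hello";
        sentence = self.callPackage ({ greeting }: "${greeting}, world") { };
      });
    in [ s.sentence (s.override (super: self: { greeting = "goodbye"; })).sentence ]
    => [ "hello, world" "goodbye, world" ]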
|
||||
|
||||
}
|
||||
|
@ -11,7 +11,7 @@ let
|
||||
types = import ./types.nix;
|
||||
meta = import ./meta.nix;
|
||||
debug = import ./debug.nix;
|
||||
misc = import ./misc.nix;
|
||||
misc = import ./deprecated.nix;
|
||||
maintainers = import ./maintainers.nix;
|
||||
platforms = import ./platforms.nix;
|
||||
systems = import ./systems.nix;
|
||||
|
423
lib/deprecated.nix
Normal file
@ -0,0 +1,423 @@
|
||||
let lib = import ./default.nix;
|
||||
inherit (builtins) isFunction head tail isList isAttrs isInt attrNames;
|
||||
|
||||
in
|
||||
|
||||
with import ./lists.nix;
|
||||
with import ./attrsets.nix;
|
||||
with import ./strings.nix;
|
||||
|
||||
rec {
|
||||
|
||||
# returns default if env var is not set
|
||||
maybeEnv = name: default:
|
||||
let value = builtins.getEnv name; in
|
||||
if value == "" then default else value;
|
||||
|
||||
defaultMergeArg = x : y: if builtins.isAttrs y then
|
||||
y
|
||||
else
|
||||
(y x);
|
||||
defaultMerge = x: y: x // (defaultMergeArg x y);
|
||||
foldArgs = merger: f: init: x:
|
||||
let arg=(merger init (defaultMergeArg init x));
|
||||
# now add the function with composed args already applied to the final attrs
|
||||
base = (setAttrMerge "passthru" {} (f arg)
|
||||
( z : z // rec {
|
||||
function = foldArgs merger f arg;
|
||||
args = (lib.attrByPath ["passthru" "args"] {} z) // x;
|
||||
} ));
|
||||
withStdOverrides = base // {
|
||||
override = base.passthru.function;
|
||||
} ;
|
||||
in
|
||||
withStdOverrides;
|
||||
|
||||
|
||||
# predecessors: proposed replacement for applyAndFun (which has a bug because it merges twice)
|
||||
# the naming "overridableDelayableArgs" tries to express that you can
|
||||
# - override attr values which have been supplied earlier
|
||||
# - use attr values before they have been supplied by accessing the fix point
|
||||
# name "fixed"
|
||||
# f: the (delayed overridden) arguments are applied to this
|
||||
#
|
||||
# initial: initial attrs arguments and settings. see defaultOverridableDelayableArgs
|
||||
#
|
||||
# returns: f applied to the arguments // special attributes attrs
|
||||
# a) merge: merge applied args with new args. Whether an argument is overridden depends on the merge settings
|
||||
# b) replace: this lets you replace and remove names no matter which merge function has been set
|
||||
#
|
||||
# examples: see test cases "res" below;
|
||||
overridableDelayableArgs =
|
||||
f : # the function applied to the arguments
|
||||
initial : # you pass attrs, the functions below are passing a function taking the fix argument
|
||||
let
|
||||
takeFixed = if isFunction initial then initial else (fixed : initial); # transform initial to an expression always taking the fixed argument
|
||||
tidy = args :
|
||||
let # apply all functions given in "applyPreTidy" in sequence
|
||||
applyPreTidyFun = fold ( n : a : x : n ( a x ) ) lib.id (maybeAttr "applyPreTidy" [] args);
|
||||
in removeAttrs (applyPreTidyFun args) ( ["applyPreTidy"] ++ (maybeAttr "removeAttrs" [] args) ); # tidy up args before applying them
|
||||
fun = n : x :
|
||||
let newArgs = fixed :
|
||||
let args = takeFixed fixed;
|
||||
mergeFun = args.${n};
|
||||
in if isAttrs x then (mergeFun args x)
|
||||
else assert isFunction x;
|
||||
mergeFun args (x ( args // { inherit fixed; }));
|
||||
in overridableDelayableArgs f newArgs;
|
||||
in
|
||||
(f (tidy (lib.fix takeFixed))) // {
|
||||
merge = fun "mergeFun";
|
||||
replace = fun "keepFun";
|
||||
};
|
||||
defaultOverridableDelayableArgs = f :
|
||||
let defaults = {
|
||||
mergeFun = mergeAttrByFunc; # default merge function. merge strategy (concatenate lists, strings) is given by mergeAttrBy
|
||||
keepFun = a : b : { inherit (a) removeAttrs mergeFun keepFun mergeAttrBy; } // b; # even when using replace preserve these values
|
||||
applyPreTidy = []; # list of functions applied to args before args are tidied up (usage case : prepareDerivationArgs)
|
||||
mergeAttrBy = mergeAttrBy // {
|
||||
applyPreTidy = a : b : a ++ b;
|
||||
removeAttrs = a : b: a ++ b;
|
||||
};
|
||||
removeAttrs = ["mergeFun" "keepFun" "mergeAttrBy" "removeAttrs" "fixed" ]; # before applying the arguments to the function make sure these names are gone
|
||||
};
|
||||
in (overridableDelayableArgs f defaults).merge;
|
||||
|
||||
|
||||
|
||||
# rec { # an example of how composedArgsAndFun can be used
|
||||
# a = composedArgsAndFun (x : x) { a = ["2"]; meta = { d = "bar";}; };
|
||||
# # meta.d will be lost ! It's your task to preserve it (eg using a merge function)
|
||||
# b = a.passthru.function { a = [ "3" ]; meta = { d2 = "bar2";}; };
|
||||
# # instead of passing/ overriding values you can use a merge function:
|
||||
# c = b.passthru.function ( x: { a = x.a ++ ["4"]; }); # consider using (maybeAttr "a" [] x)
|
||||
# }
|
||||
# result:
|
||||
# {
|
||||
# a = { a = ["2"]; meta = { d = "bar"; }; passthru = { function = .. }; };
|
||||
# b = { a = ["3"]; meta = { d2 = "bar2"; }; passthru = { function = .. }; };
|
||||
# c = { a = ["3" "4"]; meta = { d2 = "bar2"; }; passthru = { function = .. }; };
|
||||
# # c2 is equal to c
|
||||
# }
|
||||
composedArgsAndFun = f: foldArgs defaultMerge f {};
|
||||
|
||||
|
||||
# shortcut for attrByPath ["name"] default attrs
|
||||
maybeAttrNullable = maybeAttr;
|
||||
|
||||
# shortcut for attrByPath ["name"] default attrs
|
||||
maybeAttr = name: default: attrs: attrs.${name} or default;
|
||||
|
||||
|
||||
# Return the second argument if the first one is true or the empty version
|
||||
# of the second argument.
|
||||
ifEnable = cond: val:
|
||||
if cond then val
|
||||
else if builtins.isList val then []
|
||||
else if builtins.isAttrs val then {}
|
||||
# else if builtins.isString val then ""
|
||||
else if val == true || val == false then false
|
||||
else null;
|
||||
|
||||
|
||||
# Return true only if there is an attribute and it is true.
|
||||
checkFlag = attrSet: name:
|
||||
if name == "true" then true else
|
||||
if name == "false" then false else
|
||||
if (elem name (attrByPath ["flags"] [] attrSet)) then true else
|
||||
attrByPath [name] false attrSet ;
|
||||
|
||||
|
||||
# Input : attrSet, [ [name default] ... ], name
|
||||
# Output : its value or default.
|
||||
getValue = attrSet: argList: name:
|
||||
( attrByPath [name] (if checkFlag attrSet name then true else
|
||||
if argList == [] then null else
|
||||
let x = builtins.head argList; in
|
||||
if (head x) == name then
|
||||
(head (tail x))
|
||||
else (getValue attrSet
|
||||
(tail argList) name)) attrSet );
|
||||
|
||||
|
||||
# Input : attrSet, [[name default] ...], [ [flagname reqs..] ... ]
|
||||
# Output : are reqs satisfied? It's asserted.
|
||||
checkReqs = attrSet : argList : condList :
|
||||
(
|
||||
fold lib.and true
|
||||
(map (x: let name = (head x) ; in
|
||||
|
||||
((checkFlag attrSet name) ->
|
||||
(fold lib.and true
|
||||
(map (y: let val=(getValue attrSet argList y); in
|
||||
(val!=null) && (val!=false))
|
||||
(tail x))))) condList)) ;
|
||||
|
||||
|
||||
# This function has O(n^2) performance.
|
||||
uniqList = {inputList, acc ? []} :
|
||||
let go = xs : acc :
|
||||
if xs == []
|
||||
then []
|
||||
else let x = head xs;
|
||||
y = if elem x acc then [] else [x];
|
||||
in y ++ go (tail xs) (y ++ acc);
|
||||
in go inputList acc;
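For illustration (the input is made up); the order of first occurrence is preserved:

    uniqList { inputList = [ 1 2 1 3 2 ]; }
    => [ 1 2 3 ]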
|
||||
|
||||
uniqListExt = {inputList, outputList ? [],
|
||||
getter ? (x : x), compare ? (x: y: x==y)}:
|
||||
if inputList == [] then outputList else
|
||||
let x=head inputList;
|
||||
isX = y: (compare (getter y) (getter x));
|
||||
newOutputList = outputList ++
|
||||
(if any isX outputList then [] else [x]);
|
||||
in uniqListExt {outputList=newOutputList;
|
||||
inputList = (tail inputList);
|
||||
inherit getter compare;
|
||||
};
|
||||
|
||||
|
||||
|
||||
condConcat = name: list: checker:
|
||||
if list == [] then name else
|
||||
if checker (head list) then
|
||||
condConcat
|
||||
(name + (head (tail list)))
|
||||
(tail (tail list))
|
||||
checker
|
||||
else condConcat
|
||||
name (tail (tail list)) checker;
|
||||
|
||||
lazyGenericClosure = {startSet, operator}:
|
||||
let
|
||||
work = list: doneKeys: result:
|
||||
if list == [] then
|
||||
result
|
||||
else
|
||||
let x = head list; key = x.key; in
|
||||
if elem key doneKeys then
|
||||
work (tail list) doneKeys result
|
||||
else
|
||||
work (tail list ++ operator x) ([key] ++ doneKeys) ([x] ++ result);
|
||||
in
|
||||
work startSet [] [];
|
||||
|
||||
innerModifySumArgs = f: x: a: b: if b == null then (f a b) // x else
|
||||
innerModifySumArgs f x (a // b);
|
||||
modifySumArgs = f: x: innerModifySumArgs f x {};
|
||||
|
||||
|
||||
innerClosePropagation = acc : xs :
|
||||
if xs == []
|
||||
then acc
|
||||
else let y = head xs;
|
||||
ys = tail xs;
|
||||
in if ! isAttrs y
|
||||
then innerClosePropagation acc ys
|
||||
else let acc' = [y] ++ acc;
|
||||
in innerClosePropagation
|
||||
acc'
|
||||
(uniqList { inputList = (maybeAttrNullable "propagatedBuildInputs" [] y)
|
||||
++ (maybeAttrNullable "propagatedNativeBuildInputs" [] y)
|
||||
++ ys;
|
||||
acc = acc';
|
||||
}
|
||||
);
|
||||
|
||||
closePropagation = list: (uniqList {inputList = (innerClosePropagation [] list);});
|
||||
|
||||
# calls a function (f attr value ) for each record item. returns a list
|
||||
mapAttrsFlatten = f : r : map (attr: f attr r.${attr}) (attrNames r);
|
||||
|
||||
# attribute set containing one attribute
|
||||
nvs = name : value : listToAttrs [ (nameValuePair name value) ];
|
||||
# adds / replaces an attribute of an attribute set
|
||||
setAttr = set : name : v : set // (nvs name v);
|
||||
|
||||
# setAttrMerge (similar to mergeAttrsWithFunc but only merges the values of a particular name)
|
||||
# setAttrMerge "a" [] { a = [2];} (x : x ++ [3]) -> { a = [2 3]; }
|
||||
# setAttrMerge "a" [] { } (x : x ++ [3]) -> { a = [ 3]; }
|
||||
setAttrMerge = name : default : attrs : f :
|
||||
setAttr attrs name (f (maybeAttr name default attrs));
|
||||
|
||||
# Using f = a : b : b the result is similar to //
|
||||
# merge attributes with custom function handling the case that the attribute
|
||||
# exists in both sets
|
||||
mergeAttrsWithFunc = f : set1 : set2 :
|
||||
fold (n: set : if set ? ${n}
|
||||
then setAttr set n (f set.${n} set2.${n})
|
||||
else set )
|
||||
(set2 // set1) (attrNames set2);
|
||||
|
||||
# merging two attribute set concatenating the values of same attribute names
|
||||
# eg { a = 7; } { a = [ 2 3 ]; } becomes { a = [ 7 2 3 ]; }
|
||||
mergeAttrsConcatenateValues = mergeAttrsWithFunc ( a : b : (toList a) ++ (toList b) );
|
||||
|
||||
# merges attributes using //, if a name exists in both attributes
|
||||
# an error will be triggered unless it's listed in mergeLists
|
||||
# so you can mergeAttrsNoOverride { buildInputs = [a]; } { buildInputs = [a]; } {} to get
|
||||
# { buildInputs = [a b]; }
|
||||
# merging buildPhase doesn't really make sense. The cases where appending/prefixing will fit your needs will be rare;
|
||||
# in these cases the first buildPhase will override the second one
|
||||
# ! deprecated, use mergeAttrByFunc instead
|
||||
mergeAttrsNoOverride = { mergeLists ? ["buildInputs" "propagatedBuildInputs"],
|
||||
overrideSnd ? [ "buildPhase" ]
|
||||
} : attrs1 : attrs2 :
|
||||
fold (n: set :
|
||||
setAttr set n ( if set ? ${n}
|
||||
then # merge
|
||||
if elem n mergeLists # attribute contains list, merge them by concatenating
|
||||
then attrs2.${n} ++ attrs1.${n}
|
||||
else if elem n overrideSnd
|
||||
then attrs1.${n}
|
||||
else throw "error mergeAttrsNoOverride, attribute ${n} given in both attributes - no merge func defined"
|
||||
else attrs2.${n} # add attribute not existing in attr1
|
||||
)) attrs1 (attrNames attrs2);
|
||||
|
||||
|
||||
# example usage:
|
||||
# mergeAttrByFunc {
|
||||
# inherit mergeAttrBy; # defined below
|
||||
# buildInputs = [ a b ];
|
||||
# } {
|
||||
# buildInputs = [ c d ];
|
||||
# };
|
||||
# will result in
|
||||
# { mergeAttrBy = [...]; buildInputs = [ a b c d ]; }
|
||||
# is used by prepareDerivationArgs, defaultOverridableDelayableArgs and can be used when composing using
|
||||
# foldArgs, composedArgsAndFun or applyAndFun. Example: composableDerivation in all-packages.nix
|
||||
mergeAttrByFunc = x : y :
|
||||
let
|
||||
mergeAttrBy2 = { mergeAttrBy=lib.mergeAttrs; }
|
||||
// (maybeAttr "mergeAttrBy" {} x)
|
||||
// (maybeAttr "mergeAttrBy" {} y); in
|
||||
fold lib.mergeAttrs {} [
|
||||
x y
|
||||
(mapAttrs ( a : v : # merge special names using given functions
|
||||
if x ? ${a}
|
||||
then if y ? ${a}
|
||||
then v x.${a} y.${a} # both have attr, use merge func
|
||||
else x.${a} # only x has attr
|
||||
else y.${a} # only y has attr)
|
||||
) (removeAttrs mergeAttrBy2
|
||||
# don't merge attrs which are neither in x nor y
|
||||
(filter (a: ! x ? ${a} && ! y ? ${a})
|
||||
(attrNames mergeAttrBy2))
|
||||
)
|
||||
)
|
||||
];
|
||||
mergeAttrsByFuncDefaults = foldl mergeAttrByFunc { inherit mergeAttrBy; };
|
||||
mergeAttrsByFuncDefaultsClean = list: removeAttrs (mergeAttrsByFuncDefaults list) ["mergeAttrBy"];
|
||||
|
||||
# merge attrs based on version key into mkDerivation args, see mergeAttrBy to learn about smart merge defaults
|
||||
#
|
||||
# This function is best explained by an example:
|
||||
#
|
||||
# {version ? "2.x"} :
|
||||
#
|
||||
# mkDerivation (mergeAttrsByVersion "package-name" version
|
||||
# { # version specific settings
|
||||
# "git" = { src = ..; preConfigre = "autogen.sh"; buildInputs = [automake autoconf libtool]; };
|
||||
# "2.x" = { src = ..; };
|
||||
# }
|
||||
# { // shared settings
|
||||
# buildInputs = [ common build inputs ];
|
||||
# meta = { .. }
|
||||
# }
|
||||
# )
|
||||
#
|
||||
# Please note that e.g. Eelco Dolstra usually prefers having one file for
|
||||
# each version. On the other hand there are valuable additional design goals
|
||||
# - readability
|
||||
# - do it once only
|
||||
# - try to avoid duplication
|
||||
#
|
||||
# Marc Weber and Michael Raskin sometimes prefer keeping older
|
||||
# versions around for testing and regression tests - as long as it's cheap to
|
||||
# do so.
|
||||
#
|
||||
# Very often it just happens that the "shared" code is the bigger part.
|
||||
# Then using this function might be appropriate.
|
||||
#
|
||||
# Be aware that it's easy to cause recompilations in all versions when using
|
||||
# this function - also if derivations get too complex splitting into multiple
|
||||
# files is the way to go.
|
||||
#
|
||||
# See misc.nix -> versionedDerivation
|
||||
# discussion: nixpkgs: pull/310
|
||||
mergeAttrsByVersion = name: version: attrsByVersion: base:
|
||||
mergeAttrsByFuncDefaultsClean [ { name = "${name}-${version}"; } base (maybeAttr version (throw "bad version ${version} for ${name}") attrsByVersion)];
|
||||
|
||||
# sane defaults (same name as attr name so that inherit can be used)
|
||||
mergeAttrBy = # { buildInputs = concatList; [...]; passthru = mergeAttr; [..]; }
|
||||
listToAttrs (map (n : nameValuePair n lib.concat)
|
||||
[ "nativeBuildInputs" "buildInputs" "propagatedBuildInputs" "configureFlags" "prePhases" "postAll" "patches" ])
|
||||
// listToAttrs (map (n : nameValuePair n lib.mergeAttrs) [ "passthru" "meta" "cfg" "flags" ])
|
||||
// listToAttrs (map (n : nameValuePair n (a: b: "${a}\n${b}") ) [ "preConfigure" "postInstall" ])
|
||||
;
|
||||
|
||||
# prepareDerivationArgs tries to make writing configurable derivations easier
|
||||
# example:
|
||||
# prepareDerivationArgs {
|
||||
# mergeAttrBy = {
|
||||
# myScript = x : y : x ++ "\n" ++ y;
|
||||
# };
|
||||
# cfg = {
|
||||
# readlineSupport = true;
|
||||
# };
|
||||
# flags = {
|
||||
# readline = {
|
||||
# set = {
|
||||
# configureFlags = [ "--with-compiler=${compiler}" ];
|
||||
# buildInputs = [ compiler ];
|
||||
# pass = { inherit compiler; READLINE=1; };
|
||||
# assertion = compiler.dllSupport;
|
||||
# myScript = "foo";
|
||||
# };
|
||||
# unset = { configureFlags = ["--without-compiler"]; };
|
||||
# };
|
||||
# };
|
||||
# src = ...
|
||||
# buildPhase = '' ... '';
|
||||
# name = ...
|
||||
# myScript = "bar";
|
||||
# };
|
||||
# if you don't need unset you can omit the surrounding set = { .. } attr
|
||||
# all attrs except flags cfg and mergeAttrBy will be merged with the
|
||||
# additional data from flags depending on config settings
|
||||
# It's used in composableDerivation in all-packages.nix. It's also used
|
||||
# heavily in the new python and libs implementation
|
||||
#
|
||||
# should we check for misspelled cfg options?
|
||||
# TODO use args.mergeFun here as well?
|
||||
prepareDerivationArgs = args:
|
||||
let args2 = { cfg = {}; flags = {}; } // args;
|
||||
flagName = name : "${name}Support";
|
||||
cfgWithDefaults = (listToAttrs (map (n : nameValuePair (flagName n) false) (attrNames args2.flags)))
|
||||
// args2.cfg;
|
||||
opts = attrValues (mapAttrs (a : v :
|
||||
let v2 = if v ? set || v ? unset then v else { set = v; };
|
||||
n = if cfgWithDefaults.${flagName a} then "set" else "unset";
|
||||
attr = maybeAttr n {} v2; in
|
||||
if (maybeAttr "assertion" true attr)
|
||||
then attr
|
||||
else throw "assertion of flag ${a} of derivation ${args.name} failed"
|
||||
) args2.flags );
|
||||
in removeAttrs
|
||||
(mergeAttrsByFuncDefaults ([args] ++ opts ++ [{ passthru = cfgWithDefaults; }]))
|
||||
["flags" "cfg" "mergeAttrBy" ];
|
||||
|
||||
|
||||
nixType = x:
|
||||
if isAttrs x then
|
||||
if x ? outPath then "derivation"
|
||||
else "aattrs"
|
||||
else if isFunction x then "function"
|
||||
else if isList x then "list"
|
||||
else if x == true then "bool"
|
||||
else if x == false then "bool"
|
||||
else if x == null then "null"
|
||||
else if isInt x then "int"
|
||||
else "string";
|
||||
|
||||
}
|
@ -85,6 +85,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
|
||||
fullName = "Creative Commons Zero v1.0 Universal";
|
||||
};
|
||||
|
||||
cc-by-sa-25 = spdx {
|
||||
spdxId = "CC-BY-SA-2.5";
|
||||
fullName = "Creative Commons Attribution Share Alike 2.5";
|
||||
};
|
||||
|
||||
cc-by-30 = spdx {
|
||||
spdxId = "CC-BY-3.0";
|
||||
fullName = "Creative Commons Attribution 3.0";
|
||||
@ -150,6 +155,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
|
||||
fullName = "GNU Free Documentation License v1.2";
|
||||
};
|
||||
|
||||
fdl13 = spdx {
|
||||
spdxId = "GFDL-1.3";
|
||||
fullName = "GNU Free Documentation License v1.2";
|
||||
};
|
||||
|
||||
free = {
|
||||
fullName = "Unspecified free software license";
|
||||
};
|
||||
@ -322,11 +332,21 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
|
||||
fullName = "University of Illinois/NCSA Open Source License";
|
||||
};
|
||||
|
||||
notion_lgpl = {
|
||||
url = "https://raw.githubusercontent.com/raboof/notion/master/LICENSE";
|
||||
fullName = "Notion modified LGPL";
|
||||
};
|
||||
|
||||
ofl = spdx {
|
||||
spdxId = "OFL-1.1";
|
||||
fullName = "SIL Open Font License 1.1";
|
||||
};
|
||||
|
||||
openldap = spdx {
|
||||
spdxId = "OLDAP-2.8";
|
||||
fullName = "Open LDAP Public License v2.8";
|
||||
};
|
||||
|
||||
openssl = spdx {
|
||||
spdxId = "OpenSSL";
|
||||
fullName = "OpenSSL License";
|
||||
@ -403,6 +423,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
|
||||
fullName = "The Unlicense";
|
||||
};
|
||||
|
||||
vim = spdx {
|
||||
spdxId = "Vim";
|
||||
fullName = "Vim License";
|
||||
};
|
||||
|
||||
vsl10 = spdx {
|
||||
spdxId = "VSL-1.0";
|
||||
fullName = "Vovida Software License v1.0";
|
||||
|
170
lib/lists.nix
@ -4,7 +4,7 @@ with import ./trivial.nix;
|
||||
|
||||
rec {
|
||||
|
||||
inherit (builtins) head tail length isList elemAt concatLists filter elem;
|
||||
inherit (builtins) head tail length isList elemAt concatLists filter elem genList;
|
||||
|
||||
|
||||
# Create a list consisting of a single element. `singleton x' is
|
||||
@ -38,16 +38,24 @@ rec {
|
||||
in foldl' (length list - 1);
|
||||
|
||||
|
||||
# map with index: `imap (i: v: "${v}-${toString i}") ["a" "b"] ==
|
||||
# ["a-1" "b-2"]'
|
||||
imap = f: list:
|
||||
let
|
||||
len = length list;
|
||||
imap' = n:
|
||||
if n == len
|
||||
then []
|
||||
else [ (f (n + 1) (elemAt list n)) ] ++ imap' (n + 1);
|
||||
in imap' 0;
|
||||
# Strict version of foldl.
|
||||
foldl' = builtins.foldl' or foldl;
|
||||
|
||||
|
||||
# Map with index: `imap (i: v: "${v}-${toString i}") ["a" "b"] ==
|
||||
# ["a-1" "b-2"]'. FIXME: why does this start to count at 1?
|
||||
imap =
|
||||
if builtins ? genList then
|
||||
f: list: genList (n: f (n + 1) (elemAt list n)) (length list)
|
||||
else
|
||||
f: list:
|
||||
let
|
||||
len = length list;
|
||||
imap' = n:
|
||||
if n == len
|
||||
then []
|
||||
else [ (f (n + 1) (elemAt list n)) ] ++ imap' (n + 1);
|
||||
in imap' 0;
|
||||
|
||||
|
||||
# Map and concatenate the result.
|
||||
@ -59,7 +67,7 @@ rec {
|
||||
# == [1 2 3 4 5]' and `flatten 1 == [1]'.
|
||||
flatten = x:
|
||||
if isList x
|
||||
then fold (x: y: (flatten x) ++ y) [] x
|
||||
then foldl' (x: y: x ++ (flatten y)) [] x
|
||||
else [x];
|
||||
|
||||
|
||||
@ -86,17 +94,17 @@ rec {
|
||||
|
||||
# Return true iff function `pred' returns true for at least element
|
||||
# of `list'.
|
||||
any = pred: fold (x: y: if pred x then true else y) false;
|
||||
any = builtins.any or (pred: fold (x: y: if pred x then true else y) false);
|
||||
|
||||
|
||||
# Return true iff function `pred' returns true for all elements of
|
||||
# `list'.
|
||||
all = pred: fold (x: y: if pred x then y else false) true;
|
||||
all = builtins.all or (pred: fold (x: y: if pred x then y else false) true);
|
||||
|
||||
|
||||
# Count how many times function `pred' returns true for the elements
|
||||
# of `list'.
|
||||
count = pred: fold (x: c: if pred x then c + 1 else c) 0;
|
||||
count = pred: foldl' (c: x: if pred x then c + 1 else c) 0;
|
||||
|
||||
|
||||
# Return a singleton list or an empty list, depending on a boolean
|
||||
@ -116,10 +124,17 @@ rec {
|
||||
|
||||
|
||||
# Return a list of integers from `first' up to and including `last'.
|
||||
range = first: last:
|
||||
if last < first
|
||||
then []
|
||||
else [first] ++ range (first + 1) last;
|
||||
range =
|
||||
if builtins ? genList then
|
||||
first: last:
|
||||
if first > last
|
||||
then []
|
||||
else genList (n: first + n) (last - first + 1)
|
||||
else
|
||||
first: last:
|
||||
if last < first
|
||||
then []
|
||||
else [first] ++ range (first + 1) last;
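Both branches are meant to agree on results such as these (illustrative values):

    range 2 4   => [ 2 3 4 ]
    range 3 2   => [ ]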
|
||||
|
||||
|
||||
# Partition the elements of a list in two lists, `right' and
|
||||
@ -132,30 +147,37 @@ rec {
|
||||
) { right = []; wrong = []; };
|
||||
|
||||
|
||||
zipListsWith = f: fst: snd:
|
||||
let
|
||||
len1 = length fst;
|
||||
len2 = length snd;
|
||||
len = if len1 < len2 then len1 else len2;
|
||||
zipListsWith' = n:
|
||||
if n != len then
|
||||
[ (f (elemAt fst n) (elemAt snd n)) ]
|
||||
++ zipListsWith' (n + 1)
|
||||
else [];
|
||||
in zipListsWith' 0;
|
||||
zipListsWith =
|
||||
if builtins ? genList then
|
||||
f: fst: snd: genList (n: f (elemAt fst n) (elemAt snd n)) (min (length fst) (length snd))
|
||||
else
|
||||
f: fst: snd:
|
||||
let
|
||||
len = min (length fst) (length snd);
|
||||
zipListsWith' = n:
|
||||
if n != len then
|
||||
[ (f (elemAt fst n) (elemAt snd n)) ]
|
||||
++ zipListsWith' (n + 1)
|
||||
else [];
|
||||
in zipListsWith' 0;
|
||||
|
||||
zipLists = zipListsWith (fst: snd: { inherit fst snd; });
|
||||
|
||||
|
||||
# Reverse the order of the elements of a list. FIXME: O(n^2)!
|
||||
reverseList = fold (e: acc: acc ++ [ e ]) [];
|
||||
# Reverse the order of the elements of a list.
|
||||
reverseList =
|
||||
if builtins ? genList then
|
||||
xs: let l = length xs; in genList (n: elemAt xs (l - n - 1)) l
|
||||
else
|
||||
fold (e: acc: acc ++ [ e ]) [];
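Illustrative result, identical for the genList version and the fold fallback:

    reverseList [ 1 2 3 ]   => [ 3 2 1 ]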
|
||||
|
||||
|
||||
# Sort a list based on a comparator function which compares two
|
||||
# elements and returns true if the first argument is strictly below
|
||||
# the second argument. The returned list is sorted in an increasing
|
||||
# order. The implementation does a quick-sort.
|
||||
sort = strictLess: list:
|
||||
sort = builtins.sort or (
|
||||
strictLess: list:
|
||||
let
|
||||
len = length list;
|
||||
first = head list;
|
||||
@ -169,31 +191,50 @@ rec {
|
||||
pivot = pivot' 1 { left = []; right = []; };
|
||||
in
|
||||
if len < 2 then list
|
||||
else (sort strictLess pivot.left) ++ [ first ] ++ (sort strictLess pivot.right);
|
||||
else (sort strictLess pivot.left) ++ [ first ] ++ (sort strictLess pivot.right));
|
||||
|
||||
|
||||
# Return the first (at most) N elements of a list.
|
||||
take = count: list:
|
||||
let
|
||||
len = length list;
|
||||
take' = n:
|
||||
if n == len || n == count
|
||||
then []
|
||||
else
|
||||
[ (elemAt list n) ] ++ take' (n + 1);
|
||||
in take' 0;
|
||||
take =
|
||||
if builtins ? genList then
|
||||
count: sublist 0 count
|
||||
else
|
||||
count: list:
|
||||
let
|
||||
len = length list;
|
||||
take' = n:
|
||||
if n == len || n == count
|
||||
then []
|
||||
else
|
||||
[ (elemAt list n) ] ++ take' (n + 1);
|
||||
in take' 0;
|
||||
|
||||
|
||||
# Remove the first (at most) N elements of a list.
|
||||
drop = count: list:
|
||||
let
|
||||
len = length list;
|
||||
drop' = n:
|
||||
if n == -1 || n < count
|
||||
then []
|
||||
else
|
||||
drop' (n - 1) ++ [ (elemAt list n) ];
|
||||
in drop' (len - 1);
|
||||
drop =
|
||||
if builtins ? genList then
|
||||
count: list: sublist count (length list) list
|
||||
else
|
||||
count: list:
|
||||
let
|
||||
len = length list;
|
||||
drop' = n:
|
||||
if n == -1 || n < count
|
||||
then []
|
||||
else
|
||||
drop' (n - 1) ++ [ (elemAt list n) ];
|
||||
in drop' (len - 1);
|
||||
|
||||
|
||||
# Return a list consisting of at most ‘count’ elements of ‘list’,
|
||||
# starting at index ‘start’.
|
||||
sublist = start: count: list:
|
||||
let len = length list; in
|
||||
genList
|
||||
(n: elemAt list (n + start))
|
||||
(if start >= len then 0
|
||||
else if start + count > len then len - start
|
||||
else count);
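Illustrative results (made-up input), including the clamping behaviour when start + count runs past the end of the list:

    sublist 1 3 [ "a" "b" "c" "d" "e" ]   => [ "b" "c" "d" ]
    sublist 4 3 [ "a" "b" "c" "d" "e" ]   => [ "e" ]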
|
||||
|
||||
|
||||
# Return the last element of a list.
|
||||
@ -205,25 +246,13 @@ rec {
|
||||
init = list: assert list != []; take (length list - 1) list;
|
||||
|
||||
|
||||
# Zip two lists together.
|
||||
zipTwoLists = xs: ys:
|
||||
let
|
||||
len1 = length xs;
|
||||
len2 = length ys;
|
||||
len = if len1 < len2 then len1 else len2;
|
||||
zipTwoLists' = n:
|
||||
if n != len then
|
||||
[ { first = elemAt xs n; second = elemAt ys n; } ]
|
||||
++ zipTwoLists' (n + 1)
|
||||
else [];
|
||||
in zipTwoLists' 0;
|
||||
|
||||
|
||||
deepSeqList = xs: y: if any (x: deepSeq x false) xs then y else y;
|
||||
|
||||
|
||||
crossLists = f: foldl (fs: args: concatMap (f: map f args) fs) [f];
|
||||
|
||||
# Remove duplicate elements from the list
|
||||
|
||||
# Remove duplicate elements from the list. O(n^2) complexity.
|
||||
unique = list:
|
||||
if list == [] then
|
||||
[]
|
||||
@ -233,9 +262,12 @@ rec {
|
||||
xs = unique (drop 1 list);
|
||||
in [x] ++ remove x xs;
|
||||
|
||||
# Intersects list 'e' and another list
|
||||
|
||||
# Intersects list 'e' and another list. O(nm) complexity.
|
||||
intersectLists = e: filter (x: elem x e);
|
||||
|
||||
# Subtracts list 'e' from another list
|
||||
|
||||
# Subtracts list 'e' from another list. O(nm) complexity.
|
||||
subtractLists = e: filter (x: !(elem x e));
|
||||
|
||||
}
|
||||
|
@ -1,22 +1,31 @@
|
||||
/* -*- coding: utf-8; -*- */
|
||||
|
||||
{
|
||||
/* Add your name and email address here. Keep the list
|
||||
alphabetically sorted. */
|
||||
/* Add your name and email address here.
|
||||
Keep the list alphabetically sorted.
|
||||
Prefer the same attrname as your github username, please,
|
||||
so it's easy to ping a package @maintainer.
|
||||
*/
|
||||
|
||||
_1126 = "Christian Lask <mail@elfsechsundzwanzig.de>";
|
||||
a1russell = "Adam Russell <adamlr6+pub@gmail.com>";
|
||||
abaldeau = "Andreas Baldeau <andreas@baldeau.net>";
|
||||
abbradar = "Nikolay Amiantov <ab@fmap.me>";
|
||||
adev = "Adrien Devresse <adev@adev.name>";
|
||||
aforemny = "Alexander Foremny <alexanderforemny@googlemail.com>";
|
||||
aflatter = "Alexander Flatter <flatter@fastmail.fm>";
|
||||
aherrmann = "Andreas Herrmann <andreash87@gmx.ch>";
|
||||
ak = "Alexander Kjeldaas <ak@formalprivacy.com>";
|
||||
akaWolf = "Artjom Vejsel <akawolf0@gmail.com>";
|
||||
akc = "Anders Claesson <akc@akc.is>";
|
||||
algorith = "Dries Van Daele <dries_van_daele@telenet.be>";
|
||||
all = "Nix Committers <nix-commits@lists.science.uu.nl>";
|
||||
ambrop72 = "Ambroz Bizjak <ambrop7@gmail.com>";
|
||||
amiddelk = "Arie Middelkoop <amiddelk@gmail.com>";
|
||||
amorsillo = "Andrew Morsillo <andrew.morsillo@gmail.com>";
|
||||
AndersonTorres = "Anderson Torres <torres.anderson.85@gmail.com>";
|
||||
anderspapitto = "Anders Papitto <anderspapitto@gmail.com>";
|
||||
andres = "Andres Loeh <ksnixos@andres-loeh.de>";
|
||||
andrewrk = "Andrew Kelley <superjoe30@gmail.com>";
|
||||
antono = "Antono Vasiljev <self@antono.info>";
|
||||
ardumont = "Antoine R. Dumont <eniotna.t@gmail.com>";
|
||||
aristid = "Aristid Breitkreuz <aristidb@gmail.com>";
|
||||
@ -25,9 +34,12 @@
|
||||
astsmtl = "Alexander Tsamutali <astsmtl@yandex.ru>";
|
||||
aszlig = "aszlig <aszlig@redmoonstudios.org>";
|
||||
auntie = "Jonathan Glines <auntieNeo@gmail.com>";
|
||||
avnik = "Alexander V. Nikolaev <avn@avnik.info>";
|
||||
aycanirican = "Aycan iRiCAN <iricanaycan@gmail.com>";
|
||||
badi = "Badi' Abdul-Wahid <abdulwahidc@gmail.com>";
|
||||
balajisivaraman = "Balaji Sivaraman <sivaraman.balaji@gmail.com>";
|
||||
bbenoist = "Baptist BENOIST <return_0@live.com>";
|
||||
bcarrell = "Brandon Carrell <brandoncarrell@gmail.com>";
|
||||
bcdarwin = "Ben Darwin <bcdarwin@gmail.com>";
|
||||
bdimcheff = "Brandon Dimcheff <brandon@dimcheff.com>";
|
||||
bennofs = "Benno Fünfstück <benno.fuenfstueck@gmail.com>";
|
||||
@ -41,27 +53,36 @@
|
||||
bodil = "Bodil Stokke <nix@bodil.org>";
|
||||
boothead = "Ben Ford <ben@perurbis.com>";
|
||||
bosu = "Boris Sukholitko <boriss@gmail.com>";
|
||||
bramd = "Bram Duvigneau <bram@bramd.nl>";
|
||||
bstrik = "Berno Strik <dutchman55@gmx.com>";
|
||||
c0dehero = "CodeHero <codehero@nerdpol.ch>";
|
||||
calrama = "Moritz Maxeiner <moritz@ucworks.org>";
|
||||
campadrenalin = "Philip Horger <campadrenalin@gmail.com>";
|
||||
cdepillabout = "Dennis Gosnell <cdep.illabout@gmail.com>";
|
||||
cfouche = "Chaddaï Fouché <chaddai.fouche@gmail.com>";
|
||||
chaoflow = "Florian Friesdorf <flo@chaoflow.net>";
|
||||
chattered = "Phil Scott <me@philscotted.com>";
|
||||
christopherpoole = "Christopher Mark Poole <mail@christopherpoole.net>";
|
||||
coconnor = "Corey O'Connor <coreyoconnor@gmail.com>";
|
||||
codyopel = "Cody Opel <codyopel@gmail.com>";
|
||||
copumpkin = "Dan Peebles <pumpkingod@gmail.com>";
|
||||
coroa = "Jonas Hörsch <jonas@chaoflow.net>";
|
||||
couchemar = "Andrey Pavlov <couchemar@yandex.ru>";
|
||||
cstrahan = "Charles Strahan <charles.c.strahan@gmail.com>";
|
||||
cwoac = "Oliver Matthews <oliver@codersoffortune.net>";
|
||||
DamienCassou = "Damien Cassou <damien.cassou@gmail.com>";
|
||||
DamienCassou = "Damien Cassou <damien@cassou.me>";
|
||||
davidak = "David Kleuker <post@davidak.de>";
|
||||
davidrusu = "David Rusu <davidrusu.me@gmail.com>";
|
||||
dbohdan = "Danyil Bohdan <danyil.bohdan@gmail.com>";
|
||||
DerGuteMoritz = "Moritz Heidkamp <moritz@twoticketsplease.de>";
|
||||
deepfire = "Kosyrev Serge <_deepfire@feelingofgreen.ru>";
|
||||
desiderius = "Didier J. Devroye <didier@devroye.name>";
|
||||
devhell = "devhell <\"^\"@regexmail.net>";
|
||||
dezgeg = "Tuomas Tynkkynen <tuomas.tynkkynen@iki.fi>";
|
||||
dfoxfranke = "Daniel Fox Franke <dfoxfranke@gmail.com>";
|
||||
dmalikov = "Dmitry Malikov <malikov.d.y@gmail.com>";
|
||||
doublec = "Chris Double <chris.double@double.co.nz>";
|
||||
ebzzry = "Rommel Martinez <ebzzry@gmail.com>";
|
||||
ederoyd46 = "Matthew Brown <matt@ederoyd.co.uk>";
|
||||
eduarrrd = "Eduard Bachmakov <e.bachmakov@gmail.com>";
|
||||
edwtjo = "Edward Tjörnhammar <ed@cflags.cc>";
|
||||
@ -69,6 +90,9 @@
|
||||
eikek = "Eike Kettner <eike.kettner@posteo.de>";
|
||||
ellis = "Ellis Whitehead <nixos@ellisw.net>";
|
||||
emery = "Emery Hemingway <emery@vfemail.net>";
|
||||
epitrochoid = "Mabry Cervin <mpcervin@uncg.edu>";
|
||||
ericbmerritt = "Eric Merritt <eric@afiniate.com>";
|
||||
erikryb = "Erik Rybakken <erik.rybakken@math.ntnu.no>";
|
||||
ertes = "Ertugrul Söylemez <ertesx@gmx.de>";
|
||||
exlevan = "Alexey Levan <exlevan@gmail.com>";
|
||||
falsifian = "James Cook <james.cook@utoronto.ca>";
|
||||
@ -76,6 +100,8 @@
|
||||
fluffynukeit = "Daniel Austin <dan@fluffynukeit.com>";
|
||||
forkk = "Andrew Okin <forkk@forkk.net>";
|
||||
fpletz = "Franz Pletz <fpletz@fnordicwalking.de>";
|
||||
fridh = "Frederik Rietdijk <fridh@fridh.nl>";
|
||||
fro_ozen = "fro_ozen <fro_ozen@gmx.de>";
|
||||
ftrvxmtrx = "Siarhei Zirukin <ftrvxmtrx@gmail.com>";
|
||||
funfunctor = "Edward O'Callaghan <eocallaghan@alterapraxis.com>";
|
||||
fuuzetsu = "Mateusz Kowalczyk <fuuzetsu@fuuzetsu.co.uk>";
|
||||
@ -84,23 +110,29 @@
|
||||
garrison = "Jim Garrison <jim@garrison.cc>";
|
||||
gavin = "Gavin Rogers <gavin@praxeology.co.uk>";
|
||||
gebner = "Gabriel Ebner <gebner@gebner.org>";
|
||||
gfxmonk = "Tim Cuthbertson <tim@gfxmonk.net>";
|
||||
giogadi = "Luis G. Torres <lgtorres42@gmail.com>";
|
||||
globin = "Robin Gloster <robin@glob.in>";
|
||||
goibhniu = "Cillian de Róiste <cillian.deroiste@gmail.com>";
|
||||
gridaphobe = "Eric Seidel <eric@seidel.io>";
|
||||
guibert = "David Guibert <david.guibert@gmail.com>";
|
||||
havvy = "Ryan Scheel <ryan.havvy@gmail.com>";
|
||||
hbunke = "Hendrik Bunke <bunke.hendrik@gmail.com>";
|
||||
henrytill = "Henry Till <henrytill@gmail.com>";
|
||||
hiberno = "Christian Lask <mail@elfsechsundzwanzig.de>";
|
||||
hinton = "Tom Hinton <t@larkery.com>";
|
||||
hrdinka = "Christoph Hrdinka <c.nix@hrdinka.at>";
|
||||
iand675 = "Ian Duncan <ian@iankduncan.com>";
|
||||
ianwookim = "Ian-Woo Kim <ianwookim@gmail.com>";
|
||||
iElectric = "Domen Kozar <domen@dev.si>";
|
||||
ikervagyok = "Balázs Lengyel <ikervagyok@gmail.com>";
|
||||
iyzsong = "Song Wenwu <iyzsong@gmail.com>";
|
||||
j-keck = "Jürgen Keck <jhyphenkeck@gmail.com>";
|
||||
jagajaga = "Arseniy Seroka <ars.seroka@gmail.com>";
|
||||
jb55 = "William Casarin <bill@casarin.me>";
|
||||
jcumming = "Jack Cummings <jack@mudshark.org>";
|
||||
jefdaj = "Jeffrey David Johnson <jefdaj@gmail.com>";
|
||||
jfb = "James Felix Black <james@yamtime.com>";
|
||||
jgeerds = "Jascha Geerds <jg@ekby.de>";
|
||||
jirkamarsik = "Jirka Marsik <jiri.marsik89@gmail.com>";
|
||||
joachifm = "Joachim Fasting <joachifm@fastmail.fm>";
|
||||
@ -109,33 +141,52 @@
|
||||
joelteon = "Joel Taylor <me@joelt.io>";
|
||||
jpbernardy = "Jean-Philippe Bernardy <jeanphilippe.bernardy@gmail.com>";
|
||||
jwiegley = "John Wiegley <johnw@newartisans.com>";
|
||||
jwilberding = "Jordan Wilberding <jwilberding@afiniate.com>";
|
||||
jzellner = "Jeff Zellner <jeffz@eml.cc>";
|
||||
kamilchm = "Kamil Chmielewski <kamil.chm@gmail.com>";
|
||||
khumba = "Bryan Gardiner <bog@khumba.net>";
|
||||
kkallio = "Karn Kallio <tierpluspluslists@gmail.com>";
|
||||
koral = "Koral <koral@mailoo.org>";
|
||||
kovirobi = "Kovacsics Robert <kovirobi@gmail.com>";
|
||||
kragniz = "Louis Taylor <kragniz@gmail.com>";
|
||||
ktosiek = "Tomasz Kontusz <tomasz.kontusz@gmail.com>";
|
||||
lassulus = "Lassulus <lassulus@gmail.com>";
|
||||
layus = "Guillaume Maudoux <layus.on@gmail.com>";
|
||||
lebastr = "Alexander Lebedev <lebastr@gmail.com>";
|
||||
leonardoce = "Leonardo Cecchi <leonardo.cecchi@gmail.com>";
|
||||
lethalman = "Luca Bruno <lucabru@src.gnome.org>";
|
||||
lhvwb = "Nathaniel Baxter <nathaniel.baxter@gmail.com>";
|
||||
lihop = "Leroy Hopson <nixos@leroy.geek.nz>";
|
||||
linquize = "Linquize <linquize@yahoo.com.hk>";
|
||||
linus = "Linus Arver <linusarver@gmail.com>";
|
||||
lnl7 = "Daiderd Jordan <daiderd@gmail.com>";
|
||||
lovek323 = "Jason O'Conal <jason@oconal.id.au>";
|
||||
lowfatcomputing = "Andreas Wagner <andreas.wagner@lowfatcomputing.org>";
|
||||
lsix = "Lancelot SIX <lsix@lancelotsix.com>";
|
||||
ludo = "Ludovic Courtès <ludo@gnu.org>";
|
||||
madjar = "Georges Dubus <georges.dubus@compiletoi.net>";
|
||||
magnetophon = "Bart Brouns <bart@magnetophon.nl>";
|
||||
mahe = "Matthias Herrmann <matthias.mh.herrmann@gmail.com>";
|
||||
makefu = "Felix Richter <makefu@syntax-fehler.de>";
|
||||
malyn = "Michael Alyn Miller <malyn@strangeGizmo.com>";
|
||||
manveru = "Michael Fellinger <m.fellinger@gmail.com>";
|
||||
marcweber = "Marc Weber <marco-oweber@gmx.de>";
|
||||
maurer = "Matthew Maurer <matthew.r.maurer+nix@gmail.com>";
|
||||
matejc = "Matej Cotman <cotman.matej@gmail.com>";
|
||||
mathnerd314 = "Mathnerd314 <mathnerd314.gph+hs@gmail.com>";
|
||||
matthiasbeyer = "Matthias Beyer <mail@beyermatthias.de>";
|
||||
mbakke = "Marius Bakke <ymse@tuta.io>";
|
||||
meditans = "Carlo Nucera <meditans@gmail.com>";
|
||||
meisternu = "Matt Miemiec <meister@krutt.org>";
|
||||
michelk = "Michel Kuhlmann <michel@kuhlmanns.info>";
|
||||
mirdhyn = "Merlin Gaillard <mirdhyn@gmail.com>";
|
||||
mschristiansen = "Mikkel Christiansen <mikkel@rheosystems.com>";
|
||||
modulistic = "Pablo Costa <modulistic@gmail.com>";
|
||||
mornfall = "Petr Ročkai <me@mornfall.net>";
|
||||
MP2E = "Cray Elliott <MP2E@archlinux.us>";
|
||||
msackman = "Matthew Sackman <matthew@wellquite.org>";
|
||||
mtreskin = "Max Treskin <zerthurd@gmail.com>";
|
||||
mudri = "James Wood <lamudri@gmail.com>";
|
||||
muflax = "Stefan Dorn <mail@muflax.com>";
|
||||
nathan-gs = "Nathan Bijnens <nathan@nathan.gs>";
|
||||
nckx = "Tobias Geerinckx-Rice <tobias.geerinckx.rice@gmail.com>";
|
||||
@ -144,14 +195,18 @@
|
||||
nslqqq = "Nikita Mikhailov <nslqqq@gmail.com>";
|
||||
obadz = "obadz <dav-nixos@odav.org>";
|
||||
ocharles = "Oliver Charles <ollie@ocharles.org.uk>";
|
||||
odi = "Oliver Dunkl <oliver.dunkl@gmail.com>";
|
||||
offline = "Jaka Hudoklin <jakahudoklin@gmail.com>";
|
||||
olcai = "Erik Timan <dev@timan.info>";
|
||||
orbitz = "Malcolm Matalka <mmatalka@gmail.com>";
|
||||
osener = "Ozan Sener <ozan@ozansener.com>";
|
||||
page = "Carles Pagès <page@cubata.homelinux.net>";
|
||||
paholg = "Paho Lurie-Gregg <paho@paholg.com>";
|
||||
pakhfn = "Fedor Pakhomov <pakhfn@gmail.com>";
|
||||
pashev = "Igor Pashev <pashev.igor@gmail.com>";
|
||||
pesterhazy = "Paulus Esterhazy <pesterhazy@gmail.com>";
|
||||
phausmann = "Philipp Hausmann <nix@314.ch>";
|
||||
philandstuff = "Philip Potter <philip.g.potter@gmail.com>";
|
||||
phreedom = "Evgeny Egorochkin <phreedom@yandex.ru>";
|
||||
pierron = "Nicolas B. Pierron <nixos@nbp.name>";
|
||||
piotr = "Piotr Pietraszkiewicz <ppietrasa@gmail.com>";
|
||||
@ -159,7 +214,9 @@
|
||||
pkmx = "Chih-Mao Chen <pkmx.tw@gmail.com>";
|
||||
plcplc = "Philip Lykke Carlsen <plcplc@gmail.com>";
|
||||
pmahoney = "Patrick Mahoney <pat@polycrystal.org>";
|
||||
pmiddend = "Philipp Middendorf <pmidden@secure.mailbox.org>";
|
||||
prikhi = "Pavan Rikhi <pavan.rikhi@gmail.com>";
|
||||
psibi = "Sibi <sibi@psibi.in>";
|
||||
pSub = "Pascal Wittmann <mail@pascal-wittmann.de>";
|
||||
puffnfresh = "Brian McKenna <brian@brianmckenna.org>";
|
||||
qknight = "Joachim Schiele <js@lastlog.de>";
|
||||
@ -169,37 +226,47 @@
|
||||
refnil = "Martin Lavoie <broemartino@gmail.com>";
|
||||
relrod = "Ricky Elrod <ricky@elrod.me>";
|
||||
renzo = "Renzo Carbonara <renzocarbonara@gmail.com>";
|
||||
rick68 = "Wei-Ming Yang <rick68@gmail.com>";
|
||||
rickynils = "Rickard Nilsson <rickynils@gmail.com>";
|
||||
rob = "Rob Vermaas <rob.vermaas@gmail.com>";
|
||||
robberer = "Longrin Wischnewski <robberer@freakmail.de>";
|
||||
robbinch = "Robbin C. <robbinch33@gmail.com>";
|
||||
roconnor = "Russell O'Connor <roconnor@theorem.ca>";
|
||||
roelof = "Roelof Wobben <rwobben@hotmail.com>";
|
||||
romildo = "José Romildo Malaquias <malaquias@gmail.com>";
|
||||
rszibele = "Richard Szibele <richard_szibele@hotmail.com>";
|
||||
rushmorem = "Rushmore Mushambi <rushmore@webenchanter.com>";
|
||||
rycee = "Robert Helgesson <robert@rycee.net>";
|
||||
samuelrivas = "Samuel Rivas <samuelrivas@gmail.com>";
|
||||
sander = "Sander van der Burg <s.vanderburg@tudelft.nl>";
|
||||
schmitthenner = "Fabian Schmitthenner <development@schmitthenner.eu>";
|
||||
schristo = "Scott Christopher <schristopher@konputa.com>";
|
||||
sepi = "Raffael Mancini <raffael@mancini.lu>";
|
||||
sheganinans = "Aistis Raulinaitis <sheganinans@gmail.com>";
|
||||
shell = "Shell Turner <cam.turn@gmail.com>";
|
||||
shlevy = "Shea Levy <shea@shealevy.com>";
|
||||
simons = "Peter Simons <simons@cryp.to>";
|
||||
simonvandel = "Simon Vandel Sillesen <simon.vandel@gmail.com>";
|
||||
sjagoe = "Simon Jagoe <simon@simonjagoe.com>";
|
||||
sjmackenzie = "Stewart Mackenzie <setori88@gmail.com>";
|
||||
skeidel = "Sven Keidel <svenkeidel@gmail.com>";
|
||||
smironov = "Sergey Mironov <ierton@gmail.com>";
|
||||
spacefrogg = "Michael Raitza <spacefrogg-nixos@meterriblecrew.net>";
|
||||
sprock = "Roger Mason <rmason@mun.ca>";
|
||||
spwhitt = "Spencer Whitt <sw@swhitt.me>";
|
||||
stephenmw = "Stephen Weinberg <stephen@q5comm.com>";
|
||||
szczyp = "Szczyp <qb@szczyp.com>";
|
||||
sztupi = "Attila Sztupak <attila.sztupak@gmail.com>";
|
||||
tailhook = "Paul Colomiets <paul@colomiets.name>";
|
||||
taktoa = "Remy Goldschmidt <taktoa@gmail.com>";
|
||||
telotortium = "Robert Irelan <rirelan@gmail.com>";
|
||||
thammers = "Tobias Hammerschmidt <jawr@gmx.de>";
|
||||
the-kenny = "Moritz Ulrich <moritz@tarn-vedra.de>";
|
||||
theuni = "Christian Theune <ct@flyingcircus.io>";
|
||||
thoughtpolice = "Austin Seipp <aseipp@pobox.com>";
|
||||
titanous = "Jonathan Rudenberg <jonathan@titanous.com>";
|
||||
tomberek = "Thomas Bereknyei <tomberek@gmail.com>";
|
||||
travisbhartwell = "Travis B. Hartwell <nafai@travishartwell.net>";
|
||||
trino = "Hubert Mühlhans <muehlhans.hubert@ekodia.de>";
|
||||
tstrobel = "Thomas Strobel <ts468@cam.ac.uk>";
|
||||
ttuegel = "Thomas Tuegel <ttuegel@gmail.com>";
|
||||
@ -218,6 +285,7 @@
|
||||
winden = "Antonio Vargas Gonzalez <windenntw@gmail.com>";
|
||||
wizeman = "Ricardo M. Correia <rcorreia@wizy.org>";
|
||||
wjlroe = "William Roe <willroe@gmail.com>";
|
||||
womfoo = "Kranium Gikos Mendoza <kranium@gikos.net>";
|
||||
wkennington = "William A. Kennington III <william@wkennington.com>";
|
||||
wmertens = "Wout Mertens <Wout.Mertens@gmail.com>";
|
||||
wscott = "Wayne Scott <wsc9tt@gmail.com>";
|
||||
|
426
lib/misc.nix
@ -1,426 +0,0 @@
|
||||
let lib = import ./default.nix;
|
||||
inherit (builtins) isFunction head tail isList isAttrs isInt attrNames;
|
||||
|
||||
in
|
||||
|
||||
with import ./lists.nix;
|
||||
with import ./attrsets.nix;
|
||||
with import ./strings.nix;
|
||||
|
||||
rec {
|
||||
|
||||
# returns default if env var is not set
|
||||
maybeEnv = name: default:
|
||||
let value = builtins.getEnv name; in
|
||||
if value == "" then default else value;
|
||||
|
||||
defaultMergeArg = x : y: if builtins.isAttrs y then
|
||||
y
|
||||
else
|
||||
(y x);
|
||||
defaultMerge = x: y: x // (defaultMergeArg x y);
|
||||
foldArgs = merger: f: init: x:
|
||||
let arg=(merger init (defaultMergeArg init x));
|
||||
# now add the function with composed args already applied to the final attrs
|
||||
base = (setAttrMerge "passthru" {} (f arg)
|
||||
( z : z // rec {
|
||||
function = foldArgs merger f arg;
|
||||
args = (lib.attrByPath ["passthru" "args"] {} z) // x;
|
||||
} ));
|
||||
withStdOverrides = base // {
|
||||
override = base.passthru.function;
|
||||
deepOverride = a : (base.passthru.function ((lib.mapAttrs (lib.deepOverrider a) base.passthru.args) // a));
|
||||
} ;
|
||||
in
|
||||
withStdOverrides;
|
||||
|
||||
|
||||
# predecessors: proposed replacement for applyAndFun (which has a bug because it merges twice)
|
||||
# the naming "overridableDelayableArgs" tries to express that you can
|
||||
# - override attr values which have been supplied earlier
|
||||
# - use attr values before they have been supplied by accessing the fix point
|
||||
# name "fixed"
|
||||
# f: the (delayed overridden) arguments are applied to this
|
||||
#
|
||||
# initial: initial attrs arguments and settings. see defaultOverridableDelayableArgs
|
||||
#
|
||||
# returns: f applied to the arguments // special attributes attrs
|
||||
# a) merge: merge applied args with new args. Whether an argument is overridden depends on the merge settings
|
||||
# b) replace: this lets you replace and remove names no matter which merge function has been set
|
||||
#
|
||||
# examples: see test cases "res" below;
|
||||
overridableDelayableArgs =
|
||||
f : # the function applied to the arguments
|
||||
initial : # you pass attrs, the functions below are passing a function taking the fix argument
|
||||
let
|
||||
takeFixed = if isFunction initial then initial else (fixed : initial); # transform initial to an expression always taking the fixed argument
|
||||
tidy = args :
|
||||
let # apply all functions given in "applyPreTidy" in sequence
|
||||
applyPreTidyFun = fold ( n : a : x : n ( a x ) ) lib.id (maybeAttr "applyPreTidy" [] args);
|
||||
in removeAttrs (applyPreTidyFun args) ( ["applyPreTidy"] ++ (maybeAttr "removeAttrs" [] args) ); # tidy up args before applying them
|
||||
fun = n : x :
|
||||
let newArgs = fixed :
|
||||
let args = takeFixed fixed;
|
||||
mergeFun = args.${n};
|
||||
in if isAttrs x then (mergeFun args x)
|
||||
else assert isFunction x;
|
||||
mergeFun args (x ( args // { inherit fixed; }));
|
||||
in overridableDelayableArgs f newArgs;
|
||||
in
|
||||
(f (tidy (lib.fix takeFixed))) // {
|
||||
merge = fun "mergeFun";
|
||||
replace = fun "keepFun";
|
||||
};
|
||||
defaultOverridableDelayableArgs = f :
|
||||
let defaults = {
|
||||
mergeFun = mergeAttrByFunc; # default merge function. merge strategy (concatenate lists, strings) is given by mergeAttrBy
|
||||
keepFun = a : b : { inherit (a) removeAttrs mergeFun keepFun mergeAttrBy; } // b; # even when using replace preserve these values
|
||||
applyPreTidy = []; # list of functions applied to args before args are tidied up (usage case : prepareDerivationArgs)
|
||||
mergeAttrBy = mergeAttrBy // {
|
||||
applyPreTidy = a : b : a ++ b;
|
||||
removeAttrs = a : b: a ++ b;
|
||||
};
|
||||
removeAttrs = ["mergeFun" "keepFun" "mergeAttrBy" "removeAttrs" "fixed" ]; # before applying the arguments to the function make sure these names are gone
|
||||
};
|
||||
in (overridableDelayableArgs f defaults).merge;
|
||||
|
||||
|
||||
|
||||
# rec { # an example of how composedArgsAndFun can be used
|
||||
# a = composedArgsAndFun (x : x) { a = ["2"]; meta = { d = "bar";}; };
|
||||
# # meta.d will be lost ! It's your task to preserve it (eg using a merge function)
|
||||
# b = a.passthru.function { a = [ "3" ]; meta = { d2 = "bar2";}; };
|
||||
# # instead of passing/ overriding values you can use a merge function:
|
||||
# c = b.passthru.function ( x: { a = x.a ++ ["4"]; }); # consider using (maybeAttr "a" [] x)
|
||||
# }
|
||||
# result:
|
||||
# {
|
||||
# a = { a = ["2"]; meta = { d = "bar"; }; passthru = { function = .. }; };
|
||||
# b = { a = ["3"]; meta = { d2 = "bar2"; }; passthru = { function = .. }; };
|
||||
# c = { a = ["3" "4"]; meta = { d2 = "bar2"; }; passthru = { function = .. }; };
|
||||
# # c2 is equal to c
|
||||
# }
|
||||
composedArgsAndFun = f: foldArgs defaultMerge f {};
|
||||
|
||||
|
||||
# shortcut for attrByPath ["name"] default attrs
|
||||
maybeAttrNullable = maybeAttr;
|
||||
|
||||
# shortcut for attrByPath ["name"] default attrs
|
||||
maybeAttr = name: default: attrs: attrs.${name} or default;
|
||||
|
||||
|
||||
# Return the second argument if the first one is true or the empty version
|
||||
# of the second argument.
|
||||
ifEnable = cond: val:
|
||||
if cond then val
|
||||
else if builtins.isList val then []
|
||||
else if builtins.isAttrs val then {}
|
||||
# else if builtins.isString val then ""
|
||||
else if val == true || val == false then false
|
||||
else null;
|
||||
|
||||
|
||||
# Return true only if there is an attribute and it is true.
|
||||
checkFlag = attrSet: name:
|
||||
if name == "true" then true else
|
||||
if name == "false" then false else
|
||||
if (elem name (attrByPath ["flags"] [] attrSet)) then true else
|
||||
attrByPath [name] false attrSet ;
|
||||
|
||||
|
||||
# Input : attrSet, [ [name default] ... ], name
|
||||
# Output : its value or default.
|
||||
getValue = attrSet: argList: name:
|
||||
( attrByPath [name] (if checkFlag attrSet name then true else
|
||||
if argList == [] then null else
|
||||
let x = builtins.head argList; in
|
||||
if (head x) == name then
|
||||
(head (tail x))
|
||||
else (getValue attrSet
|
||||
(tail argList) name)) attrSet );
|
||||
|
||||
|
||||
# Input : attrSet, [[name default] ...], [ [flagname reqs..] ... ]
|
||||
# Output : are reqs satisfied? It's asserted.
|
||||
checkReqs = attrSet : argList : condList :
|
||||
(
|
||||
fold lib.and true
|
||||
(map (x: let name = (head x) ; in
|
||||
|
||||
((checkFlag attrSet name) ->
|
||||
(fold lib.and true
|
||||
(map (y: let val=(getValue attrSet argList y); in
|
||||
(val!=null) && (val!=false))
|
||||
(tail x))))) condList)) ;
|
||||
|
||||
|
||||
# This function has O(n^2) performance.
|
||||
uniqList = {inputList, acc ? []} :
|
||||
let go = xs : acc :
|
||||
if xs == []
|
||||
then []
|
||||
else let x = head xs;
|
||||
y = if elem x acc then [] else [x];
|
||||
in y ++ go (tail xs) (y ++ acc);
|
||||
in go inputList acc;
|
||||
|
||||
uniqListExt = {inputList, outputList ? [],
|
||||
getter ? (x : x), compare ? (x: y: x==y)}:
|
||||
if inputList == [] then outputList else
|
||||
let x=head inputList;
|
||||
isX = y: (compare (getter y) (getter x));
|
||||
newOutputList = outputList ++
|
||||
(if any isX outputList then [] else [x]);
|
||||
in uniqListExt {outputList=newOutputList;
|
||||
inputList = (tail inputList);
|
||||
inherit getter compare;
|
||||
};
|
||||
|
||||
|
||||
|
||||
condConcat = name: list: checker:
|
||||
if list == [] then name else
|
||||
if checker (head list) then
|
||||
condConcat
|
||||
(name + (head (tail list)))
|
||||
(tail (tail list))
|
||||
checker
|
||||
else condConcat
|
||||
name (tail (tail list)) checker;
|
||||
|
||||
lazyGenericClosure = {startSet, operator}:
|
||||
let
|
||||
work = list: doneKeys: result:
|
||||
if list == [] then
|
||||
result
|
||||
else
|
||||
let x = head list; key = x.key; in
|
||||
if elem key doneKeys then
|
||||
work (tail list) doneKeys result
|
||||
else
|
||||
work (tail list ++ operator x) ([key] ++ doneKeys) ([x] ++ result);
|
||||
in
|
||||
work startSet [] [];
|
||||
|
||||
genericClosure = builtins.genericClosure or lazyGenericClosure;
|
||||
|
||||
innerModifySumArgs = f: x: a: b: if b == null then (f a b) // x else
|
||||
innerModifySumArgs f x (a // b);
|
||||
modifySumArgs = f: x: innerModifySumArgs f x {};
|
||||
|
||||
|
||||
innerClosePropagation = acc : xs :
|
||||
if xs == []
|
||||
then acc
|
||||
else let y = head xs;
|
||||
ys = tail xs;
|
||||
in if ! isAttrs y
|
||||
then innerClosePropagation acc ys
|
||||
else let acc' = [y] ++ acc;
|
||||
in innerClosePropagation
|
||||
acc'
|
||||
(uniqList { inputList = (maybeAttrNullable "propagatedBuildInputs" [] y)
|
||||
++ (maybeAttrNullable "propagatedNativeBuildInputs" [] y)
|
||||
++ ys;
|
||||
acc = acc';
|
||||
}
|
||||
);
|
||||
|
||||
closePropagation = list: (uniqList {inputList = (innerClosePropagation [] list);});
|
||||
|
||||
# calls a function (f attr value ) for each record item. returns a list
|
||||
mapAttrsFlatten = f : r : map (attr: f attr r.${attr}) (attrNames r);
|
||||
|
||||
# attribute set containing one attribute
|
||||
nvs = name : value : listToAttrs [ (nameValuePair name value) ];
|
||||
# adds / replaces an attribute of an attribute set
|
||||
setAttr = set : name : v : set // (nvs name v);
|
||||
|
||||
# setAttrMerge (similar to mergeAttrsWithFunc but only merges the values of a particular name)
|
||||
# setAttrMerge "a" [] { a = [2];} (x : x ++ [3]) -> { a = [2 3]; }
|
||||
# setAttrMerge "a" [] { } (x : x ++ [3]) -> { a = [ 3]; }
|
||||
setAttrMerge = name : default : attrs : f :
|
||||
setAttr attrs name (f (maybeAttr name default attrs));
|
||||
|
||||
# Using f = a: b: b the result is similar to //.
|
||||
# merge attributes with custom function handling the case that the attribute
|
||||
# exists in both sets
|
||||
mergeAttrsWithFunc = f : set1 : set2 :
|
||||
fold (n: set : if set ? ${n}
|
||||
then setAttr set n (f set.${n} set2.${n})
|
||||
else set )
|
||||
(set2 // set1) (attrNames set2);
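A minimal sketch; the merge function only matters for names present in both sets:

```
mergeAttrsWithFunc (a: b: a ++ b) { x = [ 1 ]; } { x = [ 2 ]; }
# -> { x = [ 1 2 ]; }
```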
|
||||
|
||||
# Merges two attribute sets, concatenating the values of attributes with the same name,
|
||||
# e.g. { a = 7; } { a = [ 2 3 ]; } becomes { a = [ 7 2 3 ]; }
|
||||
mergeAttrsConcatenateValues = mergeAttrsWithFunc ( a : b : (toList a) ++ (toList b) );
|
||||
|
||||
# Merges attributes using //; if a name exists in both attribute sets
|
||||
# an error is thrown unless it's listed in mergeLists,
|
||||
# so you can use "mergeAttrsNoOverride {} { buildInputs = [a]; } { buildInputs = [b]; }" to get
|
||||
# { buildInputs = [b a]; } (the lists are concatenated)
|
||||
# Merging buildPhase doesn't really make sense; the cases where appending/prefixing would fit your needs are rare.
|
||||
# In those cases the first buildPhase overrides the second one.
|
||||
# ! deprecated, use mergeAttrByFunc instead
|
||||
mergeAttrsNoOverride = { mergeLists ? ["buildInputs" "propagatedBuildInputs"],
|
||||
overrideSnd ? [ "buildPhase" ]
|
||||
} : attrs1 : attrs2 :
|
||||
fold (n: set :
|
||||
setAttr set n ( if set ? ${n}
|
||||
then # merge
|
||||
if elem n mergeLists # attribute contains list, merge them by concatenating
|
||||
then attrs2.${n} ++ attrs1.${n}
|
||||
else if elem n overrideSnd
|
||||
then attrs1.${n}
|
||||
else throw "error mergeAttrsNoOverride, attribute ${n} given in both attributes - no merge func defined"
|
||||
else attrs2.${n} # add attribute not existing in attr1
|
||||
)) attrs1 (attrNames attrs2);
|
||||
|
||||
|
||||
# example usage:
|
||||
# mergeAttrByFunc {
|
||||
# inherit mergeAttrBy; # defined below
|
||||
# buildInputs = [ a b ];
|
||||
# } {
|
||||
# buildInputs = [ c d ];
|
||||
# };
|
||||
# will result in
|
||||
# { mergeAttrBy = { ... }; buildInputs = [ a b c d ]; }
|
||||
# is used by prepareDerivationArgs, defaultOverridableDelayableArgs and can be used when composing using
|
||||
# foldArgs, composedArgsAndFun or applyAndFun. Example: composableDerivation in all-packages.nix
|
||||
mergeAttrByFunc = x : y :
|
||||
let
|
||||
mergeAttrBy2 = { mergeAttrBy=lib.mergeAttrs; }
|
||||
// (maybeAttr "mergeAttrBy" {} x)
|
||||
// (maybeAttr "mergeAttrBy" {} y); in
|
||||
fold lib.mergeAttrs {} [
|
||||
x y
|
||||
(mapAttrs ( a : v : # merge special names using given functions
|
||||
if x ? ${a}
|
||||
then if y ? ${a}
|
||||
then v x.${a} y.${a} # both have attr, use merge func
|
||||
else x.${a} # only x has attr
|
||||
else y.${a} # only y has attr)
|
||||
) (removeAttrs mergeAttrBy2
|
||||
# don't merge attrs which are neither in x nor y
|
||||
(filter (a: ! x ? ${a} && ! y ? ${a})
|
||||
(attrNames mergeAttrBy2))
|
||||
)
|
||||
)
|
||||
];
|
||||
mergeAttrsByFuncDefaults = foldl mergeAttrByFunc { inherit mergeAttrBy; };
|
||||
mergeAttrsByFuncDefaultsClean = list: removeAttrs (mergeAttrsByFuncDefaults list) ["mergeAttrBy"];
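A hedged example of the default merge behaviour (`pkgA` and `pkgB` stand for arbitrary derivations): buildInputs are concatenated and preConfigure snippets are joined with a newline; the Clean variant additionally drops the mergeAttrBy bookkeeping attribute.

```
mergeAttrsByFuncDefaultsClean [
  { buildInputs = [ pkgA ]; preConfigure = "echo one"; }   # pkgA, pkgB: placeholder derivations
  { buildInputs = [ pkgB ]; preConfigure = "echo two"; }
]
# -> { buildInputs = [ pkgA pkgB ]; preConfigure = "echo one\necho two"; }
```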
|
||||
|
||||
# merge attrs based on version key into mkDerivation args, see mergeAttrBy to learn about smart merge defaults
|
||||
#
|
||||
# This function is best explained by an example:
|
||||
#
|
||||
# {version ? "2.x"} :
|
||||
#
|
||||
# mkDerivation (mergeAttrsByVersion "package-name" version
|
||||
# { # version specific settings
|
||||
# "git" = { src = ..; preConfigre = "autogen.sh"; buildInputs = [automake autoconf libtool]; };
|
||||
# "2.x" = { src = ..; };
|
||||
# }
|
||||
# { // shared settings
|
||||
# buildInputs = [ common build inputs ];
|
||||
# meta = { .. }
|
||||
# }
|
||||
# )
|
||||
#
|
||||
# Please note that e.g. Eelco Dolstra usually prefers having one file for
|
||||
# each version. On the other hand there are valuable additional design goals
|
||||
# - readability
|
||||
# - do it once only
|
||||
# - try to avoid duplication
|
||||
#
|
||||
# Marc Weber and Michael Raskin sometimes prefer keeping older
|
||||
# versions around for testing and regression tests - as long as it's cheap to
|
||||
# do so.
|
||||
#
|
||||
# Very often it just happens that the "shared" code is the bigger part.
|
||||
# Then using this function might be appropriate.
|
||||
#
|
||||
# Be aware that it's easy to cause recompilations in all versions when using
|
||||
# this function. Also, if derivations get too complex, splitting them into multiple
|
||||
# files is the way to go.
|
||||
#
|
||||
# See misc.nix -> versionedDerivation
|
||||
# discussion: nixpkgs: pull/310
|
||||
mergeAttrsByVersion = name: version: attrsByVersion: base:
|
||||
mergeAttrsByFuncDefaultsClean [ { name = "${name}-${version}"; } base (maybeAttr version (throw "bad version ${version} for ${name}") attrsByVersion)];
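A condensed sketch of the example above (`srcFor29`, `srcFor210` and `someDep` are placeholders):

```
mergeAttrsByVersion "hello" "2.10"
  { "2.9"  = { src = srcFor29;  };   # version-specific settings
    "2.10" = { src = srcFor210; };
  }
  { buildInputs = [ someDep ]; }     # shared settings
# -> { name = "hello-2.10"; src = srcFor210; buildInputs = [ someDep ]; }
```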
|
||||
|
||||
# sane defaults (same name as attr name so that inherit can be used)
|
||||
mergeAttrBy = # { buildInputs = concatList; [...]; passthru = mergeAttr; [..]; }
|
||||
listToAttrs (map (n : nameValuePair n lib.concat)
|
||||
[ "nativeBuildInputs" "buildInputs" "propagatedBuildInputs" "configureFlags" "prePhases" "postAll" "patches" ])
|
||||
// listToAttrs (map (n : nameValuePair n lib.mergeAttrs) [ "passthru" "meta" "cfg" "flags" ])
|
||||
// listToAttrs (map (n : nameValuePair n (a: b: "${a}\n${b}") ) [ "preConfigure" "postInstall" ])
|
||||
;
|
||||
|
||||
# prepareDerivationArgs tries to make writing configurable derivations easier
|
||||
# example:
|
||||
# prepareDerivationArgs {
|
||||
# mergeAttrBy = {
|
||||
# myScript = x : y : x ++ "\n" ++ y;
|
||||
# };
|
||||
# cfg = {
|
||||
# readlineSupport = true;
|
||||
# };
|
||||
# flags = {
|
||||
# readline = {
|
||||
# set = {
|
||||
# configureFlags = [ "--with-compiler=${compiler}" ];
|
||||
# buildInputs = [ compiler ];
|
||||
# pass = { inherit compiler; READLINE=1; };
|
||||
# assertion = compiler.dllSupport;
|
||||
# myScript = "foo";
|
||||
# };
|
||||
# unset = { configureFlags = ["--without-compiler"]; };
|
||||
# };
|
||||
# };
|
||||
# src = ...
|
||||
# buildPhase = '' ... '';
|
||||
# name = ...
|
||||
# myScript = "bar";
|
||||
# };
|
||||
# If you don't need "unset" you can omit the surrounding set = { ... } attr.
|
||||
# All attrs except flags, cfg and mergeAttrBy will be merged with the
|
||||
# additional data from flags depending on config settings
|
||||
# It's used in composableDerivation in all-packages.nix. It's also used
|
||||
# heavily in the new python and libs implementation
|
||||
#
|
||||
# should we check for misspelled cfg options?
|
||||
# TODO use args.mergeFun here as well?
|
||||
prepareDerivationArgs = args:
|
||||
let args2 = { cfg = {}; flags = {}; } // args;
|
||||
flagName = name : "${name}Support";
|
||||
cfgWithDefaults = (listToAttrs (map (n : nameValuePair (flagName n) false) (attrNames args2.flags)))
|
||||
// args2.cfg;
|
||||
opts = attrValues (mapAttrs (a : v :
|
||||
let v2 = if v ? set || v ? unset then v else { set = v; };
|
||||
n = if cfgWithDefaults.${flagName a} then "set" else "unset";
|
||||
attr = maybeAttr n {} v2; in
|
||||
if (maybeAttr "assertion" true attr)
|
||||
then attr
|
||||
else throw "assertion of flag ${a} of derivation ${args.name} failed"
|
||||
) args2.flags );
|
||||
in removeAttrs
|
||||
(mergeAttrsByFuncDefaults ([args] ++ opts ++ [{ passthru = cfgWithDefaults; }]))
|
||||
["flags" "cfg" "mergeAttrBy" ];
|
||||
|
||||
|
||||
nixType = x:
|
||||
if isAttrs x then
|
||||
if x ? outPath then "derivation"
|
||||
else "aattrs"
|
||||
else if isFunction x then "function"
|
||||
else if isList x then "list"
|
||||
else if x == true then "bool"
|
||||
else if x == false then "bool"
|
||||
else if x == null then "null"
|
||||
else if isInt x then "int"
|
||||
else "string";
|
||||
|
||||
}
|
139
lib/modules.nix
@ -17,6 +17,10 @@ rec {
|
||||
evalModules) and the less declarative the module set is. */
|
||||
evalModules = { modules
|
||||
, prefix ? []
|
||||
, # This should only be used for special arguments that need to be evaluated
|
||||
# when resolving module structure (like in imports). For everything else,
|
||||
# there's _module.args.
|
||||
specialArgs ? {}
|
||||
, # This will be removed in the future; prefer the _module.args option instead.
|
||||
args ? {}
|
||||
, # This will be removed in the future; prefer the _module.check option instead.
|
||||
@ -39,7 +43,7 @@ rec {
|
||||
};
|
||||
|
||||
_module.check = mkOption {
|
||||
type = types.uniq types.bool;
|
||||
type = types.bool;
|
||||
internal = true;
|
||||
default = check;
|
||||
description = "Whether to check whether all option definitions have matching declarations.";
|
||||
@ -51,7 +55,7 @@ rec {
|
||||
};
|
||||
};
|
||||
|
||||
closed = closeModules (modules ++ [ internalModule ]) { inherit config options; lib = import ./.; };
|
||||
closed = closeModules (modules ++ [ internalModule ]) ({ inherit config options; lib = import ./.; } // specialArgs);
|
||||
|
||||
# Note: the list of modules is reversed to maintain backward
|
||||
# compatibility with the old module system. Not sure if this is
|
||||
@ -72,8 +76,8 @@ rec {
|
||||
else yieldConfig (prefix ++ [n]) v) set) ["_definedNames"];
|
||||
in
|
||||
if options._module.check.value && set ? _definedNames then
|
||||
fold (m: res:
|
||||
fold (name: res:
|
||||
foldl' (res: m:
|
||||
foldl' (res: name:
|
||||
if set ? ${name} then res else throw "The option `${showOption (prefix ++ [name])}' defined in `${m.file}' does not exist.")
|
||||
res m.names)
|
||||
res set._definedNames
|
||||
@ -87,9 +91,11 @@ rec {
|
||||
let
|
||||
toClosureList = file: parentKey: imap (n: x:
|
||||
if isAttrs x || isFunction x then
|
||||
unifyModuleSyntax file "${parentKey}:anon-${toString n}" (unpackSubmodule applyIfFunction x args)
|
||||
let key = "${parentKey}:anon-${toString n}"; in
|
||||
unifyModuleSyntax file key (unpackSubmodule (applyIfFunction key) x args)
|
||||
else
|
||||
unifyModuleSyntax (toString x) (toString x) (applyIfFunction (import x) args));
|
||||
let file = toString x; key = toString x; in
|
||||
unifyModuleSyntax file key (applyIfFunction key (import x) args));
|
||||
in
|
||||
builtins.genericClosure {
|
||||
startSet = toClosureList unknownModule "" modules;
|
||||
@ -118,7 +124,7 @@ rec {
|
||||
config = removeAttrs m ["key" "_file" "require" "imports"];
|
||||
};
|
||||
|
||||
applyIfFunction = f: arg@{ config, options, lib }: if isFunction f then
|
||||
applyIfFunction = key: f: args@{ config, options, lib, ... }: if isFunction f then
|
||||
let
|
||||
# Module arguments are resolved in a strict manner when attribute set
|
||||
# deconstruction is used. As the arguments are now defined with the
|
||||
@ -133,11 +139,18 @@ rec {
|
||||
# not their values. The values are forwarding the result of the
|
||||
# evaluation of the option.
|
||||
requiredArgs = builtins.attrNames (builtins.functionArgs f);
|
||||
context = name: ''while evaluating the module argument `${name}' in "${key}":'';
|
||||
extraArgs = builtins.listToAttrs (map (name: {
|
||||
inherit name;
|
||||
value = config._module.args.${name};
|
||||
value = addErrorContext (context name)
|
||||
(args.${name} or config._module.args.${name});
|
||||
}) requiredArgs);
|
||||
in f (extraArgs // arg)
|
||||
|
||||
# Note: we append in the opposite order such that we can add an error
|
||||
# context on the explicitly passed arguments of "args" too. This update
|
||||
# operator is used to make the "args@{ ... }: with args.lib;" notation
|
||||
# work.
|
||||
in f (args // extraArgs)
|
||||
else
|
||||
f;
|
||||
|
||||
@ -169,18 +182,18 @@ rec {
|
||||
let
|
||||
loc = prefix ++ [name];
|
||||
# Get all submodules that declare ‘name’.
|
||||
decls = concatLists (map (m:
|
||||
decls = concatMap (m:
|
||||
if m.options ? ${name}
|
||||
then [ { inherit (m) file; options = m.options.${name}; } ]
|
||||
else []
|
||||
) options);
|
||||
) options;
|
||||
# Get all submodules that define ‘name’.
|
||||
defns = concatLists (map (m:
|
||||
defns = concatMap (m:
|
||||
if m.config ? ${name}
|
||||
then map (config: { inherit (m) file; inherit config; })
|
||||
(pushDownProperties m.config.${name})
|
||||
else []
|
||||
) configs);
|
||||
) configs;
|
||||
nrOptions = count (m: isOption m.options) decls;
|
||||
# Extract the definitions for this loc
|
||||
defns' = map (m: { inherit (m) file; value = m.config.${name}; })
|
||||
@ -212,7 +225,7 @@ rec {
|
||||
'opts' is a list of modules. Each module has an options attribute which
|
||||
correspond to the definition of 'loc' in 'opt.file'. */
|
||||
mergeOptionDecls = loc: opts:
|
||||
fold (opt: res:
|
||||
foldl' (res: opt:
|
||||
if opt.options ? default && res ? default ||
|
||||
opt.options ? example && res ? example ||
|
||||
opt.options ? description && res ? description ||
|
||||
@ -238,7 +251,7 @@ rec {
|
||||
else if opt.options ? options then map (coerceOption opt.file) options' ++ res.options
|
||||
else res.options;
|
||||
in opt.options // res //
|
||||
{ declarations = [opt.file] ++ res.declarations;
|
||||
{ declarations = res.declarations ++ [opt.file];
|
||||
options = submodules;
|
||||
}
|
||||
) { inherit loc; declarations = []; options = []; } opts;
|
||||
@ -248,58 +261,67 @@ rec {
|
||||
evalOptionValue = loc: opt: defs:
|
||||
let
|
||||
# Add in the default value for this option, if any.
|
||||
defs' = (optional (opt ? default)
|
||||
{ file = head opt.declarations; value = mkOptionDefault opt.default; }) ++ defs;
|
||||
defs' =
|
||||
(optional (opt ? default)
|
||||
{ file = head opt.declarations; value = mkOptionDefault opt.default; }) ++ defs;
|
||||
|
||||
# Handle properties, check types, and merge everything together
|
||||
inherit (mergeDefinitions loc opt.type defs') isDefined defsFinal mergedValue;
|
||||
files = map (def: def.file) defsFinal;
|
||||
merged =
|
||||
if isDefined then mergedValue
|
||||
else throw "The option `${showOption loc}' is used but not defined.";
|
||||
# Handle properties, check types, and merge everything together.
|
||||
res =
|
||||
if opt.readOnly or false && length defs' > 1 then
|
||||
throw "The option `${showOption loc}' is read-only, but it's set multiple times."
|
||||
else
|
||||
mergeDefinitions loc opt.type defs';
|
||||
|
||||
# Check whether the option is defined, and apply the ‘apply’
|
||||
# function to the merged value. This allows options to yield a
|
||||
# value computed from the definitions.
|
||||
value =
|
||||
if !res.isDefined then
|
||||
throw "The option `${showOption loc}' is used but not defined."
|
||||
else if opt ? apply then
|
||||
opt.apply res.mergedValue
|
||||
else
|
||||
res.mergedValue;
|
||||
|
||||
# Finally, apply the ‘apply’ function to the merged
|
||||
# value. This allows options to yield a value computed
|
||||
# from the definitions.
|
||||
value = (opt.apply or id) merged;
|
||||
in opt //
|
||||
{ value = addErrorContext "while evaluating the option `${showOption loc}':" value;
|
||||
definitions = map (def: def.value) defsFinal;
|
||||
inherit isDefined files;
|
||||
definitions = map (def: def.value) res.defsFinal;
|
||||
files = map (def: def.file) res.defsFinal;
|
||||
inherit (res) isDefined;
|
||||
};
|
||||
|
||||
# Merge definitions of a value of a given type
|
||||
mergeDefinitions = loc: type: defs: rec {
|
||||
defsFinal =
|
||||
let
|
||||
# Process mkMerge and mkIf properties
|
||||
processIfAndMerge = defs: concatMap (m:
|
||||
map (value: { inherit (m) file; inherit value; }) (dischargeProperties m.value)
|
||||
) defs;
|
||||
# Merge definitions of a value of a given type.
|
||||
mergeDefinitions = loc: type: defs: rec {
|
||||
defsFinal =
|
||||
let
|
||||
# Process mkMerge and mkIf properties.
|
||||
defs' = concatMap (m:
|
||||
map (value: { inherit (m) file; inherit value; }) (dischargeProperties m.value)
|
||||
) defs;
|
||||
|
||||
# Process mkOverride properties
|
||||
processOverride = defs: filterOverrides defs;
|
||||
# Process mkOverride properties.
|
||||
defs'' = filterOverrides defs';
|
||||
|
||||
# Sort mkOrder properties
|
||||
processOrder = defs:
|
||||
# Avoid sorting if we don't have to.
|
||||
if any (def: def.value._type or "" == "order") defs
|
||||
then sortProperties defs
|
||||
else defs;
|
||||
in
|
||||
processOrder (processOverride (processIfAndMerge defs));
|
||||
# Sort mkOrder properties.
|
||||
defs''' =
|
||||
# Avoid sorting if we don't have to.
|
||||
if any (def: def.value._type or "" == "order") defs''
|
||||
then sortProperties defs''
|
||||
else defs'';
|
||||
in defs''';
|
||||
|
||||
# Type-check the remaining definitions, and merge them
|
||||
mergedValue = fold (def: res:
|
||||
if type.check def.value then res
|
||||
else throw "The option value `${showOption loc}' in `${def.file}' is not a ${type.name}.")
|
||||
(type.merge loc defsFinal) defsFinal;
|
||||
# Type-check the remaining definitions, and merge them.
|
||||
mergedValue = foldl' (res: def:
|
||||
if type.check def.value then res
|
||||
else throw "The option value `${showOption loc}' in `${def.file}' is not a ${type.name}.")
|
||||
(type.merge loc defsFinal) defsFinal;
|
||||
|
||||
isDefined = defsFinal != [];
|
||||
optionalValue =
|
||||
if isDefined then { value = mergedValue; }
|
||||
else {};
|
||||
};
|
||||
isDefined = defsFinal != [];
|
||||
|
||||
optionalValue =
|
||||
if isDefined then { value = mergedValue; }
|
||||
else {};
|
||||
};
|
||||
|
||||
/* Given a config set, expand mkMerge properties, and push down the
|
||||
other properties into the children. The result is a list of
|
||||
@ -370,8 +392,7 @@ rec {
|
||||
let
|
||||
defaultPrio = 100;
|
||||
getPrio = def: if def.value._type or "" == "override" then def.value.priority else defaultPrio;
|
||||
min = x: y: if x < y then x else y;
|
||||
highestPrio = fold (def: prio: min (getPrio def) prio) 9999 defs;
|
||||
highestPrio = foldl' (prio: def: min (getPrio def) prio) 9999 defs;
|
||||
strip = def: if def.value._type or "" == "override" then def // { value = def.value.content; } else def;
|
||||
in concatMap (def: if getPrio def == highestPrio then [(strip def)] else []) defs;
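For context, a sketch of what `filterOverrides` does with mixed priorities (the file names are made up): plain definitions get the default priority 100, an `mkOverride` wrapper carries its own priority, and only the definitions with the numerically lowest priority survive.

```
filterOverrides [
  { file = "a.nix"; value = { _type = "override"; priority = 50; content = 1; }; }
  { file = "b.nix"; value = 2; }  # plain definition, default priority 100
]
# -> [ { file = "a.nix"; value = 1; } ]
```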
|
||||
|
||||
|
@ -4,7 +4,6 @@ let lib = import ./default.nix; in
|
||||
|
||||
with import ./trivial.nix;
|
||||
with import ./lists.nix;
|
||||
with import ./misc.nix;
|
||||
with import ./attrsets.nix;
|
||||
with import ./strings.nix;
|
||||
|
||||
@ -20,6 +19,7 @@ rec {
|
||||
, apply ? null # Function that converts the option value to something else.
|
||||
, internal ? null # Whether the option is for NixOS developers only.
|
||||
, visible ? null # Whether the option shows up in the manual.
|
||||
, readOnly ? null # Whether the option can be set only once
|
||||
, options ? null # Obsolete, used by types.optionSet.
|
||||
} @ attrs:
|
||||
attrs // { _type = "option"; };
|
||||
@ -53,32 +53,27 @@ rec {
|
||||
if length list == 1 then head list
|
||||
else if all isFunction list then x: mergeDefaultOption loc (map (f: f x) list)
|
||||
else if all isList list then concatLists list
|
||||
else if all isAttrs list then fold lib.mergeAttrs {} list
|
||||
else if all isBool list then fold lib.or false list
|
||||
else if all isAttrs list then foldl' lib.mergeAttrs {} list
|
||||
else if all isBool list then foldl' lib.or false list
|
||||
else if all isString list then lib.concatStrings list
|
||||
else if all isInt list && all (x: x == head list) list then head list
|
||||
else throw "Cannot merge definitions of `${showOption loc}' given in ${showFiles (getFiles defs)}.";
|
||||
|
||||
/* Obsolete, will remove soon. Specify an option type or apply
|
||||
function instead. */
|
||||
mergeTypedOption = typeName: predicate: merge: loc: list:
|
||||
let list' = map (x: x.value) list; in
|
||||
if all predicate list then merge list'
|
||||
else throw "Expected a ${typeName}.";
|
||||
|
||||
mergeEnableOption = mergeTypedOption "boolean"
|
||||
(x: true == x || false == x) (fold lib.or false);
|
||||
|
||||
mergeListOption = mergeTypedOption "list" isList concatLists;
|
||||
|
||||
mergeStringOption = mergeTypedOption "string" isString lib.concatStrings;
|
||||
|
||||
mergeOneOption = loc: defs:
|
||||
if defs == [] then abort "This case should never happen."
|
||||
else if length defs != 1 then
|
||||
throw "The unique option `${showOption loc}' is defined multiple times, in ${showFiles (getFiles defs)}."
|
||||
else (head defs).value;
|
||||
|
||||
/* "Merge" option definitions by checking that they all have the same value. */
|
||||
mergeEqualOption = loc: defs:
|
||||
if defs == [] then abort "This case should never happen."
|
||||
else foldl' (val: def:
|
||||
if def.value != val then
|
||||
throw "The option `${showOption loc}' has conflicting definitions, in ${showFiles (getFiles defs)}."
|
||||
else
|
||||
val) (head defs).value defs;
|
||||
|
||||
getValues = map (x: x.value);
|
||||
getFiles = map (x: x.file);
|
||||
|
||||
@ -88,7 +83,7 @@ rec {
|
||||
optionAttrSetToDocList = optionAttrSetToDocList' [];
|
||||
|
||||
optionAttrSetToDocList' = prefix: options:
|
||||
fold (opt: rest:
|
||||
concatMap (opt:
|
||||
let
|
||||
docOption = rec {
|
||||
name = showOption opt.loc;
|
||||
@ -96,6 +91,7 @@ rec {
|
||||
declarations = filter (x: x != unknownModule) opt.declarations;
|
||||
internal = opt.internal or false;
|
||||
visible = opt.visible or true;
|
||||
readOnly = opt.readOnly or false;
|
||||
type = opt.type.name or null;
|
||||
}
|
||||
// (if opt ? example then { example = scrubOptionValue opt.example; } else {})
|
||||
@ -106,8 +102,7 @@ rec {
|
||||
let ss = opt.type.getSubOptions opt.loc;
|
||||
in if ss != {} then optionAttrSetToDocList' opt.loc ss else [];
|
||||
in
|
||||
# FIXME: expensive, O(n^2)
|
||||
[ docOption ] ++ subOptions ++ rest) [] (collect isOption options);
|
||||
[ docOption ] ++ subOptions) (collect isOption options);
|
||||
|
||||
|
||||
/* This function recursively removes all derivation attributes from
|
||||
|
104
lib/strings.nix
@ -8,11 +8,15 @@ in
|
||||
|
||||
rec {
|
||||
|
||||
inherit (builtins) stringLength substring head tail isString;
|
||||
inherit (builtins) stringLength substring head tail isString replaceStrings;
|
||||
|
||||
|
||||
# Concatenate a list of strings.
|
||||
concatStrings = lib.fold (x: y: x + y) "";
|
||||
concatStrings =
|
||||
if builtins ? concatStringsSep then
|
||||
builtins.concatStringsSep ""
|
||||
else
|
||||
lib.foldl' (x: y: x + y) "";
|
||||
|
||||
|
||||
# Map a function over a list and concatenate the resulting strings.
|
||||
@ -25,14 +29,13 @@ rec {
|
||||
intersperse = separator: list:
|
||||
if list == [] || length list == 1
|
||||
then list
|
||||
else [(head list) separator]
|
||||
++ (intersperse separator (tail list));
|
||||
else tail (lib.concatMap (x: [separator x]) list);
|
||||
|
||||
|
||||
# Concatenate a list of strings with a separator between each element, e.g.
|
||||
# concatStringsSep " " ["foo" "bar" "xyzzy"] == "foo bar xyzzy"
|
||||
concatStringsSep = separator: list:
|
||||
concatStrings (intersperse separator list);
|
||||
concatStringsSep = builtins.concatStringsSep or (separator: list:
|
||||
concatStrings (intersperse separator list));
|
||||
|
||||
concatMapStringsSep = sep: f: list: concatStringsSep sep (map f list);
|
||||
concatImapStringsSep = sep: f: list: concatStringsSep sep (lib.imap f list);
|
||||
@ -63,13 +66,13 @@ rec {
|
||||
|
||||
# Determine whether a string has given prefix/suffix.
|
||||
hasPrefix = pref: str:
|
||||
eqStrings (substring 0 (stringLength pref) str) pref;
|
||||
substring 0 (stringLength pref) str == pref;
|
||||
hasSuffix = suff: str:
|
||||
let
|
||||
lenStr = stringLength str;
|
||||
lenSuff = stringLength suff;
|
||||
in lenStr >= lenSuff &&
|
||||
eqStrings (substring (lenStr - lenSuff) lenStr str) suff;
|
||||
substring (lenStr - lenSuff) lenStr str == suff;
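For example:

```
hasPrefix "lib" "libfoo"        # -> true
hasSuffix ".nix" "default.nix"  # -> true
hasSuffix ".nix" "README.md"    # -> false
```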
|
||||
|
||||
|
||||
# Convert a string to a list of characters (i.e. singleton strings).
|
||||
@ -78,63 +81,57 @@ rec {
|
||||
# will likely be horribly inefficient; Nix is not a general purpose
|
||||
# programming language. Complex string manipulations should, if
|
||||
# appropriate, be done in a derivation.
|
||||
stringToCharacters = s: let l = stringLength s; in
|
||||
if l == 0
|
||||
then []
|
||||
else map (p: substring p 1 s) (lib.range 0 (l - 1));
|
||||
stringToCharacters = s:
|
||||
map (p: substring p 1 s) (lib.range 0 (stringLength s - 1));
|
||||
|
||||
|
||||
# Manipulate a string charcater by character and replace them by strings
|
||||
# before concatenating the results.
|
||||
# Manipulate a string character by character and replace them by
|
||||
# strings before concatenating the results.
|
||||
stringAsChars = f: s:
|
||||
concatStrings (
|
||||
map f (stringToCharacters s)
|
||||
);
|
||||
|
||||
|
||||
# same as vim escape function.
|
||||
# Each character contained in list is prefixed by "\"
|
||||
escape = list : string :
|
||||
stringAsChars (c: if lib.elem c list then "\\${c}" else c) string;
|
||||
# Escape occurrence of the elements of ‘list’ in ‘string’ by
|
||||
# prefixing it with a backslash. For example, ‘escape ["(" ")"]
|
||||
# "(foo)"’ returns the string ‘\(foo\)’.
|
||||
escape = list: replaceChars list (map (c: "\\${c}") list);
|
||||
|
||||
|
||||
# still ugly slow. But more correct now
|
||||
# [] for zsh
|
||||
# Escape all characters that have special meaning in the Bourne shell.
|
||||
escapeShellArg = lib.escape (stringToCharacters "\\ ';$`()|<>\t*[]");
|
||||
|
||||
|
||||
# replace characters by their substitutes. This function is equivalent to
|
||||
# the `tr' command except that one character can be replaced by multiple
|
||||
# ones. e.g.,
|
||||
# replaceChars ["<" ">"] ["<" ">"] "<foo>" returns "<foo>".
|
||||
replaceChars = del: new: s:
|
||||
# Obsolete - use replaceStrings instead.
|
||||
replaceChars = builtins.replaceStrings or (
|
||||
del: new: s:
|
||||
let
|
||||
substList = lib.zipLists del new;
|
||||
subst = c:
|
||||
(lib.fold
|
||||
(sub: res: if sub.fst == c then sub else res)
|
||||
{fst = c; snd = c;} (lib.zipLists del new)
|
||||
).snd;
|
||||
let found = lib.findFirst (sub: sub.fst == c) null substList; in
|
||||
if found == null then
|
||||
c
|
||||
else
|
||||
found.snd;
|
||||
in
|
||||
stringAsChars subst s;
|
||||
stringAsChars subst s);
|
||||
|
||||
|
||||
# Case conversion utilities
|
||||
# Case conversion utilities.
|
||||
lowerChars = stringToCharacters "abcdefghijklmnopqrstuvwxyz";
|
||||
upperChars = stringToCharacters "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
|
||||
toLower = replaceChars upperChars lowerChars;
|
||||
toUpper = replaceChars lowerChars upperChars;
|
||||
|
||||
# Appends string context from another string
|
||||
|
||||
# Appends string context from another string.
|
||||
addContextFrom = a: b: substring 0 0 a + b;
|
||||
|
||||
# Compares strings not requiring context equality
|
||||
# Obviously, a workaround but works on all Nix versions
|
||||
eqStrings = a: b: addContextFrom b a == addContextFrom a b;
|
||||
|
||||
|
||||
# Cut a string with a separator and produces a list of strings which were
|
||||
# separated by this separator. e.g.,
|
||||
# `splitString "." "foo.bar.baz"' returns ["foo" "bar" "baz"].
|
||||
# Cut a string with a separator and produces a list of strings which
|
||||
# were separated by this separator; e.g., `splitString "."
|
||||
# "foo.bar.baz"' returns ["foo" "bar" "baz"].
|
||||
splitString = _sep: _s:
|
||||
let
|
||||
sep = addContextFrom _s _sep;
|
||||
@ -177,7 +174,7 @@ rec {
|
||||
sufLen = stringLength suf;
|
||||
sLen = stringLength s;
|
||||
in
|
||||
if sufLen <= sLen && eqStrings suf (substring (sLen - sufLen) sufLen s) then
|
||||
if sufLen <= sLen && suf == substring (sLen - sufLen) sufLen s then
|
||||
substring 0 (sLen - sufLen) s
|
||||
else
|
||||
s;
|
||||
@ -196,21 +193,22 @@ rec {
|
||||
|
||||
|
||||
# Extract name with version from URL. Ask for separator which is
|
||||
# supposed to start extension
|
||||
nameFromURL = url: sep: let
|
||||
components = splitString "/" url;
|
||||
filename = lib.last components;
|
||||
name = builtins.head (splitString sep filename);
|
||||
in
|
||||
assert ! eqStrings name filename;
|
||||
name;
|
||||
# supposed to start extension.
|
||||
nameFromURL = url: sep:
|
||||
let
|
||||
components = splitString "/" url;
|
||||
filename = lib.last components;
|
||||
name = builtins.head (splitString sep filename);
|
||||
in assert name != filename; name;
|
||||
|
||||
|
||||
# Create an --{enable,disable}-<feat> string that can be passed to
|
||||
# standard GNU Autoconf scripts.
|
||||
enableFeature = enable: feat: "--${if enable then "enable" else "disable"}-${feat}";
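For example:

```
enableFeature true  "readline"  # -> "--enable-readline"
enableFeature false "readline"  # -> "--disable-readline"
```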
|
||||
|
||||
# Create a fixed width string with additional prefix to match required width
|
||||
|
||||
# Create a fixed width string with additional prefix to match
|
||||
# required width.
|
||||
fixedWidthString = width: filler: str:
|
||||
let
|
||||
strw = lib.stringLength str;
|
||||
@ -219,6 +217,12 @@ rec {
|
||||
assert strw <= width;
|
||||
if strw == width then str else filler + fixedWidthString reqWidth filler str;
|
||||
|
||||
# Format a number adding leading zeroes up to fixed width
|
||||
|
||||
# Format a number adding leading zeroes up to fixed width.
|
||||
fixedWidthNumber = width: n: fixedWidthString width "0" (toString n);
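A couple of illustrative evaluations:

```
fixedWidthString 5 "0" "123"  # -> "00123"
fixedWidthNumber 5 42         # -> "00042"
```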
|
||||
|
||||
|
||||
# Check whether a value is a store path.
|
||||
isStorePath = x: builtins.substring 0 1 (toString x) == "/" && dirOf (builtins.toPath x) == builtins.storeDir;
|
||||
|
||||
}
|
||||
|
@ -12,7 +12,7 @@ evalConfig() {
|
||||
local attr=$1
|
||||
shift;
|
||||
local script="import ./default.nix { modules = [ $@ ];}"
|
||||
nix-instantiate --timeout 1 -E "$script" -A "$attr" --eval-only
|
||||
nix-instantiate --timeout 1 -E "$script" -A "$attr" --eval-only --show-trace
|
||||
}
|
||||
|
||||
reportFailure() {
|
||||
@ -100,7 +100,15 @@ checkConfigOutput 'true' "$@" ./define-enable.nix ./define-loaOfSub-foo-if-enabl
|
||||
checkConfigOutput 'true' "$@" ./define-enable.nix ./define-loaOfSub-foo-enable-if.nix
|
||||
|
||||
# Check _module.args.
|
||||
checkConfigOutput "true" config.enable ./declare-enable.nix ./custom-arg-define-enable.nix
|
||||
set -- config.enable ./declare-enable.nix ./define-enable-with-custom-arg.nix
|
||||
checkConfigError 'while evaluating the module argument .*custom.* in .*define-enable-with-custom-arg.nix.*:' "$@"
|
||||
checkConfigOutput "true" "$@" ./define-_module-args-custom.nix
|
||||
|
||||
# Check that using _module.args on imports cause infinite recursions, with
|
||||
# the proper error context.
|
||||
set -- "$@" ./define-_module-args-custom.nix ./import-custom-arg.nix
|
||||
checkConfigError 'while evaluating the module argument .*custom.* in .*import-custom-arg.nix.*:' "$@"
|
||||
checkConfigError 'infinite recursion encountered' "$@"
|
||||
|
||||
# Check _module.check.
|
||||
set -- config.enable ./declare-enable.nix ./define-enable.nix ./define-loaOfSub-foo.nix
|
||||
|
@ -1,8 +0,0 @@
|
||||
{ lib, custom, ... }:
|
||||
|
||||
{
|
||||
config = {
|
||||
_module.args.custom = true;
|
||||
enable = custom;
|
||||
};
|
||||
}
|
7
lib/tests/modules/define-_module-args-custom.nix
Normal file
@ -0,0 +1,7 @@
|
||||
{ lib, ... }:
|
||||
|
||||
{
|
||||
config = {
|
||||
_module.args.custom = true;
|
||||
};
|
||||
}
|
7
lib/tests/modules/define-enable-with-custom-arg.nix
Normal file
@ -0,0 +1,7 @@
|
||||
{ lib, custom, ... }:
|
||||
|
||||
{
|
||||
config = {
|
||||
enable = custom;
|
||||
};
|
||||
}
|
6
lib/tests/modules/import-custom-arg.nix
Normal file
@ -0,0 +1,6 @@
|
||||
{ lib, custom, ... }:
|
||||
|
||||
{
|
||||
imports = []
|
||||
++ lib.optional custom ./define-enable-force.nix;
|
||||
}
|
@ -22,7 +22,7 @@ rec {
|
||||
inherit (builtins)
|
||||
pathExists readFile isBool isFunction
|
||||
isInt add sub lessThan
|
||||
seq deepSeq;
|
||||
seq deepSeq genericClosure;
|
||||
|
||||
# Return the Nixpkgs version number.
|
||||
nixpkgsVersion =
|
||||
|
@ -6,7 +6,7 @@ with import ./attrsets.nix;
|
||||
with import ./options.nix;
|
||||
with import ./trivial.nix;
|
||||
with import ./strings.nix;
|
||||
with {inherit (import ./modules.nix) mergeDefinitions; };
|
||||
with {inherit (import ./modules.nix) mergeDefinitions filterOverrides; };
|
||||
|
||||
rec {
|
||||
|
||||
@ -54,7 +54,7 @@ rec {
|
||||
bool = mkOptionType {
|
||||
name = "boolean";
|
||||
check = isBool;
|
||||
merge = loc: fold (x: y: x.value || y) false;
|
||||
merge = mergeEqualOption;
|
||||
};
|
||||
|
||||
int = mkOptionType {
|
||||
@ -88,20 +88,22 @@ rec {
|
||||
attrs = mkOptionType {
|
||||
name = "attribute set";
|
||||
check = isAttrs;
|
||||
merge = loc: fold (def: mergeAttrs def.value) {};
|
||||
merge = loc: foldl' (res: def: mergeAttrs res def.value) {};
|
||||
};
|
||||
|
||||
# derivation is a reserved keyword.
|
||||
package = mkOptionType {
|
||||
name = "derivation";
|
||||
check = isDerivation;
|
||||
merge = mergeOneOption;
|
||||
check = x: isDerivation x || isStorePath x;
|
||||
merge = loc: defs:
|
||||
let res = mergeOneOption loc defs;
|
||||
in if isDerivation res then res else toDerivation res;
|
||||
};
|
||||
|
||||
path = mkOptionType {
|
||||
name = "path";
|
||||
# Hacky: there is no ‘isPath’ primop.
|
||||
check = x: builtins.unsafeDiscardStringContext (builtins.substring 0 1 (toString x)) == "/";
|
||||
check = x: builtins.substring 0 1 (toString x) == "/";
|
||||
merge = mergeOneOption;
|
||||
};
|
||||
|
||||
@ -164,6 +166,23 @@ rec {
|
||||
substSubModules = m: loaOf (elemType.substSubModules m);
|
||||
};
|
||||
|
||||
# List or element of ...
|
||||
loeOf = elemType: mkOptionType {
|
||||
name = "element or list of ${elemType.name}s";
|
||||
check = x: isList x || elemType.check x;
|
||||
merge = loc: defs:
|
||||
let
|
||||
defs' = filterOverrides defs;
|
||||
res = (head defs').value;
|
||||
in
|
||||
if isList res then concatLists (getValues defs')
|
||||
else if lessThan 1 (length defs') then
|
||||
throw "The option `${showOption loc}' is defined multiple times, in ${showFiles (getFiles defs)}."
|
||||
else if !isString res then
|
||||
throw "The option `${showOption loc}' does not have a string value, in ${showFiles (getFiles defs)}."
|
||||
else res;
|
||||
};
|
||||
|
||||
uniq = elemType: mkOptionType {
|
||||
inherit (elemType) name check;
|
||||
merge = mergeOneOption;
|
||||
|
@ -1,95 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
GNOME_FTP="ftp.gnome.org/pub/GNOME/sources"
|
||||
|
||||
project=$1
|
||||
|
||||
if [ "$project" == "--help" ]; then
|
||||
echo "Usage: $0 project [major.minor]"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
baseVersion=$2
|
||||
|
||||
if [ -z "$project" ]; then
|
||||
echo "No project specified, exiting"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# curl -l ftp://... doesn't work from my office in HSE, and I don't want to have
|
||||
# any conversations with sysadmin. Somehow lftp works.
|
||||
if [ "$FTP_CLIENT" = "lftp" ]; then
|
||||
ls_ftp() {
|
||||
lftp -c "open $1; cls"
|
||||
}
|
||||
else
|
||||
ls_ftp() {
|
||||
curl -l "$1"/
|
||||
}
|
||||
fi
|
||||
|
||||
if [ -z "$baseVersion" ]; then
|
||||
echo "Looking for available versions..." >&2
|
||||
available_baseversions=( `ls_ftp ftp://${GNOME_FTP}/${project} | grep '[0-9]\.[0-9]' | sort -t. -k1,1n -k 2,2n` )
|
||||
echo -e "The following versions are available:\n ${available_baseversions[@]}" >&2
|
||||
echo -en "Choose one of them: " >&2
|
||||
read baseVersion
|
||||
fi
|
||||
|
||||
FTPDIR="${GNOME_FTP}/${project}/${baseVersion}"
|
||||
|
||||
#version=`curl -l ${FTPDIR}/ 2>/dev/null | grep LATEST-IS | sed -e s/LATEST-IS-//`
|
||||
# gnome's LATEST-IS is broken. Do not trust it.
|
||||
|
||||
files=$(ls_ftp "${FTPDIR}")
|
||||
declare -A versions
|
||||
|
||||
for f in $files; do
|
||||
case $f in
|
||||
(LATEST-IS-*|*.news|*.changes|*.sha256sum|*.diff*):
|
||||
;;
|
||||
($project-*.*.9*.tar.*):
|
||||
tmp=${f#$project-}
|
||||
tmp=${tmp%.tar*}
|
||||
echo "Ignored unstable version ${tmp}" >&2
|
||||
;;
|
||||
($project-*.tar.*):
|
||||
tmp=${f#$project-}
|
||||
tmp=${tmp%.tar*}
|
||||
versions[${tmp}]=1
|
||||
;;
|
||||
(*):
|
||||
echo "UNKNOWN FILE $f"
|
||||
;;
|
||||
esac
|
||||
done
|
||||
echo "Found versions ${!versions[@]}" >&2
|
||||
version=`echo ${!versions[@]} | sed -e 's/ /\n/g' | sort -t. -k1,1n -k 2,2n -k 3,3n | tail -n1`
|
||||
echo "Latest version is: ${version}" >&2
|
||||
|
||||
name=${project}-${version}
|
||||
echo "Fetching .sha256 file" >&2
|
||||
curl -O http://${FTPDIR}/${name}.sha256sum
|
||||
|
||||
extensions=( "xz" "bz2" "gz" )
|
||||
echo "Choosing archive extension (known are ${extensions[@]})..." >&2
|
||||
for ext in ${extensions[@]}; do
|
||||
if grep "\\.tar\\.${ext}$" ${name}.sha256sum >& /dev/null; then
|
||||
ext_pref=$ext
|
||||
sha256=$(grep "\\.tar\\.${ext}$" ${name}.sha256sum | cut -f1 -d\ )
|
||||
break
|
||||
fi
|
||||
done
|
||||
sha256=`nix-hash --to-base32 --type sha256 $sha256`
|
||||
echo "Chosen ${ext_pref}, hash is ${sha256}" >&2
|
||||
|
||||
cat <<EOF
|
||||
name = "${project}-${version}";
|
||||
|
||||
src = fetchurl {
|
||||
url = mirror://gnome/sources/${project}/${baseVersion}/${project}-${version}.tar.${ext_pref};
|
||||
sha256 = "${sha256}";
|
||||
};
|
||||
EOF
|
||||
|
||||
rm -v ${name}.sha256sum >&2
|
194
maintainers/scripts/gnome.sh
Executable file
@ -0,0 +1,194 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -o pipefail
|
||||
|
||||
GNOME_FTP="ftp.gnome.org/pub/GNOME/sources"
|
||||
|
||||
# projects that don't follow the GNOME major versioning, or that we don't want to
|
||||
# programmatically update
|
||||
NO_GNOME_MAJOR="gtkhtml gdm"
|
||||
|
||||
usage() {
|
||||
echo "Usage: $0 gnome_dir <show project>|<update project>|<update-all> [major.minor]" >&2
|
||||
echo "gnome_dir is for example pkgs/desktops/gnome-3/3.18" >&2
|
||||
exit 0
|
||||
}
|
||||
|
||||
if [ "$#" -lt 2 ]; then
|
||||
usage
|
||||
fi
|
||||
|
||||
GNOME_TOP="$1"
|
||||
shift
|
||||
|
||||
action="$1"
|
||||
|
||||
# curl -l ftp://... doesn't work from my office in HSE, and I don't want to have
|
||||
# any conversations with sysadmin. Somehow lftp works.
|
||||
if [ "$FTP_CLIENT" = "lftp" ]; then
|
||||
ls_ftp() {
|
||||
lftp -c "open $1; cls"
|
||||
}
|
||||
else
|
||||
ls_ftp() {
|
||||
curl -s -l "$1"/
|
||||
}
|
||||
fi
|
||||
|
||||
find_project() {
|
||||
exec find "$GNOME_TOP" -mindepth 2 -maxdepth 2 -type d $@
|
||||
}
|
||||
|
||||
show_project() {
|
||||
local project="$1"
|
||||
local majorVersion="$2"
|
||||
local version=""
|
||||
|
||||
if [ -z "$majorVersion" ]; then
|
||||
echo "Looking for available versions..." >&2
|
||||
local available_baseversions=( `ls_ftp ftp://${GNOME_FTP}/${project} | grep '[0-9]\.[0-9]' | sort -t. -k1,1n -k 2,2n` )
|
||||
if [ "$?" -ne "0" ]; then
|
||||
echo "Project $project not found" >&2
|
||||
return 1
|
||||
fi
|
||||
|
||||
echo -e "The following versions are available:\n ${available_baseversions[@]}" >&2
|
||||
echo -en "Choose one of them: " >&2
|
||||
read majorVersion
|
||||
fi
|
||||
|
||||
if echo "$majorVersion" | grep -q "[0-9]\+\.[0-9]\+\.[0-9]\+"; then
|
||||
# not a major version
|
||||
version="$majorVersion"
|
||||
majorVersion=$(echo "$majorVersion" | cut -d '.' -f 1,2)
|
||||
fi
|
||||
|
||||
local FTPDIR="${GNOME_FTP}/${project}/${majorVersion}"
|
||||
|
||||
#version=`curl -l ${FTPDIR}/ 2>/dev/null | grep LATEST-IS | sed -e s/LATEST-IS-//`
|
||||
# gnome's LATEST-IS is broken. Do not trust it.
|
||||
|
||||
if [ -z "$version" ]; then
|
||||
local files=$(ls_ftp "${FTPDIR}")
|
||||
declare -A versions
|
||||
|
||||
for f in $files; do
|
||||
case $f in
|
||||
(LATEST-IS-*|*.news|*.changes|*.sha256sum|*.diff*):
|
||||
;;
|
||||
($project-*.*.9*.tar.*):
|
||||
tmp=${f#$project-}
|
||||
tmp=${tmp%.tar*}
|
||||
echo "Ignored unstable version ${tmp}" >&2
|
||||
;;
|
||||
($project-*.tar.*):
|
||||
tmp=${f#$project-}
|
||||
tmp=${tmp%.tar*}
|
||||
versions[${tmp}]=1
|
||||
;;
|
||||
(*):
|
||||
echo "UNKNOWN FILE $f" >&2
|
||||
;;
|
||||
esac
|
||||
done
|
||||
echo "Found versions ${!versions[@]}" >&2
|
||||
version=`echo ${!versions[@]} | sed -e 's/ /\n/g' | sort -t. -k1,1n -k 2,2n -k 3,3n | tail -n1`
|
||||
if [ -z "$version" ]; then
|
||||
echo "No version available for major $majorVersion" >&2
|
||||
return 1
|
||||
fi
|
||||
|
||||
echo "Latest version is: ${version}" >&2
|
||||
fi
|
||||
|
||||
local name=${project}-${version}
|
||||
echo "Fetching .sha256 file" >&2
|
||||
local sha256out=$(curl -s -f http://${FTPDIR}/${name}.sha256sum)
|
||||
|
||||
if [ "$?" -ne "0" ]; then
|
||||
echo "Version not found" >&2
|
||||
return 1
|
||||
fi
|
||||
|
||||
extensions=( "xz" "bz2" "gz" )
|
||||
echo "Choosing archive extension (known are ${extensions[@]})..." >&2
|
||||
for ext in ${extensions[@]}; do
|
||||
if echo -e "$sha256out" | grep -q "\\.tar\\.${ext}$"; then
|
||||
ext_pref=$ext
|
||||
sha256=$(echo -e "$sha256out" | grep "\\.tar\\.${ext}$" | cut -f1 -d\ )
|
||||
break
|
||||
fi
|
||||
done
|
||||
echo "Chosen ${ext_pref}, hash is ${sha256}" >&2
|
||||
|
||||
echo "# Autogenerated by maintainers/scripts/gnome.sh update
|
||||
|
||||
fetchurl: {
|
||||
name = \"${project}-${version}\";
|
||||
|
||||
src = fetchurl {
|
||||
url = mirror://gnome/sources/${project}/${majorVersion}/${project}-${version}.tar.${ext_pref};
|
||||
sha256 = \"${sha256}\";
|
||||
};
|
||||
}"
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
update_project() {
|
||||
local project="$1"
|
||||
local majorVersion="$2"
|
||||
|
||||
# find project in nixpkgs tree
|
||||
projectPath=$(find_project -name "$project" -print)
|
||||
if [ -z "$projectPath" ]; then
|
||||
echo "Project $project not found under $GNOME_TOP"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
src=$(show_project "$project" "$majorVersion")
|
||||
|
||||
if [ "$?" -eq "0" ]; then
|
||||
echo "Updating $projectPath/src.nix" >&2
|
||||
echo -e "$src" > "$projectPath/src.nix"
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
if [ "$action" == "update-all" ]; then
|
||||
majorVersion="$2"
|
||||
if [ -z "$majorVersion" ]; then
|
||||
echo "No major version specified" >&2
|
||||
usage
|
||||
fi
|
||||
|
||||
# find projects
|
||||
projects=$(find_project -exec basename '{}' \;)
|
||||
for project in $projects; do
|
||||
if echo "$NO_GNOME_MAJOR"|grep -q $project; then
|
||||
echo "Skipping $project"
|
||||
else
|
||||
echo "= Updating $project to $majorVersion" >&2
|
||||
update_project $project $majorVersion
|
||||
echo >&2
|
||||
fi
|
||||
done
|
||||
else
|
||||
project="$2"
|
||||
majorVersion="$3"
|
||||
|
||||
if [ -z "$project" ]; then
|
||||
echo "No project specified, exiting" >&2
|
||||
usage
|
||||
fi
|
||||
|
||||
if [ "$action" == "show" ]; then
|
||||
show_project $project $majorVersion
|
||||
elif [ "$action" == "update" ]; then
|
||||
update_project $project $majorVersion
|
||||
else
|
||||
echo "Unknown action $action" >&2
|
||||
usage
|
||||
fi
|
||||
fi
|
@ -6,6 +6,7 @@ hydra_eval_jobs \
|
||||
--argstr system i686-linux \
|
||||
--argstr system x86_64-darwin \
|
||||
--argstr system i686-cygwin \
|
||||
--argstr system x86_64-cygwin \
|
||||
--argstr system i686-freebsd \
|
||||
--arg officialRelease false \
|
||||
--arg nixpkgs "{ outPath = builtins.storePath ./. ; rev = 1234; }" \
|
||||
|
@ -31,7 +31,15 @@ elif [[ $1 == build ]]; then
|
||||
echo "=== Not a pull request"
|
||||
else
|
||||
echo "=== Checking PR"
|
||||
nox-review pr ${TRAVIS_PULL_REQUEST}
|
||||
|
||||
if ! nox-review pr ${TRAVIS_PULL_REQUEST}; then
|
||||
if sudo dmesg | egrep 'Out of memory|Killed process' > /tmp/oom-log; then
|
||||
echo "=== The build failed due to running out of memory:"
|
||||
cat /tmp/oom-log
|
||||
echo "=== Please disregard the result of this Travis build."
|
||||
fi
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
# echo "=== Checking tarball creation"
|
||||
# nix-build pkgs/top-level/release.nix -A tarball
|
||||
|
@ -11,7 +11,7 @@ uninstall packages from the command line. For instance, to install
|
||||
Mozilla Thunderbird:
|
||||
|
||||
<screen>
|
||||
$ nix-env -iA nixos.pkgs.thunderbird</screen>
|
||||
$ nix-env -iA nixos.thunderbird</screen>
|
||||
|
||||
If you invoke this as root, the package is installed in the Nix
|
||||
profile <filename>/nix/var/nix/profiles/default</filename> and visible
|
||||
|
@ -23,13 +23,13 @@ Nixpkgs will be built or downloaded as part of the system when you run
|
||||
<para>You can get a list of the available packages as follows:
|
||||
<screen>
|
||||
$ nix-env -qaP '*' --description
|
||||
nixos.pkgs.firefox firefox-23.0 Mozilla Firefox - the browser, reloaded
|
||||
nixos.firefox firefox-23.0 Mozilla Firefox - the browser, reloaded
|
||||
<replaceable>...</replaceable>
|
||||
</screen>
|
||||
|
||||
The first column in the output is the <emphasis>attribute
|
||||
name</emphasis>, such as
|
||||
<literal>nixos.pkgs.thunderbird</literal>. (The
|
||||
<literal>nixos.thunderbird</literal>. (The
|
||||
<literal>nixos</literal> prefix allows distinguishing between
|
||||
different channels that you might have.)</para>
|
||||
|
||||
|
@ -61,6 +61,12 @@ by default because it’s not free software. You can enable it as follows:
|
||||
<programlisting>
|
||||
services.xserver.videoDrivers = [ "nvidia" ];
|
||||
</programlisting>
|
||||
Or if you have an older card, you may have to use one of the legacy drivers:
|
||||
<programlisting>
|
||||
services.xserver.videoDrivers = [ "nvidiaLegacy340" ];
|
||||
services.xserver.videoDrivers = [ "nvidiaLegacy304" ];
|
||||
services.xserver.videoDrivers = [ "nvidiaLegacy173" ];
|
||||
</programlisting>
|
||||
You may need to reboot after enabling this driver to prevent a clash
|
||||
with other kernel modules.</para>
|
||||
|
||||
|
@ -31,10 +31,8 @@ let
|
||||
else
|
||||
fn;
|
||||
|
||||
# Convert the list of options into an XML file. The builtin
|
||||
# unsafeDiscardStringContext is used to prevent the realisation of
|
||||
# the store paths which are used in options definitions.
|
||||
optionsXML = builtins.toFile "options.xml" (builtins.unsafeDiscardStringContext (builtins.toXML optionsList'));
|
||||
# Convert the list of options into an XML file.
|
||||
optionsXML = builtins.toFile "options.xml" (builtins.toXML optionsList');
|
||||
|
||||
optionsDocBook = runCommand "options-db.xml" {} ''
|
||||
optionsXML=${optionsXML}
|
||||
@ -61,6 +59,16 @@ let
|
||||
echo "${version}" > version
|
||||
'';
|
||||
|
||||
toc = builtins.toFile "toc.xml"
|
||||
''
|
||||
<toc role="chunk-toc">
|
||||
<d:tocentry xmlns:d="http://docbook.org/ns/docbook" linkend="book-nixos-manual"><?dbhtml filename="index.html"?>
|
||||
<d:tocentry linkend="ch-options"><?dbhtml filename="options.html"?></d:tocentry>
|
||||
<d:tocentry linkend="ch-release-notes"><?dbhtml filename="release-notes.html"?></d:tocentry>
|
||||
</d:tocentry>
|
||||
</toc>
|
||||
'';
|
||||
|
||||
in rec {
|
||||
|
||||
# The NixOS options in JSON format.
|
||||
@ -113,9 +121,10 @@ in rec {
|
||||
--param chunk.section.depth 0 \
|
||||
--param chunk.first.sections 1 \
|
||||
--param use.id.as.filename 1 \
|
||||
--stringparam generate.toc "book toc chapter toc appendix toc" \
|
||||
--stringparam generate.toc "book toc appendix toc" \
|
||||
--stringparam chunk.toc ${toc} \
|
||||
--nonet --xinclude --output $dst/ \
|
||||
${docbook5_xsl}/xml/xsl/docbook/xhtml/chunkfast.xsl ./manual.xml
|
||||
${docbook5_xsl}/xml/xsl/docbook/xhtml/chunktoc.xsl ./manual.xml
|
||||
|
||||
mkdir -p $dst/images/callouts
|
||||
cp ${docbook5_xsl}/xml/xsl/docbook/images/callouts/*.gif $dst/images/callouts/
|
||||
@ -128,6 +137,8 @@ in rec {
|
||||
''; # */
|
||||
|
||||
meta.description = "The NixOS manual in HTML format";
|
||||
|
||||
allowedReferences = ["out"];
|
||||
};
|
||||
|
||||
manualPDF = stdenv.mkDerivation {
|
||||
@ -135,12 +146,9 @@ in rec {
|
||||
|
||||
inherit sources;
|
||||
|
||||
buildInputs = [ libxml2 libxslt dblatex tetex ];
|
||||
buildInputs = [ libxml2 libxslt dblatex dblatex.tex ];
|
||||
|
||||
buildCommand = ''
|
||||
# TeX needs a writable font cache.
|
||||
export VARTEXFONTS=$TMPDIR/texfonts
|
||||
|
||||
${copySources}
|
||||
|
||||
dst=$out/share/doc/nixos
|
||||
@ -151,7 +159,7 @@ in rec {
|
||||
|
||||
mkdir -p $out/nix-support
|
||||
echo "doc-pdf manual $dst/manual.pdf" >> $out/nix-support/hydra-build-products
|
||||
''; # */
|
||||
'';
|
||||
};
|
||||
|
||||
# Generate the NixOS manpages.
|
||||
@ -179,6 +187,8 @@ in rec {
|
||||
${docbook5_xsl}/xml/xsl/docbook/manpages/docbook.xsl \
|
||||
./man-pages.xml
|
||||
'';
|
||||
|
||||
allowedReferences = ["out"];
|
||||
};
|
||||
|
||||
}
|
||||
|
@ -106,6 +106,15 @@ options = {
|
||||
</listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>types.package</varname></term>
|
||||
<listitem>
|
||||
<para>A derivation (such as <literal>pkgs.hello</literal>) or a
|
||||
store path (such as
|
||||
<filename>/nix/store/1ifi1cfbfs5iajmvwgrbmrnrw3a147h9-hello-2.10</filename>).</para>
|
||||
</listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term><varname>types.listOf</varname> <replaceable>t</replaceable></term>
|
||||
<listitem>
|
||||
@ -138,4 +147,4 @@ You can also create new types using the function
|
||||
<varname>mkOptionType</varname>. See
|
||||
<filename>lib/types.nix</filename> in Nixpkgs for details.</para>
|
||||
|
||||
</section>
|
||||
</section>
|
||||
|
@ -24,6 +24,9 @@ $ mkdir -p <replaceable>/my/sources</replaceable>
|
||||
$ cd <replaceable>/my/sources</replaceable>
|
||||
$ nix-env -i git
|
||||
$ git clone git://github.com/NixOS/nixpkgs.git
|
||||
$ cd nixpkgs
|
||||
$ git remote add channels git://github.com/NixOS/nixpkgs-channels.git
|
||||
$ git remote update channels
|
||||
</screen>
|
||||
|
||||
This will check out the latest NixOS sources to
|
||||
@ -31,7 +34,12 @@ This will check out the latest NixOS sources to
|
||||
and the Nixpkgs sources to
|
||||
<filename><replaceable>/my/sources</replaceable>/nixpkgs</filename>.
|
||||
(The NixOS source tree lives in a subdirectory of the Nixpkgs
|
||||
repository.)</para>
|
||||
repository.) The remote <literal>channels</literal> refers to a
|
||||
read-only repository that tracks the Nixpkgs/NixOS channels (see <xref
|
||||
linkend="sec-upgrading"/> for more information about channels). Thus,
|
||||
the Git branch <literal>channels/nixos-14.12</literal> will contain
|
||||
the latest built and tested version available in the
|
||||
<literal>nixos-14.12</literal> channel.</para>
|
||||
|
||||
<para>It’s often inconvenient to develop directly on the master
|
||||
branch, since if somebody has just committed (say) a change to GCC,
|
||||
@ -40,28 +48,32 @@ rebuild everything from source. So you may want to create a local
|
||||
branch based on your current NixOS version:
|
||||
|
||||
<screen>
|
||||
$ <replaceable>/my/sources</replaceable>/nixpkgs/maintainers/scripts/update-channel-branches.sh
|
||||
Fetching channels from https://nixos.org/channels:
|
||||
* [new branch] cbe467e -> channels/remotes/nixos-unstable
|
||||
Fetching channels from nixos-version:
|
||||
* [new branch] 9ff4738 -> channels/current-system
|
||||
Fetching channels from ~/.nix-defexpr:
|
||||
* [new branch] 0d4acad -> channels/root/nixos
|
||||
$ git checkout -b local channels/current-system
|
||||
$ nixos-version
|
||||
14.04.273.ea1952b (Baboon)
|
||||
|
||||
$ git checkout -b local ea1952b
|
||||
</screen>
|
||||
|
||||
Or, to base your local branch on the latest version available in the
|
||||
Or, to base your local branch on the latest version available in a
|
||||
NixOS channel:
|
||||
|
||||
<screen>
|
||||
$ <replaceable>/my/sources</replaceable>/nixpkgs/maintainers/scripts/update-channel-branches.sh
|
||||
$ git checkout -b local channels/remotes/nixos-unstable
|
||||
$ git remote update channels
|
||||
$ git checkout -b local channels/nixos-14.12
|
||||
</screen>
|
||||
|
||||
You can then use <command>git rebase</command> to sync your local
|
||||
branch with the upstream branch, and use <command>git
|
||||
cherry-pick</command> to copy commits from your local branch to the
|
||||
upstream branch.</para>
|
||||
(Replace <literal>nixos-14.12</literal> with the name of the channel
|
||||
you want to use.) You can use <command>git merge</command> or
|
||||
<command>git rebase</command> to keep your local branch in sync with
|
||||
the channel, e.g.
|
||||
|
||||
<screen>
|
||||
$ git remote update channels
|
||||
$ git merge channels/nixos-14.12
|
||||
</screen>
|
||||
|
||||
You can use <command>git cherry-pick</command> to copy commits from
|
||||
your local branch to the upstream branch.</para>
|
||||
|
||||
<para>If you want to rebuild your system using your (modified)
|
||||
sources, you need to tell <command>nixos-rebuild</command> about them
|
||||
|
@ -158,7 +158,7 @@ let locatedb = "/var/cache/locatedb"; in
|
||||
script =
|
||||
''
|
||||
mkdir -m 0755 -p $(dirname ${locatedb})
|
||||
exec updatedb --localuser=nobody --output=${locatedb} --prunepaths='/tmp /var/tmp /media /run'
|
||||
exec updatedb --localuser=nobody --output=${locatedb} --prunepaths='/tmp /var/tmp /run'
|
||||
'';
|
||||
};
|
||||
|
||||
@ -172,4 +172,4 @@ let locatedb = "/var/cache/locatedb"; in
|
||||
<xi:include href="option-declarations.xml" />
|
||||
<xi:include href="option-def.xml" />
|
||||
|
||||
</chapter>
|
||||
</chapter>
|
||||
|
@ -154,6 +154,15 @@ startAll;
|
||||
log.</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term><methodname>getScreenText</methodname></term>
|
||||
<listitem><para>Return a textual representation of what is currently
|
||||
visible on the machine's screen using optical character
|
||||
recognition.</para>
|
||||
<note><para>This requires passing <option>enableOCR</option> to the test
|
||||
attribute set.</para></note></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term><methodname>sendMonitorCommand</methodname></term>
|
||||
<listitem><para>Send a command to the QEMU monitor. This is rarely
|
||||
@ -237,6 +246,15 @@ startAll;
|
||||
connections.</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term><methodname>waitForText</methodname></term>
|
||||
<listitem><para>Wait until the supplied regular expression matches
|
||||
the textual contents of the screen by using optical character recognition
|
||||
(see <methodname>getScreenText</methodname>).</para>
|
||||
<note><para>This requires passing <option>enableOCR</option> to the test
|
||||
attribute set.</para></note></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term><methodname>waitForWindow</methodname></term>
|
||||
<listitem><para>Wait until an X11 window has appeared whose name
|
||||
|
@ -41,10 +41,6 @@ changes:
|
||||
<option>boot.loader.efi</option> and <option>boot.loader.gummiboot</option>
|
||||
as well.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>To see console messages during early boot, add <literal>"fbcon"</literal>
|
||||
to your <option>boot.initrd.kernelModules</option>.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
|
||||
|
@ -6,8 +6,8 @@
|
||||
|
||||
<title>Booting from a USB Drive</title>
|
||||
|
||||
<para>For systems without CD drive, the NixOS livecd can be booted from
|
||||
a usb stick. For non-UEFI installations,
|
||||
<para>For systems without CD drive, the NixOS live CD can be booted from
|
||||
a USB stick. For non-UEFI installations,
|
||||
<link xlink:href="http://unetbootin.sourceforge.net/">unetbootin</link>
|
||||
will work. For UEFI installations, you should mount the ISO, copy its contents
|
||||
verbatim to your drive, then either:
|
||||
|
@ -120,7 +120,11 @@ $ nixos-generate-config --root /mnt</screen>
|
||||
$ nano /mnt/etc/nixos/configuration.nix
|
||||
</screen>
|
||||
|
||||
The <command>vim</command> text editor is also available.</para>
|
||||
If you’re using the graphical ISO image, other editors may be
|
||||
available (such as <command>vim</command>). If you have network
|
||||
access, you can also install other editors — for instance, you can
|
||||
install Emacs by running <literal>nix-env -i
|
||||
emacs</literal>.</para>
|
||||
|
||||
<para>You <emphasis>must</emphasis> set the option
|
||||
<option>boot.loader.grub.device</option> to specify on which disk
|
||||
@ -189,11 +193,13 @@ $ reboot</screen>
|
||||
|
||||
<listitem>
|
||||
|
||||
<para>You should now be able to boot into the installed NixOS. The GRUB boot menu shows a list
|
||||
of <emphasis>available configurations</emphasis> (initially just one). Every time
|
||||
you change the NixOS configuration (see<link linkend="sec-changing-config">Changing
|
||||
Configuration</link> ), a new item appears in the menu. This allows you to
|
||||
easily roll back to another configuration if something goes wrong.</para>
|
||||
<para>You should now be able to boot into the installed NixOS. The
|
||||
GRUB boot menu shows a list of <emphasis>available
|
||||
configurations</emphasis> (initially just one). Every time you
|
||||
change the NixOS configuration (see <link
|
||||
linkend="sec-changing-config">Changing Configuration</link> ), a
|
||||
new item is added to the menu. This allows you to easily roll back
|
||||
to a previous configuration if something goes wrong.</para>
|
||||
|
||||
<para>You should log in and change the <literal>root</literal>
|
||||
password with <command>passwd</command>.</para>
|
||||
|
@ -107,4 +107,30 @@ newer Nix version, which may involve an upgrade of Nix’s database
|
||||
schema. This cannot be undone easily, so in that case you will not be
|
||||
able to go back to your original channel.</para></warning>
|
||||
|
||||
|
||||
<section><title>Automatic Upgrades</title>

<para>You can keep a NixOS system up-to-date automatically by adding
the following to <filename>configuration.nix</filename>:

<programlisting>
system.autoUpgrade.enable = true;
</programlisting>

This enables a periodically executed systemd service named
<literal>nixos-upgrade.service</literal>. It runs
<command>nixos-rebuild switch --upgrade</command> to upgrade NixOS to
the latest version in the current channel. (To see when the service
runs, run <command>systemctl list-timers</command>.) You can also
specify a channel explicitly, e.g.

<programlisting>
system.autoUpgrade.channel = https://nixos.org/channels/nixos-15.09;
</programlisting>
</para>
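<para>Putting both options together, a minimal sketch of an automatically
upgrading machine pinned to the 15.09 channel could look like this (the
surrounding module boilerplate is assumed):</para>
<programlisting>
{ config, pkgs, ... }:

{
  system.autoUpgrade.enable = true;
  system.autoUpgrade.channel = https://nixos.org/channels/nixos-15.09;
}
</programlisting>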
</section>
|
||||
|
||||
|
||||
</chapter>
|
||||
|
@ -2,7 +2,7 @@
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
version="5.0"
|
||||
xml:id="NixOSManual">
|
||||
xml:id="book-nixos-manual">
|
||||
|
||||
<info>
|
||||
<title>NixOS Manual</title>
|
||||
@ -33,11 +33,12 @@
|
||||
<xi:include href="administration/running.xml" />
|
||||
<!-- <xi:include href="userconfiguration.xml" /> -->
|
||||
<xi:include href="development/development.xml" />
|
||||
<xi:include href="release-notes/release-notes.xml" />
|
||||
|
||||
<appendix xml:id="ch-options">
|
||||
<title>Configuration Options</title>
|
||||
<xi:include href="options-db.xml" />
|
||||
</appendix>
|
||||
|
||||
<xi:include href="release-notes/release-notes.xml" />
|
||||
|
||||
</book>
|
||||
|
@ -25,61 +25,65 @@
|
||||
<option>
|
||||
<xsl:value-of select="attr[@name = 'name']/string/@value" />
|
||||
</option>
|
||||
</term>
|
||||
</term>
|
||||
|
||||
<listitem>
|
||||
<listitem>
|
||||
|
||||
<para>
|
||||
<xsl:value-of disable-output-escaping="yes"
|
||||
select="attr[@name = 'description']/string/@value" />
|
||||
</para>
|
||||
<para>
|
||||
<xsl:value-of disable-output-escaping="yes"
|
||||
select="attr[@name = 'description']/string/@value" />
|
||||
</para>
|
||||
|
||||
<xsl:if test="attr[@name = 'type']">
|
||||
<para>
|
||||
<emphasis>Type:</emphasis>
|
||||
<xsl:text> </xsl:text>
|
||||
<xsl:apply-templates select="attr[@name = 'type']" mode="top" />
|
||||
</para>
|
||||
</xsl:if>
|
||||
<xsl:if test="attr[@name = 'type']">
|
||||
<para>
|
||||
<emphasis>Type:</emphasis>
|
||||
<xsl:text> </xsl:text>
|
||||
<xsl:value-of select="attr[@name = 'type']/string/@value"/>
|
||||
<xsl:if test="attr[@name = 'readOnly']/bool/@value = 'true'">
|
||||
<xsl:text> </xsl:text>
|
||||
<emphasis>(read only)</emphasis>
|
||||
</xsl:if>
|
||||
</para>
|
||||
</xsl:if>
|
||||
|
||||
<xsl:if test="attr[@name = 'default']">
|
||||
<para>
|
||||
<emphasis>Default:</emphasis>
|
||||
<xsl:text> </xsl:text>
|
||||
<xsl:apply-templates select="attr[@name = 'default']" mode="top" />
|
||||
</para>
|
||||
</xsl:if>
|
||||
<xsl:if test="attr[@name = 'default']">
|
||||
<para>
|
||||
<emphasis>Default:</emphasis>
|
||||
<xsl:text> </xsl:text>
|
||||
<xsl:apply-templates select="attr[@name = 'default']" mode="top" />
|
||||
</para>
|
||||
</xsl:if>
|
||||
|
||||
<xsl:if test="attr[@name = 'example']">
|
||||
<para>
|
||||
<emphasis>Example:</emphasis>
|
||||
<xsl:text> </xsl:text>
|
||||
<xsl:choose>
|
||||
<xsl:when test="attr[@name = 'example']/attrs[attr[@name = '_type' and string[@value = 'literalExample']]]">
|
||||
<programlisting><xsl:value-of select="attr[@name = 'example']/attrs/attr[@name = 'text']/string/@value" /></programlisting>
|
||||
</xsl:when>
|
||||
<xsl:otherwise>
|
||||
<xsl:apply-templates select="attr[@name = 'example']" mode="top" />
|
||||
</xsl:otherwise>
|
||||
</xsl:choose>
|
||||
</para>
|
||||
</xsl:if>
|
||||
<xsl:if test="attr[@name = 'example']">
|
||||
<para>
|
||||
<emphasis>Example:</emphasis>
|
||||
<xsl:text> </xsl:text>
|
||||
<xsl:choose>
|
||||
<xsl:when test="attr[@name = 'example']/attrs[attr[@name = '_type' and string[@value = 'literalExample']]]">
|
||||
<programlisting><xsl:value-of select="attr[@name = 'example']/attrs/attr[@name = 'text']/string/@value" /></programlisting>
|
||||
</xsl:when>
|
||||
<xsl:otherwise>
|
||||
<xsl:apply-templates select="attr[@name = 'example']" mode="top" />
|
||||
</xsl:otherwise>
|
||||
</xsl:choose>
|
||||
</para>
|
||||
</xsl:if>
|
||||
|
||||
<xsl:if test="count(attr[@name = 'declarations']/list/*) != 0">
|
||||
<para>
|
||||
<emphasis>Declared by:</emphasis>
|
||||
</para>
|
||||
<xsl:apply-templates select="attr[@name = 'declarations']" />
|
||||
</xsl:if>
|
||||
<xsl:if test="count(attr[@name = 'declarations']/list/*) != 0">
|
||||
<para>
|
||||
<emphasis>Declared by:</emphasis>
|
||||
</para>
|
||||
<xsl:apply-templates select="attr[@name = 'declarations']" />
|
||||
</xsl:if>
|
||||
|
||||
<xsl:if test="count(attr[@name = 'definitions']/list/*) != 0">
|
||||
<para>
|
||||
<emphasis>Defined by:</emphasis>
|
||||
</para>
|
||||
<xsl:apply-templates select="attr[@name = 'definitions']" />
|
||||
</xsl:if>
|
||||
<xsl:if test="count(attr[@name = 'definitions']/list/*) != 0">
|
||||
<para>
|
||||
<emphasis>Defined by:</emphasis>
|
||||
</para>
|
||||
<xsl:apply-templates select="attr[@name = 'definitions']" />
|
||||
</xsl:if>
|
||||
|
||||
</listitem>
|
||||
</listitem>
|
||||
|
||||
</varlistentry>
|
||||
|
||||
|
@ -1,19 +1,18 @@
|
||||
<part xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
version="5.0"
|
||||
xml:id="ch-release-notes">
|
||||
<appendix xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
version="5.0"
|
||||
xml:id="ch-release-notes">
|
||||
|
||||
<title>Release Notes</title>
|
||||
|
||||
<partintro>
|
||||
<para>This section lists the release notes for each stable version of NixOS
and the current unstable revision.</para>
|
||||
</partintro>
|
||||
|
||||
<xi:include href="rl-unstable.xml" />
|
||||
<xi:include href="rl-1509.xml" />
|
||||
<xi:include href="rl-1412.xml" />
|
||||
<xi:include href="rl-1404.xml" />
|
||||
<xi:include href="rl-1310.xml" />
|
||||
|
||||
</part>
|
||||
</appendix>
|
||||
|
@ -1,11 +1,11 @@
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
version="5.0"
|
||||
xml:id="sec-release-13.10">
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
version="5.0"
|
||||
xml:id="sec-release-13.10">
|
||||
|
||||
<title>Release 13.10 (“Aardvark”, 2013/10/31)</title>
|
||||
|
||||
<para>This is the first stable release branch of NixOS.</para>
|
||||
|
||||
</chapter>
|
||||
</section>
|
||||
|
@ -1,8 +1,8 @@
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
version="5.0"
|
||||
xml:id="sec-release-14.04">
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
version="5.0"
|
||||
xml:id="sec-release-14.04">
|
||||
|
||||
<title>Release 14.04 (“Baboon”, 2014/04/30)</title>
|
||||
|
||||
@ -157,4 +157,4 @@ networking.firewall.enable = false;
|
||||
|
||||
</para>
|
||||
|
||||
</chapter>
|
||||
</section>
|
||||
|
@ -1,8 +1,8 @@
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
version="5.0"
|
||||
xml:id="sec-release-14.12">
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
version="5.0"
|
||||
xml:id="sec-release-14.12">
|
||||
|
||||
<title>Release 14.12 (“Caterpillar”, 2014/12/30)</title>
|
||||
|
||||
@ -174,4 +174,4 @@ now.</para></listitem>
|
||||
|
||||
</para>
|
||||
|
||||
</chapter>
|
||||
</section>
|
||||
|
491
nixos/doc/manual/release-notes/rl-1509.xml
Normal file
491
nixos/doc/manual/release-notes/rl-1509.xml
Normal file
@ -0,0 +1,491 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
version="5.0"
|
||||
xml:id="sec-release-15.09">
|
||||
|
||||
<title>Release 15.09 (“Dingo”, 2015/09/30)</title>
|
||||
|
||||
<para>In addition to numerous new and upgraded packages, this release
|
||||
has the following highlights:</para>
|
||||
|
||||
<itemizedlist>
|
||||
|
||||
<listitem>
|
||||
<para>The <link xlink:href="http://haskell.org/">Haskell</link>
|
||||
packages infrastructure has been re-designed from the ground up
|
||||
("Haskell NG"). NixOS now distributes the latest version
|
||||
of every single package registered on <link
|
||||
xlink:href="http://hackage.haskell.org/">Hackage</link> -- well in
|
||||
excess of 8,000 Haskell packages. Detailed instructions on how to
|
||||
use that infrastructure can be found in the <link
|
||||
xlink:href="http://nixos.org/nixpkgs/manual/#users-guide-to-the-haskell-infrastructure">User's
|
||||
Guide to the Haskell Infrastructure</link>. Users migrating from an
|
||||
earlier release may find helpful information below, in the list of
|
||||
backwards-incompatible changes. Furthermore, we distribute 51(!)
|
||||
additional Haskell package sets that provide every single <link
|
||||
xlink:href="http://www.stackage.org/">LTS Haskell</link> release
|
||||
since version 0.0 as well as the most recent <link
|
||||
xlink:href="http://www.stackage.org/">Stackage Nightly</link>
|
||||
snapshot. The announcement <link
|
||||
xlink:href="http://lists.science.uu.nl/pipermail/nix-dev/2015-September/018138.html">"Full
|
||||
Stackage Support in Nixpkgs"</link> gives additional
|
||||
details.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Nix has been updated to version 1.10, which among other
|
||||
improvements enables cryptographic signatures on binary caches for
|
||||
improved security.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>You can now keep your NixOS system up to date automatically
|
||||
by setting
|
||||
|
||||
<programlisting>
|
||||
system.autoUpgrade.enable = true;
|
||||
</programlisting>
|
||||
|
||||
This will cause the system to periodically check for updates in
|
||||
your current channel and run <command>nixos-rebuild</command>.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>This release is based on Glibc 2.21, GCC 4.9 and Linux
|
||||
3.18.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>GNOME has been upgraded to 3.16.
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Xfce has been upgraded to 4.12.
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>KDE 5 has been upgraded to KDE Frameworks 5.10,
|
||||
Plasma 5.3.2 and Applications 15.04.3.
|
||||
KDE 4 has been updated to kdelibs-4.14.10.
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>E19 has been upgraded to 0.16.8.15.
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
</itemizedlist>
|
||||
|
||||
|
||||
<para>The following new services were added since the last release:
|
||||
|
||||
<itemizedlist>
|
||||
<listitem><para><literal>services/mail/exim.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/apache-kafka.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/canto-daemon.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/confd.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/devmon.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/gitit.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/ihaskell.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/mbpfan.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/mediatomb.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/mwlib.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/parsoid.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/plex.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/ripple-rest.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/ripple-data-api.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/subsonic.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/misc/sundtek.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/monitoring/cadvisor.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/monitoring/das_watchdog.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/monitoring/grafana.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/monitoring/riemann-tools.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/monitoring/teamviewer.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/network-filesystems/u9fs.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/aiccu.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/asterisk.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/bird.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/charybdis.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/docker-registry-server.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/fan.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/firefox/sync-server.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/gateone.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/heyefi.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/i2p.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/lambdabot.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/mstpd.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/nix-serve.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/nylon.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/racoon.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/skydns.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/shout.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/softether.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/sslh.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/tinc.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/tlsdated.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/tox-bootstrapd.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/tvheadend.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/networking/zerotierone.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/scheduling/marathon.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/security/fprintd.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/security/hologram.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/security/munge.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/system/cloud-init.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/web-servers/shellinabox.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/web-servers/uwsgi.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/x11/unclutter.nix</literal></para></listitem>
|
||||
<listitem><para><literal>services/x11/display-managers/sddm.nix</literal></para></listitem>
|
||||
<listitem><para><literal>system/boot/coredump.nix</literal></para></listitem>
|
||||
<listitem><para><literal>system/boot/loader/loader.nix</literal></para></listitem>
|
||||
<listitem><para><literal>system/boot/loader/generic-extlinux-compatible</literal></para></listitem>
|
||||
<listitem><para><literal>system/boot/networkd.nix</literal></para></listitem>
|
||||
<listitem><para><literal>system/boot/resolved.nix</literal></para></listitem>
|
||||
<listitem><para><literal>system/boot/timesyncd.nix</literal></para></listitem>
|
||||
<listitem><para><literal>tasks/filesystems/exfat.nix</literal></para></listitem>
|
||||
<listitem><para><literal>tasks/filesystems/ntfs.nix</literal></para></listitem>
|
||||
<listitem><para><literal>tasks/filesystems/vboxsf.nix</literal></para></listitem>
|
||||
<listitem><para><literal>virtualisation/virtualbox-host.nix</literal></para></listitem>
|
||||
<listitem><para><literal>virtualisation/vmware-guest.nix</literal></para></listitem>
|
||||
<listitem><para><literal>virtualisation/xen-dom0.nix</literal></para></listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
|
||||
|
||||
<para>When upgrading from a previous release, please be aware of the
|
||||
following incompatible changes:
|
||||
|
||||
<itemizedlist>
|
||||
|
||||
<listitem><para><command>sshd</command> no longer supports DSA and ECDSA
|
||||
host keys by default. If you have existing systems with such host keys
|
||||
and want to continue to use them, please set
|
||||
|
||||
<programlisting>
|
||||
system.stateVersion = "14.12";
|
||||
</programlisting>
|
||||
|
||||
The new option <option>system.stateVersion</option> ensures that
|
||||
certain configuration changes that could break existing systems (such
|
||||
as the <command>sshd</command> host key setting) will maintain
|
||||
compatibility with the specified NixOS release. NixOps sets the state
|
||||
version of existing deployments automatically.</para></listitem>
|
||||
|
||||
<listitem><para><command>cron</command> is no longer enabled by
|
||||
default, unless you have a non-empty
|
||||
<option>services.cron.systemCronJobs</option>. To force
|
||||
<command>cron</command> to be enabled, set
|
||||
<option>services.cron.enable = true</option>.</para></listitem>
|
||||
|
||||
<listitem><para>Nix now requires binary caches to be cryptographically
|
||||
signed. If you have unsigned binary caches that you want to continue
|
||||
to use, you should set <option>nix.requireSignedBinaryCaches =
|
||||
false</option>.</para></listitem>
|
||||
|
||||
<listitem><para>Steam no longer needs root rights to work. Instead of using
<literal>*-steam-chrootenv</literal>, you should now just run <literal>steam</literal>.
The <literal>steamChrootEnv</literal> package was renamed to <literal>steam</literal>,
and the old <literal>steam</literal> package to <literal>steamOriginal</literal>.
</para></listitem>
|
||||
|
||||
<listitem><para>CMPlayer has been renamed to bomi upstream. Package
|
||||
<literal>cmplayer</literal> was accordingly renamed to
|
||||
<literal>bomi</literal> </para></listitem>
|
||||
|
||||
<listitem><para>Atom Shell has been renamed to Electron upstream. Package <literal>atom-shell</literal>
|
||||
was accordingly renamed to <literal>electron</literal>
|
||||
</para></listitem>
|
||||
|
||||
<listitem><para>Elm is not released on Hackage anymore. You should now use <literal>elmPackages.elm</literal>
|
||||
which contains the latest Elm platform.</para></listitem>
|
||||
|
||||
<listitem>
|
||||
<para>The CUPS printing service has been updated to version
|
||||
<literal>2.0.2</literal>. Furthermore its systemd service has been
|
||||
renamed to <literal>cups.service</literal>.</para>
|
||||
|
||||
<para>Local printers are no longer shared or advertised by
|
||||
default. This behavior can be changed by enabling
|
||||
<option>services.printing.defaultShared</option> or
|
||||
<option>services.printing.browsing</option> respectively.</para>
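<para>A minimal sketch of restoring the previous behaviour, using the
options named above:</para>
<programlisting>
services.printing.enable = true;
services.printing.defaultShared = true;  # share local printers again
services.printing.browsing = true;       # advertise shared printers on the network
</programlisting>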
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
The VirtualBox host and guest options have been named more
consistently. They can now be found in
|
||||
<option>virtualisation.virtualbox.host.*</option> instead of
|
||||
<option>services.virtualboxHost.*</option> and
|
||||
<option>virtualisation.virtualbox.guest.*</option> instead of
|
||||
<option>services.virtualboxGuest.*</option>.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Also, there now is support for the <literal>vboxsf</literal> file
|
||||
system using the <option>fileSystems</option> configuration
|
||||
attribute. An example of how this can be used in a configuration:
|
||||
|
||||
<programlisting>
|
||||
fileSystems."/shiny" = {
|
||||
device = "myshinysharedfolder";
|
||||
fsType = "vboxsf";
|
||||
};
|
||||
</programlisting>
|
||||
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
"<literal>nix-env -qa</literal>" no longer discovers
|
||||
Haskell packages by name. The only packages visible in the global
|
||||
scope are <literal>ghc</literal>, <literal>cabal-install</literal>,
|
||||
and <literal>stack</literal>, but all other packages are hidden. The
|
||||
reason for this inconvenience is the sheer size of the Haskell
|
||||
package set. Name-based lookups are expensive, and most
|
||||
<literal>nix-env -qa</literal> operations would become much slower
|
||||
if we'd add the entire Hackage database into the top level attribute
|
||||
set. Instead, the list of Haskell packages can be displayed by
|
||||
running:
|
||||
</para>
|
||||
<programlisting>
|
||||
nix-env -f "<nixpkgs>" -qaP -A haskellPackages
|
||||
</programlisting>
|
||||
<para>
|
||||
Executable programs written in Haskell can be installed with:
|
||||
</para>
|
||||
<programlisting>
|
||||
nix-env -f "<nixpkgs>" -iA haskellPackages.pandoc
|
||||
</programlisting>
|
||||
<para>
|
||||
Installing Haskell <emphasis>libraries</emphasis> this way, however, is no
|
||||
longer supported. See the next item for more details.
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
Previous versions of NixOS came with a feature called
|
||||
<literal>ghc-wrapper</literal>, a small script that allowed GHC to
|
||||
transparently pick up on libraries installed in the user's profile. This
|
||||
feature has been deprecated; <literal>ghc-wrapper</literal> was removed
|
||||
from the distribution. The proper way to register Haskell libraries with
|
||||
the compiler now is the <literal>haskellPackages.ghcWithPackages</literal>
|
||||
function. The <link
|
||||
xlink:href="http://nixos.org/nixpkgs/manual/#users-guide-to-the-haskell-infrastructure">User's
|
||||
Guide to the Haskell Infrastructure</link> provides more information about
|
||||
this subject.
|
||||
</para>
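<para>A sketch of the new style, using a hypothetical
<literal>myHaskellEnv</literal> attribute defined in
<filename>~/.nixpkgs/config.nix</filename>:</para>
<programlisting>
{
  packageOverrides = pkgs: {
    myHaskellEnv = pkgs.haskellPackages.ghcWithPackages
      (p: with p; [ mtl lens ]);
  };
}
</programlisting>
<para>which can then be installed with <literal>nix-env -f
"<nixpkgs>" -iA myHaskellEnv</literal>.</para>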
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
All Haskell builds that have been generated with version 1.x of
|
||||
the <literal>cabal2nix</literal> utility are now invalid and need
|
||||
to be re-generated with a current version of
|
||||
<literal>cabal2nix</literal> to function. The most recent version
|
||||
of this tool can be installed by running
|
||||
<literal>nix-env -i cabal2nix</literal>.
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>haskellPackages</literal> set in Nixpkgs used to have a
|
||||
function attribute called <literal>extension</literal> that users
|
||||
could override in their <literal>~/.nixpkgs/config.nix</literal>
|
||||
files to configure additional attributes, etc. That function still
|
||||
exists, but it's now called <literal>overrides</literal>.
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
The OpenBLAS library has been updated to version
|
||||
<literal>0.2.14</literal>. Support for the
|
||||
<literal>x86_64-darwin</literal> platform was added. Dynamic
|
||||
architecture detection was enabled; OpenBLAS now selects
|
||||
microarchitecture-optimized routines at runtime, so optimal
|
||||
performance is achieved without the need to rebuild OpenBLAS
|
||||
locally. OpenBLAS has replaced ATLAS in most packages which use an
|
||||
optimized BLAS or LAPACK implementation.
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>phpfpm</literal> is now using the default PHP version
|
||||
(<literal>pkgs.php</literal>) instead of PHP 5.4 (<literal>pkgs.php54</literal>).
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>locate</literal> service no longer indexes the Nix store
|
||||
by default, preventing packages with potentially numerous versions from
|
||||
cluttering the output. Indexing the store can be activated by setting
|
||||
<option>services.locate.includeStore = true</option>.
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
The Nix expression search path (<envar>NIX_PATH</envar>) no longer
|
||||
contains <filename>/etc/nixos/nixpkgs</filename> by default. You
|
||||
can override <envar>NIX_PATH</envar> by setting
|
||||
<option>nix.nixPath</option>.
|
||||
</para>
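<para>A minimal sketch (the paths are illustrative):</para>
<programlisting>
nix.nixPath = [
  "nixpkgs=/my/checkout/of/nixpkgs"
  "nixos-config=/etc/nixos/configuration.nix"
];
</programlisting>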
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
Python 2.6 has been marked as broken (as it no longer receives
security updates from upstream).
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
Any use of module arguments such as <varname>pkgs</varname> to access
|
||||
library functions, or to define <literal>imports</literal> attributes
|
||||
will now lead to an infinite loop at evaluation time.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
In case of an infinite loop, use the <command>--show-trace</command>
|
||||
command line argument and read the line just above the error message.
|
||||
|
||||
<screen>
|
||||
$ nixos-rebuild build --show-trace
|
||||
…
|
||||
while evaluating the module argument `pkgs' in "/etc/nixos/my-module.nix":
|
||||
infinite recursion encountered
|
||||
</screen>
|
||||
</para>
|
||||
|
||||
|
||||
<para>
|
||||
Any use of <literal>pkgs.lib</literal> should be replaced by
<varname>lib</varname>, after adding it as an argument of the module. The
|
||||
following module
|
||||
|
||||
<programlisting>
|
||||
{ config, pkgs, ... }:
|
||||
|
||||
with pkgs.lib;
|
||||
|
||||
{
|
||||
options = {
|
||||
foo = mkOption { … };
|
||||
};
|
||||
config = mkIf config.foo { … };
|
||||
}
|
||||
</programlisting>
|
||||
|
||||
should be modified to look like:
|
||||
|
||||
<programlisting>
|
||||
{ config, pkgs, lib, ... }:
|
||||
|
||||
with lib;
|
||||
|
||||
{
|
||||
options = {
|
||||
foo = mkOption { <replaceable>option declaration</replaceable> };
|
||||
};
|
||||
config = mkIf config.foo { <replaceable>option definition</replaceable> };
|
||||
}
|
||||
</programlisting>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
When <varname>pkgs</varname> is used to download other projects to
|
||||
import their modules, and only in such cases, it should be replaced by
|
||||
<literal>(import <nixpkgs> {})</literal>. The following module
|
||||
|
||||
<programlisting>
|
||||
{ config, pkgs, ... }:
|
||||
|
||||
let
|
||||
myProject = pkgs.fetchurl {
|
||||
src = <replaceable>url</replaceable>;
|
||||
sha256 = <replaceable>hash</replaceable>;
|
||||
};
|
||||
in
|
||||
|
||||
{
|
||||
imports = [ "${myProject}/module.nix" ];
|
||||
}
|
||||
</programlisting>
|
||||
|
||||
should be modified to look like:
|
||||
|
||||
<programlisting>
|
||||
{ config, pkgs, ... }:
|
||||
|
||||
let
|
||||
myProject = (import <nixpkgs> {}).fetchurl {
|
||||
src = <replaceable>url</replaceable>;
|
||||
sha256 = <replaceable>hash</replaceable>;
|
||||
};
|
||||
in
|
||||
|
||||
{
|
||||
imports = [ "${myProject}/module.nix" ];
|
||||
}
|
||||
</programlisting>
|
||||
</para>
|
||||
|
||||
</listitem>
|
||||
|
||||
</itemizedlist>
|
||||
</para>
|
||||
|
||||
|
||||
<para>Other notable improvements:
|
||||
|
||||
<itemizedlist>
|
||||
|
||||
<listitem><para>The nixos and nixpkgs channels were unified,
|
||||
so one <emphasis>can</emphasis> use <literal>nix-env -iA nixos.bash</literal>
|
||||
instead of <literal>nix-env -iA nixos.pkgs.bash</literal>.
|
||||
See <link xlink:href="https://github.com/NixOS/nixpkgs/commit/2cd7c1f198">the commit</link> for details.
|
||||
</para></listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
Users running an SSH server who worry about the quality of their
|
||||
<literal>/etc/ssh/moduli</literal> file with respect to the
|
||||
<link
|
||||
xlink:href="https://stribika.github.io/2015/01/04/secure-secure-shell.html">vulnerabilities
|
||||
discovered in the Diffie-Hellman key exchange</link> can now
|
||||
replace OpenSSH's default version with one they generated
|
||||
themselves using the new
|
||||
<option>services.openssh.moduliFile</option> option.
|
||||
</para>
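<para>A sketch of how this might be used; the moduli file itself has to be
generated separately (e.g. with <command>ssh-keygen -G</command> followed by
<command>ssh-keygen -T</command>), and the path below is only an
example:</para>
<programlisting>
services.openssh.moduliFile = /etc/ssh/my-moduli;
</programlisting>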
</listitem>
|
||||
|
||||
<listitem> <para>
|
||||
A newly packaged TeX Live 2015 is provided in <literal>pkgs.texlive</literal>,
|
||||
split into 6500 nix packages. For basic user documentation see
|
||||
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/release-15.09/pkgs/tools/typesetting/tex/texlive-new/default.nix#L1"
|
||||
>the source</link>.
|
||||
Beware of <link xlink:href="https://github.com/NixOS/nixpkgs/issues/9757"
|
||||
>an issue</link> when installing a too large package set.
|
||||
|
||||
The plan is to deprecate, and possibly delete, the original TeX packages
before the next release.
</para>
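<para>A sketch of composing a custom TeX environment with the new
infrastructure, assuming the <literal>texlive.combine</literal> interface
described in the linked documentation:</para>
<programlisting>
texlive.combine {
  inherit (texlive) scheme-small collection-latexextra;
}
</programlisting>
</listitem>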
|
||||
|
||||
<listitem><para>
|
||||
<option>buildEnv.env</option> on all Python interpreters
|
||||
is now available for nix-shell interoperability.
</para>
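<para>For example, a hypothetical <filename>shell.nix</filename> built on top
of this could look like:</para>
<programlisting>
with import <nixpkgs> {};

(python.buildEnv.override {
  extraLibs = [ pythonPackages.requests ];
}).env
</programlisting>
</listitem>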
|
||||
</itemizedlist>
|
||||
|
||||
</para>
|
||||
|
||||
</section>
|
@ -1,55 +1,45 @@
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
version="5.0"
|
||||
xml:id="sec-release-unstable">
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
version="5.0"
|
||||
xml:id="sec-release-unstable">
|
||||
|
||||
<title>Unstable revision</title>
|
||||
|
||||
<para>In addition to numerous new and upgraded packages, this release has the following highlights:
|
||||
|
||||
<!--<itemizedlist>
|
||||
|
||||
</itemizedlist>-->
|
||||
</para>
|
||||
|
||||
<para>The following new services were added since the last release:
|
||||
|
||||
<!--<itemizedlist>
|
||||
|
||||
</itemizedlist>-->
|
||||
</para>
|
||||
<title>Unstable</title>
|
||||
|
||||
<para>When upgrading from a previous release, please be aware of the
|
||||
following incompatible changes:
|
||||
following incompatible changes:</para>
|
||||
|
||||
<itemizedlist>
|
||||
|
||||
<listitem><para>Steam no longer needs root rights to work. Instead of using
<literal>*-steam-chrootenv</literal>, you should now just run <literal>steam</literal>.
The <literal>steamChrootEnv</literal> package was renamed to <literal>steam</literal>,
and the old <literal>steam</literal> package to <literal>steamOriginal</literal>.
</para></listitem>
|
||||
|
||||
<listitem><para>CMPlayer has been renamed to bomi upstream. Package <literal>cmplayer</literal>
|
||||
was accordingly renamed to <literal>bomi</literal>
|
||||
</para></listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
The default <literal>NIX_PATH</literal> for NixOS now includes
|
||||
<literal>/nix/var/nix/profiles/per-user/root/channels</literal>, so it's
|
||||
easy to add custom channels.
|
||||
<listitem>
|
||||
<para><command>wmiiSnap</command> has been replaced with
|
||||
<command>wmii_hg</command>, but
|
||||
<command>services.xserver.windowManager.wmii.enable</command> has
|
||||
been updated respectively so this only affects you if you have
|
||||
explicitly installed <command>wmiiSnap</command>.
|
||||
</para>
|
||||
<para>
|
||||
Moreover, whenever a <command>nixos-rebuild <action>
|
||||
--upgrade</command> is issued, every channel that includes a file
|
||||
called <filename>.update-on-nixos-rebuild</filename> will be upgraded
|
||||
alongside the <literal>nixos</literal> channel.
|
||||
</para>
|
||||
</listitem>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para><command>wmiimenu</command> is removed, as it has been
|
||||
removed by the developers upstream. Use <command>wimenu</command>
|
||||
from the <command>wmii-hg</command> package.</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>Gitit is no longer automatically added to the module list in
|
||||
NixOS and as such there will not be any manual entries for it. You
|
||||
will need to add an import statement to your NixOS configuration
|
||||
in order to use it, e.g.
|
||||
|
||||
<programlisting><![CDATA[
|
||||
{
|
||||
imports = [ <nixos/modules/services/misc/gitit.nix> ];
|
||||
}
|
||||
]]></programlisting>
|
||||
|
||||
will include the Gitit service configuration options.</para>
|
||||
</listitem>
|
||||
|
||||
</itemizedlist>
|
||||
</para>
|
||||
|
||||
</chapter>
|
||||
</section>
|
||||
|
@ -1,6 +1,6 @@
|
||||
{ system, minimal ? false }:
|
||||
|
||||
let pkgs = import ./nixpkgs.nix { config = {}; inherit system; }; in
|
||||
let pkgs = import ../.. { config = {}; inherit system; }; in
|
||||
|
||||
with pkgs.lib;
|
||||
with import ../lib/qemu-flags.nix;
|
||||
@ -41,22 +41,22 @@ rec {
|
||||
|
||||
machines = attrNames nodes;
|
||||
|
||||
machinesNumbered = zipTwoLists machines (range 1 254);
|
||||
machinesNumbered = zipLists machines (range 1 254);
|
||||
|
||||
nodes_ = flip map machinesNumbered (m: nameValuePair m.first
|
||||
nodes_ = flip map machinesNumbered (m: nameValuePair m.fst
|
||||
[ ( { config, pkgs, nodes, ... }:
|
||||
let
|
||||
interfacesNumbered = zipTwoLists config.virtualisation.vlans (range 1 255);
|
||||
interfaces = flip map interfacesNumbered ({ first, second }:
|
||||
nameValuePair "eth${toString second}" { ip4 =
|
||||
[ { address = "192.168.${toString first}.${toString m.second}";
|
||||
interfacesNumbered = zipLists config.virtualisation.vlans (range 1 255);
|
||||
interfaces = flip map interfacesNumbered ({ fst, snd }:
|
||||
nameValuePair "eth${toString snd}" { ip4 =
|
||||
[ { address = "192.168.${toString fst}.${toString m.snd}";
|
||||
prefixLength = 24;
|
||||
} ];
|
||||
});
|
||||
in
|
||||
{ key = "ip-address";
|
||||
config =
|
||||
{ networking.hostName = m.first;
|
||||
{ networking.hostName = m.fst;
|
||||
|
||||
networking.interfaces = listToAttrs interfaces;
|
||||
|
||||
@ -76,11 +76,11 @@ rec {
|
||||
|
||||
virtualisation.qemu.options =
|
||||
flip map interfacesNumbered
|
||||
({ first, second }: qemuNICFlags second first m.second);
|
||||
({ fst, snd }: qemuNICFlags snd fst m.snd);
|
||||
};
|
||||
}
|
||||
)
|
||||
(getAttr m.first nodes)
|
||||
(getAttr m.fst nodes)
|
||||
] );
|
||||
|
||||
in listToAttrs nodes_;
|
||||
|
@ -1,6 +0,0 @@
|
||||
{ system ? builtins.currentSystem }:
|
||||
|
||||
{ pkgs =
|
||||
(import nixpkgs/default.nix { inherit system; })
|
||||
// { recurseForDerivations = true; };
|
||||
}
|
@ -17,6 +17,8 @@
|
||||
baseModules ? import ../modules/module-list.nix
|
||||
, # !!! See comment about args in lib/modules.nix
|
||||
extraArgs ? {}
|
||||
, # !!! See comment about args in lib/modules.nix
|
||||
specialArgs ? {}
|
||||
, modules
|
||||
, # !!! See comment about check in lib/modules.nix
|
||||
check ? true
|
||||
@ -47,15 +49,11 @@ in rec {
|
||||
inherit prefix check;
|
||||
modules = modules ++ extraModules ++ baseModules ++ [ pkgsModule ];
|
||||
args = extraArgs;
|
||||
specialArgs = { modulesPath = ../modules; } // specialArgs;
|
||||
}) config options;
|
||||
|
||||
# These are the extra arguments passed to every module. In
|
||||
# particular, Nixpkgs is passed through the "pkgs" argument.
|
||||
# FIXME: we enable config.allowUnfree to make packages like
|
||||
# nvidia-x11 available. This isn't a problem because if the user has
|
||||
# ‘nixpkgs.config.allowUnfree = false’, then evaluation will fail on
|
||||
# the 64-bit package anyway. However, it would be cleaner to respect
|
||||
# nixpkgs.config here.
|
||||
extraArgs = extraArgs_ // {
|
||||
inherit modules baseModules;
|
||||
};
|
||||
|
27
nixos/lib/make-channel.nix
Normal file
27
nixos/lib/make-channel.nix
Normal file
@ -0,0 +1,27 @@
|
||||
{ pkgs, nixpkgs, version, versionSuffix }:
|
||||
|
||||
pkgs.releaseTools.makeSourceTarball {
|
||||
name = "nixos-channel";
|
||||
|
||||
src = nixpkgs;
|
||||
|
||||
officialRelease = false; # FIXME: fix this in makeSourceTarball
|
||||
inherit version versionSuffix;
|
||||
|
||||
buildInputs = [ pkgs.nix ];
|
||||
|
||||
distPhase = ''
|
||||
rm -rf .git
|
||||
echo -n $VERSION_SUFFIX > .version-suffix
|
||||
echo -n ${nixpkgs.rev or nixpkgs.shortRev} > .git-revision
|
||||
releaseName=nixos-$VERSION$VERSION_SUFFIX
|
||||
mkdir -p $out/tarballs
|
||||
cp -prd . ../$releaseName
|
||||
chmod -R u+w ../$releaseName
|
||||
ln -s . ../$releaseName/nixpkgs # hack to make ‘<nixpkgs>’ work
|
||||
NIX_STATE_DIR=$TMPDIR nix-env -f ../$releaseName/default.nix -qaP --meta --xml \* > /dev/null
|
||||
cd ..
|
||||
chmod -R u+w $releaseName
|
||||
tar cfJ $out/tarballs/$releaseName.tar.xz $releaseName
|
||||
'';
|
||||
}
|
115
nixos/lib/make-disk-image.nix
Normal file
115
nixos/lib/make-disk-image.nix
Normal file
@ -0,0 +1,115 @@
|
||||
{ pkgs
|
||||
, lib
|
||||
|
||||
, # The NixOS configuration to be installed onto the disk image.
|
||||
config
|
||||
|
||||
, # The size of the disk, in megabytes.
|
||||
diskSize
|
||||
|
||||
, # Whether the disk should be partitioned (with a single partition
|
||||
# containing the root filesystem) or contain the root filesystem
|
||||
# directly.
|
||||
partitioned ? true
|
||||
|
||||
, # The root file system type.
|
||||
fsType ? "ext4"
|
||||
|
||||
, # The initial NixOS configuration file to be copied to
|
||||
# /etc/nixos/configuration.nix.
|
||||
configFile ? null
|
||||
|
||||
, # Shell code executed after the VM has finished.
|
||||
postVM ? ""
|
||||
|
||||
}:
|
||||
|
||||
with lib;
|
||||
|
||||
pkgs.vmTools.runInLinuxVM (
|
||||
pkgs.runCommand "nixos-disk-image"
|
||||
{ preVM =
|
||||
''
|
||||
mkdir $out
|
||||
diskImage=$out/nixos.img
|
||||
${pkgs.vmTools.qemu}/bin/qemu-img create -f raw $diskImage "${toString diskSize}M"
|
||||
mv closure xchg/
|
||||
'';
|
||||
buildInputs = [ pkgs.utillinux pkgs.perl pkgs.e2fsprogs pkgs.parted ];
|
||||
exportReferencesGraph =
|
||||
[ "closure" config.system.build.toplevel ];
|
||||
inherit postVM;
|
||||
}
|
||||
''
|
||||
${if partitioned then ''
|
||||
# Create a single / partition.
|
||||
parted /dev/vda mklabel msdos
|
||||
parted /dev/vda -- mkpart primary ext2 1M -1s
|
||||
. /sys/class/block/vda1/uevent
|
||||
mknod /dev/vda1 b $MAJOR $MINOR
|
||||
rootDisk=/dev/vda1
|
||||
'' else ''
|
||||
rootDisk=/dev/vda
|
||||
''}
|
||||
|
||||
# Create an empty filesystem and mount it.
|
||||
mkfs.${fsType} -L nixos $rootDisk
|
||||
${optionalString (fsType == "ext4") ''
|
||||
tune2fs -c 0 -i 0 $rootDisk
|
||||
''}
|
||||
mkdir /mnt
|
||||
mount $rootDisk /mnt
|
||||
|
||||
# The initrd expects these directories to exist.
|
||||
mkdir /mnt/dev /mnt/proc /mnt/sys
|
||||
|
||||
mount -o bind /proc /mnt/proc
|
||||
mount -o bind /dev /mnt/dev
|
||||
mount -o bind /sys /mnt/sys
|
||||
|
||||
# Copy all paths in the closure to the filesystem.
|
||||
storePaths=$(perl ${pkgs.pathsFromGraph} /tmp/xchg/closure)
|
||||
|
||||
mkdir -p /mnt/nix/store
|
||||
echo "copying everything (will take a while)..."
|
||||
set -f
|
||||
cp -prd $storePaths /mnt/nix/store/
|
||||
|
||||
# Register the paths in the Nix database.
|
||||
printRegistration=1 perl ${pkgs.pathsFromGraph} /tmp/xchg/closure | \
|
||||
chroot /mnt ${config.nix.package}/bin/nix-store --load-db --option build-users-group ""
|
||||
|
||||
# Add missing size/hash fields to the database. FIXME:
|
||||
# exportReferencesGraph should provide these directly.
|
||||
chroot /mnt ${config.nix.package}/bin/nix-store --verify --check-contents
|
||||
|
||||
# Create the system profile to allow nixos-rebuild to work.
|
||||
chroot /mnt ${config.nix.package}/bin/nix-env --option build-users-group "" \
|
||||
-p /nix/var/nix/profiles/system --set ${config.system.build.toplevel}
|
||||
|
||||
# `nixos-rebuild' requires an /etc/NIXOS.
|
||||
mkdir -p /mnt/etc
|
||||
touch /mnt/etc/NIXOS
|
||||
|
||||
# `switch-to-configuration' requires a /bin/sh
|
||||
mkdir -p /mnt/bin
|
||||
ln -s ${config.system.build.binsh}/bin/sh /mnt/bin/sh
|
||||
|
||||
# Install a configuration.nix.
|
||||
mkdir -p /mnt/etc/nixos
|
||||
${optionalString (configFile != null) ''
|
||||
cp ${configFile} /mnt/etc/nixos/configuration.nix
|
||||
''}
|
||||
|
||||
# Generate the GRUB menu.
|
||||
ln -s vda /dev/xvda
|
||||
ln -s vda /dev/sda
|
||||
chroot /mnt ${config.system.build.toplevel}/bin/switch-to-configuration boot
|
||||
|
||||
umount /mnt/proc /mnt/dev /mnt/sys
|
||||
umount /mnt
|
||||
|
||||
# Do an fsck to make sure resize2fs works.
|
||||
fsck.${fsType} -f -y $rootDisk
|
||||
''
|
||||
)
|
88
nixos/lib/make-ext4-fs.nix
Normal file
88
nixos/lib/make-ext4-fs.nix
Normal file
@ -0,0 +1,88 @@
|
||||
# Builds an ext4 image containing a populated /nix/store with the closure
|
||||
# of store paths passed in the storePaths parameter. The generated image
|
||||
# is sized to only fit its contents, with the expectation that a script
|
||||
# resizes the filesystem at boot time.
|
||||
{ pkgs
|
||||
, storePaths
|
||||
, volumeLabel
|
||||
}:
|
||||
|
||||
pkgs.stdenv.mkDerivation {
|
||||
name = "ext4-fs.img";
|
||||
|
||||
buildInputs = with pkgs; [e2fsprogs libfaketime perl];
|
||||
|
||||
# For obtaining the closure of `storePaths'.
|
||||
exportReferencesGraph =
|
||||
map (x: [("closure-" + baseNameOf x) x]) storePaths;
|
||||
|
||||
buildCommand =
|
||||
''
|
||||
# Add the closures of the top-level store objects.
|
||||
storePaths=$(perl ${pkgs.pathsFromGraph} closure-*)
|
||||
|
||||
# Also include a manifest of the closures in a format suitable
|
||||
# for nix-store --load-db.
|
||||
printRegistration=1 perl ${pkgs.pathsFromGraph} closure-* > nix-path-registration
|
||||
|
||||
# Make a crude approximation of the size of the target image.
|
||||
# If the script starts failing, increase the fudge factors here.
|
||||
numInodes=$(find $storePaths | wc -l)
|
||||
numDataBlocks=$(du -c -B 4096 --apparent-size $storePaths | awk '$2 == "total" { print int($1 * 1.03) }')
|
||||
bytes=$((2 * 4096 * $numInodes + 4096 * $numDataBlocks))
|
||||
echo "Creating an EXT4 image of $bytes bytes (numInodes=$numInodes, numDataBlocks=$numDataBlocks)"
|
||||
|
||||
truncate -s $bytes $out
|
||||
faketime "1970-01-01 00:00:00" mkfs.ext4 -L ${volumeLabel} -U 44444444-4444-4444-8888-888888888888 $out
|
||||
|
||||
# Populate the image contents by piping a bunch of commands to the `debugfs` tool from e2fsprogs.
|
||||
# For example, to copy /nix/store/abcd...efg-coreutils-8.23/bin/sleep:
|
||||
# cd /nix/store/abcd...efg-coreutils-8.23/bin
|
||||
# write /nix/store/abcd...efg-coreutils-8.23/bin/sleep sleep
|
||||
# sif sleep mode 040555
|
||||
# sif sleep gid 30000
|
||||
# In particular, debugfs doesn't handle absolute target paths; you have to 'cd' in the virtual
|
||||
# filesystem first. Likewise the intermediate directories must already exist (using `find`
|
||||
# handles that for us). And when setting the file's permissions, the inode type flags (__S_IFDIR,
|
||||
# __S_IFREG) need to be set as well.
|
||||
(
|
||||
echo write nix-path-registration nix-path-registration
|
||||
echo mkdir nix
|
||||
echo cd /nix
|
||||
echo mkdir store
|
||||
|
||||
# XXX: This explodes in exciting ways if anything in /nix/store has a space in it.
|
||||
find $storePaths -printf '%y %f %h %m\n'| while read -r type file dir perms; do
|
||||
# echo "TYPE=$type DIR=$dir FILE=$file PERMS=$perms" >&2
|
||||
|
||||
echo "cd $dir"
|
||||
case $type in
|
||||
d)
|
||||
echo "mkdir $file"
|
||||
echo sif $file mode $((040000 | 0$perms)) # magic constant is __S_IFDIR
|
||||
;;
|
||||
f)
|
||||
echo "write $dir/$file $file"
|
||||
echo sif $file mode $((0100000 | 0$perms)) # magic constant is __S_IFREG
|
||||
;;
|
||||
l)
|
||||
echo "symlink $file $(readlink "$dir/$file")"
|
||||
;;
|
||||
*)
|
||||
echo "Unknown entry: $type $dir $file $perms" >&2
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
|
||||
echo sif $file gid 30000 # chgrp to nixbld
|
||||
done
|
||||
) | faketime "1970-01-01 00:00:00" debugfs -w $out -f /dev/stdin > errorlog 2>&1
|
||||
|
||||
# The debugfs tool doesn't terminate on error nor exit with a non-zero status. Check manually.
|
||||
if egrep -q 'Could not allocate|File not found' errorlog; then
|
||||
cat errorlog
|
||||
echo "--- Failed to create EXT4 image of $bytes bytes (numInodes=$numInodes, numDataBlocks=$numDataBlocks) ---"
|
||||
return 1
|
||||
fi
|
||||
'';
|
||||
}
|
@ -1,8 +0,0 @@
|
||||
/* Terrible backward compatibility hack to get the path to Nixpkgs
|
||||
from here. Usually, that's the relative path ‘../..’. However,
|
||||
when using the NixOS channel, <nixos> resolves to a symlink to
|
||||
nixpkgs/nixos, so ‘../..’ doesn't resolve to the top-level Nixpkgs
|
||||
directory but one above it. So check for that situation. */
|
||||
if builtins.pathExists ../../.version then import ../..
|
||||
else if builtins.pathExists ../../nixpkgs then import ../../nixpkgs
|
||||
else abort "Can't find Nixpkgs, please set ‘NIX_PATH=nixpkgs=/path/to/nixpkgs’."
|
@ -9,6 +9,7 @@ use FileHandle;
|
||||
use Cwd;
|
||||
use File::Basename;
|
||||
use File::Path qw(make_path);
|
||||
use File::Slurp;
|
||||
|
||||
|
||||
my $showGraphics = defined $ENV{'DISPLAY'};
|
||||
@ -20,7 +21,7 @@ sub new {
|
||||
my ($class, $args) = @_;
|
||||
|
||||
my $startCommand = $args->{startCommand};
|
||||
|
||||
|
||||
my $name = $args->{name};
|
||||
if (!$name) {
|
||||
$startCommand =~ /run-(.*)-vm$/ if defined $startCommand;
|
||||
@ -33,7 +34,7 @@ sub new {
|
||||
"qemu-kvm -m 384 " .
|
||||
"-net nic,model=virtio \$QEMU_OPTS ";
|
||||
my $iface = $args->{hdaInterface} || "virtio";
|
||||
$startCommand .= "-drive file=" . Cwd::abs_path($args->{hda}) . ",if=$iface,boot=on,werror=report "
|
||||
$startCommand .= "-drive file=" . Cwd::abs_path($args->{hda}) . ",if=$iface,werror=report "
|
||||
if defined $args->{hda};
|
||||
$startCommand .= "-cdrom $args->{cdrom} "
|
||||
if defined $args->{cdrom};
|
||||
@ -42,8 +43,6 @@ sub new {
|
||||
$startCommand .= "-bios $args->{bios} "
|
||||
if defined $args->{bios};
|
||||
$startCommand .= $args->{qemuFlags} || "";
|
||||
} else {
|
||||
$startCommand = Cwd::abs_path $startCommand;
|
||||
}
|
||||
|
||||
my $tmpDir = $ENV{'TMPDIR'} || "/tmp";
|
||||
@ -170,7 +169,7 @@ sub start {
|
||||
|
||||
eval {
|
||||
local $SIG{CHLD} = sub { die "QEMU died prematurely\n"; };
|
||||
|
||||
|
||||
# Wait until QEMU connects to the monitor.
|
||||
accept($self->{monitor}, $monitorS) or die;
|
||||
|
||||
@ -181,11 +180,11 @@ sub start {
|
||||
$self->{socket}->autoflush(1);
|
||||
};
|
||||
die "$@" if $@;
|
||||
|
||||
|
||||
$self->waitForMonitorPrompt;
|
||||
|
||||
$self->log("QEMU running (pid $pid)");
|
||||
|
||||
|
||||
$self->{pid} = $pid;
|
||||
$self->{booted} = 1;
|
||||
}
|
||||
@ -240,7 +239,7 @@ sub connect {
|
||||
alarm 300;
|
||||
readline $self->{socket} or die "the VM quit before connecting\n";
|
||||
alarm 0;
|
||||
|
||||
|
||||
$self->log("connected to guest root shell");
|
||||
$self->{connected} = 1;
|
||||
|
||||
@ -269,7 +268,7 @@ sub isUp {
|
||||
|
||||
sub execute_ {
|
||||
my ($self, $command) = @_;
|
||||
|
||||
|
||||
$self->connect;
|
||||
|
||||
print { $self->{socket} } ("( $command ); echo '|!=EOF' \$?\n");
|
||||
@ -452,7 +451,7 @@ sub shutdown {
|
||||
sub crash {
|
||||
my ($self) = @_;
|
||||
return unless $self->{booted};
|
||||
|
||||
|
||||
$self->log("forced crash");
|
||||
|
||||
$self->sendMonitorCommand("quit");
|
||||
@ -493,6 +492,44 @@ sub screenshot {
|
||||
}
|
||||
|
||||
|
||||
# Take a screenshot and return the result as text using optical character
|
||||
# recognition.
|
||||
sub getScreenText {
|
||||
my ($self) = @_;
|
||||
|
||||
system("command -v tesseract &> /dev/null") == 0
|
||||
or die "getScreenText used but enableOCR is false";
|
||||
|
||||
my $text;
|
||||
$self->nest("performing optical character recognition", sub {
|
||||
my $tmpbase = Cwd::abs_path(".")."/ocr";
|
||||
my $tmpin = $tmpbase."in.ppm";
|
||||
my $tmpout = "$tmpbase.ppm";
|
||||
|
||||
$self->sendMonitorCommand("screendump $tmpin");
|
||||
system("ppmtopgm $tmpin | pamscale 4 -filter=lanczos > $tmpout") == 0
|
||||
or die "cannot scale screenshot";
|
||||
unlink $tmpin;
|
||||
system("tesseract $tmpout $tmpbase") == 0 or die "OCR failed";
|
||||
unlink $tmpout;
|
||||
$text = read_file("$tmpbase.txt");
|
||||
unlink "$tmpbase.txt";
|
||||
});
|
||||
return $text;
|
||||
}
|
||||
|
||||
|
||||
# Wait until a specific regexp matches the textual contents of the screen.
|
||||
sub waitForText {
|
||||
my ($self, $regexp) = @_;
|
||||
$self->nest("waiting for $regexp to appear on the screen", sub {
|
||||
retry sub {
|
||||
return 1 if $self->getScreenText =~ /$regexp/;
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
|
||||
# Wait until it is possible to connect to the X server. Note that
|
||||
# testing the existence of /tmp/.X11-unix/X0 is insufficient.
|
||||
sub waitForX {
|
||||
|
@ -15,6 +15,8 @@ rec {
|
||||
|
||||
unpackPhase = "true";
|
||||
|
||||
preferLocalBuild = true;
|
||||
|
||||
installPhase =
|
||||
''
|
||||
mkdir -p $out/bin
|
||||
@ -28,7 +30,7 @@ rec {
|
||||
|
||||
wrapProgram $out/bin/nixos-test-driver \
|
||||
--prefix PATH : "${qemu_kvm}/bin:${vde2}/bin:${netpbm}/bin:${coreutils}/bin" \
|
||||
--prefix PERL5LIB : "${lib.makePerlPath [ perlPackages.TermReadLineGnu perlPackages.XMLWriter perlPackages.IOTty ]}:$out/lib/perl5/site_perl"
|
||||
--prefix PERL5LIB : "${with perlPackages; lib.makePerlPath [ TermReadLineGnu XMLWriter IOTty FileSlurp ]}:$out/lib/perl5/site_perl"
|
||||
'';
|
||||
};
|
||||
|
||||
@ -68,7 +70,12 @@ rec {
|
||||
|
||||
|
||||
makeTest =
|
||||
{ testScript, makeCoverageReport ? false, name ? "unnamed", ... } @ t:
|
||||
{ testScript
|
||||
, makeCoverageReport ? false
|
||||
, enableOCR ? false
|
||||
, name ? "unnamed"
|
||||
, ...
|
||||
} @ t:
|
||||
|
||||
let
|
||||
testDriverName = "nixos-test-driver-${name}";
|
||||
@ -86,6 +93,8 @@ rec {
|
||||
|
||||
vms = map (m: m.config.system.build.vm) (lib.attrValues nodes);
|
||||
|
||||
ocrProg = tesseract.override { enableLanguages = [ "eng" ]; };
|
||||
|
||||
# Generate convenience wrappers for running the test driver
|
||||
# interactively with the specified network, and for starting the
|
||||
# VMs from the command line.
|
||||
@ -102,23 +111,29 @@ rec {
|
||||
vms="$(for i in ${toString vms}; do echo $i/bin/run-*-vm; done)"
|
||||
wrapProgram $out/bin/nixos-test-driver \
|
||||
--add-flags "$vms" \
|
||||
${lib.optionalString enableOCR "--prefix PATH : '${ocrProg}/bin'"} \
|
||||
--run "testScript=\"\$(cat $out/test-script)\"" \
|
||||
--set testScript '"$testScript"' \
|
||||
--set VLANS '"${toString vlans}"'
|
||||
ln -s ${testDriver}/bin/nixos-test-driver $out/bin/nixos-run-vms
|
||||
wrapProgram $out/bin/nixos-run-vms \
|
||||
--add-flags "$vms" \
|
||||
${lib.optionalString enableOCR "--prefix PATH : '${ocrProg}/bin'"} \
|
||||
--set tests '"startAll; joinAll;"' \
|
||||
--set VLANS '"${toString vlans}"' \
|
||||
${lib.optionalString (builtins.length vms == 1) "--set USE_SERIAL 1"}
|
||||
''; # "
|
||||
|
||||
test = runTests driver;
|
||||
passMeta = drv: drv // lib.optionalAttrs (t ? meta) {
|
||||
meta = (drv.meta or {}) // t.meta;
|
||||
};
|
||||
|
||||
report = releaseTools.gcovReport { coverageRuns = [ test ]; };
|
||||
|
||||
in (if makeCoverageReport then report else test) // { inherit nodes driver test; };
|
||||
test = passMeta (runTests driver);
|
||||
report = passMeta (releaseTools.gcovReport { coverageRuns = [ test ]; });
|
||||
|
||||
in (if makeCoverageReport then report else test) // {
|
||||
inherit nodes driver test;
|
||||
};
|
||||
|
||||
runInMachine =
|
||||
{ drv
|
||||
|
@ -1,59 +1,125 @@
|
||||
{ configuration ? import ../lib/from-env.nix "NIXOS_CONFIG" <nixos-config>
|
||||
|
||||
# []: display all options
|
||||
# [<option names>]: display the selected options
|
||||
, displayOptions ? [
|
||||
"hardware.pcmcia.enable"
|
||||
"environment.systemPackages"
|
||||
"boot.kernelModules"
|
||||
"services.udev.packages"
|
||||
"jobs"
|
||||
"environment.etc"
|
||||
"system.activationScripts"
|
||||
]
|
||||
# provide an option name, as a string literal.
|
||||
, testOption ? null
|
||||
|
||||
# provide a list of option names, as string literals.
|
||||
, testOptions ? [ ]
|
||||
}:
|
||||
|
||||
# This file is used to generate a dot graph which contains all options and
# their dependencies, to track problems and their sources.
|
||||
# This file is meant to be used as follows:
|
||||
#
|
||||
# $ nix-instantiate ./option-usage.nix --argstr testOption service.xserver.enable -A txtContent --eval
|
||||
#
|
||||
# or
|
||||
#
|
||||
# $ nix-build ./option-usage.nix --argstr testOption service.xserver.enable -A txt -o service.xserver.enable._txt
|
||||
#
|
||||
# Other targets exist, such as `dotContent`, `dot`, and `pdf`. If you are
|
||||
# looking for the option usage of multiple options, you can provide a list
|
||||
# as argument.
|
||||
#
|
||||
# $ nix-build ./option-usage.nix --arg testOptions \
|
||||
# '["boot.loader.gummiboot.enable" "boot.loader.gummiboot.timeout"]' \
|
||||
# -A txt -o gummiboot.list
|
||||
#
|
||||
# Note, this script is slow as it has to evaluate all options of the system
|
||||
# once per queried option.
|
||||
#
|
||||
# This nix expression works by doing a first evaluation, which evaluates the
|
||||
# result of every option.
|
||||
#
|
||||
# Then, for each queried option, we evaluate the NixOS modules a second
|
||||
# time, except that we replace the `config` argument of all the modules with
|
||||
# the result of the original evaluation, except for the tested option which
|
||||
# value is replaced by a `throw` statement which is caught by the `tryEval`
|
||||
# evaluation of each option value.
|
||||
#
|
||||
# We then compare the result of the evaluation of the original module with
|
||||
# the result of the second evaluation, and consider that the new failures are
|
||||
# caused by our mutation of the `config` argument.
|
||||
#
|
||||
# Doing so returns all option results which are directly using the
|
||||
# tested option result.
|
||||
|
||||
with import ../../lib;
|
||||
|
||||
let
|
||||
|
||||
evalFun = {
|
||||
extraArgs ? {}
|
||||
specialArgs ? {}
|
||||
}: import ../lib/eval-config.nix {
|
||||
modules = [ configuration ];
|
||||
inherit extraArgs;
|
||||
inherit specialArgs;
|
||||
};
|
||||
|
||||
eval = evalFun {};
|
||||
inherit (eval) pkgs;
|
||||
|
||||
reportNewFailures = old: new: with pkgs.lib;
|
||||
excludedTestOptions = [
|
||||
# We cannot evaluate _module.args, as it is used during the computation
|
||||
# of the modules list.
|
||||
"_module.args"
|
||||
|
||||
# For reasons we have yet to investigate, some options cannot
|
||||
# be replaced by a throw without causing a non-catchable failure.
|
||||
"networking.bonds"
|
||||
"networking.bridges"
|
||||
"networking.interfaces"
|
||||
"networking.macvlans"
|
||||
"networking.sits"
|
||||
"networking.vlans"
|
||||
"services.openssh.startWhenNeeded"
|
||||
];
|
||||
|
||||
# For reasons we have yet to investigate, some options are
|
||||
# time-consuming to compute, thus we filter them out at the moment.
|
||||
excludedOptions = [
|
||||
"boot.systemd.services"
|
||||
"systemd.services"
|
||||
"environment.gnome3.packageSet"
|
||||
"kde.extraPackages"
|
||||
];
|
||||
excludeOptions = list:
|
||||
filter (opt: !(elem (showOption opt.loc) excludedOptions)) list;
|
||||
|
||||
|
||||
reportNewFailures = old: new:
|
||||
let
|
||||
filterChanges =
|
||||
filter ({fst, snd}:
|
||||
!(fst.config.success -> snd.config.success)
|
||||
!(fst.success -> snd.success)
|
||||
);
|
||||
|
||||
keepNames =
|
||||
map ({fst, snd}:
|
||||
assert fst.name == snd.name; snd.name
|
||||
/* assert fst.name == snd.name; */ snd.name
|
||||
);
|
||||
|
||||
# Use tryEval (strict ...) to know if there is any failure while
|
||||
# evaluating the option value.
|
||||
#
|
||||
# Note, the `strict` function is not strict enough, but using toXML
|
||||
# builtins multiply by 4 the memory usage and the time used to compute
|
||||
# each options.
|
||||
tryCollectOptions = moduleResult:
|
||||
flip map (excludeOptions (collect isOption moduleResult)) (opt:
|
||||
{ name = showOption opt.loc; } // builtins.tryEval (strict opt.value));
|
||||
in
|
||||
keepNames (
|
||||
filterChanges (
|
||||
zipLists (collect isOption old) (collect isOption new)
|
||||
zipLists (tryCollectOptions old) (tryCollectOptions new)
|
||||
)
|
||||
);
|
||||
|
||||
|
||||
# Create a list of modules where each module contains only one failing
|
||||
# option.
|
||||
introspectionModules = with pkgs.lib;
|
||||
introspectionModules =
|
||||
let
|
||||
setIntrospection = opt: rec {
|
||||
name = opt.name;
|
||||
path = splitString "." opt.name;
|
||||
name = showOption opt.loc;
|
||||
path = opt.loc;
|
||||
config = setAttrByPath path
|
||||
(throw "Usage introspection of '${name}' by forced failure.");
|
||||
};
|
||||
@ -61,39 +127,67 @@ let
|
||||
map setIntrospection (collect isOption eval.options);
|
||||
|
||||
overrideConfig = thrower:
|
||||
pkgs.lib.recursiveUpdateUntil (path: old: new:
|
||||
recursiveUpdateUntil (path: old: new:
|
||||
path == thrower.path
|
||||
) eval.config thrower.config;
|
||||
|
||||
|
||||
graph = with pkgs.lib;
|
||||
graph =
|
||||
map (thrower: {
|
||||
option = thrower.name;
|
||||
usedBy = reportNewFailures eval.options (evalFun {
|
||||
extraArgs = {
|
||||
config = overrideConfig thrower;
|
||||
};
|
||||
}).options;
|
||||
usedBy = assert __trace "Investigate ${thrower.name}" true;
|
||||
reportNewFailures eval.options (evalFun {
|
||||
specialArgs = {
|
||||
config = overrideConfig thrower;
|
||||
};
|
||||
}).options;
|
||||
}) introspectionModules;
|
||||
|
||||
graphToDot = graph: with pkgs.lib; ''
|
||||
displayOptionsGraph =
|
||||
let
|
||||
checkList =
|
||||
if !(isNull testOption) then [ testOption ]
|
||||
else testOptions;
|
||||
checkAll = checkList == [];
|
||||
in
|
||||
flip filter graph ({option, usedBy}:
|
||||
(checkAll || elem option checkList)
|
||||
&& !(elem option excludedTestOptions)
|
||||
);
|
||||
|
||||
graphToDot = graph: ''
|
||||
digraph "Option Usages" {
|
||||
${concatMapStrings ({option, usedBy}:
|
||||
assert __trace option true;
|
||||
if displayOptions == [] || elem option displayOptions then
|
||||
concatMapStrings (user: ''
|
||||
"${option}" -> "${user}"''
|
||||
) usedBy
|
||||
else ""
|
||||
) graph}
|
||||
concatMapStrings (user: ''
|
||||
"${option}" -> "${user}"''
|
||||
) usedBy
|
||||
) displayOptionsGraph}
|
||||
}
|
||||
'';
|
||||
|
||||
graphToText = graph:
|
||||
concatMapStrings ({option, usedBy}:
|
||||
concatMapStrings (user: ''
|
||||
${user}
|
||||
'') usedBy
|
||||
) displayOptionsGraph;
|
||||
|
||||
in
|
||||
|
||||
pkgs.texFunctions.dot2pdf {
|
||||
dotGraph = pkgs.writeTextFile {
|
||||
rec {
|
||||
dotContent = graphToDot graph;
|
||||
dot = pkgs.writeTextFile {
|
||||
name = "option_usages.dot";
|
||||
text = graphToDot graph;
|
||||
text = dotContent;
|
||||
};
|
||||
|
||||
pdf = pkgs.texFunctions.dot2pdf {
|
||||
dotGraph = dot;
|
||||
};
|
||||
|
||||
txtContent = graphToText graph;
|
||||
txt = pkgs.writeTextFile {
|
||||
name = "option_usages.txt";
|
||||
text = txtContent;
|
||||
};
|
||||
}
|
||||
|
@ -1,5 +0,0 @@
|
||||
{ modulesPath, ...}:
|
||||
{
|
||||
imports = [ "${modulesPath}/virtualisation/amazon-config.nix" ];
|
||||
services.journald.rateLimitBurst = 0;
|
||||
}
|
@ -1,5 +0,0 @@
|
||||
{ config, pkgs, ...}:
|
||||
{
|
||||
imports = [ ./amazon-base-config.nix ];
|
||||
ec2.hvm = true;
|
||||
}
|
@ -1,34 +0,0 @@
|
||||
{ config, pkgs, lib, ...}:
|
||||
let
|
||||
cloudUtils = pkgs.fetchurl {
|
||||
url = "https://launchpad.net/cloud-utils/trunk/0.27/+download/cloud-utils-0.27.tar.gz";
|
||||
sha256 = "16shlmg36lidp614km41y6qk3xccil02f5n3r4wf6d1zr5n4v8vd";
|
||||
};
|
||||
growpart = pkgs.stdenv.mkDerivation {
|
||||
name = "growpart";
|
||||
src = cloudUtils;
|
||||
buildPhase = ''
|
||||
cp bin/growpart $out
|
||||
sed -i 's|awk|gawk|' $out
|
||||
sed -i 's|sed|gnused|' $out
|
||||
'';
|
||||
dontInstall = true;
|
||||
dontPatchShebangs = true;
|
||||
};
|
||||
in
|
||||
{
|
||||
imports = [ ./amazon-base-config.nix ];
|
||||
ec2.hvm = true;
|
||||
boot.loader.grub.device = lib.mkOverride 0 "/dev/xvdg";
|
||||
boot.kernelParams = [ "console=ttyS0" ];
|
||||
|
||||
boot.initrd.extraUtilsCommands = ''
|
||||
copy_bin_and_libs ${pkgs.gawk}/bin/gawk
|
||||
copy_bin_and_libs ${pkgs.gnused}/bin/sed
|
||||
copy_bin_and_libs ${pkgs.utillinux}/sbin/sfdisk
|
||||
cp -v ${growpart} $out/bin/growpart
|
||||
'';
|
||||
boot.initrd.postDeviceCommands = ''
|
||||
[ -e /dev/xvda ] && [ -e /dev/xvda1 ] && TMPDIR=/run sh $(type -P growpart) /dev/xvda 1
|
||||
'';
|
||||
}
|
27
nixos/maintainers/scripts/ec2/amazon-image.nix
Normal file
@ -0,0 +1,27 @@
|
||||
{ config, lib, pkgs, ... }:
|
||||
|
||||
with lib;
|
||||
|
||||
{
|
||||
|
||||
imports =
|
||||
[ ../../../modules/installer/cd-dvd/channel.nix
|
||||
../../../modules/virtualisation/amazon-image.nix
|
||||
];
|
||||
|
||||
system.build.amazonImage = import ../../../lib/make-disk-image.nix {
|
||||
inherit pkgs lib config;
|
||||
partitioned = config.ec2.hvm;
|
||||
diskSize = if config.ec2.hvm then 2048 else 8192;
|
||||
configFile = pkgs.writeText "configuration.nix"
|
||||
''
|
||||
{
|
||||
imports = [ <nixpkgs/nixos/modules/virtualisation/amazon-image.nix> ];
|
||||
${optionalString config.ec2.hvm ''
|
||||
ec2.hvm = true;
|
||||
''}
|
||||
}
|
||||
'';
|
||||
};
|
||||
|
||||
}
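The `configuration` argument that the release script below passes to `nix-build` is itself a small NixOS module. As a rough sketch (mirroring the string that create-amis.sh constructs for the HVM case), it evaluates to something like:

```
{
  imports = [ <nixpkgs/nixos/maintainers/scripts/ec2/amazon-image.nix> ];
  ec2.hvm = true;
}
```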
|
217
nixos/maintainers/scripts/ec2/create-amis.sh
Executable file
@ -0,0 +1,217 @@
|
||||
#! /bin/sh -e
|
||||
|
||||
set -o pipefail
|
||||
#set -x
|
||||
|
||||
stateDir=${TMPDIR:-/tmp}/ec2-image
|
||||
echo "keeping state in $stateDir"
|
||||
mkdir -p $stateDir
|
||||
|
||||
version=$(nix-instantiate --eval --strict '<nixpkgs>' -A lib.nixpkgsVersion | sed s/'"'//g)
|
||||
echo "NixOS version is $version"
|
||||
|
||||
rm -f ec2-amis.nix
|
||||
|
||||
|
||||
for type in hvm pv; do
|
||||
link=$stateDir/$type
|
||||
imageFile=$link/nixos.img
|
||||
system=x86_64-linux
|
||||
arch=x86_64
|
||||
|
||||
# Build the image.
|
||||
if ! [ -L $link ]; then
|
||||
if [ $type = pv ]; then hvmFlag=false; else hvmFlag=true; fi
|
||||
|
||||
echo "building image type '$type'..."
|
||||
nix-build -o $link \
|
||||
'<nixpkgs/nixos>' \
|
||||
-A config.system.build.amazonImage \
|
||||
--arg configuration "{ imports = [ <nixpkgs/nixos/maintainers/scripts/ec2/amazon-image.nix> ]; ec2.hvm = $hvmFlag; }"
|
||||
fi
|
||||
|
||||
for store in ebs s3; do
|
||||
|
||||
bucket=nixos-amis
|
||||
bucketDir="$version-$type-$store"
|
||||
|
||||
prevAmi=
|
||||
prevRegion=
|
||||
|
||||
for region in eu-west-1 eu-central-1 us-east-1 us-west-1 us-west-2 ap-southeast-1 ap-southeast-2 ap-northeast-1 sa-east-1; do
|
||||
|
||||
name=nixos-$version-$arch-$type-$store
|
||||
description="NixOS $system $version ($type-$store)"
|
||||
|
||||
amiFile=$stateDir/$region.$type.$store.ami-id
|
||||
|
||||
if ! [ -e $amiFile ]; then
|
||||
|
||||
echo "doing $name in $region..."
|
||||
|
||||
if [ -n "$prevAmi" ]; then
|
||||
ami=$(ec2-copy-image \
|
||||
--region "$region" \
|
||||
--source-region "$prevRegion" --source-ami-id "$prevAmi" \
|
||||
--name "$name" --description "$description" | cut -f 2)
|
||||
else
|
||||
|
||||
if [ $store = s3 ]; then
|
||||
|
||||
# Bundle the image.
|
||||
imageDir=$stateDir/$type-bundled
|
||||
|
||||
if ! [ -d $imageDir ]; then
|
||||
rm -rf $imageDir.tmp
|
||||
mkdir -p $imageDir.tmp
|
||||
ec2-bundle-image \
|
||||
-d $imageDir.tmp \
|
||||
-i $imageFile --arch $arch \
|
||||
--user "$AWS_ACCOUNT" -c "$EC2_CERT" -k "$EC2_PRIVATE_KEY"
|
||||
mv $imageDir.tmp $imageDir
|
||||
fi
|
||||
|
||||
# Upload the bundle to S3.
|
||||
if ! [ -e $imageDir/uploaded ]; then
|
||||
echo "uploading bundle to S3..."
|
||||
ec2-upload-bundle \
|
||||
-m $imageDir/nixos.img.manifest.xml \
|
||||
-b "$bucket/$bucketDir" \
|
||||
-a "$EC2_ACCESS_KEY" -s "$EC2_SECRET_KEY" \
|
||||
--location EU
|
||||
touch $imageDir/uploaded
|
||||
fi
|
||||
|
||||
extraFlags="$bucket/$bucketDir/nixos.img.manifest.xml"
|
||||
|
||||
else
|
||||
|
||||
# Convert the image to vhd format so we don't have
|
||||
# to upload a huge raw image.
|
||||
vhdFile=$stateDir/$type.vhd
|
||||
if ! [ -e $vhdFile ]; then
|
||||
qemu-img convert -O vpc $imageFile $vhdFile.tmp
|
||||
mv $vhdFile.tmp $vhdFile
|
||||
fi
|
||||
|
||||
taskId=$(cat $stateDir/$region.$type.task-id 2> /dev/null || true)
|
||||
volId=$(cat $stateDir/$region.$type.vol-id 2> /dev/null || true)
|
||||
snapId=$(cat $stateDir/$region.$type.snap-id 2> /dev/null || true)
|
||||
|
||||
# Import the VHD file.
|
||||
if [ -z "$snapId" -a -z "$volId" -a -z "$taskId" ]; then
|
||||
echo "importing $vhdFile..."
|
||||
taskId=$(ec2-import-volume $vhdFile --no-upload -f vhd \
|
||||
-o "$EC2_ACCESS_KEY" -w "$EC2_SECRET_KEY" \
|
||||
--region "$region" -z "${region}a" \
|
||||
--bucket "$bucket" --prefix "$bucketDir/" \
|
||||
| tee /dev/stderr \
|
||||
| sed 's/.*\(import-vol-[0-9a-z]\+\).*/\1/ ; t ; d')
|
||||
echo -n "$taskId" > $stateDir/$region.$type.task-id
|
||||
fi
|
||||
|
||||
if [ -z "$snapId" -a -z "$volId" ]; then
|
||||
ec2-resume-import $vhdFile -t "$taskId" --region "$region" \
|
||||
-o "$EC2_ACCESS_KEY" -w "$EC2_SECRET_KEY"
|
||||
fi
|
||||
|
||||
# Wait for the volume creation to finish.
|
||||
if [ -z "$snapId" -a -z "$volId" ]; then
|
||||
echo "waiting for import to finish..."
|
||||
while true; do
|
||||
volId=$(ec2-describe-conversion-tasks "$taskId" --region "$region" | sed 's/.*VolumeId.*\(vol-[0-9a-f]\+\).*/\1/ ; t ; d')
|
||||
if [ -n "$volId" ]; then break; fi
|
||||
sleep 10
|
||||
done
|
||||
|
||||
echo -n "$volId" > $stateDir/$region.$type.vol-id
|
||||
fi
|
||||
|
||||
# Delete the import task.
|
||||
if [ -n "$volId" -a -n "$taskId" ]; then
|
||||
echo "removing import task..."
|
||||
ec2-delete-disk-image -t "$taskId" --region "$region" -o "$EC2_ACCESS_KEY" -w "$EC2_SECRET_KEY" || true
|
||||
rm -f $stateDir/$region.$type.task-id
|
||||
fi
|
||||
|
||||
# Create a snapshot.
|
||||
if [ -z "$snapId" ]; then
|
||||
echo "creating snapshot..."
|
||||
snapId=$(ec2-create-snapshot "$volId" --region "$region" | cut -f 2)
|
||||
echo -n "$snapId" > $stateDir/$region.$type.snap-id
|
||||
ec2-create-tags "$snapId" -t "Name=$description" --region "$region"
|
||||
fi
|
||||
|
||||
# Wait for the snapshot to finish.
|
||||
echo "waiting for snapshot to finish..."
|
||||
while true; do
|
||||
status=$(ec2-describe-snapshots "$snapId" --region "$region" | head -n1 | cut -f 4)
|
||||
if [ "$status" = completed ]; then break; fi
|
||||
sleep 10
|
||||
done
|
||||
|
||||
# Delete the volume.
|
||||
if [ -n "$volId" ]; then
|
||||
echo "deleting volume..."
|
||||
ec2-delete-volume "$volId" --region "$region" || true
|
||||
rm -f $stateDir/$region.$type.vol-id
|
||||
fi
|
||||
|
||||
extraFlags="-b /dev/sda1=$snapId:20:true:gp2"
|
||||
|
||||
if [ $type = pv ]; then
|
||||
extraFlags+=" --root-device-name=/dev/sda1"
|
||||
fi
|
||||
|
||||
extraFlags+=" -b /dev/sdb=ephemeral0 -b /dev/sdc=ephemeral1 -b /dev/sdd=ephemeral2 -b /dev/sde=ephemeral3"
|
||||
fi
|
||||
|
||||
# Register the AMI.
|
||||
if [ $type = pv ]; then
|
||||
kernel=$(ec2-describe-images -o amazon --filter "manifest-location=*pv-grub-hd0_1.04-$arch*" --region "$region" | cut -f 2)
|
||||
[ -n "$kernel" ]
|
||||
echo "using PV-GRUB kernel $kernel"
|
||||
extraFlags+=" --virtualization-type paravirtual --kernel $kernel"
|
||||
else
|
||||
extraFlags+=" --virtualization-type hvm"
|
||||
fi
|
||||
|
||||
ami=$(ec2-register \
|
||||
-n "$name" \
|
||||
-d "$description" \
|
||||
--region "$region" \
|
||||
--architecture "$arch" \
|
||||
$extraFlags | cut -f 2)
|
||||
fi
|
||||
|
||||
echo -n "$ami" > $amiFile
|
||||
echo "created AMI $ami of type '$type' in $region..."
|
||||
|
||||
else
|
||||
ami=$(cat $amiFile)
|
||||
fi
|
||||
|
||||
if [ -z "$NO_WAIT" -o -z "$prevAmi" ]; then
|
||||
echo "waiting for AMI..."
|
||||
while true; do
|
||||
status=$(ec2-describe-images "$ami" --region "$region" | head -n1 | cut -f 5)
|
||||
if [ "$status" = available ]; then break; fi
|
||||
sleep 10
|
||||
done
|
||||
|
||||
ec2-modify-image-attribute \
|
||||
--region "$region" "$ami" -l -a all
|
||||
fi
|
||||
|
||||
echo "region = $region, type = $type, store = $store, ami = $ami"
|
||||
if [ -z "$prevAmi" ]; then
|
||||
prevAmi="$ami"
|
||||
prevRegion="$region"
|
||||
fi
|
||||
|
||||
echo " \"15.09\".$region.$type-$store = \"$ami\";" >> ec2-amis.nix
|
||||
done
|
||||
|
||||
done
|
||||
|
||||
done
|
@ -1,216 +0,0 @@
|
||||
#! /usr/bin/env python
|
||||
|
||||
import os
|
||||
import sys
|
||||
import time
|
||||
import argparse
|
||||
import nixops.util
|
||||
from nixops import deployment
|
||||
from boto.ec2.blockdevicemapping import BlockDeviceMapping, BlockDeviceType
|
||||
import boto.ec2
|
||||
from nixops.statefile import StateFile, get_default_state_file
|
||||
|
||||
parser = argparse.ArgumentParser(description='Create an EBS-backed NixOS AMI')
|
||||
parser.add_argument('--region', dest='region', required=True, help='EC2 region to create the image in')
|
||||
parser.add_argument('--channel', dest='channel', default="14.12", help='Channel to use')
|
||||
parser.add_argument('--keep', dest='keep', action='store_true', help='Keep NixOps machine after use')
|
||||
parser.add_argument('--hvm', dest='hvm', action='store_true', help='Create HVM image')
|
||||
parser.add_argument('--key', dest='key_name', action='store_true', help='Keypair used for HVM instance creation', default="rob")
|
||||
args = parser.parse_args()
|
||||
|
||||
instance_type = "m3.medium" if args.hvm else "m1.small"
|
||||
|
||||
if args.hvm:
|
||||
virtualization_type = "hvm"
|
||||
root_block = "/dev/sda1"
|
||||
image_type = 'hvm'
|
||||
else:
|
||||
virtualization_type = "paravirtual"
|
||||
root_block = "/dev/sda"
|
||||
image_type = 'ebs'
|
||||
|
||||
ebs_size = 20
|
||||
|
||||
# Start a NixOS machine in the given region.
|
||||
f = open("ebs-creator-config.nix", "w")
|
||||
f.write('''{{
|
||||
resources.ec2KeyPairs.keypair.accessKeyId = "lb-nixos";
|
||||
resources.ec2KeyPairs.keypair.region = "{0}";
|
||||
|
||||
machine =
|
||||
{{ pkgs, ... }}:
|
||||
{{
|
||||
deployment.ec2.accessKeyId = "lb-nixos";
|
||||
deployment.ec2.region = "{0}";
|
||||
deployment.ec2.blockDeviceMapping."/dev/xvdg".size = pkgs.lib.mkOverride 10 {1};
|
||||
}};
|
||||
}}
|
||||
'''.format(args.region, ebs_size))
|
||||
f.close()
|
||||
|
||||
db = StateFile(get_default_state_file())
|
||||
try:
|
||||
depl = db.open_deployment("ebs-creator")
|
||||
except Exception:
|
||||
depl = db.create_deployment()
|
||||
depl.name = "ebs-creator"
|
||||
depl.logger.set_autoresponse("y")
|
||||
depl.nix_exprs = [os.path.abspath("./ebs-creator.nix"), os.path.abspath("./ebs-creator-config.nix")]
|
||||
if not args.keep: depl.destroy_resources()
|
||||
depl.deploy(allow_reboot=True)
|
||||
|
||||
m = depl.machines['machine']
|
||||
|
||||
# Do the installation.
|
||||
device="/dev/xvdg"
|
||||
if args.hvm:
|
||||
m.run_command('parted -s /dev/xvdg -- mklabel msdos')
|
||||
m.run_command('parted -s /dev/xvdg -- mkpart primary ext2 1M -1s')
|
||||
device="/dev/xvdg1"
|
||||
|
||||
m.run_command("if mountpoint -q /mnt; then umount /mnt; fi")
|
||||
m.run_command("mkfs.ext4 -L nixos {0}".format(device))
|
||||
m.run_command("mkdir -p /mnt")
|
||||
m.run_command("mount {0} /mnt".format(device))
|
||||
m.run_command("touch /mnt/.ebs")
|
||||
m.run_command("mkdir -p /mnt/etc/nixos")
|
||||
|
||||
m.run_command("nix-channel --add https://nixos.org/channels/nixos-{} nixos".format(args.channel))
|
||||
m.run_command("nix-channel --update")
|
||||
|
||||
version = m.run_command("nix-instantiate --eval-only -A lib.nixpkgsVersion '<nixpkgs>'", capture_stdout=True).split(' ')[0].replace('"','').strip()
|
||||
print >> sys.stderr, "NixOS version is {0}".format(version)
|
||||
if args.hvm:
|
||||
m.upload_file("./amazon-base-config.nix", "/mnt/etc/nixos/amazon-base-config.nix")
|
||||
m.upload_file("./amazon-hvm-config.nix", "/mnt/etc/nixos/configuration.nix")
|
||||
m.upload_file("./amazon-hvm-install-config.nix", "/mnt/etc/nixos/amazon-hvm-install-config.nix")
|
||||
m.run_command("NIXOS_CONFIG=/etc/nixos/amazon-hvm-install-config.nix nixos-install")
|
||||
else:
|
||||
m.upload_file("./amazon-base-config.nix", "/mnt/etc/nixos/configuration.nix")
|
||||
m.run_command("nixos-install")
|
||||
|
||||
m.run_command("umount /mnt")
|
||||
|
||||
if args.hvm:
|
||||
ami_name = "nixos-{0}-x86_64-hvm".format(version)
|
||||
description = "NixOS {0} (x86_64; EBS root; hvm)".format(version)
|
||||
else:
|
||||
ami_name = "nixos-{0}-x86_64-ebs".format(version)
|
||||
description = "NixOS {0} (x86_64; EBS root)".format(version)
|
||||
|
||||
|
||||
# Wait for the snapshot to finish.
|
||||
def check():
|
||||
status = snapshot.update()
|
||||
print >> sys.stderr, "snapshot status is {0}".format(status)
|
||||
return status == '100%'
|
||||
|
||||
m.connect()
|
||||
volume = m._conn.get_all_volumes([], filters={'attachment.instance-id': m.resource_id, 'attachment.device': "/dev/sdg"})[0]
|
||||
|
||||
# Create a snapshot.
|
||||
snapshot = volume.create_snapshot(description=description)
|
||||
print >> sys.stderr, "created snapshot {0}".format(snapshot.id)
|
||||
|
||||
nixops.util.check_wait(check, max_tries=120)
|
||||
|
||||
m._conn.create_tags([snapshot.id], {'Name': ami_name})
|
||||
|
||||
if not args.keep: depl.destroy_resources()
|
||||
|
||||
# Register the image.
|
||||
aki = m._conn.get_all_images(filters={'manifest-location': 'ec2*pv-grub-hd0_1.03-x86_64*'})[0]
|
||||
print >> sys.stderr, "using kernel image {0} - {1}".format(aki.id, aki.location)
|
||||
|
||||
block_map = BlockDeviceMapping()
|
||||
block_map[root_block] = BlockDeviceType(snapshot_id=snapshot.id, delete_on_termination=True, size=ebs_size, volume_type="gp2")
|
||||
block_map['/dev/sdb'] = BlockDeviceType(ephemeral_name="ephemeral0")
|
||||
block_map['/dev/sdc'] = BlockDeviceType(ephemeral_name="ephemeral1")
|
||||
block_map['/dev/sdd'] = BlockDeviceType(ephemeral_name="ephemeral2")
|
||||
block_map['/dev/sde'] = BlockDeviceType(ephemeral_name="ephemeral3")
|
||||
|
||||
common_args = dict(
|
||||
name=ami_name,
|
||||
description=description,
|
||||
architecture="x86_64",
|
||||
root_device_name=root_block,
|
||||
block_device_map=block_map,
|
||||
virtualization_type=virtualization_type,
|
||||
delete_root_volume_on_termination=True
|
||||
)
|
||||
if not args.hvm:
|
||||
common_args['kernel_id']=aki.id
|
||||
|
||||
ami_id = m._conn.register_image(**common_args)
|
||||
|
||||
print >> sys.stderr, "registered AMI {0}".format(ami_id)
|
||||
|
||||
print >> sys.stderr, "sleeping a bit..."
|
||||
time.sleep(30)
|
||||
|
||||
print >> sys.stderr, "setting image name..."
|
||||
m._conn.create_tags([ami_id], {'Name': ami_name})
|
||||
|
||||
print >> sys.stderr, "making image public..."
|
||||
image = m._conn.get_all_images(image_ids=[ami_id])[0]
|
||||
image.set_launch_permissions(user_ids=[], group_names=["all"])
|
||||
|
||||
# Do a test deployment to make sure that the AMI works.
|
||||
f = open("ebs-test.nix", "w")
|
||||
f.write(
|
||||
'''
|
||||
{{
|
||||
network.description = "NixOS EBS test";
|
||||
|
||||
resources.ec2KeyPairs.keypair.accessKeyId = "lb-nixos";
|
||||
resources.ec2KeyPairs.keypair.region = "{0}";
|
||||
|
||||
machine = {{ config, pkgs, resources, ... }}: {{
|
||||
deployment.targetEnv = "ec2";
|
||||
deployment.ec2.accessKeyId = "lb-nixos";
|
||||
deployment.ec2.region = "{0}";
|
||||
deployment.ec2.instanceType = "{2}";
|
||||
deployment.ec2.keyPair = resources.ec2KeyPairs.keypair.name;
|
||||
deployment.ec2.securityGroups = [ "public-ssh" ];
|
||||
deployment.ec2.ami = "{1}";
|
||||
}};
|
||||
}}
|
||||
'''.format(args.region, ami_id, instance_type))
|
||||
f.close()
|
||||
|
||||
test_depl = db.create_deployment()
|
||||
test_depl.auto_response = "y"
|
||||
test_depl.name = "ebs-creator-test"
|
||||
test_depl.nix_exprs = [os.path.abspath("./ebs-test.nix")]
|
||||
test_depl.deploy(create_only=True)
|
||||
test_depl.machines['machine'].run_command("nixos-version")
|
||||
|
||||
# Log the AMI ID.
|
||||
f = open("ec2-amis.nix".format(args.region, image_type), "w")
|
||||
f.write("{\n")
|
||||
|
||||
for dest in [ 'us-east-1', 'us-west-1', 'us-west-2', 'eu-west-1', 'eu-central-1', 'ap-southeast-1', 'ap-southeast-2', 'ap-northeast-1', 'sa-east-1']:
|
||||
copy_image = None
|
||||
if args.region != dest:
|
||||
try:
|
||||
print >> sys.stderr, "copying image from region {0} to {1}".format(args.region, dest)
|
||||
conn = boto.ec2.connect_to_region(dest)
|
||||
copy_image = conn.copy_image(args.region, ami_id, ami_name, description=None, client_token=None)
|
||||
except :
|
||||
print >> sys.stderr, "FAILED!"
|
||||
|
||||
# Log the AMI ID.
|
||||
if copy_image != None:
|
||||
f.write(' "{0}"."{1}".{2} = "{3}";\n'.format(args.channel,dest,"hvm" if args.hvm else "ebs",copy_image.image_id))
|
||||
else:
|
||||
f.write(' "{0}"."{1}".{2} = "{3}";\n'.format(args.channel,args.region,"hvm" if args.hvm else "ebs",ami_id))
|
||||
|
||||
|
||||
f.write("}\n")
|
||||
f.close()
|
||||
|
||||
if not args.keep:
|
||||
test_depl.logger.set_autoresponse("y")
|
||||
test_depl.destroy_resources()
|
||||
test_depl.delete()
|
||||
|
@ -1,53 +0,0 @@
|
||||
#! /bin/sh -e
|
||||
|
||||
export NIXOS_CONFIG=$(dirname $(readlink -f $0))/amazon-base-config.nix
|
||||
|
||||
version=$(nix-instantiate --eval-only '<nixpkgs/nixos>' -A config.system.nixosVersion | sed s/'"'//g)
|
||||
echo "NixOS version is $version"
|
||||
|
||||
buildAndUploadFor() {
|
||||
system="$1"
|
||||
arch="$2"
|
||||
|
||||
echo "building $system image..."
|
||||
nix-build '<nixpkgs/nixos>' \
|
||||
-A config.system.build.amazonImage --argstr system "$system" -o ec2-ami
|
||||
|
||||
ec2-bundle-image -i ./ec2-ami/nixos.img --user "$AWS_ACCOUNT" --arch "$arch" \
|
||||
-c "$EC2_CERT" -k "$EC2_PRIVATE_KEY"
|
||||
|
||||
for region in eu-west-1; do
|
||||
echo "uploading $system image for $region..."
|
||||
|
||||
name=nixos-$version-$arch-s3
|
||||
bucket="$(echo $name-$region | tr '[A-Z]_' '[a-z]-')"
|
||||
|
||||
if [ "$region" = eu-west-1 ]; then s3location=EU;
|
||||
elif [ "$region" = us-east-1 ]; then s3location=US;
|
||||
else s3location="$region"
|
||||
fi
|
||||
|
||||
ec2-upload-bundle -b "$bucket" -m /tmp/nixos.img.manifest.xml \
|
||||
-a "$EC2_ACCESS_KEY" -s "$EC2_SECRET_KEY" --location "$s3location" \
|
||||
--url http://s3.amazonaws.com
|
||||
|
||||
kernel=$(ec2-describe-images -o amazon --filter "manifest-location=*pv-grub-hd0_1.04-$arch*" --region "$region" | cut -f 2)
|
||||
echo "using PV-GRUB kernel $kernel"
|
||||
|
||||
ami=$(ec2-register "$bucket/nixos.img.manifest.xml" -n "$name" -d "NixOS $system r$revision" -O "$EC2_ACCESS_KEY" -W "$EC2_SECRET_KEY" \
|
||||
--region "$region" --kernel "$kernel" | cut -f 2)
|
||||
|
||||
echo "AMI ID is $ami"
|
||||
|
||||
echo " \"14.12\".\"$region\".s3 = \"$ami\";" >> ec2-amis.nix
|
||||
|
||||
ec2-modify-image-attribute --region "$region" "$ami" -l -a all -O "$EC2_ACCESS_KEY" -W "$EC2_SECRET_KEY"
|
||||
|
||||
for cp_region in us-east-1 us-west-1 us-west-2 eu-central-1 ap-southeast-1 ap-southeast-2 ap-northeast-1 sa-east-1; do
|
||||
new_ami=$(aws ec2 copy-image --source-image-id $ami --source-region $region --region $cp_region --name "$name" | json ImageId)
|
||||
echo " \"14.12\".\"$cp_region\".s3 = \"$new_ami\";" >> ec2-amis.nix
|
||||
done
|
||||
done
|
||||
}
|
||||
|
||||
buildAndUploadFor x86_64-linux x86_64
|
@ -1,13 +0,0 @@
|
||||
{
|
||||
network.description = "NixOS EBS creator";
|
||||
|
||||
machine =
|
||||
{ config, pkgs, resources, ... }:
|
||||
{ deployment.targetEnv = "ec2";
|
||||
deployment.ec2.instanceType = "c3.large";
|
||||
deployment.ec2.securityGroups = [ "public-ssh" ];
|
||||
deployment.ec2.ebsBoot = false;
|
||||
deployment.ec2.keyPair = resources.ec2KeyPairs.keypair.name;
|
||||
environment.systemPackages = [ pkgs.parted ];
|
||||
};
|
||||
}
|
@ -1,3 +1,6 @@
|
||||
# This module is deprecated, since you can just say ‘fonts.fonts = [
|
||||
# pkgs.corefonts ];’ instead.
|
||||
|
||||
{ config, lib, pkgs, ... }:
|
||||
|
||||
with lib;
|
||||
@ -9,6 +12,7 @@ with lib;
|
||||
fonts = {
|
||||
|
||||
enableCoreFonts = mkOption {
|
||||
visible = false;
|
||||
default = false;
|
||||
description = ''
|
||||
Whether to include Microsoft's proprietary Core Fonts. These fonts
|
||||
|
@ -108,10 +108,8 @@ with lib;
|
||||
subpixel = {
|
||||
|
||||
rgba = mkOption {
|
||||
type = types.string // {
|
||||
check = flip elem ["rgb" "bgr" "vrgb" "vbgr" "none"];
|
||||
};
|
||||
default = "rgb";
|
||||
type = types.enum ["rgb" "bgr" "vrgb" "vbgr" "none"];
|
||||
description = ''
|
||||
Subpixel order, one of <literal>none</literal>,
|
||||
<literal>rgb</literal>, <literal>bgr</literal>,
|
||||
@ -120,10 +118,8 @@ with lib;
|
||||
};
|
||||
|
||||
lcdfilter = mkOption {
|
||||
type = types.str // {
|
||||
check = flip elem ["none" "default" "light" "legacy"];
|
||||
};
|
||||
default = "default";
|
||||
type = types.enum ["none" "default" "light" "legacy"];
|
||||
description = ''
|
||||
FreeType LCD filter, one of <literal>none</literal>,
|
||||
<literal>default</literal>, <literal>light</literal>, or
|
||||
@ -142,7 +138,7 @@ with lib;
|
||||
config =
|
||||
let fontconfig = config.fonts.fontconfig;
|
||||
fcBool = x: "<bool>" + (if x then "true" else "false") + "</bool>";
|
||||
nixosConf = ''
|
||||
renderConf = ''
|
||||
<?xml version='1.0'?>
|
||||
<!DOCTYPE fontconfig SYSTEM 'fonts.dtd'>
|
||||
<fontconfig>
|
||||
@ -169,6 +165,21 @@ with lib;
|
||||
</edit>
|
||||
</match>
|
||||
|
||||
${optionalString (fontconfig.dpi != 0) ''
|
||||
<match target="pattern">
|
||||
<edit name="dpi" mode="assign">
|
||||
<double>${toString fontconfig.dpi}</double>
|
||||
</edit>
|
||||
</match>
|
||||
''}
|
||||
|
||||
</fontconfig>
|
||||
'';
|
||||
genericAliasConf = ''
|
||||
<?xml version='1.0'?>
|
||||
<!DOCTYPE fontconfig SYSTEM 'fonts.dtd'>
|
||||
<fontconfig>
|
||||
|
||||
<!-- Default fonts -->
|
||||
${optionalString (fontconfig.defaultFonts.sansSerif != []) ''
|
||||
<alias>
|
||||
@ -201,14 +212,6 @@ with lib;
|
||||
</alias>
|
||||
''}
|
||||
|
||||
${optionalString (fontconfig.dpi != 0) ''
|
||||
<match target="pattern">
|
||||
<edit name="dpi" mode="assign">
|
||||
<double>${toString fontconfig.dpi}</double>
|
||||
</edit>
|
||||
</match>
|
||||
''}
|
||||
|
||||
</fontconfig>
|
||||
'';
|
||||
in mkIf fontconfig.enable {
|
||||
@ -219,7 +222,8 @@ with lib;
|
||||
environment.etc."fonts/fonts.conf".source =
|
||||
pkgs.makeFontsConf { fontconfig = pkgs.fontconfig_210; fontDirectories = config.fonts.fonts; };
|
||||
|
||||
environment.etc."fonts/conf.d/98-nixos.conf".text = nixosConf;
|
||||
environment.etc."fonts/conf.d/10-nixos-rendering.conf".text = renderConf;
|
||||
environment.etc."fonts/conf.d/60-nixos-generic-alias.conf".text = genericAliasConf;
|
||||
|
||||
# Versioned fontconfig > 2.10. Take shared fonts.conf from fontconfig.
|
||||
# Otherwise specify only font directories.
|
||||
@ -236,7 +240,8 @@ with lib;
|
||||
</fontconfig>
|
||||
'';
|
||||
|
||||
environment.etc."fonts/${pkgs.fontconfig.configVersion}/conf.d/98-nixos.conf".text = nixosConf;
|
||||
environment.etc."fonts/${pkgs.fontconfig.configVersion}/conf.d/10-nixos-rendering.conf".text = renderConf;
|
||||
environment.etc."fonts/${pkgs.fontconfig.configVersion}/conf.d/60-nixos-generic-alias.conf".text = genericAliasConf;
|
||||
|
||||
environment.etc."fonts/${pkgs.fontconfig.configVersion}/conf.d/99-user.conf" = {
|
||||
enable = fontconfig.includeUserConf;
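The rendering settings that feed `renderConf` are read from `config.fonts.fontconfig` (see the `let` binding above). A minimal, hedged configuration sketch exercising the options touched by this hunk, assuming the `subpixel` options sit under `fonts.fontconfig` as the surrounding code suggests:

```
{
  fonts.fontconfig.dpi = 96;
  fonts.fontconfig.subpixel.rgba = "rgb";
  fonts.fontconfig.subpixel.lcdfilter = "default";
  fonts.fontconfig.includeUserConf = true;
}
```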
|
||||
|
@ -31,6 +31,7 @@ with lib;
|
||||
pkgs.xorg.fontbh100dpi
|
||||
pkgs.xorg.fontmiscmisc
|
||||
pkgs.xorg.fontcursormisc
|
||||
pkgs.unifont
|
||||
];
|
||||
|
||||
};
|
||||
|
@ -43,7 +43,7 @@ in
|
||||
|
||||
consoleFont = mkOption {
|
||||
type = types.str;
|
||||
default = "lat9w-16";
|
||||
default = "Lat2-Terminus16";
|
||||
example = "LatArCyrHeb-16";
|
||||
description = ''
|
||||
The font used for the virtual consoles. Leave empty to use
|
||||
@ -52,6 +52,15 @@ in
|
||||
'';
|
||||
};
|
||||
|
||||
consoleUseXkbConfig = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
description = ''
|
||||
If set, configure the console keymap from the xserver keyboard
|
||||
settings.
|
||||
'';
|
||||
};
|
||||
|
||||
consoleKeyMap = mkOption {
|
||||
type = mkOptionType {
|
||||
name = "string or path";
|
||||
@ -74,6 +83,13 @@ in
|
||||
|
||||
config = {
|
||||
|
||||
i18n.consoleKeyMap = with config.services.xserver;
|
||||
mkIf config.i18n.consoleUseXkbConfig
|
||||
(pkgs.runCommand "xkb-console-keymap" { preferLocalBuild = true; } ''
|
||||
'${pkgs.ckbcomp}/bin/ckbcomp' -model '${xkbModel}' -layout '${layout}' \
|
||||
-option '${xkbOptions}' -variant '${xkbVariant}' > "$out"
|
||||
'');
|
||||
|
||||
environment.systemPackages =
|
||||
optional (config.i18n.supportedLocales != []) glibcLocales;
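With the new option, the console keymap no longer needs to be picked by hand when the X server keyboard is already configured. A hedged configuration sketch using the options added in this hunk:

```
{
  i18n.consoleFont = "Lat2-Terminus16";
  # Derive the console keymap from services.xserver.{layout,xkbVariant,xkbOptions}
  # via ckbcomp instead of setting i18n.consoleKeyMap directly.
  i18n.consoleUseXkbConfig = true;
}
```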
|
||||
|
||||
|
@ -48,7 +48,7 @@ in
|
||||
|
||||
config = mkIf config.krb5.enable {
|
||||
|
||||
environment.systemPackages = [ pkgs.krb5 ];
|
||||
environment.systemPackages = [ pkgs.krb5Full ];
|
||||
|
||||
environment.etc."krb5.conf".text =
|
||||
''
|
||||
|
@ -108,7 +108,7 @@ in
|
||||
|
||||
extraConfig = mkOption {
|
||||
default = "";
|
||||
type = types.string;
|
||||
type = types.lines;
|
||||
description = ''
|
||||
Extra configuration options that will be added verbatim at
|
||||
the end of the nslcd configuration file (nslcd.conf).
|
||||
@ -120,7 +120,7 @@ in
|
||||
distinguishedName = mkOption {
|
||||
default = "";
|
||||
example = "cn=admin,dc=example,dc=com";
|
||||
type = types.string;
|
||||
type = types.str;
|
||||
description = ''
|
||||
The distinguished name to bind to the LDAP server with. If this
|
||||
is not specified, an anonymous bind will be done.
|
||||
@ -129,7 +129,7 @@ in
|
||||
|
||||
password = mkOption {
|
||||
default = "/etc/ldap/bind.password";
|
||||
type = types.string;
|
||||
type = types.str;
|
||||
description = ''
|
||||
The path to a file containing the credentials to use when binding
|
||||
to the LDAP server (if not binding anonymously).
|
||||
@ -149,7 +149,7 @@ in
|
||||
|
||||
policy = mkOption {
|
||||
default = "hard_open";
|
||||
type = types.string;
|
||||
type = types.enum [ "hard_open" "hard_init" "soft" ];
|
||||
description = ''
|
||||
Specifies the policy to use for reconnecting to an unavailable
|
||||
LDAP server. The default is <literal>hard_open</literal>, which
|
||||
@ -168,7 +168,7 @@ in
|
||||
|
||||
extraConfig = mkOption {
|
||||
default = "";
|
||||
type = types.string;
|
||||
type = types.lines;
|
||||
description = ''
|
||||
Extra configuration options that will be added verbatim at
|
||||
the end of the ldap configuration file (ldap.conf).
|
||||
|
@ -39,6 +39,16 @@ in
|
||||
'';
|
||||
};
|
||||
|
||||
networking.extraResolvconfConf = lib.mkOption {
|
||||
type = types.lines;
|
||||
default = "";
|
||||
example = "libc=NO";
|
||||
description = ''
|
||||
Extra configuration to append to <filename>resolvconf.conf</filename>.
|
||||
'';
|
||||
};
|
||||
|
||||
|
||||
networking.proxy = {
|
||||
|
||||
default = lib.mkOption {
|
||||
@ -150,6 +160,7 @@ in
|
||||
'' + optionalString dnsmasqResolve ''
|
||||
dnsmasq_conf=/etc/dnsmasq-conf.conf
|
||||
dnsmasq_resolv=/etc/dnsmasq-resolv.conf
|
||||
'' + cfg.extraResolvconfConf + ''
|
||||
'';
|
||||
|
||||
} // (optionalAttrs config.services.resolved.enable (
|
||||
|
@ -12,7 +12,7 @@ let
|
||||
|
||||
# Forces 32bit pulseaudio and alsaPlugins to be built/supported for apps
|
||||
# using 32bit alsa on 64bit linux.
|
||||
enable32BitAlsaPlugins = stdenv.isx86_64 && (pkgs_i686.alsaLib != null && pkgs_i686.pulseaudio != null);
|
||||
enable32BitAlsaPlugins = cfg.support32Bit && stdenv.isx86_64 && (pkgs_i686.alsaLib != null && pkgs_i686.libpulseaudio != null);
|
||||
|
||||
ids = config.ids;
|
||||
|
||||
@ -78,6 +78,15 @@ in {
|
||||
'';
|
||||
};
|
||||
|
||||
support32Bit = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
description = ''
|
||||
Whether to include the 32-bit PulseAudio libraries in the system or not.
|
||||
This is only useful on 64-bit systems and currently limited to x86_64-linux.
|
||||
'';
|
||||
};
|
||||
|
||||
configFile = mkOption {
|
||||
type = types.path;
|
||||
description = ''
|
||||
@ -89,12 +98,12 @@ in {
|
||||
|
||||
package = mkOption {
|
||||
type = types.package;
|
||||
default = pulseaudioFull;
|
||||
default = pulseaudioLight;
|
||||
example = literalExample "pkgs.pulseaudioFull";
|
||||
description = ''
|
||||
The PulseAudio derivation to use. This can be used to disable
|
||||
features (such as JACK support, Bluetooth) that are enabled in the
|
||||
pulseaudioFull package in Nixpkgs.
|
||||
The PulseAudio derivation to use. This can be used to enable
|
||||
features (such as JACK support, Bluetooth) via the
|
||||
<literal>pulseaudioFull</literal> package.
|
||||
'';
|
||||
};
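A hedged sketch of how the new option and the changed default interact in a system configuration (assuming the options live under `hardware.pulseaudio`, which this excerpt does not show): the light package stays the default, and heavier features are opted into explicitly.

```
{ pkgs, ... }:
{
  hardware.pulseaudio.enable = true;
  # Only needed for 32-bit ALSA/PulseAudio clients on an x86_64 system.
  hardware.pulseaudio.support32Bit = true;
  # Opt back into the JACK/Bluetooth support that the light package omits.
  hardware.pulseaudio.package = pkgs.pulseaudioFull;
}
```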
|
||||
|
||||
|
@ -41,20 +41,7 @@ in
|
||||
strings. The latter is concatenated, interspersed with colon
|
||||
characters.
|
||||
'';
|
||||
type = types.attrsOf (mkOptionType {
|
||||
name = "a string or a list of strings";
|
||||
merge = loc: defs:
|
||||
let
|
||||
defs' = filterOverrides defs;
|
||||
res = (head defs').value;
|
||||
in
|
||||
if isList res then concatLists (getValues defs')
|
||||
else if lessThan 1 (length defs') then
|
||||
throw "The option `${showOption loc}' is defined multiple times, in ${showFiles (getFiles defs)}."
|
||||
else if !isString res then
|
||||
throw "The option `${showOption loc}' does not have a string value, in ${showFiles (getFiles defs)}."
|
||||
else res;
|
||||
});
|
||||
type = types.attrsOf (types.loeOf types.str);
|
||||
apply = mapAttrs (n: v: if isList v then concatStringsSep ":" v else v);
|
||||
};
|
||||
|
||||
@ -63,15 +50,15 @@ in
|
||||
description = ''
|
||||
A list of profiles used to setup the global environment.
|
||||
'';
|
||||
type = types.listOf types.string;
|
||||
type = types.listOf types.str;
|
||||
};
|
||||
|
||||
environment.profileRelativeEnvVars = mkOption {
|
||||
type = types.attrsOf (types.listOf types.str);
|
||||
example = { PATH = [ "/bin" "/sbin" ]; MANPATH = [ "/man" "/share/man" ]; };
|
||||
description = ''
|
||||
Attribute set of environment variables. Each attribute maps to a list
|
||||
of relative paths. Each relative path is appended to each profile
|
||||
Attribute set of environment variables. Each attribute maps to a list
|
||||
of relative paths. Each relative path is appended to each profile
|
||||
of <option>environment.profiles</option> to form the content of the
|
||||
corresponding environment variable.
|
||||
'';
|
||||
@ -136,6 +123,7 @@ in
|
||||
"''${pkgs.dash}/bin/dash"
|
||||
'';
|
||||
type = types.path;
|
||||
visible = false;
|
||||
description = ''
|
||||
The shell executable that is linked system-wide to
|
||||
<literal>/bin/sh</literal>. Please note that NixOS assumes all
|
||||
|
@ -23,20 +23,7 @@ in
|
||||
strings. The latter is concatenated, interspersed with colon
|
||||
characters.
|
||||
'';
|
||||
type = types.attrsOf (mkOptionType {
|
||||
name = "a string or a list of strings";
|
||||
merge = loc: defs:
|
||||
let
|
||||
defs' = filterOverrides defs;
|
||||
res = (head defs').value;
|
||||
in
|
||||
if isList res then concatLists (getValues defs')
|
||||
else if lessThan 1 (length defs') then
|
||||
throw "The option `${showOption loc}' is defined multiple times, in ${showFiles (getFiles defs)}."
|
||||
else if !isString res then
|
||||
throw "The option `${showOption loc}' does not have a string value, in ${showFiles (getFiles defs)}."
|
||||
else res;
|
||||
});
|
||||
type = types.attrsOf (types.loeOf types.str);
|
||||
apply = mapAttrs (n: v: if isList v then concatStringsSep ":" v else v);
|
||||
};
|
||||
|
||||
|
@ -38,13 +38,14 @@ let
|
||||
pkgs.nano
|
||||
pkgs.ncurses
|
||||
pkgs.netcat
|
||||
pkgs.openssh
|
||||
config.programs.ssh.package
|
||||
pkgs.perl
|
||||
pkgs.procps
|
||||
pkgs.rsync
|
||||
pkgs.strace
|
||||
pkgs.su
|
||||
pkgs.time
|
||||
pkgs.texinfoInteractive
|
||||
pkgs.utillinux
|
||||
extraManpages
|
||||
];
|
||||
@ -57,7 +58,7 @@ in
|
||||
environment = {
|
||||
|
||||
systemPackages = mkOption {
|
||||
type = types.listOf types.path;
|
||||
type = types.listOf types.package;
|
||||
default = [];
|
||||
example = literalExample "[ pkgs.firefox pkgs.thunderbird ]";
|
||||
description = ''
|
||||
@ -102,15 +103,24 @@ in
|
||||
[ "/bin"
|
||||
"/etc/xdg"
|
||||
"/info"
|
||||
"/lib"
|
||||
"/lib" # FIXME: remove
|
||||
#"/lib/debug/.build-id" # enables GDB to find separated debug info
|
||||
"/man"
|
||||
"/sbin"
|
||||
"/share/applications"
|
||||
"/share/desktop-directories"
|
||||
"/share/doc"
|
||||
"/share/emacs"
|
||||
"/share/vim-plugins"
|
||||
"/share/org"
|
||||
"/share/icons"
|
||||
"/share/info"
|
||||
"/share/terminfo"
|
||||
"/share/man"
|
||||
"/share/menus"
|
||||
"/share/mime"
|
||||
"/share/nano"
|
||||
"/share/org"
|
||||
"/share/terminfo"
|
||||
"/share/themes"
|
||||
"/share/vim-plugins"
|
||||
];
|
||||
|
||||
system.path = pkgs.buildEnv {
|
||||
@ -144,6 +154,13 @@ in
|
||||
if [ -x $out/bin/update-desktop-database -a -w $out/share/applications ]; then
|
||||
$out/bin/update-desktop-database $out/share/applications
|
||||
fi
|
||||
|
||||
if [ -x $out/bin/install-info -a -w $out/share/info ]; then
|
||||
shopt -s nullglob
|
||||
for i in $out/share/info/*.info $out/share/info/*.info.gz; do
|
||||
$out/bin/install-info $i $out/share/info/dir
|
||||
done
|
||||
fi
|
||||
'';
|
||||
};
|
||||
|
||||
|
@ -26,6 +26,7 @@ in
|
||||
|
||||
hardwareClockInLocalTime = mkOption {
|
||||
default = false;
|
||||
type = types.bool;
|
||||
description = "If set, keep the hardware clock in local time instead of UTC.";
|
||||
};
|
||||
|
||||
|
@ -108,6 +108,15 @@ let
|
||||
description = "The user's home directory.";
|
||||
};
|
||||
|
||||
cryptHomeLuks = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
Path to the encrypted LUKS device that contains
|
||||
the user's home directory.
|
||||
'';
|
||||
};
|
||||
|
||||
shell = mkOption {
|
||||
type = types.str;
|
||||
default = "/run/current-system/sw/bin/nologin";
|
||||
@ -207,7 +216,7 @@ let
|
||||
exist. If <option>users.mutableUsers</option> is true, the
|
||||
password can be changed subsequently using the
|
||||
<command>passwd</command> command. Otherwise, it's
|
||||
equivalent to setting the <option>password</option> option.
|
||||
equivalent to setting the <option>hashedPassword</option> option.
|
||||
|
||||
${hashedPasswordDescription}
|
||||
'';
|
||||
@ -327,13 +336,13 @@ let
|
||||
map (range: "${user.name}:${toString range.startUid}:${toString range.count}\n")
|
||||
user.subUidRanges);
|
||||
|
||||
subuidFile = concatStrings (map mkSubuidEntry (attrValues cfg.extraUsers));
|
||||
subuidFile = concatStrings (map mkSubuidEntry (attrValues cfg.users));
|
||||
|
||||
mkSubgidEntry = user: concatStrings (
|
||||
map (range: "${user.name}:${toString range.startGid}:${toString range.count}\n")
|
||||
user.subGidRanges);
|
||||
|
||||
subgidFile = concatStrings (map mkSubgidEntry (attrValues cfg.extraUsers));
|
||||
subgidFile = concatStrings (map mkSubgidEntry (attrValues cfg.users));
|
||||
|
||||
idsAreUnique = set: idAttr: !(fold (name: args@{ dup, acc }:
|
||||
let
|
||||
@ -345,8 +354,8 @@ let
|
||||
else { dup = false; acc = newAcc; }
|
||||
) { dup = false; acc = {}; } (builtins.attrNames set)).dup;
|
||||
|
||||
uidsAreUnique = idsAreUnique (filterAttrs (n: u: u.uid != null) cfg.extraUsers) "uid";
|
||||
gidsAreUnique = idsAreUnique (filterAttrs (n: g: g.gid != null) cfg.extraGroups) "gid";
|
||||
uidsAreUnique = idsAreUnique (filterAttrs (n: u: u.uid != null) cfg.users) "uid";
|
||||
gidsAreUnique = idsAreUnique (filterAttrs (n: g: g.gid != null) cfg.groups) "gid";
|
||||
|
||||
spec = pkgs.writeText "users-groups.json" (builtins.toJSON {
|
||||
inherit (cfg) mutableUsers;
|
||||
@ -355,13 +364,13 @@ let
|
||||
name uid group description home shell createHome isSystemUser
|
||||
password passwordFile hashedPassword
|
||||
initialPassword initialHashedPassword;
|
||||
}) cfg.extraUsers;
|
||||
}) cfg.users;
|
||||
groups = mapAttrsToList (n: g:
|
||||
{ inherit (g) name gid;
|
||||
members = g.members ++ (mapAttrsToList (n: u: u.name) (
|
||||
filterAttrs (n: u: elem g.name u.extraGroups) cfg.extraUsers
|
||||
filterAttrs (n: u: elem g.name u.extraGroups) cfg.users
|
||||
));
|
||||
}) cfg.extraGroups;
|
||||
}) cfg.groups;
|
||||
});
|
||||
|
||||
in {
|
||||
@ -379,10 +388,10 @@ in {
|
||||
<literal>groupadd</literal> commands. On system activation, the
|
||||
existing contents of the <literal>/etc/passwd</literal> and
|
||||
<literal>/etc/group</literal> files will be merged with the
|
||||
contents generated from the <literal>users.extraUsers</literal> and
|
||||
<literal>users.extraGroups</literal> options.
|
||||
contents generated from the <literal>users.users</literal> and
|
||||
<literal>users.groups</literal> options.
|
||||
The initial password for a user will be set
|
||||
according to <literal>users.extraUsers</literal>, but existing passwords
|
||||
according to <literal>users.users</literal>, but existing passwords
|
||||
will not be changed.
|
||||
|
||||
<warning><para>
|
||||
@ -390,7 +399,7 @@ in {
|
||||
group files will simply be replaced on system activation. This also
|
||||
holds for the user passwords; all changed
|
||||
passwords will be reset according to the
|
||||
<literal>users.extraUsers</literal> configuration on activation.
|
||||
<literal>users.users</literal> configuration on activation.
|
||||
</para></warning>
|
||||
'';
|
||||
};
|
||||
@ -403,7 +412,7 @@ in {
|
||||
'';
|
||||
};
|
||||
|
||||
users.extraUsers = mkOption {
|
||||
users.users = mkOption {
|
||||
default = {};
|
||||
type = types.loaOf types.optionSet;
|
||||
example = {
|
||||
@ -424,7 +433,7 @@ in {
|
||||
options = [ userOpts ];
|
||||
};
|
||||
|
||||
users.extraGroups = mkOption {
|
||||
users.groups = mkOption {
|
||||
default = {};
|
||||
example =
|
||||
{ students.gid = 1001;
|
||||
@ -452,7 +461,7 @@ in {
|
||||
|
||||
config = {
|
||||
|
||||
users.extraUsers = {
|
||||
users.users = {
|
||||
root = {
|
||||
uid = ids.uids.root;
|
||||
description = "System administrator";
|
||||
@ -469,7 +478,7 @@ in {
|
||||
};
|
||||
};
|
||||
|
||||
users.extraGroups = {
|
||||
users.groups = {
|
||||
root.gid = ids.gids.root;
|
||||
wheel.gid = ids.gids.wheel;
|
||||
disk.gid = ids.gids.disk;
|
||||
@ -516,6 +525,27 @@ in {
|
||||
{ assertion = !cfg.enforceIdUniqueness || (uidsAreUnique && gidsAreUnique);
|
||||
message = "UIDs and GIDs must be unique!";
|
||||
}
|
||||
{ # If mutableUsers is false, to prevent users creating a
|
||||
# configuration that locks them out of the system, ensure that
|
||||
# there is at least one "privileged" account that has a
|
||||
# password or an SSH authorized key. Privileged accounts are
|
||||
# root and users in the wheel group.
|
||||
assertion = !cfg.mutableUsers ->
|
||||
any id (mapAttrsToList (name: cfg:
|
||||
(name == "root"
|
||||
|| cfg.group == "wheel"
|
||||
|| elem "wheel" cfg.extraGroups)
|
||||
&&
|
||||
((cfg.hashedPassword != null && cfg.hashedPassword != "!")
|
||||
|| cfg.password != null
|
||||
|| cfg.passwordFile != null
|
||||
|| cfg.openssh.authorizedKeys.keys != []
|
||||
|| cfg.openssh.authorizedKeys.keyFiles != [])
|
||||
) cfg.users);
|
||||
message = ''
|
||||
Neither the root account nor any wheel user has a password or SSH authorized key.
|
||||
You must set one to prevent being locked out of your system.'';
|
||||
}
|
||||
];
|
||||
|
||||
};
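With the rename from `users.extraUsers`/`users.extraGroups` to `users.users`/`users.groups`, a configuration written against the new names might look like the following hedged sketch (field names are taken from the option definitions above; the password hash is a placeholder). Note the assertion added in this hunk: when `mutableUsers = false`, at least the root account or one wheel user must keep a password or SSH key.

```
{
  users.mutableUsers = false;

  users.users.alice = {
    uid = 1000;
    description = "Alice";
    home = "/home/alice";
    createHome = true;
    extraGroups = [ "wheel" ];
    # Required by the new assertion when mutableUsers = false:
    hashedPassword = "<hash produced by mkpasswd>";
  };

  users.groups.students.gid = 1001;
}
```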
|
||||
|
@ -22,9 +22,7 @@ with lib;
|
||||
###### implementation
|
||||
|
||||
config = mkIf config.hardware.enableAllFirmware {
|
||||
hardware.firmware = [
|
||||
"${pkgs.firmwareLinuxNonfree}/lib/firmware"
|
||||
];
|
||||
hardware.firmware = [ pkgs.firmwareLinuxNonfree ];
|
||||
};
|
||||
|
||||
}
|
||||
|
@ -26,7 +26,7 @@ in
|
||||
hardware.bumblebee.group = mkOption {
|
||||
default = "wheel";
|
||||
example = "video";
|
||||
type = types.uniq types.str;
|
||||
type = types.str;
|
||||
description = ''Group for bumblebee socket'';
|
||||
};
|
||||
hardware.bumblebee.connectDisplay = mkOption {
|
||||
@ -52,9 +52,9 @@ in
|
||||
systemd.services.bumblebeed = {
|
||||
description = "Bumblebee Hybrid Graphics Switcher";
|
||||
wantedBy = [ "display-manager.service" ];
|
||||
script = "bumblebeed --use-syslog -g ${config.hardware.bumblebee.group}";
|
||||
path = [ kernel.bbswitch bumblebee ];
|
||||
serviceConfig = {
|
||||
ExecStart = "${bumblebee}/bin/bumblebeed --use-syslog -g ${config.hardware.bumblebee.group}";
|
||||
Restart = "always";
|
||||
RestartSec = 60;
|
||||
CPUSchedulingPolicy = "idle";
|
||||
|
@ -5,11 +5,11 @@ let
|
||||
in
|
||||
|
||||
{
|
||||
boot.extraModulePackages = [wis_go7007];
|
||||
boot.extraModulePackages = [ wis_go7007 ];
|
||||
|
||||
environment.systemPackages = [wis_go7007];
|
||||
environment.systemPackages = [ wis_go7007 ];
|
||||
|
||||
hardware.firmware = ["${wis_go7007}/firmware"];
|
||||
hardware.firmware = [ wis_go7007 ];
|
||||
|
||||
services.udev.packages = [wis_go7007];
|
||||
services.udev.packages = [ wis_go7007 ];
|
||||
}
|
||||
|
@ -9,18 +9,17 @@ let
|
||||
|
||||
# We need a copy of the Nix expressions for Nixpkgs and NixOS on the
|
||||
# CD. These are installed into the "nixos" channel of the root
|
||||
# user, as expected by nixos-rebuild/nixos-install.
|
||||
# user, as expected by nixos-rebuild/nixos-install. FIXME: merge
|
||||
# with make-channel.nix.
|
||||
channelSources = pkgs.runCommand "nixos-${config.system.nixosVersion}"
|
||||
{ expr = readFile ../../../lib/channel-expr.nix; }
|
||||
{ }
|
||||
''
|
||||
mkdir -p $out/nixos
|
||||
cp -prd ${pkgs.path} $out/nixos/nixpkgs
|
||||
ln -s nixpkgs/nixos $out/nixos/nixos
|
||||
mkdir -p $out
|
||||
cp -prd ${pkgs.path} $out/nixos
|
||||
chmod -R u+w $out/nixos
|
||||
rm -rf $out/nixos/nixpkgs/.git
|
||||
echo -n ${config.system.nixosVersion} > $out/nixos/nixpkgs/.version
|
||||
echo -n "" > $out/nixos/nixpkgs/.version-suffix
|
||||
echo "$expr" > $out/nixos/default.nix
|
||||
ln -s . $out/nixos/nixpkgs
|
||||
rm -rf $out/nixos/.git
|
||||
echo -n ${config.system.nixosVersionSuffix} > $out/nixos/.version-suffix
|
||||
'';
|
||||
|
||||
in
|
||||
@ -34,7 +33,7 @@ in
|
||||
echo "unpacking the NixOS/Nixpkgs sources..."
|
||||
mkdir -p /nix/var/nix/profiles/per-user/root
|
||||
${config.nix.package}/bin/nix-env -p /nix/var/nix/profiles/per-user/root/channels \
|
||||
-i ${channelSources} --quiet --option use-substitutes false
|
||||
-i ${channelSources} --quiet --option build-use-substitutes false
|
||||
mkdir -m 0700 -p /root/.nix-defexpr
|
||||
ln -s /nix/var/nix/profiles/per-user/root/channels /root/.nix-defexpr/channels
|
||||
mkdir -m 0755 -p /var/lib/nixos
|
||||
|
@ -7,8 +7,7 @@ with lib;
|
||||
|
||||
{
|
||||
imports =
|
||||
[ ./channel.nix
|
||||
./iso-image.nix
|
||||
[ ./iso-image.nix
|
||||
|
||||
# Profiles of this basic installation CD.
|
||||
../../profiles/all-hardware.nix
|
||||
@ -21,18 +20,6 @@ with lib;
|
||||
|
||||
isoImage.volumeID = substring 0 11 "NIXOS_ISO";
|
||||
|
||||
# Make the installer more likely to succeed in low memory
|
||||
# environments. The kernel's overcommit heustistics bite us
|
||||
# fairly often, preventing processes such as nix-worker or
|
||||
# download-using-manifests.pl from forking even if there is
|
||||
# plenty of free memory.
|
||||
boot.kernel.sysctl."vm.overcommit_memory" = "1";
|
||||
|
||||
# To speed up installation a little bit, include the complete stdenv
|
||||
# in the Nix store on the CD. Archive::Cpio is needed for the
|
||||
# initrd builder.
|
||||
isoImage.storeContents = [ pkgs.stdenv pkgs.busybox pkgs.perlPackages.ArchiveCpio ];
|
||||
|
||||
# EFI booting
|
||||
isoImage.makeEfiBootable = true;
|
||||
|
||||
@ -42,9 +29,6 @@ with lib;
|
||||
# Add Memtest86+ to the CD.
|
||||
boot.loader.grub.memtest86.enable = true;
|
||||
|
||||
# Get a console as soon as the initrd loads fbcon on EFI boot.
|
||||
boot.initrd.kernelModules = [ "fbcon" ];
|
||||
|
||||
# Allow the user to log in as root without a password.
|
||||
users.extraUsers.root.initialHashedPassword = "";
|
||||
}
|
||||
|
@ -11,9 +11,16 @@ with lib;
|
||||
# Provide wicd for easy wireless configuration.
|
||||
#networking.wicd.enable = true;
|
||||
|
||||
# Include gparted for partitioning disks
|
||||
environment.systemPackages = [ pkgs.gparted ];
|
||||
|
||||
environment.systemPackages =
|
||||
[ # Include gparted for partitioning disks.
|
||||
pkgs.gparted
|
||||
|
||||
# Include some editors.
|
||||
pkgs.vim
|
||||
pkgs.bvi # binary editor
|
||||
pkgs.joe
|
||||
];
|
||||
|
||||
# Provide networkmanager for easy wireless configuration.
|
||||
networking.networkmanager.enable = true;
|
||||
networking.wireless.enable = mkForce false;
|
||||
@ -67,7 +74,7 @@ with lib;
|
||||
loadTemplate("org.kde.plasma-desktop.defaultPanel")
|
||||
|
||||
for (var i = 0; i < screenCount; ++i) {
|
||||
var desktop = new Activity
|
||||
var desktop = new Activity
|
||||
desktop.name = i18n("Desktop")
|
||||
desktop.screen = i
|
||||
desktop.wallpaperPlugin = 'image'
|
||||
@ -75,7 +82,7 @@ with lib;
|
||||
|
||||
var folderview = desktop.addWidget("folderview");
|
||||
folderview.writeConfig("url", "desktop:/");
|
||||
|
||||
|
||||
//Create more panels for other screens
|
||||
if (i > 0){
|
||||
var panel = new Panel
|
||||
|
@ -1,7 +1,7 @@
|
||||
# This module defines a small NixOS installation CD. It does not
|
||||
# contain any graphical stuff.
|
||||
|
||||
{ config, pkgs, ... }:
|
||||
{ config, lib, ... }:
|
||||
|
||||
{
|
||||
imports =
|
||||
|
@ -30,8 +30,7 @@ let
|
||||
# * COM32 entries (chainload, reboot, poweroff) are not recognized. They
|
||||
# result in incorrect boot entries.
|
||||
|
||||
baseIsolinuxCfg =
|
||||
''
|
||||
baseIsolinuxCfg = ''
|
||||
SERIAL 0 38400
|
||||
TIMEOUT ${builtins.toString syslinuxTimeout}
|
||||
UI vesamenu.c32
|
||||
@ -40,11 +39,11 @@ let
|
||||
DEFAULT boot
|
||||
|
||||
LABEL boot
|
||||
MENU LABEL NixOS ${config.system.nixosVersion} Installer
|
||||
MENU LABEL NixOS ${config.system.nixosVersion}${config.isoImage.appendToMenuLabel}
|
||||
LINUX /boot/bzImage
|
||||
APPEND init=${config.system.build.toplevel}/init ${toString config.boot.kernelParams}
|
||||
INITRD /boot/initrd
|
||||
'';
|
||||
'';
|
||||
|
||||
isolinuxMemtest86Entry = ''
|
||||
LABEL memtest
|
||||
@ -55,12 +54,12 @@ let
|
||||
|
||||
isolinuxCfg = baseIsolinuxCfg + (optionalString config.boot.loader.grub.memtest86.enable isolinuxMemtest86Entry);
|
||||
|
||||
# The efi boot image
|
||||
# The EFI boot image.
|
||||
efiDir = pkgs.runCommand "efi-directory" {} ''
|
||||
mkdir -p $out/EFI/boot
|
||||
cp -v ${pkgs.gummiboot}/lib/gummiboot/gummiboot${targetArch}.efi $out/EFI/boot/boot${targetArch}.efi
|
||||
mkdir -p $out/loader/entries
|
||||
echo "title NixOS LiveCD" > $out/loader/entries/nixos-livecd.conf
|
||||
echo "title NixOS Live CD" > $out/loader/entries/nixos-livecd.conf
|
||||
echo "linux /boot/bzImage" >> $out/loader/entries/nixos-livecd.conf
|
||||
echo "initrd /boot/initrd" >> $out/loader/entries/nixos-livecd.conf
|
||||
echo "options init=${config.system.build.toplevel}/init ${toString config.boot.kernelParams}" >> $out/loader/entries/nixos-livecd.conf
|
||||
@ -105,7 +104,7 @@ in
|
||||
options = {
|
||||
|
||||
isoImage.isoName = mkOption {
|
||||
default = "${config.isoImage.isoName}.iso";
|
||||
default = "${config.isoImage.isoBaseName}.iso";
|
||||
description = ''
|
||||
Name of the generated ISO image file.
|
||||
'';
|
||||
@ -192,6 +191,18 @@ in
|
||||
'';
|
||||
};
|
||||
|
||||
isoImage.appendToMenuLabel = mkOption {
|
||||
default = " Installer";
|
||||
example = " Live System";
|
||||
description = ''
|
||||
The string to append after the menu label for the NixOS system.
|
||||
This will be directly appended (without whitespace) to the NixOS version
|
||||
string; for example, if it is set to <literal>XXX</literal>:
|
||||
|
||||
<para><literal>NixOS 99.99-pre666XXX</literal></para>
|
||||
'';
|
||||
};
|
||||
|
||||
};
|
||||
|
||||
config = {
|
||||
@ -204,7 +215,9 @@ in
|
||||
|
||||
# !!! Hack - attributes expected by other modules.
|
||||
system.boot.loader.kernelFile = "bzImage";
|
||||
environment.systemPackages = [ pkgs.grub2 pkgs.syslinux ];
|
||||
environment.systemPackages = [ pkgs.grub2 pkgs.grub2_efi pkgs.syslinux ];
|
||||
|
||||
boot.consoleLogLevel = mkDefault 7;
|
||||
|
||||
# In stage 1 of the boot, mount the CD as the root FS by label so
|
||||
# that we don't need to know its device. We pass the label of the
|
||||
@ -217,6 +230,7 @@ in
|
||||
boot.kernelParams =
|
||||
[ "root=LABEL=${config.isoImage.volumeID}"
|
||||
"boot.shell_on_fail"
|
||||
"nomodeset"
|
||||
];
|
||||
|
||||
fileSystems."/" =
|
||||
@ -256,6 +270,8 @@ in
|
||||
|
||||
boot.initrd.availableKernelModules = [ "squashfs" "iso9660" "usb-storage" ];
|
||||
|
||||
boot.blacklistedKernelModules = [ "nouveau" ];
|
||||
|
||||
boot.initrd.kernelModules = [ "loop" ];
|
||||
|
||||
# Closures to be copied to the Nix store on the CD, namely the init
|
||||
|
@ -0,0 +1,40 @@
|
||||
{ config, lib, pkgs, ... }:
|
||||
|
||||
let
|
||||
extlinux-conf-builder =
|
||||
import ../../system/boot/loader/generic-extlinux-compatible/extlinux-conf-builder.nix {
|
||||
inherit pkgs;
|
||||
};
|
||||
in
|
||||
{
|
||||
imports = [
|
||||
../../profiles/minimal.nix
|
||||
../../profiles/installation-device.nix
|
||||
./sd-image.nix
|
||||
];
|
||||
|
||||
assertions = lib.singleton {
|
||||
assertion = pkgs.stdenv.system == "armv7l-linux";
|
||||
message = "sd-image-armv7l-multiplatform.nix can be only built natively on ARMv7; " +
|
||||
"it cannot be cross compiled";
|
||||
};
|
||||
|
||||
boot.loader.grub.enable = false;
|
||||
boot.loader.generic-extlinux-compatible.enable = true;
|
||||
|
||||
# FIXME: change this to linuxPackages_latest once v4.2 is out
|
||||
boot.kernelPackages = pkgs.linuxPackages_testing;
|
||||
boot.kernelParams = ["console=ttyS0,115200n8" "console=ttyAMA0,115200n8" "console=tty0"];
|
||||
|
||||
# FIXME: fix manual evaluation on ARM
|
||||
services.nixosManual.enable = lib.mkOverride 0 false;
|
||||
|
||||
# FIXME: this probably should be in installation-device.nix
|
||||
users.extraUsers.root.initialHashedPassword = "";
|
||||
|
||||
sdImage = {
|
||||
populateBootCommands = ''
|
||||
${extlinux-conf-builder} -t 3 -c ${config.system.build.toplevel} -d ./boot
|
||||
'';
|
||||
};
|
||||
}
|
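As a usage sketch (not part of this diff): since sd-image.nix below exposes the generated image as config.system.build.sdImage, a module like the one above can be built on a native ARMv7 machine roughly as follows; the exact invocation is an assumption based on the usual NixOS entry points, not something spelled out in this commit:

```
% nix-build '<nixpkgs/nixos>' -A config.system.build.sdImage \
    -I nixos-config='<nixpkgs/nixos/modules/installer/cd-dvd/sd-image-armv7l-multiplatform.nix>'
```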
46 nixos/modules/installer/cd-dvd/sd-image-raspberrypi.nix Normal file
@ -0,0 +1,46 @@
{ config, lib, pkgs, ... }:

let
  extlinux-conf-builder =
    import ../../system/boot/loader/generic-extlinux-compatible/extlinux-conf-builder.nix {
      inherit pkgs;
    };
in
{
  imports = [
    ../../profiles/minimal.nix
    ../../profiles/installation-device.nix
    ./sd-image.nix
  ];

  assertions = lib.singleton {
    assertion = pkgs.stdenv.system == "armv6l-linux";
    message = "sd-image-raspberrypi.nix can only be built natively on ARMv6; " +
      "it cannot be cross compiled";
  };

  # Needed by RPi firmware
  nixpkgs.config.allowUnfree = true;

  boot.loader.grub.enable = false;
  boot.loader.generic-extlinux-compatible.enable = true;

  boot.kernelPackages = pkgs.linuxPackages_rpi;

  # FIXME: fix manual evaluation on ARM
  services.nixosManual.enable = lib.mkOverride 0 false;

  # FIXME: this probably should be in installation-device.nix
  users.extraUsers.root.initialHashedPassword = "";

  sdImage = {
    populateBootCommands = ''
      for f in bootcode.bin fixup.dat start.elf; do
        cp ${pkgs.raspberrypifw}/share/raspberrypi/boot/$f boot/
      done
      cp ${pkgs.ubootRaspberryPi}/u-boot.bin boot/u-boot-rpi.bin
      echo 'kernel u-boot-rpi.bin' > boot/config.txt
      ${extlinux-conf-builder} -t 3 -c ${config.system.build.toplevel} -d ./boot
    '';
  };
}
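For reference, the populateBootCommands above leave the staged ./boot directory looking roughly like this before it is copied into the FAT partition; the extlinux-conf-builder adds further files whose exact layout is not shown in this diff:

```
% ls boot/
bootcode.bin  config.txt  fixup.dat  start.elf  u-boot-rpi.bin  …
```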
127 nixos/modules/installer/cd-dvd/sd-image.nix Normal file
@ -0,0 +1,127 @@
# This module creates a bootable SD card image containing the given NixOS
# configuration. The generated image is MBR partitioned, with a FAT /boot
# partition and an ext4 root partition. The generated image is sized to fit
# its contents, and a boot script automatically resizes the root partition
# to fit the device on the first boot.
#
# The derivation for the SD image will be placed in
# config.system.build.sdImage

{ config, lib, pkgs, ... }:

with lib;

let
  rootfsImage = import ../../../lib/make-ext4-fs.nix {
    inherit pkgs;
    inherit (config.sdImage) storePaths;
    volumeLabel = "NIXOS_SD";
  };
in
{
  options.sdImage = {
    storePaths = mkOption {
      type = with types; listOf package;
      example = literalExample "[ pkgs.stdenv ]";
      description = ''
        Derivations to be included in the Nix store in the generated SD image.
      '';
    };

    bootSize = mkOption {
      type = types.int;
      default = 128;
      description = ''
        Size of the /boot partition, in megabytes.
      '';
    };

    populateBootCommands = mkOption {
      example = literalExample "'' cp \${pkgs.myBootLoader}/u-boot.bin boot/ ''";
      description = ''
        Shell commands to populate the ./boot directory.
        All files in that directory are copied to the
        /boot partition on the SD image.
      '';
    };
  };

  config = {
    fileSystems = {
      "/boot" = {
        device = "/dev/disk/by-label/NIXOS_BOOT";
        fsType = "vfat";
      };
      "/" = {
        device = "/dev/disk/by-label/NIXOS_SD";
        fsType = "ext4";
      };
    };

    sdImage.storePaths = [ config.system.build.toplevel ];

    system.build.sdImage = pkgs.stdenv.mkDerivation {
      name = "sd-image-${pkgs.stdenv.system}.img";

      buildInputs = with pkgs; [ dosfstools e2fsprogs mtools libfaketime utillinux ];

      buildCommand = ''
        # Create the image file sized to fit /boot and /, plus 4M of slack
        rootSizeBlocks=$(du -B 512 --apparent-size ${rootfsImage} | awk '{ print $1 }')
        bootSizeBlocks=$((${toString config.sdImage.bootSize} * 1024 * 1024 / 512))
        imageSize=$((rootSizeBlocks * 512 + bootSizeBlocks * 512 + 4096 * 1024))
        truncate -s $imageSize $out

        # type=b is 'W95 FAT32', type=83 is 'Linux'.
        sfdisk $out <<EOF
            label: dos
            label-id: 0x2178694e

            start=1M, size=$bootSizeBlocks, type=b, bootable
            type=83
        EOF

        # Copy the rootfs into the SD image
        eval $(partx $out -o START,SECTORS --nr 2 --pairs)
        dd conv=notrunc if=${rootfsImage} of=$out seek=$START count=$SECTORS

        # Create a FAT32 /boot partition of suitable size into bootpart.img
        eval $(partx $out -o START,SECTORS --nr 1 --pairs)
        truncate -s $((SECTORS * 512)) bootpart.img
        faketime "1970-01-01 00:00:00" mkfs.vfat -i 0x2178694e -n NIXOS_BOOT bootpart.img

        # Populate the files intended for /boot
        mkdir boot
        ${config.sdImage.populateBootCommands}

        # Copy the populated /boot into the SD image
        (cd boot; mcopy -bpsvm -i ../bootpart.img ./* ::)
        dd conv=notrunc if=bootpart.img of=$out seek=$START count=$SECTORS
      '';
    };

    boot.postBootCommands = ''
      # On the first boot do some maintenance tasks
      if [ -f /nix-path-registration ]; then
        # Figure out device names for the boot device and root filesystem.
        rootPart=$(readlink -f /dev/disk/by-label/NIXOS_SD)
        bootDevice=$(lsblk -npo PKNAME $rootPart)

        # Resize the root partition and the filesystem to fit the disk
        echo ",+," | sfdisk -N2 --no-reread $bootDevice
        ${pkgs.parted}/bin/partprobe
        ${pkgs.e2fsprogs}/bin/resize2fs $rootPart

        # Register the contents of the initial Nix store
        ${config.nix.package}/bin/nix-store --load-db < /nix-path-registration

        # nixos-rebuild also requires a "system" profile and an /etc/NIXOS tag.
        touch /etc/NIXOS
        ${config.nix.package}/bin/nix-env -p /nix/var/nix/profiles/system --set /run/current-system

        # Prevent this from running on later boots.
        rm -f /nix-path-registration
      fi
    '';
  };
}
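To make the sizing arithmetic in buildCommand concrete, here is a worked example under assumed numbers (a 300 MiB root filesystem image and the default 128 MB /boot partition; the figures are purely illustrative):

```
rootSizeBlocks=614400                        # 300 MiB in 512-byte sectors (assumed input)
bootSizeBlocks=$((128 * 1024 * 1024 / 512))  # 262144 sectors
imageSize=$((rootSizeBlocks * 512 + bootSizeBlocks * 512 + 4096 * 1024))
echo $imageSize                              # 452984832 bytes = 300 MiB + 128 MiB + 4 MiB slack
```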
Some files were not shown because too many files have changed in this diff.