Functions reference
The nixpkgs repository has several utility functions to manipulate Nix expressions.
pkgs.overridePackages
This function inside the nixpkgs expression (pkgs)
can be used to override the set of packages itself.
Warning: this function is expensive and must not be used from within
the nixpkgs repository.
Example usage:
let
pkgs = import <nixpkgs> {};
newpkgs = pkgs.overridePackages (self: super: {
foo = super.foo.override { ... };
});
in ...
The resulting newpkgs will have the new foo
expression, and all other expressions depending on foo will also
use the new foo expression.
The behavior of this function is similar to config.packageOverrides.
The self parameter refers to the final package set with the
applied overrides. Careless use of this parameter may lead to infinite recursion.
The super parameter refers to the old package set.
It's equivalent to pkgs in the above example.
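For instance, a minimal sketch of how self and super interact (the packages foo and bar and the argument enableBar are hypothetical):
let
  pkgs = import <nixpkgs> {};
  newpkgs = pkgs.overridePackages (self: super: {
    # super.foo is the original, un-overridden expression;
    # self.bar refers to bar from the final, overridden package set.
    foo = super.foo.override { bar = self.bar; };
    bar = super.bar.override { enableBar = true; };
  });
in newpkgs.foo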
<pkg>.override
The function override is usually available for all the
derivations in the nixpkgs expression (pkgs).
It is used to override the arguments passed to a function.
Example usages:
pkgs.foo.override { arg1 = val1; arg2 = val2; ... }

pkgs.overridePackages (self: super: {
foo = super.foo.override { barSupport = true; };
})

mypkg = pkgs.callPackage ./mypkg.nix {
mydep = pkgs.mydep.override { ... };
}
In the first example, pkgs.foo is the result of a function call
with some default arguments, usually a derivation.
Using pkgs.foo.override will call the same function with
the given new arguments.
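For example, combined with config.packageOverrides mentioned above, a ~/.nixpkgs/config.nix could look like the following sketch (the package frobnicator and its withGui argument are hypothetical):
{
  packageOverrides = pkgs: {
    frobnicator = pkgs.frobnicator.override { withGui = false; };
  };
}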
<pkg>.overrideDerivation
Warning: do not use this function in Nixpkgs. Because it breaks
package abstraction and doesn't provide error checking for
function arguments, it is only intended for ad-hoc customisation
(such as in ~/.nixpkgs/config.nix).
The function overrideDerivation is usually available for all the
derivations in the nixpkgs expression (pkgs).
It is used to create a new derivation by overriding the attributes of
the original derivation according to the given function.
Example usage:
mySed = pkgs.gnused.overrideDerivation (oldAttrs: {
name = "sed-4.2.2-pre";
src = fetchurl {
url = "ftp://alpha.gnu.org/gnu/sed/sed-4.2.2-pre.tar.bz2";
sha256 = "11nq06d131y4wmf3drm0yk502d2xc6n5qy82cg88rb9nqd2lj41k";
};
patches = [];
});
In the above example, the name, src and patches of the derivation
will be overridden, while all other attributes will be retained from the
original derivation.
The argument oldAttrs is used to refer to the attribute set of
the original derivation.
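As another sketch, oldAttrs can be used to extend an attribute rather than replace it, for example to append a patch (the local file ./my-extra.patch is hypothetical):
myHello = pkgs.hello.overrideDerivation (oldAttrs: {
  # keep the original patches (if any) and add one more
  patches = (oldAttrs.patches or []) ++ [ ./my-extra.patch ];
});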
lib.makeOverridable
The function lib.makeOverridable is used to make the result
of a function easily customizable. This utility only makes sense for functions
that accept an argument set and return an attribute set.
Example usage:
f = { a, b }: { result = a+b; }
c = lib.makeOverridable f { a = 1; b = 2; }
The variable c is the value of the f function
applied with some default arguments. Hence the value of c.result
is 3, in this example.
The variable c, however, also has some additional functions, such as
c.override, which can be used to
override the default arguments. In this example, the value of
(c.override { a = 4; }).result is 6.
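Putting it together, a self-contained sketch of the same example:
let
  lib = (import <nixpkgs> {}).lib;
  f = { a, b }: { result = a + b; };
  c = lib.makeOverridable f { a = 1; b = 2; };
in {
  original = c.result;                         # 3
  overridden = (c.override { a = 4; }).result; # 6
}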
buildFHSChrootEnv/buildFHSUserEnv
buildFHSChrootEnv and
buildFHSUserEnv provide a way to build and run
FHS-compatible lightweight sandboxes. They get their own isolated root with
a bind-mounted /nix/store, so their footprint in terms of disk
space is quite small. This allows one to run software which is hard or
infeasible to patch for NixOS -- 3rd-party source trees with FHS assumptions,
games distributed as tarballs, software with integrity checking and/or external
self-updating binaries.
buildFHSChrootEnv allows one to create persistent
environments, which can be constructed, deconstructed and entered by
multiple users at once. A downside is that it requires
root access both for those who create or destroy an environment and for
those who enter it. It can be useful for creating environments for daemons that
one can enter and observe.
buildFHSUserEnv uses the Linux namespaces feature to create
temporary lightweight environments which are destroyed after all child
processes exit. It does not require root access, and can be useful for creating
sandboxes and wrapping applications.
Those functions both rely on buildFHSEnv, which creates
an actual directory structure given a list of necessary packages and extra
build commands.
buildFHSChrootEnv and buildFHSUserEnv
both accept the following arguments, which are passed on to
buildFHSEnv:
name
Environment name.
targetPkgs
Packages to be installed for the main host's architecture
(i.e. x86_64 on x86_64 installations).
multiPkgs
Packages to be installed for all architectures supported by
a host (i.e. i686 and x86_64 on x86_64 installations).
extraBuildCommands
Additional commands to be executed for finalizing the
directory structure.
extraBuildCommandsMulti
Like extraBuildCommands, but
executed only on multilib architectures.
Additionally, buildFHSUserEnv accepts a
runScript parameter, which is a command to be
executed inside the sandbox and passed all the command line arguments. It
defaults to bash.
It also uses the CHROOTENV_EXTRA_BINDS environment variable
for binding extra outside directories into the sandbox. The format of
the variable is /mnt=test-mnt:/data, where
/mnt would be mounted as /test-mnt
and /data would be mounted as /data.
The extraBindMounts array argument to the
buildFHSUserEnv function is prepended to this variable.
If a mount point is defined several times, later entries take priority: in the case of
/data=data1:/data=data2 the actual bind path would be
/data2.
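A minimal sketch of the extraBindMounts argument, assuming its entries use the same /outside=inside format as the environment variable (the directory names and environment name are illustrative):
(pkgs.buildFHSUserEnv {
  name = "env-with-binds";
  extraBindMounts = [ "/mnt=test-mnt" "/data" ];
}).env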
One can create a simple environment using a shell.nix
like this:
{ pkgs ? import <nixpkgs> {} }:
(pkgs.buildFHSUserEnv {
name = "simple-x11-env";
targetPkgs = pkgs: (with pkgs;
[ udev
alsaLib
]) ++ (with pkgs.xorg;
[ libX11
libXcursor
libXrandr
]);
multiPkgs = pkgs: (with pkgs;
[ udev
alsaLib
]);
runScript = "bash";
}).env
Running nix-shell on it would then drop you into a shell with
these libraries and binaries available. You can use this to run
closed-source applications which expect an FHS structure without hassle:
simply change runScript to the application path,
e.g. ./bin/start.sh -- relative paths are supported.
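For instance, a sketch of an environment that launches a bundled third-party application directly instead of an interactive shell (the environment name, dependency list and start script path are illustrative):
(pkgs.buildFHSUserEnv {
  name = "vendor-app-env";
  targetPkgs = pkgs: [ pkgs.zlib ];
  runScript = "./bin/start.sh";
}).env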
pkgs.dockerTools
pkgs.dockerTools is a set of functions for creating and
manipulating Docker images according to the
Docker Image Specification v1.0.0.
Docker itself is not used to perform any of the operations done by these
functions.
The dockerTools API is unstable and may be subject to
backwards-incompatible changes in the future.
buildImage
This function is analogous to the docker build command,
in that it can be used to build a Docker-compatible repository tarball containing
a single image with one or multiple layers. As such, the result
is suitable for being loaded in Docker with docker load.
The parameters of buildImage with example values are
described below:
Docker build
buildImage {
name = "redis";
tag = "latest";
fromImage = someBaseImage;
fromImageName = null;
fromImageTag = "latest";
contents = pkgs.redis;
runAsRoot = ''
#!${stdenv.shell}
mkdir -p /data
'';
config = {
Cmd = [ "/bin/redis-server" ];
WorkingDir = "/data";
Volumes = {
"/data" = {};
};
};
}
The above example will build a Docker image redis/latest
from the given base image. Loading and running this image in Docker results in
redis-server being started automatically.
name specifies the name of the resulting image.
This is the only required argument for buildImage.
tag specifies the tag of the resulting image.
By default it's latest.
fromImage is the repository tarball containing the base image.
It must be a valid Docker image, such as exported by docker save.
By default it's null, which can be seen as equivalent
to FROM scratch of a Dockerfile.
fromImageName can be used to further specify
the base image within the repository, in case it contains multiple images.
By default it's null, in which case
buildImage will pick the first image available
in the repository.
fromImageTag can be used to further specify the tag
of the base image within the repository, in case an image contains multiple tags.
By default it's null, in which case
buildImage will pick the first tag available for the base image.
contents is a derivation that will be copied into the new
layer of the resulting image. This is similar to
ADD contents/ / in a Dockerfile.
By default it's null.
runAsRoot is a bash script that will run as root
in an environment that overlays the existing layers of the base image with
the new resulting layer, including the previously copied
contents derivation.
This is similar to
RUN ... in a Dockerfile.
Using this parameter requires the kvm
device to be available.
config is used to specify the configuration of the
containers that will be started off the built image in Docker.
The available options are listed in the
Docker Image Specification v1.0.0
.
After the new layer has been created, its closure
(to which contents, config and
runAsRoot contribute) will be copied into the layer itself.
Only new dependencies that are not already in the existing layers will be copied.
At the end of the process, only a single new layer will be produced and
added to the resulting image.
The resulting repository will only list the single image
image/tag. In the case of the example above,
it would be redis/latest.
It is possible to inspect the arguments with which an image was built
using its buildArgs attribute.
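For instance, the following sketch builds a minimal image and then reads one of its build arguments back (the image definition is illustrative):
let
  pkgs = import <nixpkgs> {};
  redisImage = pkgs.dockerTools.buildImage {
    name = "redis";
    contents = pkgs.redis;
  };
in redisImage.buildArgs.name   # evaluates to "redis"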
pullImage
This function is analogous to the docker pull command,
in that it can be used to fetch a Docker image from a Docker registry.
Currently only registry v1 is supported.
By default Docker Hub
is used to pull images.
Its parameters are described in the example below:
Docker pull
pullImage {
imageName = "debian";
imageTag = "jessie";
imageId = null;
sha256 = "1bhw5hkz6chrnrih0ymjbmn69hyfriza2lr550xyvpdrnbzr4gk2";
indexUrl = "https://index.docker.io";
registryVersion = "v1";
}
imageName specifies the name of the image to be downloaded,
which can also include the registry namespace (e.g. library/debian).
This argument is required.
imageTag specifies the tag of the image to be downloaded.
By default it's latest.
imageId, if specified, causes that exact image to be fetched, instead
of the one given by imageName and imageTag. However, the resulting repository
will still be named imageName/imageTag.
By default it's null.
sha256 is the checksum of the whole fetched image.
This argument is required.
The checksum is computed on the unpacked directory, not on the final tarball.
In the above example the default values are shown for the variables
indexUrl and registryVersion.
Hence by default the Docker.io registry is used to pull the images.
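A pulled image can in turn serve as the base of a new image, as in the following sketch (the sha256 is a placeholder that would have to be replaced with the hash Nix reports on first build; the resulting image name is illustrative):
let
  pkgs = import <nixpkgs> {};
  debianBase = pkgs.dockerTools.pullImage {
    imageName = "debian";
    imageTag = "jessie";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
in pkgs.dockerTools.buildImage {
  name = "my-debian-image";
  fromImage = debianBase;
  contents = pkgs.hello;
}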
exportImage
This function is analogous to the docker export command,
in that it can be used to flatten a Docker image that contains multiple layers.
It is in fact the result of the merge of all the layers of the image.
As such, the result is suitable for being imported in Docker
with docker import.
Using this function requires the kvm
device to be available.
The parameters of exportImage are the following:
Docker export
exportImage {
fromImage = someLayeredImage;
fromImageName = null;
fromImageTag = null;
name = someLayeredImage.name;
}
The parameters relative to the base image have the same meaning as
described in buildImage above, except that
fromImage is the only required argument in this case.
The name argument is the name of the derivation output,
which defaults to fromImage.name.
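For example, a sketch that flattens the redis image built earlier (redisImage is assumed to be the result of the buildImage example above; the output name is illustrative):
flatRedis = pkgs.dockerTools.exportImage {
  fromImage = redisImage;
  name = "redis-flat";
};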
shadowSetup
This constant string is a helper for setting up the base files for managing
users and groups, only if such files don't exist already.
It is suitable for use in a
runAsRoot script, as
in the example below:
Shadow base files
buildImage {
name = "shadow-basic";
runAsRoot = ''
#!${stdenv.shell}
${shadowSetup}
groupadd -r redis
useradd -r -g redis redis
mkdir /data
chown redis:redis /data
'';
}
Creating base files like /etc/passwd or
/etc/login.defs is necessary for shadow-utils to
manipulate users and groups.