Merge remote-tracking branch 'upstream/master' into gcc-6
This commit is contained in: commit 15f6dcb668
@ -26,6 +26,4 @@ env:
|
|||||||
- GITHUB_TOKEN=5edaaf1017f691ed34e7f80878f8f5fbd071603f
|
- GITHUB_TOKEN=5edaaf1017f691ed34e7f80878f8f5fbd071603f
|
||||||
|
|
||||||
notifications:
|
notifications:
|
||||||
email:
|
email: false
|
||||||
on_success: never
|
|
||||||
on_failure: change
|
|
||||||
|
@ -227,7 +227,7 @@ packages via <literal>packageOverrides</literal></title>
|
|||||||
|
|
||||||
<para>You can define a function called
|
<para>You can define a function called
|
||||||
<varname>packageOverrides</varname> in your local
|
<varname>packageOverrides</varname> in your local
|
||||||
<filename>~/.config/nixpkgs/config.nix</filename> to overide nix packages. It
|
<filename>~/.config/nixpkgs/config.nix</filename> to override nix packages. It
|
||||||
must be a function that takes pkgs as an argument and returns a modified
|
must be a function that takes pkgs as an argument and returns a modified
|
||||||
set of packages.
|
set of packages.
|
||||||
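For reference, the paragraph above describes a pattern like the following minimal `~/.config/nixpkgs/config.nix` sketch; the `hello` override and the local source path are only assumed examples:

```nix
{
  packageOverrides = pkgs: {
    # Replace the stock hello with one built from an assumed local source tree.
    hello = pkgs.hello.overrideAttrs (oldAttrs: {
      src = ./my-hello-src;  # hypothetical path, for illustration only
    });
  };
}
```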
|
|
||||||
|
@ -70,7 +70,7 @@
|
|||||||
|
|
||||||
<para>
|
<para>
|
||||||
In the above example, the <varname>separateDebugInfo</varname> attribute is
|
In the above example, the <varname>separateDebugInfo</varname> attribute is
|
||||||
overriden to be true, thus building debug info for
|
overridden to be true, thus building debug info for
|
||||||
<varname>helloWithDebug</varname>, while all other attributes will be
|
<varname>helloWithDebug</varname>, while all other attributes will be
|
||||||
retained from the original <varname>hello</varname> package.
|
retained from the original <varname>hello</varname> package.
|
||||||
</para>
|
</para>
|
||||||
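The "above example" this paragraph refers to lies outside the hunk; it presumably resembles the following `overrideAttrs` sketch:

```nix
helloWithDebug = pkgs.hello.overrideAttrs (oldAttrs: {
  # Only this attribute changes; everything else comes from the original hello.
  separateDebugInfo = true;
});
```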
|
@ -923,6 +923,28 @@ If you need to change a package's attribute(s) from `configuration.nix` you coul
|
|||||||
|
|
||||||
If you are using the `bepasty-server` package somewhere, for example in `systemPackages` or indirectly from `services.bepasty`, then a `nixos-rebuild switch` will rebuild the system but with the `bepasty-server` package using a different `src` attribute. This way one can modify `python` based software/libraries easily. Using `self` and `super` one can also alter dependencies (`buildInputs`) between the old state (`self`) and new state (`super`).
|
If you are using the `bepasty-server` package somewhere, for example in `systemPackages` or indirectly from `services.bepasty`, then a `nixos-rebuild switch` will rebuild the system but with the `bepasty-server` package using a different `src` attribute. This way one can modify `python` based software/libraries easily. Using `self` and `super` one can also alter dependencies (`buildInputs`) between the old state (`self`) and new state (`super`).
|
||||||
|
|
||||||
|
### How to override a Python package using overlays?
|
||||||
|
|
||||||
|
To alter a python package using overlays, you would use the following approach:
|
||||||
|
|
||||||
|
```nix
|
||||||
|
self: super:
|
||||||
|
rec {
|
||||||
|
python = super.python.override {
|
||||||
|
packageOverrides = python-self: python-super: {
|
||||||
|
bepasty-server = python-super.bepasty-server.overrideAttrs ( oldAttrs: {
|
||||||
|
src = self.pkgs.fetchgit {
|
||||||
|
url = "https://github.com/bepasty/bepasty-server";
|
||||||
|
sha256 = "9ziqshmsf0rjvdhhca55sm0x8jz76fsf2q4rwh4m6lpcf8wr0nps";
|
||||||
|
rev = "e2516e8cf4f2afb5185337073607eb9e84a61d2d";
|
||||||
|
};
|
||||||
|
});
|
||||||
|
};
|
||||||
|
};
|
||||||
|
pythonPackages = python.pkgs;
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
## Contributing
|
## Contributing
|
||||||
|
|
||||||
### Contributing guidelines
|
### Contributing guidelines
|
||||||
|
@ -2,31 +2,55 @@
|
|||||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||||
xml:id="sec-language-qt">
|
xml:id="sec-language-qt">
|
||||||
|
|
||||||
<title>Qt and KDE</title>
|
<title>Qt</title>
|
||||||
|
|
||||||
<para>Qt is a comprehensive desktop and mobile application development toolkit for C++. Legacy support is available for Qt 3 and Qt 4, but all current development uses Qt 5. The Qt 5 packages in Nixpkgs are updated frequently to take advantage of new features, but older versions are typically retained to support packages that may not be compatible with the latest version. When packaging applications and libraries for Nixpkgs, it is important to ensure that compatible versions of Qt 5 are used throughout; this consideration motivates the tools described below.</para>
|
<para>
|
||||||
|
Qt is a comprehensive desktop and mobile application development toolkit for C++.
|
||||||
|
Legacy support is available for Qt 3 and Qt 4, but all current development uses Qt 5.
|
||||||
|
The Qt 5 packages in Nixpkgs are updated frequently to take advantage of new features,
|
||||||
|
but older versions are typically retained until their support window ends.
|
||||||
|
The most important consideration in packaging Qt-based software is ensuring that each package and all its dependencies use the same version of Qt 5;
|
||||||
|
this consideration motivates most of the tools described below.
|
||||||
|
</para>
|
||||||
|
|
||||||
<section xml:id="ssec-qt-libraries"><title>Libraries</title>
|
<section xml:id="ssec-qt-libraries"><title>Packaging Libraries for Nixpkgs</title>
|
||||||
|
|
||||||
<para>Libraries that depend on Qt 5 should be built with each available version to avoid linking a dependent package against incompatible versions of Qt 5. (Although Qt 5 maintains backward ABI compatibility, linking against multiple versions at once is generally not possible; at best it will lead to runtime faults.) Packages that provide libraries should be added to the top-level function <varname>mkLibsForQt5</varname>, which is used to build a set of libraries for every Qt 5 version. The <varname>callPackage</varname> provided in this scope will ensure that only one Qt version will be used throughout the dependency tree. Dependencies should be imported unqualified, i.e. <literal>qtbase</literal> not <literal>qt5.qtbase</literal>, so that <varname>callPackage</varname> can do its work. <emphasis>Do not</emphasis> import a package set such as <literal>qt5</literal> or <literal>libsForQt5</literal> into your package; although it may work fine in the moment, it could well break at the next Qt update.</para>
|
<para>
|
||||||
|
Whenever possible, libraries that use Qt 5 should be built with each available version.
|
||||||
|
Packages providing libraries should be added to the top-level function <varname>mkLibsForQt5</varname>,
|
||||||
|
which is used to build a set of libraries for every Qt 5 version.
|
||||||
|
A special <varname>callPackage</varname> function is used in this scope to ensure that the entire dependency tree uses the same Qt 5 version.
|
||||||
|
Import dependencies unqualified, i.e., <literal>qtbase</literal> not <literal>qt5.qtbase</literal>.
|
||||||
|
<emphasis>Do not</emphasis> import a package set such as <literal>qt5</literal> or <literal>libsForQt5</literal>.
|
||||||
|
</para>
|
||||||
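As a sketch of the convention described above, a Qt 5 library expression takes `qtbase` (and, assumed here, a `qmake` hook) unqualified from the `mkLibsForQt5` callPackage scope; the library name, URL, and hash are placeholders:

```nix
{ stdenv, fetchurl, qtbase, qmake }:

stdenv.mkDerivation rec {
  name = "libexample-qt-1.0";  # hypothetical library

  src = fetchurl {
    url = "https://example.org/releases/${name}.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
  };

  # qtbase and qmake come unqualified from the enclosing callPackage scope,
  # so each Qt 5 version built by mkLibsForQt5 gets a consistent build.
  nativeBuildInputs = [ qmake ];
  buildInputs = [ qtbase ];
}
```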
|
|
||||||
<para>If a library does not support a particular version of Qt 5, it is best to mark it as broken by setting its <literal>meta.broken</literal> attribute. A package may be marked broken for certain versions by testing the <literal>qtbase.version</literal> attribute, which will always give the current Qt 5 version.</para>
|
<para>
|
||||||
|
If a library does not support a particular version of Qt 5, it is best to mark it as broken by setting its <literal>meta.broken</literal> attribute.
|
||||||
|
A package may be marked broken for certain versions by testing the <literal>qtbase.version</literal> attribute, which will always give the current Qt 5 version.
|
||||||
|
</para>
|
||||||
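For example, a library assumed to break on newer Qt releases could be marked roughly like this (the version bound is invented for illustration):

```nix
stdenv.mkDerivation {
  # ...
  meta = {
    # Assumed constraint: this library does not build against Qt >= 5.9;
    # qtbase.version is the Qt version of the current scope.
    broken = stdenv.lib.versionAtLeast qtbase.version "5.9";
  };
}
```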
|
|
||||||
</section>
|
</section>
|
||||||
|
|
||||||
<section xml:id="ssec-qt-applications"><title>Applications</title>
|
<section xml:id="ssec-qt-applications"><title>Packaging Applications for Nixpkgs</title>
|
||||||
|
|
||||||
<para>Applications generally do not need to be built with every Qt version because they do not provide any libraries for dependent packages to link against. The primary consideration is merely ensuring that the application itself and its dependencies are linked against only one version of Qt. To call your application expression, use <literal>libsForQt5.callPackage</literal> instead of <literal>callPackage</literal>. Dependencies should be imported unqualified, i.e. <literal>qtbase</literal> not <literal>qt5.qtbase</literal>. <emphasis>Do not</emphasis> import a package set such as <literal>qt5</literal> or <literal>libsForQt5</literal> into your package; although it may work fine in the moment, it could well break at the next Qt update.</para>
|
<para>
|
||||||
|
Call your application expression using <literal>libsForQt5.callPackage</literal> instead of <literal>callPackage</literal>.
|
||||||
|
Import dependencies unqualified, i.e., <literal>qtbase</literal> not <literal>qt5.qtbase</literal>.
|
||||||
|
<emphasis>Do not</emphasis> import a package set such as <literal>qt5</literal> or <literal>libsForQt5</literal>.
|
||||||
|
</para>
|
||||||
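Concretely, the top-level entry for such an application might look like this sketch; the package name and path are hypothetical:

```nix
# In all-packages.nix (or an overlay); the application itself is made up:
my-qt-app = libsForQt5.callPackage ../applications/misc/my-qt-app { };

# ../applications/misc/my-qt-app/default.nix then asks for its Qt inputs
# unqualified, e.g. { stdenv, fetchFromGitHub, qtbase, qtdeclarative, qmake }: ...
```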
|
|
||||||
<para>It is generally best to build an application package against the <varname>libsForQt5</varname> library set. In case a package does not build with the latest Qt version, it is possible to pick a set pinned to a particular version, e.g. <varname>libsForQt55</varname> for Qt 5.5, if that is the latest version the package supports.</para>
|
<para>
|
||||||
|
Qt 5 maintains strict backward compatibility, so it is generally best to build an application package against the latest version using the <varname>libsForQt5</varname> library set.
|
||||||
|
In case a package does not build with the latest Qt version, it is possible to pick a set pinned to a particular version, e.g. <varname>libsForQt55</varname> for Qt 5.5, if that is the latest version the package supports.
|
||||||
|
If a package must be pinned to an older Qt version, be sure to file a bug upstream;
|
||||||
|
because Qt is strictly backwards-compatible, any incompatibility is by definition a bug in the application.
|
||||||
|
</para>
|
||||||
|
|
||||||
<para>Qt-based applications require that several paths be set at runtime. This is accomplished by wrapping the provided executables in a package with <literal>wrapQtProgram</literal> or <literal>makeQtWrapper</literal> during the <literal>postFixup</literal> phase. To use the wrapper generators, add <literal>makeQtWrapper</literal> to <literal>nativeBuildInputs</literal>. The wrapper generators support the same options as <literal>wrapProgram</literal> and <literal>makeWrapper</literal> respectively. It is usually only necessary to generate wrappers for programs intended to be invoked by the user.</para>
|
<para>
|
||||||
|
When testing applications in Nixpkgs, it is a common practice to build the package with <literal>nix-build</literal> and run it using the created symbolic link.
|
||||||
</section>
|
This will not work with Qt applications, however, because they have many hard runtime requirements that can only be guaranteed if the package is actually installed.
|
||||||
|
To test a Qt application, install it with <literal>nix-env</literal> or run it inside <literal>nix-shell</literal>.
|
||||||
<section xml:id="ssec-qt-kde"><title>KDE</title>
|
</para>
|
||||||
|
|
||||||
<para>The KDE Frameworks are a set of libraries for Qt 5 which form the basis of the Plasma desktop environment and the KDE Applications suite. Packaging a Frameworks-based library does not require any steps beyond those described above for general Qt-based libraries. Frameworks-based applications should not use <literal>makeQtWrapper</literal>; instead, use <literal>kdeWrapper</literal> to create the necessary wrappers: <literal>kdeWrapper { unwrapped = <replaceable>expr</replaceable>; targets = <replaceable>exes</replaceable>; }</literal>, where <replaceable>expr</replaceable> is the un-wrapped package expression and <replaceable>exes</replaceable> is a list of strings giving the relative paths to programs in the package which should be wrapped.</para>
|
|
||||||
|
|
||||||
</section>
|
</section>
|
||||||
|
|
||||||
|
@ -78,7 +78,7 @@ self: super:
|
|||||||
<para>The first argument, usually named <varname>self</varname>, corresponds to the final package
|
<para>The first argument, usually named <varname>self</varname>, corresponds to the final package
|
||||||
set. You should use this set for the dependencies of all packages specified in your
|
set. You should use this set for the dependencies of all packages specified in your
|
||||||
overlay. For example, all the dependencies of <varname>rr</varname> in the example above come
|
overlay. For example, all the dependencies of <varname>rr</varname> in the example above come
|
||||||
from <varname>self</varname>, as well as the overriden dependencies used in the
|
from <varname>self</varname>, as well as the overridden dependencies used in the
|
||||||
<varname>boost</varname> override.</para>
|
<varname>boost</varname> override.</para>
|
||||||
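The rr and boost example the text refers to is outside this hunk; it presumably has roughly the following shape:

```nix
self: super:

{
  boost = super.boost.override {
    python = self.python3;            # an overridden dependency, taken from self
  };

  rr = super.callPackage ./pkgs/rr {  # hypothetical local path
    stdenv = self.stdenv_32bit;       # dependencies also come from self
  };
}
```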
|
|
||||||
<para>The second argument, usually named <varname>super</varname>,
|
<para>The second argument, usually named <varname>super</varname>,
|
||||||
|
@ -516,4 +516,140 @@ to your configuration, rebuild, and run the game with
|
|||||||
|
|
||||||
</section>
|
</section>
|
||||||
|
|
||||||
|
<section xml:id="sec-emacs">
|
||||||
|
|
||||||
|
<title>Emacs</title>
|
||||||
|
|
||||||
|
<section xml:id="sec-emacs-config">
|
||||||
|
|
||||||
|
<title>Configuring Emacs</title>
|
||||||
|
|
||||||
|
<para>
|
||||||
|
The Emacs package comes with some extra helpers to make it easier to
|
||||||
|
configure. <varname>emacsWithPackages</varname> allows you to manage
|
||||||
|
packages from ELPA. This means that you will not have to install
|
||||||
|
those packages from within Emacs. For instance, if you wanted to use
|
||||||
|
<literal>company</literal>, <literal>counsel</literal>,
|
||||||
|
<literal>flycheck</literal>, <literal>ivy</literal>,
|
||||||
|
<literal>magit</literal>, <literal>projectile</literal>, and
|
||||||
|
<literal>use-package</literal> you could use this as a
|
||||||
|
<filename>~/.config/nixpkgs/config.nix</filename> override:
|
||||||
|
</para>
|
||||||
|
|
||||||
|
<screen>
|
||||||
|
{
|
||||||
|
packageOverrides = pkgs: with pkgs; {
|
||||||
|
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
|
||||||
|
company
|
||||||
|
counsel
|
||||||
|
flycheck
|
||||||
|
ivy
|
||||||
|
magit
|
||||||
|
projectile
|
||||||
|
use-package
|
||||||
|
]));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
</screen>
|
||||||
|
|
||||||
|
<para>
|
||||||
|
You can install it like any other packages via <command>nix-env -iA
|
||||||
|
myEmacs</command>. However, this will only install those packages.
|
||||||
|
It will not <literal>configure</literal> them for us. To do this, we
|
||||||
|
need to provide a configuration file. Luckily, it is possible to do
|
||||||
|
this from within Nix! By modifying the above example, we can make
|
||||||
|
Emacs load a custom config file. The key is to create a package that
|
||||||
|
provides a <filename>default.el</filename> file in
|
||||||
|
<filename>/share/emacs/site-start/</filename>. Emacs knows to load
|
||||||
|
this file automatically when it starts.
|
||||||
|
</para>
|
||||||
|
|
||||||
|
<screen>
|
||||||
|
{
|
||||||
|
packageOverrides = pkgs: with pkgs; rec {
|
||||||
|
myEmacsConfig = writeText "default.el" ''
|
||||||
|
;; initialize package
|
||||||
|
|
||||||
|
(require 'package)
|
||||||
|
(package-initialize 'noactivate)
|
||||||
|
(eval-when-compile
|
||||||
|
(require 'use-package))
|
||||||
|
|
||||||
|
;; load some packages
|
||||||
|
|
||||||
|
(use-package company
|
||||||
|
:bind ("<C-tab>" . company-complete)
|
||||||
|
:diminish company-mode
|
||||||
|
:commands (company-mode global-company-mode)
|
||||||
|
:defer 1
|
||||||
|
:config
|
||||||
|
(global-company-mode))
|
||||||
|
|
||||||
|
(use-package counsel
|
||||||
|
:commands (counsel-descbinds)
|
||||||
|
:bind (([remap execute-extended-command] . counsel-M-x)
|
||||||
|
("C-x C-f" . counsel-find-file)
|
||||||
|
("C-c g" . counsel-git)
|
||||||
|
("C-c j" . counsel-git-grep)
|
||||||
|
("C-c k" . counsel-ag)
|
||||||
|
("C-x l" . counsel-locate)
|
||||||
|
("M-y" . counsel-yank-pop)))
|
||||||
|
|
||||||
|
(use-package flycheck
|
||||||
|
:defer 2
|
||||||
|
:config (global-flycheck-mode))
|
||||||
|
|
||||||
|
(use-package ivy
|
||||||
|
:defer 1
|
||||||
|
:bind (("C-c C-r" . ivy-resume)
|
||||||
|
("C-x C-b" . ivy-switch-buffer)
|
||||||
|
:map ivy-minibuffer-map
|
||||||
|
("C-j" . ivy-call))
|
||||||
|
:diminish ivy-mode
|
||||||
|
:commands ivy-mode
|
||||||
|
:config
|
||||||
|
(ivy-mode 1))
|
||||||
|
|
||||||
|
(use-package magit
|
||||||
|
:defer
|
||||||
|
:if (executable-find "git")
|
||||||
|
:bind (("C-x g" . magit-status)
|
||||||
|
("C-x G" . magit-dispatch-popup))
|
||||||
|
:init
|
||||||
|
(setq magit-completing-read-function 'ivy-completing-read))
|
||||||
|
|
||||||
|
(use-package projectile
|
||||||
|
:commands projectile-mode
|
||||||
|
:bind-keymap ("C-c p" . projectile-command-map)
|
||||||
|
:defer 5
|
||||||
|
:config
|
||||||
|
(projectile-global-mode))
|
||||||
|
'';
|
||||||
|
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
|
||||||
|
(runCommand "default.el" {} ''
|
||||||
|
mkdir -p $out/share/emacs/site-lisp
|
||||||
|
cp ${myEmacsConfig} $out/share/emacs/site-lisp/default.el
|
||||||
|
'')
|
||||||
|
company
|
||||||
|
counsel
|
||||||
|
flycheck
|
||||||
|
ivy
|
||||||
|
magit
|
||||||
|
projectile
|
||||||
|
use-package
|
||||||
|
]));
|
||||||
|
};
|
||||||
|
}
|
||||||
|
</screen>
|
||||||
|
|
||||||
|
<para>
|
||||||
|
This provides a fairly full Emacs start file. It will be loaded in
|
||||||
|
addition to the user's personal config. You can always disable it by
|
||||||
|
passing <command>-q</command> to the Emacs command.
|
||||||
|
</para>
|
||||||
|
|
||||||
|
</section>
|
||||||
|
|
||||||
|
</section>
|
||||||
|
|
||||||
</chapter>
|
</chapter>
|
||||||
|
@ -18,7 +18,7 @@
|
|||||||
<para>The high change rate of nixpkgs makes any pull request that is open for
|
<para>The high change rate of nixpkgs makes any pull request that is open for
|
||||||
long enough subject to conflicts that will require extra work from the
|
long enough subject to conflicts that will require extra work from the
|
||||||
submitter or the merger. Reviewing pull requests in a timely manner and being
|
submitter or the merger. Reviewing pull requests in a timely manner and being
|
||||||
responsive to the comments is the key to avoid these. Github provides sort
|
responsive to the comments is the key to avoid these. GitHub provides sort
|
||||||
filters that can be used to see the <link
|
filters that can be used to see the <link
|
||||||
xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-desc">most
|
xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-desc">most
|
||||||
recently</link> and the <link
|
recently</link> and the <link
|
||||||
|
@ -318,7 +318,13 @@ containing some shell commands to be executed, or by redefining the
|
|||||||
shell function
|
shell function
|
||||||
<varname><replaceable>name</replaceable>Phase</varname>. The former
|
<varname><replaceable>name</replaceable>Phase</varname>. The former
|
||||||
is convenient to override a phase from the derivation, while the
|
is convenient to override a phase from the derivation, while the
|
||||||
latter is convenient from a build script.</para>
|
latter is convenient from a build script.
|
||||||
|
|
||||||
|
However, typically one only wants to <emphasis>add</emphasis> some
|
||||||
|
commands to a phase, e.g. by defining <literal>postInstall</literal>
|
||||||
|
or <literal>preFixup</literal>, as skipping some of the default actions
|
||||||
|
may have unexpected consequences.
|
||||||
|
</para>
|
||||||
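A minimal sketch of adding to a phase rather than replacing it, assuming an ordinary stdenv package:

```nix
stdenv.mkDerivation {
  name = "example-1.0";  # hypothetical package
  # ...

  # Runs after the generic installPhase; the default install steps still happen.
  postInstall = ''
    install -Dm644 extras/example.desktop $out/share/applications/example.desktop
  '';
}
```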
|
|
||||||
|
|
||||||
<section xml:id="ssec-controlling-phases"><title>Controlling
|
<section xml:id="ssec-controlling-phases"><title>Controlling
|
||||||
@ -1156,7 +1162,7 @@ makeWrapper $out/bin/foo $wrapperfile --prefix PATH : ${lib.makeBinPath [ hello
|
|||||||
<term><option>--replace</option>
|
<term><option>--replace</option>
|
||||||
<replaceable>s1</replaceable>
|
<replaceable>s1</replaceable>
|
||||||
<replaceable>s2</replaceable></term>
|
<replaceable>s2</replaceable></term>
|
||||||
<listitem><para>Replace every occurence of the string
|
<listitem><para>Replace every occurrence of the string
|
||||||
<replaceable>s1</replaceable> by
|
<replaceable>s1</replaceable> by
|
||||||
<replaceable>s2</replaceable>.</para></listitem>
|
<replaceable>s2</replaceable>.</para></listitem>
|
||||||
</varlistentry>
|
</varlistentry>
|
||||||
@ -1164,7 +1170,7 @@ makeWrapper $out/bin/foo $wrapperfile --prefix PATH : ${lib.makeBinPath [ hello
|
|||||||
<varlistentry>
|
<varlistentry>
|
||||||
<term><option>--subst-var</option>
|
<term><option>--subst-var</option>
|
||||||
<replaceable>varName</replaceable></term>
|
<replaceable>varName</replaceable></term>
|
||||||
<listitem><para>Replace every occurence of
|
<listitem><para>Replace every occurrence of
|
||||||
<literal>@<replaceable>varName</replaceable>@</literal> by
|
<literal>@<replaceable>varName</replaceable>@</literal> by
|
||||||
the contents of the environment variable
|
the contents of the environment variable
|
||||||
<replaceable>varName</replaceable>. This is useful for
|
<replaceable>varName</replaceable>. This is useful for
|
||||||
@ -1177,7 +1183,7 @@ makeWrapper $out/bin/foo $wrapperfile --prefix PATH : ${lib.makeBinPath [ hello
|
|||||||
<term><option>--subst-var-by</option>
|
<term><option>--subst-var-by</option>
|
||||||
<replaceable>varName</replaceable>
|
<replaceable>varName</replaceable>
|
||||||
<replaceable>s</replaceable></term>
|
<replaceable>s</replaceable></term>
|
||||||
<listitem><para>Replace every occurence of
|
<listitem><para>Replace every occurrence of
|
||||||
<literal>@<replaceable>varName</replaceable>@</literal> by
|
<literal>@<replaceable>varName</replaceable>@</literal> by
|
||||||
the string <replaceable>s</replaceable>.</para></listitem>
|
the string <replaceable>s</replaceable>.</para></listitem>
|
||||||
</varlistentry>
|
</varlistentry>
|
||||||
@ -1225,7 +1231,7 @@ substitute ./foo.in ./foo.out \
|
|||||||
<term><function>substituteAll</function>
|
<term><function>substituteAll</function>
|
||||||
<replaceable>infile</replaceable>
|
<replaceable>infile</replaceable>
|
||||||
<replaceable>outfile</replaceable></term>
|
<replaceable>outfile</replaceable></term>
|
||||||
<listitem><para>Replaces every occurence of
|
<listitem><para>Replaces every occurrence of
|
||||||
<literal>@<replaceable>varName</replaceable>@</literal>, where
|
<literal>@<replaceable>varName</replaceable>@</literal>, where
|
||||||
<replaceable>varName</replaceable> is any environment variable, in
|
<replaceable>varName</replaceable> is any environment variable, in
|
||||||
<replaceable>infile</replaceable>, writing the result to
|
<replaceable>infile</replaceable>, writing the result to
|
||||||
@ -1528,7 +1534,7 @@ bin/blib.a(bios_console.o): In function `bios_handle_cup':
|
|||||||
depends on such a format string, it will need to be worked around.
|
depends on such a format string, it will need to be worked around.
|
||||||
</para>
|
</para>
|
||||||
|
|
||||||
<para>Addtionally, some warnings are enabled which might trigger build
|
<para>Additionally, some warnings are enabled which might trigger build
|
||||||
failures if compiler warnings are treated as errors in the package build.
|
failures if compiler warnings are treated as errors in the package build.
|
||||||
In this case, set <option>NIX_CFLAGS_COMPILE</option> to
|
In this case, set <option>NIX_CFLAGS_COMPILE</option> to
|
||||||
<option>-Wno-error=warning-type</option>.</para>
|
<option>-Wno-error=warning-type</option>.</para>
|
||||||
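In a package expression that typically amounts to something like the following; the specific warning class is only an example:

```nix
stdenv.mkDerivation {
  # ...
  # Keep a warning class from being promoted to an error by the hardened defaults.
  NIX_CFLAGS_COMPILE = "-Wno-error=format-security";
}
```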
@ -1558,7 +1564,7 @@ fcntl2.h:50:4: error: call to '__open_missing_mode' declared with attribute erro
|
|||||||
<term><varname>pic</varname></term>
|
<term><varname>pic</varname></term>
|
||||||
<listitem>
|
<listitem>
|
||||||
<para>Adds the <option>-fPIC</option> compiler options. This options adds
|
<para>Adds the <option>-fPIC</option> compiler options. This options adds
|
||||||
support for position independant code in shared libraries and thus making
|
support for position independent code in shared libraries and thus making
|
||||||
ASLR possible.</para>
|
ASLR possible.</para>
|
||||||
<para>Most notably, the Linux kernel, kernel modules and other code
|
<para>Most notably, the Linux kernel, kernel modules and other code
|
||||||
not running in an operating system environment like boot loaders won't
|
not running in an operating system environment like boot loaders won't
|
||||||
|
@ -45,6 +45,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
|
|||||||
fullName = "Apple Public Source License 2.0";
|
fullName = "Apple Public Source License 2.0";
|
||||||
};
|
};
|
||||||
|
|
||||||
|
arphicpl = {
|
||||||
|
fullName = "Arphic Public License";
|
||||||
|
url = https://www.freedesktop.org/wiki/Arphic_Public_License/;
|
||||||
|
};
|
||||||
|
|
||||||
artistic1 = spdx {
|
artistic1 = spdx {
|
||||||
spdxId = "Artistic-1.0";
|
spdxId = "Artistic-1.0";
|
||||||
fullName = "Artistic License 1.0";
|
fullName = "Artistic License 1.0";
|
||||||
|
@ -99,6 +99,7 @@
|
|||||||
chris-martin = "Chris Martin <ch.martin@gmail.com>";
|
chris-martin = "Chris Martin <ch.martin@gmail.com>";
|
||||||
chrisjefferson = "Christopher Jefferson <chris@bubblescope.net>";
|
chrisjefferson = "Christopher Jefferson <chris@bubblescope.net>";
|
||||||
christopherpoole = "Christopher Mark Poole <mail@christopherpoole.net>";
|
christopherpoole = "Christopher Mark Poole <mail@christopherpoole.net>";
|
||||||
|
ciil = "Simon Lackerbauer <simon@lackerbauer.com>";
|
||||||
ckampka = "Christian Kampka <christian@kampka.net>";
|
ckampka = "Christian Kampka <christian@kampka.net>";
|
||||||
cko = "Christine Koppelt <christine.koppelt@gmail.com>";
|
cko = "Christine Koppelt <christine.koppelt@gmail.com>";
|
||||||
cleverca22 = "Michael Bishop <cleverca22@gmail.com>";
|
cleverca22 = "Michael Bishop <cleverca22@gmail.com>";
|
||||||
@ -178,6 +179,7 @@
|
|||||||
exlevan = "Alexey Levan <exlevan@gmail.com>";
|
exlevan = "Alexey Levan <exlevan@gmail.com>";
|
||||||
expipiplus1 = "Joe Hermaszewski <nix@monoid.al>";
|
expipiplus1 = "Joe Hermaszewski <nix@monoid.al>";
|
||||||
fadenb = "Tristan Helmich <tristan.helmich+nixos@gmail.com>";
|
fadenb = "Tristan Helmich <tristan.helmich+nixos@gmail.com>";
|
||||||
|
fare = "Francois-Rene Rideau <fahree@gmail.com>";
|
||||||
falsifian = "James Cook <james.cook@utoronto.ca>";
|
falsifian = "James Cook <james.cook@utoronto.ca>";
|
||||||
flosse = "Markus Kohlhase <mail@markus-kohlhase.de>";
|
flosse = "Markus Kohlhase <mail@markus-kohlhase.de>";
|
||||||
fluffynukeit = "Daniel Austin <dan@fluffynukeit.com>";
|
fluffynukeit = "Daniel Austin <dan@fluffynukeit.com>";
|
||||||
@ -268,6 +270,7 @@
|
|||||||
kaiha = "Kai Harries <kai.harries@gmail.com>";
|
kaiha = "Kai Harries <kai.harries@gmail.com>";
|
||||||
kamilchm = "Kamil Chmielewski <kamil.chm@gmail.com>";
|
kamilchm = "Kamil Chmielewski <kamil.chm@gmail.com>";
|
||||||
kampfschlaefer = "Arnold Krille <arnold@arnoldarts.de>";
|
kampfschlaefer = "Arnold Krille <arnold@arnoldarts.de>";
|
||||||
|
kentjames = "James Kent <jameschristopherkent@gmail.com>";
|
||||||
kevincox = "Kevin Cox <kevincox@kevincox.ca>";
|
kevincox = "Kevin Cox <kevincox@kevincox.ca>";
|
||||||
khumba = "Bryan Gardiner <bog@khumba.net>";
|
khumba = "Bryan Gardiner <bog@khumba.net>";
|
||||||
KibaFox = "Kiba Fox <kiba.fox@foxypossibilities.com>";
|
KibaFox = "Kiba Fox <kiba.fox@foxypossibilities.com>";
|
||||||
@ -601,4 +604,5 @@
|
|||||||
zohl = "Al Zohali <zohl@fmap.me>";
|
zohl = "Al Zohali <zohl@fmap.me>";
|
||||||
zoomulator = "Kim Simmons <zoomulator@gmail.com>";
|
zoomulator = "Kim Simmons <zoomulator@gmail.com>";
|
||||||
zraexy = "David Mell <zraexy@gmail.com>";
|
zraexy = "David Mell <zraexy@gmail.com>";
|
||||||
|
zx2c4 = "Jason A. Donenfeld <Jason@zx2c4.com>";
|
||||||
}
|
}
|
||||||
|
@ -17,6 +17,11 @@ rec {
|
|||||||
drv // { meta = (drv.meta or {}) // newAttrs; };
|
drv // { meta = (drv.meta or {}) // newAttrs; };
|
||||||
|
|
||||||
|
|
||||||
|
/* Disable Hydra builds of given derivation.
|
||||||
|
*/
|
||||||
|
dontDistribute = drv: addMetaAttrs { hydraPlatforms = []; } drv;
|
||||||
|
|
||||||
|
|
||||||
/* Change the symbolic name of a package for presentation purposes
|
/* Change the symbolic name of a package for presentation purposes
|
||||||
(i.e., so that nix-env users can tell them apart).
|
(i.e., so that nix-env users can tell them apart).
|
||||||
*/
|
*/
|
||||||
|
@ -90,7 +90,7 @@ runTests {
|
|||||||
testIsStorePath = {
|
testIsStorePath = {
|
||||||
expr =
|
expr =
|
||||||
let goodPath =
|
let goodPath =
|
||||||
"/nix/store/d945ibfx9x185xf04b890y4f9g3cbb63-python-2.7.11";
|
"${builtins.storeDir}/d945ibfx9x185xf04b890y4f9g3cbb63-python-2.7.11";
|
||||||
in {
|
in {
|
||||||
storePath = isStorePath goodPath;
|
storePath = isStorePath goodPath;
|
||||||
storePathAppendix = isStorePath
|
storePathAppendix = isStorePath
|
||||||
|
@ -25,18 +25,33 @@ INDEX = "https://pypi.io/pypi"
|
|||||||
EXTENSIONS = ['tar.gz', 'tar.bz2', 'tar', 'zip', '.whl']
|
EXTENSIONS = ['tar.gz', 'tar.bz2', 'tar', 'zip', '.whl']
|
||||||
"""Permitted file extensions. These are evaluated from left to right and the first occurance is returned."""
|
"""Permitted file extensions. These are evaluated from left to right and the first occurance is returned."""
|
||||||
|
|
||||||
def _get_value(attribute, text):
|
import logging
|
||||||
"""Match attribute in text and return it."""
|
logging.basicConfig(level=logging.INFO)
|
||||||
|
|
||||||
|
|
||||||
|
def _get_values(attribute, text):
|
||||||
|
"""Match attribute in text and return all matches.
|
||||||
|
|
||||||
|
:returns: List of matches.
|
||||||
|
"""
|
||||||
regex = '{}\s+=\s+"(.*)";'.format(attribute)
|
regex = '{}\s+=\s+"(.*)";'.format(attribute)
|
||||||
regex = re.compile(regex)
|
regex = re.compile(regex)
|
||||||
value = regex.findall(text)
|
values = regex.findall(text)
|
||||||
n = len(value)
|
return values
|
||||||
|
|
||||||
|
def _get_unique_value(attribute, text):
|
||||||
|
"""Match attribute in text and return unique match.
|
||||||
|
|
||||||
|
:returns: Single match.
|
||||||
|
"""
|
||||||
|
values = _get_values(attribute, text)
|
||||||
|
n = len(values)
|
||||||
if n > 1:
|
if n > 1:
|
||||||
raise ValueError("Found too many values for {}".format(attribute))
|
raise ValueError("found too many values for {}".format(attribute))
|
||||||
elif n == 1:
|
elif n == 1:
|
||||||
return value[0]
|
return values[0]
|
||||||
else:
|
else:
|
||||||
raise ValueError("No value found for {}".format(attribute))
|
raise ValueError("no value found for {}".format(attribute))
|
||||||
|
|
||||||
def _get_line_and_value(attribute, text):
|
def _get_line_and_value(attribute, text):
|
||||||
"""Match attribute in text. Return the line and the value of the attribute."""
|
"""Match attribute in text. Return the line and the value of the attribute."""
|
||||||
@ -45,11 +60,11 @@ def _get_line_and_value(attribute, text):
|
|||||||
value = regex.findall(text)
|
value = regex.findall(text)
|
||||||
n = len(value)
|
n = len(value)
|
||||||
if n > 1:
|
if n > 1:
|
||||||
raise ValueError("Found too many values for {}".format(attribute))
|
raise ValueError("found too many values for {}".format(attribute))
|
||||||
elif n == 1:
|
elif n == 1:
|
||||||
return value[0]
|
return value[0]
|
||||||
else:
|
else:
|
||||||
raise ValueError("No value found for {}".format(attribute))
|
raise ValueError("no value found for {}".format(attribute))
|
||||||
|
|
||||||
|
|
||||||
def _replace_value(attribute, value, text):
|
def _replace_value(attribute, value, text):
|
||||||
@ -64,175 +79,151 @@ def _fetch_page(url):
|
|||||||
if r.status_code == requests.codes.ok:
|
if r.status_code == requests.codes.ok:
|
||||||
return r.json()
|
return r.json()
|
||||||
else:
|
else:
|
||||||
raise ValueError("Request for {} failed".format(url))
|
raise ValueError("request for {} failed".format(url))
|
||||||
|
|
||||||
def _get_latest_version(package, extension):
|
|
||||||
|
|
||||||
|
|
||||||
|
def _get_latest_version_pypi(package, extension):
|
||||||
|
"""Get latest version and hash from PyPI."""
|
||||||
url = "{}/{}/json".format(INDEX, package)
|
url = "{}/{}/json".format(INDEX, package)
|
||||||
json = _fetch_page(url)
|
json = _fetch_page(url)
|
||||||
|
|
||||||
data = extract_relevant_nix_data(json, extension)[1]
|
version = json['info']['version']
|
||||||
|
for release in json['releases'][version]:
|
||||||
version = data['latest_version']
|
if release['filename'].endswith(extension):
|
||||||
if version in data['versions']:
|
# TODO: In case of wheel we need to do further checks!
|
||||||
sha256 = data['versions'][version]['sha256']
|
sha256 = release['digests']['sha256']
|
||||||
else:
|
|
||||||
sha256 = None # Its possible that no file was uploaded to PyPI
|
|
||||||
|
|
||||||
return version, sha256
|
return version, sha256
|
||||||
|
|
||||||
|
|
||||||
def extract_relevant_nix_data(json, extension):
|
def _get_latest_version_github(package, extension):
|
||||||
"""Extract relevant Nix data from the JSON of a package obtained from PyPI.
|
raise ValueError("updating from GitHub is not yet supported.")
|
||||||
|
|
||||||
:param json: JSON obtained from PyPI
|
|
||||||
|
FETCHERS = {
|
||||||
|
'fetchFromGitHub' : _get_latest_version_github,
|
||||||
|
'fetchPypi' : _get_latest_version_pypi,
|
||||||
|
'fetchurl' : _get_latest_version_pypi,
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
DEFAULT_SETUPTOOLS_EXTENSION = 'tar.gz'
|
||||||
|
|
||||||
|
|
||||||
|
FORMATS = {
|
||||||
|
'setuptools' : DEFAULT_SETUPTOOLS_EXTENSION,
|
||||||
|
'wheel' : 'whl'
|
||||||
|
}
|
||||||
|
|
||||||
|
def _determine_fetcher(text):
|
||||||
|
# Count occurrences of fetchers.
|
||||||
|
nfetchers = sum(text.count('src = {}'.format(fetcher)) for fetcher in FETCHERS.keys())
|
||||||
|
if nfetchers == 0:
|
||||||
|
raise ValueError("no fetcher.")
|
||||||
|
elif nfetchers > 1:
|
||||||
|
raise ValueError("multiple fetchers.")
|
||||||
|
else:
|
||||||
|
# Then we check which fetcher to use.
|
||||||
|
for fetcher in FETCHERS.keys():
|
||||||
|
if 'src = {}'.format(fetcher) in text:
|
||||||
|
return fetcher
|
||||||
|
|
||||||
|
|
||||||
|
def _determine_extension(text, fetcher):
|
||||||
|
"""Determine what extension is used in the expression.
|
||||||
|
|
||||||
|
If we use:
|
||||||
|
- fetchPypi, we check if format is specified.
|
||||||
|
- fetchurl, we determine the extension from the url.
|
||||||
|
- fetchFromGitHub we simply use `.tar.gz`.
|
||||||
"""
|
"""
|
||||||
def _extract_license(json):
|
if fetcher == 'fetchPypi':
|
||||||
"""Extract license from JSON."""
|
try:
|
||||||
return json['info']['license']
|
format = _get_unique_value('format', text)
|
||||||
|
except ValueError as e:
|
||||||
|
format = None # format was not given
|
||||||
|
|
||||||
def _available_versions(json):
|
try:
|
||||||
return json['releases'].keys()
|
extension = _get_unique_value('extension', text)
|
||||||
|
except ValueError as e:
|
||||||
|
extension = None # extension was not given
|
||||||
|
|
||||||
def _extract_latest_version(json):
|
if extension is None:
|
||||||
return json['info']['version']
|
if format is None:
|
||||||
|
format = 'setuptools'
|
||||||
|
extension = FORMATS[format]
|
||||||
|
|
||||||
def _get_src_and_hash(json, version, extensions):
|
elif fetcher == 'fetchurl':
|
||||||
"""Obtain url and hash for a given version and list of allowable extensions."""
|
url = _get_unique_value('url', text)
|
||||||
if not json['releases']:
|
extension = os.path.splitext(url)[1]
|
||||||
msg = "Package {}: No releases available.".format(json['info']['name'])
|
if 'pypi' not in url:
|
||||||
raise ValueError(msg)
|
raise ValueError('url does not point to PyPI.')
|
||||||
else:
|
|
||||||
# We use ['releases'] and not ['urls'] because we want to have the possibility for different version.
|
|
||||||
for possible_file in json['releases'][version]:
|
|
||||||
for extension in extensions:
|
|
||||||
if possible_file['filename'].endswith(extension):
|
|
||||||
src = {'url': str(possible_file['url']),
|
|
||||||
'sha256': str(possible_file['digests']['sha256']),
|
|
||||||
}
|
|
||||||
return src
|
|
||||||
else:
|
|
||||||
msg = "Package {}: No release with valid file extension available.".format(json['info']['name'])
|
|
||||||
logging.info(msg)
|
|
||||||
return None
|
|
||||||
#raise ValueError(msg)
|
|
||||||
|
|
||||||
def _get_sources(json, extensions):
|
elif fetcher == 'fetchFromGitHub':
|
||||||
versions = _available_versions(json)
|
raise ValueError('updating from GitHub is not yet implemented.')
|
||||||
releases = {version: _get_src_and_hash(json, version, extensions) for version in versions}
|
|
||||||
releases = toolz.itemfilter(lambda x: x[1] is not None, releases)
|
|
||||||
return releases
|
|
||||||
|
|
||||||
# Collect data)
|
return extension
|
||||||
name = str(json['info']['name'])
|
|
||||||
latest_version = str(_extract_latest_version(json))
|
|
||||||
#src = _get_src_and_hash(json, latest_version, EXTENSIONS)
|
|
||||||
sources = _get_sources(json, [extension])
|
|
||||||
|
|
||||||
# Collect meta data
|
|
||||||
license = str(_extract_license(json))
|
|
||||||
license = license if license != "UNKNOWN" else None
|
|
||||||
summary = str(json['info'].get('summary')).strip('.')
|
|
||||||
summary = summary if summary != "UNKNOWN" else None
|
|
||||||
#description = str(json['info'].get('description'))
|
|
||||||
#description = description if description != "UNKNOWN" else None
|
|
||||||
homepage = json['info'].get('home_page')
|
|
||||||
|
|
||||||
data = {
|
|
||||||
'latest_version' : latest_version,
|
|
||||||
'versions' : sources,
|
|
||||||
#'src' : src,
|
|
||||||
'meta' : {
|
|
||||||
'description' : summary if summary else None,
|
|
||||||
#'longDescription' : description,
|
|
||||||
'license' : license,
|
|
||||||
'homepage' : homepage,
|
|
||||||
},
|
|
||||||
}
|
|
||||||
return name, data
|
|
||||||
|
|
||||||
|
|
||||||
def _update_package(path):
|
def _update_package(path):
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
# Read the expression
|
||||||
|
with open(path, 'r') as f:
|
||||||
|
text = f.read()
|
||||||
|
|
||||||
|
# Determine pname.
|
||||||
|
pname = _get_unique_value('pname', text)
|
||||||
|
|
||||||
|
# Determine version.
|
||||||
|
version = _get_unique_value('version', text)
|
||||||
|
|
||||||
|
# First we check how many fetchers are mentioned.
|
||||||
|
fetcher = _determine_fetcher(text)
|
||||||
|
|
||||||
|
extension = _determine_extension(text, fetcher)
|
||||||
|
|
||||||
|
new_version, new_sha256 = _get_latest_version_pypi(pname, extension)
|
||||||
|
|
||||||
|
if new_version == version:
|
||||||
|
logging.info("Path {}: no update available for {}.".format(path, pname))
|
||||||
|
return False
|
||||||
|
if not new_sha256:
|
||||||
|
raise ValueError("no file available for {}.".format(pname))
|
||||||
|
|
||||||
|
text = _replace_value('version', new_version, text)
|
||||||
|
text = _replace_value('sha256', new_sha256, text)
|
||||||
|
|
||||||
|
with open(path, 'w') as f:
|
||||||
|
f.write(text)
|
||||||
|
|
||||||
|
logging.info("Path {}: updated {} from {} to {}".format(path, pname, version, new_version))
|
||||||
|
|
||||||
|
return True
|
||||||
|
|
||||||
|
|
||||||
|
def _update(path):
|
||||||
|
|
||||||
# We need to read and modify a Nix expression.
|
# We need to read and modify a Nix expression.
|
||||||
if os.path.isdir(path):
|
if os.path.isdir(path):
|
||||||
path = os.path.join(path, 'default.nix')
|
path = os.path.join(path, 'default.nix')
|
||||||
|
|
||||||
|
# If a default.nix does not exist, we quit.
|
||||||
if not os.path.isfile(path):
|
if not os.path.isfile(path):
|
||||||
logging.warning("Path does not exist: {}".format(path))
|
logging.info("Path {}: does not exist.".format(path))
|
||||||
return False
|
return False
|
||||||
|
|
||||||
|
# If file is not a Nix expression, we quit.
|
||||||
if not path.endswith(".nix"):
|
if not path.endswith(".nix"):
|
||||||
logging.warning("Path does not end with `.nix`, skipping: {}".format(path))
|
logging.info("Path {}: does not end with `.nix`.".format(path))
|
||||||
return False
|
|
||||||
|
|
||||||
with open(path, 'r') as f:
|
|
||||||
text = f.read()
|
|
||||||
|
|
||||||
try:
|
|
||||||
pname = _get_value('pname', text)
|
|
||||||
except ValueError as e:
|
|
||||||
logging.warning("Path {}: {}".format(path, str(e)))
|
|
||||||
return False
|
return False
|
||||||
|
|
||||||
try:
|
try:
|
||||||
version = _get_value('version', text)
|
return _update_package(path)
|
||||||
except ValueError as e:
|
except ValueError as e:
|
||||||
logging.warning("Path {}: {}".format(path, str(e)))
|
logging.warning("Path {}: {}".format(path, e))
|
||||||
return False
|
return False
|
||||||
|
|
||||||
# If we use a wheel, then we need to request a wheel as well
|
|
||||||
try:
|
|
||||||
format = _get_value('format', text)
|
|
||||||
except ValueError as e:
|
|
||||||
# No format mentioned, then we assume we have setuptools
|
|
||||||
# and use a .tar.gz
|
|
||||||
logging.info("Path {}: {}".format(path, str(e)))
|
|
||||||
extension = ".tar.gz"
|
|
||||||
else:
|
|
||||||
if format == 'wheel':
|
|
||||||
extension = ".whl"
|
|
||||||
else:
|
|
||||||
try:
|
|
||||||
url = _get_value('url', text)
|
|
||||||
extension = os.path.splitext(url)[1]
|
|
||||||
if 'pypi' not in url:
|
|
||||||
logging.warning("Path {}: uses non-PyPI url, not updating.".format(path))
|
|
||||||
return False
|
|
||||||
except ValueError as e:
|
|
||||||
logging.info("Path {}: {}".format(path, str(e)))
|
|
||||||
extension = ".tar.gz"
|
|
||||||
|
|
||||||
try:
|
|
||||||
new_version, new_sha256 = _get_latest_version(pname, extension)
|
|
||||||
except ValueError as e:
|
|
||||||
logging.warning("Path {}: {}".format(path, str(e)))
|
|
||||||
else:
|
|
||||||
if not new_sha256:
|
|
||||||
logging.warning("Path has no valid file available: {}".format(path))
|
|
||||||
return False
|
|
||||||
if new_version != version:
|
|
||||||
try:
|
|
||||||
text = _replace_value('version', new_version, text)
|
|
||||||
except ValueError as e:
|
|
||||||
logging.warning("Path {}: {}".format(path, str(e)))
|
|
||||||
try:
|
|
||||||
text = _replace_value('sha256', new_sha256, text)
|
|
||||||
except ValueError as e:
|
|
||||||
logging.warning("Path {}: {}".format(path, str(e)))
|
|
||||||
|
|
||||||
with open(path, 'w') as f:
|
|
||||||
f.write(text)
|
|
||||||
|
|
||||||
logging.info("Updated {} from {} to {}".format(pname, version, new_version))
|
|
||||||
|
|
||||||
else:
|
|
||||||
logging.info("No update available for {} at {}".format(pname, version))
|
|
||||||
|
|
||||||
return True
|
|
||||||
|
|
||||||
|
|
||||||
def main():
|
def main():
|
||||||
|
|
||||||
parser = argparse.ArgumentParser()
|
parser = argparse.ArgumentParser()
|
||||||
@ -240,11 +231,11 @@ def main():
|
|||||||
|
|
||||||
args = parser.parse_args()
|
args = parser.parse_args()
|
||||||
|
|
||||||
packages = args.package
|
packages = map(os.path.abspath, args.package)
|
||||||
|
|
||||||
count = list(map(_update_package, packages))
|
count = list(map(_update, packages))
|
||||||
|
|
||||||
#logging.info("{} package(s) updated".format(sum(count)))
|
logging.info("{} package(s) updated".format(sum(count)))
|
||||||
|
|
||||||
if __name__ == '__main__':
|
if __name__ == '__main__':
|
||||||
main()
|
main()
|
@ -57,7 +57,7 @@ Thus, if something went wrong, you can get status info using
|
|||||||
|
|
||||||
</para>
|
</para>
|
||||||
|
|
||||||
<para>If the container has started succesfully, you can log in as
|
<para>If the container has started successfully, you can log in as
|
||||||
root using the <command>root-login</command> operation:
|
root using the <command>root-login</command> operation:
|
||||||
|
|
||||||
<screen>
|
<screen>
|
||||||
|
@ -45,6 +45,13 @@ services.xserver.displayManager.lightdm.enable = true;
|
|||||||
</programlisting>
|
</programlisting>
|
||||||
</para>
|
</para>
|
||||||
|
|
||||||
|
<para>You can set the keyboard layout (and optionally the layout variant):
|
||||||
|
<programlisting>
|
||||||
|
services.xserver.layout = "de";
|
||||||
|
services.xserver.xkbVariant = "neo";
|
||||||
|
</programlisting>
|
||||||
|
</para>
|
||||||
|
|
||||||
<para>The X server is started automatically at boot time. If you
|
<para>The X server is started automatically at boot time. If you
|
||||||
don’t want this to happen, you can set:
|
don’t want this to happen, you can set:
|
||||||
<programlisting>
|
<programlisting>
|
||||||
|
@ -12,12 +12,12 @@ your <filename>configuration.nix</filename> to configure the system that
|
|||||||
would be installed on the CD.</para>
|
would be installed on the CD.</para>
|
||||||
|
|
||||||
<para>Default CD/DVD configurations are available
|
<para>Default CD/DVD configurations are available
|
||||||
inside <filename>nixos/modules/installer/cd-dvd</filename>. To build them
|
inside <filename>nixos/modules/installer/cd-dvd</filename>.
|
||||||
you have to set <envar>NIXOS_CONFIG</envar> before
|
|
||||||
running <command>nix-build</command> to build the ISO.
|
|
||||||
|
|
||||||
<screen>
|
<screen>
|
||||||
$ nix-build -A config.system.build.isoImage -I nixos-config=modules/installer/cd-dvd/installation-cd-minimal.nix</screen>
|
$ git clone https://github.com/NixOS/nixpkgs.git
|
||||||
|
$ cd nixpkgs/nixos
|
||||||
|
$ nix-build -A config.system.build.isoImage -I nixos-config=modules/installer/cd-dvd/installation-cd-minimal.nix default.nix</screen>
|
||||||
|
|
||||||
</para>
|
</para>
|
||||||
|
|
||||||
|
@ -96,7 +96,7 @@ options = {
|
|||||||
</itemizedlist>
|
</itemizedlist>
|
||||||
</para>
|
</para>
|
||||||
|
|
||||||
<para>Both approachs have problems.</para>
|
<para>Both approaches have problems.</para>
|
||||||
|
|
||||||
<para>Making backends independent can quickly become hard to manage. For
|
<para>Making backends independent can quickly become hard to manage. For
|
||||||
display managers, there can be only one enabled at a time, but the type
|
display managers, there can be only one enabled at a time, but the type
|
||||||
|
@ -396,7 +396,7 @@ code before creating a new type.</para>
|
|||||||
<listitem><para>For composed types that can take a submodule as type
|
<listitem><para>For composed types that can take a submodule as type
|
||||||
parameter, this function can be used to substitute the parameter of a
|
parameter, this function can be used to substitute the parameter of a
|
||||||
submodule type. It takes a module as a parameter and returns the type with
|
submodule type. It takes a module as a parameter and returns the type with
|
||||||
the submodule options substituted. It is usally defined as a type
|
the submodule options substituted. It is usually defined as a type
|
||||||
function call with a recursive call to
|
function call with a recursive call to
|
||||||
<literal>substSubModules</literal>, e.g. for a type
|
<literal>substSubModules</literal>, e.g. for a type
|
||||||
<literal>composedType</literal> that takes an <literal>elemtype</literal>
|
<literal>composedType</literal> that takes an <literal>elemtype</literal>
|
||||||
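The sentence is cut off at the hunk boundary; the pattern it describes looks roughly like the listOf definition in lib/types.nix, reproduced here from memory as a sketch rather than the exact source:

```nix
listOf = elemType: mkOptionType rec {
  name = "list of ${elemType.name}s";
  # check, merge, getSubOptions and friends elided ...
  # Substitute the submodule parameter recursively in the element type:
  substSubModules = m: listOf (elemType.substSubModules m);
};
```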
|
@ -342,7 +342,7 @@ nix-env -f "<nixpkgs>" -iA haskellPackages.pandoc
|
|||||||
|
|
||||||
<listitem>
|
<listitem>
|
||||||
<para>
|
<para>
|
||||||
Python 2.6 has been marked as broken (as it no longer recieves
|
Python 2.6 has been marked as broken (as it no longer receives
|
||||||
security updates from upstream).
|
security updates from upstream).
|
||||||
</para>
|
</para>
|
||||||
</listitem>
|
</listitem>
|
||||||
|
@ -362,7 +362,7 @@ services.syncthing = {
|
|||||||
<listitem>
|
<listitem>
|
||||||
<para>
|
<para>
|
||||||
<literal>networking.firewall.allowPing</literal> is now enabled by
|
<literal>networking.firewall.allowPing</literal> is now enabled by
|
||||||
default. Users are encourarged to configure an approiate rate limit for
|
default. Users are encouraged to configure an appropriate rate limit for
|
||||||
their machines using the Kernel interface at
|
their machines using the Kernel interface at
|
||||||
<filename>/proc/sys/net/ipv4/icmp_ratelimit</filename> and
|
<filename>/proc/sys/net/ipv4/icmp_ratelimit</filename> and
|
||||||
<filename>/proc/sys/net/ipv6/icmp/ratelimit</filename> or using the
|
<filename>/proc/sys/net/ipv6/icmp/ratelimit</filename> or using the
|
||||||
|
@ -55,6 +55,12 @@ has the following highlights: </para>
|
|||||||
following incompatible changes:</para>
|
following incompatible changes:</para>
|
||||||
|
|
||||||
<itemizedlist>
|
<itemizedlist>
|
||||||
|
<listitem>
|
||||||
|
<para>
|
||||||
|
<literal>aiccu</literal> package was removed. This is due to SixXS
|
||||||
|
<link xlink:href="https://www.sixxs.net/main/"> sunsetting</link> its IPv6 tunnel.
|
||||||
|
</para>
|
||||||
|
</listitem>
|
||||||
<listitem>
|
<listitem>
|
||||||
<para>
|
<para>
|
||||||
Top-level <literal>idea</literal> package collection was renamed.
|
Top-level <literal>idea</literal> package collection was renamed.
|
||||||
@ -89,6 +95,24 @@ rmdir /var/lib/ipfs/.ipfs
|
|||||||
The <literal>postgres</literal> default <literal>dataDir</literal> has changed from <literal>/var/db/postgres</literal> to <literal>/var/lib/postgresql/$psqlSchema</literal> where $psqlSchema is 9.6 for example.
|
The <literal>postgres</literal> default <literal>dataDir</literal> has changed from <literal>/var/db/postgres</literal> to <literal>/var/lib/postgresql/$psqlSchema</literal> where $psqlSchema is 9.6 for example.
|
||||||
</para>
|
</para>
|
||||||
</listitem>
|
</listitem>
|
||||||
|
<listitem>
|
||||||
|
<para>
|
||||||
|
The <literal>caddy</literal> service was previously using an extra
|
||||||
|
<literal>.caddy</literal> in the data directory specified with the
|
||||||
|
<literal>dataDir</literal> option. The contents of the
|
||||||
|
<literal>.caddy</literal> directory are now expected to be in the
|
||||||
|
<literal>dataDir</literal>.
|
||||||
|
</para>
|
||||||
|
</listitem>
|
||||||
|
<listitem>
|
||||||
|
<para>
|
||||||
|
The <literal>ssh-agent</literal> user service is not started by default
|
||||||
|
anymore. Use <literal>programs.ssh.startAgent</literal> to enable it if
|
||||||
|
needed. There is also a new <literal>programs.gnupg.agent</literal>
|
||||||
|
module that creates a <literal>gpg-agent</literal> user service. It can
|
||||||
|
also serve as an SSH agent if <literal>enableSSHSupport</literal> is set.
|
||||||
|
</para>
|
||||||
|
</listitem>
|
||||||
</itemizedlist>
|
</itemizedlist>
|
||||||
|
|
||||||
|
|
||||||
|
@ -35,7 +35,7 @@ foreach my $vlan (split / /, $ENV{VLANS} || "") {
|
|||||||
if ($pid == 0) {
|
if ($pid == 0) {
|
||||||
dup2(fileno($pty->slave), 0);
|
dup2(fileno($pty->slave), 0);
|
||||||
dup2(fileno($stdoutW), 1);
|
dup2(fileno($stdoutW), 1);
|
||||||
exec "vde_switch -s $socket" or _exit(1);
|
exec "vde_switch -s $socket --dirmode 0700" or _exit(1);
|
||||||
}
|
}
|
||||||
close $stdoutW;
|
close $stdoutW;
|
||||||
print $pty "version\n";
|
print $pty "version\n";
|
||||||
|
@ -1,5 +1,5 @@
|
|||||||
{
|
{
|
||||||
x86_64-linux = "/nix/store/71im965h634iy99zsmlncw6qhx5jcclx-nix-1.11.9";
|
x86_64-linux = "/nix/store/crqd5wmrqipl4n1fcm5kkc1zg4sj80js-nix-1.11.11";
|
||||||
i686-linux = "/nix/store/cgvavixkayc36l6kl92i8mxr6k0p2yhy-nix-1.11.9";
|
i686-linux = "/nix/store/wsjn14xp5ja509d4dxb1c78zhirw0b5x-nix-1.11.11";
|
||||||
x86_64-darwin = "/nix/store/w1c96v5yxvdmq4nvqlxjvg6kp7xa2lag-nix-1.11.9";
|
x86_64-darwin = "/nix/store/zqkqnhk85g2shxlpb04y72h1i3db3gpl-nix-1.11.11";
|
||||||
}
|
}
|
||||||
|
@ -294,6 +294,7 @@
|
|||||||
jackett = 276;
|
jackett = 276;
|
||||||
aria2 = 277;
|
aria2 = 277;
|
||||||
clickhouse = 278;
|
clickhouse = 278;
|
||||||
|
rslsync = 279;
|
||||||
|
|
||||||
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
|
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
|
||||||
|
|
||||||
@ -557,6 +558,7 @@
|
|||||||
jackett = 276;
|
jackett = 276;
|
||||||
aria2 = 277;
|
aria2 = 277;
|
||||||
clickhouse = 278;
|
clickhouse = 278;
|
||||||
|
rslsync = 279;
|
||||||
|
|
||||||
# When adding a gid, make sure it doesn't match an existing
|
# When adding a gid, make sure it doesn't match an existing
|
||||||
# uid. Users and groups with the same name should have equal
|
# uid. Users and groups with the same name should have equal
|
||||||
|
@ -131,9 +131,9 @@ in {
|
|||||||
path = mkIf (!isMLocate) [ pkgs.su ];
|
path = mkIf (!isMLocate) [ pkgs.su ];
|
||||||
script =
|
script =
|
||||||
''
|
''
|
||||||
install -m ${if isMLocate then "0750" else "0755"} -o root -g ${if isMLocate then "mlocate" else "root"} -d $(dirname ${cfg.output})
|
mkdir -m 0755 -p ${dirOf cfg.output}
|
||||||
exec ${cfg.locate}/bin/updatedb \
|
exec ${cfg.locate}/bin/updatedb \
|
||||||
${optionalString (cfg.localuser != null) ''--localuser=${cfg.localuser}''} \
|
${optionalString (cfg.localuser != null && ! isMLocate) ''--localuser=${cfg.localuser}''} \
|
||||||
--output=${toString cfg.output} ${concatStringsSep " " cfg.extraFlags}
|
--output=${toString cfg.output} ${concatStringsSep " " cfg.extraFlags}
|
||||||
'';
|
'';
|
||||||
environment = {
|
environment = {
|
||||||
|
@ -99,6 +99,7 @@
|
|||||||
./programs/spacefm.nix
|
./programs/spacefm.nix
|
||||||
./programs/ssh.nix
|
./programs/ssh.nix
|
||||||
./programs/ssmtp.nix
|
./programs/ssmtp.nix
|
||||||
|
./programs/thefuck.nix
|
||||||
./programs/tmux.nix
|
./programs/tmux.nix
|
||||||
./programs/venus.nix
|
./programs/venus.nix
|
||||||
./programs/vim.nix
|
./programs/vim.nix
|
||||||
@ -283,6 +284,7 @@
|
|||||||
./services/misc/etcd.nix
|
./services/misc/etcd.nix
|
||||||
./services/misc/felix.nix
|
./services/misc/felix.nix
|
||||||
./services/misc/folding-at-home.nix
|
./services/misc/folding-at-home.nix
|
||||||
|
./services/misc/fstrim.nix
|
||||||
./services/misc/gammu-smsd.nix
|
./services/misc/gammu-smsd.nix
|
||||||
./services/misc/geoip-updater.nix
|
./services/misc/geoip-updater.nix
|
||||||
#./services/misc/gitit.nix
|
#./services/misc/gitit.nix
|
||||||
@ -387,7 +389,6 @@
|
|||||||
./services/network-filesystems/u9fs.nix
|
./services/network-filesystems/u9fs.nix
|
||||||
./services/network-filesystems/yandex-disk.nix
|
./services/network-filesystems/yandex-disk.nix
|
||||||
./services/network-filesystems/xtreemfs.nix
|
./services/network-filesystems/xtreemfs.nix
|
||||||
./services/networking/aiccu.nix
|
|
||||||
./services/networking/amuled.nix
|
./services/networking/amuled.nix
|
||||||
./services/networking/asterisk.nix
|
./services/networking/asterisk.nix
|
||||||
./services/networking/atftpd.nix
|
./services/networking/atftpd.nix
|
||||||
@ -485,6 +486,7 @@
|
|||||||
./services/networking/radvd.nix
|
./services/networking/radvd.nix
|
||||||
./services/networking/rdnssd.nix
|
./services/networking/rdnssd.nix
|
||||||
./services/networking/redsocks.nix
|
./services/networking/redsocks.nix
|
||||||
|
./services/networking/resilio.nix
|
||||||
./services/networking/rpcbind.nix
|
./services/networking/rpcbind.nix
|
||||||
./services/networking/sabnzbd.nix
|
./services/networking/sabnzbd.nix
|
||||||
./services/networking/searx.nix
|
./services/networking/searx.nix
|
||||||
|
@@ -21,13 +21,37 @@ in

  agent.enableSSHSupport = mkOption {
  type = types.bool;
- default = true;
+ default = false;
  description = ''
  Enable SSH agent support in GnuPG agent. Also sets SSH_AUTH_SOCK
  environment variable correctly. This will disable socket-activation
  and thus always start a GnuPG agent per user session.
  '';
  };

+ agent.enableExtraSocket = mkOption {
+ type = types.bool;
+ default = false;
+ description = ''
+ Enable extra socket for GnuPG agent.
+ '';
+ };
+
+ agent.enableBrowserSocket = mkOption {
+ type = types.bool;
+ default = false;
+ description = ''
+ Enable browser socket for GnuPG agent.
+ '';
+ };
+
+ dirmngr.enable = mkOption {
+ type = types.bool;
+ default = false;
+ description = ''
+ Enables GnuPG network certificate management daemon with socket-activation for every user session.
+ '';
+ };
  };

  config = mkIf cfg.agent.enable {
@@ -38,15 +62,72 @@ in
  ("${pkgs.gnupg}/bin/gpg-agent --supervised "
  + optionalString cfg.agent.enableSSHSupport "--enable-ssh-support")
  ];
+ ExecReload = "${pkgs.gnupg}/bin/gpgconf --reload gpg-agent";
  };
  };

  systemd.user.sockets.gpg-agent = {
  wantedBy = [ "sockets.target" ];
+ listenStreams = [ "%t/gnupg/S.gpg-agent" ];
+ socketConfig = {
+ FileDescriptorName = "std";
+ SocketMode = "0600";
+ DirectoryMode = "0700";
+ };
  };

  systemd.user.sockets.gpg-agent-ssh = mkIf cfg.agent.enableSSHSupport {
  wantedBy = [ "sockets.target" ];
+ listenStreams = [ "%t/gnupg/S.gpg-agent.ssh" ];
+ socketConfig = {
+ FileDescriptorName = "ssh";
+ Service = "gpg-agent.service";
+ SocketMode = "0600";
+ DirectoryMode = "0700";
+ };
+ };
+
+ systemd.user.sockets.gpg-agent-extra = mkIf cfg.agent.enableExtraSocket {
+ wantedBy = [ "sockets.target" ];
+ listenStreams = [ "%t/gnupg/S.gpg-agent.extra" ];
+ socketConfig = {
+ FileDescriptorName = "extra";
+ Service = "gpg-agent.service";
+ SocketMode = "0600";
+ DirectoryMode = "0700";
+ };
+ };
+
+ systemd.user.sockets.gpg-agent-browser = mkIf cfg.agent.enableBrowserSocket {
+ wantedBy = [ "sockets.target" ];
+ listenStreams = [ "%t/gnupg/S.gpg-agent.browser" ];
+ socketConfig = {
+ FileDescriptorName = "browser";
+ Service = "gpg-agent.service";
+ SocketMode = "0600";
+ DirectoryMode = "0700";
+ };
+ };
+
+ systemd.user.services.dirmngr = {
+ requires = [ "dirmngr.socket" ];
+ after = [ "dirmngr.socket" ];
+ unitConfig = {
+ RefuseManualStart = "true";
+ };
+ serviceConfig = {
+ ExecStart = "${pkgs.gnupg}/bin/dirmngr --supervised";
+ ExecReload = "${pkgs.gnupg}/bin/gpgconf --reload dirmngr";
+ };
+ };
+
+ systemd.user.sockets.dirmngr = {
+ wantedBy = [ "sockets.target" ];
+ listenStreams = [ "%t/gnupg/S.dirmngr" ];
+ socketConfig = {
+ SocketMode = "0600";
+ DirectoryMode = "0700";
+ };
  };

  systemd.packages = [ pkgs.gnupg ];
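For illustration, a host configuration could opt into the new agent sockets roughly like this, a minimal sketch assuming the options above live under `programs.gnupg`:

```nix
{
  programs.gnupg.agent = {
    enable = true;
    enableSSHSupport = true;    # no longer enabled by default
    enableExtraSocket = true;   # exposes S.gpg-agent.extra for agent forwarding
    enableBrowserSocket = false;
  };
  programs.gnupg.dirmngr.enable = true;
}
```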
@@ -74,7 +74,7 @@ in

  startAgent = mkOption {
  type = types.bool;
- default = true;
+ default = false;
  description = ''
  Whether to start the OpenSSH agent when you log in. The OpenSSH agent
  remembers private keys for you so that you don't have to type in
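Since the default flips to false here, configurations that relied on the agent being started implicitly need to request it explicitly; a minimal sketch, assuming the usual `programs.ssh` option path:

```nix
{ programs.ssh.startAgent = true; }
```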
nixos/modules/programs/thefuck.nix (new file, 31 lines)
@@ -0,0 +1,31 @@
+ { config, pkgs, lib, ... }:
+
+ with lib;
+
+ let
+ cfg = config.programs.thefuck;
+ in
+ {
+ options = {
+ programs.thefuck = {
+ enable = mkEnableOption "thefuck";
+
+ alias = mkOption {
+ default = "fuck";
+ type = types.string;
+
+ description = ''
+ `thefuck` needs an alias to be configured.
+ The default value is `fuck`, but you can use anything else as well.
+ '';
+ };
+ };
+ };
+
+ config = mkIf cfg.enable {
+ environment.systemPackages = with pkgs; [ thefuck ];
+ environment.shellInit = ''
+ eval $(${pkgs.thefuck}/bin/thefuck --alias ${cfg.alias})
+ '';
+ };
+ }
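A minimal usage sketch for the new module (the alias value is arbitrary):

```nix
{
  programs.thefuck = {
    enable = true;
    alias = "oops";   # defaults to "fuck"
  };
}
```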
@@ -8,13 +8,7 @@ in
  {
  options = {
  programs.zsh.syntaxHighlighting = {
- enable = mkOption {
- default = false;
- type = types.bool;
- description = ''
- Enable zsh-syntax-highlighting.
- '';
- };
+ enable = mkEnableOption "zsh-syntax-highlighting";

  highlighters = mkOption {
  default = [ "main" ];
@@ -38,13 +32,13 @@ in
  };

  patterns = mkOption {
- default = [];
- type = types.listOf(types.listOf(types.string));
+ default = {};
+ type = types.attrsOf types.string;

  example = literalExample ''
- [
- ["rm -rf *" "fg=white,bold,bg=red"]
- ]
+ {
+ "rm -rf *" = "fg=white,bold,bg=red";
+ }
  '';

  description = ''
@@ -67,14 +61,17 @@ in
  "ZSH_HIGHLIGHT_HIGHLIGHTERS=(${concatStringsSep " " cfg.highlighters})"
  }

- ${optionalString (length(cfg.patterns) > 0)
- (assert(elem "pattern" cfg.highlighters); (foldl (
- a: b:
- assert(length(b) == 2); ''
- ${a}
- ZSH_HIGHLIGHT_PATTERNS+=('${elemAt b 0}' '${elemAt b 1}')
- ''
- ) "") cfg.patterns)
+ ${let
+ n = attrNames cfg.patterns;
+ in
+ optionalString (length(n) > 0)
+ (assert(elem "pattern" cfg.highlighters); (foldl (
+ a: b:
+ ''
+ ${a}
+ ZSH_HIGHLIGHT_PATTERNS+=('${b}' '${attrByPath [b] "" cfg.patterns}')
+ ''
+ ) "") n)
  }
  '';
  };
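With the option now typed as an attribute set, a configuration would look roughly like this (a sketch; the `pattern` highlighter must be listed for patterns to take effect, as the assert above enforces):

```nix
{
  programs.zsh.syntaxHighlighting = {
    enable = true;
    highlighters = [ "main" "pattern" ];
    # keys are the patterns, values are zsh-syntax-highlighting styles
    patterns = { "rm -rf *" = "fg=white,bold,bg=red"; };
  };
}
```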
@@ -13,7 +13,7 @@ let
  description = ''
  Where the webroot of the HTTP vhost is located.
  <filename>.well-known/acme-challenge/</filename> directory
- will be created automatically if it doesn't exist.
+ will be created below the webroot if it doesn't exist.
  <literal>http://example.org/.well-known/acme-challenge/</literal> must also
  be available (notice unencrypted HTTP).
  '';
@@ -46,7 +46,10 @@ let
  allowKeysForGroup = mkOption {
  type = types.bool;
  default = false;
- description = "Give read permissions to the specified group to read SSL private certificates.";
+ description = ''
+ Give read permissions to the specified group
+ (<option>security.acme.group</option>) to read SSL private certificates.
+ '';
  };

  postRun = mkOption {
@@ -65,21 +68,24 @@ let
  "cert.der" "cert.pem" "chain.pem" "external.sh"
  "fullchain.pem" "full.pem" "key.der" "key.pem" "account_key.json"
  ]);
- default = [ "fullchain.pem" "key.pem" "account_key.json" ];
+ default = [ "fullchain.pem" "full.pem" "key.pem" "account_key.json" ];
  description = ''
  Plugins to enable. With default settings simp_le will
- store public certificate bundle in <filename>fullchain.pem</filename>
- and private key in <filename>key.pem</filename> in its state directory.
+ store public certificate bundle in <filename>fullchain.pem</filename>,
+ private key in <filename>key.pem</filename> and those two previous
+ files combined in <filename>full.pem</filename> in its state directory.
  '';
  };

  extraDomains = mkOption {
  type = types.attrsOf (types.nullOr types.str);
  default = {};
- example = {
- "example.org" = "/srv/http/nginx";
- "mydomain.org" = null;
- };
+ example = literalExample ''
+ {
+ "example.org" = "/srv/http/nginx";
+ "mydomain.org" = null;
+ }
+ '';
  description = ''
  Extra domain names for which certificates are to be issued, with their
  own server roots if needed.
@@ -139,17 +145,19 @@ in
  description = ''
  Attribute set of certificates to get signed and renewed.
  '';
- example = {
- "example.com" = {
- webroot = "/var/www/challenges/";
- email = "foo@example.com";
- extraDomains = { "www.example.com" = null; "foo.example.com" = "/var/www/foo/"; };
- };
- "bar.example.com" = {
- webroot = "/var/www/challenges/";
- email = "bar@example.com";
- };
- };
+ example = literalExample ''
+ {
+ "example.com" = {
+ webroot = "/var/www/challenges/";
+ email = "foo@example.com";
+ extraDomains = { "www.example.com" = null; "foo.example.com" = "/var/www/foo/"; };
+ };
+ "bar.example.com" = {
+ webroot = "/var/www/challenges/";
+ email = "bar@example.com";
+ };
+ }
+ '';
  };
  };
  };
@@ -238,6 +246,9 @@ in
  mv $workdir/server.key ${cpath}/key.pem
  mv $workdir/server.crt ${cpath}/fullchain.pem
+
+ # Create full.pem for e.g. lighttpd (same format as "simp_le ... -f full.pem" creates)
+ cat "${cpath}/key.pem" "${cpath}/fullchain.pem" > "${cpath}/full.pem"
+
  # Clean up working directory
  rm $workdir/server.csr
  rm $workdir/server.pass.key
@@ -247,6 +258,8 @@ in
  chown '${data.user}:${data.group}' '${cpath}/key.pem'
  chmod ${rights} '${cpath}/fullchain.pem'
  chown '${data.user}:${data.group}' '${cpath}/fullchain.pem'
+ chmod ${rights} '${cpath}/full.pem'
+ chown '${data.user}:${data.group}' '${cpath}/full.pem'
  '';
  serviceConfig = {
  Type = "oneshot";
@@ -275,15 +288,14 @@ in
  )
  );
  servicesAttr = listToAttrs services;
- nginxAttr = {
- nginx = {
- after = [ "acme-selfsigned-certificates.target" ];
- wants = [ "acme-selfsigned-certificates.target" "acme-certificates.target" ];
- };
- };
+ injectServiceDep = {
+ after = [ "acme-selfsigned-certificates.target" ];
+ wants = [ "acme-selfsigned-certificates.target" "acme-certificates.target" ];
+ };
  in
  servicesAttr //
- (if config.services.nginx.enable then nginxAttr else {});
+ (if config.services.nginx.enable then { nginx = injectServiceDep; } else {}) //
+ (if config.services.lighttpd.enable then { lighttpd = injectServiceDep; } else {});

  systemd.timers = flip mapAttrs' cfg.certs (cert: data: nameValuePair
  ("acme-${cert}")
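The combined `full.pem` is the format lighttpd expects for its `ssl.pemfile` directive. A hypothetical vhost could consume it roughly like this, assuming the default ACME state directory `/var/lib/acme/<cert>` and that the lighttpd module accepts raw config via `extraConfig`:

```nix
{
  services.lighttpd.enable = true;
  services.lighttpd.extraConfig = ''
    $SERVER["socket"] == ":443" {
      ssl.engine  = "enable"
      ssl.pemfile = "/var/lib/acme/example.com/full.pem"
    }
  '';
}
```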
@@ -80,8 +80,8 @@ let
  group = "root";
  } // s)
  else if
- (s ? "setuid" && s.setuid == true) ||
- (s ? "setguid" && s.setguid == true) ||
+ (s ? "setuid" && s.setuid) ||
+ (s ? "setgid" && s.setgid) ||
  (s ? "permissions")
  then mkSetuidProgram s
  else mkSetuidProgram
@@ -40,7 +40,7 @@ let
  });

  policyFile = pkgs.writeText "kube-policy"
- concatStringsSep "\n" (map (builtins.toJSON cfg.apiserver.authorizationPolicy));
+ (concatStringsSep "\n" (map builtins.toJSON cfg.apiserver.authorizationPolicy));

  cniConfig = pkgs.buildEnv {
  name = "kubernetes-cni-config";
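The parentheses matter because Nix function application is left-associative: without them, `writeText` is applied to `"kube-policy"` and then to `concatStringsSep` itself rather than to the joined string, and the old inner parentheses also passed `builtins.toJSON cfg.apiserver.authorizationPolicy` as the mapping function instead of `builtins.toJSON`. A minimal sketch of the corrected call:

```nix
# map serialises each policy to JSON; the outer parentheses make the
# joined string the second argument of writeText.
policyFile = pkgs.writeText "kube-policy"
  (concatStringsSep "\n" (map builtins.toJSON cfg.apiserver.authorizationPolicy));
```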
@@ -125,6 +125,15 @@ in {
  Additional command line arguments to pass to Jenkins.
  '';
  };
+
+ extraJavaOptions = mkOption {
+ type = types.listOf types.str;
+ default = [ ];
+ example = [ "-Xmx80m" ];
+ description = ''
+ Additional command line arguments to pass to the Java run time (as opposed to Jenkins).
+ '';
+ };
  };
  };

@@ -185,7 +194,7 @@ in {
  '';

  script = ''
- ${pkgs.jdk}/bin/java -jar ${pkgs.jenkins}/webapps/jenkins.war --httpListenAddress=${cfg.listenAddress} \
+ ${pkgs.jdk}/bin/java ${concatStringsSep " " cfg.extraJavaOptions} -jar ${pkgs.jenkins}/webapps/jenkins.war --httpListenAddress=${cfg.listenAddress} \
  --httpPort=${toString cfg.port} \
  --prefix=${cfg.prefix} \
  ${concatStringsSep " " cfg.extraOptions}
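An illustrative configuration using the new option (the JVM flags shown are arbitrary examples):

```nix
{
  services.jenkins = {
    enable = true;
    # passed to the JVM itself, before -jar, as opposed to extraOptions,
    # which Jenkins parses after the war file
    extraJavaOptions = [ "-Xmx512m" "-Djava.awt.headless=true" ];
  };
}
```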
@@ -4,17 +4,46 @@ with lib;

  let
  cfg = config.services.logstash;
+ atLeast54 = versionAtLeast (builtins.parseDrvName cfg.package.name).version "5.4";
  pluginPath = lib.concatStringsSep ":" cfg.plugins;
  havePluginPath = lib.length cfg.plugins > 0;
  ops = lib.optionalString;
- verbosityFlag = {
- debug = "--debug";
- info = "--verbose";
- warn = ""; # intentionally empty
- error = "--quiet";
- fatal = "--silent";
- }."${cfg.logLevel}";
+ verbosityFlag =
+ if atLeast54
+ then "--log.level " + cfg.logLevel
+ else {
+ debug = "--debug";
+ info = "--verbose";
+ warn = ""; # intentionally empty
+ error = "--quiet";
+ fatal = "--silent";
+ }."${cfg.logLevel}";
+
+ pluginsPath =
+ if atLeast54
+ then "--path.plugins ${pluginPath}"
+ else "--pluginpath ${pluginPath}";
+
+ logstashConf = pkgs.writeText "logstash.conf" ''
+ input {
+ ${cfg.inputConfig}
+ }
+
+ filter {
+ ${cfg.filterConfig}
+ }
+
+ output {
+ ${cfg.outputConfig}
+ }
+ '';
+
+ logstashSettingsYml = pkgs.writeText "logstash.yml" cfg.extraSettings;
+
+ logstashSettingsDir = pkgs.runCommand "logstash-settings" {inherit logstashSettingsYml;} ''
+ mkdir -p $out
+ ln -s $logstashSettingsYml $out/logstash.yml
+ '';
  in

  {
@@ -45,6 +74,15 @@ in
  description = "The paths to find other logstash plugins in.";
  };
+
+ dataDir = mkOption {
+ type = types.str;
+ default = "/var/lib/logstash";
+ description = ''
+ A path to directory writable by logstash that it uses to store data.
+ Plugins will also have access to this path.
+ '';
+ };

  logLevel = mkOption {
  type = types.enum [ "debug" "info" "warn" "error" "fatal" ];
  default = "warn";
@@ -116,6 +154,19 @@ in
  '';
  };
+
+ extraSettings = mkOption {
+ type = types.lines;
+ default = "";
+ description = "Extra Logstash settings in YAML format.";
+ example = ''
+ pipeline:
+   batch:
+     size: 125
+     delay: 5
+ '';
+ };
+
  };
  };

@@ -123,31 +174,34 @@ in
  ###### implementation

  config = mkIf cfg.enable {
+ assertions = [
+ { assertion = atLeast54 -> !cfg.enableWeb;
+ message = ''
+ The logstash web interface is only available for versions older than 5.4.
+ So either set services.logstash.enableWeb = false,
+ or set services.logstash.package to an older logstash.
+ '';
+ }
+ ];
+
  systemd.services.logstash = with pkgs; {
  description = "Logstash Daemon";
  wantedBy = [ "multi-user.target" ];
  environment = { JAVA_HOME = jre; };
  path = [ pkgs.bash ];
  serviceConfig = {
- ExecStart =
- "${cfg.package}/bin/logstash agent " +
- "-w ${toString cfg.filterWorkers} " +
- ops havePluginPath "--pluginpath ${pluginPath} " +
- "${verbosityFlag} " +
- "-f ${writeText "logstash.conf" ''
- input {
- ${cfg.inputConfig}
- }
-
- filter {
- ${cfg.filterConfig}
- }
-
- output {
- ${cfg.outputConfig}
- }
- ''} " +
- ops cfg.enableWeb "-- web -a ${cfg.listenAddress} -p ${cfg.port}";
+ ExecStartPre = ''${pkgs.coreutils}/bin/mkdir -p "${cfg.dataDir}" ; ${pkgs.coreutils}/bin/chmod 700 "${cfg.dataDir}"'';
+ ExecStart = concatStringsSep " " (filter (s: stringLength s != 0) [
+ "${cfg.package}/bin/logstash"
+ (ops (!atLeast54) "agent")
+ "-w ${toString cfg.filterWorkers}"
+ (ops havePluginPath pluginsPath)
+ "${verbosityFlag}"
+ "-f ${logstashConf}"
+ (ops atLeast54 "--path.settings ${logstashSettingsDir}")
+ (ops atLeast54 "--path.data ${cfg.dataDir}")
+ (ops cfg.enableWeb "-- web -a ${cfg.listenAddress} -p ${cfg.port}")
+ ]);
  };
  };
  };
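A rough usage sketch tying the new options together (the pipeline configs are placeholders; `dataDir` is shown with its default):

```nix
{
  services.logstash = {
    enable = true;
    inputConfig  = ''stdin { }'';
    outputConfig = ''stdout { codec => rubydebug }'';
    dataDir = "/var/lib/logstash";   # the default; must be writable by logstash
    extraSettings = ''
      pipeline:
        batch:
          size: 125
    '';
  };
}
```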
|
@ -3,43 +3,129 @@
|
|||||||
with lib;
|
with lib;
|
||||||
|
|
||||||
let
|
let
|
||||||
|
|
||||||
cfg = config.services.spamassassin;
|
cfg = config.services.spamassassin;
|
||||||
|
spamassassin-local-cf = pkgs.writeText "local.cf" cfg.config;
|
||||||
|
spamassassin-init-pre = pkgs.writeText "init.pre" cfg.initPreConf;
|
||||||
|
|
||||||
|
spamdEnv = pkgs.buildEnv {
|
||||||
|
name = "spamd-env";
|
||||||
|
paths = [];
|
||||||
|
postBuild = ''
|
||||||
|
ln -sf ${spamassassin-init-pre} $out/init.pre
|
||||||
|
ln -sf ${spamassassin-local-cf} $out/local.cf
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
in
|
in
|
||||||
|
|
||||||
{
|
{
|
||||||
|
|
||||||
###### interface
|
|
||||||
|
|
||||||
options = {
|
options = {
|
||||||
|
|
||||||
services.spamassassin = {
|
services.spamassassin = {
|
||||||
|
|
||||||
enable = mkOption {
|
enable = mkOption {
|
||||||
default = false;
|
default = false;
|
||||||
description = "Whether to run the SpamAssassin daemon.";
|
description = "Whether to run the SpamAssassin daemon";
|
||||||
};
|
};
|
||||||
|
|
||||||
debug = mkOption {
|
debug = mkOption {
|
||||||
default = false;
|
default = false;
|
||||||
description = "Whether to run the SpamAssassin daemon in debug mode.";
|
description = "Whether to run the SpamAssassin daemon in debug mode";
|
||||||
};
|
};
|
||||||
|
|
||||||
|
config = mkOption {
|
||||||
|
type = types.lines;
|
||||||
|
description = ''
|
||||||
|
The SpamAssassin local.cf config
|
||||||
|
|
||||||
|
If you are using this configuration:
|
||||||
|
add_header all Status _YESNO_, score=_SCORE_ required=_REQD_ tests=_TESTS_ autolearn=_AUTOLEARN_ version=_VERSION_
|
||||||
|
|
||||||
|
Then you can Use this sieve filter:
|
||||||
|
require ["fileinto", "reject", "envelope"];
|
||||||
|
|
||||||
|
if header :contains "X-Spam-Flag" "YES" {
|
||||||
|
fileinto "spam";
|
||||||
|
}
|
||||||
|
|
||||||
|
Or this procmail filter:
|
||||||
|
:0:
|
||||||
|
* ^X-Spam-Flag: YES
|
||||||
|
/var/vpopmail/domains/lastlog.de/js/.maildir/.spam/new
|
||||||
|
|
||||||
|
To filter your messages based on the additional mail headers added by spamassassin.
|
||||||
|
'';
|
||||||
|
example = ''
|
||||||
|
#rewrite_header Subject [***** SPAM _SCORE_ *****]
|
||||||
|
required_score 5.0
|
||||||
|
use_bayes 1
|
||||||
|
bayes_auto_learn 1
|
||||||
|
add_header all Status _YESNO_, score=_SCORE_ required=_REQD_ tests=_TESTS_ autolearn=_AUTOLEARN_ version=_VERSION_
|
||||||
|
'';
|
||||||
|
default = "";
|
||||||
|
};
|
||||||
|
|
||||||
|
initPreConf = mkOption {
|
||||||
|
type = types.str;
|
||||||
|
description = "The SpamAssassin init.pre config.";
|
||||||
|
default =
|
||||||
|
''
|
||||||
|
#
|
||||||
|
# to update this list, run this command in the rules directory:
|
||||||
|
# grep 'loadplugin.*Mail::SpamAssassin::Plugin::.*' -o -h * | sort | uniq
|
||||||
|
#
|
||||||
|
|
||||||
|
#loadplugin Mail::SpamAssassin::Plugin::AccessDB
|
||||||
|
#loadplugin Mail::SpamAssassin::Plugin::AntiVirus
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::AskDNS
|
||||||
|
# loadplugin Mail::SpamAssassin::Plugin::ASN
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::AutoLearnThreshold
|
||||||
|
#loadplugin Mail::SpamAssassin::Plugin::AWL
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::Bayes
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::BodyEval
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::Check
|
||||||
|
#loadplugin Mail::SpamAssassin::Plugin::DCC
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::DKIM
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::DNSEval
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::FreeMail
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::Hashcash
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::HeaderEval
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::HTMLEval
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::HTTPSMismatch
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::ImageInfo
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::MIMEEval
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::MIMEHeader
|
||||||
|
# loadplugin Mail::SpamAssassin::Plugin::PDFInfo
|
||||||
|
#loadplugin Mail::SpamAssassin::Plugin::PhishTag
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::Pyzor
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::Razor2
|
||||||
|
# loadplugin Mail::SpamAssassin::Plugin::RelayCountry
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::RelayEval
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::ReplaceTags
|
||||||
|
# loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody
|
||||||
|
# loadplugin Mail::SpamAssassin::Plugin::Shortcircuit
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::SpamCop
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::SPF
|
||||||
|
#loadplugin Mail::SpamAssassin::Plugin::TextCat
|
||||||
|
# loadplugin Mail::SpamAssassin::Plugin::TxRep
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::URIDetail
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::URIDNSBL
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::URIEval
|
||||||
|
# loadplugin Mail::SpamAssassin::Plugin::URILocalBL
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::VBounce
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::WhiteListSubject
|
||||||
|
loadplugin Mail::SpamAssassin::Plugin::WLBLEval
|
||||||
|
'';
|
||||||
|
};
|
||||||
};
|
};
|
||||||
|
|
||||||
};
|
};
|
||||||
|
|
||||||
|
|
||||||
###### implementation
|
|
||||||
|
|
||||||
config = mkIf cfg.enable {
|
config = mkIf cfg.enable {
|
||||||
|
|
||||||
# Allow users to run 'spamc'.
|
# Allow users to run 'spamc'.
|
||||||
environment.systemPackages = [ pkgs.spamassassin ];
|
environment.systemPackages = [ pkgs.spamassassin ];
|
||||||
|
|
||||||
users.extraUsers = singleton {
|
users.extraUsers = singleton {
|
||||||
name = "spamd";
|
name = "spamd";
|
||||||
description = "Spam Assassin Daemon";
|
description = "Spam Assassin Daemon";
|
||||||
uid = config.ids.uids.spamd;
|
uid = config.ids.uids.spamd;
|
||||||
group = "spamd";
|
group = "spamd";
|
||||||
@ -50,13 +136,65 @@ in
|
|||||||
gid = config.ids.gids.spamd;
|
gid = config.ids.gids.spamd;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
systemd.services.sa-update = {
|
||||||
|
script = ''
|
||||||
|
set +e
|
||||||
|
${pkgs.su}/bin/su -s "${pkgs.bash}/bin/bash" -c "${pkgs.spamassassin}/bin/sa-update --gpghomedir=/var/lib/spamassassin/sa-update-keys/ --siteconfigpath=${spamdEnv}/" spamd
|
||||||
|
|
||||||
|
v=$?
|
||||||
|
set -e
|
||||||
|
if [ $v -gt 1 ]; then
|
||||||
|
echo "sa-update execution error"
|
||||||
|
exit $v
|
||||||
|
fi
|
||||||
|
if [ $v -eq 0 ]; then
|
||||||
|
systemctl reload spamd.service
|
||||||
|
fi
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
systemd.timers.sa-update = {
|
||||||
|
description = "sa-update-service";
|
||||||
|
partOf = [ "sa-update.service" ];
|
||||||
|
wantedBy = [ "timers.target" ];
|
||||||
|
timerConfig = {
|
||||||
|
OnCalendar = "1:*";
|
||||||
|
Persistent = true;
|
||||||
|
};
|
||||||
|
};
|
||||||
|
|
||||||
systemd.services.spamd = {
|
systemd.services.spamd = {
|
||||||
description = "Spam Assassin Server";
|
description = "Spam Assassin Server";
|
||||||
|
|
||||||
wantedBy = [ "multi-user.target" ];
|
wantedBy = [ "multi-user.target" ];
|
||||||
after = [ "network.target" ];
|
after = [ "network.target" ];
|
||||||
|
|
||||||
script = "${pkgs.spamassassin}/bin/spamd ${optionalString cfg.debug "-D"} --username=spamd --groupname=spamd --nouser-config --virtual-config-dir=/var/lib/spamassassin/user-%u --allow-tell --pidfile=/var/run/spamd.pid";
|
serviceConfig = {
|
||||||
|
ExecStart = "${pkgs.spamassassin}/bin/spamd ${optionalString cfg.debug "-D"} --username=spamd --groupname=spamd --siteconfigpath=${spamdEnv} --virtual-config-dir=/var/lib/spamassassin/user-%u --allow-tell --pidfile=/var/run/spamd.pid";
|
||||||
|
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
|
||||||
|
};
|
||||||
|
|
||||||
|
# 0 and 1 no error, exitcode > 1 means error:
|
||||||
|
# https://spamassassin.apache.org/full/3.1.x/doc/sa-update.html#exit_codes
|
||||||
|
preStart = ''
|
||||||
|
# this abstraction requires no centralized config at all
|
||||||
|
if [ -d /etc/spamassassin ]; then
|
||||||
|
echo "This spamassassin does not support global '/etc/spamassassin' folder for configuration as this would be impure. Merge your configs into 'services.spamassassin' and remove the '/etc/spamassassin' folder to make this service work. Also see 'https://github.com/NixOS/nixpkgs/pull/26470'.";
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
echo "Recreating '/var/lib/spamasassin' with creating '3.004001' (or similar) and 'sa-update-keys'"
|
||||||
|
mkdir -p /var/lib/spamassassin
|
||||||
|
chown spamd:spamd /var/lib/spamassassin -R
|
||||||
|
set +e
|
||||||
|
${pkgs.su}/bin/su -s "${pkgs.bash}/bin/bash" -c "${pkgs.spamassassin}/bin/sa-update --gpghomedir=/var/lib/spamassassin/sa-update-keys/ --siteconfigpath=${spamdEnv}/" spamd
|
||||||
|
v=$?
|
||||||
|
set -e
|
||||||
|
if [ $v -gt 1 ]; then
|
||||||
|
echo "sa-update execution error"
|
||||||
|
exit $v
|
||||||
|
fi
|
||||||
|
chown spamd:spamd /var/lib/spamassassin -R
|
||||||
|
'';
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
}
|
}
|
||||||
|
@@ -22,19 +22,9 @@ in {

  environment.systemPackages = [ pkgs.autorandr ];

- # systemd.unitPackages = [ pkgs.autorandr ];
+ systemd.packages = [ pkgs.autorandr ];

  systemd.services.autorandr = {
- unitConfig = {
- Description = "autorandr execution hook";
- After = [ "sleep.target" ];
- StartLimitInterval = "5";
- StartLimitBurst = "1";
- };
- serviceConfig = {
- ExecStart = "${pkgs.autorandr}/bin/autorandr --batch --change --default default";
- Type = "oneshot";
- RemainAfterExit = false;
- };
  wantedBy = [ "sleep.target" ];
  };

|
@@ -84,7 +84,7 @@ in {
  dataDir = if !isNull instanceCfg.dataDir then instanceCfg.dataDir else
  "/var/lib/errbot/${name}";
  in {
- after = [ "network.target" ];
+ after = [ "network-online.target" ];
  wantedBy = [ "multi-user.target" ];
  preStart = ''
  mkdir -p ${dataDir}
|
nixos/modules/services/misc/fstrim.nix (new file, 45 lines)
@@ -0,0 +1,45 @@
+ { config, lib, pkgs, ... }:
+
+ with lib;
+
+ let
+
+ cfg = config.services.fstrim;
+
+ in {
+
+ options = {
+
+ services.fstrim = {
+ enable = mkEnableOption "periodic SSD TRIM of mounted partitions in background";
+
+ interval = mkOption {
+ type = types.string;
+ default = "weekly";
+ description = ''
+ How often we run fstrim. For most desktop and server systems
+ a sufficient trimming frequency is once a week.
+
+ The format is described in
+ <citerefentry><refentrytitle>systemd.time</refentrytitle>
+ <manvolnum>7</manvolnum></citerefentry>.
+ '';
+ };
+ };
+
+ };
+
+ config = mkIf cfg.enable {
+
+ systemd.packages = [ pkgs.utillinux ];
+
+ systemd.timers.fstrim = {
+ timerConfig = {
+ OnCalendar = cfg.interval;
+ };
+ wantedBy = [ "timers.target" ];
+ };
+
+ };
+
+ }
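Enabling the new service is a one-liner; the interval accepts any systemd.time(7) calendar expression, for example:

```nix
{
  services.fstrim = {
    enable = true;
    interval = "daily";   # default is "weekly"
  };
}
```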
@@ -0,0 +1,8 @@
+ # Generated using update-dd-agent-default, please re-run after updating dd-agent. DO NOT EDIT MANUALLY.
+ [
+ "auto_conf"
+ "agent_metrics.yaml.default"
+ "disk.yaml.default"
+ "network.yaml.default"
+ "ntp.yaml.default"
+ ]
@ -16,24 +16,100 @@ let
|
|||||||
forwarder_log_file: /var/log/datadog/forwarder.log
|
forwarder_log_file: /var/log/datadog/forwarder.log
|
||||||
dogstatsd_log_file: /var/log/datadog/dogstatsd.log
|
dogstatsd_log_file: /var/log/datadog/dogstatsd.log
|
||||||
pup_log_file: /var/log/datadog/pup.log
|
pup_log_file: /var/log/datadog/pup.log
|
||||||
|
|
||||||
|
# proxy_host: my-proxy.com
|
||||||
|
# proxy_port: 3128
|
||||||
|
# proxy_user: user
|
||||||
|
# proxy_password: password
|
||||||
|
|
||||||
|
# tags: mytag0, mytag1
|
||||||
${optionalString (cfg.tags != null ) "tags: ${concatStringsSep "," cfg.tags }"}
|
${optionalString (cfg.tags != null ) "tags: ${concatStringsSep "," cfg.tags }"}
|
||||||
${cfg.extraDdConfig}
|
|
||||||
|
# collect_ec2_tags: no
|
||||||
|
# recent_point_threshold: 30
|
||||||
|
# use_mount: no
|
||||||
|
# listen_port: 17123
|
||||||
|
# graphite_listen_port: 17124
|
||||||
|
# non_local_traffic: no
|
||||||
|
# use_curl_http_client: False
|
||||||
|
# bind_host: localhost
|
||||||
|
|
||||||
|
# use_pup: no
|
||||||
|
# pup_port: 17125
|
||||||
|
# pup_interface: localhost
|
||||||
|
# pup_url: http://localhost:17125
|
||||||
|
|
||||||
|
# dogstatsd_port : 8125
|
||||||
|
# dogstatsd_interval : 10
|
||||||
|
# dogstatsd_normalize : yes
|
||||||
|
# statsd_forward_host: address_of_own_statsd_server
|
||||||
|
# statsd_forward_port: 8125
|
||||||
|
|
||||||
|
# device_blacklist_re: .*\/dev\/mapper\/lxc-box.*
|
||||||
|
|
||||||
|
# ganglia_host: localhost
|
||||||
|
# ganglia_port: 8651
|
||||||
'';
|
'';
|
||||||
|
|
||||||
etcfiles =
|
diskConfig = pkgs.writeText "disk.yaml" ''
|
||||||
map (i: { source = if builtins.hasAttr "config" i
|
init_config:
|
||||||
then pkgs.writeText "${i.name}.yaml" i.config
|
|
||||||
else "${cfg.agent}/agent/conf.d-system/${i.name}.yaml";
|
|
||||||
target = "dd-agent/conf.d/${i.name}.yaml";
|
|
||||||
}
|
|
||||||
) cfg.integrations ++
|
|
||||||
[ { source = ddConf;
|
|
||||||
target = "dd-agent/datadog.conf";
|
|
||||||
}
|
|
||||||
];
|
|
||||||
|
|
||||||
# restart triggers
|
instances:
|
||||||
etcSources = map (i: i.source) etcfiles;
|
- use_mount: no
|
||||||
|
'';
|
||||||
|
|
||||||
|
networkConfig = pkgs.writeText "network.yaml" ''
|
||||||
|
init_config:
|
||||||
|
|
||||||
|
instances:
|
||||||
|
# Network check only supports one configured instance
|
||||||
|
- collect_connection_state: false
|
||||||
|
excluded_interfaces:
|
||||||
|
- lo
|
||||||
|
- lo0
|
||||||
|
'';
|
||||||
|
|
||||||
|
postgresqlConfig = pkgs.writeText "postgres.yaml" cfg.postgresqlConfig;
|
||||||
|
nginxConfig = pkgs.writeText "nginx.yaml" cfg.nginxConfig;
|
||||||
|
mongoConfig = pkgs.writeText "mongo.yaml" cfg.mongoConfig;
|
||||||
|
jmxConfig = pkgs.writeText "jmx.yaml" cfg.jmxConfig;
|
||||||
|
processConfig = pkgs.writeText "process.yaml" cfg.processConfig;
|
||||||
|
|
||||||
|
etcfiles =
|
||||||
|
let
|
||||||
|
defaultConfd = import ./dd-agent-defaults.nix;
|
||||||
|
in (map (f: { source = "${pkgs.dd-agent}/agent/conf.d-system/${f}";
|
||||||
|
target = "dd-agent/conf.d/${f}";
|
||||||
|
}) defaultConfd) ++ [
|
||||||
|
{ source = ddConf;
|
||||||
|
target = "dd-agent/datadog.conf";
|
||||||
|
}
|
||||||
|
{ source = diskConfig;
|
||||||
|
target = "dd-agent/conf.d/disk.yaml";
|
||||||
|
}
|
||||||
|
{ source = networkConfig;
|
||||||
|
target = "dd-agent/conf.d/network.yaml";
|
||||||
|
} ] ++
|
||||||
|
(optional (cfg.postgresqlConfig != null)
|
||||||
|
{ source = postgresqlConfig;
|
||||||
|
target = "dd-agent/conf.d/postgres.yaml";
|
||||||
|
}) ++
|
||||||
|
(optional (cfg.nginxConfig != null)
|
||||||
|
{ source = nginxConfig;
|
||||||
|
target = "dd-agent/conf.d/nginx.yaml";
|
||||||
|
}) ++
|
||||||
|
(optional (cfg.mongoConfig != null)
|
||||||
|
{ source = mongoConfig;
|
||||||
|
target = "dd-agent/conf.d/mongo.yaml";
|
||||||
|
}) ++
|
||||||
|
(optional (cfg.processConfig != null)
|
||||||
|
{ source = processConfig;
|
||||||
|
target = "dd-agent/conf.d/process.yaml";
|
||||||
|
}) ++
|
||||||
|
(optional (cfg.jmxConfig != null)
|
||||||
|
{ source = jmxConfig;
|
||||||
|
target = "dd-agent/conf.d/jmx.yaml";
|
||||||
|
});
|
||||||
|
|
||||||
in {
|
in {
|
||||||
options.services.dd-agent = {
|
options.services.dd-agent = {
|
||||||
@ -63,46 +139,44 @@ in {
|
|||||||
type = types.uniq (types.nullOr types.string);
|
type = types.uniq (types.nullOr types.string);
|
||||||
};
|
};
|
||||||
|
|
||||||
agent = mkOption {
|
postgresqlConfig = mkOption {
|
||||||
description = "The dd-agent package to use. Useful when overriding the package.";
|
description = "Datadog PostgreSQL integration configuration";
|
||||||
default = pkgs.dd-agent;
|
default = null;
|
||||||
type = types.package;
|
type = types.uniq (types.nullOr types.string);
|
||||||
};
|
};
|
||||||
|
|
||||||
integrations = mkOption {
|
nginxConfig = mkOption {
|
||||||
|
description = "Datadog nginx integration configuration";
|
||||||
|
default = null;
|
||||||
|
type = types.uniq (types.nullOr types.string);
|
||||||
|
};
|
||||||
|
|
||||||
|
mongoConfig = mkOption {
|
||||||
|
description = "MongoDB integration configuration";
|
||||||
|
default = null;
|
||||||
|
type = types.uniq (types.nullOr types.string);
|
||||||
|
};
|
||||||
|
|
||||||
|
jmxConfig = mkOption {
|
||||||
|
description = "JMX integration configuration";
|
||||||
|
default = null;
|
||||||
|
type = types.uniq (types.nullOr types.string);
|
||||||
|
};
|
||||||
|
|
||||||
|
processConfig = mkOption {
|
||||||
description = ''
|
description = ''
|
||||||
Any integrations to use. Default config used if none
|
Process integration configuration
|
||||||
specified. It is currently up to the user to make sure that
|
|
||||||
the dd-agent package used has all the dependencies chosen
|
See http://docs.datadoghq.com/integrations/process/
|
||||||
integrations require in scope.
|
|
||||||
'';
|
|
||||||
type = types.listOf (types.attrsOf types.string);
|
|
||||||
default = [];
|
|
||||||
example = ''
|
|
||||||
[ { name = "elastic";
|
|
||||||
config = '''
|
|
||||||
init_config:
|
|
||||||
|
|
||||||
instances:
|
|
||||||
- url: http://localhost:9200
|
|
||||||
''';
|
|
||||||
}
|
|
||||||
{ name = "nginx"; }
|
|
||||||
{ name = "ntp"; }
|
|
||||||
{ name = "network"; }
|
|
||||||
]
|
|
||||||
'';
|
'';
|
||||||
|
default = null;
|
||||||
|
type = types.uniq (types.nullOr types.string);
|
||||||
};
|
};
|
||||||
|
|
||||||
extraDdConfig = mkOption {
|
|
||||||
description = "Extra settings to append to datadog agent config.";
|
|
||||||
default = "";
|
|
||||||
type = types.string;
|
|
||||||
};
|
|
||||||
};
|
};
|
||||||
|
|
||||||
config = mkIf cfg.enable {
|
config = mkIf cfg.enable {
|
||||||
environment.systemPackages = [ cfg.agent pkgs.sysstat pkgs.procps ];
|
environment.systemPackages = [ pkgs."dd-agent" pkgs.sysstat pkgs.procps ];
|
||||||
|
|
||||||
users.extraUsers.datadog = {
|
users.extraUsers.datadog = {
|
||||||
description = "Datadog Agent User";
|
description = "Datadog Agent User";
|
||||||
@ -116,30 +190,46 @@ in {
|
|||||||
|
|
||||||
systemd.services.dd-agent = {
|
systemd.services.dd-agent = {
|
||||||
description = "Datadog agent monitor";
|
description = "Datadog agent monitor";
|
||||||
path = [ cfg.agent pkgs.python pkgs.sysstat pkgs.procps ];
|
path = [ pkgs."dd-agent" pkgs.python pkgs.sysstat pkgs.procps ];
|
||||||
wantedBy = [ "multi-user.target" ];
|
wantedBy = [ "multi-user.target" ];
|
||||||
serviceConfig = {
|
serviceConfig = {
|
||||||
ExecStart = "${cfg.agent}/bin/dd-agent foreground";
|
ExecStart = "${pkgs.dd-agent}/bin/dd-agent foreground";
|
||||||
User = "datadog";
|
User = "datadog";
|
||||||
Group = "datadog";
|
Group = "datadog";
|
||||||
Restart = "always";
|
Restart = "always";
|
||||||
RestartSec = 2;
|
RestartSec = 2;
|
||||||
};
|
};
|
||||||
restartTriggers = [ cfg.agent ddConf ] ++ etcSources;
|
restartTriggers = [ pkgs.dd-agent ddConf diskConfig networkConfig postgresqlConfig nginxConfig mongoConfig jmxConfig processConfig ];
|
||||||
};
|
};
|
||||||
|
|
||||||
systemd.services.dd-jmxfetch = lib.mkIf (builtins.any (i: i.name == "jmx") cfg.integrations) {
|
systemd.services.dogstatsd = {
|
||||||
description = "Datadog JMX Fetcher";
|
description = "Datadog statsd";
|
||||||
path = [ cfg.agent pkgs.python pkgs.sysstat pkgs.procps pkgs.jdk ];
|
path = [ pkgs."dd-agent" pkgs.python pkgs.procps ];
|
||||||
wantedBy = [ "multi-user.target" ];
|
wantedBy = [ "multi-user.target" ];
|
||||||
serviceConfig = {
|
serviceConfig = {
|
||||||
ExecStart = "${cfg.agent}/bin/dd-jmxfetch";
|
ExecStart = "${pkgs.dd-agent}/bin/dogstatsd start";
|
||||||
|
User = "datadog";
|
||||||
|
Group = "datadog";
|
||||||
|
Type = "forking";
|
||||||
|
PIDFile = "/tmp/dogstatsd.pid";
|
||||||
|
Restart = "always";
|
||||||
|
RestartSec = 2;
|
||||||
|
};
|
||||||
|
restartTriggers = [ pkgs.dd-agent ddConf diskConfig networkConfig postgresqlConfig nginxConfig mongoConfig jmxConfig processConfig ];
|
||||||
|
};
|
||||||
|
|
||||||
|
systemd.services.dd-jmxfetch = lib.mkIf (cfg.jmxConfig != null) {
|
||||||
|
description = "Datadog JMX Fetcher";
|
||||||
|
path = [ pkgs."dd-agent" pkgs.python pkgs.sysstat pkgs.procps pkgs.jdk ];
|
||||||
|
wantedBy = [ "multi-user.target" ];
|
||||||
|
serviceConfig = {
|
||||||
|
ExecStart = "${pkgs.dd-agent}/bin/dd-jmxfetch";
|
||||||
User = "datadog";
|
User = "datadog";
|
||||||
Group = "datadog";
|
Group = "datadog";
|
||||||
Restart = "always";
|
Restart = "always";
|
||||||
RestartSec = 2;
|
RestartSec = 2;
|
||||||
};
|
};
|
||||||
restartTriggers = [ cfg.agent ddConf ] ++ etcSources;
|
restartTriggers = [ pkgs.dd-agent ddConf diskConfig networkConfig postgresqlConfig nginxConfig mongoConfig jmxConfig ];
|
||||||
};
|
};
|
||||||
|
|
||||||
environment.etc = etcfiles;
|
environment.etc = etcfiles;
|
||||||
|
9
nixos/modules/services/monitoring/dd-agent/update-dd-agent-defaults
Executable file
9
nixos/modules/services/monitoring/dd-agent/update-dd-agent-defaults
Executable file
@ -0,0 +1,9 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
dd=$(nix-build --no-out-link -A dd-agent ../../../..)
|
||||||
|
echo '# Generated using update-dd-agent-default, please re-run after updating dd-agent. DO NOT EDIT MANUALLY.' > dd-agent-defaults.nix
|
||||||
|
echo '[' >> dd-agent-defaults.nix
|
||||||
|
echo ' "auto_conf"' >> dd-agent-defaults.nix
|
||||||
|
for f in $(find $dd/agent/conf.d-system -maxdepth 1 -type f | grep -v '\.example' | sort); do
|
||||||
|
echo " \"$(basename $f)\"" >> dd-agent-defaults.nix
|
||||||
|
done
|
||||||
|
echo ']' >> dd-agent-defaults.nix
|
@ -1,185 +0,0 @@
|
|||||||
{ config, lib, pkgs, ... }:
|
|
||||||
|
|
||||||
with lib;
|
|
||||||
|
|
||||||
let
|
|
||||||
|
|
||||||
cfg = config.services.aiccu;
|
|
||||||
notNull = a: ! isNull a;
|
|
||||||
configFile = pkgs.writeText "aiccu.conf" ''
|
|
||||||
${if notNull cfg.username then "username " + cfg.username else ""}
|
|
||||||
${if notNull cfg.password then "password " + cfg.password else ""}
|
|
||||||
protocol ${cfg.protocol}
|
|
||||||
server ${cfg.server}
|
|
||||||
ipv6_interface ${cfg.interfaceName}
|
|
||||||
verbose ${boolToString cfg.verbose}
|
|
||||||
daemonize true
|
|
||||||
automatic ${boolToString cfg.automatic}
|
|
||||||
requiretls ${boolToString cfg.requireTLS}
|
|
||||||
pidfile ${cfg.pidFile}
|
|
||||||
defaultroute ${boolToString cfg.defaultRoute}
|
|
||||||
${if notNull cfg.setupScript then cfg.setupScript else ""}
|
|
||||||
makebeats ${boolToString cfg.makeHeartBeats}
|
|
||||||
noconfigure ${boolToString cfg.noConfigure}
|
|
||||||
behindnat ${boolToString cfg.behindNAT}
|
|
||||||
${if cfg.localIPv4Override then "local_ipv4_override" else ""}
|
|
||||||
'';
|
|
||||||
|
|
||||||
in {
|
|
||||||
|
|
||||||
options = {
|
|
||||||
|
|
||||||
services.aiccu = {
|
|
||||||
|
|
||||||
enable = mkOption {
|
|
||||||
type = types.bool;
|
|
||||||
default = false;
|
|
||||||
description = "Enable aiccu IPv6 over IPv4 SiXXs tunnel";
|
|
||||||
};
|
|
||||||
|
|
||||||
username = mkOption {
|
|
||||||
type = with types; nullOr str;
|
|
||||||
default = null;
|
|
||||||
example = "FAB5-SIXXS";
|
|
||||||
description = "Login credential";
|
|
||||||
};
|
|
||||||
|
|
||||||
password = mkOption {
|
|
||||||
type = with types; nullOr str;
|
|
||||||
default = null;
|
|
||||||
example = "TmAkRbBEr0";
|
|
||||||
description = "Login credential";
|
|
||||||
};
|
|
||||||
|
|
||||||
protocol = mkOption {
|
|
||||||
type = types.str;
|
|
||||||
default = "tic";
|
|
||||||
example = "tic|tsp|l2tp";
|
|
||||||
description = "Protocol to use for setting up the tunnel";
|
|
||||||
};
|
|
||||||
|
|
||||||
server = mkOption {
|
|
||||||
type = types.str;
|
|
||||||
default = "tic.sixxs.net";
|
|
||||||
example = "enabled.ipv6server.net";
|
|
||||||
description = "Server to use for setting up the tunnel";
|
|
||||||
};
|
|
||||||
|
|
||||||
interfaceName = mkOption {
|
|
||||||
type = types.str;
|
|
||||||
default = "aiccu";
|
|
||||||
example = "sixxs";
|
|
||||||
description = ''
|
|
||||||
The name of the interface that will be used as a tunnel interface.
|
|
||||||
On *BSD the ipv6_interface should be set to gifX (eg gif0) for proto-41 tunnels
|
|
||||||
or tunX (eg tun0) for AYIYA tunnels.
|
|
||||||
'';
|
|
||||||
};
|
|
||||||
|
|
||||||
tunnelID = mkOption {
|
|
||||||
type = with types; nullOr str;
|
|
||||||
default = null;
|
|
||||||
example = "T12345";
|
|
||||||
description = "The tunnel id to use, only required when there are multiple tunnels in the list";
|
|
||||||
};
|
|
||||||
|
|
||||||
verbose = mkOption {
|
|
||||||
type = types.bool;
|
|
||||||
default = false;
|
|
||||||
description = "Be verbose?";
|
|
||||||
};
|
|
||||||
|
|
||||||
automatic = mkOption {
|
|
||||||
type = types.bool;
|
|
||||||
default = true;
|
|
||||||
description = "Automatic Login and Tunnel activation";
|
|
||||||
};
|
|
||||||
|
|
||||||
requireTLS = mkOption {
|
|
||||||
type = types.bool;
|
|
||||||
default = false;
|
|
||||||
description = ''
|
|
||||||
When set to true, if TLS is not supported on the server
|
|
||||||
the TIC transaction will fail.
|
|
||||||
When set to false, it will try a starttls, when that is
|
|
||||||
not supported it will continue.
|
|
||||||
In any case if AICCU is build with TLS support it will
|
|
||||||
try to do a 'starttls' to the TIC server to see if that
|
|
||||||
is supported.
|
|
||||||
'';
|
|
||||||
};
|
|
||||||
|
|
||||||
pidFile = mkOption {
|
|
||||||
type = types.path;
|
|
||||||
default = "/run/aiccu.pid";
|
|
||||||
example = "/var/lib/aiccu/aiccu.pid";
|
|
||||||
description = "Location of PID File";
|
|
||||||
};
|
|
||||||
|
|
||||||
defaultRoute = mkOption {
|
|
||||||
type = types.bool;
|
|
||||||
default = true;
|
|
||||||
description = "Add a default route";
|
|
||||||
};
|
|
||||||
|
|
||||||
setupScript = mkOption {
|
|
||||||
type = with types; nullOr path;
|
|
||||||
default = null;
|
|
||||||
example = "/var/lib/aiccu/fix-subnets.sh";
|
|
||||||
description = "Script to run after setting up the interfaces";
|
|
||||||
};
|
|
||||||
|
|
||||||
makeHeartBeats = mkOption {
|
|
||||||
type = types.bool;
|
|
||||||
default = true;
|
|
||||||
description = ''
|
|
||||||
In general you don't want to turn this off
|
|
||||||
Of course only applies to AYIYA and heartbeat tunnels not to static ones
|
|
||||||
'';
|
|
||||||
};
|
|
||||||
|
|
||||||
noConfigure = mkOption {
|
|
||||||
type = types.bool;
|
|
||||||
default = false;
|
|
||||||
description = "Don't configure anything";
|
|
||||||
};
|
|
||||||
|
|
||||||
behindNAT = mkOption {
|
|
||||||
type = types.bool;
|
|
||||||
default = false;
|
|
||||||
description = "Notify the user that a NAT-kind network is detected";
|
|
||||||
};
|
|
||||||
|
|
||||||
localIPv4Override = mkOption {
|
|
||||||
type = types.bool;
|
|
||||||
default = false;
|
|
||||||
description = ''
|
|
||||||
Overrides the IPv4 parameter received from TIC
|
|
||||||
This allows one to configure a NAT into "DMZ" mode and then
|
|
||||||
forwarding the proto-41 packets to an internal host.
|
|
||||||
|
|
||||||
This is only needed for static proto-41 tunnels!
|
|
||||||
AYIYA and heartbeat tunnels don't require this.
|
|
||||||
'';
|
|
||||||
};
|
|
||||||
|
|
||||||
};
|
|
||||||
};
|
|
||||||
|
|
||||||
config = mkIf cfg.enable {
|
|
||||||
|
|
||||||
systemd.services.aiccu = {
|
|
||||||
description = "Automatic IPv6 Connectivity Client Utility";
|
|
||||||
after = [ "network.target" ];
|
|
||||||
wantedBy = [ "multi-user.target" ];
|
|
||||||
serviceConfig = {
|
|
||||||
ExecStart = "${pkgs.aiccu}/bin/aiccu start ${configFile}";
|
|
||||||
ExecStop = "${pkgs.aiccu}/bin/aiccu stop";
|
|
||||||
Type = "forking";
|
|
||||||
PIDFile = cfg.pidFile;
|
|
||||||
Restart = "no"; # aiccu startup errors are serious, do not pound the tic server or be banned.
|
|
||||||
};
|
|
||||||
};
|
|
||||||
|
|
||||||
};
|
|
||||||
}
|
|
@ -10,12 +10,17 @@ let
|
|||||||
|
|
||||||
confFile = pkgs.writeText "named.conf"
|
confFile = pkgs.writeText "named.conf"
|
||||||
''
|
''
|
||||||
|
include "/etc/bind/rndc.key";
|
||||||
|
controls {
|
||||||
|
inet 127.0.0.1 allow {localhost;} keys {"rndc-key";};
|
||||||
|
};
|
||||||
|
|
||||||
acl cachenetworks { ${concatMapStrings (entry: " ${entry}; ") cfg.cacheNetworks} };
|
acl cachenetworks { ${concatMapStrings (entry: " ${entry}; ") cfg.cacheNetworks} };
|
||||||
acl badnetworks { ${concatMapStrings (entry: " ${entry}; ") cfg.blockedNetworks} };
|
acl badnetworks { ${concatMapStrings (entry: " ${entry}; ") cfg.blockedNetworks} };
|
||||||
|
|
||||||
options {
|
options {
|
||||||
listen-on {any;};
|
listen-on { ${concatMapStrings (entry: " ${entry}; ") cfg.listenOn} };
|
||||||
listen-on-v6 {any;};
|
listen-on-v6 { ${concatMapStrings (entry: " ${entry}; ") cfg.listenOnIpv6} };
|
||||||
allow-query { cachenetworks; };
|
allow-query { cachenetworks; };
|
||||||
blackhole { badnetworks; };
|
blackhole { badnetworks; };
|
||||||
forward first;
|
forward first;
|
||||||
@@ -96,6 +101,22 @@ in
  ";
  };
+
+ listenOn = mkOption {
+ default = ["any"];
+ type = types.listOf types.str;
+ description = "
+ Interfaces to listen on.
+ ";
+ };
+
+ listenOnIpv6 = mkOption {
+ default = ["any"];
+ type = types.listOf types.str;
+ description = "
+ Ipv6 interfaces to listen on.
+ ";
+ };

  zones = mkOption {
  default = [];
  description = "
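With the new options, the resolver can be restricted to specific addresses instead of the previously hard-coded `any`; a minimal sketch, assuming the module lives under `services.bind`:

```nix
{
  services.bind = {
    enable = true;
    cacheNetworks = [ "127.0.0.0/8" ];
    listenOn      = [ "127.0.0.1" ];
    listenOnIpv6  = [ "::1" ];
  };
}
```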
@ -151,11 +172,21 @@ in
|
|||||||
wantedBy = [ "multi-user.target" ];
|
wantedBy = [ "multi-user.target" ];
|
||||||
|
|
||||||
preStart = ''
|
preStart = ''
|
||||||
|
mkdir -m 0755 -p /etc/bind
|
||||||
|
if ! [ -f "/etc/bind/rndc.key" ]; then
|
||||||
|
${pkgs.bind.out}/sbin/rndc-confgen -r /dev/urandom -c /etc/bind/rndc.key -u ${bindUser} -a -A hmac-sha256 2>/dev/null
|
||||||
|
fi
|
||||||
|
|
||||||
${pkgs.coreutils}/bin/mkdir -p /var/run/named
|
${pkgs.coreutils}/bin/mkdir -p /var/run/named
|
||||||
chown ${bindUser} /var/run/named
|
chown ${bindUser} /var/run/named
|
||||||
'';
|
'';
|
||||||
|
|
||||||
script = "${pkgs.bind.out}/sbin/named -u ${bindUser} ${optionalString cfg.ipv4Only "-4"} -c ${cfg.configFile} -f";
|
serviceConfig = {
|
||||||
|
ExecStart = "${pkgs.bind.out}/sbin/named -u ${bindUser} ${optionalString cfg.ipv4Only "-4"} -c ${cfg.configFile} -f";
|
||||||
|
ExecReload = "${pkgs.bind.out}/sbin/rndc -k '/etc/bind/rndc.key' reload";
|
||||||
|
ExecStop = "${pkgs.bind.out}/sbin/rndc -k '/etc/bind/rndc.key' stop";
|
||||||
|
};
|
||||||
|
|
||||||
unitConfig.Documentation = "man:named(8)";
|
unitConfig.Documentation = "man:named(8)";
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
|
@ -5,110 +5,122 @@ with lib;
|
|||||||
let
|
let
|
||||||
|
|
||||||
cfg = config.services.cntlm;
|
cfg = config.services.cntlm;
|
||||||
uid = config.ids.uids.cntlm;
|
|
||||||
|
configFile = if cfg.configText != "" then
|
||||||
|
pkgs.writeText "cntlm.conf" ''
|
||||||
|
${cfg.configText}
|
||||||
|
''
|
||||||
|
else
|
||||||
|
pkgs.writeText "lighttpd.conf" ''
|
||||||
|
# Cntlm Authentication Proxy Configuration
|
||||||
|
Username ${cfg.username}
|
||||||
|
Domain ${cfg.domain}
|
||||||
|
Password ${cfg.password}
|
||||||
|
${optionalString (cfg.netbios_hostname != "") "Workstation ${cfg.netbios_hostname}"}
|
||||||
|
${concatMapStrings (entry: "Proxy ${entry}\n") cfg.proxy}
|
||||||
|
${optionalString (cfg.noproxy != []) "NoProxy ${concatStringsSep ", " cfg.noproxy}"}
|
||||||
|
|
||||||
|
${concatMapStrings (port: ''
|
||||||
|
Listen ${toString port}
|
||||||
|
'') cfg.port}
|
||||||
|
|
||||||
|
${cfg.extraConfig}
|
||||||
|
'';
|
||||||
|
|
||||||
in
|
in
|
||||||
|
|
||||||
{
|
{
|
||||||
|
|
||||||
options = {
|
options.services.cntlm = {
|
||||||
|
|
||||||
services.cntlm = {
|
enable = mkOption {
|
||||||
|
default = false;
|
||||||
|
description = ''
|
||||||
|
Whether to enable the cntlm, which start a local proxy.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
enable = mkOption {
|
username = mkOption {
|
||||||
default = false;
|
description = ''
|
||||||
description = ''
|
Proxy account name, without the possibility to include domain name ('at' sign is interpreted literally).
|
||||||
Whether to enable the cntlm, which start a local proxy.
|
'';
|
||||||
'';
|
};
|
||||||
};
|
|
||||||
|
|
||||||
username = mkOption {
|
domain = mkOption {
|
||||||
description = ''
|
description = ''Proxy account domain/workgroup name.'';
|
||||||
Proxy account name, without the possibility to include domain name ('at' sign is interpreted literally).
|
};
|
||||||
'';
|
|
||||||
};
|
|
||||||
|
|
||||||
domain = mkOption {
|
password = mkOption {
|
||||||
description = ''Proxy account domain/workgroup name.'';
|
default = "/etc/cntlm.password";
|
||||||
};
|
type = types.str;
|
||||||
|
description = ''Proxy account password. Note: use chmod 0600 on /etc/cntlm.password for security.'';
|
||||||
|
};
|
||||||
|
|
||||||
password = mkOption {
|
netbios_hostname = mkOption {
|
||||||
default = "/etc/cntlm.password";
|
type = types.str;
|
||||||
type = types.str;
|
default = "";
|
||||||
description = ''Proxy account password. Note: use chmod 0600 on /etc/cntlm.password for security.'';
|
description = ''
|
||||||
};
|
The hostname of your machine.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
netbios_hostname = mkOption {
|
proxy = mkOption {
|
||||||
type = types.str;
|
description = ''
|
||||||
description = ''
|
A list of NTLM/NTLMv2 authenticating HTTP proxies.
|
||||||
The hostname of your machine.
|
|
||||||
'';
|
|
||||||
};
|
|
||||||
|
|
||||||
proxy = mkOption {
|
Parent proxy, which requires authentication. The same as proxy on the command-line, can be used more than once to specify unlimited
|
||||||
description = ''
|
number of proxies. Should one proxy fail, cntlm automatically moves on to the next one. The connect request fails only if the whole
|
||||||
A list of NTLM/NTLMv2 authenticating HTTP proxies.
|
list of proxies is scanned and (for each request) and found to be invalid. Command-line takes precedence over the configuration file.
|
||||||
|
'';
|
||||||
|
example = [ "proxy.example.com:81" ];
|
||||||
|
};
|
||||||
|
|
||||||
Parent proxy, which requires authentication. The same as proxy on the command-line, can be used more than once to specify unlimited
|
noproxy = mkOption {
|
||||||
number of proxies. Should one proxy fail, cntlm automatically moves on to the next one. The connect request fails only if the whole
|
description = ''
|
||||||
list of proxies is scanned and (for each request) and found to be invalid. Command-line takes precedence over the configuration file.
|
A list of domains where the proxy is skipped.
|
||||||
'';
|
'';
|
||||||
};
|
default = [];
|
||||||
|
example = [ "*.example.com" "example.com" ];
|
||||||
|
};
|
||||||
|
|
||||||
port = mkOption {
|
port = mkOption {
|
||||||
default = [3128];
|
default = [3128];
|
||||||
description = "Specifies on which ports the cntlm daemon listens.";
|
description = "Specifies on which ports the cntlm daemon listens.";
|
||||||
};
|
};
|
||||||
|
|
||||||
extraConfig = mkOption {
|
extraConfig = mkOption {
|
||||||
type = types.lines;
|
type = types.lines;
|
||||||
default = "";
|
default = "";
|
||||||
description = "Verbatim contents of <filename>cntlm.conf</filename>.";
|
description = "Additional config appended to the end of the generated <filename>cntlm.conf</filename>.";
|
||||||
};
|
};
|
||||||
|
|
||||||
|
configText = mkOption {
|
||||||
|
type = types.lines;
|
||||||
|
default = "";
|
||||||
|
description = "Verbatim contents of <filename>cntlm.conf</filename>.";
|
||||||
};
|
};
|
||||||
|
|
||||||
};
|
};
|
||||||
|
|
||||||
|
|
||||||
###### implementation
|
###### implementation
|
||||||
|
|
||||||
config = mkIf config.services.cntlm.enable {
|
config = mkIf cfg.enable {
|
||||||
systemd.services.cntlm = {
|
systemd.services.cntlm = {
|
||||||
description = "CNTLM is an NTLM / NTLM Session Response / NTLMv2 authenticating HTTP proxy";
|
description = "CNTLM is an NTLM / NTLM Session Response / NTLMv2 authenticating HTTP proxy";
|
||||||
after = [ "network.target" ];
|
after = [ "network.target" ];
|
||||||
wantedBy = [ "multi-user.target" ];
|
wantedBy = [ "multi-user.target" ];
|
||||||
serviceConfig = {
|
serviceConfig = {
|
||||||
Type = "forking";
|
|
||||||
User = "cntlm";
|
User = "cntlm";
|
||||||
ExecStart = ''
|
ExecStart = ''
|
||||||
${pkgs.cntlm}/bin/cntlm -U cntlm \
|
${pkgs.cntlm}/bin/cntlm -U cntlm -c ${configFile} -v -f
|
||||||
-c ${pkgs.writeText "cntlm_config" cfg.extraConfig}
|
|
||||||
'';
|
'';
|
||||||
};
|
};
|
||||||
};
|
|
||||||
|
|
||||||
services.cntlm.netbios_hostname = mkDefault config.networking.hostName;
|
|
||||||
|
|
||||||
users.extraUsers.cntlm = {
|
|
||||||
name = "cntlm";
|
|
||||||
description = "cntlm system-wide daemon";
|
|
||||||
home = "/var/empty";
|
|
||||||
};
|
};
|
||||||
|
|
||||||
services.cntlm.extraConfig =
|
users.extraUsers.cntlm = {
|
||||||
''
|
name = "cntlm";
|
||||||
# Cntlm Authentication Proxy Configuration
|
description = "cntlm system-wide daemon";
|
||||||
Username ${cfg.username}
|
isSystemUser = true;
|
||||||
Domain ${cfg.domain}
|
};
|
||||||
Password ${cfg.password}
|
|
||||||
Workstation ${cfg.netbios_hostname}
|
|
||||||
${concatMapStrings (entry: "Proxy ${entry}\n") cfg.proxy}
|
|
||||||
|
|
||||||
${concatMapStrings (port: ''
|
|
||||||
Listen ${toString port}
|
|
||||||
'') cfg.port}
|
|
||||||
'';
|
|
||||||
};
|
};
|
||||||
|
|
||||||
}
|
}
|
||||||
|
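For orientation, a minimal sketch of how the rewritten cntlm module could be used from configuration.nix — the account name, domain, and proxy host below are placeholders for illustration, not values taken from this change:

```nix
{
  services.cntlm = {
    enable   = true;
    username = "jdoe";                    # placeholder proxy account
    domain   = "EXAMPLE";                 # placeholder workgroup
    password = "/etc/cntlm.password";     # keep this file chmod 0600, as the option text advises
    proxy    = [ "proxy.example.com:81" ];
    port     = [ 3128 ];
  };
}
```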
@ -17,7 +17,7 @@ let
|
|||||||
host = ${cfg.dns.address}
|
host = ${cfg.dns.address}
|
||||||
port = ${toString cfg.dns.port}
|
port = ${toString cfg.dns.port}
|
||||||
oldDNSMethod = NO_OLD_DNS
|
oldDNSMethod = NO_OLD_DNS
|
||||||
externalIP = ${cfg.dns.address}
|
externalIP = ${cfg.dns.externalAddress}
|
||||||
|
|
||||||
[http]
|
[http]
|
||||||
host = ${cfg.api.hostname}
|
host = ${cfg.api.hostname}
|
||||||
@ -47,8 +47,18 @@ in
|
|||||||
type = types.str;
|
type = types.str;
|
||||||
default = "127.0.0.1";
|
default = "127.0.0.1";
|
||||||
description = ''
|
description = ''
|
||||||
The IP address that will be used to reach this machine.
|
The IP address the DNSChain resolver will bind to.
|
||||||
Leave this unchanged if you do not wish to directly expose the DNSChain resolver.
|
Leave this unchanged if you do not wish to directly expose the resolver.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
dns.externalAddress = mkOption {
|
||||||
|
type = types.str;
|
||||||
|
default = cfg.dns.address;
|
||||||
|
description = ''
|
||||||
|
The IP address used by clients to reach the resolver and the value of
|
||||||
|
the <literal>namecoin.dns</literal> record. Set this in case the bind address
|
||||||
|
is not the actual IP address (e.g. the machine is behind a NAT).
|
||||||
'';
|
'';
|
||||||
};
|
};
|
||||||
|
|
||||||
|
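A sketch of how the new externalAddress option might be set for a resolver behind NAT — the services.dnschain option path and both addresses are assumptions for illustration, not values from this hunk:

```nix
{
  services.dnschain.dns = {
    address = "10.0.0.5";             # bind address inside the NAT (assumed)
    externalAddress = "203.0.113.7";  # public IP published in the namecoin.dns record (assumed)
  };
}
```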
@ -164,7 +164,7 @@ in
|
|||||||
path = [ pkgs.hostapd ];
|
path = [ pkgs.hostapd ];
|
||||||
wantedBy = [ "network.target" ];
|
wantedBy = [ "network.target" ];
|
||||||
|
|
||||||
after = [ "${cfg.interface}-cfg.service" "nat.service" "bind.service" "dhcpd.service"];
|
after = [ "${cfg.interface}-cfg.service" "nat.service" "bind.service" "dhcpd.service" "sys-subsystem-net-devices-${cfg.interface}.device" ];
|
||||||
|
|
||||||
serviceConfig =
|
serviceConfig =
|
||||||
{ ExecStart = "${pkgs.hostapd}/bin/hostapd ${configFile}";
|
{ ExecStart = "${pkgs.hostapd}/bin/hostapd ${configFile}";
|
||||||
|
@ -811,6 +811,7 @@ in
|
|||||||
|
|
||||||
serviceConfig = {
|
serviceConfig = {
|
||||||
ExecStart = "${nsdPkg}/sbin/nsd -d -c ${nsdEnv}/nsd.conf";
|
ExecStart = "${nsdPkg}/sbin/nsd -d -c ${nsdEnv}/nsd.conf";
|
||||||
|
StandardError = "null";
|
||||||
PIDFile = pidFile;
|
PIDFile = pidFile;
|
||||||
Restart = "always";
|
Restart = "always";
|
||||||
RestartSec = "4s";
|
RestartSec = "4s";
|
||||||
|
268
nixos/modules/services/networking/resilio.nix
Normal file
@ -0,0 +1,268 @@
|
|||||||
|
{ config, lib, pkgs, ... }:
|
||||||
|
|
||||||
|
with lib;
|
||||||
|
|
||||||
|
let
|
||||||
|
cfg = config.services.resilio;
|
||||||
|
|
||||||
|
resilioSync = pkgs.resilio-sync;
|
||||||
|
|
||||||
|
sharedFoldersRecord = map (entry: {
|
||||||
|
secret = entry.secret;
|
||||||
|
dir = entry.directory;
|
||||||
|
|
||||||
|
use_relay_server = entry.useRelayServer;
|
||||||
|
use_tracker = entry.useTracker;
|
||||||
|
use_dht = entry.useDHT;
|
||||||
|
|
||||||
|
search_lan = entry.searchLAN;
|
||||||
|
use_sync_trash = entry.useSyncTrash;
|
||||||
|
known_hosts = knownHosts;
|
||||||
|
}) cfg.sharedFolders;
|
||||||
|
|
||||||
|
configFile = pkgs.writeText "config.json" (builtins.toJSON ({
|
||||||
|
device_name = cfg.deviceName;
|
||||||
|
storage_path = cfg.storagePath;
|
||||||
|
listening_port = cfg.listeningPort;
|
||||||
|
use_gui = false;
|
||||||
|
check_for_updates = cfg.checkForUpdates;
|
||||||
|
use_upnp = cfg.useUpnp;
|
||||||
|
download_limit = cfg.downloadLimit;
|
||||||
|
upload_limit = cfg.uploadLimit;
|
||||||
|
lan_encrypt_data = cfg.encryptLAN;
|
||||||
|
} // optionalAttrs cfg.enableWebUI {
|
||||||
|
webui = { listen = "${cfg.httpListenAddr}:${toString cfg.httpListenPort}"; } //
|
||||||
|
(optionalAttrs (cfg.httpLogin != "") { login = cfg.httpLogin; }) //
|
||||||
|
(optionalAttrs (cfg.httpPass != "") { password = cfg.httpPass; }) //
|
||||||
|
(optionalAttrs (cfg.apiKey != "") { api_key = cfg.apiKey; }) //
|
||||||
|
(optionalAttrs (cfg.directoryRoot != "") { directory_root = cfg.directoryRoot; });
|
||||||
|
} // optionalAttrs (sharedFoldersRecord != []) {
|
||||||
|
shared_folders = sharedFoldersRecord;
|
||||||
|
}));
|
||||||
|
|
||||||
|
in
|
||||||
|
{
|
||||||
|
options = {
|
||||||
|
services.resilio = {
|
||||||
|
enable = mkOption {
|
||||||
|
type = types.bool;
|
||||||
|
default = false;
|
||||||
|
description = ''
|
||||||
|
If enabled, start the Resilio Sync daemon. Once enabled, you can
|
||||||
|
interact with the service through the Web UI, or configure it in your
|
||||||
|
NixOS configuration. Enabling the <literal>resilio</literal> service
|
||||||
|
also installs a systemd user unit which can be used to start
|
||||||
|
user-specific copies of the daemon. Once installed, you can use
|
||||||
|
<literal>systemctl --user start resilio</literal> as your user to start
|
||||||
|
the daemon using the configuration file located at
|
||||||
|
<literal>$HOME/.config/resilio-sync/config.json</literal>.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
deviceName = mkOption {
|
||||||
|
type = types.str;
|
||||||
|
example = "Voltron";
|
||||||
|
default = config.networking.hostName;
|
||||||
|
description = ''
|
||||||
|
Name of the Resilio Sync device.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
listeningPort = mkOption {
|
||||||
|
type = types.int;
|
||||||
|
default = 0;
|
||||||
|
example = 44444;
|
||||||
|
description = ''
|
||||||
|
Listening port. Defaults to 0 which randomizes the port.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
checkForUpdates = mkOption {
|
||||||
|
type = types.bool;
|
||||||
|
default = true;
|
||||||
|
description = ''
|
||||||
|
Determines whether to check for updates and alert the user
|
||||||
|
about them in the UI.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
useUpnp = mkOption {
|
||||||
|
type = types.bool;
|
||||||
|
default = true;
|
||||||
|
description = ''
|
||||||
|
Use Universal Plug-n-Play (UPnP).
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
downloadLimit = mkOption {
|
||||||
|
type = types.int;
|
||||||
|
default = 0;
|
||||||
|
example = 1024;
|
||||||
|
description = ''
|
||||||
|
Download speed limit. 0 is unlimited (default).
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
uploadLimit = mkOption {
|
||||||
|
type = types.int;
|
||||||
|
default = 0;
|
||||||
|
example = 1024;
|
||||||
|
description = ''
|
||||||
|
Upload speed limit. 0 is unlimited (default).
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
httpListenAddr = mkOption {
|
||||||
|
type = types.str;
|
||||||
|
default = "0.0.0.0";
|
||||||
|
example = "1.2.3.4";
|
||||||
|
description = ''
|
||||||
|
HTTP address to bind to.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
httpListenPort = mkOption {
|
||||||
|
type = types.int;
|
||||||
|
default = 9000;
|
||||||
|
description = ''
|
||||||
|
HTTP port to bind on.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
httpLogin = mkOption {
|
||||||
|
type = types.str;
|
||||||
|
example = "allyourbase";
|
||||||
|
default = "";
|
||||||
|
description = ''
|
||||||
|
HTTP web login username.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
httpPass = mkOption {
|
||||||
|
type = types.str;
|
||||||
|
example = "arebelongtous";
|
||||||
|
default = "";
|
||||||
|
description = ''
|
||||||
|
HTTP web login password.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
encryptLAN = mkOption {
|
||||||
|
type = types.bool;
|
||||||
|
default = true;
|
||||||
|
description = "Encrypt LAN data.";
|
||||||
|
};
|
||||||
|
|
||||||
|
enableWebUI = mkOption {
|
||||||
|
type = types.bool;
|
||||||
|
default = false;
|
||||||
|
description = ''
|
||||||
|
Enable Web UI for administration. Bound to the specified
|
||||||
|
<literal>httpListenAddress</literal> and
|
||||||
|
<literal>httpListenPort</literal>.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
storagePath = mkOption {
|
||||||
|
type = types.path;
|
||||||
|
default = "/var/lib/resilio-sync/";
|
||||||
|
description = ''
|
||||||
|
Where BitTorrent Sync will store its database files (containing
|
||||||
|
things like username info and licenses). Generally, you should not
|
||||||
|
need to ever change this.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
apiKey = mkOption {
|
||||||
|
type = types.str;
|
||||||
|
default = "";
|
||||||
|
description = "API key, which enables the developer API.";
|
||||||
|
};
|
||||||
|
|
||||||
|
directoryRoot = mkOption {
|
||||||
|
type = types.str;
|
||||||
|
default = "";
|
||||||
|
example = "/media";
|
||||||
|
description = "Default directory to add folders in the web UI.";
|
||||||
|
};
|
||||||
|
|
||||||
|
sharedFolders = mkOption {
|
||||||
|
default = [];
|
||||||
|
example =
|
||||||
|
[ { secret = "AHMYFPCQAHBM7LQPFXQ7WV6Y42IGUXJ5Y";
|
||||||
|
directory = "/home/user/sync_test";
|
||||||
|
useRelayServer = true;
|
||||||
|
useTracker = true;
|
||||||
|
useDHT = false;
|
||||||
|
searchLAN = true;
|
||||||
|
useSyncTrash = true;
|
||||||
|
knownHosts = [
|
||||||
|
"192.168.1.2:4444"
|
||||||
|
"192.168.1.3:4444"
|
||||||
|
];
|
||||||
|
}
|
||||||
|
];
|
||||||
|
description = ''
|
||||||
|
Shared folder list. If enabled, web UI must be
|
||||||
|
disabled. Secrets can be generated using <literal>rslsync
|
||||||
|
--generate-secret</literal>. Note that this secret will be
|
||||||
|
put inside the Nix store, so it is realistically not very
|
||||||
|
secret.
|
||||||
|
|
||||||
|
If you would like to be able to modify the contents of this
|
||||||
|
directories, it is recommended that you make your user a
|
||||||
|
member of the <literal>resilio</literal> group.
|
||||||
|
|
||||||
|
Directories in this list should be in the
|
||||||
|
<literal>resilio</literal> group, and that group must have
|
||||||
|
write access to the directory. It is also recommended that
|
||||||
|
<literal>chmod g+s</literal> is applied to the directory
|
||||||
|
so that any sub directories created will also belong to
|
||||||
|
the <literal>resilio</literal> group. Also,
|
||||||
|
<literal>setfacl -d -m group:resilio:rwx</literal> and
|
||||||
|
<literal>setfacl -m group:resilio:rwx</literal> should also
|
||||||
|
be applied so that the sub directories are writable by
|
||||||
|
the group.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
};
|
||||||
|
};
|
||||||
|
|
||||||
|
config = mkIf cfg.enable {
|
||||||
|
assertions =
|
||||||
|
[ { assertion = cfg.deviceName != "";
|
||||||
|
message = "Device name cannot be empty.";
|
||||||
|
}
|
||||||
|
{ assertion = cfg.enableWebUI -> cfg.sharedFolders == [];
|
||||||
|
message = "If using shared folders, the web UI cannot be enabled.";
|
||||||
|
}
|
||||||
|
{ assertion = cfg.apiKey != "" -> cfg.enableWebUI;
|
||||||
|
message = "If you're using an API key, you must enable the web server.";
|
||||||
|
}
|
||||||
|
];
|
||||||
|
|
||||||
|
users.extraUsers.rslsync = {
|
||||||
|
description = "Resilio Sync Service user";
|
||||||
|
home = cfg.storagePath;
|
||||||
|
createHome = true;
|
||||||
|
uid = config.ids.uids.rslsync;
|
||||||
|
group = "rslsync";
|
||||||
|
};
|
||||||
|
|
||||||
|
users.extraGroups = [ { name = "rslsync"; } ];
|
||||||
|
|
||||||
|
systemd.services.resilio = with pkgs; {
|
||||||
|
description = "Resilio Sync Service";
|
||||||
|
wantedBy = [ "multi-user.target" ];
|
||||||
|
after = [ "network.target" "local-fs.target" ];
|
||||||
|
serviceConfig = {
|
||||||
|
Restart = "on-abort";
|
||||||
|
UMask = "0002";
|
||||||
|
User = "rslsync";
|
||||||
|
ExecStart = ''
|
||||||
|
${resilioSync}/bin/rslsync --nodaemon --config ${configFile}
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
};
|
||||||
|
};
|
||||||
|
}
|
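A minimal sketch of enabling the new Resilio Sync module with one shared folder; the device name, directory, secret, and known hosts below are placeholders (the secret is the example value from the option docs). Note the module's assertion that shared folders and the web UI are mutually exclusive:

```nix
{
  services.resilio = {
    enable = true;
    deviceName = "builder";            # placeholder
    listeningPort = 44444;
    enableWebUI = false;               # must stay off when sharedFolders is used
    sharedFolders = [
      { secret = "AHMYFPCQAHBM7LQPFXQ7WV6Y42IGUXJ5Y";  # example secret from the option docs
        directory = "/srv/sync";                       # placeholder path, owned by the rslsync group
        useRelayServer = true;
        useTracker = true;
        useDHT = false;
        searchLAN = true;
        useSyncTrash = true;
        knownHosts = [ "192.168.1.2:4444" ];
      }
    ];
  };
}
```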
@ -188,11 +188,6 @@ in {
|
|||||||
ln -sfT ${cfg.package}/modules ${cfg.dataDir}/modules
|
ln -sfT ${cfg.package}/modules ${cfg.dataDir}/modules
|
||||||
if [ "$(id -u)" = 0 ]; then chown -R elasticsearch ${cfg.dataDir}; fi
|
if [ "$(id -u)" = 0 ]; then chown -R elasticsearch ${cfg.dataDir}; fi
|
||||||
'';
|
'';
|
||||||
postStart = mkBefore ''
|
|
||||||
until ${pkgs.curl.bin}/bin/curl -s -o /dev/null ${cfg.listenAddress}:${toString cfg.port}; do
|
|
||||||
sleep 1
|
|
||||||
done
|
|
||||||
'';
|
|
||||||
};
|
};
|
||||||
|
|
||||||
environment.systemPackages = [ cfg.package ];
|
environment.systemPackages = [ cfg.package ];
|
||||||
|
@ -5,7 +5,11 @@ with lib;
|
|||||||
let
|
let
|
||||||
cfg = config.services.kibana;
|
cfg = config.services.kibana;
|
||||||
|
|
||||||
cfgFile = pkgs.writeText "kibana.json" (builtins.toJSON (
|
atLeast54 = versionAtLeast (builtins.parseDrvName cfg.package.name).version "5.4";
|
||||||
|
|
||||||
|
cfgFile = if atLeast54 then cfgFile5 else cfgFile4;
|
||||||
|
|
||||||
|
cfgFile4 = pkgs.writeText "kibana.json" (builtins.toJSON (
|
||||||
(filterAttrsRecursive (n: v: v != null) ({
|
(filterAttrsRecursive (n: v: v != null) ({
|
||||||
host = cfg.listenAddress;
|
host = cfg.listenAddress;
|
||||||
port = cfg.port;
|
port = cfg.port;
|
||||||
@ -36,6 +40,27 @@ let
|
|||||||
];
|
];
|
||||||
} // cfg.extraConf)
|
} // cfg.extraConf)
|
||||||
)));
|
)));
|
||||||
|
|
||||||
|
cfgFile5 = pkgs.writeText "kibana.json" (builtins.toJSON (
|
||||||
|
(filterAttrsRecursive (n: v: v != null) ({
|
||||||
|
server.host = cfg.listenAddress;
|
||||||
|
server.port = cfg.port;
|
||||||
|
server.ssl.certificate = cfg.cert;
|
||||||
|
server.ssl.key = cfg.key;
|
||||||
|
|
||||||
|
kibana.index = cfg.index;
|
||||||
|
kibana.defaultAppId = cfg.defaultAppId;
|
||||||
|
|
||||||
|
elasticsearch.url = cfg.elasticsearch.url;
|
||||||
|
elasticsearch.username = cfg.elasticsearch.username;
|
||||||
|
elasticsearch.password = cfg.elasticsearch.password;
|
||||||
|
|
||||||
|
elasticsearch.ssl.certificate = cfg.elasticsearch.cert;
|
||||||
|
elasticsearch.ssl.key = cfg.elasticsearch.key;
|
||||||
|
elasticsearch.ssl.certificateAuthorities = cfg.elasticsearch.certificateAuthorities;
|
||||||
|
} // cfg.extraConf)
|
||||||
|
)));
|
||||||
|
|
||||||
in {
|
in {
|
||||||
options.services.kibana = {
|
options.services.kibana = {
|
||||||
enable = mkEnableOption "enable kibana service";
|
enable = mkEnableOption "enable kibana service";
|
||||||
@ -96,11 +121,29 @@ in {
|
|||||||
};
|
};
|
||||||
|
|
||||||
ca = mkOption {
|
ca = mkOption {
|
||||||
description = "CA file to auth against elasticsearch.";
|
description = ''
|
||||||
|
CA file to auth against elasticsearch.
|
||||||
|
|
||||||
|
It's recommended to use the <option>certificateAuthorities</option> option
|
||||||
|
when using kibana-5.4 or newer.
|
||||||
|
'';
|
||||||
default = null;
|
default = null;
|
||||||
type = types.nullOr types.path;
|
type = types.nullOr types.path;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
certificateAuthorities = mkOption {
|
||||||
|
description = ''
|
||||||
|
CA files to auth against elasticsearch.
|
||||||
|
|
||||||
|
Please use the <option>ca</option> option when using kibana < 5.4
|
||||||
|
because those old versions don't support setting multiple CA's.
|
||||||
|
|
||||||
|
This defaults to the singleton list [ca] when the <option>ca</option> option is defined.
|
||||||
|
'';
|
||||||
|
default = if isNull cfg.elasticsearch.ca then [] else [ca];
|
||||||
|
type = types.listOf types.path;
|
||||||
|
};
|
||||||
|
|
||||||
cert = mkOption {
|
cert = mkOption {
|
||||||
description = "Certificate file to auth against elasticsearch.";
|
description = "Certificate file to auth against elasticsearch.";
|
||||||
default = null;
|
default = null;
|
||||||
@ -118,6 +161,7 @@ in {
|
|||||||
description = "Kibana package to use";
|
description = "Kibana package to use";
|
||||||
default = pkgs.kibana;
|
default = pkgs.kibana;
|
||||||
defaultText = "pkgs.kibana";
|
defaultText = "pkgs.kibana";
|
||||||
|
example = "pkgs.kibana5";
|
||||||
type = types.package;
|
type = types.package;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
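A sketch of a Kibana 5.4+ setup using the new certificateAuthorities option introduced here; the Elasticsearch URL and CA path are placeholders:

```nix
{
  services.kibana = {
    enable = true;
    package = pkgs.kibana5;   # 5.4 or newer, so cfgFile5 and certificateAuthorities apply
    elasticsearch = {
      url = "http://localhost:9200";
      certificateAuthorities = [ "/etc/ssl/certs/es-ca.pem" ];  # placeholder CA bundle
    };
  };
}
```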
@ -16,7 +16,7 @@ let
|
|||||||
|
|
||||||
phpMajorVersion = head (splitString "." php.version);
|
phpMajorVersion = head (splitString "." php.version);
|
||||||
|
|
||||||
mod_perl = pkgs.mod_perl.override { apacheHttpd = httpd; };
|
mod_perl = pkgs.apacheHttpdPackages.mod_perl.override { apacheHttpd = httpd; };
|
||||||
|
|
||||||
defaultListen = cfg: if cfg.enableSSL
|
defaultListen = cfg: if cfg.enableSSL
|
||||||
then [{ip = "*"; port = 443;}]
|
then [{ip = "*"; port = 443;}]
|
||||||
|
@ -36,7 +36,11 @@ in
|
|||||||
dataDir = mkOption {
|
dataDir = mkOption {
|
||||||
default = "/var/lib/caddy";
|
default = "/var/lib/caddy";
|
||||||
type = types.path;
|
type = types.path;
|
||||||
description = "The data directory, for storing certificates.";
|
description = ''
|
||||||
|
The data directory, for storing certificates. Before 17.09, this
|
||||||
|
would create a .caddy directory. With 17.09 the contents of the
|
||||||
|
.caddy directory are in the specified data directory instead.
|
||||||
|
'';
|
||||||
};
|
};
|
||||||
|
|
||||||
package = mkOption {
|
package = mkOption {
|
||||||
@ -50,17 +54,32 @@ in
|
|||||||
config = mkIf cfg.enable {
|
config = mkIf cfg.enable {
|
||||||
systemd.services.caddy = {
|
systemd.services.caddy = {
|
||||||
description = "Caddy web server";
|
description = "Caddy web server";
|
||||||
after = [ "network.target" ];
|
after = [ "network-online.target" ];
|
||||||
wantedBy = [ "multi-user.target" ];
|
wantedBy = [ "multi-user.target" ];
|
||||||
|
environment = mkIf (versionAtLeast config.system.stateVersion "17.09")
|
||||||
|
{ CADDYPATH = cfg.dataDir; };
|
||||||
serviceConfig = {
|
serviceConfig = {
|
||||||
ExecStart = ''${cfg.package.bin}/bin/caddy -conf=${configFile} \
|
ExecStart = ''
|
||||||
-ca=${cfg.ca} -email=${cfg.email} ${optionalString cfg.agree "-agree"}
|
${cfg.package.bin}/bin/caddy -root=/var/tmp -conf=${configFile} \
|
||||||
|
-ca=${cfg.ca} -email=${cfg.email} ${optionalString cfg.agree "-agree"}
|
||||||
'';
|
'';
|
||||||
|
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
|
||||||
Type = "simple";
|
Type = "simple";
|
||||||
User = "caddy";
|
User = "caddy";
|
||||||
Group = "caddy";
|
Group = "caddy";
|
||||||
|
Restart = "on-failure";
|
||||||
|
StartLimitInterval = 86400;
|
||||||
|
StartLimitBurst = 5;
|
||||||
AmbientCapabilities = "cap_net_bind_service";
|
AmbientCapabilities = "cap_net_bind_service";
|
||||||
LimitNOFILE = 8192;
|
CapabilityBoundingSet = "cap_net_bind_service";
|
||||||
|
NoNewPrivileges = true;
|
||||||
|
LimitNPROC = 64;
|
||||||
|
LimitNOFILE = 1048576;
|
||||||
|
PrivateTmp = true;
|
||||||
|
PrivateDevices = true;
|
||||||
|
ProtectHome = true;
|
||||||
|
ProtectSystem = "full";
|
||||||
|
ReadWriteDirectories = cfg.dataDir;
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
|
|
||||||
|
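A sketch of the Caddy service as this revision wires it up; the email and data directory are placeholders, and the site configuration itself is not part of this hunk. On a system with stateVersion >= 17.09 the unit now exports CADDYPATH, so certificates land in dataDir rather than in a .caddy directory:

```nix
{
  system.stateVersion = "17.09";
  services.caddy = {
    enable  = true;
    email   = "admin@example.com";   # placeholder ACME contact
    agree   = true;                  # adds -agree to the ExecStart shown above
    dataDir = "/var/lib/caddy";
  };
}
```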
@ -35,10 +35,10 @@ let
|
|||||||
chmod -R a+w $out/share/gsettings-schemas/nixos-gsettings-overrides
|
chmod -R a+w $out/share/gsettings-schemas/nixos-gsettings-overrides
|
||||||
cat - > $out/share/gsettings-schemas/nixos-gsettings-overrides/glib-2.0/schemas/nixos-defaults.gschema.override <<- EOF
|
cat - > $out/share/gsettings-schemas/nixos-gsettings-overrides/glib-2.0/schemas/nixos-defaults.gschema.override <<- EOF
|
||||||
[org.gnome.desktop.background]
|
[org.gnome.desktop.background]
|
||||||
picture-uri='${pkgs.nixos-artwork}/share/artwork/gnome/Gnome_Dark.png'
|
picture-uri='${pkgs.nixos-artwork.wallpapers.gnome-dark}/share/artwork/gnome/Gnome_Dark.png'
|
||||||
|
|
||||||
[org.gnome.desktop.screensaver]
|
[org.gnome.desktop.screensaver]
|
||||||
picture-uri='${pkgs.nixos-artwork}/share/artwork/gnome/Gnome_Dark.png'
|
picture-uri='${pkgs.nixos-artwork.wallpapers.gnome-dark}/share/artwork/gnome/Gnome_Dark.png'
|
||||||
|
|
||||||
${cfg.extraGSettingsOverrides}
|
${cfg.extraGSettingsOverrides}
|
||||||
EOF
|
EOF
|
||||||
|
@ -7,7 +7,7 @@ let
|
|||||||
xcfg = config.services.xserver;
|
xcfg = config.services.xserver;
|
||||||
cfg = xcfg.desktopManager.plasma5;
|
cfg = xcfg.desktopManager.plasma5;
|
||||||
|
|
||||||
inherit (pkgs) kdeWrapper kdeApplications plasma5 libsForQt5 qt5 xorg;
|
inherit (pkgs) kdeApplications plasma5 libsForQt5 qt5 xorg;
|
||||||
|
|
||||||
in
|
in
|
||||||
|
|
||||||
@ -30,24 +30,12 @@ in
|
|||||||
'';
|
'';
|
||||||
};
|
};
|
||||||
|
|
||||||
extraPackages = mkOption {
|
|
||||||
type = types.listOf types.package;
|
|
||||||
default = [];
|
|
||||||
description = ''
|
|
||||||
KDE packages that need to be installed system-wide.
|
|
||||||
'';
|
|
||||||
};
|
|
||||||
|
|
||||||
};
|
};
|
||||||
|
|
||||||
};
|
};
|
||||||
|
|
||||||
|
|
||||||
config = mkMerge [
|
config = mkMerge [
|
||||||
(mkIf (cfg.extraPackages != []) {
|
|
||||||
environment.systemPackages = [ (kdeWrapper cfg.extraPackages) ];
|
|
||||||
})
|
|
||||||
|
|
||||||
(mkIf (xcfg.enable && cfg.enable) {
|
(mkIf (xcfg.enable && cfg.enable) {
|
||||||
services.xserver.desktopManager.session = singleton {
|
services.xserver.desktopManager.session = singleton {
|
||||||
name = "plasma5";
|
name = "plasma5";
|
||||||
@ -64,8 +52,8 @@ in
|
|||||||
};
|
};
|
||||||
|
|
||||||
security.wrappers = {
|
security.wrappers = {
|
||||||
kcheckpass.source = "${plasma5.plasma-workspace.out}/lib/libexec/kcheckpass";
|
kcheckpass.source = "${lib.getBin plasma5.plasma-workspace}/lib/libexec/kcheckpass";
|
||||||
"start_kdeinit".source = "${pkgs.kinit.out}/lib/libexec/kf5/start_kdeinit";
|
"start_kdeinit".source = "${lib.getBin pkgs.kinit}/lib/libexec/kf5/start_kdeinit";
|
||||||
};
|
};
|
||||||
|
|
||||||
environment.systemPackages = with pkgs; with qt5; with libsForQt5; with plasma5; with kdeApplications;
|
environment.systemPackages = with pkgs; with qt5; with libsForQt5; with plasma5; with kdeApplications;
|
||||||
@ -139,10 +127,14 @@ in
|
|||||||
plasma-workspace
|
plasma-workspace
|
||||||
plasma-workspace-wallpapers
|
plasma-workspace-wallpapers
|
||||||
|
|
||||||
|
dolphin
|
||||||
dolphin-plugins
|
dolphin-plugins
|
||||||
ffmpegthumbs
|
ffmpegthumbs
|
||||||
kdegraphics-thumbnailers
|
kdegraphics-thumbnailers
|
||||||
|
khelpcenter
|
||||||
kio-extras
|
kio-extras
|
||||||
|
konsole
|
||||||
|
oxygen
|
||||||
print-manager
|
print-manager
|
||||||
|
|
||||||
breeze-icons
|
breeze-icons
|
||||||
@ -163,16 +155,6 @@ in
|
|||||||
++ lib.optional config.services.colord.enable colord-kde
|
++ lib.optional config.services.colord.enable colord-kde
|
||||||
++ lib.optionals config.services.samba.enable [ kdenetwork-filesharing pkgs.samba ];
|
++ lib.optionals config.services.samba.enable [ kdenetwork-filesharing pkgs.samba ];
|
||||||
|
|
||||||
services.xserver.desktopManager.plasma5.extraPackages =
|
|
||||||
with kdeApplications; with plasma5;
|
|
||||||
[
|
|
||||||
khelpcenter
|
|
||||||
oxygen
|
|
||||||
|
|
||||||
dolphin
|
|
||||||
konsole
|
|
||||||
];
|
|
||||||
|
|
||||||
environment.pathsToLink = [ "/share" ];
|
environment.pathsToLink = [ "/share" ];
|
||||||
|
|
||||||
environment.etc = singleton {
|
environment.etc = singleton {
|
||||||
@ -183,7 +165,6 @@ in
|
|||||||
environment.variables = {
|
environment.variables = {
|
||||||
# Enable GTK applications to load SVG icons
|
# Enable GTK applications to load SVG icons
|
||||||
GDK_PIXBUF_MODULE_FILE = "${pkgs.librsvg.out}/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache";
|
GDK_PIXBUF_MODULE_FILE = "${pkgs.librsvg.out}/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache";
|
||||||
QT_PLUGIN_PATH = "/run/current-system/sw/lib/qt5/plugins";
|
|
||||||
};
|
};
|
||||||
|
|
||||||
fonts.fonts = with pkgs; [ noto-fonts hack-font ];
|
fonts.fonts = with pkgs; [ noto-fonts hack-font ];
|
||||||
@ -209,7 +190,6 @@ in
|
|||||||
|
|
||||||
services.xserver.displayManager.sddm = {
|
services.xserver.displayManager.sddm = {
|
||||||
theme = "breeze";
|
theme = "breeze";
|
||||||
package = pkgs.sddmPlasma5;
|
|
||||||
};
|
};
|
||||||
|
|
||||||
security.pam.services.kde = { allowNullPassword = true; };
|
security.pam.services.kde = { allowNullPassword = true; };
|
||||||
|
@ -111,7 +111,7 @@ in
|
|||||||
|
|
||||||
background = mkOption {
|
background = mkOption {
|
||||||
type = types.str;
|
type = types.str;
|
||||||
default = "${pkgs.nixos-artwork}/share/artwork/gnome/Gnome_Dark.png";
|
default = "${pkgs.nixos-artwork.wallpapers.gnome-dark}/share/artwork/gnome/Gnome_Dark.png";
|
||||||
description = ''
|
description = ''
|
||||||
The background image or color to use.
|
The background image or color to use.
|
||||||
'';
|
'';
|
||||||
|
@ -9,7 +9,7 @@ let
|
|||||||
cfg = dmcfg.sddm;
|
cfg = dmcfg.sddm;
|
||||||
xEnv = config.systemd.services."display-manager".environment;
|
xEnv = config.systemd.services."display-manager".environment;
|
||||||
|
|
||||||
sddm = cfg.package;
|
inherit (pkgs) sddm;
|
||||||
|
|
||||||
xserverWrapper = pkgs.writeScript "xserver-wrapper" ''
|
xserverWrapper = pkgs.writeScript "xserver-wrapper" ''
|
||||||
#!/bin/sh
|
#!/bin/sh
|
||||||
@ -37,8 +37,8 @@ let
|
|||||||
|
|
||||||
[Theme]
|
[Theme]
|
||||||
Current=${cfg.theme}
|
Current=${cfg.theme}
|
||||||
ThemeDir=${sddm}/share/sddm/themes
|
ThemeDir=/run/current-system/sw/share/sddm/themes
|
||||||
FacesDir=${sddm}/share/sddm/faces
|
FacesDir=/run/current-system/sw/share/sddm/faces
|
||||||
|
|
||||||
[Users]
|
[Users]
|
||||||
MaximumUid=${toString config.ids.uids.nixbld}
|
MaximumUid=${toString config.ids.uids.nixbld}
|
||||||
@ -105,15 +105,6 @@ in
|
|||||||
'';
|
'';
|
||||||
};
|
};
|
||||||
|
|
||||||
package = mkOption {
|
|
||||||
type = types.package;
|
|
||||||
default = pkgs.sddm;
|
|
||||||
description = ''
|
|
||||||
The SDDM package to install.
|
|
||||||
The default package can be overridden to provide extra themes.
|
|
||||||
'';
|
|
||||||
};
|
|
||||||
|
|
||||||
autoNumlock = mkOption {
|
autoNumlock = mkOption {
|
||||||
type = types.bool;
|
type = types.bool;
|
||||||
default = false;
|
default = false;
|
||||||
@ -205,7 +196,15 @@ in
|
|||||||
services.xserver.displayManager.job = {
|
services.xserver.displayManager.job = {
|
||||||
logsXsession = true;
|
logsXsession = true;
|
||||||
|
|
||||||
execCmd = "exec ${sddm}/bin/sddm";
|
environment = {
|
||||||
|
# Load themes from system environment
|
||||||
|
QT_PLUGIN_PATH = "/run/current-system/sw/" + pkgs.qt5.qtbase.qtPluginPrefix;
|
||||||
|
QML2_IMPORT_PATH = "/run/current-system/sw/" + pkgs.qt5.qtbase.qtQmlPrefix;
|
||||||
|
|
||||||
|
XDG_DATA_DIRS = "/run/current-system/sw/share";
|
||||||
|
};
|
||||||
|
|
||||||
|
execCmd = "exec /run/current-system/sw/bin/sddm";
|
||||||
};
|
};
|
||||||
|
|
||||||
security.pam.services = {
|
security.pam.services = {
|
||||||
@ -254,7 +253,8 @@ in
|
|||||||
|
|
||||||
users.extraGroups.sddm.gid = config.ids.gids.sddm;
|
users.extraGroups.sddm.gid = config.ids.gids.sddm;
|
||||||
|
|
||||||
services.dbus.packages = [ sddm.unwrapped ];
|
environment.systemPackages = [ sddm ];
|
||||||
|
services.dbus.packages = [ sddm ];
|
||||||
|
|
||||||
# To enable user switching, allow sddm to allocate TTYs/displays dynamically.
|
# To enable user switching, allow sddm to allocate TTYs/displays dynamically.
|
||||||
services.xserver.tty = null;
|
services.xserver.tty = null;
|
||||||
|
@ -15,7 +15,7 @@ in
|
|||||||
services.xserver.windowManager.session = [{
|
services.xserver.windowManager.session = [{
|
||||||
name = "qtile";
|
name = "qtile";
|
||||||
start = ''
|
start = ''
|
||||||
${pkgs.qtile}/bin/qtile
|
${pkgs.qtile}/bin/qtile &
|
||||||
waitPID=$!
|
waitPID=$!
|
||||||
'';
|
'';
|
||||||
}];
|
}];
|
||||||
|
@ -64,11 +64,21 @@ let
|
|||||||
)) + ":" + (makeSearchPathOutput "bin" "sbin" [
|
)) + ":" + (makeSearchPathOutput "bin" "sbin" [
|
||||||
pkgs.mdadm pkgs.utillinux
|
pkgs.mdadm pkgs.utillinux
|
||||||
]);
|
]);
|
||||||
|
font = if lib.last (lib.splitString "." cfg.font) == "pf2"
|
||||||
|
then cfg.font
|
||||||
|
else "${convertedFont}";
|
||||||
});
|
});
|
||||||
|
|
||||||
bootDeviceCounters = fold (device: attr: attr // { "${device}" = (attr."${device}" or 0) + 1; }) {}
|
bootDeviceCounters = fold (device: attr: attr // { "${device}" = (attr."${device}" or 0) + 1; }) {}
|
||||||
(concatMap (args: args.devices) cfg.mirroredBoots);
|
(concatMap (args: args.devices) cfg.mirroredBoots);
|
||||||
|
|
||||||
|
convertedFont = (pkgs.runCommand "grub-font-converted.pf2" {}
|
||||||
|
(builtins.concatStringsSep " "
|
||||||
|
([ "${realGrub}/bin/grub-mkfont"
|
||||||
|
cfg.font
|
||||||
|
"--output" "$out"
|
||||||
|
] ++ (optional (cfg.fontSize!=null) "--size ${toString cfg.fontSize}")))
|
||||||
|
);
|
||||||
in
|
in
|
||||||
|
|
||||||
{
|
{
|
||||||
@ -276,7 +286,7 @@ in
|
|||||||
extraInitrd = mkOption {
|
extraInitrd = mkOption {
|
||||||
type = types.nullOr types.path;
|
type = types.nullOr types.path;
|
||||||
default = null;
|
default = null;
|
||||||
example = "/boot/extra_initrafms.gz";
|
example = "/boot/extra_initramfs.gz";
|
||||||
description = ''
|
description = ''
|
||||||
The path to a second initramfs to be supplied to the kernel.
|
The path to a second initramfs to be supplied to the kernel.
|
||||||
This ramfs will not be copied to the store, so that it can
|
This ramfs will not be copied to the store, so that it can
|
||||||
@ -305,6 +315,24 @@ in
|
|||||||
'';
|
'';
|
||||||
};
|
};
|
||||||
|
|
||||||
|
font = mkOption {
|
||||||
|
type = types.nullOr types.path;
|
||||||
|
default = "${realGrub}/share/grub/unicode.pf2";
|
||||||
|
description = ''
|
||||||
|
Path to a TrueType, OpenType, or pf2 font to be used by Grub.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
fontSize = mkOption {
|
||||||
|
type = types.nullOr types.int;
|
||||||
|
example = literalExample 16;
|
||||||
|
default = null;
|
||||||
|
description = ''
|
||||||
|
Font size for the grub menu. Ignored unless <literal>font</literal>
|
||||||
|
is set to a ttf or otf font.
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
gfxmodeEfi = mkOption {
|
gfxmodeEfi = mkOption {
|
||||||
default = "auto";
|
default = "auto";
|
||||||
example = "1024x768";
|
example = "1024x768";
|
||||||
@ -489,7 +517,7 @@ in
|
|||||||
sha256 = "14kqdx2lfqvh40h6fjjzqgff1mwk74dmbjvmqphi6azzra7z8d59";
|
sha256 = "14kqdx2lfqvh40h6fjjzqgff1mwk74dmbjvmqphi6azzra7z8d59";
|
||||||
}
|
}
|
||||||
# GRUB 1.97 doesn't support gzipped XPMs.
|
# GRUB 1.97 doesn't support gzipped XPMs.
|
||||||
else "${pkgs.nixos-artwork}/share/artwork/gnome/Gnome_Dark.png");
|
else "${pkgs.nixos-artwork.wallpapers.gnome-dark}/share/artwork/gnome/Gnome_Dark.png");
|
||||||
}
|
}
|
||||||
|
|
||||||
(mkIf cfg.enable {
|
(mkIf cfg.enable {
|
||||||
|
@ -67,6 +67,7 @@ my $gfxmodeEfi = get("gfxmodeEfi");
|
|||||||
my $gfxmodeBios = get("gfxmodeBios");
|
my $gfxmodeBios = get("gfxmodeBios");
|
||||||
my $bootloaderId = get("bootloaderId");
|
my $bootloaderId = get("bootloaderId");
|
||||||
my $forceInstall = get("forceInstall");
|
my $forceInstall = get("forceInstall");
|
||||||
|
my $font = get("font");
|
||||||
$ENV{'PATH'} = get("path");
|
$ENV{'PATH'} = get("path");
|
||||||
|
|
||||||
die "unsupported GRUB version\n" if $grubVersion != 1 && $grubVersion != 2;
|
die "unsupported GRUB version\n" if $grubVersion != 1 && $grubVersion != 2;
|
||||||
@ -281,7 +282,7 @@ else {
|
|||||||
insmod vbe
|
insmod vbe
|
||||||
fi
|
fi
|
||||||
insmod font
|
insmod font
|
||||||
if loadfont " . $grubBoot->path . "/grub/fonts/unicode.pf2; then
|
if loadfont " . $grubBoot->path . "/converted-font.pf2; then
|
||||||
insmod gfxterm
|
insmod gfxterm
|
||||||
if [ \"\${grub_platform}\" = \"efi\" ]; then
|
if [ \"\${grub_platform}\" = \"efi\" ]; then
|
||||||
set gfxmode=$gfxmodeEfi
|
set gfxmode=$gfxmodeEfi
|
||||||
@ -294,6 +295,9 @@ else {
|
|||||||
fi
|
fi
|
||||||
";
|
";
|
||||||
|
|
||||||
|
if ($font) {
|
||||||
|
copy $font, "$bootPath/converted-font.pf2" or die "cannot copy $font to $bootPath\n";
|
||||||
|
}
|
||||||
if ($splashImage) {
|
if ($splashImage) {
|
||||||
# FIXME: GRUB 1.97 doesn't resize the background image if it
|
# FIXME: GRUB 1.97 doesn't resize the background image if it
|
||||||
# doesn't match the video resolution.
|
# doesn't match the video resolution.
|
||||||
|
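A sketch of the new font and fontSize options, assuming the usual boot.loader.grub option path used elsewhere in this module; the TTF path is a placeholder. Any TrueType/OpenType font is converted with grub-mkfont at build time, while a .pf2 file is used as-is:

```nix
{
  boot.loader.grub = {
    enable = true;
    font = "${pkgs.dejavu_fonts}/share/fonts/truetype/DejaVuSans.ttf";  # placeholder TTF path
    fontSize = 16;   # only honoured for ttf/otf fonts, per the option description
  };
}
```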
@ -167,7 +167,7 @@ let
|
|||||||
--replace /sbin/blkid ${extraUtils}/bin/blkid \
|
--replace /sbin/blkid ${extraUtils}/bin/blkid \
|
||||||
--replace ${pkgs.lvm2}/sbin ${extraUtils}/bin \
|
--replace ${pkgs.lvm2}/sbin ${extraUtils}/bin \
|
||||||
--replace /sbin/mdadm ${extraUtils}/bin/mdadm \
|
--replace /sbin/mdadm ${extraUtils}/bin/mdadm \
|
||||||
--replace /bin/sh ${extraUtils}/bin/sh \
|
--replace ${pkgs.bash}/bin/sh ${extraUtils}/bin/sh \
|
||||||
--replace /usr/bin/readlink ${extraUtils}/bin/readlink \
|
--replace /usr/bin/readlink ${extraUtils}/bin/readlink \
|
||||||
--replace /usr/bin/basename ${extraUtils}/bin/basename \
|
--replace /usr/bin/basename ${extraUtils}/bin/basename \
|
||||||
--replace ${udev}/bin/udevadm ${extraUtils}/bin/udevadm
|
--replace ${udev}/bin/udevadm ${extraUtils}/bin/udevadm
|
||||||
|
@ -4,6 +4,8 @@
|
|||||||
|
|
||||||
environment.systemPackages = [ pkgs.bcache-tools ];
|
environment.systemPackages = [ pkgs.bcache-tools ];
|
||||||
|
|
||||||
|
services.udev.packages = [ pkgs.bcache-tools ];
|
||||||
|
|
||||||
boot.initrd.extraUdevRulesCommands = ''
|
boot.initrd.extraUdevRulesCommands = ''
|
||||||
cp -v ${pkgs.bcache-tools}/lib/udev/rules.d/*.rules $out/
|
cp -v ${pkgs.bcache-tools}/lib/udev/rules.d/*.rules $out/
|
||||||
'';
|
'';
|
||||||
|
@ -1110,7 +1110,7 @@ in
|
|||||||
'';
|
'';
|
||||||
|
|
||||||
# Udev script to execute for a new WLAN interface. The script configures the new WLAN interface.
|
# Udev script to execute for a new WLAN interface. The script configures the new WLAN interface.
|
||||||
newInterfaceScript = new: pkgs.writeScript "udev-run-script-wlan-interfaces-${new._iName}.sh" ''
|
newInterfaceScript = device: new: pkgs.writeScript "udev-run-script-wlan-interfaces-${new._iName}.sh" ''
|
||||||
#!${pkgs.stdenv.shell}
|
#!${pkgs.stdenv.shell}
|
||||||
# Configure the new interface
|
# Configure the new interface
|
||||||
${pkgs.iw}/bin/iw dev ${new._iName} set type ${new.type}
|
${pkgs.iw}/bin/iw dev ${new._iName} set type ${new.type}
|
||||||
@ -1132,7 +1132,7 @@ in
|
|||||||
# It is important to have that rule first as overwriting the NAME attribute also prevents the
|
# It is important to have that rule first as overwriting the NAME attribute also prevents the
|
||||||
# next rules from matching.
|
# next rules from matching.
|
||||||
${flip (concatMapStringsSep "\n") (wlanListDeviceFirst device wlanDeviceInterfaces."${device}") (interface:
|
${flip (concatMapStringsSep "\n") (wlanListDeviceFirst device wlanDeviceInterfaces."${device}") (interface:
|
||||||
''ACTION=="add", SUBSYSTEM=="net", ENV{DEVTYPE}=="wlan", ENV{INTERFACE}=="${interface._iName}", ${systemdAttrs interface._iName}, RUN+="${newInterfaceScript interface}"'')}
|
''ACTION=="add", SUBSYSTEM=="net", ENV{DEVTYPE}=="wlan", ENV{INTERFACE}=="${interface._iName}", ${systemdAttrs interface._iName}, RUN+="${newInterfaceScript device interface}"'')}
|
||||||
|
|
||||||
# Add the required, new WLAN interfaces to the default WLAN interface with the
|
# Add the required, new WLAN interfaces to the default WLAN interface with the
|
||||||
# persistent, default name as assigned by udev.
|
# persistent, default name as assigned by udev.
|
||||||
|
@ -222,6 +222,7 @@ in rec {
|
|||||||
tests.cadvisor = hydraJob (import tests/cadvisor.nix { system = "x86_64-linux"; });
|
tests.cadvisor = hydraJob (import tests/cadvisor.nix { system = "x86_64-linux"; });
|
||||||
tests.chromium = (callSubTests tests/chromium.nix { system = "x86_64-linux"; }).stable;
|
tests.chromium = (callSubTests tests/chromium.nix { system = "x86_64-linux"; }).stable;
|
||||||
tests.cjdns = callTest tests/cjdns.nix {};
|
tests.cjdns = callTest tests/cjdns.nix {};
|
||||||
|
tests.cloud-init = callTest tests/cloud-init.nix {};
|
||||||
tests.containers-ipv4 = callTest tests/containers-ipv4.nix {};
|
tests.containers-ipv4 = callTest tests/containers-ipv4.nix {};
|
||||||
tests.containers-ipv6 = callTest tests/containers-ipv6.nix {};
|
tests.containers-ipv6 = callTest tests/containers-ipv6.nix {};
|
||||||
tests.containers-bridge = callTest tests/containers-bridge.nix {};
|
tests.containers-bridge = callTest tests/containers-bridge.nix {};
|
||||||
|
47
nixos/tests/cloud-init.nix
Normal file
@ -0,0 +1,47 @@
|
|||||||
|
{ system ? builtins.currentSystem }:
|
||||||
|
|
||||||
|
with import ../lib/testing.nix { inherit system; };
|
||||||
|
with import ../lib/qemu-flags.nix;
|
||||||
|
with pkgs.lib;
|
||||||
|
|
||||||
|
let
|
||||||
|
metadataDrive = pkgs.stdenv.mkDerivation {
|
||||||
|
name = "metadata";
|
||||||
|
buildCommand = ''
|
||||||
|
mkdir -p $out/iso
|
||||||
|
|
||||||
|
cat << EOF > $out/iso/user-data
|
||||||
|
#cloud-config
|
||||||
|
write_files:
|
||||||
|
- content: |
|
||||||
|
cloudinit
|
||||||
|
path: /tmp/cloudinit-write-file
|
||||||
|
EOF
|
||||||
|
|
||||||
|
cat << EOF > $out/iso/meta-data
|
||||||
|
instance-id: iid-local01
|
||||||
|
local-hostname: "test"
|
||||||
|
public-keys:
|
||||||
|
- "should be a key!"
|
||||||
|
EOF
|
||||||
|
${pkgs.cdrkit}/bin/genisoimage -volid cidata -joliet -rock -o $out/metadata.iso $out/iso
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
in makeTest {
|
||||||
|
meta = with pkgs.stdenv.lib.maintainers; {
|
||||||
|
maintainers = [ lewo ];
|
||||||
|
};
|
||||||
|
machine =
|
||||||
|
{ config, pkgs, ... }:
|
||||||
|
{
|
||||||
|
virtualisation.qemu.options = [ "-cdrom" "${metadataDrive}/metadata.iso" ];
|
||||||
|
services.cloud-init.enable = true;
|
||||||
|
};
|
||||||
|
testScript = ''
|
||||||
|
$machine->start;
|
||||||
|
$machine->waitForUnit("cloud-init.service");
|
||||||
|
$machine->succeed("cat /tmp/cloudinit-write-file | grep -q 'cloudinit'");
|
||||||
|
|
||||||
|
$machine->waitUntilSucceeds("cat /root/.ssh/authorized_keys | grep -q 'should be a key!'");
|
||||||
|
'';
|
||||||
|
}
|
95
nixos/tests/elk.nix
Normal file
@ -0,0 +1,95 @@
|
|||||||
|
# Test the ELK stack: Elasticsearch, Logstash and Kibana.
|
||||||
|
|
||||||
|
import ./make-test.nix ({ pkgs, ...} :
|
||||||
|
let
|
||||||
|
esUrl = "http://localhost:9200";
|
||||||
|
in {
|
||||||
|
name = "ELK";
|
||||||
|
meta = with pkgs.stdenv.lib.maintainers; {
|
||||||
|
maintainers = [ eelco chaoflow offline basvandijk ];
|
||||||
|
};
|
||||||
|
|
||||||
|
nodes = {
|
||||||
|
one =
|
||||||
|
{ config, pkgs, ... }: {
|
||||||
|
# Not giving the machine at least 2060MB results in elasticsearch failing with the following error:
|
||||||
|
#
|
||||||
|
# OpenJDK 64-Bit Server VM warning:
|
||||||
|
# INFO: os::commit_memory(0x0000000085330000, 2060255232, 0)
|
||||||
|
# failed; error='Cannot allocate memory' (errno=12)
|
||||||
|
#
|
||||||
|
# There is insufficient memory for the Java Runtime Environment to continue.
|
||||||
|
# Native memory allocation (mmap) failed to map 2060255232 bytes for committing reserved memory.
|
||||||
|
#
|
||||||
|
# When setting this to 2500 I got "Kernel panic - not syncing: Out of
|
||||||
|
# memory: compulsory panic_on_oom is enabled" so let's give it even a
|
||||||
|
# bit more room:
|
||||||
|
virtualisation.memorySize = 3000;
|
||||||
|
|
||||||
|
# For querying JSON objects returned from elasticsearch and kibana.
|
||||||
|
environment.systemPackages = [ pkgs.jq ];
|
||||||
|
|
||||||
|
services = {
|
||||||
|
logstash = {
|
||||||
|
enable = true;
|
||||||
|
package = pkgs.logstash5;
|
||||||
|
inputConfig = ''
|
||||||
|
exec { command => "echo -n flowers" interval => 1 type => "test" }
|
||||||
|
exec { command => "echo -n dragons" interval => 1 type => "test" }
|
||||||
|
'';
|
||||||
|
filterConfig = ''
|
||||||
|
if [message] =~ /dragons/ {
|
||||||
|
drop {}
|
||||||
|
}
|
||||||
|
'';
|
||||||
|
outputConfig = ''
|
||||||
|
file {
|
||||||
|
path => "/tmp/logstash.out"
|
||||||
|
codec => line { format => "%{message}" }
|
||||||
|
}
|
||||||
|
elasticsearch {
|
||||||
|
hosts => [ "${esUrl}" ]
|
||||||
|
}
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
|
||||||
|
elasticsearch = {
|
||||||
|
enable = true;
|
||||||
|
package = pkgs.elasticsearch5;
|
||||||
|
};
|
||||||
|
|
||||||
|
kibana = {
|
||||||
|
enable = true;
|
||||||
|
package = pkgs.kibana5;
|
||||||
|
elasticsearch.url = esUrl;
|
||||||
|
};
|
||||||
|
};
|
||||||
|
};
|
||||||
|
};
|
||||||
|
|
||||||
|
testScript = ''
|
||||||
|
startAll;
|
||||||
|
|
||||||
|
$one->waitForUnit("elasticsearch.service");
|
||||||
|
|
||||||
|
# Continue as long as the status is not "red". The status is probably
|
||||||
|
# "yellow" instead of "green" because we are using a single elasticsearch
|
||||||
|
# node which elasticsearch considers risky.
|
||||||
|
#
|
||||||
|
# TODO: extend this test with multiple elasticsearch nodes and see if the status turns "green".
|
||||||
|
$one->waitUntilSucceeds("curl --silent --show-error '${esUrl}/_cluster/health' | jq .status | grep -v red");
|
||||||
|
|
||||||
|
# Perform some simple logstash tests.
|
||||||
|
$one->waitForUnit("logstash.service");
|
||||||
|
$one->waitUntilSucceeds("cat /tmp/logstash.out | grep flowers");
|
||||||
|
$one->waitUntilSucceeds("cat /tmp/logstash.out | grep -v dragons");
|
||||||
|
|
||||||
|
# See if kibana is healthy.
|
||||||
|
$one->waitForUnit("kibana.service");
|
||||||
|
$one->waitUntilSucceeds("curl --silent --show-error 'http://localhost:5601/api/status' | jq .status.overall.state | grep green");
|
||||||
|
|
||||||
|
# See if logstash messages arrive in elasticsearch.
|
||||||
|
$one->waitUntilSucceeds("curl --silent --show-error '${esUrl}/_search' -H 'Content-Type: application/json' -d '{\"query\" : { \"match\" : { \"message\" : \"flowers\"}}}' | jq .hits.total | grep -v 0");
|
||||||
|
$one->waitUntilSucceeds("curl --silent --show-error '${esUrl}/_search' -H 'Content-Type: application/json' -d '{\"query\" : { \"match\" : { \"message\" : \"dragons\"}}}' | jq .hits.total | grep 0");
|
||||||
|
'';
|
||||||
|
})
|
@ -221,7 +221,7 @@ let
|
|||||||
docbook5_xsl
|
docbook5_xsl
|
||||||
unionfs-fuse
|
unionfs-fuse
|
||||||
ntp
|
ntp
|
||||||
nixos-artwork
|
nixos-artwork.wallpapers.gnome-dark
|
||||||
perlPackages.XMLLibXML
|
perlPackages.XMLLibXML
|
||||||
perlPackages.ListCompare
|
perlPackages.ListCompare
|
||||||
|
|
||||||
|
@ -1,41 +0,0 @@
|
|||||||
# This test runs logstash and checks if messages flows and
|
|
||||||
# elasticsearch is started.
|
|
||||||
|
|
||||||
import ./make-test.nix ({ pkgs, ...} : {
|
|
||||||
name = "logstash";
|
|
||||||
meta = with pkgs.stdenv.lib.maintainers; {
|
|
||||||
maintainers = [ eelco chaoflow offline ];
|
|
||||||
};
|
|
||||||
|
|
||||||
nodes = {
|
|
||||||
one =
|
|
||||||
{ config, pkgs, ... }:
|
|
||||||
{
|
|
||||||
services = {
|
|
||||||
logstash = {
|
|
||||||
enable = true;
|
|
||||||
inputConfig = ''
|
|
||||||
exec { command => "echo flowers" interval => 1 type => "test" }
|
|
||||||
exec { command => "echo dragons" interval => 1 type => "test" }
|
|
||||||
'';
|
|
||||||
filterConfig = ''
|
|
||||||
if [message] =~ /dragons/ {
|
|
||||||
drop {}
|
|
||||||
}
|
|
||||||
'';
|
|
||||||
outputConfig = ''
|
|
||||||
stdout { codec => rubydebug }
|
|
||||||
'';
|
|
||||||
};
|
|
||||||
};
|
|
||||||
};
|
|
||||||
};
|
|
||||||
|
|
||||||
testScript = ''
|
|
||||||
startAll;
|
|
||||||
|
|
||||||
$one->waitForUnit("logstash.service");
|
|
||||||
$one->waitUntilSucceeds("journalctl -n 20 _SYSTEMD_UNIT=logstash.service | grep flowers");
|
|
||||||
$one->fail("journalctl -n 20 _SYSTEMD_UNIT=logstash.service | grep dragons");
|
|
||||||
'';
|
|
||||||
})
|
|
@ -13,7 +13,7 @@ stdenv.mkDerivation rec {
|
|||||||
owner = "bitcoinclassic";
|
owner = "bitcoinclassic";
|
||||||
repo = "bitcoinclassic";
|
repo = "bitcoinclassic";
|
||||||
rev = "v${version}";
|
rev = "v${version}";
|
||||||
sha256 = "1z6g930csvx49krl34207yqwlr8dkxpi72k3msh15p1kjvv90nvz";
|
sha256 = "00spils0gv8krx2nyxrf6j1dl81wmxk8xjkqc22cv7nsdnakzrvm";
|
||||||
};
|
};
|
||||||
|
|
||||||
nativeBuildInputs = [ pkgconfig autoreconfHook ];
|
nativeBuildInputs = [ pkgconfig autoreconfHook ];
|
||||||
@ -32,12 +32,12 @@ stdenv.mkDerivation rec {
|
|||||||
parties. Users hold the crypto keys to their own money and transact directly
|
parties. Users hold the crypto keys to their own money and transact directly
|
||||||
with each other, with the help of a P2P network to check for double-spending.
|
with each other, with the help of a P2P network to check for double-spending.
|
||||||
|
|
||||||
Bitcoin Classic stands for the original Bitcoin as Satoshi described it,
|
Bitcoin Classic stands for the original Bitcoin as Satoshi described it,
|
||||||
"A Peer-to-Peer Electronic Cash System". We are writing the software that
|
"A Peer-to-Peer Electronic Cash System". We are writing the software that
|
||||||
miners and users say they want. We will make sure it solves their needs, help
|
miners and users say they want. We will make sure it solves their needs, help
|
||||||
them deploy it, and gracefully upgrade the bitcoin network's capacity
|
them deploy it, and gracefully upgrade the bitcoin network's capacity
|
||||||
together. The data shows that Bitcoin can grow, on-chain, to welcome many
|
together. The data shows that Bitcoin can grow, on-chain, to welcome many
|
||||||
more users onto our coin in a safe and distributed manner. In the future we
|
more users onto our coin in a safe and distributed manner. In the future we
|
||||||
will continue to release updates that are in line with Satoshi’s whitepaper &
|
will continue to release updates that are in line with Satoshi’s whitepaper &
|
||||||
vision, and are agreed upon by the community.
|
vision, and are agreed upon by the community.
|
||||||
'';
|
'';
|
||||||
|
@ -21,6 +21,7 @@ rec {
|
|||||||
|
|
||||||
freicoin = callPackage ./freicoin.nix { boost = pkgs.boost155; };
|
freicoin = callPackage ./freicoin.nix { boost = pkgs.boost155; };
|
||||||
go-ethereum = callPackage ./go-ethereum.nix { };
|
go-ethereum = callPackage ./go-ethereum.nix { };
|
||||||
|
go-ethereum-classic = callPackage ./go-ethereum-classic { };
|
||||||
|
|
||||||
hivemind = callPackage ./hivemind.nix { withGui = true; };
|
hivemind = callPackage ./hivemind.nix { withGui = true; };
|
||||||
hivemindd = callPackage ./hivemind.nix { withGui = false; };
|
hivemindd = callPackage ./hivemind.nix { withGui = false; };
|
||||||
|
@ -11,16 +11,18 @@ stdenv.mkDerivation rec {
|
|||||||
sha256 = "1m5pcnfhwhcj7q00p2sy3h73rkdm3w6grmljgiq53gshcj08cq1z";
|
sha256 = "1m5pcnfhwhcj7q00p2sy3h73rkdm3w6grmljgiq53gshcj08cq1z";
|
||||||
};
|
};
|
||||||
|
|
||||||
|
qmakeFlags = ["USE_UPNP=-"];
|
||||||
|
|
||||||
# I think that openssl and zlib are required, but come through other
|
# I think that openssl and zlib are required, but come through other
|
||||||
# packages
|
# packages
|
||||||
|
|
||||||
installPhase = ''
|
installPhase = ''
|
||||||
mkdir -p $out/bin
|
mkdir -p $out/bin
|
||||||
cp freicoin-qt $out/bin
|
cp freicoin-qt $out/bin
|
||||||
'';
|
'';
|
||||||
|
|
||||||
nativeBuildInputs = [ qmake4Hook ];
|
nativeBuildInputs = [ qmake4Hook ];
|
||||||
buildInputs = [ db boost gmp mpfr miniupnpc qt4 ];
|
buildInputs = [ db boost gmp mpfr qt4 ];
|
||||||
|
|
||||||
meta = with stdenv.lib; {
|
meta = with stdenv.lib; {
|
||||||
description = "Peer-to-peer currency with demurrage fee";
|
description = "Peer-to-peer currency with demurrage fee";
|
||||||
|
24
pkgs/applications/altcoins/go-ethereum-classic/default.nix
Normal file
@ -0,0 +1,24 @@
|
|||||||
|
{ stdenv, lib, buildGoPackage, fetchgit, fetchhg, fetchbzr, fetchsvn }:
|
||||||
|
|
||||||
|
buildGoPackage rec {
|
||||||
|
name = "go-ethereum-classic-${version}";
|
||||||
|
version = "3.5.0";
|
||||||
|
rev = "402c1700fbefb9512e444b32fe12c2d674638ddb";
|
||||||
|
|
||||||
|
goPackagePath = "github.com/ethereumproject/go-ethereum";
|
||||||
|
subPackages = [ "cmd/evm" "cmd/geth" ];
|
||||||
|
|
||||||
|
src = fetchgit {
|
||||||
|
inherit rev;
|
||||||
|
url = "https://github.com/ethereumproject/go-ethereum";
|
||||||
|
sha256 = "15wji12wqcrgsb1glwwz4jv7rsas71bbxh7750iv2phn7jivm0fi";
|
||||||
|
};
|
||||||
|
|
||||||
|
goDeps = ./deps.nix;
|
||||||
|
|
||||||
|
meta = {
|
||||||
|
description = "Golang implementation of Ethereum Classic";
|
||||||
|
homepage = "https://github.com/ethereumproject/go-ethereum";
|
||||||
|
license = with lib.licenses; [ lgpl3 gpl3 ];
|
||||||
|
};
|
||||||
|
}
|
39
pkgs/applications/altcoins/go-ethereum-classic/deps.nix
Normal file
@ -0,0 +1,39 @@
|
|||||||
|
# This file was generated by https://github.com/kamilchm/go2nix v1.2.0
|
||||||
|
[
|
||||||
|
{
|
||||||
|
goPackagePath = "github.com/maruel/panicparse";
|
||||||
|
fetch = {
|
||||||
|
type = "git";
|
||||||
|
url = "https://github.com/maruel/panicparse";
|
||||||
|
rev = "ae43f192cef2add653fe1481a3070ed00a4a6981";
|
||||||
|
sha256 = "11q8v4adbrazqvh24235s5nifck0d1083gbwv4dh5lhd10xlwdvr";
|
||||||
|
};
|
||||||
|
}
|
||||||
|
{
|
||||||
|
goPackagePath = "github.com/mattn/go-runewidth";
|
||||||
|
fetch = {
|
||||||
|
type = "git";
|
||||||
|
url = "https://github.com/mattn/go-runewidth";
|
||||||
|
rev = "97311d9f7767e3d6f422ea06661bc2c7a19e8a5d";
|
||||||
|
sha256 = "0dxlrzn570xl7gb11hjy1v4p3gw3r41yvqhrffgw95ha3q9p50cg";
|
||||||
|
};
|
||||||
|
}
|
||||||
|
{
|
||||||
|
goPackagePath = "github.com/mitchellh/go-wordwrap";
|
||||||
|
fetch = {
|
||||||
|
type = "git";
|
||||||
|
url = "https://github.com/mitchellh/go-wordwrap";
|
||||||
|
rev = "ad45545899c7b13c020ea92b2072220eefad42b8";
|
||||||
|
sha256 = "0ny1ddngvwfj3njn7pmqnf3l903lw73ynddw15x8ymp7hidv27v9";
|
||||||
|
};
|
||||||
|
}
|
||||||
|
{
|
||||||
|
goPackagePath = "github.com/nsf/termbox-go";
|
||||||
|
fetch = {
|
||||||
|
type = "git";
|
||||||
|
url = "https://github.com/nsf/termbox-go";
|
||||||
|
rev = "4163cd39dda1c0dda883a713640bc01e08951c24";
|
||||||
|
sha256 = "1vzrhxf8823lrnwf1bfyxwlm52pph5iq2hgr1d0n07v8kjgqkrmx";
|
||||||
|
};
|
||||||
|
}
|
||||||
|
]
|
@ -1,5 +1,5 @@
|
|||||||
{ stdenv, fetchurl, pkgconfig, openssl, db48, boost
|
{ stdenv, fetchurl, pkgconfig, openssl, db48, boost
|
||||||
, zlib, miniupnpc, qt4, qmake4Hook, utillinux, protobuf, qrencode
|
, zlib, qt4, qmake4Hook, utillinux, protobuf, qrencode
|
||||||
, withGui }:
|
, withGui }:
|
||||||
|
|
||||||
with stdenv.lib;
|
with stdenv.lib;
|
||||||
@ -13,10 +13,12 @@ stdenv.mkDerivation rec{
|
|||||||
sha256 = "1iyh6dqrg0mirwci5br5n5qw3ghp2cs23wd8ygr56bh9ml4dr1m8";
|
sha256 = "1iyh6dqrg0mirwci5br5n5qw3ghp2cs23wd8ygr56bh9ml4dr1m8";
|
||||||
};
|
};
|
||||||
|
|
||||||
buildInputs = [ pkgconfig openssl db48 boost zlib
|
buildInputs = [ pkgconfig openssl db48 boost zlib utillinux protobuf ]
|
||||||
miniupnpc utillinux protobuf ]
|
|
||||||
++ optionals withGui [ qt4 qmake4Hook qrencode ];
|
++ optionals withGui [ qt4 qmake4Hook qrencode ];
|
||||||
|
|
||||||
|
qmakeFlags = ["USE_UPNP=-"];
|
||||||
|
makeFlags = ["USE_UPNP=-"];
|
||||||
|
|
||||||
configureFlags = [ "--with-boost-libdir=${boost.out}/lib" ]
|
configureFlags = [ "--with-boost-libdir=${boost.out}/lib" ]
|
||||||
++ optionals withGui [ "--with-gui=qt4" ];
|
++ optionals withGui [ "--with-gui=qt4" ];
|
||||||
|
|
||||||
|
@ -1,5 +1,5 @@
|
|||||||
{ stdenv, fetchurl, pkgconfig, openssl, db48, boost
|
{ stdenv, fetchurl, pkgconfig, openssl, db48, boost
|
||||||
, zlib, miniupnpc, qt4, qmake4Hook, utillinux, protobuf, qrencode
|
, zlib, qt4, qmake4Hook, utillinux, protobuf, qrencode
|
||||||
, withGui }:
|
, withGui }:
|
||||||
|
|
||||||
with stdenv.lib;
|
with stdenv.lib;
|
||||||
@ -13,8 +13,10 @@ stdenv.mkDerivation rec{
|
|||||||
sha256 = "0cixnkici74204s9d5iqj5sccib5a8dig2p2fp1axdjifpg787i3";
|
sha256 = "0cixnkici74204s9d5iqj5sccib5a8dig2p2fp1axdjifpg787i3";
|
||||||
};
|
};
|
||||||
|
|
||||||
buildInputs = [ pkgconfig openssl db48 boost zlib
|
qmakeFlags = ["USE_UPNP=-"];
|
||||||
miniupnpc utillinux protobuf ]
|
makeFlags = ["USE_UPNP=-"];
|
||||||
|
|
||||||
|
buildInputs = [ pkgconfig openssl db48 boost zlib utillinux protobuf ]
|
||||||
++ optionals withGui [ qt4 qmake4Hook qrencode ];
|
++ optionals withGui [ qt4 qmake4Hook qrencode ];
|
||||||
|
|
||||||
configureFlags = [ "--with-boost-libdir=${boost.out}/lib" ]
|
configureFlags = [ "--with-boost-libdir=${boost.out}/lib" ]
|
||||||
|
@ -1,5 +1,5 @@
|
|||||||
{ stdenv, cmake, fetchFromGitHub, file, gcc_multi, libX11, makeWrapper
|
{ stdenv, cmake, fetchFromGitHub, file, gcc_multi, libX11, makeWrapper
|
||||||
, overrideCC, qt5, requireFile, unzip, wineStable
|
, overrideCC, qt5, requireFile, unzip, wine
|
||||||
}:
|
}:
|
||||||
|
|
||||||
let
|
let
|
||||||
@ -26,7 +26,8 @@ let
|
|||||||
installPhase = "cp -r . $out";
|
installPhase = "cp -r . $out";
|
||||||
};
|
};
|
||||||
|
|
||||||
wine-wow64 = wineStable.override {
|
wine-wow64 = wine.override {
|
||||||
|
wineRelease = "stable";
|
||||||
wineBuild = "wineWow";
|
wineBuild = "wineWow";
|
||||||
};
|
};
|
||||||
|
|
||||||
|
@ -16,7 +16,7 @@ let
|
|||||||
# "git describe" when _not_ on an annotated tag(!): MAJOR.MINOR-REV-HASH.
|
# "git describe" when _not_ on an annotated tag(!): MAJOR.MINOR-REV-HASH.
|
||||||
|
|
||||||
# Version to build.
|
# Version to build.
|
||||||
tag = "5.8";
|
tag = "5.10";
|
||||||
|
|
||||||
in
|
in
|
||||||
|
|
||||||
@ -25,8 +25,8 @@ stdenv.mkDerivation rec {
|
|||||||
|
|
||||||
src = fetchgit {
|
src = fetchgit {
|
||||||
url = "git://git.ardour.org/ardour/ardour.git";
|
url = "git://git.ardour.org/ardour/ardour.git";
|
||||||
rev = "e5c6f16126e0901654b09ecce990554b1ff73833";
|
rev = "9c629c0c76808cc3e8f05e43bc760f849566dce6";
|
||||||
sha256 = "1lcvslrcw6g4kp9w0h1jx46x6ilz4nzz0k2yrw4gd545k1rwx0c1";
|
sha256 = "062igiaaj18kbismrpzbafyq1ryyqj3lh0ajqqs2s8ms675x33sl";
|
||||||
};
|
};
|
||||||
|
|
||||||
buildInputs =
|
buildInputs =
|
||||||
|
@@ -1,13 +1,14 @@
-{ stdenv, fetchgit, cairomm, cmake, libjack2, libpthreadstubs, libXdmcp, libxshmfence, libsndfile, lv2, ntk, pkgconfig }:
+{ stdenv, fetchFromGitHub , cairomm, cmake, libjack2, libpthreadstubs, libXdmcp, libxshmfence, libsndfile, lv2, ntk, pkgconfig }:
 
 stdenv.mkDerivation rec {
-  name = "artyFX-git-${version}";
-  version = "2015-05-07";
+  name = "artyFX-${version}";
+  version = "1.3";
 
-  src = fetchgit {
-    url = "https://github.com/harryhaaren/openAV-ArtyFX.git";
-    rev = "3a8cb9a5e4ffaf27a497a31cc9cd6f2e79622d5b";
-    sha256 = "0nsmycm64a686ysfnmdvnaazijvfj90z5wyp96kyr81nsrbcv2ij";
+  src = fetchFromGitHub {
+    owner = "openAVproductions";
+    repo = "openAV-ArtyFX";
+    rev = "release-${version}";
+    sha256 = "012hcy1mxl7gs2lipfcqp5x0xv1azb9hjrwf0h59yyxnzx96h7c9";
   };
 
   buildInputs = [ cairomm cmake libjack2 libpthreadstubs libXdmcp libxshmfence libsndfile lv2 ntk pkgconfig ];
@@ -1,6 +1,6 @@
 {
-  stdenv, lib, fetchurl,
-  gettext, makeQtWrapper, pkgconfig,
+  mkDerivation, lib, fetchurl,
+  gettext, pkgconfig,
   qtbase,
   alsaLib, curl, faad2, ffmpeg, flac, fluidsynth, gdk_pixbuf, lame, libbs2b,
   libcddb, libcdio082, libcue, libjack2, libmad, libmcs, libmms, libmodplug,
@@ -24,16 +24,14 @@ let
   };
 in
 
-stdenv.mkDerivation {
+mkDerivation {
   inherit version;
   name = "audacious-qt5-${version}";
 
   sourceFiles = lib.attrValues sources;
   sourceRoots = lib.attrNames sources;
 
-  nativeBuildInputs = [
-    gettext makeQtWrapper pkgconfig
-  ];
+  nativeBuildInputs = [ gettext pkgconfig ];
 
   buildInputs = [
     # Core dependencies
@@ -68,15 +66,9 @@ stdenv.mkDerivation {
       fi
 
     done
-
-    source $stdenv/setup
-    wrapQtProgram $out/bin/audacious
-    wrapQtProgram $out/bin/audtool
   '';
 
-  enableParallelBuilding = true;
-
-  meta = with stdenv.lib; {
+  meta = with lib; {
     description = "Audio player";
     homepage = http://audacious-media-player.org/;
     maintainers = with maintainers; [ ttuegel ];
@@ -27,7 +27,7 @@ stdenv.mkDerivation rec {
     description = "A range of synthesiser, electric piano and organ emulations";
     homepage = http://bristol.sourceforge.net;
     license = licenses.gpl3;
-    platforms = platforms.linux;
+    platforms = ["x86_64-linux" "i686-linux"];
     maintainers = [ maintainers.goibhniu ];
   };
 }
@@ -1,6 +1,6 @@
 { stdenv, fetchFromGitHub, cmake, vlc
 , withQt4 ? false, qt4
-, withQt5 ? true, qtbase, qtsvg, qttools, makeQtWrapper
+, withQt5 ? true, qtbase, qtsvg, qttools
 
 # Cantata doesn't build with cdparanoia enabled so we disable that
 # default for now until I (or someone else) figure it out.
@@ -63,8 +63,6 @@ stdenv.mkDerivation rec {
     ++ stdenv.lib.optional withMusicbrainz libmusicbrainz5
     ++ stdenv.lib.optional (withTaglib && withDevices) udisks2;
 
-  nativeBuildInputs = stdenv.lib.optional withQt5 makeQtWrapper;
-
   cmakeFlags = stdenv.lib.flatten [
     (fstat withQt5 "QT5")
     (fstats withTaglib [ "TAGLIB" "TAGLIB_EXTRAS" ])
@@ -88,10 +86,6 @@ stdenv.mkDerivation rec {
     sed -i -e 's/STRLESS/VERSION_LESS/g' cmake/FindTaglib.cmake
   '';
 
-  postInstall = stdenv.lib.optionalString withQt5 ''
-    wrapQtProgram "$out/bin/cantata"
-  '';
-
   meta = with stdenv.lib; {
     homepage = https://github.com/cdrummond/cantata;
     description = "A graphical client for MPD";
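The audacious and cantata hunks above (and the dfasma hunk further down) drop `makeQtWrapper`/`wrapQtProgram`, leaving binary wrapping to the newer Qt packaging infrastructure. A rough sketch of the shape such an expression ends up with after the migration — the package name, URL and hash here are placeholders, not taken from this commit:

# Sketch only: a qmake-based Qt application with no manual wrapQtProgram step.
{ stdenv, fetchurl, qmake, qtbase }:

stdenv.mkDerivation rec {
  name = "example-qt-app-1.0";                    # placeholder

  src = fetchurl {
    url = "https://example.org/${name}.tar.gz";   # placeholder
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };

  nativeBuildInputs = [ qmake ];   # the qmake hook drives the configure phase
  buildInputs = [ qtbase ];        # wrapping is handled by the Qt infrastructure
}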
@@ -1,20 +1,20 @@
-{ stdenv, fetchFromGitHub, cmake, libsndfile, flex, bison, boost
+{ stdenv, fetchFromGitHub, cmake, libsndfile, libsamplerate, flex, bison, boost, gettext
 , alsaLib ? null
 , libpulseaudio ? null
-, tcltk ? null
+, libjack2 ? null
 , liblo ? null
-# maybe csound can be compiled with support for those, see configure output
-# , ladspa ? null
-# , fluidsynth ? null
-# , jack ? null
-# , gmm ? null
-# , wiiuse ? null
+, ladspa-sdk ? null
+, fluidsynth ? null
+# , gmm ? null # opcodes don't build with gmm 5.1
+, eigen ? null
+, curl ? null
+, tcltk ? null
+, fltk ? null
 }:
 
 stdenv.mkDerivation rec {
-  name = "csound-6.08.1";
-  version = "6.08.1";
+  name = "csound-${version}";
+  version = "6.09.0";
 
   enableParallelBuilding = true;
 
@@ -24,11 +24,18 @@ stdenv.mkDerivation rec {
     owner = "csound";
     repo = "csound";
     rev = version;
-    sha256 = "03xnva17sw35ga3n96x1zdfgw913dga1hccly85wzfn0kxz4rld9";
+    sha256 = "1vfb0mab89psfwidadjrn5mbzq3bhjbyrrmyp98yp0xm6a8cssih";
   };
 
-  nativeBuildInputs = [ cmake flex bison ];
-  buildInputs = [ libsndfile alsaLib libpulseaudio tcltk boost liblo ];
+  cmakeFlags = [ "-DBUILD_CSOUND_AC=0" ] # fails to find Score.hpp
+    ++ stdenv.lib.optional (libjack2 != null) "-DJACK_HEADER=${libjack2}/include/jack/jack.h";
+
+  nativeBuildInputs = [ cmake flex bison gettext ];
+  buildInputs = [ libsndfile libsamplerate boost ]
+    ++ builtins.filter (optional: optional != null) [
+      alsaLib libpulseaudio libjack2
+      liblo ladspa-sdk fluidsynth eigen
+      curl tcltk fltk ];
 
   meta = with stdenv.lib; {
     description = "Sound design, audio synthesis, and signal processing system, providing facilities for music composition and performance on all major operating systems and platforms";
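The new csound expression collects its optional dependencies with `builtins.filter`, so `? null` arguments that were not supplied simply drop out of `buildInputs`. A tiny evaluable sketch of that idiom, with stand-in values:

# Sketch: null entries are filtered out of the dependency list.
# Evaluate with: nix-instantiate --eval filter-sketch.nix
let
  alsaLib = "alsaLib";   # stand-in for a dependency that was passed in
  fluidsynth = null;     # stand-in for an optional dependency left at its null default
in
  [ "libsndfile" "boost" ]
  ++ builtins.filter (dep: dep != null) [ alsaLib fluidsynth ]
  # => [ "libsndfile" "boost" "alsaLib" ]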
@@ -1,4 +1,4 @@
-{ stdenv, fetchFromGitHub, fftw, libsndfile, qtbase, qtmultimedia, qmakeHook, makeQtWrapper }:
+{ stdenv, fetchFromGitHub, fftw, libsndfile, qtbase, qtmultimedia, qmake }:
 
 let
 
@@ -37,9 +37,9 @@ in stdenv.mkDerivation rec {
     owner = "gillesdegottex";
   };
 
-  buildInputs = [ fftw libsndfile qtbase qtmultimedia qmakeHook ];
+  buildInputs = [ fftw libsndfile qtbase qtmultimedia ];
 
-  nativeBuildInputs = [ makeQtWrapper ];
+  nativeBuildInputs = [ qmake ];
 
   postPatch = ''
     substituteInPlace dfasma.pro --replace '$$DFASMAVERSIONGITPRO' '${version}'
@@ -53,10 +53,6 @@ in stdenv.mkDerivation rec {
 
   enableParallelBuilding = true;
 
-  postInstall = ''
-    wrapQtProgram "$out/bin/dfasma"
-  '';
-
   meta = with stdenv.lib; {
     description = "Analyse and compare audio files in time and frequency";
     longDescription = ''
@@ -1,10 +1,10 @@
 { stdenv, fetchurl, cmake, fftw, gtkmm2, libxcb, lv2, pkgconfig, xorg }:
 stdenv.mkDerivation rec {
   name = "eq10q-${version}";
-  version = "2.1";
+  version = "2.2";
   src = fetchurl {
     url = "mirror://sourceforge/project/eq10q/${name}.tar.gz";
-    sha256 = "0brrr6ydsppi4zzn3vcgl0zgq5r8jmlcap1hpr3k43yvlwggb880";
+    sha256 = "16mhcav8gwkp29k9ki4dlkajlcgh1i2wvldabxb046d37dq4qzrk";
   };
 
   buildInputs = [ cmake fftw gtkmm2 libxcb lv2 pkgconfig xorg.libpthreadstubs xorg.libXdmcp xorg.libxshmfence ];
@@ -1,6 +1,6 @@
 { stdenv
 , coreutils
-, fetchurl
+, fetchFromGitHub
 , makeWrapper
 , pkgconfig
 , clang
@@ -16,11 +16,13 @@ with stdenv.lib.strings;
 
 let
 
-  version = "2.0.a51";
+  version = "2.1.0";
 
-  src = fetchurl {
-    url = "mirror://sourceforge/project/faudiostream/faust-${version}.tgz";
-    sha256 = "1yryjqfqmxs7lxy95hjgmrncvl9kig3rcsmg0v49ghzz7vs7haxf";
+  src = fetchFromGitHub {
+    owner = "grame-cncm";
+    repo = "faust";
+    rev = "v${builtins.replaceStrings ["."] ["-"] version}";
+    sha256 = "1pmiwy287g79ipz9pppnkfrdgls3l912kpkr7dfymk9wk5y5di9m";
   };
 
   meta = with stdenv.lib; {
@@ -67,7 +69,7 @@ let
      #
      # For now, fix this by 1) pinning the llvm version; 2) manually setting LLVM_VERSION
      # to something the makefile will recognize.
-     sed '52iLLVM_VERSION=3.8.0' -i compiler/Makefile.unix
+     sed '52iLLVM_VERSION=${stdenv.lib.getVersion llvm}' -i compiler/Makefile.unix
    '';
 
    # Remove most faust2appl scripts since they won't run properly
@@ -194,8 +196,8 @@ let
      # export parts of the build environment
      for script in "$out"/bin/*; do
        wrapProgram "$script" \
-        --set FAUSTLIB "${faust}/lib/faust" \
-        --set FAUST_LIB_PATH "${faust}/lib/faust" \
+        --set FAUSTLIB "${faust}/share/faust" \
+        --set FAUST_LIB_PATH "${faust}/share/faust" \
         --set FAUSTINC "${faust}/include/faust" \
         --prefix PATH : "$PATH" \
         --prefix PKG_CONFIG_PATH : "$PKG_CONFIG_PATH" \
@@ -1,4 +1,4 @@
-{ stdenv, fetchFromGitHub, fftw, freeglut, mesa_glu, qtbase, qtmultimedia, qmakeHook
+{ stdenv, fetchFromGitHub, fftw, freeglut, mesa_glu, qtbase, qtmultimedia, qmake
 , alsaSupport ? true, alsaLib ? null
 , jackSupport ? false, libjack2 ? null
 , portaudioSupport ? false, portaudio ? null }:
@@ -20,7 +20,7 @@ stdenv.mkDerivation rec {
     owner = "gillesdegottex";
   };
 
-  nativeBuildInputs = [ qmakeHook ];
+  nativeBuildInputs = [ qmake ];
   buildInputs = [ fftw qtbase qtmultimedia ]
     ++ optionals alsaSupport [ alsaLib ]
     ++ optionals jackSupport [ libjack2 ]
@@ -4,7 +4,7 @@
 }:
 
 let
-  version = "4.2.0";
+  version = "4.3.0";
 
   deps = [
     alsaLib
@@ -46,7 +46,7 @@ stdenv.mkDerivation {
 
   src = fetchurl {
     url = "https://github.com/MarshallOfSound/Google-Play-Music-Desktop-Player-UNOFFICIAL-/releases/download/v${version}/google-play-music-desktop-player_${version}_amd64.deb";
-    sha256 = "0n59b73jc6b86p5063xz7n0z48wy9mzqcx0l34av2hqkx6wcb2h8";
+    sha256 = "0mbrfnsnajmpwyqyrjmcv84ywzimjmm2b8faxqiwfcikdgpm9amb";
   };
 
   dontBuild = true;
@@ -1,4 +1,4 @@
-{ stdenv, fetchFromGitHub, alsaLib, pkgconfig, qtbase, qtscript, qmakeHook
+{ stdenv, fetchFromGitHub, alsaLib, pkgconfig, qtbase, qtscript, qmake
 }:
 
 stdenv.mkDerivation rec {
@@ -11,7 +11,8 @@ stdenv.mkDerivation rec {
     sha256 = "184ydb9f1303v332k5k3f1ki7cb6nkxhh6ij0yn72v7dp7figrgj";
   };
 
-  buildInputs = [ alsaLib pkgconfig qtbase qtscript qmakeHook ];
+  nativeBuildInputs = [ qmake ];
+  buildInputs = [ alsaLib pkgconfig qtbase qtscript ];
 
   qmakeFlags = [ "PREFIX=/" ];
 
@@ -3,11 +3,11 @@
 
 stdenv.mkDerivation rec {
   name = "jalv-${version}";
-  version = "1.4.6";
+  version = "1.6.0";
 
   src = fetchurl {
     url = "http://download.drobilla.net/${name}.tar.bz2";
-    sha256 = "1f1hcq74n3ziw8bk97mn5a1vgw028dxikv3fchaxd430pbbhqgl9";
+    sha256 = "1x2wpzzx2cgvz3dgdcgsj8dr0w3zsasy62mvl199bsdj5fbjaili";
   };
 
   buildInputs = [
@@ -1,4 +1,4 @@
-{ stdenv, fetchFromGitHub, libav_0_8, libkeyfinder, qtbase, qtxmlpatterns, qmakeHook, taglib }:
+{ stdenv, fetchFromGitHub, libav_0_8, libkeyfinder, qtbase, qtxmlpatterns, qmake, taglib }:
 
 stdenv.mkDerivation rec {
   name = "keyfinder-${version}";
@@ -11,7 +11,8 @@ stdenv.mkDerivation rec {
     owner = "ibsh";
   };
 
-  buildInputs = [ libav_0_8 libkeyfinder qtbase qtxmlpatterns qmakeHook taglib ];
+  nativeBuildInputs = [ qmake ];
+  buildInputs = [ libav_0_8 libkeyfinder qtbase qtxmlpatterns taglib ];
 
   postPatch = ''
     substituteInPlace is_KeyFinder.pro \
@@ -2,13 +2,13 @@
 
 stdenv.mkDerivation rec {
   name = "lv2bm-${version}";
-  version = "git-2015-04-10";
+  version = "git-2015-11-29";
 
   src = fetchFromGitHub {
-    owner = "portalmod";
+    owner = "moddevices";
     repo = "lv2bm";
-    rev = "08681624fc13eb700ec2b5cabedbffdf095e28b3";
-    sha256 = "11pi97jy4f4c3vsaizc8a6sw9hnhnanj6y1fil33yd9x7f8f0kbj";
+    rev = "e844931503b7597f45da6d61ff506bb9fca2e9ca";
+    sha256 = "1rrz5sp04zjal6v34ldkl6fjj9xqidb8xm1iscjyljf6z4l516cx";
   };
 
   buildInputs = [ glib lilv lv2 pkgconfig serd sord sratom ];
@@ -1,13 +1,13 @@
 { stdenv, fetchFromGitHub, faust2jaqt, faust2lv2 }:
 stdenv.mkDerivation rec {
   name = "faustCompressors-v${version}";
-  version = "1.1.1";
+  version = "1.2";
 
   src = fetchFromGitHub {
     owner = "magnetophon";
     repo = "faustCompressors";
     rev = "v${version}";
-    sha256 = "0mkram2hm7i5za7pfn5crh2arbajk8praksxzgjx90rrxwl1y3d1";
+    sha256 = "144f6g17q4m50kxzdncsfzdyycdfprnpwdaxcwgxj4jky1xsha1d";
   };
 
   buildInputs = [ faust2jaqt faust2lv2 ];
@@ -15,6 +15,7 @@ stdenv.mkDerivation rec {
   buildPhase = ''
     for f in *.dsp;
     do
+      echo "compiling standalone from" $f
       faust2jaqt -time -double -t 99999 $f
     done
 
@@ -22,6 +23,7 @@ stdenv.mkDerivation rec {
 
     for f in *.dsp;
     do
+      echo "compiling plugin from" $f
       faust2lv2 -time -double -gui -t 99999 $f
     done
   '';
@@ -30,6 +32,7 @@ stdenv.mkDerivation rec {
     mkdir -p $out/lib/lv2
     mv *.lv2/ $out/lib/lv2
     mkdir -p $out/bin
+    rm newlib.sh
     for f in $(find . -executable -type f);
     do
       cp $f $out/bin/
@@ -2,11 +2,11 @@
 
 pythonPackages.buildPythonApplication rec {
   name = "mopidy-spotify-${version}";
-  version = "3.0.0";
+  version = "3.1.0";
 
   src = fetchurl {
     url = "https://github.com/mopidy/mopidy-spotify/archive/v${version}.tar.gz";
-    sha256 = "0w7bhq6nz2xly5g72xd98r7lyzmx7nzfdpghk7vklkx0x41qccz8";
+    sha256 = "1mh87w4j0ypvsrnax7kkjgfxfpnw3l290jvfzg56b8qlwf20khjl";
   };
 
   propagatedBuildInputs = [ mopidy pythonPackages.pyspotify ];
@@ -1,27 +1,32 @@
-{ stdenv, fetchurl, mpd_clientlib }:
+{ stdenv, fetchFromGitHub, autoreconfHook, pkgconfig, mpd_clientlib }:
 
 stdenv.mkDerivation rec {
-  version = "0.27";
   name = "mpc-${version}";
+  version = "0.28";
 
-  src = fetchurl {
-    url = "http://www.musicpd.org/download/mpc/0/${name}.tar.xz";
-    sha256 = "0r10wsqxsi07gns6mfnicvpci0sbwwj4qa9iyr1ysrgadl5bx8j5";
+  src = fetchFromGitHub {
+    owner = "MusicPlayerDaemon";
+    repo = "mpc";
+    rev = "v${version}";
+    sha256 = "1g8i4q5xsqdhidyjpvj6hzbhxacv27cb47ndv9k68whd80c5f9n9";
   };
 
   buildInputs = [ mpd_clientlib ];
 
-  preConfigure =
-    ''
-      export LIBMPDCLIENT_LIBS=${mpd_clientlib}/lib/libmpdclient.${if stdenv.isDarwin then mpd_clientlib.majorVersion + ".dylib" else "so." + mpd_clientlib.majorVersion + ".0." + mpd_clientlib.minorVersion}
-      export LIBMPDCLIENT_CFLAGS=${mpd_clientlib}
-    '';
+  nativeBuildInputs = [ autoreconfHook pkgconfig ];
+
+  enableParallelBuilding = true;
+
+  preConfigure = ''
+    export LIBMPDCLIENT_LIBS=${mpd_clientlib}/lib/libmpdclient.${if stdenv.isDarwin then mpd_clientlib.majorVersion + ".dylib" else "so." + mpd_clientlib.majorVersion + ".0." + mpd_clientlib.minorVersion}
+    export LIBMPDCLIENT_CFLAGS=${mpd_clientlib}
+  '';
 
   meta = with stdenv.lib; {
     description = "A minimalist command line interface to MPD";
     homepage = http://www.musicpd.org/clients/mpc/;
     license = licenses.gpl2;
-    maintainers = [ maintainers.algorith ];
+    maintainers = with maintainers; [ algorith ];
     platforms = with platforms; linux ++ darwin;
   };
 }
@@ -1,16 +1,16 @@
 { stdenv, fetchzip, cmake, pkgconfig
 , alsaLib, freetype, libjack2, lame, libogg, libpulseaudio, libsndfile, libvorbis
-, portaudio, qtbase, qtdeclarative, qtenginio, qtscript, qtsvg, qttools
+, portaudio, qtbase, qtdeclarative, qtscript, qtsvg, qttools
 , qtwebkit, qtxmlpatterns
 }:
 
 stdenv.mkDerivation rec {
   name = "musescore-${version}";
-  version = "2.0.3";
+  version = "2.1.0";
 
   src = fetchzip {
     url = "https://github.com/musescore/MuseScore/archive/v${version}.tar.gz";
-    sha256 = "067f4li48qfhz2barj70zpf2d2mlii12npx07jx9xjkkgz84z4c9";
+    sha256 = "1rlxz2nzilz7n6c0affnjk2wcxl4b8949qxs0xi555gxg01kybls";
   };
 
   hardeningDisable = [ "relro" "bindnow" ];
@@ -31,7 +31,6 @@ stdenv.mkDerivation rec {
   ];
 
   preBuild = ''
-    make lupdate
     make lrelease
   '';
 
@@ -45,7 +44,7 @@ stdenv.mkDerivation rec {
 
   buildInputs = [
     alsaLib libjack2 freetype lame libogg libpulseaudio libsndfile libvorbis
-    portaudio qtbase qtdeclarative qtenginio qtscript qtsvg qttools
+    portaudio qtbase qtdeclarative qtscript qtsvg qttools
     qtwebkit qtxmlpatterns #tesseract
   ];
 
@@ -56,6 +55,5 @@ stdenv.mkDerivation rec {
     platforms = platforms.linux;
     maintainers = [ maintainers.vandenoever ];
     repositories.git = https://github.com/musescore/MuseScore;
-    broken = true;
   };
 }
@@ -1,19 +1,27 @@
-{ stdenv, fetchurl, pkgconfig, glib, ncurses, mpd_clientlib, libintlOrEmpty }:
+{ stdenv, fetchFromGitHub, autoreconfHook, pkgconfig, glib, ncurses, mpd_clientlib, libintlOrEmpty }:
 
 stdenv.mkDerivation rec {
-  version = "0.24";
   name = "ncmpc-${version}";
+  version = "0.27";
 
-  src = fetchurl {
-    url = "http://www.musicpd.org/download/ncmpc/0/ncmpc-${version}.tar.xz";
-    sha256 = "1sf3nirs3mcx0r5i7acm9bsvzqzlh730m0yjg6jcyj8ln6r7cvqf";
+  src = fetchFromGitHub {
+    owner = "MusicPlayerDaemon";
+    repo = "ncmpc";
+    rev = "v${version}";
+    sha256 = "0sfal3wadqvy6yas4xzhw35awdylikci8kbdcmgm4l2afpmc1lrr";
   };
 
-  buildInputs = [ pkgconfig glib ncurses mpd_clientlib ]
-    ++ libintlOrEmpty;
+  buildInputs = [ glib ncurses mpd_clientlib ];
+  # ++ libintlOrEmpty;
+  nativeBuildInputs = [ autoreconfHook pkgconfig ];
 
   NIX_LDFLAGS = stdenv.lib.optionalString stdenv.isDarwin "-lintl";
 
+  # without this, po/Makefile.in.in is not being created
+  preAutoreconf = ''
+    ./autogen.sh
+  '';
+
   configureFlags = [
     "--enable-colors"
     "--enable-lyrics-screen"
@@ -1,13 +1,14 @@
 {stdenv, fetchurl, libogg, libao, pkgconfig, libopus, flac}:
 
 stdenv.mkDerivation rec {
-  name = "opus-tools-0.1.9";
+  name = "opus-tools-0.1.10";
   src = fetchurl {
     url = "http://downloads.xiph.org/releases/opus/${name}.tar.gz";
-    sha256 = "0fk4nknvl111k89j5yckmyrh6b2wvgyhrqfncp7rig3zikbkv1xi";
+    sha256 = "135jfb9ny3xvd27idsxj7j5ns90lslbyrq70cq3bfwcls4r7add2";
   };
 
-  buildInputs = [ libogg libao pkgconfig libopus flac ];
+  nativeBuildInputs = [ pkgconfig ];
+  buildInputs = [ libogg libao libopus flac ];
 
   meta = {
     description = "Tools to work with opus encoded audio streams";
@@ -1,14 +1,14 @@
 { stdenv, fetchurl, pkgconfig, alsaLib, libjack2, dbus, qtbase, qttools, qtx11extras }:
 
 stdenv.mkDerivation rec {
-  version = "0.4.4";
+  version = "0.4.5";
   name = "qjackctl-${version}";
 
   # some dependencies such as killall have to be installed additionally
 
   src = fetchurl {
     url = "mirror://sourceforge/qjackctl/${name}.tar.gz";
-    sha256 = "19bbljb3iz5ss4s5fmra1dxabg2fnp61sa51d63zsm56xkvv47ak";
+    sha256 = "1dsavjfzz5bpzc80mvfs940w9f9f47cf4r9cqxnaqrl4xilsa3f5";
   };
 
   buildInputs = [
@@ -1,12 +1,12 @@
 { stdenv, fetchurl, pkgconfig, qt5, alsaLib, libjack2 }:
 
 stdenv.mkDerivation rec {
-  version = "0.4.2";
+  version = "0.4.3";
   name = "qmidinet-${version}";
 
   src = fetchurl {
     url = "mirror://sourceforge/qmidinet/${name}.tar.gz";
-    sha256 = "1sdnd189db44xhl9p8pd8h4bsy8s0bn1y64lrdq7nb21mwg8ymcs";
+    sha256 = "1qhxhlvi6bj2a06i48pw81zf5vd36idxbq04g30794yhqcimh6vw";
   };
 
   hardeningDisable = [ "format" ];
Some files were not shown because too many files have changed in this diff.