It's not by any means exhaustive, but we're still going to change the
implementation, so let's just use this as a starting point.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
These values match against the client IDs only, so let's rename the
option to something that actually reflects that. Having client.cert in
the same namespace could also lead to confusion, because the
client.cert setting is for the *debugging* client only.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Referring to the GnuTLS documentation isn't very nice if the user has to
use a search engine to find it. So let's link to it directly.
The type was "str" before, but it's actually a colon-separated string,
so if we set the option in multiple modules, the result is one
concatenated string.
I know there is types.envVar, which is the same as separatedString ":",
but I found that it could confuse readers of the Taskserver module.
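For illustration, such an option could be declared roughly like this
(the option name and example value are assumptions, not necessarily the
module's actual interface):

ciphers = mkOption {
  type = types.separatedString ":";
  example = "NORMAL:-VERS-SSL3.0";
  description = "GnuTLS cipher priority string used by Taskserver.";
};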
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We already document that we allow special values such as "all" and
"none", but the type doesn't represent that. So let's use an enum in
conjunction with a loeOf type so that this becomes clear.
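A rough sketch of how enum and loeOf might be combined (the option name
and the exact composition are assumptions):

client.allow = mkOption {
  type = with types; loeOf (either (enum [ "all" "none" ]) str);
  default = "all";
  description = "Clients to allow; \"all\" and \"none\" are special values.";
};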
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The option authzldapauthoritative had been removed in Apache 2.4.
I pushed this into 16.03 instead of master first. My fault.
(cherry picked from commit 516f47efef)
Using nixos-taskserver is more verbose but less cryptic, and I think it
fits the purpose better, because it can't be mistaken for a wrapper
around the taskdctl command from the upstream project;
nixos-taskserver shares no commonalities with it.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
With a cluttered-up module source it's really a pain to navigate, so
it's a good idea to move it into another file.
No changes in functionality here, just splitting up the files and fixing
references.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
A small test which checks whether tasks can be synced using the
Taskserver.
It doesn't test group functionality, because I suspect that groups are
not yet implemented upstream. I haven't done an in-depth check on that,
but I couldn't find a way of linking groups to users yet, so I guess
this will land in one of the next releases of Taskwarrior/Taskserver.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Finally, this is where we declaratively set up our organisations and
users/groups, which looks like this in the system configuration:
services.taskserver.organisations.NixOS.users = [ "alice" "bob" ];
This automatically sets up "alice" and "bob" for the "NixOS"
organisation, generates the required client keys and signs them via the
CA.
However, we still need to use nixos-taskdctl export-user in order to
import these certificates on the client.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
It's a helper for NixOS systems that makes it easier to handle CA
certificate signing, similar to what taskd provides, but preseeded
with the values from the system configuration.
The tool is very limited at the moment and only allows *adding*
organisations, users and groups. Deletion and suspension, however, are
much simpler to implement, because we don't need to handle certificate
signing there.
Another limitation is that we don't take into account whether
certificates and keys are already set in the system configuration; if
they are set, the tool will fail spectacularly.
To pass commands to taskd, we're using a small C program which
setuid()s and setgid()s to the Taskserver user and group, because
runuser(1) needs PAM (quite pointless if you're already root) and su(1)
doesn't allow setting the group and setgid()s to the default group of
the user, so it doesn't even work in conjunction with sg(1).
In summary, we now have a shiny nixos-taskdctl command, which lets us do
things like:
nixos-taskdctl add-org NixOS
nixos-taskdctl add-user NixOS alice
nixos-taskdctl export-user NixOS alice
The last command writes a series of shell commands to stdout, which can
then be imported on the client by piping them into a shell, for example
via SSH:
ssh root@server nixos-taskdctl export-user NixOS alice | sh
Of course, in terms of security we need to improve this even further:
we should generate the private key on the client and just send a CSR to
the server, so that we don't need to push any secrets over the wire.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We want to declaratively specify users and organisations, so let's add
another module option "organisations", which allows us to specify users,
groups and of course organisations.
The implementation of this is not yet done; this is just the
boilerplate.
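Roughly, the boilerplate could look like this (a sketch; the final
option and submodule names may differ):

organisations = mkOption {
  type = types.attrsOf (types.submodule {
    options.users = mkOption {
      type = types.listOf types.str;
      default = [];
    };
    options.groups = mkOption {
      type = types.listOf types.str;
      default = [];
    };
  });
  default = {};
};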
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Using just the host for the common name *and* for the listening address
is quite a bad idea if you want to listen on something like :: or an
internal IP address which is proxied/tunneled to the outside.
Hence this separates host and fqdn.
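For example (assuming these option names; the point is that the
listening address and the certificate's common name are now
independent):

services.taskserver.server.host = "::";
services.taskserver.fqdn = "task.example.org";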
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The server starts up without that option anyway, but it complains about
its value not being set. As we probably want access to that
configuration value, let's expose it via the NixOS module as well.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Now the service starts up if only the services.taskserver.enable option
is set to true.
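In other words, this is now enough for a running instance:

services.taskserver.enable = true;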
We now also have three systemd services (started in this order):
* taskserver-init: For creating the necessary data directory and also
includes a reference to the configuration file in
the Nix store.
* taskserver-ca: Only enabled if none of the server.key, server.cert,
server.crl and caCert options are set, so we can
allow for certificates that are issued by another
CA.
This service creates a new CA key+certificate and a
server key+certificate and signs the latter using
the CA key.
The permissions of these keys/certs are set quite
strictly to allow only the root user to sign
certificates.
* taskserver: The main Taskserver service which just starts taskd.
We now also log to stdout and thus to the journal.
Of course, there are still a few problems left to solve, for instance:
* The CA currently only signs the server certificates, so it's
only usable for clients if the server doesn't validate client certs
(which is kinda pointless).
* Using "taskd <command>" is currently still a bit awkward to use, so
we need to properly wrap it in environment.systemPackages to set the
dataDir by default.
* There are still a few configuration options left to include, for
example the "trust" option.
* We might want to introduce an extraConfig option.
* It might be useful to allow for declarative configuration of
organisations and users, especially when it comes to creating client
certificates.
* The right signal has to be sent for the taskserver service to reload
properly.
* Currently the CA and server certificates are created using
server.host as the common name and don't set additional certificate
information. This could be improved by adding options that explicitly
set that information.
As for the config file, we might need to patch taskd to allow for
setting not only --data but also a --cfgfile, which then omits the
${dataDir}/config file. We can still use the "include" directive from
the file specified using --cfgfile in order to chainload
${dataDir}/config.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The descriptions for the options previously seem to be taken from the
taskdrc(5) manual page. So in cases where they didn't make sense for us,
I changed the wording a bit (for example, for client.deny we don't have
a "comma-separated list").
Also, I've reordered things a bit for consistency (type, default,
example and then description) and added missing types, examples and
DocBook tags.
Options that are not used by default now have a null value, so that we
can generate a configuration file out of all the options defined for the
module.
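A sketch of how such a generator could look (the helper name mkConfig
is illustrative):

mkConfig = opts: lib.concatStringsSep "\n" (lib.mapAttrsToList
  (name: value: "${name}=${toString value}")
  (lib.filterAttrs (name: value: value != null) opts));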
The dataDir default value is now /var/lib/taskserver, because it
doesn't make sense to put just yet another empty subdirectory in it, and
"data" doesn't quite fit anyway, because the directory also contains
the configuration file.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We're aiming for a proper integration into systemd/journald, so we
really don't want zillions of separate log files flying around in our
system.
Same goes for the pidFile: it is only needed for taskdctl, which is a
SysV-style init script, and all of its functionality (plus a lot more)
is handled by systemd already.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The service doesn't start without the "taskd" user being present, so we
really should add it. And while we're at it, it makes sense to add a
default group as well.
I'm also using a check for the user/group name, to allow the Taskserver
to be run as an existing user.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
I'm renaming the attribute used for the uid, because the user is called
"taskd", so we should really use the same name for it.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Unetbootin works by altering the image and placing a boot loader on it.
For this reason, it cannot work with UEFI and the installation guides
for other distributions (incl. Debian and Fedora) recommend against
using it.
Since dd writes the image verbatim to the drive, and not just the files,
it is not necessary to change the label after using it for UEFI
installations.
vcunat: tiny changes to the PR. Close #14139.
Most of the desktop environments will spawn PulseAudio, but we can instead simply run it as a systemd service.
This patch also makes the system-wide service run in the foreground, as recommended by the systemd project, and allows it to use sd_notify to signal readiness instead of reading a PID written to a file. It is now also restarted on failure.
The user version has been tested with KDE and works fine there.
The system-wide version runs, but I haven't actually used it, and upstream does not recommend running in this mode.
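On the NixOS side, enabling it stays as simple as before; a minimal
example (the systemWide flag is an assumption about the module's
interface):

hardware.pulseaudio.enable = true;
# hardware.pulseaudio.systemWide = true;  # not recommended by upstream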
This patch makes dbus launch with any user session instead of
leaving it up to the desktop environment launch script to run it.
It has been tested with KDE, which simply uses the running daemon
instead of launching its own.
This is upstream's recommended way to run dbus.
This reverts commit 45c218f893.
Busybox's modprobe causes numerous "Unknown symbol" errors in the
kernel log, even though the modules do appear to load correctly.
NixOps has infrequent releases, so it's not the best place for keeping
the list of current AMIs. Putting them in Nixpkgs means that AMI
updates will be delivered as part of the NixOS channels.
I had a basic version of this lying around for a while but didn't
continue working on it. Originally it was for testing support for the
Neo layout introduced back then (8cd6d53).
We only test the first three Neo layers, because the last three layers
largely consist of special characters, and in addition the support for
the VT keymap seems to be limited compared to the Xorg keymap.
Yesterday @NicolasPetton on IRC had trouble with the Colemak layout
(IRC logs: http://nixos.org/irc/logs/log.20160330, starting at 16:08)
and I found that test again, so I went on to improve it and add it to
<nixpkgs>.
While the original problem seemed to be related to GDM, we can still
add another subtest that checks whether GDM correctly applies the
keyboard layout. However, I don't have a clue how to properly configure
the keyboard layout in GDM, at least not within the NixOS configuration.
The main goal of this test is not to test a complete set of all key
mappings but to check whether the keymap is loaded and working at all.
It also serves as an example for NixOS keyboard configurations.
The list of keyboard layouts is by no means complete, so everybody is
free to add their own to the test or improve the existing ones.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We now generate a qcow2 image to prevent hitting Hydra's output size
limit. Also updated /root/user-data -> /etc/ec2-metadata/user-data.
http://hydra.nixos.org/build/33843133
These two steps seem to fail intermittently with exit code 1. It isn't clear to me why, or what the issue is. Adding the `--verbose` option, hoping to capture some debugging information which might aid stabilization. Also: I was unable to replicate the failure locally.
This basic module allows you to specify the tmux configuration.
As great as tmux is, some of the defaults are pretty awful, so having a
way to specify the config really helps.
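A hedged example (the exact option name is an assumption):

programs.tmux.extraTmuxConf = ''
  set-option -g prefix C-a
  set-window-option -g mode-keys vi
'';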
- services.iodined moved to services.iodine
- configuration file backward compatible
- old iodine server configuration moved to services.iodine.server
- attribute set services.iodine.clients added to specify any number
of iodine clients
- example:
iodine.clients.home = { server = "iodinesubdomain.yourserver.com"; ... };
- client services are named iodine-name, where name would be "home" in
the example above
Systemd 229 sets kernel.core_pattern to "|/bin/false" by default,
unless systemd-coredump is enabled. Revert to the previous default of
writing "core" in the current directory.
This reverts commit e8e8164f34. I
misread the original commit as adding the "which" package, but it only
adds it to base.nix. So then the original motivation (making it work
in subshells) doesn't hold. Note that we already have some convenience
aliases that don't work in subshells either (such as "ll").
Previously, the cisco resolver was used on the theory that it would
provide the best user experience regardless of location. The downsides
of cisco are 1) logging; 2) missing support for DNS security extensions.
The new upstream resolver is located in Holland, supports DNS security,
and *claims* to not log activity. For users outside of Europe, this will
mean reduced performance, but I believe it's a worthy tradeoff.
When iodined tries to start before any interface other than loopback
has an IP address, iodined fails.
Wait for ip-up.target.
This happens for the following reason:
In iodined's code (src/common.c, line 157), the flag AI_ADDRCONFIG is
passed to getaddrinfo().
Iodine uses the function
get_addr(char *host, int port, int addr_family, int flags,
         struct sockaddr_storage *out);
to get address information via getaddrinfo().
Within get_addr, the flag AI_ADDRCONFIG is forced.
What this flag does is cause getaddrinfo() to explicitly return
"Name or service not known" as an error if no IP address has been
assigned to the computer; see getaddrinfo(3).
So wait for an IP address before starting iodined.
Fixes #12794 by reverting the source tree splitup (c92dbff), using the
source tarball directly in the main Chromium derivation and making the
whole source/ subdirectory obsolete. The reasons for this are explained
in 4f981b4f84.
This also now renames the "sources.nix" file to "upstream-info.nix",
which is a more proper name for the file, because it not only contains
"source code" but also the Chrome binaries needed for the proprietary
plugins (of course "source" could also mean "where to get it", but I
wanted to avoid this ambiguity entirely).
I have successfully built and tested this using the VM tests.
All results can be found here:
https://headcounter.org/hydra/eval/313435
Assigning the channelMap via the function attrset argument at the
top level of the test expression file may reference a different
architecture than the one we need for the tests.
So if we get the pkgs attribute by auto-calling, this will lead to test
failures, because we'd have a different architecture for the test than
for the browser.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This was the case before e45c211, but it turns out that it's very
useful to be able to override the channel packages, so we can run tests
with different Chromium build options.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
* The major change is to set TARGETDIR=${vardir}, and symlink from
${vardir} back to ${out} instead of the other way around. This gives
CP more liberty to write to more directories -- in particular it seems
to want to write some configuration files outside of conf?
* run.conf does not need 'export'
* minor tweaks to CrashPlanDesktop.patch
The docker service is socket-activated by default; thus,
`waitForUnit("docker.service")` before any docker command causes the
test to time out.
Instead, do `waitForUnit("sockets.target")` to ensure that sockets are
set up before running docker commands.
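In the test script this looks roughly as follows (the machine name
$docker is illustrative):

testScript = ''
  $docker->waitForUnit("sockets.target");
  $docker->succeed("docker ps");
'';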
GnuPG 2.1.x changed the way the gpg-agent works, and that new approach no
longer requires (or even supports) the "start everything as a child of the
agent" scheme we've implemented in NixOS for older versions.
To configure the gpg-agent for your X session, add the following code to
~/.xsession or some other appropriate place that's sourced at start-up:
gpg-connect-agent /bye
GPG_TTY=$(tty)
export GPG_TTY
If you want to use gpg-agent for SSH, too, also add the settings
unset SSH_AGENT_PID
export SSH_AUTH_SOCK="${HOME}/.gnupg/S.gpg-agent.ssh"
and make sure that
enable-ssh-support
is included in your ~/.gnupg/gpg-agent.conf.
The gpg-agent(1) man page has more details about this subject, e.g. in
the "EXAMPLES" section.
This patch fixes https://github.com/NixOS/nixpkgs/issues/12927.
It would be great to configure good rate-limiting defaults for this via
/proc/sys/net/ipv4/icmp_ratelimit and /proc/sys/net/ipv6/icmp/ratelimit,
too, but I didn't, since I don't know what a "good default" would be.
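For reference, setting such limits would look like this (the values are
purely illustrative, not a recommendation):

boot.kernel.sysctl."net.ipv4.icmp_ratelimit" = 1000;
boot.kernel.sysctl."net.ipv6.icmp.ratelimit" = 1000;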
Some users may wish to improve their privacy by using per-query
key pairs, which makes it more difficult for upstream resolvers to
track users across IP addresses.
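A hedged example of opting in (the option name is an assumption):

services.dnscrypt-proxy.ephemeralKeys = true;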
- fix `enable` option description
using `mkEnableOption longDescription` is incorrect; override
`description` instead
- additional details for proper usage of the service, including
an example of the recommended configuration
- clarify `localAddress` option description
- clarify `localPort` option description
- clarify `customResolver` option description
Probably not many people care about i686-linux any more, but building
all these images is fairly expensive (e.g. in the worst case, every
Nixpkgs commit would trigger a few gigabytes of uploads to S3).
Previously this was done in three derivations (one to build the raw
disk image, one to convert to OVA, one to add a hydra-build-products
file). Now it's done in one step to reduce the amount of copying
to/from S3. In particular, not uploading the raw disk image prevents
us from hitting hydra-queue-runner's size limit of 2 GiB.