With these changes, a container can have more than one veth pair. This makes it possible, for example, to have LAN and DMZ bridges on the host and to add dedicated containers for proxies, an IPv4 firewall and an IPv6 firewall. Or to have one bridge for normal WAN, one for administration and one for customer-internal communication, so that web-server containers can be reached from the outside via HTTP, from management via SSH, and can talk to their database via the customer network.
The scripts that set up the containers are now rendered per container instead of from a single template, and they contain per-container code to configure the extra veth interfaces. The default template without extra-veth support is still rendered for imperative containers.
A test is also included that checks whether extra veths can be placed into host bridges or reached via routing.
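A minimal sketch of such a multi-veth setup, assuming the extraVeths container option this change introduces (bridge names and addresses are hypothetical):

containers.webserver = {
  hostBridge = "br-lan";              # primary veth joins the LAN bridge
  localAddress = "192.168.1.10/24";
  extraVeths.dmz = {
    hostBridge = "br-dmz";            # second veth joins the DMZ bridge
    localAddress = "10.0.0.10/24";
  };
};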
This makes the container a bit more secure by preventing root from creating
device nodes to access the host file system, for instance.
(Reference: systemd-nspawn@.service in systemd.)
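The hardening corresponds roughly to unit settings like the following (a sketch of the mechanism, not necessarily the exact directives this change uses):

systemd.services."container@".serviceConfig = {
  DevicePolicy = "closed";            # deny device-node access by default
  DeviceAllow = [ "/dev/null rwm" "/dev/zero rwm" ];  # hypothetical allow-list
};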
This moves nixos-containers into its own package so that it can be
relied upon by other packages/systems. This should make development
using dynamic containers much easier.
Since systemd version 230, it is required to have a machine-id file
prior to the startup of the container. If the file is empty, a transient
machine ID is generated by systemd-nspawn.
See systemd/systemd#3014 for more details on the matter.
This unbreaks all of the containers-* NixOS tests.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @edolstra
Closes: #15808
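A minimal sketch of the pre-start step this implies (the root path and the $INSTANCE variable are assumptions for illustration, not necessarily what this change uses):

systemd.services."container@".preStart = ''
  # systemd >= 230 refuses to start the container without /etc/machine-id;
  # an empty file makes systemd-nspawn generate a transient machine ID.
  root="/var/lib/containers/$INSTANCE"
  if ! [ -e "$root/etc/machine-id" ]; then
    touch "$root/etc/machine-id"
  fi
'';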
The existence of $root/var/lib/private/host-notify as a socket
prevented a bind mount:
container foo[8083]: Failed to create mount point /var/lib/containers/foo/var/lib/private/host-notify: No such device or address
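A sketch of the corresponding cleanup, using the path from the error message above (the pre-start hook itself is an assumption):

systemd.services."container@foo".preStart = ''
  # Remove the stale notify socket left over from a previous run so that
  # the host-notify bind mount can be created again.
  rm -f /var/lib/containers/foo/var/lib/private/host-notify
'';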
Without the templating (which is still present for imperative containers), it
becomes possible to set individual dependencies, such as depending on the
network only if the host bridge or hardware interfaces are used.
Ported from #3021
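For example, a per-container unit could then declare its own ordering, roughly like this (unit and target names are hypothetical):

systemd.services."container@web".after = [ "network.target" ];
systemd.services."container@web".wants = [ "network.target" ];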
This allows containers to have their interface in a bridge on the host. It
also adds IPv6 addresses to containers, for both bridged and unbridged
networking.
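A minimal configuration sketch (bridge name and addresses are hypothetical):

containers.foo = {
  hostBridge = "br0";                 # attach the container's veth to br0
  localAddress = "192.168.1.10/24";
  localAddress6 = "fc00::10/64";      # IPv6 works with and without a bridge
};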
If the host is shutting down, machinectl may fail because it is
bus-activated and D-Bus itself may already be gone. So just send a signal
to the leader process directly.
Fixes #6212.
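A sketch of the direct-signal approach, assuming the leader PID was recorded at container start in a hypothetical PID file:

# SIGRTMIN+4 asks the systemd instance inside the container for a clean
# poweroff (see the signals listed in systemd(1)); no D-Bus round trip needed.
kill -s RTMIN+4 "$(cat /run/containers/foo.pid)"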
Systemd in a container will call sd_notify when it has finished
booting, so we can use that to signal that the container is
ready. This does require some fiddling with $NOTIFY_SOCKET.
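On the host side the mechanism looks roughly like this (a sketch; the actual NOTIFY_SOCKET plumbing in this change may differ):

systemd.services."container@foo".serviceConfig = {
  Type = "notify";        # the unit counts as started only once READY=1 arrives
  NotifyAccess = "all";   # accept sd_notify messages from the container's init,
                          # not just from the unit's main process
};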
Previously "machinectl reboot/poweroff" brutally killed the container,
as did "systemctl stop/restart". And reboot didn't actually work. Now
everything is fine.
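For example, these now perform an orderly shutdown or reboot of a container named foo (hypothetical name):

machinectl poweroff foo
machinectl reboot foo
systemctl restart container@foo.service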
Previously "machinectl reboot/poweroff" brutally killed the container,
as did "systemctl stop/restart". And reboot didn't actually work. Now
everything is fine.
By setting a line like
MACVLANS="eno1"
in /etc/containers/<name>.conf, the container will get an Ethernet
interface named mv-eno1, which represents an additional MAC address on
the physical eno1 interface. Thus the container has direct access to
the physical network. You can specify multiple interfaces in MACVLANS.
Unfortunately, you can't do this with wireless interfaces.
Note that dhcpcd is disabled in containers by default, so you'll
probably want to set
networking.useDHCP = true;
in the container, or configure a static IP address.
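For a static address instead, something like this inside the container's configuration (a sketch; the option path follows current NixOS naming and the address is hypothetical):

networking.interfaces."mv-eno1".ipv4.addresses = [
  { address = "192.168.1.42"; prefixLength = 24; }
];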
To do: add a containers.* option for this, and a flag for
"nixos-container create".