This enables the legacy seccomp sandbox by default even on Chromium 22, because
the BPF sandbox is still a work in progress; please see:
http://crbug.com/139872 and http://crbug.com/130662
Because the BPF seccomp sandbox is only used when initialization of the legacy
seccomp mode fails, we might need to patch this again as soon as the BPF sandbox
is fully implemented, so that BPF is used by default and legacy seccomp serves
as the fallback.
We now have two patches for "default to seccomp" - one for Chromium 21 and one
for 22 or higher.
The patch doesn't apply to version 22 and newer, because mode 1 sandboxes are
considered "legacy" (well, apart from the fact that I'd personally prefer BPF
anyway) for reasons I wasn't able to find yet. But let's proceed with the BPF
integration and thus gain more insight into the exact reasons.
If you look at what changed, you'll surely notice that version 22 is now in
beta, so we have to expect things to break. And one thing that will break for
sure is the seccomp patch, because beginning with 22 the new BPF seccomp sandbox
is going to replace the mode 1 seccomp sandbox.
This commit doesn't add any feature; it just fixes a small annoyance which
results in messages like this:
Checking if xxx applies...no.
See that there is no whitespace between "..." and "no"? Well, the world cares
about more important things, but for me personally those minor annoyances can
turn into major annoyances.
chromium: Improve update script and update to latest versions.
Previously, we had a single hash of the whole version response from
omahaproxy.
Unfortunately the dev version is released quite frequently, so the hash is of
no use at all (we might as well fetch everything directly every time instead of
executing the script, because it fetches all channels anyway).
This pull request adds two methods of caching:
* First of all, if a particular version/channel is already in the
previous version of the sources.nix file, don't download it again.
* The second method checks whether the current version has already been
downloaded and, if so, reads the corresponding sha256 from the lookup table.
So, this should really help to avoid flooding the download servers and
to not stress impatient users too much.
So, now even Firefox can be built with our shiny new fixed-up NSS derivation,
and as this is desirable (especially if we want to support certificates from the
CA bundle), let's make it the default.
Hurray! This is the first time Chromium is working with NSS _and_ is able to
verify certificates using the root certificates built into NSS.
Ideally it would use the certs from OPENSSL_X509_CERT_FILE, but at least it's
working, so let's add that at some later point.
virtualbox: Fix build for manual kernel.
This should fix building VirtualBox against kernels made using the new
manual kernel configuration system.
This has been tested with the standard nixpkgs kernel as well.
First of all, the modules won't install when there is no "make modules" run
prior to the install, so we now do this with a new function called
forEachModule, which lets us avoid duplication as much as possible.
In addition this sets $sourcedir to the current directory of the configurePhase,
so we're able to find the source tree later on, after several chdir()s.
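A minimal sketch of the idea; the module directories and the exact function body
are assumptions, not the literal code from the expression:

    forEachModule() {
        # hypothetical module list and paths inside the VirtualBox source tree
        for module in vboxdrv vboxnetadp vboxnetflt; do
            make -C "src/vboxhost/$module" "$@"
        done
    }

    sourcedir="$PWD"        # remembered during configurePhase
    forEachModule modules   # run "make modules" for every module before installing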
The kernel's scripts/depmod.sh checks whether the path in $DEPMOD is executable
and only runs it if that's the case. So, by setting DEPMOD to
"/do_not_use_depmod", the destination path doesn't exist and thus isn't
executable either.
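In other words, the install step is invoked roughly like this (a sketch; the
surrounding make flags are omitted):

    # /do_not_use_depmod doesn't exist, so scripts/depmod.sh silently skips depmod
    make DEPMOD=/do_not_use_depmod modules_install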
The for loop didn't find $curdir, because it was set _after_ the directory had
been changed. The variable is now called $srcroot and is set before the
installPhase changes directories.
Don't rely on VirtualBox's in-tree build scripts to set include paths correctly;
instead use the Linux kernel's official way of building out-of-tree modules.
That way we don't need to make ugly symlinks in the kernel tree or heavily patch
VirtualBox.
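Concretely, that means the standard Kbuild interface for external modules,
roughly like the following sketch, with $KERN_DIR standing in for the kernel
build directory:

    make -C "$KERN_DIR" M="$PWD" modules
    make -C "$KERN_DIR" M="$PWD" INSTALL_MOD_PATH="$out" modules_install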
Until this commit we had a single hash of the whole version response from
omahaproxy. This worked well for avoiding unnecessary updates, but only until a
single channel had a new version available.
Unfortunately the dev version is released quite frequently, so the hash is of no
use at all (we might as well fetch everything directly every time we execute the
script).
This led to this commit, which adds two methods of caching:
First of all, if a particular version/channel is already in the previous version
of the sources.nix file, don't download it again.
The second method checks whether the current version has already been downloaded
and, if so, reads the corresponding sha256 from the lookup table.
So, this should really help to avoid flooding the download servers and to not
stress impatient users too much.
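A rough sketch of both caches; the helper, the sources.nix matching and the
variable names are simplified assumptions, not the exact script:

    # 1. reuse the hash if this version is already in the old sources.nix
    sha256="$(sed -n "s/.*\"$version\" = \"\([a-z0-9]*\)\";.*/\1/p" sources.nix)"

    # 2. only prefetch when the lookup above (and the table of already
    #    downloaded hashes) yields nothing
    if [ -z "$sha256" ]; then
        sha256="$(nix-prefetch-url "$url")"
    fi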
The reason is that unpacking Debian packages requires fewer dependencies (ar,
gzip and tar, nothing more), and in addition we can explicitly reference a
version number from the APT repository.
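For reference, unpacking a .deb really only needs those tools; the file names
below are illustrative:

    ar x some-package.deb      # yields debian-binary, control.tar.gz, data.tar.gz
    tar xzf data.tar.gz        # the actual files that get installed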
The previous commit reverted Xen back to 4.0.3, because xend from 4.1.* and
newer hangs for unknown reasons.
The new "xl" toolstack from 4.1.* and unstable works, yet PCI passthrough is not
supported by xl in 4.1.* and is broken in the unstable.
With this patch I was able to passthrough ATI Radeon HD 6950 without 3D
acceleration, though, to both Linux and Windows guests. Which is the best
archived result with Xen PCI passthrough on NixOS after trying out all possible
Xen versions.
The same VGA card works fine if passed through to a KVM guest (acceleration,
GPGPU, everything works). I should have tried KVM from the start.
This caused HTML5 video to not work because this shared library is loaded at
runtime.
Unfortunately we can't use the system ffmpeg yet, because upgrading would break
builds of other packages, and it would result in a copy of ffmpeg lying around
as well, so we defer this until we have fixed ffmpeg.
Thanks to @bluescreen303 for the bug report.
The configure script picks up libbsd.so from the host machine.
It uses a simple find command to locate the file, but the linker
cannot use it.
The fix replaces the search path with /no-such-path.
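A sketch of the kind of substitution meant here; the exact path searched by the
configure script is an assumption:

    # point the find invocation at a directory that cannot exist, so the
    # host libbsd.so is never picked up by the configure script
    substituteInPlace configure --replace "/usr/lib" "/no-such-path"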