Perl on darwin (and any other sane platform) has pretty good threading
support; enable it.
As it turns out, we were building non-multithreaded perl on all systems,
since glibc was not part of the stdenv anymore:
nix-repl> pkgs = import <nixpkgs> {}
nix-repl> pkgs.stdenv ? glibc
false
meaning that the comments were incorrect. Thus, clear up the confusion
and remove the misleading comments, while enabling multithreading by
default. The builds will fail on unsupported platforms; currently the
only such place is the bootstrap, where we already force
non-multithreaded perl.
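For reference, whether a given perl build is threaded can be checked
via its Configure symbols:

```
# a threaded perl prints: usethreads='define';
# a non-threaded perl prints: usethreads='undef';
$ perl -V:usethreads
usethreads='define';
```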
As a consequence of the above, this change will cause the full rebuild
of stdenv on all platforms, including linux.
The upstream repository has been archived, with a note that this should never be needed:
https://github.com/alexcrichton/wasm-gc
It also happens to have custom patch logic that will fail the newer
cargo verification introduced in #79975.
Changes the default fetcher in the Rust Platform to be the newer
`fetchCargoTarball`, and changes every application using the current default to
instead opt out.
This commit does not change any hashes or cause any rebuilds. Once integrated,
we will start deleting the opt-outs and recomputing hashes.
See #79975 for details.
One of the motivations for this change is the following Discourse
discussion:
https://discourse.dhall-lang.org/t/offline-use-of-prelude/137
Many users have requested Dhall support for "offline" packages
that can be fetched/built/installed using ordinary package
management tools (like Nix) instead of using Dhall's HTTP import system.
I will continue to use the term "offline" to mean Dhall package
builds that do not use Dhall's language support for HTTP imports (and
instead use the package manager's support for HTTP requests, such
as `pkgs.fetchFromGitHub`).
The goal of this change is to document the idiomatic way to
implement "offline" Dhall builds by implementing Nixpkgs support
for such builds. That way, when users of other package management
tools ask me how to package Dhall with their tools, I can refer them
to how it is done in Nixpkgs.
This change contains a fully "offline" build for the largest Dhall
package in existence, known as "dhall-packages" (not to be confused
with `dhallPackages`, which is our Nix attribute set containing
Dhall packages).
The trick to implementing offline builds in Dhall is to take
advantage of Dhall's support for semantic integrity checks. If an
HTTP import is protected by an integrity check and a cached build
product matches the integrity check then the HTTP import is never
resolved and the expression is instead fetched from cache.
By "installing" dependencies in a pre-seeded and isolated cache
we can replace remote HTTP imports with dependencies that have
been built and supplied by Nix instead.
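Concretely, the seeding step might look something like the following
sketch. The `1220<sha256>` file naming is Dhall's standard cache-key
format; `$prelude` and the `.cache/dhall` output layout are hypothetical
stand-ins for a Nix-built dependency:

```bash
# build an isolated cache and seed it from Nix-built dependencies
export XDG_CACHE_HOME="$PWD/.cache"
mkdir -p "$XDG_CACHE_HOME/dhall"
cp "$prelude"/.cache/dhall/1220* "$XDG_CACHE_HOME/dhall/"

# imports protected by a matching sha256:... integrity check now
# resolve from the cache instead of performing an HTTP request
dhall --file ./package.dhall
```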
The offline nature of the builds is enforced by compiling the
Haskell interpreter with the `-f-with-http` flag, which disables
the interpreter's support for HTTP imports. If a user forgets
to supply a necessary dependency as a Nix build product, the
build fails, informing them that HTTP imports are disabled.
By default, built packages are "binary distributions", containing
just a cache product and a Dhall expression which can be used to
resolve the corresponding cache product.
Users can also optionally enable a "source distribution" of a package
which already includes the equivalent fully-evaluated Dhall code (for
convenience), but this is disabled by default to keep `/nix/store`
usage as small as possible.
dependencies:
- moarvm: init at 2020.01.1
- nqp: init at 2020.01
- zef: init at 0.8.2
Replaced the rakudo-star distribution with packages for raku, moarvm, nqp and
zef.
According to https://endoflife.software/programming-languages/server-side-scripting/ruby
Ruby 2.4 will reach end of life in March, when the next release of
nixpkgs will be cut. We won't be able to support it with security
updates.
Remove all references to ruby_2_4 and add ruby_2_7 instead where
missing.
Mark packages that depend on ruby 2.4 as broken:
* chefdk
* sonic-pi
The rebuilds happen because changing the end part of the URL
changes the name of the resulting file as placed into the Nix store
(those names were wrong/confusing before this change).
The 672c3c1d2a refactor accidentally
dropped the last version component from the source URLs. This change
puts it back.
$ for lua in lua5_{1,2,3};do nix-instantiate --json --eval . -A $lua.src.urls | jq -r '.[]' | xargs nix-prefetch-url; done
Before this change:
lua-5.1.tar.gz 1hbjhh211p82vhwqhx4mmhmvhv56060acnka80gbmfdk3q3bjnvz (wrong hash: this is Lua 5.1.0, but we want 5.1.5)
lua-5.2.tar.gz HTTP error 404
lua-5.3.tar.gz HTTP error 404
After this change:
lua-5.1.5.tar.gz 0cskd4w0g6rdm2q8q3i4n1h3j8kylhs3rq8mxwl9vwlmlxbgqh16
lua-5.2.4.tar.gz 0jwznq0l8qg9wh5grwg07b5cy3lzngvl5m2nl1ikp6vqssmf9qmr <-- Desired hash
lua-5.3.5.tar.gz 1b2qn2rv96nmbm6zab4l877bd4zq7wpwm8drwjiy2ih4jqzysbhc
Converted to base16 with `nix-hash --type sha256 --to-base16`:
lua-5.1.5.tar.gz 2640fc56a795f29d28ef15e13c34a47e223960b0240e8cb0a82d9b0738695333 <-- Desired hash
lua-5.2.4.tar.gz b9e2e4aad6789b3b63a056d442f7b39f0ecfca3ae0f1fc0ae4e9614401b69f4b
lua-5.3.5.tar.gz 0c2eed3f960446e1a3e4b9a1ca2f3ff893b6ce41942cf54d5dd59ab4b3b058ac <-- Desired hash
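For example, converting the fetched base-32 hash for lua-5.2.4 above:

```
$ nix-hash --type sha256 --to-base16 0jwznq0l8qg9wh5grwg07b5cy3lzngvl5m2nl1ikp6vqssmf9qmr
b9e2e4aad6789b3b63a056d442f7b39f0ecfca3ae0f1fc0ae4e9614401b69f4b
```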
According to https://repology.org/repository/nix_unstable/problems, we have a
lot of packages that have http links that redirect to https as their homepage.
This commit updates all these packages to use the https links as their
homepage.
The following script was used to make these updates:
```
curl https://repology.org/api/v1/repository/nix_unstable/problems \
| jq '.[] | .problem' -r \
| rg 'Homepage link "(.+)" is a permanent redirect to "(.+)" and should be updated' --replace 's@$1@$2@' \
| sort | uniq > script.sed
find -name '*.nix' | xargs -P4 -- sed -f script.sed -i
```
Naive concatenation of $LD_LIBRARY_PATH can result in an empty
colon-delimited segment; this tells glibc to load libraries from the
current directory, which is definitely wrong, and may be a security
vulnerability if the current directory is untrusted. (See #67234, for
example.) Fix this throughout the tree.
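The fix follows the usual shell idiom: emit the separator only when the
variable already has a value. A minimal sketch (`/foo/lib` is a
placeholder):

```bash
# Wrong: when LD_LIBRARY_PATH is empty this leaves a trailing colon,
# i.e. an empty segment, telling glibc to also search the current directory.
export LD_LIBRARY_PATH="/foo/lib:$LD_LIBRARY_PATH"

# Right: add the colon only if LD_LIBRARY_PATH is already non-empty.
export LD_LIBRARY_PATH="/foo/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```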
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
- Replaced the python override from the final stdenv; instead we
  propagate our bootstrap python to stage4 and override both
  CF and xnu to use it.
- Removed the CF argument from python interpreters; this is redundant
  since it is no longer overridden.
- Inherit CF from stage4, making it the same as the stdenv.
In "perl: fuse configureFlags" [1] the effects of the preConfigure
phase were merged into configureFlags. After this change, values with
spaces do not reach the configure script intact.
The only flag this affects is `ldflags` for Aarch32 and Mips, and perl
builds without it on armv7l-linux so it's probably no longer required
on any platform.
Fixes:
configuring
configure flags: -de -Dcc=cc <...> -Dldflags=\"-lm -lrt\"
./Configure: eval: line 1677: unexpected EOF while looking for matching `"'
./Configure: eval: line 1678: syntax error: unexpected end of file
Configure: unknown option -lrt"
[1] 3b50d0462a
In "perl: fuse configureFlags" [1] the effects of the preConfigure
phase were merged into configureFlags. After this change values with
spaces do not reach the configure script intact.
The only flag this affects is `ldflags` for Aarch32 and Mips, and perl
builds without it on armv7l-linux so it's probably no longer required
on any platform.
Fixes:
configuring
configure flags: -de -Dcc=cc <...> -Dldflags=\"-lm -lrt\"
./Configure: eval: line 1677: unexpected EOF while looking for matching `"'
./Configure: eval: line 1678: syntax error: unexpected end of file
Configure: unknown option -lrt"
[1] 3b50d0462a
When the `makeWrapperArgs` variable is not set, `declare -p makeWrapperArgs`
will return 1 and print an error message to stderr.
I did not handle the non-existence case in b0633406cb
because I thought `mk-python-derivation` would always define `makeWrapperArgs`,
but `wrapProgram` can be called independently. And even with `mk-python-derivation`,
`makeWrapperArgs` will not be set unless explicitly declared in the derivation
because of https://github.com/NixOS/nix/issues/1461.
I was led to believe that because the builds were succeeding, and I confirmed
that the mechanism fails when the variable is not defined and `-o nounset` is enabled.
It appears that the `wrapPython` setup hook is not running under `-o nounset`, though,
invalidating that assumption.
Now we check that the variable exists before checking its type, which
gets rid of the warning and also prevents future errors when `-o nounset`
is enabled in the setup hook.
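A sketch of the guarded check (hypothetical shape; `$f` stands for the
program being wrapped, and the real hook's code may differ):

```bash
# Only inspect the variable's type if it is actually declared; the
# 2>/dev/null silences `declare -p` when makeWrapperArgs does not exist,
# and the guard itself is safe under `set -u`.
if [[ "$(declare -p makeWrapperArgs 2>/dev/null)" == "declare -a"* ]]; then
    # array case: expand every element
    wrapProgram "$f" "${makeWrapperArgs[@]}"
else
    # string (or unset) case: fall back to word splitting
    wrapProgram "$f" ${makeWrapperArgs:-}
fi
```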
For more information, see the discussion at
https://github.com/NixOS/nixpkgs/commit/a6bb2ede232940a96150da7207a3ecd15eb6328
Bash takes an assignment of a string to an existing array variable:
local -a user_args
user_args="(foo bar)"
to mean assigning the string to the array's first element, not parsing
the string into an array as it does when the assignment is on the same
line as the declaration:
local -a user_args="(foo bar)"
b0633406cb extracted the declaration before
the newly branched code block, causing the makeWrapperArgs string to be
added to the array verbatim.
Since `local` is function-scoped, it does not matter if we move the
declaration inside each of the branches, so we fix it this way.
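The difference is easy to demonstrate in an interactive shell:

```bash
declare -a a="(foo bar)"   # declaration and assignment on one line:
echo "${#a[@]}"            # 2 -- the string was parsed into two elements

declare -a b
b="(foo bar)"              # assignment to an existing array variable:
echo "${#b[@]}"            # 1 -- b[0] is now the literal string "(foo bar)"
```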
When `makeWrapperArgs` is a Bash array, we only passed the first
item to `wrapProgram`. We need to use `"${makeWrapperArgs[@]}"`
to extract all the items. But that breaks the common string case so
we need to handle that case separately.
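For illustration (hypothetical values; `$f` is the program being
wrapped):

```bash
makeWrapperArgs=(--set FOO "a b")

wrapProgram "$f" $makeWrapperArgs          # only "--set" (the first element)
wrapProgram "$f" "${makeWrapperArgs[@]}"   # all three arguments, "a b" intact
```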
This will turn manylinux support back on by default.
pip will now check at runtime against the available glibc version to
determine if the current interpreter is compatible with a given
manylinux specification. However, it will not check whether any of the
required libraries are present.
The motivation here is that we want to support building python packages
with wheels that require manylinux support. There is no real change for
users of source builds as they are still building packages from source.
The real noticeable(?) change is that impure usage (e.g. running `pip
install package`) will install manylinux packages that pip previously
refused to install.
Previously we claimed that we were not compatible with manylinux, so
such packages wouldn't be installed at all.
Now impure users are in basically the same situation as before: if you
require some wheel-only package, it didn't work before and will not
work properly now. The difference is that it now fails at runtime
rather than at installation time.
I think it is a reasonable trade-off since it allows us to install
manylinux packages with nix expressions and enables tools like
poetry2nix.
This should be a net win for users as it allows wheels that we
previously couldn't really support to be used.
Install the default deps.edn again. deps.edn was embedded in the
clojure jar, but that change was reverted, see
a34969513f
Update the derivation to produce only one output. Multiple outputs were
introduced by #35140, but I don't think they are necessary anymore.
Before, we'd always use `cc = null` and check for that. The problem is
that this breaks cross compilation to platforms that don't support a C
compiler.
It's a very subtle issue. One might think there is no problem because we
have `stdenvNoCC`, and presumably one would only build derivations that
use that. The problem is that one still wants to use tools at build-time
that are themselves built with a C compiler, and those are gotten via
"splicing". The runtime version of those deps will explode, but the
build time / `buildPackages` versions of those deps will be fine, and
splicing attempts to work this by using `builtins.tryEval` to filter out
any broken "higher priority" packages (runtime is the default and
highest priority) so that both `foo` and `foo.nativeDrv` works.
However, `tryEval` only catches certain evaluation failures (e.g.
exceptions), and not arbitrary failures (such as `cc.attr` when `cc` is
null). This means `tryEval` fails to let us use our build time deps, and
everything comes apart.
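The distinction can be seen directly (a sketch using nix-instantiate;
output abridged):

```
$ nix-instantiate --eval --expr '(builtins.tryEval (throw "no cc")).success'
false

$ nix-instantiate --eval --expr 'let cc = null; in (builtins.tryEval cc.version).success'
error: value is null while a set was expected
```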
The right solution is, as usual, to get rid of splicing. Or, barring
that, to make it so `foo` never works and one has to explicitly do
`foo.*`. But that is a much larger change, and certainly one unsuitable
to be backported to stable.
Given that, we instead make an exception-throwing `cc` attribute, and
create a `hasCC` attribute for those derivations which wish to
conditionally use a C compiler: instead of doing `stdenv.cc or null ==
null` or something similar, one does `stdenv.hasCC`. This allows querying
without "tripping" the exception, while also allowing `tryEval` to work.
No platform without a C compiler is yet wired up by default. That will
be done in a following commit.