If we pass `-lgcc_s` explicitly to the linker, it will be added as a
dependency even if no functions are used from it. This behavior is not
consistent with other systems. GCC can already handle passing the
correct flags, so let's rely on that instead.
As an added benefit, we now get support for the `-static-libgcc` flag;
and `-static-pie` will no longer mistakenly link us against the dynamic
version of libgcc.
No toolchain rebuild is required.
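A quick way to see the difference (the target triple and paths are illustrative, not part of the build scripts):

    # Build a trivial program and check whether libgcc_s ends up as a DT_NEEDED
    # entry even though nothing from it is used.
    echo 'int main(void) { return 0; }' > /tmp/t.c
    i686-pc-serenity-gcc /tmp/t.c -o /tmp/t
    i686-pc-serenity-readelf -d /tmp/t | grep NEEDED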
If we have the LLVM port installed, CMake might pick up some of the
tools installed as part of it (`llvm-ar`, `llvm-strip`, etc.) instead of
the ones belonging to the host toolchain. These, of course, can't be run
on the host platform, so builds would eventually fail. This made it
impossible to rebuild the LLVM toolchain.
We now set these variables explicitly when compiling the LLVM runtime
libraries in order to avoid this issue.
This will come in handy if we want to use the LLVM port with a GNU host
compiler.
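A minimal sketch of what setting these variables explicitly amounts to (the paths and source directory are illustrative, not the exact values the script passes):

    cmake -S runtimes -B Build \
        -DCMAKE_AR=/path/to/host/llvm-ar \
        -DCMAKE_RANLIB=/path/to/host/llvm-ranlib \
        -DCMAKE_STRIP=/path/to/host/llvm-strip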
As of version 13, libc++ uses `__attribute__((using_if_exists))` to
import global LibC functions into the `std` namespace, which allows some
symbols to be absent. GCC does not support this attribute, so it fails
to build libc++ due to some obscure `wchar.h` functions. This means that
cross-compiling libc++ is not possible; and on-target builds would be
tedious, so we'll be better off using the toolchain's `libstdc++`.
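A hand-written probe for whether a compiler knows the attribute at all (not taken from the build; assumes a host g++ or clang++ on PATH):

    printf '%s\n' \
        '#if !__has_attribute(using_if_exists)' \
        '#  error "using_if_exists is not supported by this compiler"' \
        '#endif' \
        'int main() { return 0; }' > /tmp/probe.cpp
    g++ -fsyntax-only /tmp/probe.cpp   # errors out with GCC; Clang 13+ accepts it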
The goal of these more recent additions to the Dockerfile is to provide
a working copy of SerenityOS with the toolchain prebuilt. To me, these
additions feel misplaced:
- The toolchain is built assuming the i686 architecture, which may not
be what you want.
- You get a shallow clone of the project, limiting your ability to
navigate the project's history or bisect.
- There's this awkward directory structure of `/serenity/serenity-git`
and `/serenity/out`.
The Dockerfile is immensely useful for building SerenityOS in a
containerized environment, separate from the host's environment. If we
want to automate builds, we can always use CI or extend this image to
do so. For now, let's remove the `git clone` and associated actions.
Fixes #9310.
If we want to use clang-tidy on the codebase, we'll need to build
clang-tidy from an LLVM that has been patched and built with Serenity
cross-compilation support.
Serenity defines a protected range of memory that must not be mmapped,
and is apparently reserved for kernel tasks. In this case, the protected
range is anything below 0x800000.
However, in its default configuration, binutils chooses the memory
address 0x400000 as the base address for executables that do not have
PIE enabled, resulting in mmap being unable to map the file unless the
load address has been overridden at link time or the executable is a
PIE.
To mitigate this, move the default base address somewhere outside of
that range (and preferably not anywhere near the beginning of the
usable virtual memory space, to avoid running into it during sequential
allocations).
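To illustrate the failure mode (triple is illustrative; the exact new base address is not shown here):

    # The first LOAD segment's VirtAddr is the image base binutils picked. With
    # the old default of 0x400000 it lands inside the protected range
    # (< 0x800000), so mmap refuses to map it.
    echo 'int main(void) { return 0; }' > /tmp/t.c
    i686-pc-serenity-gcc -no-pie /tmp/t.c -o /tmp/t
    i686-pc-serenity-readelf -lW /tmp/t | grep LOAD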
We were previously using TRY_COMPILE_TARGET_TYPE to bypass the compiler
check at the beginning of the CMake build, since we don't have LibC
available and therefore can't link at that point.
However, this breaks a lot of assumptions in try_compile when it comes
to library checks. While bypassing that initial check was the main
reason we used the flag, it also has some really nasty side effects
when software wants to find out which library a symbol is in.
Instead, just manually tell CMake that our compiler works as intended
and keep the target type setting at its default.
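Roughly what "manually tell CMake that our compiler works" looks like, shown as cache arguments for brevity (the real change lives in the CMake files):

    cmake -S . -B Build \
        -DCMAKE_C_COMPILER_WORKS=TRUE \
        -DCMAKE_CXX_COMPILER_WORKS=TRUE
    # ...instead of -DCMAKE_TRY_COMPILE_TARGET_TYPE=STATIC_LIBRARY, which also
    # breaks try_compile()-based library checks.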
`CMAKE_INSTALL_PREFIX` is supposed to be the in-system installation
path. The sysroot path on the host doesn't belong there, since other
applications will duplicate that path when applying their respective
sysroot.
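One way to express that split (paths are illustrative): keep the prefix as the in-system location and apply the host-side sysroot only at install time.

    cmake -S . -B Build -DCMAKE_INSTALL_PREFIX=/usr/local
    DESTDIR="$HOME/serenity/Build/Root" cmake --install Build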
This commit updates the Clang toolchain's version to 13.0.0, which comes
with better C++20 support and improved handling of new features by
clang-format. Due to the newly enabled `-Bsymbolic-functions` flag, our
Clang binaries will only be 2-4% slower than if we statically linked
them, but we save hundreds of megabytes of disk space.
The `BuildClang.sh` script has been reworked to build the entire
toolchain in just three steps: one for the compiler, one for GNU
binutils, and one for the runtime libraries. This reduces the complexity
of the build script, and will allow us to modify the CI configuration to
only rebuild the libraries when our libc headers change.
Most of the compile flags have been moved out to a separate CMake cache
file, similarly to how the Android and Fuchsia toolchains are
implemented within the LLVM repo. This provides a nicer interface than
the heaps of command-line arguments.
We no longer build separate toolchains for each architecture, as the
same Clang binary can compile code for multiple targets.
The horrible mess that was `SERENITY_CLANG_ARCH` has been removed in
this commit. Clang happily accepts an `i686-pc-serenity` target triple,
which matches what our GCC toolchain accepts.
This allows the linker to link against these dynamic libraries when
compiling libc++/libunwind, without having to do a separate
bootstrapping LibC build.
Without this change, libc++ would fail to pick up the need to link to
`LibPthread` if no prior builds of it existed. Because of this, we'd
immediately have an assertion failure in SystemServer, as mutexes are
used for the safe construction of function-local static variables.
I used "git grep -FIn http://" to find all occurrences, and looked at
each one. If an occurrence was really just a link, and if an https
version exists, and if our Browser can access it at least as well as the
http version, then I changed the occurrence to https.
I'm happy to report that I didn't run into a single site where Browser
can't deal with the https version.
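For reference, the sweep was roughly along these lines (the URL is a stand-in; each replacement was still reviewed by hand):

    git grep -FIn 'http://' | less
    # For a link verified to work over TLS:
    git grep -FIl 'http://example.org' | xargs sed -i 's|http://example.org|https://example.org|g'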
Replace the old logic where we would start with a host build, and swap
all the CMake compiler and target variables underneath it to trick
CMake into building for Serenity after we configured and built the Lagom
code generators.
The SuperBuild creates two ExternalProjects, one for Lagom and one for
Serenity. The Serenity project depends on the install stage for the
Lagom build. The SuperBuild also generates a CMakeToolchain file for the
Serenity build to use that replaces the old toolchain file that was only
used for Ports.
To ensure that code generators are rebuilt when core libraries such as
AK and LibCore are modified, developers will need to direct their manual
`ninja` invocations to the SuperBuild's binary directory instead of the
Serenity binary directory.
This commit includes warning coalescing and option style cleanup for the
affected CMakeLists in the Kernel, top level, and runtime support
libraries. A large part of the cleanup is replacing USE_CLANG_TOOLCHAIN
with the proper CMAKE_CXX_COMPILER_ID variable, which will no longer be
confused by a host clang compiler.
Ninja disables its fancy output mode when it's not writing to a TTY,
so don't pipe its output into anything else; that way it writes
straight to the invoking terminal whenever that terminal is a TTY.
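In other words (illustrative commands):

    ninja -C Build | cat    # stdout is a pipe: Ninja falls back to plain line-by-line output
    ninja -C Build          # stdout is the terminal: Ninja keeps its single-line status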
`LLVM_LLVM_BUILD_LLVM_DYLIB` does not exist, so passing this does
nothing but make CMake warn.
However, since we pass `LLVM_LINK_LLVM_DYLIB`, `LLVM_BUILD_LLVM_DYLIB`
(the correct spelling) defaults to true anyway. So let's pass fewer
flags.
No behavior change, but fixes a CMake warning.
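So the invocation only needs the correctly spelled option (sketch; every other option omitted):

    cmake -S llvm -B Build -DLLVM_LINK_LLVM_DYLIB=ON
    # LLVM_BUILD_LLVM_DYLIB then defaults to ON as well.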
I locally modified Meta/serenity.sh to pass `--dev` to BuildIt.sh
in build_toolchain(). Then I ran `Meta/serenity.sh rebuild-toolchain`,
cd'd into Toolchain/Tarballs/binutils-2.37, `git add`ed unadded files in
`git status`, and then ran `git diff > ../../Patches/binutils.patch`.
Then I did the same for Toolchain/Tarballs/gcc-11.2.0 (and was careful
not to `git add` serenity-kernel.h, since that's created by
Toolchain/BuildIt.sh).
No behavior change. This just rewrites the patch like git writes it.
This library is used by virtually all executables in the Clang
toolchain. By default, it is linked statically, which leads to huge
file sizes and us running out of artifact storage disk space on CI.
This contains all the bits and pieces necessary to build a Clang binary
that will correctly compile SerenityOS.
I had some trouble getting LLVM to build with a single command, so
for now, I decided to build each LLVM component in a separate command
invocation. In the future, we can also make the main llvm build step
architecture-independent, but that would come with extra work to make
library and include paths work.
The binutils build invocation and related boilerplate is duplicated
because we only use `objdump` from GNU binutils in the Clang toolchain,
so most features can be disabled.
CMake specifies -arch arm64 for our toolchain. Unfortunately that's an
option GCC only understands when built for macOS. This causes the build
to fail.
I haven't been able to get CMake to not specify that option so this adds
a dummy option to GCC.
Previously we'd place the QEMU binaries into the architecture-specific
toolchain directory. This is a problem because the BuildIt.sh script
clears those directories which also removes the QEMU binaries users
may have built earlier. Also, the QEMU binaries are not specific to
the target architecture.
Docker is a nice way of doing build automation, or just
containerizing builds for increased safety and isolating unstable
packages. The old Dockerfile in the toolchain did not satisfy these
needs. The new Dockerfile is known to run successfully on Docker
version 20.10.7. It clones the SerenityOS repo and builds the
toolchain. In this way, it is intended to be a starting point for other
Docker images that can e.g. run builds. For example, one can simply run
this docker image as-is, exec a shell in it and run a build there.
Rather than having the toolchain build fail half-way through we should
check whether the user has installed all the required tools and
libraries early on.
Previously the buildstep function would obscure error codes because
the return value of the function was the exit code of the sed command,
which caused us to continue execution even though one of the build
steps had failed.
With set -o pipefail the return value of the buildstep function is
the real command's exit code.
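A simplified version of the pattern (the real buildstep function differs in its details):

    set -o pipefail
    buildstep() {
        local name="$1"
        shift
        "$@" 2>&1 | sed "s/^/${name}: /"
    }
    # Without pipefail this returns sed's exit code (0) even when the command
    # fails; with pipefail the failing command's status propagates.
    buildstep gcc false || echo "build step failed"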
This ensures inter-machine compatibility by not emitting any
processor-specific instructions. This fixes the issue raised by the
GitHub Actions runners that lack AVX-512 support.
-march=native specializes the binaries for the CPU features available on
the CPU the binary is being compiled on. This matches the needs of the
Toolchain, as it's always built and used on that machine only.
This should be safe for the GitHub Actions VMs as well, as they all run
on a standard VM SKU in "the cloud".
I saw small but notable improvements in end-to-end build times in my
local testing. Each compilation unit is on average around a second
faster on my Intel(R) Core(TM) i7-8705G CPU @ 3.10GHz.
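To see what -march=native resolves to on a given build machine (purely diagnostic, not part of the build):

    gcc -march=native -Q --help=target | grep -E -- '-march=|-mtune='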
This makes stdlib.h and stdio.h functions available in the std
namespace for C++.
libstdc++v3's link tests can fail if you don't have an up-to-date
build directory, for example:
1. Have libc with missing _Exit symbol because you haven't done
a build since that was added.
2. Run toolchain rebuild. libstdc++v3's configure script will
realize that it can do link tests in general but will fail
later on when it tries to link a program that tests for _Exit.
Even though this is a toolchain patch this does not necessarily
require rebuilding the toolchain right away. This is only required
once we start using any of these new members in the std namespace,
e.g. for ports.
This fixes the -nodefaultlibs flag for gcc which previously
linked against libgcc_s anyway. Even though this is a toolchain
patch we don't need to rebuild the toolchain right away.
BuildIt.sh had a bunch of SC2086 errors, where we were not quoting
variables in variable expansions. The logic being:
Quoting variables prevents word splitting and glob expansion,
and prevents the script from breaking when input contains spaces,
line feeds, glob characters and such.
Reference: https://github.com/koalaman/shellcheck/wiki/SC2086
As bcoles noticed in #6772, shellcheck actually found a real bug here,
where the user's build directory included spaces.
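The class of bug in question (directory name is made up):

    BUILD_DIR="$HOME/my projects/serenity/Build"
    mkdir -p $BUILD_DIR     # unquoted: word-splits and creates ".../my" and "projects/serenity/Build"
    mkdir -p "$BUILD_DIR"   # quoted: creates the single intended directory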
Closes #6772
BuildFuseExt2.sh was saying it should be run under /bin/sh but it is
using bash extensions like pushd/popd, ${BASH_SOURCE[0]}, etc. So just
run it under bash to avoid any potential issues.
Ordinarily this would force the compiler to not inline certain
symbols and call them via the PLT instead. To counteract this
I've also added -fno-semantic-interposition which disables
ELF symbol interposition. Our dynamic loader doesn't support
this anyway and we might even consider not implementing this
at all.
Even though this is a toolchain change this doesn't require
rebuilding the toolchain unless you're planning to build
for the x86_64 arch.
Previously the toolchain's binutils would not have been able to
build binaries on 32-bit host systems (not that this would be
much of an issue nowadays) because one of the #ifdefs was in
the wrong place.
I moved the #ifdef in the port's patch and this now updates
the toolchain's patch file to match the port's patch.
Changes since rc4:
0cef06d187: Update version for v6.0.0-rc5 release
5351fb7cb2: hw/block/nvme: fix invalid msix exclusive uninit
ffa090bc56: target/s390x: fix s390_probe_access to check PAGE_WRITE_ORG
bc38e31b4e: net: check the existence of peer before trying to pad
Make this stuff a bit easier to maintain by using the
root level variables to build up the Toolchain paths.
Also leave a note for future editors of BuildIt.sh to
give them warning about the other changes they'll need
to make.
This enables building usermode programs with exception handling. It also
builds a libstdc++ without exception support for the kernel.
This is necessary because the libstdc++ that gets built is different
when exceptions are enabled. Using the same library binary would
require extensive stubs for exception-related functionality in the
kernel.
Instead GCC should be used to automatically link against crt0
and crt0_shared depending on the type of object file that is being
built.
Unfortunately this requires a rebuild of the toolchain as well
as everything that has been built with the old GCC.
GCC determines whether the system's <limits.h> header is usable
and installs a different version of its own <limits.h> header
depending on whether the system header file exists.
If the system header is missing, GCC's <limits.h> header does not
include the system header via #include_next.
For this to work, we need to install LibC's headers before
attempting to build GCC.
Also, re-running BuildIt.sh "hides" this problem because at that
point the sysroot directory also already has a <limits.h> header
file from the previous build.
Our TLS implementation relies on the TLS model being "initial-exec".
We previously enforced this by adding the '-ftls-model=initial-exec'
flag in the root CMakeLists file, but that did not affect Ports, so
now we put that flag in the gcc spec files.
Closes #5366
realpath(1) is specific to coreutils and its behavior can be had
with readlink -f
Create the Toolchain Build directory if it doesn't exist before
calling readlink, since realpath(3) on at least OpenBSD will error
on a non-existent path
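The resulting pattern, roughly (path is illustrative):

    mkdir -p Toolchain/Build
    BUILD_DIR=$(readlink -f Toolchain/Build)   # works where realpath(1) is unavailable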
The current version of our Python port (3.6.0) is over four years old by
now and has (or had, I haven't actually tried it in a while) some
limitations - time for an upgrade! The latest Python release is 3.9.1,
so I used that version. It's a from-scratch port, no patches are taken
from the previous port to ensure the smallest possible amount of code is
patched. The BuildPython.sh script is useful so I kept it, with some
tweaks. I added a short document explaining each patch to ease judging
their underlying problem and necessity in the future.
Compared to the old Python port, this one does support both the time
module as well as threading (at least _thread) just fine. Importing
modules written in C (everything in /usr/local/lib/python3.9/lib-dynload)
currently asserts in Serenity's dynamic loader, which is unfortunate but
probably solvable. Possibly related to #4642. I didn't try building
Python statically, which might be one possibility to circumvent this
issue.
I also renamed the directory to just "python3", which is analogous to
the Python 3.x package most Linux distributions provide. That implicitly
means that we likely will not support multiple versions of the Python
port at any given time, but again, neither do many other systems by
default. Recent versions are usually backwards compatible anyway though,
so having the latest shouldn't be a problem.
On the other hand, bumping the version should now be as simple as
updating the variables in version.sh, given that no new patches are
required.
These core modules currently do not build - I chose to ignore that for
now rather than adding more patches to make them work somehow, which
means they're fully unavailable. This should probably be fixed in
Serenity itself.
_ctypes, _decimal, _socket, mmap, resource, termios
These optional modules requiring 3rd-party dependencies do not currently
build (even with depends="ncurses openssl zlib"). Especially the absence
of a readline port makes the REPL a bit painful to use. :^)
_bz2, _curses, _curses_panel, _dbm, _gdbm, _hashlib, _lzma, _sqlite3,
_ssl, _tkinter, _uuid, nis, ossaudiodev, readline, spwd, zlib
I did some work on LibC and LibM beforehand to add at least stubs of
missing required functions; it still encounters an ASSERT_NOT_REACHED()
/ TODO() every now and then, notably frexp() (implementations of that
can be found online easily if you want to get that working right now).
But then again that's our fault and not this port's. :^)
We now configure gcc to always use the -fno-exceptions flag.
This does not affect our code since we do not use exceptions, and also
fixes the gcc port.
RTTI is still disabled for the Kernel, and for the Dynamic Loader. This
allows for much less awkward navigation of class hierarchies in LibCore,
LibGUI, LibWeb, and LibJS (eventually). Measured RootFS size increase
was < 1%, and the libgui.so binary size increase was ~3.3%. The small
increase here seems worth it :^)
* Add SERENITY_ARCH option to CMake for selecting the target toolchain
* Port all build scripts but continue to use i686
* Update GitHub Actions cache to include BuildIt.sh
A good number of contributors use macOS. However, we have a bit of
a tendency to break the macOS build without realising it.
Luckily, GitHub Actions does actually supply macOS environments,
so let's use it.
We now configure the gcc spec files to use different crt files for
static & PIE binaries.
This relieves us from the need to explicitly specify the desired crt0
file in cmake scripts.
This is necessary because cache reusability will be determined by
GitHub Actions.
Note that we only cache if explicitly asked to do so,
which only happens on GitHub Actions.
When libstdc++ was added in 4977fd22b8, just calling
'make install' was the easiest way to install the headers. And the headers are all
that is needed for libstdc++ to determine the ABI. Since then, BuildIt.sh was
rewritten again and again, and somehow everyone just silently assumed that
libstdc++ also depends on libc.a and libm.a, because surely it does?
Turns out, it doesn't! This massively reduces the dependencies of libstdc++,
hopefully meaning that the Toolchain doesn't need to be rebuilt so often on Travis.
Furthermore, the old method of trying to determine the dependency tree with
bash/grep/etc. has finally broken anyway:
https://travis-ci.com/github/SerenityOS/serenity/builds/179805569#L567
In summary, this should eliminate most of the Toolchain rebuilds on Travis,
and therefore make Travis build blazingly fast! :^)
./configure generates about 3500 lines in a few seconds. No one will ever read
those lines and they make loading the Travis webpage slower. And if there is
ever a problem, it will be because the Travis base image changed (which happens
only rarely) in a way that interferes with compiling gcc (which is incredibly
unlikely), or we update gcc (which happens very rarely) and gcc doesn't like
the Travis image (which again is incredibly unlikely). In all of these cases,
finding the culprit will be self-evident.
Empirically, every single push or PR has to download *and then upload*
about 3.6 GiB of "cache stuff", which takes up about 400 seconds:
https://travis-ci.com/github/SerenityOS/serenity/builds/177500795
On every single push/PR! No matter what!
Those 3.6 GB consist of:
- 3.2 GB Toolchain cache (around 260 MB per compressed item)
- 0.4 GB ccache, but is capped at 0.5 GB: https://travis-ci.com/github/BenWiederhake/serenity/builds/177528549
- (And 200 KB for some weird debian package? Dunno.)
Investigating the size, the Toolchain consists mostly of *DEBUG SYMBOLS IN
THE COMPILER BINARIES* which comically misses the point. If we ever run into
compiler crashes, any stacktrace would be lost anyway as soon as the Travis VM
shuts down. Furthermore, Travis will only ever compile Serenity itself, and
Serenity forbids C in its Contribution Guidelines. That's another 20 MB we
don't need to cache.
Stripping the binaries and deleting the C compiler reduces the uncompressed size
from 1200 MB down to 220 MB. The compressed size gets reduced from 260 MB to 70 MB.
That's a reduction of 73%.
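Roughly the kind of trimming described above (paths are illustrative; the exact commands in the build may differ):

    find Toolchain/Local -type f -perm -111 -exec strip --strip-debug {} + 2>/dev/null || true
    rm -f Toolchain/Local/libexec/gcc/*/*/cc1   # the C-only frontend isn't needed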
It'll take a while until the 'old' toolchains get deleted.
I guess it'll take less than a week.
From that point onward, the Travis cache will be 1.2 GB, consisting of:
- 0.7 GB Toolchain cache
- 0.5 GB ccache
- (And that weird 200 KB deb file)
If network speeds are linear, then this should reduce the "cache network
overhead time" from about 400 seconds to about 120 seconds.
tl;dr: Strip unnecessary debug infos, delete unused files, and speed
everything up by two minutes. (Both Toolchain cache hits and Toolchain rebuilds!)
This change allows users to use CMAKE_GENERATOR=Ninja ./BuildIt.sh
BuildIt.sh assumes the default cmake generator is Make. However,
the user may specify CMAKE_GENERATOR=Ninja, for example, to set the
default generator. Therefore, instead of calling make to build the
LibC target we should call cmake --build to use the correct generated
files.
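The generator-agnostic form (assuming $BUILD_DIR is the configured build tree):

    CMAKE_GENERATOR=Ninja ./BuildIt.sh
    # inside BuildIt.sh, instead of invoking make directly:
    cmake --build "$BUILD_DIR" --target LibC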
The SDL port failed to build because the CMake toolchain file pointed
to the old root. Now the toolchain file assumes that the Root is in
Build/Root.
Additionally, the AK/ and Kernel/ headers need to be installed in the
root too.
We can do away with that shenanigans now that libstdc++ is gone.
Also, simplify the toolchain dependency hash calculation to only depend
on the toolchain build script(s) and the Patches files we use to modify
the toolchain itself.
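A sketch of that hash (file layout is illustrative; the real script may combine the inputs differently):

    TOOLCHAIN_HASH=$(cat Toolchain/BuildIt.sh Toolchain/Patches/*.patch | sha256sum | cut -d' ' -f1)
    echo "toolchain cache key: $TOOLCHAIN_HASH"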
This is __cxa_guard_acquire, __cxa_guard_release, and __cxa_guard_abort.
We put these symbols in a 'fake' libstdc++ to trick gcc into thinking it
has libstdc++. These symbols are necessary for C++ programs and not C
programs, so, seems fine. There's no way to tell gcc that, for example,
the standard lib it should use is libc++ or libc. So, this is what we
have for now.
When threaded code enters a block that is trying to call the constructor
for a block-scope static, the compiler will emit calls to these methods
to handle the "call_once" nature of block-scope statics.
The compiler creates a 64-bit guard variable, which it checks the first
byte of to determine if the variable should be initialized or not.
If the compiler-generated code reads that byte as a 0, it will call
__cxa_guard_acquire to try and be the thread to call the constructor for
the static variable. If the first byte is 1, it will assume that the
variable's constructor was called, and go on to access it.
__cxa_guard_acquire uses one of the 7 implementation-defined bytes of
the guard variable as an atomic 8-bit state machine. It lets each
entering thread know whether it gained 'initialization rights', whether
someone else is already working on the variable, whether someone is
working on the variable and at least one thread is waiting for it to be
initialized, or whether the variable was initialized and it's time to
access it. We only store a 1 to the byte the compiler looks at in
__cxa_guard_release, and use a futex to handle waiting.
In order to remove libstdc++ completely, we need to give up on their
implementation of abi::__cxa_demangle. The demangler logic will actually
have to be quite complex, and included in both the kernel and userspace.
A definite fixme for the future, to parse the mangled names into real
deal names.
Back in 36ba0a35ee I thought that Travis would
automagically delete the oldest files. Apparently it does not.
Note that no dummy changes are needed, because BuildIt.sh lists itself
as a dependency for the Toolchain. Hooray for something that works!
It didn't feel right to have a "DHCPClient" in a "Servers" directory.
Rename this to Services to better reflect the type of programs we'll
be putting in there.
The toolchain builds just fine without the git repository (tested on
Windows and Linux). We can skip setting up the repo and apply the
patches directly when we aren't working on the toolchain itself. A
flag `--dev` has been added for cases when git repo is needed. Without
the flag, regular patch is applied. This significantly improves build
times for first time builds.
This should give a significant boost to Travis speeds, because most of the
compile time is spent building the toolchain over and over again.
However, the toolchain (or libc or libm) changes only rarely,
so most rebuilds can skip this step.
The hashing has been put into a separate file to keep it
as decoupled as possible from BuildIt.sh.
Turns out the reason GCC wasn't as smart about startup code for
shared objects as we hoped is because nobody told it to be :D
Change the STARTFILE_SPEC and ENDFILE_SPEC in gcc/config/serenity.h to
skip crt0.o and to link the S variants of crtbegin
and crtend for shared objects.
Because we're using the crtbegin and crtend from libgcc, also tell
libgcc in libgcc/config.host to compile crtbeginS and crtendS from
crtstuff.c.
- Toolchain build makes a git repo out of the toolchain to allow patching
- Fix Makefiles to use new libstdc++
- Parameterize BuildIt with default TARGET of i686; arm is experimental
For python3 cross compilation, a native installation of python3 is
needed. This patch adds a build script for python3 to the toolchain
and informs the user to run that script if the python port is built
and no native python3 with the same major and minor version is
found.
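The check is along these lines (version number and wording are illustrative):

    PYTHON_VERSION=3.9   # must match the port's major/minor version
    if ! command -v "python$PYTHON_VERSION" >/dev/null 2>&1; then
        echo "No native python$PYTHON_VERSION found."
        echo "Please run the toolchain's python3 build script first."
    fi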
Added a script to build QEMU from source as part of the Toolchain.
The script content could be in BuildIt.sh but has been put in
a separate file to make the build optional.
Added PATH=$PATH to sudo calls to hand over the Toolchain's PATH
set up by UseIt.sh. This enables the scripts to use the QEMU
contained in the SerenityOS toolchain.
Deleted old documentation in Meta and replaced it with new
documentation in the Toolchain folder.
Ports/.port_include.sh, Toolchain/BuildIt.sh, Toolchain/UseIt.sh
have been left largely untouched due to use of Bash-exclusive
functions and variables such as $BASH_SOURCE, pushd and popd.
When we used "make install" in the past, the "install" target would pull
in the library targets as dependencies, and everything got built that way.
Now that we use "install.sh" instead, we have to build things manually.
This makes out-of-tree linking possible. And at the same time, add a
CMakeToolchain.txt file that can be used to build arbitrary cmake-using
applications on Serenity by pointing to the CMAKE_TOOLCHAIN_FILE when
running cmake:
-DCMAKE_TOOLCHAIN_FILE=~/code/serenity/Toolchain/CMakeToolchain.txt
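A typical out-of-tree use would then look like this (project path is illustrative):

    cd ~/code/my-app
    cmake -S . -B Build \
        -DCMAKE_TOOLCHAIN_FILE=~/code/serenity/Toolchain/CMakeToolchain.txt
    cmake --build Build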