Cross build environment

This article provides instructions on creating a cross build environment using [[crossdev]].

Cross build environments are needed in several situations:

 * To cross build software for slow target hosts on a fast build host.
 * To build software with a different toolchain (e.g. a different libc version).
 * When a specialized system environment is needed:
 * e.g. a separate multilib system for binaries with unusual dependencies that should be kept separate from the main system (like the Steam platform).
 * e.g. a base image for Docker containers.

Create the cross toolchain

 * Install crossdev:


 * Install the toolchain. The target is given as a tuple; see


 * Example targets:
 * Bare metal ARM targets:  (see the ARM article for what else is needed for Cortex-R and Cortex-M devices)
 * For a Raspberry Pi:
 * Raspberry Pi A, A+, B, B+:
 * Raspberry Pi 2 or 3 B in 32-bit mode:
 * Raspberry Pi 3 in 64-bit mode, Raspberry Pi 4:
 * For an amd64 multilib environment (when not running on x86_64 natively):
 * And many more combinations, depending on target platform support in gcc, a libc, binutils, etc.
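As an illustrative sketch, installing crossdev and creating toolchains for the targets above might look like the following; the exact tuples are assumptions and must be adapted to the actual hardware:

```shell
# Hypothetical examples; adjust the tuples to the actual target.
emerge --ask sys-devel/crossdev

crossdev --stable --target arm-none-eabi                  # bare metal ARM
crossdev --stable --target armv6j-hardfp-linux-gnueabihf  # Raspberry Pi A/A+/B/B+
crossdev --stable --target armv7a-hardfp-linux-gnueabihf  # Raspberry Pi 2/3 (32-bit)
crossdev --stable --target aarch64-unknown-linux-gnu      # Raspberry Pi 3 (64-bit) / 4
```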

Update the target build configuration

 * The target configuration should be changed according to the installation handbook. For the base system, at least these options should be checked; the rest can be configured later:


 * Set the appropriate profile. See below for target architecture specific examples.
 * If building on amd64, see the lib64 bug at
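As a minimal sketch (the path, flags, and values are assumptions), the target's make.conf might start out like:

```shell
# Hypothetical /usr/aarch64-unknown-linux-gnu/etc/portage/make.conf sketch
COMMON_FLAGS="-O2 -pipe"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
# Parallel jobs on the build host, not the target
MAKEOPTS="-j4"
USE=""
```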

Raspberry Pi specific

 * Set the appropriate make profile. For the Raspberry Pi, it might be:
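For illustration, the profile for a 64-bit Raspberry Pi target can be selected by pointing the target's make.profile symlink at an arm64 profile; the tuple and profile version below are assumptions:

```shell
# Hypothetical: select an arm64 profile for the target root
ln -snf /var/db/repos/gentoo/profiles/default/linux/arm64/23.0 \
    /usr/aarch64-unknown-linux-gnu/etc/portage/make.profile
```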

Allwinner A20 specific
In addition to the auto-generated file content, the following modifications are necessary for successful cross-compilation:


 * Set the appropriate make profile. For Allwinner A20 based boards it would be:
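The Allwinner A20 is a Cortex-A7 (armv7a, hard-float) SoC, so a sketch might look like the following; the tuple, profile path, and tuning flags are assumptions:

```shell
# Hypothetical: select a 32-bit ARM profile for the target root
ln -snf /var/db/repos/gentoo/profiles/default/linux/arm/23.0/armv7a \
    /usr/armv7a-hardfp-linux-gnueabihf/etc/portage/make.profile

# Cortex-A7 tuning in the target make.conf (assumed values):
# CFLAGS="-O2 -pipe -march=armv7-a -mtune=cortex-a7 -mfpu=neon-vfpv4 -mfloat-abi=hard"
```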

Build the base system
The base system can either be built from scratch or a stage3 tarball can be unpacked into the target root. To build it from scratch:


 * Build the system packages

For the Raspberry Pi, it would be:

For the Allwinner A20, it would be:

(Do not worry about failed packages; these will be fixed later.)
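The steps above use the tuple-prefixed emerge wrapper that crossdev installs; a sketch (tuple is an assumption) might be:

```shell
# Build the system set for the target root with the cross wrapper
aarch64-unknown-linux-gnu-emerge --ask --update @system
```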


 * Build other essential packages:


 * To build the failed packages, it may be necessary to compile them "natively", which in this case means inside the target chroot environment. If the target host has a different architecture, a QEMU chroot is needed. For targets that the build host CPU can execute directly, the following QEMU steps can be skipped and the target environment can be chrooted into directly.
 * Install QEMU on the host:


 * Prepare QEMU for the target (in this case for the ARM architecture):
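A sketch of registering qemu-arm with binfmt_misc (requires root; the magic string below is the commonly used one for 32-bit ARM ELF binaries and should be adapted for other targets):

```shell
# Make sure binfmt_misc is available and mounted
[ -d /proc/sys/fs/binfmt_misc ] || modprobe binfmt_misc
mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc 2>/dev/null

# Register qemu-arm as the interpreter for 32-bit ARM ELF binaries
echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm:' \
    > /proc/sys/fs/binfmt_misc/register
```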


 * Install QEMU to the build environment:


 * A first test:

If it works, leave the chroot and go on with the next steps.
 * Optional: This step is not necessary in most cases. To make the target environment emulation more complete, a wrapper can be used that passes the correct CPU option to QEMU. The following is an example for the Raspberry Pi CPU option (-cpu arm1176). Check that the command at the end (qemu-arm) is present on the build host.


 * Build it with
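As a sketch of the two preceding steps, the wrapper can be written out and compiled statically from the shell; the qemu-arm path is an assumption:

```shell
cat > qemu-wrapper.c <<'EOF'
/* Prepend "-cpu arm1176" to the arguments and hand off to qemu-arm. */
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv, char **envp)
{
    char *newargv[argc + 3];

    newargv[0] = argv[0];
    newargv[1] = "-cpu";
    newargv[2] = "arm1176";
    memcpy(&newargv[3], &argv[1], sizeof(*argv) * (argc - 1));
    newargv[argc + 2] = 0;  /* terminate the new argument vector */
    return execve("/usr/bin/qemu-arm", newargv, envp);
}
EOF
gcc -static -O2 -o qemu-wrapper qemu-wrapper.c
```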

Rust Packages
Some packages (such as gnome-base/librsvg) depend on a Rust cross toolchain. To build the Rust cross toolchain, first modify make.conf to add the necessary LLVM targets:
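For an ARM/AArch64 target, the addition might look like the following; the exact target list is an assumption, and the host's own target should stay in the list:

```shell
# make.conf on the build host; keep the host target (here X86) in the list
LLVM_TARGETS="X86 AArch64 ARM"
```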

Then modify the environment for the dev-lang/rust package. Note: this must be done in the following file; package.env does not get parsed the same way.
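As a sketch (the file path, variable names, and tuples are assumptions drawn from the rust ebuild's cross support), the environment file might contain:

```shell
# Assumed path: /etc/portage/env/dev-lang/rust
# Triples of "llvm-target:rust-tuple:gentoo-tuple"; adjust to the actual target
RUST_CROSS_TARGETS=(
    "AArch64:aarch64-unknown-linux-gnu:aarch64-unknown-linux-gnu"
)
I_KNOW_WHAT_I_AM_DOING_CROSS="yes"
```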

Now, rebuild rust and llvm with the new configuration.

Next, install the Rust standard library for the target architecture. This is accomplished by first adding it to the cross overlay, then installing it.

Now unmask it:

And install it:

Now cross-emerging Rust packages should work (provided the packages do not have bugs such as accidentally using the host linker).

Chroot into the target environment

 * Create a chroot script
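A minimal chroot helper might look like the following sketch; the target root path is an assumption:

```shell
#!/bin/sh
# Bind-mount pseudo filesystems and enter the target root (assumed path)
TARGET=/usr/aarch64-unknown-linux-gnu

mount --rbind /dev  "$TARGET/dev"
mount -t proc none  "$TARGET/proc"
mount --rbind /sys  "$TARGET/sys"
cp -L /etc/resolv.conf "$TARGET/etc/"

chroot "$TARGET" /bin/bash --login
```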


 * To chroot into the new environment, run the script and complete the setup of the build environment.
 * Create the Portage temporary directory:
 * Update the relevant configuration files and run:
 * Check/reload config:
 * To run inside the chroot, it is required that other config variables are passed to Portage. This can be done with an alias:
 * Packages that were unable to cross-compile can now be built with:
 * After installation of the base system, the target environment can be finished according to the standard installation handbook for the architecture used.
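As an illustrative sketch of such an alias (the FEATURES values are assumptions and should be adapted to what actually fails in the emulated chroot):

```shell
# Inside the target chroot: relax sandbox features that break under emulation
alias emerge='FEATURES="-pid-sandbox -network-sandbox -usersandbox" emerge'
```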

Known bugs and limitations

 * On amd64 build hosts, some cross-compiled packages end up in the wrong library directory in the target environment even if it is not a 64-bit target, so set a symlink:


 * Some packages that create new system users fail to create them in the target environment and create them on the build host instead (e.g. ). Create the user manually or emerge the package again in the chroot.
 * If the build host is no-multilib, the target environment is multilib, and compilation fails because of missing 32-bit support in the cross compiler:
 * Temporarily remove the dependency on  in ;
 * After that, emerge  in the chroot environment;
 * Now it is possible to emerge  in the chroot.
 * If, in an arm64 chroot, emerge fails just after the message  is printed to the terminal, add  to .
 * LTO users on the build host can run into issues with autoconf being unable to check for endianness in programs such as Python, so it is wise to disable LTO flags in make.conf while running crossdev.
 * Compiling musl with LTO has a negative effect in most cases. See  for more information.

Updating the cross toolchain

 * After the cross toolchain has been created for the first time, Portage takes over managing updates for the toolchain components.
 * There are cases where emerging a new version of a toolchain component can fail.
 * In particular, this is known to happen for major version updates of gcc.
 * If failures are encountered when emerging a cross gcc toolchain, e.g. when updating as in
 * fix this by bootstrapping the toolchain with crossdev again, as in.
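For illustration (the tuple is an assumption), recovering from a failed toolchain update can look like letting crossdev rebuild the toolchain from scratch:

```shell
crossdev --clean aarch64-unknown-linux-gnu            # remove the broken toolchain
crossdev --stable --target aarch64-unknown-linux-gnu  # bootstrap it again
```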

Cross building static binaries for closed systems
Static binaries are not needed often, but there are some occasions where they are useful:


 * When creating (Docker) container images. According to the container philosophy, it is recommended to run only one process per container and to put as little as possible into a container. In this case a single statically linked binary is desirable.
 * When a program will run on a closed system like an ARM device running Android. On Android it is possible to either use Android's NDK and/or its libc implementation "bionic", or to build a statically linked binary that depends on no system libraries and can run standalone.

Cross build toolchain for static binaries
This is much the same as above, except that it is not necessary to emerge a full @system; the build essentials are enough.


 * If the target is an Android device, the architecture is probably armv7a or armv8a, so the tuple  could be
 * Compiling the build essentials for an example Android toolchain could look like:


 * should be switched on after installation of the base system (e.g. if the target program needs )
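A sketch of building just the essentials with static libraries enabled; the tuple, USE flag placement, and package list are assumptions:

```shell
# Build only what is needed to produce static binaries for the target
USE="static-libs" armv7a-hardfloat-linux-gnueabi-emerge --ask \
    sys-libs/zlib sys-libs/ncurses
```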

Customized glibc
There are some issues with glibc that do not affect alternative libcs like uClibc or musl. When using glibc, pay attention to the following.


 * Even when a program was built with , the resulting binaries aren't necessarily fully static. Because of glibc design decisions, at least the  files are looked up dynamically. To link NSS statically, the flag  can be used when compiling glibc.


 * When a program is linked statically and makes use of glibc's NSS features like , the lookup of user names fails when  is set to "compat". Set it to "files" in this case:
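The relevant lines in the target's nsswitch.conf would then look like this sketch:

```
passwd: files
shadow: files
group:  files
hosts:  files dns
```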


 * glibc has hard-coded absolute paths for some configuration files like the  file. On a closed system (like Android) these files don't necessarily exist, and without them DNS lookups will fail. Normally these files can't be written without root privileges. If becoming root is not an option, glibc must be customized to look in a different location for these files. Keep in mind that this is only necessary if the program makes use of glibc functions which require these files, but virtually every program that connects to the internet uses  and therefore needs a.

(Optional) Create a customized glibc ebuild
This step is only necessary if the glibc config files don't reside in  and the target program makes use of glibc's lookup functions (probably when the program does DNS or username lookups).


 * 1) Copy the content of the  directory to a local ebuild repository and create a custom glibc ebuild.
 * 2) Make the following changes:
 * 3) Remove the KEYWORDS line (to prevent accidental use in other environments).
 * 4) If required, change the path to the config files by adding  to the  section.

Rebuild a customized glibc
Rebuild glibc with static options. If the default path was changed in the previous step, remember to change  here as appropriate.

Build the desired Software

 * Chroot into the target environment (e.g. )
 * If the default path for glibc config files was changed, symlink it
 * Build a statically linked package
 * An example for Android with is:
 * An example for a statically linked is:

Example: Use a statically linked privoxy on Android

 * Unzip the binary tarball from above to  on the Android device (e.g. through the ssh server "sshelper" via Google Play or "ftpserver" via F-Droid).
 * Change the Privoxy config 's entries with  and  to  and .


 * Put a  in  (e.g. with nameservers from www.opennicproject.org or the ad-blocking nameservers from www.alternate-dns.com).
 * Put a Privoxy startscript in
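A start script sketch; all paths on the device are assumptions:

```shell
#!/system/bin/sh
# Start the statically linked Privoxy in the background (assumed install path)
/data/local/privoxy/privoxy --no-daemon /data/local/privoxy/config &
```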


 * Install the Android app "connectbot" (available via F-Droid)
 * Open a local connection named "privoxy" and close it again
 * Long-press the "privoxy" connection to open the context menu and choose "edit host"
 * Insert as "automation"-task:  (a newline is needed at the end)
 * To start Privoxy, open the connection in connectbot
 * To browse through Privoxy, add localhost, port 8118, as proxy to the mobile data APNs (in the Android control panel at "mobile networks" → "APNs") and to Wi-Fi networks.