
Docker is a container virtualization environment that can establish development or runtime environments without modifying the environment of the base operating system. It deploys container instances that provide thin virtualization using the host kernel, which makes it faster and lighter than full hardware virtualization.

Because containers share the host kernel, a container that triggers a kernel panic will panic the host operating system as well.



If the kernel has not been configured properly before merging the app-containers/docker package, a list of missing kernel options will be printed by emerge. These kernel features must be enabled manually.

Press the / key while in the ncurses-based menuconfig to search for the name of a configuration option.

For the most up-to-date values, check the contents of the CONFIG_CHECK variable in the /var/db/repos/gentoo/app-containers/docker/docker-9999.ebuild file.

Kernel configuration may vary across kernel versions, Docker versions, and USE flags. It is recommended to read the messages printed for app-containers/docker when emerging it, and to recompile the kernel based on any options reported as unset. A graphical representation of the configuration looks something like this:
KERNEL Configuring the kernel for Docker
General setup  --->
    [*] POSIX Message Queues
    -*- Control Group support  --->
        [*]   Memory controller 
        [*]     Swap controller
        [*]       Swap controller enabled by default
        [*]   IO controller
        [ ]     IO controller debugging
        [*]   CPU controller  --->
              [*]   Group scheduling for SCHED_OTHER
              [*]     CPU bandwidth provisioning for FAIR_GROUP_SCHED
              [*]   Group scheduling for SCHED_RR/FIFO
        [*]   PIDs controller
        [*]   Freezer controller
        [*]   HugeTLB controller
        [*]   Cpuset controller
        [*]     Include legacy /proc/<pid>/cpuset file
        [*]   Device controller
        [*]   Simple CPU accounting controller
        [*]   Perf controller
        [ ]   Example controller 
    -*- Namespaces support
        [*]   UTS namespace
        -*-   IPC namespace
        [*]   User namespace
        [*]   PID Namespaces
        -*-   Network namespace
-*- Enable the block layer  --->
    [*]   Block layer bio throttling support
-*- IO Schedulers  --->
    [*]   CFQ IO scheduler
        [*]   CFQ Group Scheduling support   
[*] Networking support  --->
      Networking options  --->
        [*] Network packet filtering framework (Netfilter)  --->
            [*] Advanced netfilter configuration
            [*]  Bridged IP/ARP packets filtering
                Core Netfilter Configuration  --->
                  <*> Netfilter connection tracking support 
                  *** Xtables matches ***
                  <*>   "addrtype" address type match support
                  <*>   "conntrack" connection tracking match support
                  <M>   "ipvs" match support
            <M> IP virtual server support  --->
                  *** IPVS transport protocol load balancing support ***
                  [*]   TCP load balancing support
                  [*]   UDP load balancing support
                  *** IPVS scheduler ***
                  <M>   round-robin scheduling
                  [*]   Netfilter connection tracking
                IP: Netfilter Configuration  --->
                  <*> IPv4 connection tracking support (required for NAT)
                  <*> IP tables support (required for filtering/masq/NAT)
                  <*>   Packet filtering
                  <*>   IPv4 NAT
                  <*>     MASQUERADE target support
                  <*>   iptables NAT support  
                  <*>     MASQUERADE target support
                  <*>     NETMAP target support
                  <*>     REDIRECT target support
        <*> 802.1d Ethernet Bridging
        [*] QoS and/or fair queueing  ---> 
            <*>   Control Group Classifier
        [*] L3 Master device support
        [*] Network priority cgroup
        -*- Network classid cgroup
Device Drivers  --->
    [*] Multiple devices driver support (RAID and LVM)  --->
        <*>   Device mapper support
        <*>     Thin provisioning target
    [*] Network device support  --->
        [*]   Network core driver support
        <M>     Dummy net driver support
        <M>     MAC-VLAN support
        <M>     IP-VLAN support
        <M>     Virtual eXtensible Local Area Network (VXLAN)
        <*>     Virtual ethernet pair device
    Character devices  --->
        -*- Enable TTY
        -*-   Unix98 PTY support
        [*]     Support multiple instances of devpts (option appears if you are using systemd)
File systems  --->
    <*> Overlay filesystem support 
    Pseudo filesystems  --->
        [*] HugeTLB file system support
Security options  --->
    [*] Enable access key retention support
    [*]   Enable register of persistent per-UID keyrings
    [*]   Diffie-Hellman operations on retained keys

After exiting the kernel configuration, rebuild the kernel. If the kernel rebuild also performs a kernel upgrade, be sure to rebuild the bootloader's menu configuration, then reboot the system to the newly recompiled kernel binary.
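The rebuild sequence can be sketched as a small helper function. This is a sketch under assumptions: kernel sources at /usr/src/linux and GRUB as the bootloader (adjust both for the actual setup). Passing echo as the second argument prints the commands instead of running them:

```shell
# Hypothetical helper sketching a manual kernel rebuild. Assumes sources in
# $src and GRUB as bootloader; pass "echo" as $2 to only print the commands.
rebuild_kernel() {
    src="${1:-/usr/src/linux}"
    run="${2:-}"
    cd "$src" || return 1
    $run make -j"$(nproc)" &&
    $run make modules_install &&
    $run make install &&
    $run grub-mkconfig -o /boot/grub/grub.cfg
}
# rebuild_kernel                       # real rebuild, as root
# rebuild_kernel /usr/src/linux echo   # dry run: print the commands
```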

Compatibility check

Docker provides a script for checking kernel configuration compatibility:

user $/usr/share/docker/contrib/
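If that script is unavailable, a rough manual check can be scripted against a kernel configuration file. The option list below is a small illustrative subset chosen for this sketch, not the authoritative CONFIG_CHECK list from the ebuild:

```shell
# Check a kernel config file for a few Docker-relevant options (illustrative
# subset only; consult the ebuild's CONFIG_CHECK for the real list).
check_docker_kconfig() {
    conf="$1"   # e.g. /proc/config.gz or /usr/src/linux/.config
    case "$conf" in
        *.gz) reader="zcat" ;;
        *)    reader="cat"  ;;
    esac
    for opt in CONFIG_NAMESPACES CONFIG_CGROUPS CONFIG_VETH CONFIG_BRIDGE \
               CONFIG_OVERLAY_FS CONFIG_NETFILTER_XT_MATCH_ADDRTYPE; do
        if $reader "$conf" 2>/dev/null | grep -q "^${opt}=[ym]"; then
            echo "$opt: ok"
        else
            echo "$opt: missing"
        fi
    done
}
# check_docker_kconfig /proc/config.gz
# check_docker_kconfig /usr/src/linux/.config
```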


In versions prior to 20.10.1, the docker command line tool was included in app-containers/docker; in newer versions it has been moved to app-containers/docker-cli.

Install app-containers/docker and app-containers/docker-cli:

root #emerge --ask --verbose app-containers/docker app-containers/docker-cli

PaX kernel

When running a PaX kernel (like the deprecated hardened-sources package), memory protection on containerd needs to be disabled.

Tools in the sys-apps/paxctl package are necessary for this operation. See Hardened/PaX Quickstart for an introduction.

root #/sbin/paxctl -m /usr/bin/containerd

For the hello-world example, set this flag for containerd-shim and runc:

root #/sbin/paxctl -m /usr/bin/containerd-shim
root #/sbin/paxctl -m /usr/bin/runc

If an issue with denied chmods in chroots occurs, a more recent version of Docker (>=1.12) is needed. Use the ~amd64 keyword for Docker and the dependencies subsequently listed when running emerge app-containers/docker again.




After Docker has been successfully installed, add it to the system's default runlevel then tell OpenRC to start the daemon:

root #rc-update add docker default
root #rc-service docker start

If the registry service is required:

root #rc-update add registry default
root #rc-service registry start

If additional options are required to be passed to the docker daemon, then edit the /etc/conf.d/docker file. See upstream documentation for the various options that can be passed to the DOCKER_OPTS variable.


To have Docker start on boot, enable it:

root #systemctl enable docker.service

To start it now:

root #systemctl start docker.service

To pass any additional options to the docker daemon, create the /etc/docker/daemon.json file. See the upstream documentation for the various options that can be placed in this configuration file.
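For illustration, such a file might look like this. The options shown (log-driver and log-opts are valid dockerd settings) are examples only, not requirements:

FILE /etc/docker/daemon.json
{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "10m",
        "max-file": "3"
    }
}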


Add relevant users to the docker group:

root #usermod -aG docker <username>
Allowing a user to talk to the Docker daemon is equivalent to giving them full root access to the host.

Storage driver

By default, Docker will use the device-mapper storage driver. View Docker's settings in detail with the info subcommand:

user $docker info

To change the storage driver, first verify the host machine's kernel has support for the desired filesystem. The btrfs filesystem is used in this example:

user $grep btrfs /proc/filesystems

Be aware that the root of the Docker engine (/var/lib/docker/ by default) must reside on the btrfs filesystem. If the btrfs storage pool is located under /mnt or /srv, be sure to change the root (called the 'graph' in Docker speak) of the engine accordingly.


OpenRC users will need to adjust the DOCKER_OPTS variable in the service configuration file located in /etc/conf.d. The example below displays a change to the storage driver and the docker engine root:

FILE /etc/conf.d/docker
DOCKER_OPTS="--storage-driver btrfs --data-root /srv/var/lib/docker"

Start or restart the docker service for the changes to take effect, then validate them:

root #docker info


systemd users will need to create a /etc/docker/daemon.json file in order to change the storage driver for the docker service. For example, to use the btrfs driver:

FILE /etc/docker/daemon.json
    "storage-driver": "btrfs"

(Re)start the service in order to make the changes take effect:

root #systemctl restart docker



In order to test the installation, run the following command:

user $docker run --rm hello-world

Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:

For more examples and ideas, visit:

This first downloads the image named hello-world from Docker Hub (if it has not been downloaded locally yet), then runs it inside new namespaces. Its purpose is simply to display some text from within a container.

Building from a Dockerfile

Create a new Dockerfile in an empty directory with the following content:

FILE Dockerfile
FROM php:5.6-apache


Build the image, then run a container from it:

user $docker build -t my-php-app .
user $docker run -it --rm --name my-running-app my-php-app

Own images

There are two different ideas of how a container should be built:

  • The minimal approach: According to the container philosophy a container should only contain what is needed to serve one process. In this case ideally the container consists of one static binary.
  • The VM approach: A container can be treated like a full system virtualization environment. In this case the container includes a whole operating system.

Build environment for the image

The image can be created from a live system or, preferably, from a dedicated build environment.

To create a build environment for the image, follow the cross build environment guide. There is no need to emerge a full @system; the build essentials are enough.

The toolchain tuple could look like x86_64-docker-linux-gnu.

The build essentials can be built like this:

root #x86_64-docker-linux-gnu-emerge -uva1 --keep-going $(egrep '^[a-z]+' /usr/portage/profiles/default/linux/packages.build) portage openrc util-linux netifrc

The minimal approach: Statically linked binaries using Crossdev

There are some caveats with this approach; keep the hints for statically linked binaries in mind.

To build an nginx-image, first chroot into the build environment (e.g. chroot-x86_64-docker).

Build the desired package statically linked:

root #NGINX_MODULES_HTTP="gzip" CFLAGS="$(emerge --info|grep ^CFLAGS|grep -oP '(?<=").*(?=")') -static" CXXFLAGS=$CFLAGS LDFLAGS="$(emerge --info|grep LDFLAGS|grep -oP '(?<=").*(?=")') -static" PKGDIR=/tmp/ emerge-chroot -va1 --buildpkgonly nginx:mainline

Extract the binary package to a temporary directory (e.g. mkdir /tmp/nginx && cd /tmp/nginx && tar xjvf /tmp/www-servers/nginx-*.tbz2).

Change the nginx configuration: at least add daemon off; and adjust the listen directives as needed.

Add etc/passwd, etc/resolv.conf, etc/nsswitch.conf and an appropriate etc/ssl directory. Make sure etc/nsswitch.conf has "files" instead of "compat" and the etc/passwd file has an "nginx" user entry.
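The /etc pieces above can be sketched as follows, run from inside the staging directory. The uid/gid and nameserver values are illustrative placeholders, not required values:

```shell
# Create minimal /etc files for the static nginx rootfs; run inside the
# staging directory (e.g. /tmp/nginx). Values below are placeholders.
mkdir -p etc/ssl
echo 'hosts: files dns' > etc/nsswitch.conf                 # "files" instead of "compat"
echo 'nameserver 192.0.2.1' > etc/resolv.conf               # placeholder resolver
echo 'nginx:x:101:101:nginx:/:/sbin/nologin' > etc/passwd   # "nginx" user entry
```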

Create the docker image out of the current directory:

user $tar --numeric-owner -cj --to-stdout . |docker import - nginx-image

Spawn a container and start nginx:

user $docker run -p 80:80 -p 443:443 --name nginx-test -ti --rm nginx-image nginx

Alternative minimal approach: Dynamically linked binaries using Kubler

Kubler is a Gentoo-based image meta builder. It helps automate the build process for creating Gentoo-based containers and is especially helpful for those new to Crossdev. It allows fine-grained configuration of the build process, but also comes with a list of predefined containers that are built on the current system against the current Portage tree. The script extracts the dynamic libraries required by the application and copies them into the container. The containers are linked to a static busybox image that allows basic shell interaction, but the only way to update a container is to rebuild it with the Kubler script.

The VM-like approach

Create the image out of the full environment:

user $cd /usr/x86_64-docker-linux-gnu/ && tar --numeric-owner --exclude=./{proc,sys,tmp/portage} -cj --to-stdout . | docker import - gentoo-image

Spawn a new gentoo container and start a shell:

user $docker run -v /usr/portage:/usr/portage --name gentoo-test -ti gentoo-image /bin/bash

This image can be used as a base image. To build an nginx image, for example, run emerge nginx inside the container, then commit it back as a new image afterwards (the target name gentoo-nginx below is arbitrary):

user $docker commit --message "nginx-image" gentoo-test gentoo-nginx


Docker service crashes/fails to start (OpenRC)

After adding --storage-driver btrfs to DOCKER_OPTS and restarting the Docker service, Docker may crash. Check this with rc-status.

If this is the case, try enabling the btrfs USE flag for the Docker package and re-emerging it.

root #touch /etc/portage/package.use/docker
root #nano /etc/portage/package.use/docker
FILE /etc/portage/package.use/docker
app-containers/docker btrfs device-mapper

Install Docker with the new USE flags

root #emerge --update --deep --newuse app-containers/docker

Docker service restart

root #rc-service docker restart

Docker service fails to start (systemd)

Some users have issues starting docker.service because of a device-mapper error. This can be solved by using a different storage driver, e.g. the "overlay" graph driver instead of the "device-mapper" graph driver.

The "overlay" graph driver requires "Overlay filesystem support" in the kernel configuration:

KERNEL Configuring the kernel for Docker
File systems  --->
    <*> Overlay filesystem support

Add the following to /etc/portage/package.use/docker, then re-emerge Docker to resolve the issue:

FILE /etc/portage/package.use/docker
app-containers/docker overlay -device-mapper

In case of an error saying Error starting daemon: Error initializing network controller: list bridge addresses failed: no available network, the docker0 network bridge may be missing; it can be recreated manually (the upstream Docker issue tracker provides a bash script solution for this).
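A minimal sketch of recreating the bridge with iproute2 is shown below. 172.17.0.1/16 is Docker's default address pool; adjust it if it conflicts with the local network. Passing echo prints the commands instead of running them:

```shell
# Recreate the default docker0 bridge (run as root). Pass "echo" as $1 to
# only print the commands (dry run).
create_docker0() {
    run="${1:-}"
    $run ip link add name docker0 type bridge &&
    $run ip addr add 172.17.0.1/16 dev docker0 &&
    $run ip link set docker0 up
}
# create_docker0        # as root, actually create the bridge
# create_docker0 echo   # dry run
```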

Docker service runs but fails to start container (systemd)

If using systemd-232 or higher and an error related to cgroups is received:

user $docker run hello-world
container_linux.go:247: starting container process caused ...
docker: Error response from daemon: invalid header field value "oci runtime error: container init caused \"rootfs_linux.go:54: mounting \"cgroup\" to \"...\" at \"/sys/fs/cgroup\" caused \"...\""

Add the following line to the kernel boot parameters:

CODE Kernel Boot Parameter

Docker service runs but fails to start container (systemd)

If using systemd-232 or higher, and it throws this error:

user $docker run hello-world
applying cgroup configuration for process caused \"open /sys/fs/cgroup/docker/cpuset.cpus.effective: no such file or directory

Add the following line to the kernel boot parameters:

CODE Kernel Boot Parameter

If using systemd and this error is received:

user $docker run hello-world
cgroup mountpoint does not exist

Run the following commands as root:

root #mkdir /sys/fs/cgroup/systemd
root #mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

This is not ideal as these commands must be run after each reboot, but it works.
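One way to persist the mount across reboots may be an fstab entry equivalent to the mount command above; this is an untested sketch (a local.d script or a custom mount unit running the same command are alternatives):

FILE /etc/fstab
cgroup   /sys/fs/cgroup/systemd   cgroup   none,name=systemd   0 0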

Docker service fails because cgroup device not mounted (systemd)

By default, systemd uses a hybrid cgroup hierarchy combining cgroup (v1) and cgroup2. Docker still needs cgroup v1 devices, so activate the cgroup-hybrid USE flag for systemd.

Activate USE flag for systemd

FILE /etc/portage/package.use/systemd
sys-apps/systemd cgroup-hybrid

Install systemd with the new USE flags

root #emerge --ask --oneshot sys-apps/systemd


If systemd-networkd is used for network management, additional options are needed for IP forwarding and/or IP masquerading. In the [Network] section of the .network file for the relevant interface:

FILE /etc/systemd/network/
[Network]
IPForward=yes
IPMasquerade=yes

These options are used instead of the sysctl settings for IP forwarding and/or masquerading.

In case the Docker containers are shutting down with errors from systemd-udevd complaining that it cannot assign a persistent MAC address to virtual interfaces:

FILE /etc/systemd/network/
[Link]
NamePolicy=kernel database onboard slot path

See also

  • LXC — a virtualization system making use of the cgroups feature of the Linux kernel.

External resources