
Docker is a container virtualization environment which can establish development or runtime environments without modifying the environment of the base operating system. It has the ability to deploy instances of containers that provide a thin virtualization, using the host kernel, which makes it faster and lighter than full hardware virtualization.

Because containers share the host kernel, a kernel panic produced inside a container is a kernel panic of the host operating system.


USE flags

USE flags for app-containers/docker (the core functions needed to create Docker images and run Docker containers):

apparmor Enable support for the AppArmor application security system
btrfs Enables dependencies for the "btrfs" graph driver, including necessary kernel flags.
container-init Makes a statically-linked init system (tini) available inside the container.
device-mapper Enables dependencies for the "devicemapper" graph driver, including necessary kernel flags.
overlay Enables dependencies for the "overlay" graph driver, including necessary kernel flags.
seccomp Enable seccomp (secure computing mode) to perform system call filtering at runtime to increase security of programs
selinux !!internal use only!! Security Enhanced Linux support, this must be set by the selinux profile or breakage will occur


If the kernel has not been configured properly before merging the app-containers/docker package, a list of missing kernel options will be printed by emerge. These kernel features must be enabled manually.

Press the / key while in the ncurses-based menuconfig to search the name of the configuration option.

For the most up-to-date values, check the CONFIG_CHECK variable in /var/db/repos/gentoo/app-containers/docker/docker-9999.ebuild.
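The CONFIG_CHECK variable is a space-separated list of option names, with a ~ prefix marking warn-only checks. A quick way to turn such a list into one option per line, sketched here against a small illustrative excerpt rather than the full current list:

```shell
# Turn a CONFIG_CHECK-style list into one kernel option name per line.
# The value below is an illustrative excerpt, not the full list from
# the current ebuild.
CONFIG_CHECK="~NAMESPACES ~NET_NS ~PID_NS ~IPC_NS ~UTS_NS ~CGROUPS ~VETH"
echo "$CONFIG_CHECK" | tr ' ' '\n' | sed 's/^~//'
```

Each printed name corresponds to a CONFIG_* symbol that can be searched for in menuconfig with the / key.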

A graphical representation would look something like this:

Kernel configuration varies with kernel version, Docker version, and USE flags. It is recommended to read the messages emerge prints for app-containers/docker and to recompile the kernel with any options reported as unset.
KERNEL Configuring the kernel for Docker
General setup  --->
   [*] POSIX Message Queues
   BPF subsystem  --->
      [*] Enable bpf() system call (Optional)
   [*] Control Group support  --->
      [*] Memory controller
      [*] Swap controller (Optional)
      [*]   Swap controller enabled by default (Optional)
      [*] IO controller (Optional)
      [*] CPU controller  --->
         [*] Group scheduling for SCHED_OTHER (Optional)
         [*]   CPU bandwidth provisioning for FAIR_GROUP_SCHED (Optional)
         [*] Group scheduling for SCHED_RR/FIFO (Optional)
      [*] PIDs controller (Optional)
      [*] Freezer controller
      [*] HugeTLB controller (Optional)
      [*] Cpuset controller
         [*]  Include legacy /proc/<pid>/cpuset file (Optional)
      [*] Device controller
      [*] Simple CPU accounting controller
      [*] Perf controller (Optional)
      [*] Support for eBPF programs attached to cgroups (Optional)
   [*] Namespaces support
      [*] UTS namespace
      [*] IPC namespace
      [*] User namespace (Optional)
      [*] PID Namespaces
      [*] Network namespace
General architecture-dependent options  --->
   [*] Enable seccomp to safely execute untrusted bytecode (Optional)
[*] Enable the block layer  --->
   [*] Block layer bio throttling support (Optional)
[*] Networking support  --->
    Networking options  --->
       [*] Network packet filtering framework (Netfilter)  --->
            [*] Advanced netfilter configuration
            [*]   Bridged IP/ARP packets filtering
               Core Netfilter Configuration  --->
                  [*] Netfilter connection tracking support
                  [*] Network Address Translation support
                  [*] MASQUERADE target support
                  [*] Netfilter Xtables support
                  [*]    "addrtype" address type match support
                  [*]    "conntrack" connection tracking match support
                  [*]    "ipvs" match support (Optional)
                  [*]    "mark" match support
            [*] IP virtual server support  ---> (Optional)
               [*] TCP load balancing support (Optional)
               [*] UDP load balancing support (Optional)
               [*] round-robin scheduling (Optional)
               [*] Netfilter connection tracking (Optional)
            IP: Netfilter Configuration  --->
               [*] IP tables support
               [*]    Packet filtering
               [*]    iptables NAT support
               [*]      MASQUERADE target support
               [*]      REDIRECT target support (Optional)
        [*] 802.1d Ethernet Bridging
        [*]   VLAN filtering
        [*] QoS and/or fair queueing  --->  (Optional)
           [*] Control Group Classifier (Optional)
        [*] L3 Master device support
        [*] Network priority cgroup (Optional)
Device Drivers  --->
   [*] Multiple devices driver support (RAID and LVM)  --->
      [*] Device mapper support (Optional)
      [*]  Thin provisioning target (Optional)
    [*] Network device support  --->
       [*] Network core driver support
       [*]   Dummy net driver support
       [*]   MAC-VLAN net driver support
       [*]   IP-VLAN support
       [*]   Virtual eXtensible Local Area Network (VXLAN)
       [*]   Virtual ethernet pair device
    Character devices  --->
        -*- Enable TTY
        -*-    Unix98 PTY support
        [*]       Support multiple instances of devpts (option appears if you are using systemd)
File systems  --->
   [*] Btrfs filesystem support (Optional)
   [*]   Btrfs POSIX Access Control Lists (Optional)
   [*] Overlay filesystem support
   Pseudo filesystems  --->
      [*] HugeTLB file system support (Optional)
Security options  --->
   [*] Enable access key retention support

After exiting the kernel configuration, rebuild the kernel. If the kernel rebuild also performs a kernel upgrade, be sure to rebuild the bootloader's menu configuration, then reboot the system to the newly recompiled kernel binary.
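Whether the required options actually made it into the rebuilt kernel can be spot-checked by grepping the kernel's .config. The sketch below runs against a generated sample file so it is self-contained; on a real system, point config at /usr/src/linux/.config (or at the output of zcat /proc/config.gz if CONFIG_IKCONFIG_PROC is enabled):

```shell
# Spot-check a kernel config for a few options Docker needs.
# A sample config is generated here so the sketch is self-contained;
# replace "$config" with /usr/src/linux/.config on a real system.
config=$(mktemp)
printf '%s\n' 'CONFIG_NAMESPACES=y' 'CONFIG_CGROUPS=y' > "$config"
for opt in CONFIG_NAMESPACES CONFIG_CGROUPS CONFIG_VETH; do
    if grep -q "^${opt}=[ym]" "$config"; then
        echo "${opt} ok"
    else
        echo "${opt} MISSING"
    fi
done
```

Options built as modules (=m) also pass this check, but must be loaded before the Docker service starts.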

Compatibility check

Docker provides a script for checking kernel configuration compatibility:

user $/usr/share/docker/contrib/


In versions prior to 20.10.1, the docker command-line tool was included with app-containers/docker; in newer versions it has been moved to app-containers/docker-cli.

Install app-containers/docker and app-containers/docker-cli:

root #emerge --ask --verbose app-containers/docker app-containers/docker-cli

PaX kernel

When running a PaX kernel (like the deprecated hardened-sources package), memory protection on containerd needs to be disabled.

Tools in the sys-apps/paxctl package are necessary for this operation. See Hardened/PaX Quickstart for an introduction.

root #/sbin/paxctl -m /usr/bin/containerd

For the hello-world example, set this flag for containerd-shim and runc:

root #/sbin/paxctl -m /usr/bin/containerd-shim
root #/sbin/paxctl -m /usr/bin/runc

If an issue with denied chmods in chroots occurs, a more recent version of Docker (>=1.12) is needed. Apply the ~amd64 keyword to Docker and to the dependencies emerge subsequently lists, then run emerge app-containers/docker again.


The Docker daemon configuration is located at /etc/docker/daemon.json; see the upstream dockerd documentation for the options this file accepts.

The current docker configuration can be viewed with:

root #docker info



OpenRC users can adjust the DOCKER_OPTS variable in the service configuration file located in /etc/conf.d. The example below changes the storage driver to btrfs and the Docker engine root to /srv/var/lib/docker:

FILE /etc/conf.d/docker
DOCKER_OPTS="--storage-driver btrfs --data-root /srv/var/lib/docker"
See upstream documentation for the various options that can be passed to the DOCKER_OPTS variable.
Configuration changes will not be active until the docker service is reloaded or restarted.

After Docker has been successfully installed and configured, it can be added to the system's default runlevel, starting it at boot:

root #rc-update add docker default
root #rc-service docker start

If the registry service is required:

root #rc-update add registry default
root #rc-service registry start


To have Docker start on boot, enable it:

root #systemctl enable docker.service

To start it now:

root #systemctl start docker.service
Systemd will use the configuration present in /etc/docker/daemon.json; information about configuring this file is given above.


Add relevant users to the docker group:

root #usermod -aG docker <username>
Allowing a user to talk to the Docker daemon is equivalent to giving that user full root access to the host.

Storage driver

The overlay2 storage driver is the preferred storage driver on all currently supported Linux distributions, and requires no extra configuration.

View Docker's settings in detail with the info subcommand:

user $docker info

To change the storage driver, first verify that the host machine's kernel supports the desired filesystem. The btrfs filesystem will be used in this example:

user $grep btrfs /proc/filesystems

Btrfs requires additional configuration if the system does not already use btrfs for Docker's storage:

FILE /etc/docker/daemon.jsonSet the docker storage driver to use btrfs
{
    "storage-driver": "btrfs"
}

Be aware that the root of the Docker engine (/var/lib/docker/ by default) must reside on a btrfs filesystem. If the btrfs storage pool is located under /mnt or /srv, be sure to change the root (called the 'graph' in Docker parlance) of the engine accordingly.
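A malformed daemon.json silently prevents the daemon from starting, so it is worth validating the file after editing. A minimal sketch, writing to a temporary directory here rather than to /etc/docker/; the data-root path is an example value:

```shell
# Write a daemon.json selecting btrfs and a custom data root, then
# validate it as JSON before restarting the daemon. The data-root
# path below is an example; adjust it to the actual btrfs mount.
tmp=$(mktemp -d)
cat > "$tmp/daemon.json" <<'EOF'
{
    "storage-driver": "btrfs",
    "data-root": "/srv/var/lib/docker"
}
EOF
python3 -m json.tool "$tmp/daemon.json" > /dev/null && echo "daemon.json: valid JSON"
```

After copying the validated file into place, restart the Docker service for the change to take effect.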


Port forwarding must be enabled for docker container networking to work.

This can be temporarily enabled using procfs:

user $sudo sysctl net.ipv4.ip_forward=1

A more permanent change can be made with:

FILE /etc/sysctl.d/local.confEnable ip forwarding persistently
net.ipv4.ip_forward = 1


In order to test the installation, run the following command:

user $docker run --rm hello-world

Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:

For more examples and ideas, visit:

That will first download the image named hello-world from Docker Hub (if it has not already been downloaded locally), then run it inside new namespaces. Its purpose is simply to display some text from within a container.

For most commands, the container ID can be used in place of the container name; not all containers have a name.

Listing images

Current images can be listed with:

user $docker images

Starting a container from an image

A new container can be started using an image with run. The following command starts a docker container which is an Alpine Linux shell:

user $docker run -it --rm alpine:3.18 ash
Adding -it starts the container interactively, and --rm deletes it once execution is complete.

Listing containers

Current containers can be listed with:

user $docker container list

Viewing container config

To view the configuration for a container:

user $docker container inspect {container name}

Running a command in a running container

To execute a command in an already running container:

user $docker exec {container name} {command}

Stopping a container

A running container can be stopped with:

user $docker stop {container name}

Starting a container

If a container has been stopped, it can be started again with:

user $docker start {container name}

Building from a Dockerfile

Create a new Dockerfile in an empty directory with the following content:

FILE Dockerfile
FROM php:5.6-apache


user $docker build -t my-php-app .
user $docker run -it --rm --name my-running-app my-php-app

Custom Images

Containers are generally structured with either of the following approaches:

  • The minimal approach: According to the container philosophy a container should only contain what is needed to serve one process. In this case ideally the container consists of one static binary.
  • The VM approach: A container can be treated like a full system virtualization environment. In this case the container includes a whole operating system.

Building the image environment

The image can be constructed using many methods. The simplest would involve adding a single binary which can be executed. Using emerge to generate the environment is a simple and effective method, but more advanced methods such as using crossdev or catalyst are possible.

Using emerge to build the environment

Portage can be used to simply construct an application environment.

A Gentoo-based Docker image can be constructed by using emerge with the --root flag. Simply --oneshot the desired packages to that destination.

The following command creates a container for net-p2p/transmission at /var/lib/chroot/builddir/transmission:

root #emerge --ask --verbose --root /var/lib/chroot/builddir/transmission --oneshot transmission
This approach includes many packages, such as compilers, which are not required at runtime.
This approach does not include a shell or even an init system.

Alternative minimal approach: Dynamically linked binaries using Kubler

Kubler is a generic, extendable build orchestrator, written in Bash. It can be used to take advantage of Portage's features to build lightweight Docker or Podman images without needing to mess with crossdev, or as a tool to assist with ebuild development.

Detailed instructions for using Kubler are available in its documentation.

Packing the environment into a tarball

Once the build environment has been created, the contents can be archived with tar to be imported into Docker.

The following command creates gentoo-transmission.tar.gz based on the contents of /var/lib/chroot/builddir/transmission/:

root #tar -czf gentoo-transmission.tar.gz -C /var/lib/chroot/builddir/transmission/ .
When backing up image sources, signing the tarball with gpg can be useful to verify that the file has not been modified after creation.
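The tarball layout matters: the image's root must be the top level of the archive, which is why the command above uses -C to change into the build directory before archiving. A self-contained sketch with a throwaway tree (the paths here are temporary stand-ins for the real build root):

```shell
# Build a tiny stand-in rootfs and pack it the same way as above.
root=$(mktemp -d)
mkdir -p "$root/usr/bin" "$root/config"
echo '#!/bin/sh' > "$root/usr/bin/transmission-daemon"  # placeholder file
tar -czf "$root.tar.gz" -C "$root" .
# Entries are listed relative to the archive root (./usr/bin/...),
# so they unpack at / when Docker imports the tarball.
tar -tzf "$root.tar.gz" | grep 'transmission-daemon'
```

If the archive instead listed entries like ./builddir/transmission/usr/..., the imported image would have the rootfs nested one directory too deep.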

Importing into Docker

If using a Dockerfile, the tarball can be imported with ADD. To import gentoo-transmission.tar.gz:

FILE Dockerfile
FROM scratch
ADD gentoo-transmission.tar.gz /
EXPOSE 9091/tcp
EXPOSE 51413/tcp
EXPOSE 51413/udp
USER transmission
CMD ["/usr/bin/transmission-daemon", "-f", "-g", "/config"]

The image can be manually imported with:

root #docker import gentoo-transmission.tar.gz

Tagging the Image

The imported image should be visible using docker images:

root #docker images
REPOSITORY           TAG       IMAGE ID       CREATED          SIZE
<none>               <none>    a5c15b539917   13 minutes ago   622MB

Using the image ID a5c15b539917, the tag gentoo-transmission can be applied to this image:

root #docker tag a5c15b539917 gentoo-transmission


Docker service crashes/fails to start (OpenRC)

After adding --storage-driver btrfs to DOCKER_OPTS and restarting the Docker service, Docker may crash; check this with rc-status.

If this is the case, try adding the btrfs USE flag to the Docker package and re-emerging it.

root #touch /etc/portage/package.use/docker
root #nano /etc/portage/package.use/docker
FILE /etc/portage/package.use/docker
app-containers/docker btrfs device-mapper

Install Docker with the new USE flags

root #emerge --update --deep --newuse app-containers/docker

Docker service restart

root #rc-service docker restart

Docker service fails because cgroup device not mounted (OpenRC)

On an error like:

unable to start container process: error during container init: error mounting "cgroup" to rootfs at "/sys/fs/cgroup": mount cgroup:/sys/fs/cgroup/openrc (via /proc/self/fd/6), flags: 0xf, data: openrc: invalid argument

The solution is to set the following:

FILE /etc/rc.conf
rc_cgroup_mode="unified"
and restart

root #rc-service cgroups restart

Docker service fails to start (systemd)

Some users have issues starting docker.service because of a device-mapper error. This can be solved by loading a different storage driver, e.g. the overlay graph driver instead of the devicemapper graph driver.

The overlay graph driver requires "Overlay filesystem support" in the kernel configuration:

KERNEL Configuring the kernel for Docker
File systems  --->
    <*> Overlay filesystem support

Add the following to /etc/portage/package.use/docker, then re-emerge Docker to solve this issue:

FILE /etc/portage/package.use/docker
app-containers/docker overlay -device-mapper

In case of an error saying Error starting daemon: Error initializing network controller: list bridge addresses failed: no available network, the docker0 network bridge may be missing. The upstream Docker issue tracker provides a bash script to create the docker0 network bridge manually.

Docker service runs but fails to start container (systemd)

If using systemd-232 or higher and receive an error related to cgroups:

user $docker run hello-world
container_linux.go:247: starting container process caused
docker: Error response from daemon: invalid header field value "oci runtime errotainer
init caused \\\"rootfs_linux.go:54: mounting \\\\\\\"cgroup\\\\\\\" to
ro38729f19a34501/merged\\\\\\\" at \\\\\\\"/sys/fs/cgroup\\\\\\\" caused \\\\\\\"n

Add the following line to the kernel boot parameters:

CODE Kernel Boot Parameter

Docker service runs but fails to start container (systemd)

If using systemd-232 or higher, and it throws this error:

user $docker run hello-world
applying cgroup configuration for process caused \"open /sys/fs/cgroup/docker/cpuset.cpus.effective: no such file or directory

Add the following line to the kernel boot parameters:

CODE Kernel Boot Parameter

If using systemd and receiving this error:

user $docker run hello-world
cgroup mountpoint does not exist

Run the following commands as root:

root #mkdir /sys/fs/cgroup/systemd
root #mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

This is not ideal as these commands must be run after each reboot, but it works.

Docker service fails because cgroup device not mounted (systemd)

By default, systemd uses a hybrid cgroup hierarchy combining cgroup and cgroup2. Docker still needs cgroup (v1) devices, so activate the cgroup-hybrid USE flag for systemd.

Activate USE flag for systemd

FILE /etc/portage/package.use/systemd
sys-apps/systemd cgroup-hybrid

Install systemd with the new USE flags

root #emerge --ask --oneshot sys-apps/systemd


If systemd-networkd is used for network management, additional options are needed for IP forwarding and/or IP masquerade.

FILE /etc/systemd/network/
[Network]
IPForward=yes
IPMasquerade=yes

These options are used instead of the sysctl settings for ip forwarding and/or masquerade.

In case Docker containers shut down with errors from systemd-udevd complaining that a persistent MAC address could not be assigned to the virtual interface(s), set the following:

FILE /etc/systemd/network/
[Link]
NamePolicy=kernel database onboard slot path

See also

  • LXC — a virtualization system making use of Linux's namespaces and cgroups.

External resources