LXD

LXD is a next generation system container manager. The core of LXD is a privileged daemon which exposes a REST API over a local Unix socket as well as over the network (if enabled).

LXD isn't a rewrite of LXC; in fact it is built on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers. It's basically an alternative to LXC's tools and distribution template system with the added features that come from being controllable over the network.

For those new to container technology, it can be useful to first read the LXC Virtualization Concepts article.

Key features of LXD include:


 * It prefers to launch unprivileged containers (secure by default).
 * A command-line client (lxc) interacts with the daemon.
 * Configuration is made intuitive and scriptable through cascading profiles.
 * Configuration changes are performed with the lxc config command (not config files).
 * Multiple hosts can be federated together (with a certificate-based trust system).
 * A federated setup means that containers can be launched on remote machines and live-migrated between hosts (using CRIU technology).
 * It is usable as a standalone hypervisor or integrated with OpenStack Nova.

Kernel configuration
It is a good idea to have most of the kernel flags required by LXC and Docker enabled.

To run systemd-based unprivileged containers, you will probably need to enable "Gentoo Linux -> Support for init systems, system and service managers -> systemd" (CONFIG_GENTOO_LINUX_INIT_SYSTEMD).
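
To check an already-running kernel for this option (assuming CONFIG_IKCONFIG_PROC is enabled so that /proc/config.gz exists):

user $ zgrep CONFIG_GENTOO_LINUX_INIT_SYSTEMD /proc/config.gz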

Authorize a non-privileged user
All members of the lxd group can use any of the available containers, irrespective of who created the container.

This will allow a non-root user to interact with the control socket, which is owned by the lxd UNIX group. For the group change to take effect, users need to log out and log back in again.
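
For example, to add the hypothetical user larry to the lxd group:

root # usermod --append --groups lxd larry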

Configure subuid/subgid
LXD requires that subuids and subgids for the root user are properly configured. An overview of the recommended subuid/subgid configuration is given in the Subuid subgid article.
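
As a sketch, a configuration matching the 1000000 offset used throughout this article might look like this in both /etc/subuid and /etc/subgid:

root:1000000:65536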

OpenRC
The lxd service is available and can be added to the default runlevel.
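
A sketch using the standard OpenRC tools:

root # rc-update add lxd default
root # rc-service lxd start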

systemd
The systemd unit file has also been installed.
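
Assuming the unit is named lxd.service, it can be enabled and started in one step:

root # systemctl enable --now lxd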

Configuration
The lxd daemon has a few available options related to debug output, but the defaults are adequate for this quick start.
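
The usual way to perform the initial setup is the interactive lxd init tool, which can also create a storage pool and a network bridge:

root # lxd init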

Configure the bridge
If a new bridge was created by lxd init, start it now.
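
For example, assuming the bridge is managed by netifrc under the name net.lxcbr0 (matching the bridge name used later in this article):

root # rc-service net.lxcbr0 start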

If desired, the bridge can be configured to come up automatically in the default runlevel.
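
Again assuming a netifrc-managed net.lxcbr0:

root # rc-update add net.lxcbr0 default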

Launch a container
Add an image repository at a remote called "images":
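
On recent LXD versions this remote may already be preconfigured; if not:

user $ lxc remote add images images.linuxcontainers.org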

This is an untrusted remote, which can be a source of images that have been published with the --public flag. Trusted remotes are also possible, and are used as container hosts and also to serve private images. This specific remote is not special to LXD; organizations may host their own images.

There are Gentoo images in the list, although they are not maintained by the Gentoo project. LXC users may recognize these images as the same ones available using the "download" template.
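
For example, to search the remote for Gentoo images and launch one as a container named gentoo-test (an example name used in the rest of this article):

user $ lxc image list images: gentoo
user $ lxc launch images:gentoo gentoo-test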

A shell can be run in the container's context.
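
Continuing with the example container:

user $ lxc exec gentoo-test -- /bin/bash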

While the container sees its processes as running as the root user, running ps on the host shows the processes running as UID 1000000. This is the advantage of unprivileged containers: root is only root in the container, and is nobody special on the host. It is possible to manipulate the subuid/subgid maps to allow containers access to host resources (for example, to write to the host's X socket), but this must be explicitly allowed.

Configuration
Configuration of containers is managed with the lxc config and lxc profile commands. The two commands provide largely the same capabilities, but lxc config acts on single containers while lxc profile configures a profile which can be used across multiple containers.

Importantly, containers can be launched with multiple profiles. The profiles have a cascading effect, so that a profile specified later in the list can add, remove, and override configuration values that were specified in an earlier profile. This allows for complex setups where groups of containers share some properties but not others.

The default profile is applied if no profile is specified on the command line. In the quick start, the lxc launch command omitted the profile, and so was equivalent to:
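
In terms of the example container from the quick start:

user $ lxc launch --profile default images:gentoo gentoo-test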

Notice that the default profile only specifies that a container should have a single NIC which is bridged onto an existing bridge lxcbr0. So, having a bridge with that name is not a hard requirement; it just happens to be named in the default profile.
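
The profile's contents can be inspected with:

user $ lxc profile show default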

Available configuration includes limits on memory and CPU cores, and also devices including NICs, bind mounts, and block/character device nodes.

Configuration is documented in the files installed under /usr/share/doc/lxd-<version>/ (substitute the correct version of course).

Example
Here a container is launched with the default profile and also a "cpusandbox" profile which imposes a limit of one CPU core. A directory on the host is also bind-mounted into the container using the container-specific lxc config device add command.

First, prepare a reusable profile.
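
A sketch of creating the cpusandbox profile with a one-core limit:

user $ lxc profile create cpusandbox
user $ lxc profile set cpusandbox limits.cpu 1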

lxc config device add requires a container name, so a container is initialized.
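
For example, initializing (but not starting) a container named gentoo-sandbox with both profiles:

user $ lxc init --profile default --profile cpusandbox images:gentoo gentoo-sandbox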

In this example a host directory is bind-mounted into the container. While this could be configured in a profile, it will instead be treated as a feature exclusive to this container.
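
A sketch, with example host and container paths:

user $ lxc config device add gentoo-sandbox shared disk source=/home/larry/shared path=/mnt/shared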

Set the directory to be owned by the container's root user (really UID 1000000 in the host).
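
Continuing with the example path from above:

root # chown 1000000:1000000 /home/larry/shared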

Multi-host setup
Two hosts on a network, alpha and beta, are running the lxd daemon. The goal is to run commands on alpha which can manipulate containers and images on either alpha or beta.

Setup
Configure the daemon on the remote to listen over HTTPS instead of the default local Unix socket.
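
On beta, for example, to listen on all addresses on the default port 8443:

root # lxc config set core.https_address "[::]:8443"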

Restart the daemon after this step, and be sure that the firewall will accept incoming connections as specified.

On beta configure a trust password, which is only used until certificates are exchanged.
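
For example (substitute a real password):

root # lxc config set core.trust_password some-password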

Add the beta remote to alpha.
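
Assuming beta is reachable at the example hostname beta.example.com:

user $ lxc remote add beta beta.example.com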

Result
It is now possible to perform actions on beta from alpha using the remote: syntax.
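
For example:

user $ lxc list beta:
user $ lxc launch images:gentoo beta:gentoo-on-beta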

To copy containers or images, the source ("from") host must have its daemon listening via HTTPS, not the Unix socket.

Virtual machines
LXD can use QEMU to run virtual machines. The default image server already hosts many virtual machine images, including pre-configured desktop images. Virtual machine images can be identified by the "TYPE" field; many pre-configured desktop images have "desktop" in their description.
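
For example, recent LXD releases support filtering the image list by type:

user $ lxc image list images: type=virtual-machine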

Running and operating virtual machines requires QEMU to be installed with the following USE flags enabled: spice, usbredir, virtfs.
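
A sketch of the corresponding /etc/portage/package.use entry:

# /etc/portage/package.use/qemu
app-emulation/qemu spice usbredir virtfs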

For graphical sessions to work in the virtual machine, i.e. logging in to a desktop, a SPICE client such as app-emulation/virt-viewer or net-misc/spice-gtk needs to be installed.

The following kernel options are needed: CONFIG_KVM, CONFIG_VHOST_VSOCK, and either CONFIG_KVM_INTEL or CONFIG_KVM_AMD depending on your CPU. Please see the QEMU article for more exact config options. You'll also need to enable virtualization in your BIOS, otherwise you'll get a "KVM: disabled by BIOS" error. Basically, make sure /dev/kvm exists before trying to launch a virtual machine, and set up QEMU properly so it works.
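
A quick sanity check:

user $ ls -l /dev/kvm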

Get a virtual machine image, and launch it:
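
For example, launching an example-named Gentoo VM:

user $ lxc launch images:gentoo gentoo-vm --vm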

Access the shell of your virtual machine:
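
Once the lxd-agent inside the guest has started:

user $ lxc exec gentoo-vm -- /bin/bash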

Access the desktop/GUI of your virtual machine:
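
This opens a SPICE client attached to the VM's graphical console:

user $ lxc console gentoo-vm --type=vga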

Allocating resources in a virtual machine
By default the virtual machine images are given very limited resources. You may need to use lxc config or external tools to give them more. Many resources can be configured similarly to containers, and there are a great many configuration options; please refer to the upstream documentation on adjusting resources for LXD instances.

CPU
Give the VM 8 cores:
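
Using the example gentoo-vm from above:

user $ lxc config set gentoo-vm limits.cpu 8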

Or to allow it to use 50 % of CPU capability:
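
Via the limits.cpu.allowance key:

user $ lxc config set gentoo-vm limits.cpu.allowance 50%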

Give it the lowest CPU priority:
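
The priority ranges from 0 (lowest) to 10:

user $ lxc config set gentoo-vm limits.cpu.priority 0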

Disk size
On the host, resize the image by growing the VM's root disk device.
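
A sketch for recent LXD versions; since the root device normally comes from the default profile, it is overridden at the instance level:

user $ lxc config device override gentoo-vm root size=20GiB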

Now move on to the virtual machine.

Memory
Give your VM 8 GB of memory:
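
Again with the example VM name:

user $ lxc config set gentoo-vm limits.memory 8GiB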

The limits.memory key also accepts a percentage of the host's total memory.

Network
Whether you want to receive an IP address via DHCP or configure it statically, the Handbook's network configuration steps work in a virtual machine. Get your interface:
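
Inside the virtual machine, for example:

root # ifconfig -a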

enp5s0: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 00:61:3e:a3:82:51  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Live migration
TODO

Automatic BTRFS integration
When LXD detects that /var/lib/lxd is on a Btrfs filesystem, it uses Btrfs' snapshot capabilities to ensure that images, containers, and snapshots share blocks as much as possible. No user action is required to enable this behavior.

When the container was launched in the Quick Start section above, LXD created subvolumes for the image and the container. The container filesystem is a copy-on-write snapshot of the image.
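
The subvolumes can be listed on the host, assuming LXD's storage lives under /var/lib/lxd:

root # btrfs subvolume list /var/lib/lxd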

Making a snapshot of the running container filesystem creates another copy-on-write snapshot.
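
For example, with the example container from the quick start:

user $ lxc snapshot gentoo-test snap0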

/dev/lxd/sock
A socket is bind-mounted into the container at /dev/lxd/sock. It serves no critical purpose, but is available to users as a means to query configuration information about the container.
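
For example, from inside a container with curl installed, a sketch of querying the guest API:

user $ curl --unix-socket /dev/lxd/sock http://lxd/1.0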

Containers freeze on 'lxc stop' with OpenRC (+ SysVinit)
If the container freezes during the lxc stop command while using OpenRC, try to turn it off directly with the --force flag:
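
Using the example container name:

user $ lxc stop --force gentoo-test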

If that works, edit the /etc/inittab file in the container, adding the following part:
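
A sketch of the addition to /etc/inittab; this entry makes init halt the system on the SIGPWR signal that LXD sends on stop:

# Shut down cleanly when LXD sends SIGPWR
pf:12345:powerwait:/sbin/halt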

Shut down the container with lxc stop --force; the next time it is booted, lxc stop should work as expected. Be careful when doing world updates not to blindly merge changes to /etc/inittab.

Running systemd based containers on OpenRC hosts
To support systemd guests, e.g. Ubuntu, Debian, or Arch Linux containers, on an OpenRC system, the host must be modified to support the systemd cgroup layout. It is recommended to use cgroupsv2, as most containers support it and OCI runtimes also expect cgroupsv2 to be present.

To enable cgroupsv2, modify /etc/rc.conf to set rc_cgroup_mode="unified" and to uncomment and populate rc_cgroup_controllers.
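
A sketch; the controller list is an example and can be trimmed or extended:

# /etc/rc.conf
rc_cgroup_mode="unified"
rc_cgroup_controllers="cpuset cpu io memory hugetlb pids"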

Older versions up to lxd-3.9 might additionally need a raw.lxc config entry in order to mount the host's cgroups automatically into the container:
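
A sketch only, using a hypothetical container name; lxc.mount.auto is a liblxc key, but consult the upstream issue below for the exact entry required:

user $ lxc config set mycontainer raw.lxc "lxc.mount.auto=cgroup:rw"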

For more details, take a look at the upstream issue on github.com.

Using LXD with self-spun ZFS pools
If you use ZFS with LXD and provide it the full pool path, then LXD will export the pool on shutdown for safety.

On startup, LXD will look for pools in a standard path and on the block devices (which can be displayed using lsblk).

If you create your own pools outside of LXD and those are not in the standard path or in block devices, you must import them explicitly before starting LXD if you want LXD to find them. If you do not, LXD will fail to start.
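
For example, for a hypothetical pool named mypool:

root # zpool import mypool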

(See https://discuss.linuxcontainers.org/t/lxd-exports-zfs-pools-at-shutdown-but-does-not-import-them-properly-at-startup/13031/2 for more information.)

External Resources

 * Just me and Opensource YouTube channel, with multiple LXD related guide videos.
 * User:Juippis/The_ultimate_testing_system_with_lxd, a real-life Gentoo usage example.