LXC

Introduction
LXC (Linux Containers) was initially created by IBM and relies on features available in the mainline Linux kernel. It uses cgroups and is similar in concept to Solaris Zones and FreeBSD Jails. Like those technologies, it aims to provide a higher level of segregation than a simple chroot.

Virtualization concepts
This section is a basic overview of how lxc fits into the virtualization world, the type of approach it uses, and the benefits and limitations thereof. If you are trying to figure out whether lxc is for you, or it is your first time setting up virtualization under Linux, then you should at least skim this section.

Roughly speaking there are two types of virtualization in use today, container-based virtualization and full virtualization.

Container-based Virtualization (lxc)
Container based virtualization is very fast and efficient. It's based on the premise that an OS kernel provides different views of the system to different running processes. This sort of segregation or compartmentalisation (sometimes called "thick sandboxing") can be useful for ensuring guaranteed access to hardware resources such as CPU and IO bandwidth, whilst maintaining security and efficiency.

On the unix family of operating systems, container based virtualization is said to have its roots in the 1982 release of the chroot tool, a container-based virtualization tool limited to the filesystem subsystem, written by Sun Microsystems founder Bill Joy and published as part of 4.2BSD.

Since this early tool, which has become a mainstay of the unix world, a large number of unix developers have worked to mature more powerful container based virtualization solutions. Some examples:
 * Solaris Zones
 * FreeBSD Jails
 * Linux VServer
 * OpenVZ

On Linux, historically the major two techniques have been Linux-VServer (open source / community driven) and OpenVZ (a free spinoff of a commercial product).

However, neither of these will be accepted into the mainline Linux kernel. Instead, Linus has opted for a more flexible, longer-term approach to achieving similar goals, using various new kernel features. lxc is the next-generation container-based virtualization solution that uses these new features.

Conceptually, lxc can be seen as a further development of the existing 'chroot' technique with extra dimensions added. Where 'chroot'-ing only offers isolation at the file system level, lxc offers complete logical isolation of a container from the host and all other containers. In fact, installing a new Gentoo container from scratch is pretty much the same as any normal Gentoo installation.

Some of the most notable differences are:
 * each container shares the kernel with the host (and other containers); no kernel needs to be present and/or mounted in the container's /boot directory;
 * devices and filesystems are (more or less) 'inherited' from the host, and need not be configured as they would be for a normal installation;
 * if the host uses the openrc system for bootstrapping, such configuration items will "automagically" be omitted (e.g. filesystem mounts from fstab).

The last point helps keep an lxc-based installation as simple as possible and as close as possible to a normal installation (no exceptions).

Full Virtualization (not lxc)
Full virtualization and paravirtualization solutions aim to simulate the underlying hardware. This type of solution, unlike lxc and other container-based solutions, usually allows you to run any operating system. Whilst this may be useful for the purposes of security and server consolidation, it is hugely inefficient compared to container-based solutions. The most popular solutions in this area right now are probably VMware, KVM/QEMU, Xen, and VirtualBox.

Limitations of lxc
With lxc, you can efficiently manage resource allocation in real time. In addition, you should be able to run different Linux distributions on the same host kernel in different containers. There may be teething issues with startup and shutdown 'run control' (rc) scripts, and these may need to be modified slightly to make some guests work; that said, maintainers of tools such as openrc are increasingly implementing lxc detection to ensure correct behaviour when their code runs within containers.

Unlike full virtualization solutions, lxc will not let you run other operating systems (such as proprietary operating systems, or other types of unix).

However, in theory there is no reason why you can't install a full or paravirtualization solution on the same kernel as your lxc host system and run full/paravirtualised guests alongside lxc guests at the same time.

Should you elect to do this, there are powerful abstracted virtualization management APIs under development, such as [libvirt] and [ganeti], that you may wish to check out.

In short:
 * One kernel
 * One operating system
 * Many instances
...but lxc can co-exist with other virtualization solutions if required.

MAJOR Temporary Problems with LXC - READ THIS
As documented over here, containers are currently not functional as security containers: if you have root in a container, you have root on the whole box.
 * root in a container has all capabilities.
   * Workaround: do not treat root privileges in the container any more lightly than on the host itself.
 * legacy UID/GID comparisons in many parts of the kernel code are dumb and will not respect containers.
   * Workaround: do not mount parts of external filesystems within a container, except read-only (ro).
   * Workaround: do not re-use UIDs/GIDs between the container and the host.
 * shutdown and halt run in a container will act on the host system.
   * Workaround: restrict or replace them in the container.

Containers are still useful for isolating applications, including their networking interfaces, and applying resource limits and accounting to those applications. As the above issues are resolved, they will also become functional security containers.

If you are designing a virtualisation solution for the long term and want a timeframe then, with appropriate disclaimers, and judging from various comments and experience, an extremely rough estimate might be 'circa end of 2012'. But no guarantees.

See also CAP_SYS_ADMIN: the new root.

lxc Components
lxc uses two new / lesser-known kernel features known as 'control groups' and 'POSIX file capabilities'. It also includes 'template scripts' to set up different guest environments.

Control Groups
Control Groups are a multi-hierarchy, multi-subsystem resource management / control framework for the Linux kernel.

In simpler language, what this means is that unlike the old chroot tool, which was limited to the file subsystem, control groups let you define a 'group' encompassing one or more processes (e.g. sshd, Apache) and then specify a variety of resource control and accounting options for that group against multiple subsystems, such as:
 * filesystem access
 * general device access
 * memory resources
 * network device resources
 * CPU bandwidth
 * block device IO bandwidth
 * various other aspects of a control group's view of the system

User-space access to these new kernel features is provided by a kernel-supplied filesystem known as 'cgroup'. It is typically mounted at /cgroup (or, on recent kernels, /sys/fs/cgroup) and provides files similar to /proc and /sys, representing the running environment and various kernel configuration options.
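As a minimal sketch of how this looks from user space (the /cgroup mount point and the 'mygroup' name are illustrative, and cpu.shares assumes the group CPU scheduler is enabled):

  # mount the cgroup filesystem, if not already mounted
  mount -t cgroup cgroup /cgroup
  # create a new control group by creating a directory
  mkdir /cgroup/mygroup
  # move the current shell into the new group
  echo $$ > /cgroup/mygroup/tasks
  # halve the group's relative CPU weight (the default is 1024)
  echo 512 > /cgroup/mygroup/cpu.shares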

POSIX File Capabilities
POSIX file capabilities are a way to allocate privileges to a process that allow for more specific security controls than the traditional 'root' vs. 'user' privilege separation on unix family operating systems.
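For example (a sketch; whether your ping binary is installed setuid or with file capabilities varies by distribution), a binary can be granted just the privilege it needs instead of full root:

  # allow ping to open raw sockets without being setuid root
  setcap cap_net_raw+ep /bin/ping
  # inspect the capabilities attached to the file
  getcap /bin/ping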

Host Setup
To get an lxc-capable host system working, you will need to complete the following steps.

Kernel with the appropriate LXC options enabled
If you are unfamiliar with recompiling kernels, see the copious documentation available on that subject in addition to the notes below.

Kernel options required
The complete list of relevant kernel options (tested on 3.2.1-gentoo-r2) is as follows. You can check your running kernel with the lxc-checkconfig script.

Freezer Support
Freezer support allows you to 'freeze' and 'thaw' a running guest, something like 'suspend' under VMware products. It appears to be under heavy development as of October 2010 (LXC list) but is apparently mostly functional. Please add additional notes on this page if you explore further.
 * CONFIG_CGROUP_FREEZER / "Freeze/thaw support" ('General Setup -> Control Group support -> Freezer cgroup subsystem')

Scheduling Options
Scheduling allows you to specify how much hardware access (CPU bandwidth, block device bandwidth, etc.) control groups have.
 * CONFIG_CGROUP_SCHED / "Cgroup sched" ('General Setup -> Control Group support -> Group CPU scheduler')
 * CONFIG_FAIR_GROUP_SCHED / "Group scheduling for SCHED_OTHER" ('General Setup -> Control Group support -> Group CPU scheduler -> Group scheduling for SCHED_OTHER')
 * CONFIG_BLK_CGROUP / "Block IO controller" ('General Setup -> Control Group support -> Block IO controller')
 * CONFIG_CFQ_GROUP_IOSCHED / "CFQ Group Scheduling support" ('Enable the block layer -> IO Schedulers -> CFQ I/O scheduler -> CFQ Group Scheduling support')

Resource Counters (Memory/Swap Accounting)
Resource counters are an 'accounting' feature: they allow you to measure resource utilisation in your guest. They are also an apparent prerequisite for limiting memory and swap utilisation.
 * CONFIG_RESOURCE_COUNTERS / "Resource counters" ('General Setup -> Control Group support -> Resource counters')

For memory resources:
 * CONFIG_CGROUP_MEM_RES_CTLR / "Cgroup memory controller" ('General Setup -> Control Group support -> Resource counters -> Memory Resource Controller for Control Groups')

If you also want to count swap utilisation, additionally select:
 * CONFIG_CGROUP_MEM_RES_CTLR_SWAP / "Memory Resource Controller Swap Extension (EXPERIMENTAL)" ('General Setup -> Control Group support -> Resource counters -> Memory Resource Controller for Control Groups -> Memory Resource Controller Swap Extension')

CPU Accounting
This allows you to measure the CPU utilisation of your control groups.
 * CONFIG_CGROUP_CPUACCT / "Cgroup cpu account" ('General Setup -> Control Group support -> Simple CPU accounting cgroup subsystem')

Networking Options
Ethernet bridging, veth, macvlan and vlan (802.1q) support are optional, but you probably want these.
 * CONFIG_BRIDGE / "802.1d Ethernet Bridging" ('Networking support -> Networking options -> 802.1d Ethernet Bridging')
 * CONFIG_VETH / "Veth pair device"
 * CONFIG_MACVLAN / "Macvlan"
 * CONFIG_VLAN_8021Q / "Vlan"

Reconfig Gentoo kernel
You can use the lxc-checkconfig tool to list kernel options that you need to enable in order to make your existing kernel configuration lxc-compatible (tested on 3.2.1-gentoo-r2). The process would be something like this:
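A sketch, assuming your kernel sources live in /usr/src/linux and that lxc-checkconfig honours the CONFIG environment variable for pointing at a config file:

  cd /usr/src/linux
  # check the existing configuration for missing lxc options
  CONFIG=.config lxc-checkconfig
  # enable any options reported as missing
  make menuconfig
  # rebuild the kernel and modules
  make && make modules_install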

Then copy your kernel to your boot partition, reconfigure your boot loader, and reboot.

lxc userspace utilities
Because lxc is currently very new, it is probably worth making sure that you have the absolute latest version. Therefore, before we begin, you should ensure that your portage tree is up to date with the following command.
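  emerge --sync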

Next, figure out which version of lxc is available with:
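  # preview the version that would be installed
  emerge -pv app-emulation/lxc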

Now go ahead and install with...
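  emerge app-emulation/lxc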

Mounted cgroup filesystem
The 'cgroup' filesystem provides user-space access to the required kernel control group features, and is required by the lxc userspace utilities.

Recent kernels introduced /sys/fs/cgroup as the default location.

openrc already mounts the 'cgroup' filesystem during bootstrap, so there is no need to mount it manually.

You can check this with:

  mount | grep cgroup

  cgroup_root on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,relatime,size=10240k,mode=755)
  openrc on /sys/fs/cgroup/openrc type cgroup (rw,nosuid,nodev,noexec,relatime,release_agent=/lib/rc/sh/cgroup-release-agent.sh,name=openrc)
  cpuset on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
  cpu on /sys/fs/cgroup/cpu type cgroup (rw,nosuid,nodev,noexec,relatime,cpu)
  cpuacct on /sys/fs/cgroup/cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct)
  memory on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
  devices on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
  freezer on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
  blkio on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
  perf_event on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)

Networking: Ethernet bridge
You probably want to set up an ethernet bridge. Note that this requires the CONFIG_BRIDGE and CONFIG_VETH symbols to be enabled in your kernel.

Installation
To check whether the tools for configuring and modifying a bridge are already installed, use the portage preview command...


  emerge -pv net-misc/bridge-utils

  These are the packages that would be merged, in order:

  Calculating dependencies... done!
  [ebuild N    ] net-misc/bridge-utils-1.4  32 kB

  Total: 1 package (1 new), Size of downloads: 32 kB

If you see this, the tools are not installed yet. Go ahead and install with...
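  emerge net-misc/bridge-utils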

Host Configuration
First, we need to add the bridge device to the /etc/conf.d/net file. As an example, bridge configuration with DHCP:
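A sketch (the interface names eth0 and br0 are assumptions; adjust to your hardware):

  # /etc/conf.d/net
  # enslave eth0 to the bridge, leave it otherwise unconfigured,
  # and let the bridge itself obtain an address via DHCP
  bridge_br0="eth0"
  config_eth0="null"
  config_br0="dhcp"
  rc_net_br0_need="net.eth0"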

More documentation can be found in /usr/share/doc/openrc-0.9.9.3/net.example.

Next, create the init script and start the interface as follows:
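  cd /etc/init.d
  ln -s net.lo net.br0
  /etc/init.d/net.br0 start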

Finally, to make sure the bridge is automatically set up on subsequent boots, run:
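  rc-update add net.br0 default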

To grant the guest access to the internet, you will need to use iptables. If it's not installed, first emerge it.
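  emerge net-firewall/iptables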

Allow IP forwarding, either persistently in your /etc/sysctl.conf or immediately with the following command:
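  echo 1 > /proc/sys/net/ipv4/ip_forward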

Add the iptables rules to grant masqueraded access to the internet. For example (substitute 'eth0' with your external-facing physical interface):
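  iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE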

This is equivalent to:

  EXTIF=eth0  # external facing physical interface
  IP=`ifconfig $EXTIF | grep 'inet addr' | cut -d ':' -f2 | cut -d ' ' -f1`
  iptables -t nat -A POSTROUTING -o $EXTIF -j SNAT --to-source $IP

Save the configuration and ensure it is restored at boot:
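  /etc/init.d/iptables save
  rc-update add iptables default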

Guest Configuration
Your guest network configuration resides in the guest's lxc.conf file. Documentation for this file is accessible with: man lxc.conf

If you have used a template script to create your guest, this will typically reside in the parent directory of the guest's root filesystem. However, using /etc/lxc/ to store guest configurations is also common.

Your guest configuration should include the following network-related lines:
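A sketch (br0 is the bridge created above; the hwaddr value is an arbitrary example):

  lxc.network.type = veth
  lxc.network.link = br0
  lxc.network.flags = up
  # uncomment and fill in once you know the interface's MAC address
  #lxc.network.hwaddr = 00:16:3e:12:34:56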

If, like me, you are using dhcp inside the container to get an IP address, then run it once as shown. LXC will generate a random MAC address for the interface; to keep your DHCP server from getting confused, you will want to use the same MAC address every time. So find out what it is, then uncomment the 'lxc.network.hwaddr' line and specify it there.

Template scripts
A number of 'template scripts' are distributed with the lxc package. These scripts assist with generating various guest environments.

Template scripts live in /usr/lib(lib64)/lxc/templates/ but should be executed via the lxc-create tool as follows:
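A sketch; 'guestname' and 'templatename' are placeholders for your chosen guest name and template:

  lxc-create -n guestname -t templatename -f /etc/lxc/guestname/config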

The rootfs of the linux container is stored in /etc/lxc/guestname/.

Configuration files (the -f configuration-file option) are usually used to specify the network configuration for the guest. For example:

  lxc.network.type=veth
  lxc.network.link=br0
  lxc.network.flags=up

More documentation about linux container networking can be found in this article.

The template scripts included in app-emulation/lxc-0.8.0-rc1-r1 are:


 * lxc-altlinux assists with setting up ALT Linux guests.
   * Fixme: this template script cannot be executed on Gentoo Linux directly, because it invokes the "apt-get" command to download the ALT Linux guest.


 * lxc-archlinux assists with setting up Archlinux guests (see wiki.archlinux.org and Archlinux Chroot in Gentoo). Note that in order to use lxc-archlinux, you must have the pacman package manager available on the host.
   * Fixme: it seems that pacman-4.0.1 cannot work correctly on Gentoo Linux.


 * lxc-busybox assists with setting up minimal guests using Busybox (see busybox.net).
 * lxc-debian assists with setting up Debian guests (see debian.org). Note that in order to use lxc-debian, you must install the dev-util/debootstrap package.


 * lxc-fedora assists with setting up Fedora guests (see fedoraproject.org). Note that in order to use lxc-fedora, you will need to install the febootstrap tool from http://people.redhat.com/~rjones/febootstrap/. An ebuild has been created but is not yet in portage.


 * lxc-opensuse assists with setting up openSUSE guests.
   * Fixme: the zypper command-line package manager tool is missing from gentoo portage.


 * lxc-sshd assists with setting up minimal sshd guests (see openssh.com).
   * Fixme: libdir is empty in the lxc-sshd template, which will cause a mount error when starting the sshd guest.


 * lxc-ubuntu assists with setting up Ubuntu guests (see ubuntu.com). Note that in order to use lxc-ubuntu, you must install the dev-util/debootstrap package. Usage is sketched below.
   * It takes a very long time to create an Ubuntu guest, so please be patient.
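As a sketch of lxc-ubuntu usage ('ubuntu-guest' and the config path are placeholders):

  lxc-create -n ubuntu-guest -t ubuntu -f /etc/lxc/ubuntu-guest/config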

The following commands enable the template guest to be started via init scripts.
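A sketch, substituting your guest's name for 'guestname':

  cd /etc/init.d
  ln -s lxc lxc.guestname
  rc-update add lxc.guestname default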

Automatic setup: lxc-gentoo
The lxc-gentoo tool can download, extract and configure a gentoo guest for you. It fixes a lot of little issues that you may otherwise find tedious and are not yet outlined in the manual guest configuration section, below.

You can download it here: lxc-gentoo page

Additional developers, bug fixes, comments, etc. are welcome.

Manual Guest Configuration
LXC allows a configuration file for each guest container, specifying name, IP address, etc. As mentioned above, you can use the lxc-gentoo script to create a gentoo guest:
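A sketch, assuming the script's 'create' subcommand, which prompts for the guest parameters:

  ./lxc-gentoo create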

After creating the gentoo guest, you can manage the guest as follows:
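A sketch ('guestname' is a placeholder):

  # start the guest in the background
  lxc-start -n guestname -d
  # attach to its console
  lxc-console -n guestname
  # stop it again
  lxc-stop -n guestname
  # allow the guest to be managed by the init system
  ln -s /etc/init.d/lxc /etc/init.d/lxc.guestname
  rc-update add lxc.guestname default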

The final command enables the guest to be started via init scripts.

Option: Shared /usr/portage/distfiles
If you want to share distfiles from your host, you can set the PORTAGE_RO_DISTDIRS variable to a space-separated list of directories to search. Portage will create a symlink in DISTDIR to the first matching file found in PORTAGE_RO_DISTDIRS if the file does not already exist in DISTDIR.

In the latest lxc-gentoo script (comment=269ea1735503cd932421d7c63d729f849279690d), the fstab is hardcoded. Therefore, if you want to mount the shared distfiles, you should add an lxc.mount option to the utsname.conf file. You can find more information on flameeyes's blog.
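As a rough sketch of the idea (all paths here are assumptions; adjust them to your layout):

  # in the guest configuration: bind-mount the host's distfiles read-only
  lxc.mount.entry = /usr/portage/distfiles /var/lxc/guestname/rootfs/var/portage-distfiles none ro,bind 0 0

  # in the guest's /etc/portage/make.conf: let portage fall back to the read-only copy
  PORTAGE_RO_DISTDIRS="/var/portage-distfiles"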

Busybox
lxc contains a minimal template script for busybox. Busybox is basically a base system oriented towards embedded use, where many base utilities exist in an optimized form within one stripped binary to save on memory. Busybox is installed as part of the base gentoo system, so the script works right away. Example:
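A sketch ('busybox-guest' is a placeholder name):

  lxc-create -n busybox-guest -t busybox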

Debian
You will need to install the dev-util/debootstrap package.

You can then use the lxc supplied debian template script to download all required files, generate a configuration file and a root filesystem for your guest.
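For example (a sketch; the guest name and config path are placeholders):

  lxc-create -n debian-guest -t debian -f /etc/lxc/debian-guest/config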

sshd
lxc contains a minimal template script for sshd guests. You can create an sshd guest with:
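A sketch ('sshd-guest' is a placeholder name):

  lxc-create -n sshd-guest -t sshd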

ubuntu
lxc contains a minimal template script for Ubuntu guests (see ubuntu.com). Note that in order to use lxc-ubuntu, you must install the dev-util/debootstrap package.
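For example (a sketch; the guest name and config path are placeholders):

  lxc-create -n ubuntu-guest -t ubuntu -f /etc/lxc/ubuntu-guest/config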

Manual use
To start the guest, simply run:
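  # -d detaches the guest from your terminal
  lxc-start -n guestname -d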

To stop the guest, run:
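  lxc-stop -n guestname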

If for any reason the guest fails to start, see the error messages by running:
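A sketch, assuming the log location listed in the filesystem layout section below:

  tail /var/log/lxc/guestname.log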

Note that the log file is only created when the guest is started from the /etc/init.d/lxc.guestname script.

Please be aware that when you have daemonized the booting process (-d), you will not get any output on screen. This might happen when you conveniently use an alias which daemonizes by default, and have forgotten about it. You may be puzzled later by this if there is a problem while booting a new container that has not been configured properly (e.g. network).

Use from gentoo init system
If you have made a symbolic link in /etc/init.d for each guest container you have created, then instead of using the LXC userland tools directly you can start and stop a guest as follows:
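  /etc/init.d/lxc.guestname start
  /etc/init.d/lxc.guestname stop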

Of course, the use of such scripts is primarily intended for booting and stopping the system. To add a guest to the rc chain, run:
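  rc-update add lxc.guestname default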

To enter an (already started) guest directly from the host machine, see the lxc-console section below.

lxc-console
Using lxc-console provides console access to the guest. To use it, type:
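  lxc-console -n guestname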

If you get a message saying lxc-console: console denied by guestname, then you need to add the following to your container config:

  lxc.tty = 1

To exit the console, use:
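  # press Ctrl-a, then q
  Ctrl-a q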

Note that unless you log out inside the guest, you will remain logged on, so the next time you run lxc-console, you will return to the same session.

Usage of lxc-console should be restricted to root. It is primarily a tool for system administrators (root) to enter a newly created container, e.g. when the network connection is not yet properly configured. Using multiple instances of lxc-console on distinct guests works fine, but starting a second instance for a guest that is already governed by another lxc-console session leads to redirection of keyboard input and terminal output. It is best to avoid relying on lxc-console where possible. (Perhaps the lxc developers should enhance the tool so that only singleton use per guest is possible. ;-)

Accessing the container with sshd
A common technique to allow users direct access into a system container is to run a separate sshd inside the container. Users then connect to that sshd directly. In this way, you can treat the container just like you treat a full virtual machine where you grant external access. If you give the container a routable address, then users can reach it without using ssh tunneling.

If you set up the container with a virtual ethernet interface connected to a bridge on the host, then it can have its own ethernet address on the LAN, and you should be able to connect directly to it without logically involving the host (the host will transparently relay all traffic destined for the container, without the need for any special considerations). You should be able to simply ssh to the container's address.

Filesystem layout
For lxc-0.8.0-rc1-r1, the /etc/init.d/lxc init script expects guest configuration to be at /etc/lxc/<guestname>.conf.

Some of the lxc tools apparently assume that /etc/lxc/<guestname>/ exists. Probably, then, that is the 'right' place to store your lxc.conf files (guest configuration files) and any extra guest configuration information.

However, you should keep the guests' root filesystems out of /etc since it's not a path that's supposed to store large volumes of binary data.

The lxc templates will use the following locations:
 * /etc/lxc/<guestname>/config = guest configuration file
 * /etc/lxc/<guestname>/fstab = optional guest fstab file
 * /var/lxc/<guestname>/rootfs = root filesystem image
 * /var/log/lxc/ = lxc-start logfile location