LXC

LXC (Linux Containers) is a virtualization system making use of the kernel's "cgroups" feature. It is conceptually similar to Solaris Zones and FreeBSD Jails, providing more segregation than a simple chroot without incurring the penalties of a full virtualization solution.

Virtualization concepts
This section is a basic overview of how LXC fits into the virtualization world, the type of approach it uses, and the benefits and limitations thereof.

If you are trying to figure out whether LXC is for you, or this is your first time setting up virtualization under Linux, then you should at least skim this section.

Roughly speaking there are two types of virtualization in use today, container-based virtualization and full virtualization.

Container-based virtualization (LXC)
Container based virtualization is very fast and efficient. It's based on the premise that an OS kernel provides different views of the system to different running processes. This sort of segregation or compartmentalisation (sometimes called "thick sandboxing") can be useful for ensuring guaranteed access to hardware resources such as CPU and IO bandwidth, whilst maintaining security and efficiency.

On the unix family of operating systems, container-based virtualization is said to have its roots in the 1982 release of the chroot tool, a filesystem-level isolation tool written by Sun Microsystems founder Bill Joy and published as part of 4.2BSD.

Since this early tool, which has become a mainstay of the unix world, a large number of unix developers have worked to mature more powerful container based virtualization solutions. Some examples:


 * Solaris Zones
 * FreeBSD Jails
 * Linux VServer
 * OpenVZ

On Linux, historically the major two techniques have been Linux-VServer (open source / community driven) and OpenVZ (a free spinoff of a commercial product).

However, neither of these was accepted into the mainline Linux kernel. Instead, Linus opted for a more flexible, longer-term approach to achieving similar goals, using various new kernel features. LXC is the next-generation container-based virtualization solution that uses these new features.

Conceptually, LXC can be seen as a further development of the existing 'chroot' technique with extra dimensions added. Where 'chroot'-ing only offers isolation at the file system level, LXC offers complete logical isolation from a container to the host and all other containers. In fact, installing a new Gentoo container from scratch is pretty much the same as for any normal Gentoo installation.

Some of the most notable differences include:


 * Each container shares the kernel with the host (and other containers). No kernel needs to be present and/or mounted in the container's /boot directory;
 * Devices and filesystems are (more or less) 'inherited' from the host, and need not be configured as they would be for a normal installation;
 * If the host is using the OpenRC system for bootstrapping, such configuration items will "automagically" be omitted (e.g. filesystem mounts from fstab).

The last point is important to keep LXC-based installations as simple as possible and as close as possible to normal installations (no exceptions).

Full virtualization (not LXC)
Full virtualization and paravirtualization solutions aim to simulate the underlying hardware. This type of solution, unlike LXC and other container-based solutions, usually allows you to run any operating system. Whilst this may be useful for the purposes of security and server consolidation, it is hugely inefficient compared to container-based solutions. The most popular solutions in this area right now are probably VMware, KVM, Xen and VirtualBox.

Limitations of LXC
With LXC, you can efficiently manage resource allocation in real time. In addition, you should be able to run different Linux distributions on the same host kernel in different containers (though there may be teething issues with startup and shutdown 'run control' (rc) scripts, and these may need to be modified slightly to make some guests work. That said, maintainers of tools such as OpenRC are increasingly implementing LXC detection to ensure correct behavior when their code runs within containers.)

Unlike full virtualization solutions, LXC will not let you run other operating systems (such as proprietary operating systems, or other types of unix).

However, in theory there is no reason why you can't install a full or paravirtualization solution on the same kernel as your LXC host system and run both full/paravirtualised guests in addition to LXC guests at the same time.

Should you elect to do this, there are powerful abstracted virtualization management APIs under development, such as libvirt and Ganeti, that you may wish to check out.

In short:


 * One kernel
 * One operating system
 * Many instances

... but can co-exist with other virtualization solutions if required.

MAJOR temporary problems with LXC - READ THIS!
As documented elsewhere (the original link is obsolete), containers are not currently functional as security containers: if you have root in a container, you effectively have root on the whole box.


 * root in a container has all capabilities
   * Workaround: do not treat root privileges in the container any more lightly than on the host itself. A better solution is to use unprivileged containers (see below).
 * legacy UID/GID comparisons in many parts of the kernel code are dumb and will not respect containers
   * Workaround: do not mount parts of external filesystems within a container, except read-only (ro), and do not re-use UIDs/GIDs between the container and the host.
 * shutdown and halt issued inside a container will act on the host system
   * Workaround: restrict or replace them in the container.
 * Don't do both (1) mount proc in a guest that you don't trust, and (2) have CONFIG_MAGIC_SYSRQ 'Magic SysRq Key' enabled in your kernel (which creates /proc/sysrq-trigger), as this can be abused for denial of service
   * Workaround: turn off the MAGIC_SYSRQ option in the kernel config.

Containers are still useful for isolating applications, including their networking interfaces, and applying resource limits and accounting to those applications. As the above issues are resolved, they will also become functional security containers.

If you are designing a virtualisation solution for the long term and want a timeframe, then with appropriate disclaimers, judging from various comments and experience, an extremely rough timeframe might be 'circa end of 2012'. But no guarantees.

See also CAP_SYS_ADMIN: the new root.

LXC components
LXC uses two new and lesser known kernel features known as 'control groups' and 'POSIX file capabilities'. It also includes 'template scripts' to setup different guest environments.

Control groups
Control Groups are a multi-hierarchy, multi-subsystem resource management / control framework for the Linux kernel.

In simpler language, what this means is that, unlike the old chroot tool which was limited to the file subsystem, control groups let you define a 'group' encompassing one or more processes (e.g. sshd, Apache) and then specify a variety of resource control and accounting options for that control group against multiple subsystems, such as:


 * Filesystem access
 * General device access
 * Memory resources
 * Network device resources
 * CPU bandwidth
 * Block device IO bandwidth
 * Various other aspects of a control group's view of the system

The user-space access to these new kernel features is a kernel-provided filesystem, known as 'cgroup'. It is typically mounted at /cgroup and provides files similar to /proc and /sys representing the running environment and various kernel configuration options.

POSIX file capabilities
POSIX file capabilities are a way to allocate privileges to a process that allow for more specific security controls than the traditional 'root' vs. 'user' privilege separation on unix family operating systems.

Host setup
To get an LXC-capable host system working you will need to complete the following steps.

Kernel with the appropriate LXC options enabled
If you are unfamiliar with recompiling kernels, see the copious documentation available on that subject in addition to the notes below.

Kernel options required
The ebuild checks for the most important kernel options required to set up an LXC host. This is, however, not a fatal check, so you have to make sure the options are correctly enabled manually. The package also comes with an upstream-provided lxc-checkconfig script that reports on the relevant options.
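For example, run the script against the running kernel's configuration:

user $ lxc-checkconfig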

Write down the missing kernel options shown in the output, or leave it open and switch to a new terminal. If it complains about missing "File capabilities", ignore it; the feature is now enabled by default and the setting has been removed.

Search for each kernel CONFIG feature listed in the output of lxc-checkconfig script using the search hot-key, enable them one by one, save the new configuration, and quit. For more information on kernel configuration visit the kernel configuration article.

Once finished, build the kernel:
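A minimal sketch of a manual build and install, assuming the usual /usr/src/linux workflow (adapt to your own kernel build method, e.g. genkernel):

root # cd /usr/src/linux
root # make && make modules_install
root # make install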

Freezer support
Freezer support allows you to 'freeze' and 'thaw' a running guest, something like 'suspend' under VMware products. It appears to be under heavy development as of October 2010 (LXC list) but is apparently mostly functional. Please add additional notes on this page if you explore further.

CONFIG_CGROUP_FREEZER / "Freeze/thaw support" ('General Setup -> Control Group support -> Freezer cgroup subsystem')

Scheduling options
Scheduling allows you to specify how much hardware access (CPU bandwidth, block device bandwidth, etc.) control groups have.

CONFIG_CGROUP_SCHED / "Cgroup sched" ('General Setup -> Control Group support -> Group CPU scheduler')
CONFIG_FAIR_GROUP_SCHED / "Group scheduling for SCHED_OTHER" ('General Setup -> Control Group support -> Group CPU scheduler -> Group scheduling for SCHED_OTHER')
CONFIG_BLK_CGROUP / "Block IO controller" ('General Setup -> Control Group support -> Block IO controller')
CONFIG_CFQ_GROUP_IOSCHED / "CFQ Group Scheduling support" ('Enable the block layer -> IO Schedulers -> CFQ I/O scheduler -> CFQ Group Scheduling support')

Memory/swap accounting
To measure resource utilization in your guest...

CONFIG_CGROUP_MEM_RES_CTLR / "Cgroup memory controller" ('General Setup -> Control Group support -> Resource counters -> Memory Resource Controller for Control Groups')

If you want to also count swap utilization, also select...

CONFIG_CGROUP_MEM_RES_CTLR_SWAP / "Memory Resource Controller Swap Extension(EXPERIMENTAL)" ('General Setup -> Control Group support -> Resource counters -> Memory Resource Controller for Control Groups -> Memory Resource Controller Swap Extension')

Resource counters were recently removed from the kernel and replaced with page counters which are now automatically selected when the above is selected. Ignore any userland warnings about missing resource counter config.

CPU accounting
This allows you to measure the CPU utilization of your control groups.

CONFIG_CGROUP_CPUACCT / "Cgroup cpu account" ('General Setup -> Control Group support -> Simple CPU accounting cgroup subsystem')

Networking options
Ethernet bridging, veth, macvlan and vlan (802.1q) support are optional, but you probably want at least one of these:

CONFIG_BRIDGE / "802.1d Ethernet Bridging" ('Networking support -> Networking options -> 802.1d Ethernet Bridging')
CONFIG_VETH / "Veth pair device"
CONFIG_MACVLAN / "Macvlan"
CONFIG_VLAN_8021Q / "Vlan"

Further details about LXC networking options are available on Flameeyes's Weblog

LXC userspace utilities
Due to LXC's still unstable nature, Gentoo provides ebuilds only for the most recent version available; therefore, make sure to update the Portage tree before proceeding:
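A minimal sketch (the package category may be app-emulation/lxc on older trees or app-containers/lxc on newer ones; adjust accordingly):

root # emerge --sync
root # emerge --ask app-emulation/lxc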

Mounted cgroup filesystem
The 'cgroup' filesystem provides user-space access to the required kernel control group features, and is required by the LXC userspace utilities. Up to kernel 3.1 the filesystem's mountpoint wasn't well defined; nowadays there is a well-defined location under which it is mounted (split per controller). Recent OpenRC versions already mount it during boot, and the ebuild already depends on a new enough version.

Check it using:
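For instance, a quick way to verify that the cgroup hierarchies are mounted:

user $ mount | grep cgroup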

Network configuration
The network section defines how the network is virtualized in the container. The network virtualization acts at layer two. In order to use the network virtualization, parameters must be specified to define the network interfaces of the container. Several virtual interfaces can be assigned and used in a container even if the system has only one physical network interface.

Depending on the network type option set in the container's configuration file, there are six types of network virtualization that can be used for the container:


 * none: will cause the container to share the host's network namespace. This means the host network devices are usable in the container. It also means that if both the container and host have upstart as init, 'halt' in a container (for instance) will shut down the host. So, it is not a safe option at all.
 * empty: will create only the loopback interface. This means the container has no network connection to the outside world.
 * phys: an already existing physical interface on the host (specified in the configuration) is assigned to the container. This means you need a spare network device for the container to use.
 * veth: a virtual Ethernet pair device is created, with one side assigned to the container and the other side attached to a bridge (see Network bridge) specified in the configuration. If the bridge is not specified, then the veth pair device will be created but not attached to any bridge. Otherwise, the bridge has to be created on the system before starting the container; lxc won't handle any configuration outside of the container. This is the most common option for home use: an isolated network inside the container with a connection to the outside world.
 * vlan: a vlan interface is linked with the specified host interface and assigned to the container. The vlan identifier is specified with a separate option. VLANs are usually useful to split big networks into parts (subnetworks) isolated from each other.
 * macvlan: a macvlan interface is linked with the specified host interface and assigned to the container. A mode option specifies how the macvlan communicates with other macvlans on the same upper device. The accepted modes are: private, where the device never communicates with any other device on the same upper_dev (the default); vepa, the Virtual Ethernet Port Aggregator (VEPA) mode, which assumes that the adjacent bridge returns all frames where both source and destination are local to the macvlan port, i.e. the bridge is set up as a reflective relay (broadcast frames coming in from the upper_dev get flooded to all macvlan interfaces in VEPA mode, and local frames are not delivered locally); and bridge, which provides the behavior of a simple bridge between different macvlan interfaces on the same port (frames from one interface to another are delivered directly and not sent out externally; broadcast frames get flooded to all other bridge ports and to the external interface, but when they come back from a reflective relay they are not delivered again; since all the MAC addresses are known, the macvlan bridge mode does not require learning or STP like the bridge module does). For more information about macvlan modes, with clear pictures, see the Virtual switching technologies and Linux bridge presentation. Also note that the macvlan option usually needs an external gateway or switch and will not communicate with the host's internally configured gateway. The container will be seen outside the host as another network interface with a separate MAC address. So, if you assign the container to the external WAN interface of your Internet provider, the provider will see it as an interface with a different MAC address, and you won't get Internet access inside the container if you only paid for one MAC address. This option is therefore mostly useful for big Internet servers with spare external WAN addresses and a separate gateway.

Host configuration for VLANs inside the bridge which are connected to container's virtual Ethernet pair device
Let's assume that we have a host with the enp2s0 device connected to the provider's LAN network, which connects to the Internet (WAN) through it using the ppp0 interface. We also have our private LAN network on the enp3s6 interface side. Since we don't have many spare network interfaces and we also want some network isolation for the container, let's create another VLAN interface (enp3s6.1) on the host, assigned to our private LAN network's interface enp3s6. Then we put it inside the bridge br0.1 as a port.

Then let's create the bridge interface, restart the enp3s6 interface to get enp3s6.1, and add the bridge interface to the startup configuration:
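A minimal sketch of the equivalent manual setup with iproute2 (interface and bridge names taken from this example); for a persistent Gentoo setup the same can be expressed in /etc/conf.d/net together with the corresponding net.* init scripts:

root # ip link add link enp3s6 name enp3s6.1 type vlan id 1
root # ip link add name br0.1 type bridge
root # ip link set enp3s6.1 master br0.1
root # ip link set enp3s6.1 up
root # ip link set br0.1 up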

You will end up with the enp3s6.1 VLAN interface present on the host and enslaved to the br0.1 bridge.

Let's now start our container with veth assigned to our bridge br0.1. You'll get another network interface on the host's side (vethB004H3 in this example).

Both our host's enp3s6.1 VLAN interface and the container's virtual Ethernet pair device vethB004H3 are now ports of our bridge br0.1, which can be verified with:
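For instance, using the bridge-utils tool (net-misc/bridge-utils):

user $ brctl show br0.1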

Let's now give the container Internet access. We'll use Nftables for that. Since we don't want the container to access our private LAN or our provider's LAN, we'll only give it access to the ppp0 WAN device. Let's assume you already have a configuration on your host similar to the Nftables/Examples article. Then you'll have to add several rules in the appropriate places.
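A hedged sketch of the kind of rules involved (interface names from this example): NAT on ppp0 plus forwarding only between br0.1 and ppp0. How these rules are integrated depends on your existing ruleset:

table ip nat {
        chain postrouting {
                type nat hook postrouting priority 100; policy accept;
                oifname "ppp0" masquerade
        }
}
table inet filter {
        chain forward {
                type filter hook forward priority 0; policy drop;
                iifname "br0.1" oifname "ppp0" accept
                iifname "ppp0" oifname "br0.1" ct state established,related accept
        }
}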

This will give you Internet access inside container. You can later create more isolated containers inside each separate bridge br0.X or connect several container's interfaces inside one br0.Y.

Guest configuration for a virtual Ethernet pair device connected by bridge
Your guest network configuration resides in the guest's configuration file. To auto-generate it we will use distribution-specific template scripts, but the generation needs a base network configuration, which we will provide in a separate base config file. Documentation for both of these files is accessible with:
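Likely via the LXC man pages (the exact names depend on the LXC version; lxc.container.conf exists in LXC 1.0 and later, while older versions document everything in lxc.conf):

user $ man lxc.container.conf
user $ man lxc.conf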

Your guest configuration should include the following network-related lines:
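A minimal sketch using the legacy lxc.network.* keys referenced in this article (the bridge name follows the host example above; the hwaddr line is discussed below):

lxc.network.type = veth
lxc.network.link = br0.1
lxc.network.flags = up
#lxc.network.hwaddr = 00:16:3e:xx:xx:xx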

If you are using DHCP inside the container to get an IP address, then run it once as shown. LXC will generate a random MAC address for the interface. To keep your DHCP server from getting confused, you will want to use that MAC address all the time. So find out what it is, and then uncomment the 'lxc.network.hwaddr' line and specify it there.

Adjusting guest config of the container after using template script
If the network inside the container does not work after using a template script, you can always adjust your guest configuration on the host by editing the guest's config file. For example:
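For instance (the path assumes the /etc/lxc/<guestname>/config layout described later in this article):

root # nano /etc/lxc/guestname/config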

You can also always change the network configuration inside the container by adjusting its configuration files (after logging into the container), for example:
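For a Gentoo guest this would typically be the netifrc configuration (an illustrative path; other distributions use their own network configuration files):

(container) # nano /etc/conf.d/net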

Template scripts
A number of 'template scripts' are distributed with the LXC package. These scripts assist with generating various guest environments.

Template scripts are installed with the LXC package but should not be run directly; execute them via the lxc-create tool as follows:
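The general invocation looks like this (names are placeholders; -n sets the container name, -t the template, and -f an optional configuration file):

root # lxc-create -n guestname -t template-name -f /path/to/config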

The rootfs of the Linux container is stored under the container's directory (see the filesystem layout section below).

Configuration files (passed to lxc-create with its configuration file option) are usually used to specify the network settings for the initial guest configuration, as described in the network configuration section above.

Using download as the template name displays a list of available guest environments to download. See the LXC pre-built containers section below.

Automatic setup: LXC standard Gentoo template script
This is probably the recommended way now, since the latest internal Gentoo template script is based on the lxc-gentoo script and adds some fixes to it, such as:


 * Out-of-the-box lxc-create compatibility
 * Vanilla Gentoo config
 * Ready to use cache (shared Portage, distfiles, eix cache)

See the template script itself for additional info, and also consult the generated configuration file after using this template script.

Let's use LXC's template script to create a Gentoo guest:
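A minimal sketch (the container name is illustrative; pass a base config with -f if you prepared one as described above):

root # lxc-create -n gentoo-guest -t gentoo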

After creating the guest you can manage it as usual.

Automatic setup: lxc-gentoo
The lxc-gentoo script can download, extract and configure a Gentoo guest for you, including cryptographic validation of sources and support for arbitrary architectures/variants via QEMU.

You can download it here: lxc-gentoo page

Additional developers, bug fixes, comments, etc. are welcome.

Manual guest configuration
LXC allows a configuration file for each guest container, specifying name, IP address, etc. The lxc-gentoo script can be used to create a Gentoo guest:
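A rough sketch, assuming the script has been downloaded and made executable; it prompts interactively for the guest parameters (consult the script's own documentation for the exact invocation):

user $ ./lxc-gentoo create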

After creating the guest manage it as usual.

Option: Shared /usr/portage/distfiles
If you want to share distfiles from the host set the PORTAGE_RO_DISTDIRS variable to a space-separated list of directories to search. Portage will create a symlink in DISTDIR to the first matching file found in PORTAGE_RO_DISTDIRS if the file does not already exist in DISTDIR.

In the latest lxc-gentoo script (commit 269ea1735503cd932421d7c63d729f849279690d), the fstab is hardcoded. Therefore, if you want to mount the shared distfiles, you should add an lxc.mount option (or lxc.mount.entry lines) to the guest configuration file.
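A hedged sketch (all paths are illustrative): bind-mount the host's distfiles read-only into the guest and point PORTAGE_RO_DISTDIRS at that location inside the guest.

# in the guest's LXC configuration file (the target directory must exist inside the guest)
lxc.mount.entry = /usr/portage/distfiles usr/portage/distfiles-host none bind,ro 0 0

# in the guest's /etc/portage/make.conf
PORTAGE_RO_DISTDIRS="/usr/portage/distfiles-host"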

You can get more information from flameeyes's blog.

Alt Linux

 * Fixme: this template script cannot be executed on Gentoo Linux directly, because it relies on a command that is not available on Gentoo when downloading the Alt Linux guest.

Arch Linux
lxc-archlinux template assists with setting up Archlinux guests (see Archlinux Chroot in Gentoo). Note that in order to use lxc-archlinux, you must:

Fixme: It seems that pacman-4.0.1 cannot work correctly on Gentoo Linux.

You need to edit pacman configuration:

You will also need to install these tools: https://projects.archlinux.org/arch-install-scripts.git

Fixme: The archlinux template does not create a working container, giving an error about a file it cannot find. Chrooting into the container's rootfs directory and issuing the appropriate command solves this issue. Also, you need CONFIG_DEVTMPFS activated in the kernel configuration if you configure the container as stated in the Arch Linux wiki.

Edit: This is working perfectly with

Busybox
LXC contains a minimal template script for busybox. Busybox is basically a base system oriented towards embedded use, where many base utilities exist in an optimized form within one stripped binary to save on memory. Busybox is installed as part of the base Gentoo system, so the script works right away. Example:
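For instance (the container name is illustrative):

root # lxc-create -n busybox-guest -t busybox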

Debian
You will need to install the debootstrap package:
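A sketch (the package is assumed to be dev-util/debootstrap in the Gentoo tree):

root # emerge --ask dev-util/debootstrap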

You can then use the LXC supplied Debian template script to download all required files, generate a configuration file and a root filesystem for your guest.
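For example (the container name is illustrative):

root # lxc-create -n debian-guest -t debian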

Fedora
lxc-fedora template assists with setting up Fedora guests. Note that in order to use lxc-fedora, you must:

You will also need to install the febootstrap tool from http://people.redhat.com/~rjones/febootstrap/. An ebuild has been created but is not yet in the Portage tree (see the related bug report).

In addition, in order for the /usr/share/lxc/templates/lxc-fedora script to mount the squashfs on the loop device, you need to have CONFIG_SQUASHFS=m and CONFIG_SQUASHFS_XZ=y selected in the kernel config.

OpenSUSE
The lxc-opensuse template assists with setting up openSUSE Linux guests. Fixme: the zypper command-line package manager tool is lacking in Gentoo Portage.

sshd
LXC contains a minimal template script for sshd guests. You can create the sshd guest through:
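For example (the container name is illustrative):

root # lxc-create -n sshd-guest -t sshd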

Ubuntu
lxc contains a minimal template script for Ubuntu guests (see ubuntu.com). Note that in order to use lxc-ubuntu, you must:

Usage is as follows...
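For example (the container name is illustrative):

root # lxc-create -n ubuntu-guest -t ubuntu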

Or, in older versions:

This will create the folder ubuntu-guest. Inside the folder, there will be a file called config. It takes a very long time to create an Ubuntu guest, so please be patient.

Another example alternative using the download template:
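A sketch using the download template (distribution, release and architecture options are passed after the -- separator; the values are illustrative):

root # lxc-create -n ubuntu-guest -t download -- -d ubuntu -r trusty -a amd64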

Manual use
To start and stop the guest container, simply run:
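A minimal sketch (the container name is illustrative; -d daemonizes the start on LXC 1.x):

root # lxc-start -n guestname -d
root # lxc-stop -n guestname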

Please be aware that when you have daemonized the booting process, you will not get any output on screen. This might happen when you conveniently use an alias which daemonizes by default and forget about it. You may get puzzled later by this if there is a problem while booting a new container that has not been configured properly (e.g. network).

You should use the username and password of the existing system user used when creating the container.

To set the root password, enter the container's directory (you will see the rootfs directory there) and issue:
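A sketch, assuming the /var/lxc/<guestname> layout described in the filesystem layout section below:

root # cd /var/lxc/guestname
root # chroot rootfs /bin/bash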

Set the password with the command:
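Inside the chroot just entered:

(chroot) # passwd root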

Use from Gentoo init system
Gentoo's LXC ebuild (unless the relevant USE flag is enabled) provides an init script to manage containers and start them at boot time. To make use of the init script you just have to create a symlink in the /etc/init.d directory:
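A sketch, assuming one symlink per container named lxc.<guestname> pointing to the lxc init script:

root # ln -s lxc /etc/init.d/lxc.guestname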

Of course, the use of such scripts is primarily intended for booting and stopping the system. To add a guest to the rc chain, run:
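For example (same naming assumption as above):

root # rc-update add lxc.guestname default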

To enter a (already started) guest directly from the host machine, see the lxc-console section below.

Use from Gentoo systemd
To start the system in the container, call:
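A sketch, assuming the lxc@.service unit template shipped with LXC and a container named guestname:

root # systemctl start lxc@guestname.service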

To stop it again, issue:
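With the same assumptions:

root # systemctl stop lxc@guestname.service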

To start it automatically at (host) system boot up, use:
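Again with the same assumptions:

root # systemctl enable lxc@guestname.service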

lxc-console
Using lxc-console provides console access to the guest. To use type:
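For example (the container name is illustrative):

root # lxc-console -n guestname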

If you get a message saying lxc-console: console denied by guestname, then you need to add the following to your container config:

lxc.tty = 1

To exit the console, use:
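The default escape sequence is Ctrl+a followed by q (the escape prefix can be changed with lxc-console's -e option).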

Note that unless you log out inside the guest, you will remain logged on, so the next time you run lxc-console, you will return to the same session.

Usage of lxc-console should be restricted to root. It should primarily be a tool for system administrators (root) to enter a newly created container, e.g. when the network connection is not properly configured yet. Using multiple instances of lxc-console on distinct guests works fine, but starting a second instance for a guest that is already governed by another lxc-console session leads to redirection of keyboard input and terminal output. It is best to avoid the use of lxc-console altogether. (Perhaps the lxc developers should enhance the tool in such a way that only singleton use per guest is possible. ;-)

Accessing the container with sshd
A common technique to allow users direct access into a system container is to run a separate sshd inside the container. Users then connect to that sshd directly. In this way, you can treat the container just like you treat a full virtual machine where you grant external access. If you give the container a routable address, then users can reach it without using ssh tunneling.

If you set up the container with a virtual Ethernet interface connected to a bridge on the host, then it can have its own Ethernet address on the LAN, and you should be able to connect directly to it without logically involving the host (the host will transparently relay all traffic destined for the container, without the need for any special considerations). You should be able to simply ssh to the container's address.

Filesystem layout
Some of the lxc tools apparently assume that /etc/lxc/<guestname>/ exists. However, you should keep the guests' root filesystems out of /etc, since it is not a path that is supposed to store large volumes of binary data.

The templates of LXC will use the following locations:


 * /etc/lxc/<guestname>/config = guest configuration file
 * /etc/lxc/<guestname>/fstab = optional guest fstab file
 * /var/lxc/<guestname>/rootfs = root filesystem image
 * /var/log/lxc/ = lxc-start logfiles

Unprivileged containers
Unprivileged containers are the safest containers. The usual privileged LXC should be considered unsafe because, while running in a separate namespace, UID 0 in the container is still equal to UID 0 (root) outside of the container, meaning that if you somehow get access to any host resource through proc, sys or some random syscall, you can potentially escape the container and then be root on the host. That's what user namespaces were designed for. Each user that's allowed to use them on the system gets assigned a range of unused UIDs and GIDs. So unprivileged LXC maps, for instance, user and group ids 0 through 65,000 in the container to ids 100,000 through 165,000 on the host. That means that UID 0 (root) in the container maps to UID 100,000 outside the container. So, in case something goes wrong and an attacker manages to escape the container, they find themselves with no more rights than the nobody user.

The standard paths also have their unprivileged equivalents:


 * /etc/lxc/lxc.conf => ~/.config/lxc/lxc.conf
 * /etc/lxc/default.conf => ~/.config/lxc/default.conf
 * /var/lib/lxc => ~/.local/share/lxc
 * /var/lib/lxcsnaps => ~/.local/share/lxcsnaps
 * /var/cache/lxc => ~/.cache/lxc

Your user, while able to create new user namespaces in which it will be UID 0 and have some of root's privileges over resources tied to that namespace, will obviously not be granted any extra privilege on the host. Unfortunately this also means that the following common operations are not allowed:


 * Mounting most filesystems.
 * Creating device nodes.
 * Any operation against a UID/GID outside of the mapped set.

This also means that your user is limited in creating new network devices on the host or changing bridge configuration. To work around that, the LXC team wrote a tool called "lxc-user-nic", which is the only setuid binary that is part of LXC 1.0 and which performs one simple task: it parses a configuration file and, based on its content, creates network devices for the user and bridges them. To prevent abuse, you can restrict the number of devices a user can request and to which bridge they may be added by editing that configuration file (see below).

Prerequisites
Prerequisites for well working unprivileged containers include:


 * Kernel: 3.13 + a couple of staging patches or later version
 * User namespaces enabled in the kernel (CONFIG_USER_NS=y)
 * A very recent version of shadow that supports subuid/subgid (sys-apps/shadow-4.2.1 or later)
 * Per-user cgroups on all controllers
 * LXC 1.0 or higher
 * A version of PAM with a loginuid patch (it's a dependency of recent version of shadow mentioned above, so it installs automatically with recent shadow-4.2.1)

LXC pre-built containers
Because of the limitations mentioned above you won't be allowed to create a block or character device in a user namespace, as being allowed to do so would let you access anything on the host. The same goes for some filesystems: you won't, for example, be allowed to do loop mounts or mount an ext partition, even if you can access the block device. Those limitations are a big problem during the initial bootstrap of a container, as tools like debootstrap, yum, … usually try to do some of those restricted actions and will fail pretty badly.

Some templates may be tweaked to work, and workarounds such as a modified fakeroot could be used to bypass some of those limitations, but the current state is that most distribution templates (including Gentoo) simply won't work with those. Instead you should use the "download" template, which will provide you with pre-built images of the distributions that are known to work in such an environment. This template contacts a server which hosts daily pre-built rootfs and configuration for the most common templates, instead of assembling the rootfs and configuration locally.

Those images are built from LXC project's Jenkins server. The actual build process is pretty straightforward, a basic chroot is assembled, then the current git master is downloaded, built and the standard templates are run with the right release and architecture, the resulting rootfs is compressed, a basic config and metadata (expiry, files to template, …) is saved, the result is pulled by LXC project's main server, signed with a dedicated GPG key and published on the public web server.

The client side is a simple template which contacts the server over https (the domain is also DNSSEC enabled and available over IPv6), grabs signed indexes of all the available images, checks if the requested combination of distribution, release and architecture is supported and, if it is, grabs the rootfs and metadata tarballs, validates their signatures and stores them in a local cache. Any container creation after that point is done using that cache, until the cache entries expire, at which point it will grab a new copy from the server. You can also use the "--flush-cache" parameter to flush the local copy (if present).

The template has been carefully written to work on any system that has a POSIX-compliant shell with wget. gpg is recommended but can be disabled if your host doesn't have it (at your own risk). The current list of images can be requested by passing the "--list" parameter:
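For example (the container name is a placeholder; with --list the template should only print the image index and exit):

user $ lxc-create -t download -n placeholder -- --list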

While the template was designed to workaround limitations of unprivileged containers, it works just as well with system containers, so even on a system that doesn’t support unprivileged containers you can do:
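For example, matching the Ubuntu 15.04 image mentioned below (names illustrative):

root # lxc-create -t download -n vivid-guest -- -d ubuntu -r vivid -a amd64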

And you'll get a new container running the latest build of Ubuntu 15.04 Vivid Vervet amd64.

Configuring unprivileged LXC
Install the required packages:
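A sketch (package names/categories may differ between tree versions; shadow 4.2.1 or later is needed for subuid/subgid support):

root # emerge --ask app-emulation/lxc sys-apps/shadow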

Create files necessary for assigning subuids and subgids:
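For example:

root # touch /etc/subuid /etc/subgid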

Create a new user, set its password and log in:
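A sketch (the user name lxc matches the lxc-usernet example below):

root # useradd -m -G users lxc
root # passwd lxc
root # su - lxc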

Make sure your user has a UID and GID map defined in /etc/subuid and /etc/subgid:
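For example:

user $ grep lxc /etc/subuid /etc/subgid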

On Gentoo, a default allocation of 65536 UIDs and GIDs is given to every new user on the system, so you should already have one. If not, you'll have to assign a set of subuids and subgids for a user manually:
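A sketch (the range matches the illustrative values used below; the final chmod lets the mapped UIDs traverse the home directory):

root # usermod --add-subuids 100000-165536 lxc
root # usermod --add-subgids 100000-165536 lxc
root # chmod +x /home/lxc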

That last one is required because LXC needs to be able to traverse the path to its per-user directories after it has switched to the mapped UIDs. If you're using ACLs, you may instead use "u:100000:x" as a more specific ACL.

Now create ~/.config/lxc/default.conf with the following content:
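A minimal sketch (the network keys follow the examples earlier in this article; the id_map values are illustrative and must match your subuid/subgid allocation):

lxc.network.type = veth
lxc.network.link = br0.1
lxc.network.flags = up
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536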

The last two lines mean that you have one UID map and one GID map defined for the container, which will map UIDs and GIDs 0 through 65,536 in the container to UIDs and GIDs 100,000 through 165,536 on the host. Those values should match those found in /etc/subuid and /etc/subgid; the values above are just illustrative ones.

And create the system-wide lxc-usernet file with:
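A sketch (the file is assumed to be /etc/lxc/lxc-usernet; the format is: user type bridge count):

lxc veth br0.1 2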

This declares that the user "lxc" is allowed to create up to 2 veth-type devices and attach them to the bridge called br0.1.

Don't forget to add the directory containing the LXC commands to the PATH environment variable, either in the system-wide shell profile (to take effect for all users) or in your user's shell profile (for the current user only). Otherwise lxc-* commands will not work under your user environment (this is not the case for lxc-1.1.0-r5, lxc-1.1.1 and later versions, because they install their command files in a standard path). Example:
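A purely illustrative sketch (the actual directory depends on where your LXC version installs its tools):

# e.g. in ~/.bashrc
export PATH="${PATH}:/usr/sbin"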

Now let’s create our first unprivileged container with:
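For example, using the download template as described above (the container name and image values are illustrative):

user $ lxc-create -t download -n unpriv-guest -- -d ubuntu -r vivid -a amd64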

Don't forget to change the root password of the unprivileged LXC container with the following commands, run under your user:
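A sketch (the container name follows the example above):

user $ lxc-start -n unpriv-guest -d
user $ lxc-attach -n unpriv-guest -- passwd root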

Then you can login easily with your new password as usual under your user:
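For example:

user $ lxc-console -n unpriv-guest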

P.S. To be completed: a "Creating cgroups" section has to be added, covering setups with or without cgmanager through OpenRC/systemd accordingly (see the "Creating cgroups" paragraph there as an example for the moment).

External resources

 * Stéphane Graber's LXC 1.0: Blog post series