libvirt
libvirt is a virtualization management toolkit.
The libvirt package comprises two components: a management toolkit and a library. It primarily provides virtualization support for UNIX-like systems.
Overview
app-emulation/libvirt provides a CLI toolkit that can be used to assist in the creation and configuration of new domains. It is also used to adjust a domain’s resource allocation/virtual hardware.
Libvirt offers an extensive set of features, as it is a library that can interface with other virtualization software such as QEMU, LXC, VMware, VirtualBox, and Xen.
Libvirt feature overview
- libvirt stores its configuration for each virtual machine and container as XML files in directories under /etc/libvirt. For example, QEMU configuration goes under the /etc/libvirt/qemu directory and LXC configuration under /etc/libvirt/lxc (see the example after this list).
- libvirt can be used to create/delete/maintain virtual machines and container instances.
- libvirt can start/stop containers and virtual machines.
- libvirt can be used to snapshot a virtual machine instance.
- libvirt can mount CD-ROM ISO images.
- libvirt can be used to create different networking connections for a guest OS in VM or a container.
- libvirt can create bridges, MACVLAN interfaces, static netdevs, and NAT'd IP interfaces.
- libvirt can be used to create/delete/maintain storage pools using many different filesystems and methods. Some include directly sharing a directory, block device, gluster, iSCSI, LVM, multi-path devices, netfs, SCSI, RADOS/Ceph, and Sheepdog.
libvirt can manage several types of virtual machines and containers, including QEMU/KVM, LXC, Xen, VirtualBox, and OpenVZ guests.
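As an illustration of the XML-based configuration mentioned above, the definition of an existing domain can be inspected directly; the domain name gentoo here is only a placeholder and the output (omitted) varies per guest:
root #
virsh dumpxml gentoo
root #
ls /etc/libvirt/qemu/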
Installation
Kernel
The following kernel configuration is recommended for the libvirtd daemon.
Check the logs to see if any additional kernel configs are requested by the build.
[*] Networking support
      Networking Options --->
            [*] Network packet filtering framework (Netfilter) --->
                  [*] Advanced netfilter configuration
                  Core Netfilter Configuration --->
                        <*> "conntrack" connection tracking match support
                        <*> CHECKSUM target support
                  IPv6: Netfilter Configuration --->
                        <*> IP6 tables support (required for filtering)
                        <*> ip6tables NAT support
                  <*> Ethernet Bridge tables (ebtables) support --->
                        <*> ebt: nat table support
                        <*> ebt: mark filter support
            [*] QoS and/or fair queueing --->
                  <*> Hierarchical Token Bucket (HTB)
                  <*> Stochastic Fairness Queueing (SFQ)
                  <*> Ingress/classifier-action Qdisc
                  <*> Netfilter mark (FW)
                  <*> Universal 32bit comparisons w/ hashing (U32)
                  [*] Actions
                        <*> Traffic Policing
The following kernel options are required to pass some of the checks performed by the virt-host-validate tool, which also means they are required for certain functionality.
General setup --->
      [*] Control Group support --->
            --- Control Group support
            [*] IO controller

Device Drivers --->
      [*] Memory Controller drivers ---
            --- Memory Controller drivers

Device Drivers --->
      [*] Network device support --->
            [*] Network core driver support
            <*> Universal TUN/TAP device driver support
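Whether the running kernel already provides some of these options can be checked against its configuration. The command below is only a sample check, and it assumes CONFIG_IKCONFIG_PROC is enabled so that /proc/config.gz exists; otherwise inspect /usr/src/linux/.config instead:
user $
zgrep -E 'CONFIG_TUN|CONFIG_BRIDGE_NF_EBTABLES|CONFIG_NET_SCH_HTB|CONFIG_CGROUPS' /proc/config.gz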
USE flags
Some packages are aware of the libvirt USE flag.
Review the possible USE flags for libvirt:
USE flags for app-emulation/libvirt (C toolkit to manipulate virtual machines)
Flag | Description
---|---
+caps | Use Linux capabilities library to control privilege
+libvirtd | Builds the libvirtd daemon as well as the client utilities instead of just the client utilities
+qemu | Support management of QEMU virtualisation (app-emulation/qemu)
+udev | Enable virtual/udev integration (device discovery, power and storage device support, etc)
+virt-network | Enable virtual networking (NAT) support for guests. Includes all the dependencies for NATed network mode. Effectively any network setup that relies on libvirt to setup and configure network interfaces on your host. This can include bridged and routed networks ONLY if you are allowing libvirt to create and manage the underlying devices for you. In some cases this requires enabling the 'netcf' USE flag (currently unavailable)
apparmor | Enable support for the AppArmor application security system
audit | Enable support for Linux audit subsystem using sys-process/audit
bash-completion | Enable bash-completion support
dtrace | Enable dtrace support via dev-debug/systemtap
firewalld | DBus interface to iptables/ebtables allowing for better runtime management of your firewall
fuse | Allow LXC to use sys-fs/fuse for mountpoints
glusterfs | Enable GlusterFS support via sys-cluster/glusterfs
iscsi | Allow using an iSCSI remote storage server as pool for disk image storage
iscsi-direct | Allow using libiscsi for iSCSI storage pool backend
libssh | Use net-libs/libssh to communicate with remote libvirtd hosts, for example: qemu+libssh://server/system
libssh2 | Use net-libs/libssh2 to communicate with remote libvirtd hosts, for example: qemu+libssh2://server/system
lvm | Allow using the Logical Volume Manager (sys-fs/lvm2) as pool for disk image storage
lxc | Support management of Linux Containers virtualisation (app-containers/lxc)
nbd | Allow using sys-block/nbdkit to access network disks
nfs | Allow using Network File System mounts as pool for disk image storage
nls | Add Native Language Support (using gettext - GNU locale utilities)
numa | Use NUMA for memory segmenting via sys-process/numactl and sys-process/numad
openvz | Support management of OpenVZ virtualisation (openvz-sources)
parted | Allow using real disk partitions as pool for disk image storage, using sys-block/parted to create, resize and delete them
pcap | Support auto learning IP addresses for routing
policykit | Enable PolicyKit (polkit) authentication support
rbd | Enable rados block device support via sys-cluster/ceph
sasl | Add support for the Simple Authentication and Security Layer
selinux | !!internal use only!! Security Enhanced Linux support, this must be set by the selinux profile or breakage will occur
test | Enable dependencies and/or preparations necessary to run tests (usually controlled by FEATURES=test but can be toggled independently)
verify-sig | Verify upstream signatures on distfiles
virtiofsd | Drag in virtiofsd dependency app-emulation/virtiofsd
virtualbox | Support management of VirtualBox virtualisation (app-emulation/virtualbox)
wireshark-plugins | Build the net-analyzer/wireshark plugin for the Libvirt RPC protocol
xen | Support management of Xen virtualisation (app-emulation/xen)
zfs | Enable ZFS backend storage sys-fs/zfs
If libvirt is going to be used, the usbredir USE flag may also be needed to redirect USB devices to another machine over TCP. libvirt comes with a number of USE flags; review them and set them according to the setup. These are the recommended USE flags for libvirt:
app-emulation/libvirt pcap virt-network numa fuse macvtap vepa qemu
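Before building, the effect of these flags can be previewed; this is a generic Portage check rather than anything libvirt-specific:
user $
emerge --pretend --verbose app-emulation/libvirt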
USE_EXPAND
Additional ebuild configuration options are provided via the USE_EXPAND variables QEMU_USER_TARGETS and QEMU_SOFTMMU_TARGETS. See app-emulation/qemu for a list of all available targets; most of them are obscure and can be ignored, and leaving these variables at their default values will disable almost everything, which is fine for most users.
For each target specified, a qemu executable will be built. A softmmu target is the standard QEMU use case of emulating an entire system (like VirtualBox or VMware, but with optional support for emulating the CPU along with peripherals). user targets execute user-mode code only; their (ambitious) purpose is to transparently run user-space Linux ELF binaries from a different architecture on the native system (much like multilib, but without the need for a software stack or a CPU capable of running it).
In order to enable QEMU_USER_TARGETS and QEMU_SOFTMMU_TARGETS we can edit the variables globally in /etc/portage/make.conf, i.e.:
QEMU_SOFTMMU_TARGETS="arm x86_64 sparc"
QEMU_USER_TARGETS="x86_64"
Or, the /etc/portage/package.use file(s) can be modified. Two equivalent syntaxes are available: traditional USE flag syntax, i.e.:
app-emulation/qemu qemu_softmmu_targets_arm qemu_softmmu_targets_x86_64 qemu_softmmu_targets_sparc
app-emulation/qemu qemu_user_targets_x86_64
The second is the newer USE_EXPAND-specific syntax:
app-emulation/qemu QEMU_SOFTMMU_TARGETS: arm x86_64 sparc QEMU_USER_TARGETS: x86_64
Emerge
After reviewing and adding any desired USE flags, emerge app-emulation/libvirt and app-emulation/qemu:
root #
emerge --ask app-emulation/libvirt app-emulation/qemu
Additional software
Verify host as QEMU-capable
To verify that the host hardware has the needed virtualization support, issue the following command:
user $
grep --color -E "vmx|svm" /proc/cpuinfo
The vmx or svm CPU flag should appear highlighted (in red) in the output, indicating hardware virtualization support.
The /dev/kvm device file must also exist.
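A quick way to confirm the device node is present (the group ownership shown will depend on local udev rules):
user $
ls -l /dev/kvm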
Configuration
Environment variables
A list of all environment variables read by the libvirt library and its toolkit commands:
- DISPLAY - VirtualBox only
- DNSMASQ_CLIENT_ID - Used with dnsmasq
- DNSMASQ_IAID - Used with dnsmasq
- DNSMASQ_INTERFACE - Used with dnsmasq
- DNSMASQ_LEASE_EXPIRES - Used with dnsmasq
- DNSMASQ_OLD_HOSTNAME - Used with dnsmasq
- DNSMASQ_SERVER_DUID - Used with dnsmasq
- DNSMASQ_SUPPLIED_HOSTNAME - Used with dnsmasq
- LIBVIRT_ADMIN_DEFAULT_URI - administration
- LIBVIRT_AUTH_FILE - authentication
- LIBVIRT_AUTOSTART
- LIBVIRT_DEBUG
- LIBVIRT_DEFAULT_URI
- LIBVIRT_DIR_OVERRIDE
- LIBVIRT_GNUTLS_DEBUG
- LIBVIRT_LIBSSH_DEBUG
- LIBVIRT_LOG_FILTERS
- LIBVIRT_LOG_OUTPUTS
- LISTEN_PID - For systemd only.
- LISTEN_FDS - For systemd only.
- NOTIFY_SOCKET - For systemd only.
- QEMU_AUDIO_DRV
- SDL_AUDIODRIVER
- VBOX_APP_HOME - VirtualBox only
- VIR_BRIDGE_NAME - Bridging
- VIRSH_DEFAULT_CONNECT_URI
- VIRTD_PATH
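For example, LIBVIRT_DEFAULT_URI can point virsh at a specific hypervisor connection without passing --connect on every invocation; qemu:///system is used here as an assumed connection URI:
user $
LIBVIRT_DEFAULT_URI=qemu:///system virsh list --all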
Files
Files read on the host by the libvirt library, the libvirtd daemon, and their associated commands:
- /etc/libvirt/hooks/
- /etc/libvirt/libvirt-admin.conf
- /etc/libvirt/libvirt.conf
- /etc/libvirt/libvirtd.conf
- /etc/libvirt/libxl.conf
- /etc/libvirt/libxl-lockd.conf
- /etc/libvirt/libxl-sanlock.conf
- /etc/libvirt/lxc.conf
- /etc/libvirt/nwfilter/
- /etc/libvirt/secrets/
- /etc/libvirt/storage/
- /etc/libvirt/virtlockd.conf
- /etc/libvirt/virtlogd.conf
- /proc/cgroups
- /proc/cpuinfo
- /proc/modules
- /proc/mounts
- /proc/net/dev
- /proc/stat
- /proc/sys/net/ipv4/ip_forward
- /proc/sys/net/ipv6/conf/all/forwarding
- /proc/sys/net/ipv6/conf/%s/%s
- /sys/class/fc_host/host0
- /sys/class/fc_remote_ports
- /sys/class/scsi_host
- /sys/devices/system
- /sys/devices/system/%s/cpu/online
- /sys/devices/system/cpu/online
- /sys/devices/system/node/node0/access1
- /sys/devices/system/node/node0/meminfo
- /sys/devices/system/node/node0/memory_side_cache
- /sys/devices/system/node/online
- /sys/fs/resctrl
- /sys/fs/resctrl/info/%s/num_closids
- /sys/kernel/mm/ksm
- /sys/kernel/mm/transparent_hugepage/hpage_pmd_size
- /sys/fs/resctrl/%s/schemata
- /sys/fs/resctrl/info/%s/min_cbm_bits
- /sys/fs/resctrl/info/MB/bandwidth_gran
- /sys/fs/resctrl/info/MB/min_bandwidth
- /sys/fs/resctrl/info/MB/num_closids
- /sys/fs/resctrl/info/L3_MON
- /proc/vz/vestat - Only with openvz
- /sys/fs/resctrl/info/L3_MON/num_rmids
- /var/lib/libvirt/boot
- /var/lib/libvirt/dnsmasq
- /var/lib/libvirt/images
- /var/lib/libvirt/sanlock
User permissions
If the policykit USE flag is not enabled for the libvirt package, the libvirt group will not be created when app-emulation/libvirt is emerged. In that case, another group, such as wheel, must be used for unix_sock_group. After emerging, to run virt-manager as a normal user, ensure each user has been added to the libvirt group:
root #
usermod -a -G libvirt <user>
Uncomment the following lines in the libvirtd configuration file (/etc/libvirt/libvirtd.conf):
auth_unix_ro = "none"
auth_unix_rw = "none"
unix_sock_group = "libvirt"
unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0770"
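For the changed settings to take effect, the daemon has to be restarted; an OpenRC example is shown below (on systemd, restart the corresponding modular daemons described in the Service section instead):
root #
rc-service libvirtd restart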
Be sure to have the user log out then log in again for the new group settings to be applied.
virt-admin should then be launchable as a regular user, after the services have been started.
If permission denied errors are encountered when loading ISO images from user directories (somewhere beneath /home/), the /var/lib/libvirt/images/ directory can be used to store the images instead.
Service
OpenRC
To start the libvirtd daemon using OpenRC and add it to the default runlevel:
root #
rc-service libvirtd start && rc-update add libvirtd default
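The state of the daemon can be verified afterwards:
root #
rc-service libvirtd status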
systemd
Historically all libvirt functionality was provided in the monolithic libvirtd daemon. Upstream has developed a new modular architecture for libvirt where each driver is run in its own daemon. Therefore, recent versions of libvirt (at least >=app-emulation/libvirt-9.3.0) need the service units for the hypervisor drivers enabled. For QEMU this is virtqemud.service, for Xen it is virtxend.service and for LXC virtlxcd.service and their corresponding sockets.
Enable the service units and their sockets, depending on the functionality (qemu, xen, lxc) you need:
root #
systemctl enable --now virtqemud.service
Created symlink /etc/systemd/system/multi-user.target.wants/virtqemud.service → /usr/lib/systemd/system/virtqemud.service.
Created symlink /etc/systemd/system/sockets.target.wants/virtqemud.socket → /usr/lib/systemd/system/virtqemud.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtqemud-ro.socket → /usr/lib/systemd/system/virtqemud-ro.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtqemud-admin.socket → /usr/lib/systemd/system/virtqemud-admin.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtlogd.socket → /usr/lib/systemd/system/virtlogd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtlockd.socket → /usr/lib/systemd/system/virtlockd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtlogd-admin.socket → /usr/lib/systemd/system/virtlogd-admin.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtlockd-admin.socket → /usr/lib/systemd/system/virtlockd-admin.socket.
root #
systemctl enable --now virtstoraged.socket
Created symlink /etc/systemd/system/sockets.target.wants/virtstoraged.socket → /usr/lib/systemd/system/virtstoraged.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtstoraged-ro.socket → /usr/lib/systemd/system/virtstoraged-ro.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtstoraged-admin.socket → /usr/lib/systemd/system/virtstoraged-admin.socket.
All the service units use a timeout that causes them to shut down after 2 minutes if no VM is running. They are automatically reactivated when a socket is accessed, e.g. when virt-manager is started or a virsh command is run.
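Which of the modular daemons and sockets are currently active can be listed with a pattern match (a standard systemctl feature):
user $
systemctl list-units 'virt*'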
Firewall
The following firewall chain names have been reserved by the libvirt library and libvirtd daemon.
Reserved chain name | Description
---|---
nat | NAT
LIBVIRT_INP | Firewall input
LIBVIRT_FWI | Firewall forward (incoming)
LIBVIRT_FWO | Firewall forward (outgoing)
LIBVIRT_FWX | Firewall forward (cross-traffic)
LIBVIRT_OUT | Firewall output
LIBVIRT_PRT | Firewall postrouting
To firewall administrators: the nat chain name is often used by net-firewall/shorewall, net-firewall/firewalld, net-firewall/ufw, net-firewall/ipfw, and possibly net-firewall/fwbuilder; it is far easier to rename it on the firewall side than to rename nat within libvirt/libvirtd.
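On a running host, the chains that libvirt has installed can be inspected with iptables (add -t nat for the postrouting chain); on an nftables-based setup, nft list ruleset shows the equivalent rules:
root #
iptables -S | grep LIBVIRT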
Networking
For configuration of networking under libvirt, continue reading at libvirt/QEMU networking.
Usage
A list of domains (configured VMs) can be obtained by running:
root #
virsh list
 Id   Name      State
-----------------------------
 1    gentoo    running
 2    default   running
If no VM is running at the moment, virsh list will output an empty list. Use virsh list --all to see all VMs, whether created, enabled, turned off, or inactive.
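Domains from that list can then be started and shut down by name (gentoo is the domain from the example output above):
root #
virsh start gentoo
root #
virsh shutdown gentoo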
Details about the host node (CPUs, memory) can be checked by running:
user $
virsh nodeinfo
CPU model:           x86_64
CPU(s):              4
CPU frequency:       1600 MHz
CPU socket(s):       1
Core(s) per socket:  4
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         16360964 KiB
The libvirtd daemon can be checked via Unix socket by running:
root #
virsh sysinfo
<sysinfo type='smbios'>
  <bios>
    <entry name='vendor'>Dell Inc.</entry>
    <entry name='version'>A22</entry>
    <entry name='date'>11/29/2018</entry>
    <entry name='release'>4.6</entry>
  </bios>
  <system>
    <entry name='manufacturer'>Dell Inc.</entry>
    <entry name='product'>OptiPlex 3010</entry>
    <entry name='version'>01</entry>
    <entry name='serial'>JRJ0SW1</entry>
    <entry name='uuid'>4c4c4544-0052-4a10-8030-cac04f535731</entry>
    <entry name='sku'>OptiPlex 3010</entry>
    <entry name='family'>Not Specified</entry>
  </system>
  <baseBoard>
    <entry name='manufacturer'>Dell Inc.</entry>
    <entry name='product'>042P49</entry>
    <entry name='version'>A00</entry>
    <entry name='serial'>/JRJ0SW1/CN701632BD05R5/</entry>
    <entry name='asset'>Not Specified</entry>
    <entry name='location'>Not Specified</entry>
  </baseBoard>
  <chassis>
    <entry name='manufacturer'>Dell Inc.</entry>
    <entry name='version'>Not Specified</entry>
    <entry name='serial'>JRJ0SW1</entry>
    <entry name='asset'>Not Specified</entry>
    <entry name='sku'>To be filled by O.E.M.</entry>
  </chassis>
  <processor>
    <entry name='socket_destination'>CPU 1</entry>
    <entry name='type'>Central Processor</entry>
    <entry name='family'>Core i5</entry>
    <entry name='manufacturer'>Intel(R) Corporation</entry>
    <entry name='signature'>Type 0, Family 6, Model 58, Stepping 9</entry>
    <entry name='version'>Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz</entry>
    <entry name='external_clock'>100 MHz</entry>
    <entry name='max_speed'>3200 MHz</entry>
    <entry name='status'>Populated, Enabled</entry>
    <entry name='serial_number'>Not Specified</entry>
    <entry name='part_number'>Fill By OEM</entry>
  </processor>
  <memory_device>
    <entry name='size'>8 GB</entry>
    <entry name='form_factor'>DIMM</entry>
    <entry name='locator'>DIMM1</entry>
    <entry name='bank_locator'>Not Specified</entry>
    <entry name='type'>DDR3</entry>
    <entry name='type_detail'>Synchronous</entry>
    <entry name='speed'>1600 MT/s</entry>
    <entry name='manufacturer'>8C26</entry>
    <entry name='serial_number'>00000000</entry>
    <entry name='part_number'>TIMETEC-UD3-1600</entry>
  </memory_device>
  <memory_device>
    <entry name='size'>8 GB</entry>
    <entry name='form_factor'>DIMM</entry>
    <entry name='locator'>DIMM2</entry>
    <entry name='bank_locator'>Not Specified</entry>
    <entry name='type'>DDR3</entry>
    <entry name='type_detail'>Synchronous</entry>
    <entry name='speed'>1600 MT/s</entry>
    <entry name='manufacturer'>8C26</entry>
    <entry name='serial_number'>00000000</entry>
    <entry name='part_number'>TIMETEC-UD3-1600</entry>
  </memory_device>
  <oemStrings>
    <entry>Dell System</entry>
    <entry>1[0585]</entry>
    <entry>3[1.0]</entry>
    <entry>12[www.dell.com]</entry>
    <entry>14[1]</entry>
    <entry>15[11]</entry>
  </oemStrings>
</sysinfo>
Host verification
To verify the entire host setup for libvirt, execute:
user $
virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by kernel                               : PASS
  QEMU: Checking for secure guest support                                    : WARN (Unknown if this platform has Secure Guest support)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'freezer' controller support                     : FAIL (Enable 'freezer' in kernel Kconfig file or mount/enable cgroup controller in your system)
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking if device /sys/fs/fuse/connections exists                   : PASS
Invocation
For invocation of the command line interface (CLI) of libvirt, see virsh invocation.
For invocation of the libvirtd daemon:
user $
libvirtd --help
Usage: libvirtd [options]

Options:
  -h | --help             Display program help
  -v | --verbose          Verbose messages
  -d | --daemon           Run as a daemon & write PID file
  -l | --listen           Listen for TCP/IP connections
  -t | --timeout <secs>   Exit after timeout period
  -f | --config <file>    Configuration file
  -V | --version          Display version information
  -p | --pid-file <file>  Change name of PID file

libvirt management daemon:

  Default paths:

    Configuration file (unless overridden by -f):
      /etc/libvirt/libvirtd.conf

    Sockets:
      /run/libvirt/libvirt-sock
      /run/libvirt/libvirt-sock-ro

    TLS:
      CA certificate:     /etc/pki/CA/cacert.pem
      Server certificate: /etc/pki/libvirt/servercert.pem
      Server private key: /etc/pki/libvirt/private/serverkey.pem

    PID file (unless overridden by -p):
      /run/libvirtd.pid
virsh cannot assist with the creation of XML files needed by libvirt. This is what some virt-* tools and QEMU front-ends are for.
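Once a domain XML file exists (written by hand or generated by one of those tools), it can be registered and started with virsh; the file path and domain name below are placeholders:
root #
virsh define /var/lib/libvirt/images/gentoo.xml
root #
virsh start gentoo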
Removal
Removal of the libvirt package (toolkit, library, and utilities) can be done by executing:
root #
emerge --ask --depclean --verbose app-emulation/libvirt
Troubleshooting
Messages mentioning ...or mount/enable cgroup controller in your system
Some of these messages are addressed in the previous section about kernel configuration.
If the above doesn't fix the problem, follow the section Control groups on the LXC page to activate the correct kernel options.
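On a cgroup v2 system, the controllers currently available can also be listed directly; on cgroup v1, /proc/cgroups (listed in the Files section above) provides similar information:
user $
cat /sys/fs/cgroup/cgroup.controllers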
WARN (Unknown if this platform has Secure Guest support)
This message appears on systems that are neither IBM s390 nor AMD and seems to be of little relevance [1] [2] [3] [4].
See also
- Virtualization — the concept and technique that permits running software in an environment separate from a computer operating system.
- QEMU — a generic, open source hardware emulator and virtualization suite.
- QEMU/Front-ends — facilitate VM management and use
- Libvirt/QEMU_networking — details the setup of Gentoo networking by Libvirt for use by guest containers and QEMU-based virtual machines.
- Libvirt/QEMU_guest — covers libvirt and its creation of a virtual machine (VM) for use under the soft-emulation mode QEMU hypervisor Type-2, notably using virsh command.
- Virt-manager — desktop user interface for management of virtual machines and containers through the libvirt library
- Virt-manager/QEMU_guest — QEMU creation of a guest (VM or container)
- QEMU/Linux guest — describes the setup of a Gentoo Linux guest in QEMU using Gentoo bootable media.
- Virsh — a CLI-based virtualization management toolkit.
External resources
- Daniel P. Berrangé libvirt announcements
- Red Hat Virtualization Network Configuration
- Create libvirt XML file for a virtual machine (VM) of Gentoo Install CD
References
- ↑ Libvirt Protected Virtualization on s390
- ↑ libvir-list mailing list PATCH 3/6 qemu: check if AMD secure guest support is enabled
- ↑ libvir-list mailing list PATCH 4/6 tools: secure guest check on s390 in virt-host-validate
- ↑ libvir-list mailing list PATCH 5/6 tools: secure guest check for AMD in virt-host-validate