libvirt is a virtualization management toolkit.
The libvirt package comprises a toolkit and an object library. It primarily provides virtualization support for UNIX-like systems.
The libvirt toolkit enables the creation of new domains, and the configuration and adjustment of a domain's resource allocation and virtual hardware, all from the command line interface (CLI).
libvirt offers a broad feature set, since it is the common management library for (but not limited to) QEMU, LXC, Docker, VMware, VirtualBox, and Xen.
- libvirt is used by many virtualization software packages.
- libvirt stores its configuration in XML format, one file per virtual machine (VM) or container, under /etc/libvirt. For example, QEMU-specific configuration goes under the /etc/libvirt/qemu directory; LXC configuration goes under /etc/libvirt/lxc.
- libvirt can create/delete/maintain an instance of many virtual machines (VM) and containers.
- libvirt can start/stop a VM/container.
- libvirt can save a snapshot of a VM.
- libvirt can mount a CD-ROM ISO image.
- libvirt can create different networking connections for a guest OS in a VM or a container to use.
- libvirt can create bridges, MACVLAN, static netdev, and NAT'd IP interfaces.
- libvirt can create/delete/maintain storage pools using many different filesystems such as directory, direct hard drive, gluster, iSCSI, LVM, multi-path devices, netfs, SCSI, RADOS/Ceph, and Sheepdog.
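The lifecycle operations above map onto virsh subcommands. A minimal sketch, where the vm helper and the demo-vm domain name are hypothetical; VIRSH defaults to a dry run that only prints the commands, and would be set to virsh on a real host:

```shell
# Hypothetical wrapper mapping lifecycle actions to virsh subcommands.
# VIRSH defaults to a dry run that just prints the command; set VIRSH=virsh
# on a real host to actually execute it.
VIRSH="${VIRSH:-echo virsh}"

vm() {
    case "$1" in
        start)    $VIRSH start "$2" ;;                   # boot a defined domain
        stop)     $VIRSH shutdown "$2" ;;                # graceful shutdown
        snapshot) $VIRSH snapshot-create-as "$2" "$3" ;; # named snapshot
        *)        echo "usage: vm start|stop|snapshot <domain> [name]" >&2 ;;
    esac
}

vm start demo-vm   # → virsh start demo-vm (dry run)
```

Dropping the dry-run default would run the real virsh commands against the local libvirtd instance.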
libvirt can manage the following types of guest VMs and containers:
The following kernel configuration is recommended for the libvirtd daemon. Check the build logs to see if any additional kernel options are requested.
[*] Networking support
      Networking Options --->
        [*] Network packet filtering framework (Netfilter) --->
              [*] Advanced netfilter configuration
              Core Netfilter Configuration --->
                <*> "conntrack" connection tracking match support
                <*> CHECKSUM target support
              IPv6: Netfilter Configuration --->
                <*> ip6tables NAT support
              <*> Ethernet Bridge tables (ebtables) support --->
                <*> ebt: nat table support
                <*> ebt: mark filter support
        [*] QoS and/or fair queueing --->
              <*> Hierarchical Token Bucket (HTB)
              <*> Stochastic Fairness Queueing (SFQ)
              <*> Ingress/classifier-action Qdisc
              <*> Netfilter mark (FW)
              <*> Universal 32bit comparisons w/ hashing (U32)
              [*] Actions
                <*> Traffic Policing
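The options above can be verified against the running kernel with a small helper. A minimal sketch, assuming the kernel exposes its configuration via /proc/config.gz (CONFIG_IKCONFIG_PROC); the check_opt name is hypothetical:

```shell
# Sketch: check that a kernel option is enabled (=y or =m) in a kernel
# config file. On a live system, first extract the config with:
#   zcat /proc/config.gz > /tmp/kconfig
check_opt() {
    # $1 = path to kernel config file, $2 = option name
    grep -qE "^$2=[ym]$" "$1" && echo "$2: enabled" || echo "$2: MISSING"
}

# Example (hypothetical path):
#   check_opt /tmp/kconfig CONFIG_NETFILTER_XT_MATCH_CONNTRACK
```

The same function can be pointed at /usr/src/linux/.config when building a new kernel.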
Some packages are aware of the libvirt USE flag.
Review the possible USE flags for libvirt:
USE flags for app-emulation/libvirt C toolkit to manipulate virtual machines
|USE flag||Description|
|apparmor||Enable AppArmor support|
|audit||Enable support for Linux audit subsystem using sys-process/audit|
|bash-completion||Enable bash-completion support|
|caps||Use Linux capabilities library to control privilege|
|dtrace||Enable dtrace support via dev-util/systemtap|
|firewalld||DBus interface to iptables/ebtables allowing for better runtime management of your firewall|
|fuse||Allow LXC to use sys-fs/fuse for mountpoints|
|glusterfs||Enable GlusterFS support via sys-cluster/glusterfs|
|iscsi||Allow using an iSCSI remote storage server as pool for disk image storage|
|iscsi-direct||Allow using libiscsi for iSCSI storage pool backend|
|libssh||Use net-libs/libssh to communicate with remote libvirtd hosts, for example: qemu+libssh://server/system|
|libssh2||Use net-libs/libssh2 to communicate with remote libvirtd hosts, for example: qemu+libssh2://server/system|
|libvirtd||Build the libvirtd daemon as well as the client utilities instead of just the client utilities|
|lvm||Allow using the Logical Volume Manager (sys-fs/lvm2) as pool for disk image storage|
|lxc||Support management of Linux Containers virtualisation (app-containers/lxc)|
|nfs||Allow using Network File System mounts as pool for disk image storage|
|nls||Add Native Language Support (using GNU gettext locale utilities)|
|numa||Use NUMA for memory segmenting via sys-process/numactl and sys-process/numad|
|openvz||Support management of OpenVZ virtualisation (openvz-sources)|
|parted||Allow using real disk partitions as pool for disk image storage, using sys-block/parted to create, resize and delete them|
|pcap||Support auto-learning IP addresses for routing|
|policykit||Enable PolicyKit (polkit) authentication support|
|qemu||Support management of QEMU virtualisation (app-emulation/qemu)|
|rbd||Enable RADOS block device support via sys-cluster/ceph|
|sasl||Add support for the Simple Authentication and Security Layer|
|selinux||!!internal use only!! Security Enhanced Linux support, this must be set by the selinux profile or breakage will occur|
|udev||Enable virtual/udev integration (device discovery, power and storage device support, etc.)|
|verify-sig||Verify upstream signatures on distfiles|
|virt-network||Enable virtual networking (NAT) support for guests. Includes all the dependencies for NATed network mode. Effectively any network setup that relies on libvirt to setup and configure network interfaces on your host. This can include bridged and routed networks ONLY if you are allowing libvirt to create and manage the underlying devices for you. In some cases this requires enabling the 'netcf' USE flag (currently unavailable).|
|virtualbox||Support management of VirtualBox virtualisation (app-emulation/virtualbox)|
|wireshark-plugins||Build the net-analyzer/wireshark plugin for the libvirt RPC protocol|
|xen||Support management of Xen virtualisation (app-emulation/xen)|
|zfs||Enable ZFS backend storage via sys-fs/zfs|
If USB redirection is going to be used, the usbredir USE flag may be needed to redirect USB devices to another machine over TCP.
libvirt comes with a number of USE flags. Check those flags and set them according to the setup. These are the recommended USE flags for libvirt:
app-emulation/libvirt pcap virt-network numa fuse macvtap vepa qemu
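These flags can be persisted in a package.use entry. A sketch where PKGUSE defaults to a temporary stand-in path for a safe dry run; on the real system it would point at a file such as /etc/portage/package.use/libvirt:

```shell
# Append the recommended USE flags to a package.use file.
# PKGUSE is a stand-in path for a dry run; on a real Gentoo system use
# a file under /etc/portage/package.use/ instead.
PKGUSE="${PKGUSE:-/tmp/package.use.libvirt}"
echo 'app-emulation/libvirt pcap virt-network numa fuse macvtap vepa qemu' >> "$PKGUSE"
```

The file name under /etc/portage/package.use/ is arbitrary; Portage reads every file in that directory.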
Additional ebuild configuration frobs are provided as the USE_EXPAND variables QEMU_USER_TARGETS and QEMU_SOFTMMU_TARGETS. See app-emulation/qemu for a list of all the available targets. There are a great many of them; most are obscure and may be ignored, and leaving these variables at their default values will disable almost everything, which is probably just fine for most users.
For each target specified, a qemu executable will be built. A softmmu target is the standard qemu use-case of emulating an entire system (like VirtualBox or VMware, but with optional support for emulating CPU hardware along with peripherals).
user targets execute user-mode code only; the (somewhat shockingly ambitious) purpose of these targets is to "magically" allow importing user-space Linux ELF binaries from a different architecture into the native system (that is, they are like multilib, but without the need for a software stack or a CPU capable of running it).
To enable QEMU_USER_TARGETS and QEMU_SOFTMMU_TARGETS, the variables can be set globally in /etc/portage/make.conf, i.e.:
QEMU_SOFTMMU_TARGETS="arm x86_64 sparc"
QEMU_USER_TARGETS="x86_64"
Or, the /etc/portage/package.use file(s) can be modified. Two equivalent syntaxes are available: traditional USE flag syntax, i.e.:
app-emulation/qemu qemu_softmmu_targets_arm qemu_softmmu_targets_x86_64 qemu_softmmu_targets_sparc
app-emulation/qemu qemu_user_targets_x86_64
Another alternative is to use the newer USE_EXPAND-specific syntax:
app-emulation/qemu QEMU_SOFTMMU_TARGETS: arm x86_64 sparc QEMU_USER_TARGETS: x86_64
After reviewing and adding any desired USE flags, emerge app-emulation/qemu:
emerge --ask app-emulation/qemu
Verify host as QEMU-capable
To verify that the host hardware has the needed virtualization support, issue the following command:
grep --color -E "vmx|svm" /proc/cpuinfo
The vmx (Intel) or svm (AMD) CPU flag should be highlighted in red and present in the output.
File /dev/kvm must exist.
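Both checks can be wrapped in a small helper. A sketch where the has_virt name is hypothetical and the cpuinfo path is a parameter, so on a real host it would be pointed at /proc/cpuinfo:

```shell
# Sketch: report whether a cpuinfo file advertises the Intel (vmx) or
# AMD (svm) hardware virtualization flag.
has_virt() {
    if grep -qwE 'vmx|svm' "$1" 2>/dev/null; then
        echo "hardware virtualization: available"
    else
        echo "hardware virtualization: NOT available"
    fi
}

# On the host being verified:
#   has_virt /proc/cpuinfo
#   [ -e /dev/kvm ] && echo "/dev/kvm present"
```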
A list of all environment variables read by the libvirt library and its toolkit commands:
- DISPLAY - VirtualBox only
- DNSMASQ_CLIENT_ID - used with dnsmasq
- DNSMASQ_IAID - used with dnsmasq
- DNSMASQ_INTERFACE - used with dnsmasq
- DNSMASQ_LEASE_EXPIRES - used with dnsmasq
- DNSMASQ_OLD_HOSTNAME - used with dnsmasq
- DNSMASQ_SERVER_DUID - used with dnsmasq
- DNSMASQ_SUPPLIED_HOSTNAME - used with dnsmasq
- LIBVIRT_ADMIN_DEFAULT_URI - administration
- LIBVIRT_AUTH_FILE - authentication
- LISTEN_PID - systemd only
- LISTEN_FDS - systemd only
- NOTIFY_SOCKET - systemd only
- VBOX_APP_HOME - VirtualBox only
- VIR_BRIDGE_NAME - bridging
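For example, the two libvirt-specific variables can be set for a session as follows; the URI and file path shown are illustrative:

```shell
# Point virt-admin at the system libvirtd admin socket for this session,
# and name a credentials file for libvirt authentication.
# (URI and path are example values, not requirements.)
export LIBVIRT_ADMIN_DEFAULT_URI='libvirtd:///system'
export LIBVIRT_AUTH_FILE="$HOME/.config/libvirt/auth.conf"
```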
Files that are read by the host-side OS, the libvirt library, the libvirtd daemon, and their sets of commands:
- /proc/vz/vestat - Only with openvz
After emerging, to run virt-manager as a normal user, ensure each user has been added to the libvirt group:
usermod -a -G libvirt <user>
Uncomment the following lines in the libvirtd configuration file (/etc/libvirt/libvirtd.conf):

auth_unix_ro = "none"
auth_unix_rw = "none"
unix_sock_group = "libvirt"
unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0770"
Be sure to have the user log out then log in again for the new group settings to be applied.
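Membership can be confirmed from the new login shell. A sketch, with larry as a placeholder user name and in_group a hypothetical helper:

```shell
# Check whether a user's supplementary group list contains a given group.
in_group() {
    id -nG "$1" | tr ' ' '\n' | grep -qx "$2" \
        && echo "$1 is in $2" || echo "$1 is NOT in $2"
}

# After re-login (larry is a placeholder):
#   in_group larry libvirt
```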
If the policykit USE flag is not enabled for the libvirt package, the libvirt group is not created; in that case another group, such as wheel, must be used for unix_sock_group.
The service needs to be started. It is also a good idea to enable it so that it comes up automatically when the system is restarted.
To start libvirtd daemon using OpenRC:
rc-service libvirtd start && rc-update add libvirtd default
To enable and start libvirtd daemon using systemd:
systemctl enable --now libvirtd
virt-admin should now be launchable as a regular user.
If permission denied errors occur when loading ISO images from user directories (somewhere beneath /home/), the /var/lib/libvirt/images/ directory can be used to store the images instead.
The following firewall chain names have been reserved by the libvirt library and libvirtd daemon.
|Reserved chain name||Description|
To firewall administrators: the nat chain name is often used by net-firewall/shorewall, net-firewall/firewalld, net-firewall/ufw, net-firewall/ipfw and possibly net-firewall/fwbuilder; it is far easier to rename the chain on the firewall side than to rename nat within libvirt/libvirtd.
For configuration of networking under libvirt, continue reading at libvirt/QEMU networking.
The libvirt installation can be checked by running virsh nodeinfo:
CPU model:           x86_64
CPU(s):              4
CPU frequency:       1600 MHz
CPU socket(s):       1
Core(s) per socket:  4
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         16360964 KiB
The libvirtd daemon can be checked over its Unix socket by running virsh sysinfo:
<sysinfo type='smbios'>
  <bios>
    <entry name='vendor'>Dell Inc.</entry>
    <entry name='version'>A22</entry>
    <entry name='date'>11/29/2018</entry>
    <entry name='release'>4.6</entry>
  </bios>
  <system>
    <entry name='manufacturer'>Dell Inc.</entry>
    <entry name='product'>OptiPlex 3010</entry>
    <entry name='version'>01</entry>
    <entry name='serial'>JRJ0SW1</entry>
    <entry name='uuid'>4c4c4544-0052-4a10-8030-cac04f535731</entry>
    <entry name='sku'>OptiPlex 3010</entry>
    <entry name='family'>Not Specified</entry>
  </system>
  <baseBoard>
    <entry name='manufacturer'>Dell Inc.</entry>
    <entry name='product'>042P49</entry>
    <entry name='version'>A00</entry>
    <entry name='serial'>/JRJ0SW1/CN701632BD05R5/</entry>
    <entry name='asset'>Not Specified</entry>
    <entry name='location'>Not Specified</entry>
  </baseBoard>
  <chassis>
    <entry name='manufacturer'>Dell Inc.</entry>
    <entry name='version'>Not Specified</entry>
    <entry name='serial'>JRJ0SW1</entry>
    <entry name='asset'>Not Specified</entry>
    <entry name='sku'>To be filled by O.E.M.</entry>
  </chassis>
  <processor>
    <entry name='socket_destination'>CPU 1</entry>
    <entry name='type'>Central Processor</entry>
    <entry name='family'>Core i5</entry>
    <entry name='manufacturer'>Intel(R) Corporation</entry>
    <entry name='signature'>Type 0, Family 6, Model 58, Stepping 9</entry>
    <entry name='version'>Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz</entry>
    <entry name='external_clock'>100 MHz</entry>
    <entry name='max_speed'>3200 MHz</entry>
    <entry name='status'>Populated, Enabled</entry>
    <entry name='serial_number'>Not Specified</entry>
    <entry name='part_number'>Fill By OEM</entry>
  </processor>
  <memory_device>
    <entry name='size'>8 GB</entry>
    <entry name='form_factor'>DIMM</entry>
    <entry name='locator'>DIMM1</entry>
    <entry name='bank_locator'>Not Specified</entry>
    <entry name='type'>DDR3</entry>
    <entry name='type_detail'>Synchronous</entry>
    <entry name='speed'>1600 MT/s</entry>
    <entry name='manufacturer'>8C26</entry>
    <entry name='serial_number'>00000000</entry>
    <entry name='part_number'>TIMETEC-UD3-1600</entry>
  </memory_device>
  <memory_device>
    <entry name='size'>8 GB</entry>
    <entry name='form_factor'>DIMM</entry>
    <entry name='locator'>DIMM2</entry>
    <entry name='bank_locator'>Not Specified</entry>
    <entry name='type'>DDR3</entry>
    <entry name='type_detail'>Synchronous</entry>
    <entry name='speed'>1600 MT/s</entry>
    <entry name='manufacturer'>8C26</entry>
    <entry name='serial_number'>00000000</entry>
    <entry name='part_number'>TIMETEC-UD3-1600</entry>
  </memory_device>
  <oemStrings>
    <entry>Dell System</entry>
    <entry>1</entry>
    <entry>3[1.0]</entry>
    <entry>12[www.dell.com]</entry>
    <entry>14</entry>
    <entry>15</entry>
  </oemStrings>
</sysinfo>
To verify the entire host setup of libvirtd, execute virt-host-validate:
QEMU: Checking for hardware virtualization : PASS
QEMU: Checking if device /dev/kvm exists : PASS
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'cpu' controller support : PASS
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuset' controller support : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'devices' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for device assignment IOMMU support : PASS
QEMU: Checking if IOMMU is enabled by kernel : PASS
QEMU: Checking for secure guest support : WARN (Unknown if this platform has Secure Guest support)
LXC: Checking for Linux >= 2.6.26 : PASS
LXC: Checking for namespace ipc : PASS
LXC: Checking for namespace mnt : PASS
LXC: Checking for namespace pid : PASS
LXC: Checking for namespace uts : PASS
LXC: Checking for namespace net : PASS
LXC: Checking for namespace user : PASS
LXC: Checking for cgroup 'cpu' controller support : PASS
LXC: Checking for cgroup 'cpuacct' controller support : PASS
LXC: Checking for cgroup 'cpuset' controller support : PASS
LXC: Checking for cgroup 'memory' controller support : PASS
LXC: Checking for cgroup 'devices' controller support : PASS
LXC: Checking for cgroup 'freezer' controller support : FAIL (Enable 'freezer' in kernel Kconfig file or mount/enable cgroup controller in your system)
LXC: Checking for cgroup 'blkio' controller support : PASS
LXC: Checking if device /sys/fs/fuse/connections exists : PASS
For invocation of the command line interface (CLI) of libvirt, see virsh invocation.
For invocation of the libvirtd daemon:
Usage: libvirtd [options]

Options:
  -h | --help            Display program help
  -v | --verbose         Verbose messages
  -d | --daemon          Run as a daemon & write PID file
  -l | --listen          Listen for TCP/IP connections
  -t | --timeout <secs>  Exit after timeout period
  -f | --config <file>   Configuration file
  -V | --version         Display version information
  -p | --pid-file <file> Change name of PID file

libvirt management daemon:

  Default paths:

    Configuration file (unless overridden by -f):
      /etc/libvirt/libvirtd.conf

    Sockets:
      /run/libvirt/libvirt-sock
      /run/libvirt/libvirt-sock-ro

    TLS:
      CA certificate:     /etc/pki/CA/cacert.pem
      Server certificate: /etc/pki/libvirt/servercert.pem
      Server private key: /etc/pki/libvirt/private/serverkey.pem

    PID file (unless overridden by -p):
      /run/libvirtd.pid
virsh cannot assist with the creation of the XML files needed by libvirt; that is what some of the virt-* tools and QEMU front-ends are for.
Removal of libvirt package (toolkit, library, and utilities) can be done by executing:
emerge --ask --depclean --verbose app-emulation/libvirt
- Virtualization — the concept and technique that permits running software in an environment separate from the underlying operating system.
- QEMU — a generic, open source hardware emulator and virtualization suite.
- QEMU/QEMU front-ends — user interface application to the QEMU/libvirt API/library.
- Libvirt/QEMU_networking — details the setup of Gentoo networking by Libvirt for use by guest containers and QEMU-based virtual machines.
- Libvirt/QEMU_guest — covers libvirt and its creation of a virtual machine (VM) for use under the soft-emulation mode QEMU hypervisor Type-2, notably using virsh command.
- Virt-manager — desktop user interface for management of virtual machines and containers through the libvirt library
- Virt-manager/QEMU_networking — setup of networking using a virt-manager GUI frontend tool
- Virt-manager/QEMU_guest — QEMU creation of a guest (VM or container)