libvirt
libvirt is a virtualization management toolkit.
The libvirt package comprises two components: a toolkit and a library. It primarily provides virtualization support for UNIX-like systems.
Overview
The app-emulation/libvirt package provides a CLI toolkit that can be used to assist in the creation and configuration of new domains. It is also used to adjust a domain's resource allocation and virtual hardware.
Features
Libvirt's main features include:
- Guest configuration is stored in the XML format at /etc/libvirt. For example, QEMU config goes under /etc/libvirt/qemu
- Snapshots of virtual machines can be created and rolled back (see the example after this list).
- Network interface creation and management, including bridge and MACVLAN creation.
- Network configuration automation and management for NAT and DHCP.
- Storage pool management for easier mounting on guests, supporting a number of filesystems and storage backends.
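As an illustration of the snapshot feature, a snapshot can be created, listed, and rolled back with virsh; mydomain and snap1 below are placeholder names:
host$
virsh snapshot-create-as mydomain snap1 --description "before changes"
host$
virsh snapshot-list mydomain
host$
virsh snapshot-revert mydomain snap1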
Supported guest types
libvirt can manage the following types of virtual machines and containers, among others: QEMU/KVM, Xen, LXC, OpenVZ, and VirtualBox (see the corresponding USE flags below).
Installation
Verify host as QEMU-capable:
To verify that the host hardware has the needed virtualization support, issue the following command:
host$
grep --color -E "vmx|svm" /proc/cpuinfo
The vmx or svm CPU flag should be highlighted in red and present in the output.
File /dev/kvm must exist.
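Its presence can be confirmed, for example, with:
host$
ls -l /dev/kvm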
Kernel
The following kernel configuration is recommended for the libvirtd daemon.
Check the build logs to see whether any additional kernel options are requested.
[*] Networking support
    Networking Options --->
        [*] Network packet filtering framework (Netfilter) --->
            [*] Advanced netfilter configuration
            Core Netfilter Configuration --->
                <*> "conntrack" connection tracking match support
                <*> CHECKSUM target support
            IPv6: Netfilter Configuration --->
                <*> IP6 tables support (required for filtering)
                <*> ip6tables NAT support
            <*> Ethernet Bridge tables (ebtables) support --->
                <*> ebt: nat table support
                <*> ebt: mark filter support
        [*] QoS and/or fair queueing --->
            <*> Hierarchical Token Bucket (HTB)
            <*> Stochastic Fairness Queueing (SFQ)
            <*> Ingress/classifier-action Qdisc
            <*> Netfilter mark (FW)
            <*> Universal 32bit comparisons w/ hashing (U32)
            [*] Actions
                <*> Traffic Policing
                <*> Checksum Updating
The following kernel options are required to pass some of the checks performed by the virt-host-validate tool; they are also required for the corresponding functionality.
blkio
General setup --->
    [*] Control Group support --->
        --- Control Group support
        [*] IO controller
memory
General setup --->
    [*] Control Group support --->
        --- Control Group support
        [*] Memory controller
tun
(used in the default libvirt/virt-manager networking setup)
Device Drivers --->
    [*] Network device support --->
        [*] Network core driver support
        <*> Universal TUN/TAP device driver support
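A quick way to check whether a few of these options are enabled in the running kernel (this assumes CONFIG_IKCONFIG_PROC is set, so that /proc/config.gz exists):
host$
zgrep -E 'CONFIG_BLK_CGROUP=|CONFIG_MEMCG=|CONFIG_TUN=' /proc/config.gz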
USE flags
Some packages are aware of the libvirt
USE flag.
Review the possible USE flags for libvirt:
USE flags for app-emulation/libvirt (C toolkit to manipulate virtual machines)
USE flag | Description |
---|---|
+caps | Use Linux capabilities library to control privilege |
+libvirtd | Builds the libvirtd daemon as well as the client utilities instead of just the client utilities |
+qemu | Support management of QEMU virtualisation (app-emulation/qemu) |
+udev | Enable virtual/udev integration (device discovery, power and storage device support, etc) |
+virt-network | Enable virtual networking (NAT) support for guests. Includes all the dependencies for NATed network mode. Effectively any network setup that relies on libvirt to setup and configure network interfaces on your host. This can include bridged and routed networks ONLY if you are allowing libvirt to create and manage the underlying devices for you. In some cases this requires enabling the 'netcf' USE flag (currently unavailable). |
apparmor | Enable support for the AppArmor application security system |
audit | Enable support for Linux audit subsystem using sys-process/audit |
bash-completion | Enable bash-completion support |
dtrace | Enable dtrace support via dev-debug/systemtap |
firewalld | DBus interface to iptables/ebtables allowing for better runtime management of your firewall |
fuse | Allow LXC to use sys-fs/fuse for mountpoints |
glusterfs | Enable GlusterFS support via sys-cluster/glusterfs |
iscsi | Allow using an iSCSI remote storage server as pool for disk image storage |
iscsi-direct | Allow using libiscsi for iSCSI storage pool backend |
libssh | Use net-libs/libssh to communicate with remote libvirtd hosts, for example: qemu+libssh://server/system |
libssh2 | Use net-libs/libssh2 to communicate with remote libvirtd hosts, for example: qemu+libssh2://server/system |
lvm | Allow using the Logical Volume Manager (sys-fs/lvm2) as pool for disk image storage |
lxc | Support management of Linux Containers virtualisation (app-containers/lxc) |
nbd | Allow using sys-block/nbdkit to access network disks |
nfs | Allow using Network File System mounts as pool for disk image storage |
nls | Add Native Language Support (using gettext - GNU locale utilities) |
numa | Use NUMA for memory segmenting via sys-process/numactl and sys-process/numad |
openvz | Support management of OpenVZ virtualisation (openvz-sources) |
parted | Allow using real disk partitions as pool for disk image storage, using sys-block/parted to create, resize and delete them |
pcap | Support auto learning IP addresses for routing |
policykit | Enable PolicyKit (polkit) authentication support |
rbd | Enable rados block device support via sys-cluster/ceph |
sasl | Add support for the Simple Authentication and Security Layer |
selinux | !!internal use only!! Security Enhanced Linux support, this must be set by the selinux profile or breakage will occur |
test | Enable dependencies and/or preparations necessary to run tests (usually controlled by FEATURES=test but can be toggled independently) |
verify-sig | Verify upstream signatures on distfiles |
virtiofsd | Drag in virtiofsd dependency app-emulation/virtiofsd |
virtualbox | Support management of VirtualBox virtualisation (app-emulation/virtualbox) |
wireshark-plugins | Build the net-analyzer/wireshark plugin for the Libvirt RPC protocol |
xen | Support management of Xen virtualisation (app-emulation/xen) |
zfs | Enable ZFS backend storage sys-fs/zfs |
If libvirt is going to be used to redirect USB devices to another machine over TCP, the usbredir USE flag may also be needed.
libvirt comes with a number of USE flags; please check those flags and set them according to the setup. These are recommended USE flags for libvirt:
/etc/portage/package.use/libvirt
app-emulation/libvirt pcap virt-network numa fuse macvtap vepa qemu
USE_EXPAND
See /etc/portage/make.conf#USE_EXPAND for more detail on USE_EXPAND.
Emerge
After reviewing and adding any desired USE flags, emerge app-emulation/libvirt and app-emulation/qemu:
root #
emerge --ask app-emulation/libvirt app-emulation/qemu
Additional software
Custom UEFI
Custom UEFI firmware is provided by the app-emulation/virt-firmware package.
Configuration
Environment variables
See specific CLI commands related to Libvirt for its available environment variable settings: virsh, libvirtd.
Files
When a domain starts, a client using the Libvirt API library (e.g., virt-manager, virsh) checks for that domain's XML file in the following paths:
- System mode: /etc/libvirt/qemu/
- User mode: $HOME/.config/libvirt/qemu/
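Rather than editing these XML files by hand, a domain's XML can be viewed and modified through virsh; mydomain below is a placeholder name:
host$
virsh dumpxml mydomain
host$
virsh edit mydomain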
Other directory paths used by the Libvirt library are:
- /etc/libvirt/hooks/
- /etc/libvirt/nwfilter/
- /etc/libvirt/secrets/
- /etc/libvirt/storage/
- /proc/
- /proc/sys/ipv4/
- /proc/sys/ipv6/conf/all/
- /proc/sys/ipv6/conf/%s/%s
- /sys/class/fc_host/
- /sys/devices/system/%s/cpu/
- /sys/devices/system/node/node0/
- /sys/fs/resctrl/info/%s/
- /sys/kernel/mm/transparent_hugepage/
- /sys/fs/resctrl/info/MB/
- /var/lib/libvirt/
For specific file accesses, see the Libvirt-related CLI commands (e.g., libvirtd, virsh, virt-manager).
User permissions
To have a user join the UNIX libvirt group, first check that the group is defined:
host$
getent group libvirt
libvirt:x:1001:
If the group exists, as in the output above, then run:
host$
sudo usermod -aG libvirt username
If the libvirt group is missing, then policykit may not have been installed.
If the policykit USE flag is not enabled for the libvirt package, the libvirt group will not be created when app-emulation/libvirt is emerged. If this is the case, another group, such as wheel, must be used for unix_sock_group.
If not using policykit and still wanting to use libvirt in session (user) mode, then manually add the user to the wheel group (replace username with the actual user name):
host$
sudo usermod -aG wheel username
It is difficult to use the libvirt group without policykit; use the wheel group instead.
Uncomment the following lines in the libvirtd configuration file:
/etc/libvirt/libvirtd.conf
auth_unix_ro = "none"
auth_unix_rw = "none"
unix_sock_group = "libvirt"
unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0770"
Replace libvirt in unix_sock_group with wheel if policykit is not installed.
Be sure to have the user log out and log back in for the new group membership to take effect.
virt-admin should then be launchable as a regular user, after the services have been started.
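For example, socket access can be verified by listing system-mode domains as the unprivileged user:
host$
virsh --connect qemu:///system list --all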
If permission denied issues are experienced when loading ISO images from user directories (somewhere beneath /home/), then the /var/lib/libvirt/images/ directory can be used to store the images instead.
Service
OpenRC
To start the libvirtd daemon using OpenRC and add it to the default runlevel:
host-root#
rc-service libvirtd start && rc-update add libvirtd default
systemd
Historically, all libvirt functionality was provided by the monolithic libvirtd daemon. Upstream has developed a new modular architecture for libvirt where each driver is run in its own daemon. Therefore, recent versions of libvirt (at least >=app-emulation/libvirt-9.3.0) need the service units for the hypervisor drivers enabled. For QEMU this is virtqemud.service, for Xen it is virtxend.service and for LXC virtlxcd.service and their corresponding sockets.
Enable the service units and their sockets, depending on the functionality (qemu, xen, lxc) you need:
host-root#
systemctl enable --now virtnetworkd.service
Created symlink '/etc/systemd/system/multi-user.target.wants/virtnetworkd.service' → '/usr/lib/systemd/system/virtnetworkd.service'. Created symlink '/etc/systemd/system/sockets.target.wants/virtnetworkd.socket' → '/usr/lib/systemd/system/virtnetworkd.socket'. Created symlink '/etc/systemd/system/sockets.target.wants/virtnetworkd-ro.socket' → '/usr/lib/systemd/system/virtnetworkd-ro.socket'. Created symlink '/etc/systemd/system/sockets.target.wants/virtnetworkd-admin.socket' → '/usr/lib/systemd/system/virtnetworkd-admin.socket'.
host-root#
systemctl enable --now virtqemud.service
Created symlink /etc/systemd/system/multi-user.target.wants/virtqemud.service → /usr/lib/systemd/system/virtqemud.service. Created symlink /etc/systemd/system/sockets.target.wants/virtqemud.socket → /usr/lib/systemd/system/virtqemud.socket. Created symlink /etc/systemd/system/sockets.target.wants/virtqemud-ro.socket → /usr/lib/systemd/system/virtqemud-ro.socket. Created symlink /etc/systemd/system/sockets.target.wants/virtqemud-admin.socket → /usr/lib/systemd/system/virtqemud-admin.socket. Created symlink /etc/systemd/system/sockets.target.wants/virtlogd.socket → /usr/lib/systemd/system/virtlogd.socket. Created symlink /etc/systemd/system/sockets.target.wants/virtlockd.socket → /usr/lib/systemd/system/virtlockd.socket. Created symlink /etc/systemd/system/sockets.target.wants/virtlogd-admin.socket → /usr/lib/systemd/system/virtlogd-admin.socket. Created symlink /etc/systemd/system/sockets.target.wants/virtlockd-admin.socket → /usr/lib/systemd/system/virtlockd-admin.socket.
host-root#
systemctl enable --now virtstoraged.socket
Created symlink /etc/systemd/system/sockets.target.wants/virtstoraged.socket → /usr/lib/systemd/system/virtstoraged.socket. Created symlink /etc/systemd/system/sockets.target.wants/virtstoraged-ro.socket → /usr/lib/systemd/system/virtstoraged-ro.socket. Created symlink /etc/systemd/system/sockets.target.wants/virtstoraged-admin.socket → /usr/lib/systemd/system/virtstoraged-admin.socket.
host-root#
systemctl enable --now virtlogd.service
Created symlink '/etc/systemd/system/sockets.target.wants/virtlogd.socket' → '/usr/lib/systemd/system/virtlogd.socket'. Created symlink '/etc/systemd/system/sockets.target.wants/virtlogd-admin.socket' → '/usr/lib/systemd/system/virtlogd-admin.socket'.
All the service units use a timeout that causes them to shut down after 2 minutes if no VM is running. They are automatically reactivated when a socket is accessed, e.g. when virt-manager is started or a virsh command is run.
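Whether the sockets are active can be checked, for example, with:
host$
systemctl list-units --type=socket 'virt*'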
Firewall
The following firewall chain names are reserved by the libvirt library and the libvirtd daemon.
Reserved chain name | Description |
---|---|
nat | NAT |
LIBVIRT_INP | Firewall input |
LIBVIRT_FWI | Firewall forward (incoming, to guests) |
LIBVIRT_FWO | Firewall forward (outgoing, from guests) |
LIBVIRT_FWX | Firewall forward (between guests) |
LIBVIRT_OUT | Firewall output |
LIBVIRT_PRT | Firewall postrouting |
To firewall administrators: the nat chain name is often used by net-firewall/shorewall, net-firewall/firewalld, net-firewall/ufw, net-firewall/ipfw and possibly net-firewall/fwbuilder; it is far easier to rename it on the firewall side than it is to rename nat within libvirt/libvirtd.
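On a running host, the chains created by libvirt can be inspected, for example with iptables (or the nftables equivalent):
host-root#
iptables -S | grep LIBVIRT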
Networking
For configuration of networking under libvirt, continue reading at QEMU networking in Libvirt.
Autostart
The autostart feature enables a domain to power up automatically once the host or the X session is ready.
Autostarting a domain can be done in session mode (X) or in system mode (host).
System-mode autostart
Autostart for a system-mode domain is natively supported by libvirtd.
To enable autostart of a domain at power-up/reset, either:
- run virsh --connect qemu:///system autostart <vm-name>, which creates a symbolic link at /etc/libvirt/qemu/autostart/<vm-name>.xml,
- from the virt-manager "Virtual Machine Manager" window, select the domain in the main panel and go to Edit -> Virtual Machine Details,
- or from the virt-manager "<domain> on QEMU/KVM" window, go to View -> Details, then select the Boot Options item in the left navigation panel.
In virt-manager, toggle the checkbox for "Start virtual machine at boot up" under Autostart in the main panel.
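For example, to mark a system-mode domain for autostart from the command line and confirm the setting (mydomain is a placeholder name):
host$
virsh --connect qemu:///system autostart mydomain
host$
virsh --connect qemu:///system dominfo mydomain | grep -i autostart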
Session-mode autostart
The hypervisor controller (libvirtd) does not directly support autostarting session-mode (--connect=qemu:///session) domains.
Because session mode runs as an unprivileged user, its libvirt instance is only available after that user logs in; libvirtd itself does not handle session mode.
The options for auto-starting a session-mode domain are:
- XDG AutoStart mechanism
- Login script
- X session script
XDG AutoStart mechanism
Create a .desktop file in the $HOME/.config/autostart/ directory.
host$
vi $HOME/.config/autostart/libvirt-mydomain.desktop
Replace mydomain with the actual domain name.
Fill the .desktop file with:
$HOME/.config/autostart/libvirt-mydomain.desktop
"Typical .desktop
file"[Desktop Entry]
Name=My Domain VM
Type=Application
Exec=virsh --connect qemu:///session start mydomain
X-GNOME-Autostart-enabled=true
Replace mydomain with the actual domain name.
KDE Plasma will autostart ANY file in the $HOME/.config/autostart/ directory, so no additional configuration is needed for KDE Plasma.
For advanced control of starting session-mode domains under KDE Plasma, add one of the following lines to the .desktop file:
$HOME/.config/autostart/libvirt-mydomain.desktop
.desktop file content:
X-KDE-autostart-phase=0 # Pre-KDE startup
X-KDE-autostart-phase=1 # After KDE startup
X-KDE-autostart-phase=2 # After user apps
UNIX script
To autostart via a UNIX login script, use one of the following login scripts:
- $HOME/.bash_profile
- $HOME/.profile.d/libvirt-autostart.sh
And insert the following command line into that login script:
virsh --connect qemu:///session start <vm-name>
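A minimal sketch of such a login-script entry, which also avoids starting the domain a second time if it is already running (mydomain is a placeholder name):
# Start mydomain in session mode only if it is not already running.
if ! virsh --connect qemu:///session list --name | grep -qx mydomain; then
    virsh --connect qemu:///session start mydomain
fi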
Usage
A list of domains (configured VMs) can be obtained by running:
host$
virsh list
 Id   Name      State
------------------------
 1    gentoo    running
 2    default   running
If no VM is running at the moment, virsh list will output an empty list; use virsh list --all to see all VMs, whether created, enabled, turned off, or inactive.
Details of the host node (CPUs) can be checked by running:
host$
virsh nodeinfo
CPU model:           x86_64
CPU(s):              4
CPU frequency:       1600 MHz
CPU socket(s):       1
Core(s) per socket:  4
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         16360964 KiB
The libvirtd daemon can be queried over its Unix socket by running:
host$
virsh sysinfo
<sysinfo type='smbios'> <bios> <entry name='vendor'>Dell Inc.</entry> <entry name='version'>A22</entry> <entry name='date'>11/29/2018</entry> <entry name='release'>4.6</entry> </bios> <system> <entry name='manufacturer'>Dell Inc.</entry> <entry name='product'>OptiPlex 3010</entry> <entry name='version'>01</entry> <entry name='serial'>JRJ0SW1</entry> <entry name='uuid'>4c4c4544-0052-4a10-8030-cac04f535731</entry> <entry name='sku'>OptiPlex 3010</entry> <entry name='family'>Not Specified</entry> </system> <baseBoard> <entry name='manufacturer'>Dell Inc.</entry> <entry name='product'>042P49</entry> <entry name='version'>A00</entry> <entry name='serial'>/JRJ0SW1/CN701632BD05R5/</entry> <entry name='asset'>Not Specified</entry> <entry name='location'>Not Specified</entry> </baseBoard> <chassis> <entry name='manufacturer'>Dell Inc.</entry> <entry name='version'>Not Specified</entry> <entry name='serial'>JRJ0SW1</entry> <entry name='asset'>Not Specified</entry> <entry name='sku'>To be filled by O.E.M.</entry> </chassis> <processor> <entry name='socket_destination'>CPU 1</entry> <entry name='type'>Central Processor</entry> <entry name='family'>Core i5</entry> <entry name='manufacturer'>Intel(R) Corporation</entry> <entry name='signature'>Type 0, Family 6, Model 58, Stepping 9</entry> <entry name='version'>Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz</entry> <entry name='external_clock'>100 MHz</entry> <entry name='max_speed'>3200 MHz</entry> <entry name='status'>Populated, Enabled</entry> <entry name='serial_number'>Not Specified</entry> <entry name='part_number'>Fill By OEM</entry> </processor> <memory_device> <entry name='size'>8 GB</entry> <entry name='form_factor'>DIMM</entry> <entry name='locator'>DIMM1</entry> <entry name='bank_locator'>Not Specified</entry> <entry name='type'>DDR3</entry> <entry name='type_detail'>Synchronous</entry> <entry name='speed'>1600 MT/s</entry> <entry name='manufacturer'>8C26</entry> <entry name='serial_number'>00000000</entry> <entry name='part_number'>TIMETEC-UD3-1600</entry> </memory_device> <memory_device> <entry name='size'>8 GB</entry> <entry name='form_factor'>DIMM</entry> <entry name='locator'>DIMM2</entry> <entry name='bank_locator'>Not Specified</entry> <entry name='type'>DDR3</entry> <entry name='type_detail'>Synchronous</entry> <entry name='speed'>1600 MT/s</entry> <entry name='manufacturer'>8C26</entry> <entry name='serial_number'>00000000</entry> <entry name='part_number'>TIMETEC-UD3-1600</entry> </memory_device> <oemStrings> <entry>Dell System</entry> <entry>1[0585]</entry> <entry>3[1.0] </entry> <entry>12[www.dell.com] </entry> <entry>14[1]</entry> <entry>15[11]</entry> </oemStrings> </sysinfo>
Host verification
To verify the entire host setup for libvirt, execute:
host-root#
virt-host-validate
QEMU: Checking for hardware virtualization : PASS QEMU: Checking if device /dev/kvm exists : PASS QEMU: Checking if device /dev/kvm is accessible : PASS QEMU: Checking if device /dev/vhost-net exists : PASS QEMU: Checking if device /dev/net/tun exists : PASS QEMU: Checking for cgroup 'cpu' controller support : PASS QEMU: Checking for cgroup 'cpuacct' controller support : PASS QEMU: Checking for cgroup 'cpuset' controller support : PASS QEMU: Checking for cgroup 'memory' controller support : PASS QEMU: Checking for cgroup 'devices' controller support : PASS QEMU: Checking for cgroup 'blkio' controller support : PASS QEMU: Checking for device assignment IOMMU support : PASS QEMU: Checking if IOMMU is enabled by kernel : PASS QEMU: Checking for secure guest support : WARN (Unknown if this platform has Secure Guest support) LXC: Checking for Linux >= 2.6.26 : PASS LXC: Checking for namespace ipc : PASS LXC: Checking for namespace mnt : PASS LXC: Checking for namespace pid : PASS LXC: Checking for namespace uts : PASS LXC: Checking for namespace net : PASS LXC: Checking for namespace user : PASS LXC: Checking for cgroup 'cpu' controller support : PASS LXC: Checking for cgroup 'cpuacct' controller support : PASS LXC: Checking for cgroup 'cpuset' controller support : PASS LXC: Checking for cgroup 'memory' controller support : PASS LXC: Checking for cgroup 'devices' controller support : PASS LXC: Checking for cgroup 'freezer' controller support : FAIL (Enable 'freezer' in kernel Kconfig file or mount/enable cgroup controller in your system) LXC: Checking for cgroup 'blkio' controller support : PASS LXC: Checking if device /sys/fs/fuse/connections exists : PASS
Run virt-host-validate as root, otherwise the 'devices' cgroup checks will fail.
Connect types - Default
The connect type --connect <URI> CLI option tells the Libvirt clients how to connect (transport), and optionally where.
The connect type option lets a Libvirt client (virt-manager, virsh) connect to and manage a hypervisor daemon (e.g. libvirtd, virtqemud).
If the connect type option is not used, the default URI is supplied by the Libvirt client application and is always a local hypervisor URI:
Libvirt client | Default URI | Socket Path | Requires Root? |
---|---|---|---|
virsh as root | qemu:///system | /run/libvirt/libvirt-sock | ✅ |
virsh as regular user | qemu:///session | $XDG_RUNTIME_DIR/libvirt/libvirt-sock | ❌ |
virt-manager | Both system + session | | |
Connect types - Required
A URI is used to denote the specific connect type for a libvirt client to connect to a hypervisor, i.e. a libvirt daemon (e.g., virtqemud).
The generalized syntax of the Libvirt connect URI is:
transport[+protocol]:///target-or-path
transport[+protocol]://[user@]hostname[:port]/target-or-path[?extra_parameters]
Required URI components are transport and target-or-path; hostname is required only for a remote hypervisor daemon.
Transport
The qemu hypervisor type is the most common choice for the mandatory transport syntax component.
Set transport to one of the following hypervisor types: qemu, xen, lxc, vz, uml, bhyve, esx, vbox, test, hv.
target-or-path
There are two types of pathways for the target-or-path syntax component of connect type URI:
- predefined target name
- absolute path specification
Depending on the transport/protocol combination, one of the above forms is used; an example of each is given below:
URI Example | Path / Target | Meaning |
---|---|---|
qemu:///session | session | Per-user session daemon |
qemu:///system | system | System-wide daemon |
qemu+ssh://user@host/system | system | Remote system daemon via SSH protocol |
qemu+unix:///path/to/socket | /path/to/socket | UNIX domain socket path, in absolute path specification. |
test:///default | default | Named mock environment, for testing only |
esx://user@host/?transport=https | ?transport=https | ESXi uses HTTP query string for configuration, in absolute path specification. |
lxc:/// | (empty) | Default system daemon for Linux LXC container. |
Predefined target name
The target name selects what to communicate with within that libvirt daemon.
Which target is used for the target part of target-or-path in the connect type syntax typically depends on the user:
- session, if regular UNIX user. Commonly used in X user-logged-in sessions.
- system, if root user. Useful for persistence at bootup.
Path
Absolute paths are the standard and expected form when specifying Unix socket locations in Libvirt URIs.
A relative path specification such as qemu+unix://./johndoe/.cache/libvirt is unusual and typically unsupported or error-prone in Libvirt.
Connect types - Optional
Optional components to the connect type URI are detailed below.
Hostname
The hostname syntax component is optional.
The hypervisor daemon (virtqemud, formerly libvirtd) can be reached using a UNIX domain socket or an INET (network) socket.
No Hostname
When no hostname is given, only a UNIX domain socket is used.
A UNIX domain socket is the most common way to connect to the hypervisor daemon on the local host.
This triple-slash form is analogous to a file:/// URI: it represents an empty host name and a non-network socket.
The UNIX domain socket is created at /run/libvirt/libvirt-sock (with /run/libvirt/libvirt-admin-sock for the admin interface) by libvirtd, or by one of the modular libvirt daemons such as virtqemud or virtxend.
Hostname given
If a hostname is specified, then an INET (network) socket is used instead of a UNIX domain socket.
A DNS lookup is used to find the IP address of the host name, which can be local or remote.
Protocol
- qemu+ssh:// - a secured connection to QEMU over SSH/TCP protocol
- xen+ssh:// - a secured connection to Xen Dom0 over SSH/TCP protocol
virt-manager can connect to multiple local hosts and remote hosts using different protocols.
Connect types syntax
The breakdown of Connect Type URI format is:
- +protocol - (Optional) connection transport (+tcp, +ssh, +tls, +libssh2, +unix). Default is unix.
- user@ - (Optional) SSH username. Default is the $USER environment value.
- :port - (Optional) The port number to use other than the default 16509/tcp.
- /path - Path to the libvirt UNIX socket on the remote machine (/session or /system (default)).
Connect Type URI Usage
On the command line, pass the connect type for a local host:
host$
virsh --connect=qemu:///session
or
host-root#
virt-manager -c qemu:///system
Using an environment variable, pass the connect type for a local host:
host$
export LIBVIRT_DEFAULT_URI=qemu:///session; virsh
To connect to the hypervisor, choose one of the valid hypervisor codes:
Hypervisor code name | Description |
---|---|
qemu | local host UNIX domain socket to QEMU |
xen | local host Xen Domain 0 (Dom0) |
lxc | Linux Containers (LXC) |
vz | Virtuozzo/OpenVZ |
uml | User Mode Linux |
bhyve | FreeBSD bhyve |
esx | VMware ESX/ESXi |
vbox | VirtualBox |
test | mock driver, for testing |
hv | Microsoft Hyper-V |
When using system mode (--connect=qemu:///system), the files accessed are:
- /run/libvirt/libvirt-sock
- /run/libvirt/libvirt-admin-sock
- /run/libvirt/libvirt-sock-ro
When using session mode (--connect=qemu:///session), the files accessed are:
- /run/user/1000/libvirt/libvirt-sock
- /run/user/1000/libvirt/libvirt-admin-sock
More URI details at [libvirt.org URI].
Invocation
For invocation of the command line interface (CLI) of libvirt, see virsh invocation.
For invocation of the libvirtd daemon:
user $
libvirtd --help
Usage: libvirtd [options] Options: -h | --help Display program help -v | --verbose Verbose messages -d | --daemon Run as a daemon & write PID file -l | --listen Listen for TCP/IP connections -t | --timeout <secs> Exit after timeout period -f | --config <file> Configuration file -V | --version Display version information -p | --pid-file <file> Change name of PID file libvirt management daemon: Default paths: Configuration file (unless overridden by -f): /etc/libvirt/libvirtd.conf Sockets: /run/libvirt/libvirt-sock /run/libvirt/libvirt-sock-ro TLS: CA certificate: /etc/pki/CA/cacert.pem Server certificate: /etc/pki/libvirt/servercert.pem Server private key: /etc/pki/libvirt/private/serverkey.pem PID file (unless overridden by -p): /run/libvirtd.pid
virsh cannot assist with the creation of XML files needed by libvirt. This is what some virt-* tools and QEMU front-ends are for.
Removal
Removal of app-emulation/libvirt package (toolkit, library, and utilities) can be done by executing:
root #
emerge --ask --depclean --verbose app-emulation/libvirt
Troubleshooting
Messages mentioning ...or mount/enable cgroup controller in your system
Some of those messages are addressed in the previous section about kernel configuration.
If the above doesn't fix the problem, follow the Control groups section on the LXC page to activate the correct kernel options.
WARN (Unknown if this platform has Secure Guest support)
This message appears on systems other than IBM s390 or AMD and seems to be of little relevance [1] [2] [3] [4].
Docker doesn't work
Can Libvirt Work with Docker?
While Libvirt itself doesn't manage Docker containers, there are workarounds to make them work together:
- Running Docker inside a VM managed by Libvirt:
- You can create a VM using Libvirt/KVM and install Docker inside it.
- Useful for isolating Docker workloads in a dedicated VM.
- Using Libvirt-lxc (Limited Support):
- Libvirt has an LXC (Linux Containers) driver, which is somewhat similar to Docker.
- However, libvirt-lxc is not as feature-rich as Docker.
- Using Podman (A Docker Alternative) with Libvirt:
- Podman is a rootless container tool compatible with Docker.
- Unlike Docker, Podman does not require a daemon, making it easier to run inside Libvirt-managed VMs.
Using Libvirt with Android
Workarounds to Use Libvirt with Android.
While Libvirt cannot directly manage Google AVF and its pKVMs, you can:
- Use Libvirt to manage Android x86/x64 VMs on Linux (via QEMU/KVM).
- Run Android inside a Libvirt-managed VM (e.g., using android-x86 ISO on QEMU/KVM).
- Use Libvirt on Android devices running full Linux distributions (e.g., via Termux or a rooted environment).
See also
- Virtualization — the concept and technique that permits running software in an environment separate from a computer operating system.
- QEMU — a generic, open-source hardware emulator and virtualization suite.
- QEMU/Front-ends — facilitate VM management and use
- Libvirt/QEMU_networking — details the setup of Gentoo networking by Libvirt for use by guest containers and QEMU-based virtual machines.
- Libvirt/QEMU_guest — creation of a guest domain (virtual machine, VM), running inside a QEMU hypervisor, using tools found in libvirt package.
- Virt-manager — lightweight GUI application designed for managing virtual machines and containers via the libvirt API.
- Virt-manager/QEMU_guest — creation of a guest virtual machine (VM) running inside a QEMU hypervisor using just the virt-manager GUI tool.
- QEMU/Linux guest — describes the setup of a Gentoo Linux guest in QEMU using Gentoo bootable media.
- Virsh — a CLI-based virtualization management toolkit.
- Libvirt/libvirtd — a daemon for Libvirt management of virtual machines.
- Virt-install — a CLI-based virtualization machine creator utility.
External resources
- Daniel P. Berrangé libvirt announcements
- Red Hat Virtualization Network Configuration
- Create libvirt XML file for a virtual machine (VM) of Gentoo Install CD
References
- ↑ Libvirt Protected Virtualization on s390
- ↑ libvir-list mailing list PATCH 3/6 qemu: check if AMD secure guest support is enabled
- ↑ libvir-list mailing list PATCH 4/6 tools: secure guest check on s390 in virt-host-validate
- ↑ libvir-list mailing list PATCH 5/6 tools: secure guest check for AMD in virt-host-validate