LVM (Logical Volume Manager) is software that abstracts physical devices as PVs (Physical Volumes) and groups them into storage pools called VGs (Volume Groups). A physical volume can be a partition, a whole SATA hard drive grouped as JBOD (Just a Bunch Of Disks), a RAID system, iSCSI, Fibre Channel, eSATA, etc.
- 1 Installation
- 2 Configuration
- 3 Usage
- 3.1 PV (Physical Volume)
- 3.2 VG (Volume Group)
- 3.3 LV (Logical Volume)
- 3.4 Thin metadata, pool, and LV
- 4 Examples
- 4.1 Preparation
- 4.2 LVM2 Linear volumes
- 4.3 /etc/fstab
- 4.4 LVM2 Snapshots and LVM2 Thin Snapshots
- 4.5 LVM2 Mirrors
- 4.6 LVM2 RAID 0/Stripeset
- 4.7 LVM2 RAID 1
- 4.8 LVM2 Stripeset with Parity (RAID4 and RAID5)
- 4.9 LVM2 RAID 6
- 4.10 LVM RAID10
- 5 Troubleshooting
- 6 External resources
Device-mapper support needs to be activated in the kernel, and sys-fs/lvm2 can be built with the following USE flags:
|USE flag||Default||Description|
|clvm||No||Allow users to build clustered lvm2|
|cman||No||Cman support for clustered lvm|
|lvm1||Yes||Allow users to build lvm2 with lvm1 support|
|readline||Yes||Enables support for libreadline, a GNU line-editing library that almost everyone wants|
|selinux||No||!!internal use only!! Security Enhanced Linux support, this must be set by the selinux profile or breakage will occur|
|static||No||!!do not set this during bootstrap!! Causes binaries to be statically linked instead of dynamically|
|static-libs||No||Build static libraries|
|thin||Yes||Support for thin volumes|
|udev||Yes||Enable sys-fs/udev integration (device discovery, power and storage device support, etc)|
The configuration file is /etc/lvm/lvm.conf
To start LVM manually and to start it automatically at boot time (OpenRC):
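The usual OpenRC commands (service name assumed to be lvm):

```bash
# Start the LVM service immediately
/etc/init.d/lvm start

# Add LVM to the boot runlevel so it starts automatically
rc-update add lvm boot
```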
To start LVM manually and to start it automatically at boot time (systemd):
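A sketch using the lvm2 monitoring unit (unit name assumed):

```bash
# Start the LVM monitoring service immediately
systemctl start lvm2-monitor.service

# Enable it at boot
systemctl enable lvm2-monitor.service
```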
LVM on root
Most bootloaders cannot boot from LVM directly - neither GRUB Legacy nor LILO can. GRUB2 can boot from an LVM linear LV, a mirrored LV, and possibly some kinds of RAID LVs. No bootloader currently supports thin LVs.
For that reason, it is recommended to use a non-LVM /boot partition and mount the LVM root from an initramfs. Genkernel, genkernel-next, and dracut can generate an initramfs suitable for most LV types. RAID10 LVs require at least LVM 2.02.98. Thin LVs are not supported by genkernel (it does not include the thin-provisioning-tools binaries), and support is broken under genkernel-next (it includes them, but they are dynamically linked, the required shared libraries are not included in the initramfs, and the thin-provisioning-tools build system, as of version 0.2.1, does not support building static binaries). Dracut only adds thin support if the host the initramfs is generated on has a thin root.
LVM organizes storage in three different levels as follows:
- hard drives, partitions, RAID systems or other means of storage are initialized as PV (Physical Volume)
- Physical Volumes (PV) are grouped together in Volume Groups (VG)
- Logical Volumes (LV) are managed in Volume Groups (VG)
PV (Physical Volume)
Physical Volumes are the actual hardware or storage systems LVM is built upon.
The partition type for LVM is 8e (Linux LVM):
In fdisk, you can create MBR partitions using the n key and then change the partition type with the t key to 8e. We will end up with one primary partition /dev/sdX1 of partition type 8e (Linux LVM).
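For example, assuming the disk is /dev/sdX (destructive; double-check the device name):

```bash
fdisk /dev/sdX
# Inside fdisk:
#   n - create a new primary partition
#   t - change its type; enter 8e (Linux LVM)
#   w - write the partition table and exit
```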
The following command creates a Physical Volume (PV) on the first primary partition of /dev/sdX and of /dev/sdY:
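For example:

```bash
pvcreate /dev/sdX1 /dev/sdY1
```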
The following command lists all active Physical Volumes (PV) in the system:
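For example:

```bash
pvdisplay
```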
You can scan for PV in the system, to troubleshoot not properly initialized or lost storage devices:
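For example:

```bash
pvscan
```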
LVM automatically distributes the data onto all available PVs unless told otherwise. To make sure there is no data left on our device before we remove it, use the following command:
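For example (partition name as above):

```bash
# Move all data off /dev/sdX1 onto the remaining PVs (verbose)
pvmove -v /dev/sdX1
```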
This might take a long time and once finished, there should be no data left on /dev/sdX1. We first remove the PV from our Volume Group (VG) and then the actual PV:
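A sketch, assuming the VG is named vg0:

```bash
# Remove the now-empty PV from the VG, then wipe the PV label
vgreduce vg0 /dev/sdX1
pvremove /dev/sdX1
```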
VG (Volume Group)
Volume Groups (VG) consist of one or more Physical Volumes (PV) and show up as /dev/<VG name>/ in the device file system.
The following command creates a Volume Group (VG) named vg0 on two previously initialized Physical Volumes (PV) named /dev/sdX1 and /dev/sdY1:
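For example:

```bash
vgcreate vg0 /dev/sdX1 /dev/sdY1
```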
The following command lists all active Volume Groups (VG) in the system:
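For example:

```bash
vgdisplay
```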
You can scan for VG in the system, to troubleshoot not properly created or lost VGs:
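For example:

```bash
vgscan
```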
With the following command, we extend the existing Volume Group (VG) vg0 onto the Physical Volume (PV) /dev/sdZ1:
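For example:

```bash
vgextend vg0 /dev/sdZ1
```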
Before we can remove a Physical Volume (PV), we need to make sure that LVM has no data left on the device. To move all data off that PV and distribute it onto the other available PVs, use the following command:
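For example (moving data off /dev/sdZ1):

```bash
pvmove -v /dev/sdZ1
```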
This might take a while and once finished, we can remove the PV from our VG:
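For example:

```bash
vgreduce vg0 /dev/sdZ1
```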
Before we can remove a Volume Group (VG), we have to remove all existing Snapshots, all Logical Volumes (LV) and all Physical Volumes (PV) but one. The following command removes the VG named vg0:
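For example:

```bash
vgremove vg0
```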
LV (Logical Volume)
Logical Volumes (LV) are created and managed in Volume Groups (VG). Once created, they show up as /dev/<VG name>/<LV name> and can be used like normal partitions.
With the following command, we create a Logical Volume (LV) named lvol1 in Volume Group (VG) vg0 with a size of 150MB:
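For example:

```bash
lvcreate -L 150M -n lvol1 vg0
```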
There are other useful options to set the size of a new LV like:
- -l 100%FREE = maximum size of the LV within the VG
- -l 50%VG = 50% size of the whole VG
The following command lists all Logical Volumes (LV) in the system:
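For example:

```bash
lvdisplay
```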
You can scan for LV in the system, to troubleshoot not properly created or lost LVs:
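For example:

```bash
lvscan
```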
With the following command, we can extend the Logical Volume (LV) named lvol1 in Volume Group (VG) vg0 to 500MB:
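For example:

```bash
lvextend -L 500M /dev/vg0/lvol1
```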
Once the LV is extended, we need to grow the file system as well (in this example we used ext4 and the LV is mounted to /mnt/data):
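For example (ext4 can be grown while mounted):

```bash
# Grow the ext4 filesystem to fill the enlarged LV
resize2fs /dev/vg0/lvol1
```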
Before we can reduce the size of our Logical Volume (LV) without corrupting existing data, we have to shrink the file system on it. In this example we used ext4, the LV needs to be unmounted to shrink the file system:
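A sketch, assuming the target size is 300MB:

```bash
umount /mnt/data
e2fsck -f /dev/vg0/lvol1        # filesystem check is required before shrinking
resize2fs /dev/vg0/lvol1 300M   # shrink the filesystem to the target size
```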
Now we are ready to reduce the size of our LV:
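For example (size must not be smaller than the shrunken filesystem):

```bash
lvreduce -L 300M /dev/vg0/lvol1
```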
Logical Volumes (LV) can be set to be read-only storage devices:
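For example:

```bash
lvchange -p r /dev/vg0/lvol1
```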
The LV needs to be remounted for the changes to take effect:
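For example (mount point assumed):

```bash
umount /mnt/data
mount /dev/vg0/lvol1 /mnt/data   # will be mounted read-only now
```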
To set the LV to be read/write again:
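For example:

```bash
lvchange -p rw /dev/vg0/lvol1
```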
Before we remove a Logical Volume (LV), we should unmount and deactivate it, so no further write activity can take place:
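For example (mount point assumed):

```bash
umount /mnt/data
lvchange -a n /dev/vg0/lvol1
```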
The following command removes the LV named lvol1 from VG named vg0:
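For example:

```bash
lvremove /dev/vg0/lvol1
```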
Thin metadata, pool, and LV
Recent versions of LVM2 (2.02.89) support "thin" volumes. Thin volumes are to block devices what sparse files are to filesystems. Thus, a thin LV within a pool can be "overcommitted" - it can even be larger than the pool itself. Just like a sparse file, the "holes" are filled as the block device gets populated. If the filesystem has "discard" support, as files are deleted, the "holes" can be recreated, reducing utilization of the thin pool.
Create thin pool
Each thin pool has some metadata associated with it, which is added to the thin pool size. You can specify its size explicitly; otherwise lvm2 will compute one based on the size of the thin pool (pool_chunks * 64 bytes, with a minimum of 2MiB).
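For example, creating a 150MB pool (VG name assumed to be vg0):

```bash
lvcreate -L 150M --thinpool thin_pool vg0
```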
This create a thin pool named "thin_pool" with a size of 150MB (actually, it slightly bigger than 150MB because of the metadata).
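To set the metadata size explicitly (values as described below):

```bash
lvcreate -L 150M --poolmetadatasize 2M --thinpool thin_pool vg0
```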
This creates a thin pool named "thin_pool" with a size of 150MB and an explicit metadata size of 2MiB.
Unfortunately, because the metadata size is added to the thin pool size, the intuitive way of filling a VG with a thin pool doesn't work:
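For example, this attempt fails because the pool plus its metadata would exceed the free space in the VG:

```bash
lvcreate -l 100%FREE --thinpool thin_pool vg0
```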
Note the thin pool does not have an associated device node like other LVs do.
Create a thin LV
A thin LV is somewhat unusual in LVM - the thin pool itself is an LV, so a thin LV is an "LV-within-an-LV". Since the volumes are sparse, a virtual size instead of a physical size is specified:
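For example (names and virtual size assumed):

```bash
# -V gives the virtual size; it may exceed the physical size of the pool
lvcreate -T vg0/thin_pool -V 300M -n thin_lv
```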
Note how the LV is larger than the pool it is created in. It is also possible to create the thin metadata, pool, and LV with a single command:
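For example:

```bash
# Create the 150MB pool and a 300MB thin LV inside it in one step
lvcreate -T vg0/thin_pool -V 300M -L 150M -n thin_lv
```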
List thin pool and thin LV
Thin LVs are just like any other LV: they are displayed using lvdisplay and scanned using lvscan.
Extend thin pool
The thin pool is expanded like a non-thin LV:
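For example (amount assumed):

```bash
lvextend -L +500M vg0/thin_pool
```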
Extend thin LV
A Thin LV is expanded just like a regular LV:
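For example (new virtual size assumed):

```bash
lvextend -L 1G vg0/thin_lv
```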
Note this is asymmetric with creation, where the virtual size was specified with -V instead of -L/-l. The filesystem can then be expanded using that filesystem's tools.
Reduce thin pool
Currently, LVM cannot reduce the size of the thin pool.
Reduce thin LV
Before shrinking an LV, shrink the filesystem first using that filesystem's tools. Some filesystems do not support shrinking. A Thin LV is reduced just like a regular LV:
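For example (new virtual size assumed):

```bash
lvreduce -L 200M vg0/thin_lv
```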
Note this is asymmetric with creation, where the virtual size was specified with -V instead of -L/-l.
Thin pool Permissions
It is not possible to change the permission on the thin pool (nor would it make any sense to).
Thin LV Permissions
A thin LV can be set read-only/read-write the same way a regular LV is:
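For example:

```bash
lvchange -p r vg0/thin_lv    # read-only
lvchange -p rw vg0/thin_lv   # read/write again
```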
Thin pool Removal
The thin pool cannot be removed until all the thin LV within it are removed. Once that is done, it can be removed:
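For example:

```bash
lvremove vg0/thin_pool
```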
Thin LV Removal
A thin LV is removed like a regular LV:
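For example:

```bash
lvremove vg0/thin_lv
```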
We can create some scenarios using loopback devices, so no real storage devices are used.
First we need to make sure the loopback module is loaded. If you want to play around with partitions, use the following option:
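A sketch (the max_part value is an example):

```bash
# Reload the loop module with partition scanning enabled
modprobe -r loop
modprobe loop max_part=63
```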
Now we need to either tell LVM to not use udev to scan for devices or change the filters in /etc/lvm/lvm.conf. In this case we just temporarily do not use udev:
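One way to do this (edits lvm.conf in place; back it up first):

```bash
sed -i -e 's/obtain_device_list_from_udev = 1/obtain_device_list_from_udev = 0/' /etc/lvm/lvm.conf
```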
We create some image files, that will become our storage devices (uses ~10GB of real hard drive space):
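For example (paths and sizes are assumptions - five ~2GB files, ~10GB total):

```bash
mkdir -p /var/lib/lvm_img
for i in 0 1 2 3 4; do
    dd if=/dev/zero of=/var/lib/lvm_img/lvm_hdd${i}.img bs=1M count=2048
done
```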
Check which loopback devices are available:
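For example:

```bash
losetup -f    # prints the first unused loop device
losetup -a    # lists loop devices already in use
```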
We assume all loopback devices are available and create our hard drives:
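For example (image paths as above):

```bash
for i in 0 1 2 3 4; do
    losetup /dev/loop${i} /var/lib/lvm_img/lvm_hdd${i}.img
done
```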
Now we can use /dev/loop[0-4] as we would use any other hard drive in the system.
LVM2 Linear volumes
Linear volumes are the most common kind of LVM volume. A linear volume can use all or part of one or more PVs. LVM will attempt to allocate the LV as physically contiguously as possible. If there is a PV large enough to hold the entire LV, LVM will allocate it there; otherwise it will split it into as few pieces as possible.
A linear volume is actually implemented as a degenerate stripe set (containing a single stripe).
Creating a linear volume
To create a linear volume:
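A sketch using the loopback devices from the preparation section (VG name, LV names, and sizes are assumptions):

```bash
# Assumed setup: the loopback devices become PVs in a VG named vg00
pvcreate /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
vgcreate vg00 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4

# Two linear LVs; linear is the default type
lvcreate -L 3G   -n lvm_linear1 vg00
lvcreate -L 1.5G -n lvm_linear2 vg00
```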
The linear volume is the default type.
LVM allocated the first LV to use all of the first PV and part of the second, and the second LV to use all of the third PV.
Because linear volumes have no special requirements, they are the easiest to manipulate and can be resized and relocated at will. If an LV is allocated across multiple PVs and any of those PVs become unavailable, that LV cannot be started and will be unusable.
Here is an example of an entry in fstab (using ext4):
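For example (LV name, mount point, and options assumed):

```
/dev/vg0/lvol1    /mnt/data    ext4    noatime    0 2
```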
For thin volumes, add the discard option:
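For example:

```
/dev/vg0/thin_lv    /mnt/data    ext4    noatime,discard    0 2
```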
LVM2 Snapshots and LVM2 Thin Snapshots
A snapshot is an LV that acts as a copy of another LV, recording the changes made to the original LV so that it can present the content of that LV in a previous state. We once again use our two hard drives and create LV lvol1, this time with 60% of VG vg0:
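For example:

```bash
lvcreate -l 60%VG -n lvol1 vg0
```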
Now we create a snapshot of lvol1 named 08092011_lvol1 and give it 10% of VG vg0:
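For example:

```bash
lvcreate -l 10%VG -s -n 08092011_lvol1 /dev/vg0/lvol1
```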
Mount our snapshot somewhere else:
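For example (mount point assumed):

```bash
mkdir -p /mnt/08092011_lvol1
mount /dev/vg0/08092011_lvol1 /mnt/08092011_lvol1
```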
We could now access data in lvol1 from a previous state.
LVM2 snapshots are writable LVs; we could use them to let a project go in two different directions:
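For example (snapshot names assumed):

```bash
# Two writable snapshots acting as independent branches of lvol1
lvcreate -l 10%VG -s -n lvol1_branch_a /dev/vg0/lvol1
lvcreate -l 10%VG -s -n lvol1_branch_b /dev/vg0/lvol1
```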
Now we have three different versions of LV lvol1: the original and two snapshots, which can be used in parallel, with changes written to the snapshots.
LVM2 Thin Snapshots
Creating a thin snapshot is simple:
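For example (snapshot name assumed):

```bash
# No size is given - thin snapshots start with a physical size of 0
lvcreate -s -n thin_snapshot vg0/thin_lv
```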
Note how a size is not specified with -l/-L - nor is a virtual size specified with -V. Snapshots have the same virtual size as their origin and a physical size of 0, like all new thin volumes. This also means it is not possible to limit the physical size of the snapshot. Thin snapshots are writable just like regular snapshots.
Recursive snapshots can be created:
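For example, a snapshot of the snapshot created above:

```bash
lvcreate -s -n thin_snapshot_2 vg0/thin_snapshot
```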
Thin snapshots have several advantages over regular snapshots. First, thin snapshots are independent of their origins once created. The origin can be shrunk or deleted without affecting the snapshot. Second, thin snapshots can be efficiently created recursively (snapshots of snapshots) without the "chaining" overhead of regular recursive LVM snapshots.
LVM2 Rollback Snapshots
To rollback the logical volume to the version of the snapshot, use the following command:
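A sketch (snapshot name as above; the origin should be unmounted, otherwise the merge is deferred until its next activation):

```bash
umount /mnt/data
lvconvert --merge /dev/vg0/08092011_lvol1
```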
This might take a couple of minutes, depending on the size of the volume.
LVM2 Thin Rollback Snapshots
For thin volumes, lvconvert --merge does not work. Instead, delete the origin and rename the snapshot:
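A sketch (names as in the thin examples above):

```bash
umount /mnt/data
lvremove vg0/thin_lv                 # delete the origin
lvrename vg0 thin_snapshot thin_lv   # the snapshot takes its place
```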
LVM supports mirrored volumes, which provide fault tolerance in the event of a drive failure. Unlike RAID 1, there is no performance benefit - all reads and writes are delivered to a single "leg" of the mirror. One additional PV is required for each mirror.
Mirrors support three kinds of logs:
- Disk mirror logs record the state of the mirror on disk in extra metadata extents. LVM keeps track of what has already been mirrored and can pick up where it left off if the copy is incomplete. This is the default.
- Mirror logs are disk logs that are themselves mirrored.
- Core mirror logs record the state of the mirror in memory only. LVM will have to rebuild the mirror every time it is activated. Useful for temporary mirrors.
Creating a mirror LV
To create an LV with a single mirror:
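A sketch (VG, LV name, and size assumed):

```bash
lvcreate -m 1 --mirrorlog disk --nosync -L 500M -n lvm_mirror vg00
```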
The -m 1 indicates that we want to create 1 (additional) mirror, requiring 2 PVs. The --nosync option is an optimization - without it, LVM will try to synchronize the mirror by copying empty sectors from one LV to another.
Creating a mirror of an existing LV
It is possible to create a mirror of an existing LV:
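For example (LV name assumed):

```bash
lvconvert -m 1 -b vg00/lvm_linear1
```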
This mirrors the existing LV onto a different PV. The -b option puts the operation into the background, as mirroring an LV can take a long time.
Removing a mirror of an existing LV
To remove a mirror, set the number of mirrors to 0:
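For example:

```bash
lvconvert -m 0 vg00/lvm_mirror
```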
To simulate a failure:
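One way to do this with the loopback setup (device assumed to be one of the mirror's PVs; destructive):

```bash
# Destroy the LVM label at the start of the PV
dd if=/dev/zero of=/dev/loop1 bs=1M count=1
```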
If part of the mirror is unavailable (usually because the disk containing the PV has failed), the VG will need to be brought up in degraded mode:
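For example (VG name assumed):

```bash
vgchange -ay --partial vg00
```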
On the first write, LVM will notice the mirror is broken. The default policy ("remove") is to automatically reduce/break the mirror according to the number of pieces available: a 3-way mirror with a missing PV is reduced to a 2-way mirror; a 2-way mirror is reduced to a regular linear volume. If the failure is only transient and the missing PV returns after LVM has broken the mirror, the mirror will need to be recreated on it.
To recover the mirror, the failed PV needs to be removed from the VG and a replacement PV added (or, if the VG has enough free space, the mirror can be allocated on a different existing PV); the mirror is then recreated with lvconvert and the old PV removed from the VG:
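A sketch (the replacement device /dev/loop5 is hypothetical):

```bash
vgreduce --removemissing vg00       # drop the failed PV from the VG
pvcreate /dev/loop5                 # hypothetical replacement device
vgextend vg00 /dev/loop5
lvconvert -m 1 -b vg00/lvm_mirror   # rebuild the mirror onto the new PV
```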
It is possible to have LVM recreate the mirror with free extents on a different PV if a "leg" fails. To do that, set mirror_image_fault_policy to "allocate" in lvm.conf.
It is not (yet) possible to create a mirrored thin pool or thin volume directly. It is possible to create a mirrored thin pool by creating a normal mirrored LV and then converting it to a thin pool with lvconvert. Two LVs are required: one for the thin pool and one for the thin metadata; the conversion process will merge them into a single LV.
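A sketch of that conversion (names and sizes assumed):

```bash
# Two mirrored LVs: one for the pool data, one for the pool metadata
lvcreate -m 1 -L 150M -n thin_pool vg00
lvcreate -m 1 -L 4M   -n thin_meta vg00
# Merge them into a single mirrored thin pool
lvconvert --thinpool vg00/thin_pool --poolmetadata vg00/thin_meta
```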
LVM2 RAID 0/Stripeset
Instead of a linear volume, where allocations are simply appended, it is possible to create a striped or RAID 0 volume for better performance.
Creating a stripe set
To create a 3-PV striped volume:
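For example (size chosen so that 400MB lands on each PV, as noted below):

```bash
lvcreate -i 3 -L 1.2G -n lvm_stripe vg00
```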
The -i option indicates how many PVs to stripe over, in this case 3.
On each PV, 400MB was reserved for LV lvm_stripe in VG vg00.
It is possible to mirror a stripe set. The -i and -m options can be combined to create a striped mirror:
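For example (name and size assumed):

```bash
# 2 stripes, each mirrored once: 4 PVs in total
lvcreate -i 2 -m 1 -L 800M -n lvm_stripe_mirror vg00
```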
This creates a 2 PV stripe set and mirrors it on 2 different PVs, for a total of 4 PVs. An existing stripe set can be mirrored with lvconvert.
A thin pool can be striped like any other LV. All the thin volumes created from the pool inherit that setting - do not specify it manually when creating a thin volume.
It is not possible to stripe an existing volume, nor reshape the stripes across more/less PVs, nor to convert to a different RAID level/linear volume. A stripe set can be mirrored. It is possible to extend a stripe set across additional PVs, but they must be added in multiples of the original stripe set (which will effectively linearly append a new stripe set), or --alloc anywhere must be specified (which can hurt performance). In the above example, 3 additional PVs would be required without --alloc anywhere.
LVM2 RAID 1
Unlike RAID 0, which is striping, RAID 1 is mirroring, but it is implemented differently than the original LVM mirror. Under RAID 1, reads are spread out across the PVs, improving performance, and mirror failures do not cause I/O to block because LVM does not need to break the mirror on write.
Anywhere an LVM mirror could be used, a RAID 1 mirror can be used in its place. It is possible to have LVM create RAID 1 mirrors instead of regular mirrors implicitly by setting mirror_segtype_default to raid1 in lvm.conf.
Creating RAID 1 LV
To create an LV with a single mirror:
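For example (name and size assumed; about 1.2G ends up on each PV, as noted below):

```bash
lvcreate -m 1 --type raid1 --nosync -L 1.2G -n lvm_raid1 vg00
```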
On each PV, about 1.2G was reserved for LV lvm_raid1 in VG vg00.
Note the differences from creating a mirror: there is no mirror log specified, because a RAID 1 LV does not need an explicit mirror log - it is built into the LV. Second, --type raid1 is added; it wasn't needed for the LVM mirror before. Also note the similarities: -m 1 for a single mirror (-i 1 works too for RAID 1, unlike an LVM mirror), and --nosync to skip the initial sync.
Converting existing LV to RAID 1
It is possible to convert an existing LV to RAID 1:
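For example (LV name assumed):

```bash
lvconvert --type raid1 -m 1 vg00/lvm_linear1
```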
Conversion is similar to creating a mirror from an existing LV.
Removing a RAID 1 mirror
To remove a RAID 1 mirror, set the number of mirrors to 0:
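For example:

```bash
lvconvert -m 0 vg00/lvm_raid1
```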
Same as an LVM mirror
Failed RAID 1
Simulating a failure is the same as an LVM mirror
If part of the RAID1 is unavailable (usually because the disk containing the PV has failed), the VG will need to be brought up in degraded mode:
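One way to do this (VG name assumed):

```bash
# --activationmode degraded requires a recent LVM (>= 2.02.108);
# older versions use --partial instead
vgchange -ay --activationmode degraded vg00
```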
Unlike an LVM mirror, writing to a RAID 1 with a missing leg does NOT break the mirroring. If the failure is only transient and the missing PV returns, LVM will resync the mirror by copying over only the out-of-date segments instead of the entire LV.
To recover the RAID 1, the failed PV needs to be removed from the VG and a replacement one added (or, if the VG has a free PV, the RAID can be rebuilt on a different PV); the mirror is then repaired with lvconvert and the old PV removed from the VG:
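A sketch (the replacement device /dev/loop5 is hypothetical):

```bash
pvcreate /dev/loop5
vgextend vg00 /dev/loop5
lvconvert --repair vg00/lvm_raid1   # rebuild the missing image on the new PV
vgreduce --removemissing vg00
```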
It is not (yet) possible to create a RAID 1 thin pool or thin volume directly. It is possible to create a RAID 1 thin pool by creating a normal RAID 1 LV and then converting it to a thin pool with lvconvert. Two LVs are required: one for the thin pool and one for the thin metadata; the conversion process will merge them into a single LV.
LVM2 Stripeset with Parity (RAID4 and RAID5)
RAID 0 is not fault-tolerant - if any of the PVs fail, the LV is unusable. By adding a parity stripe to RAID 0, the LV can still function with a single missing PV. A new PV can then be added to restore fault tolerance.
Stripe sets with parity come in two flavors: RAID 4 and RAID 5. Under RAID 4, all the parity stripes are stored on the same PV. That PV can become a bottleneck because all writes hit it, and the problem gets worse the more PVs are in the array. With RAID 5, the parity data is distributed evenly across the PVs, so no single PV is a bottleneck. For that reason, RAID 4 is rare and considered obsolete/historical; in practice all stripe sets with parity are RAID 5.
Creating a RAID5 LV
Like the RAID 0/stripe set without parity, the -i option is used to specify the number of PVs to stripe over. However, only the data PVs are specified with -i - LVM adds the parity one automatically. Thus for a 3-PV RAID 5, it is -i 2 and not -i 3:
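For example (name and size assumed; about 600MB ends up on each PV, as noted below):

```bash
lvcreate --type raid5 -i 2 -L 1.2G -n lvm_raid5 vg00
```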
On each PV, about 600MB was reserved for LV lvm_raid5 in VG vg00.
Recovering from a failed RAID5
To simulate a failure:
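As before, one way is to destroy the LVM label on one of the array's PVs (device assumed; destructive):

```bash
dd if=/dev/zero of=/dev/loop2 bs=1M count=1
```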
The VG will need to be brought up in degraded mode
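As before (VG name assumed):

```bash
vgchange -ay --activationmode degraded vg00
```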
The volume will work normally at this point; however, the array is degraded to RAID 0 until a replacement PV is added. Performance is unlikely to be affected while the array is degraded - although it does need to recompute the missing data via parity, that only requires a simple XOR of the parity block with the remaining data. The overhead is negligible compared to the disk I/O.
To repair the RAID5:
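A sketch (the replacement device /dev/loop5 is hypothetical):

```bash
pvcreate /dev/loop5
vgextend vg00 /dev/loop5
lvconvert --repair vg00/lvm_raid5
vgreduce --removemissing vg00
```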
It is possible to replace a still-working PV in a RAID 5 as well:
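For example (device names assumed):

```bash
# Move the data of /dev/loop2 onto /dev/loop5 and drop /dev/loop2 from the array
lvconvert --replace /dev/loop2 vg00/lvm_raid5 /dev/loop5
```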
The same restrictions of stripe sets apply to stripe sets with parity as well: It is not possible to stripe with parity an existing volume, nor reshape the stripes with parity across more/less PVs, nor to convert to a different RAID level/linear volume. A stripe set with parity can be mirrored. It is possible to extend a stripe set with parity across additional PVs, but they must be added in multiples of the original stripe set with parity (which will effectively linearly append a new stripe set with parity), or --alloc anywhere must be specified (which can hurt performance). In the above example, 3 additional PVs would be required without --alloc anywhere.
Thin RAID5 LV
It is not (yet) possible to create a stripe set with parity (RAID 5) thin pool or thin volume directly. It is possible to create a RAID 5 thin pool by creating a normal RAID 5 LV and then converting the LV into a thin pool with lvconvert. Two LVs are required: one for the thin pool and one for the thin metadata; the conversion process will merge them into a single LV.
LVM2 RAID 6
RAID 6 is similar to RAID 5, however RAID 6 can survive up to TWO PV failures, thus offering more fault tolerance than RAID5 at the expense of an extra PV.
Creating a RAID6 LV
Like RAID 5, the -i option is used to specify the number of PVs to stripe over, excluding the 2 PVs used for parity. Thus for a 5-PV RAID 6, it is -i 3 and not -i 5:
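For example (name and size assumed; about 680MB ends up on each PV, as noted below):

```bash
lvcreate --type raid6 -i 3 -L 2G -n lvm_raid6 vg00
```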
On each PV, about 680MB was reserved for LV lvm_raid6 in VG vg00.
Recovering from a failed RAID6
Recovery for RAID6 is the same as RAID5. A RAID6 LV with a single failure reduces to RAID5. A RAID6 LV with 2 failures reduces to RAID0. It is left as an exercise to the reader to simulate a 2 PV failure.
Unlike RAID 5, where the parity block is cheap to recompute relative to disk I/O, this is only half true for RAID 6. RAID 6 uses two parity stripes: one stripe is computed the same way as in RAID 5 (a simple XOR); the second parity stripe is much harder to compute.
The same restrictions of stripe sets with parity apply to RAID6 as well: It is not possible to RAID6 an existing volume, nor reshape a RAID6 across more/less PVs, nor to convert to a different RAID level/linear volume. A RAID6 can be mirrored. It is possible to extend a RAID6 across additional PVs, but they must be added in multiples of the original RAID6 (which will effectively linearly append a new RAID6), or --alloc anywhere must be specified (which can hurt performance). In the above example, 5 additional PVs would be required without --alloc anywhere.
Thin RAID6 LV
It is not (yet) possible to create a RAID6 thin pool or thin volumes. It is possible to create a RAID6 thin pool by creating a normal RAID6 LV and then converting the LV into a thin pool with lvconvert. 2 LVs are required: One for the thin pool and one for the thin metadata, the conversion process will merge them into a single LV.
RAID 10 is a combination of RAID 0 and RAID 1. It is more powerful than stacking RAID 0 on RAID 1 because mirroring is done at the stripe level instead of the LV level, and therefore the layout need not be symmetric. A RAID 10 volume can tolerate at least one missing PV, and possibly more.
Creating a RAID10 LV
Both the -i AND -m options are specified: -i is the number of stripes and -m is the number of mirrors. 2 stripes and 1 mirror require 4 PVs. --nosync is an optimization to skip the initial copy.
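For example (name and size assumed; 2G ends up on each of the 4 PVs, as noted below):

```bash
lvcreate --type raid10 -i 2 -m 1 --nosync -L 4G -n lvm_raid10 vg00
```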
On each PV, 2G was reserved for LV lvm_raid10 in VG vg00.
Recovering from a failed RAID10
For a single failed PV, recovery for RAID10 is the same as RAID5. In the example above LVM chose to stripe over PV loop0 and loop2, and mirror on loop1 and loop3. The resulting array can tolerate the loss of any one PV, or 2 PV if they are on different mirrors (0/2, 0/3, 1/2, 1/3 but not 0/1 or 2/3)
The same restrictions of stripe sets apply to RAID 10 as well: it is not possible to convert an existing volume to RAID 10, nor reshape the RAID 10 across more/fewer PVs, nor convert it to a different RAID level/linear volume. It is possible to extend a RAID 10 across additional PVs, but they must be added in multiples of the original RAID 10 (which will effectively linearly append a new RAID 10), or --alloc anywhere must be specified (which can hurt performance). In the above example, 4 additional PVs would be required without --alloc anywhere.
Thin RAID 10
It is not (yet) possible to create a RAID10 thin pool or thin volumes directly. It is possible to create a RAID10 thin pool by creating a normal RAID10 LV and then converting the LV into a thin pool with lvconvert. 2 LVs are required: one for the thin pool and one for the thin metadata; the conversion process will merge them into a single LV.
LVM only has mirrors and snapshots to provide some level of redundancy. However, there are certain situations where one might be able to restore a lost PV or LV.
By default, on any change to an LVM PV, VG, or LV, LVM2 creates a backup file of the metadata in /etc/lvm/archive. These files can be used to recover from an accidental change (like deleting the wrong LV). LVM also keeps a backup copy of the most recent metadata in /etc/lvm/backup; this can be used to restore metadata to a replacement disk, or to repair corrupted metadata.
To see what states of the VG are available to be restored (the listing can be long; only part of it is usually relevant):
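For example:

```bash
vgcfgrestore --list vg00
```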
Recovering an accidentally deleted LV
Suppose LV lvm_raid1 was accidentally removed from VG vg00. It is possible to recover it:
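A sketch (the archive file name below is hypothetical; pick the entry created just before the accidental lvremove from the vgcfgrestore --list output):

```bash
vgcfgrestore -f /etc/lvm/archive/vg00_00042-1234567890.vg vg00
# Reactivate the recovered LV
lvchange -a y vg00/lvm_raid1
```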
Replacing a failed PV
In the above examples, when a disk containing a PV failed, an "add/remove" technique was used: a new PV was created on a new disk, the VG extended onto it, the LV repaired, and the old PV removed from the VG. However, it is possible to do a true "replace" and recreate the metadata on the new disk to be the same as on the old disk.
Following the above example for a failed RAID 1:
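One way to list the VG's PVs, including the missing one (the failed PV shows up as "unknown device" together with its old UUID):

```bash
vgdisplay -v vg00
```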
The important line here is the UUID of the "unknown device".
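A sketch of recreating the PV with the old UUID from the metadata backup (the UUID and the replacement device are stand-ins):

```bash
pvcreate --uuid 56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu \
         --restorefile /etc/lvm/backup/vg00 /dev/loop5
```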
This recreates the PV metadata, but not the missing LV or VG data on the PV.
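Then restore the VG metadata from the backup:

```bash
vgcfgrestore vg00
```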
This now reconstructs all the missing metadata on the PV, including the LV and VG data. However it doesn't restore the data, so the mirror is out of sync.
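Finally, resync the out-of-date mirror (the LV may need to be deactivated first):

```bash
lvchange --resync vg00/lvm_raid1
```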
This will resync the mirror. This works with RAID 4,5 and 6 as well.
You can deactivate an LV with the following command:
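For example (mount point and LV name assumed):

```bash
umount /mnt/data
lvchange -a n /dev/vg0/lvol1
```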
You will not be able to mount the LV anywhere until it is reactivated:
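For example:

```bash
mount /dev/vg0/lvol1 /mnt/data   # fails while the LV is inactive
lvchange -a y /dev/vg0/lvol1     # reactivate the LV
mount /dev/vg0/lvol1 /mnt/data
```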