LVM (Logical Volume Manager) is software that abstracts physical devices as PVs (Physical Volumes) and groups them into storage pools called VGs (Volume Groups). A Physical Volume can be a partition, a whole SATA hard drive, disks grouped as JBOD (Just a Bunch Of Disks), a RAID system, iSCSI, Fibre Channel, eSATA and so on.
LVM requires device-mapper support in the kernel. The following USE flags are available for sys-fs/lvm2:
|USE flag||Default||Description|
|clvm||No||Allow users to build clustered lvm2|
|cman||No||Cman support for clustered lvm|
|lvm1||Yes||Allow users to build lvm2 with lvm1 support|
|readline||Yes||Enables support for libreadline, a GNU line-editing library that almost everyone wants|
|selinux||No||!!internal use only!! Security Enhanced Linux support, this must be set by the selinux profile or breakage will occur|
|static||No||!!do not set this during bootstrap!! Causes binaries to be statically linked instead of dynamically|
|static-libs||No||Build static libraries|
|thin||Yes||Support for thin volumes|
|udev||Yes||Enable sys-fs/udev integration (device discovery, power and storage device support, etc)|
The configuration file is /etc/lvm/lvm.conf
You can now start LVM:
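# assuming an OpenRC init system (as on Gentoo)
rc-service lvm start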
To start LVM at boot time, add it to your boot runlevel:
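# assuming OpenRC
rc-update add lvm boot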
LVM organizes storage in three different levels as follows:
- hard drives, partitions, RAID systems or other means of storage are initialized as PV (Physical Volume)
- Physical Volumes (PV) are grouped together in Volume Groups (VG)
- Logical Volumes (LV) are managed in Volume Groups (VG)
PV (Physical Volume)
Physical Volumes are the actual hardware or storage systems that LVM is built upon.
The partition type for LVM is 8e (Linux LVM):
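# interactive fdisk session on an MBR disk; keystrokes shown as comments
fdisk /dev/sdX
# n - create a new primary partition
# t - change its type to 8e (Linux LVM)
# w - write the partition table and exit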
In fdisk, you can create MBR partitions using the n key and then change the partition type with the t key to 8e. We will end up with one primary partition /dev/sdX1 of partition type 8e (Linux LVM).
The following command creates a Physical Volume (PV) on the first primary partition of /dev/sdX and of /dev/sdY:
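pvcreate /dev/sdX1 /dev/sdY1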
The following command lists all active Physical Volumes (PV) in the system:
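pvdisplay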
You can scan for PV in the system, to troubleshoot not properly initialized or lost storage devices:
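pvscan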
LVM automatically distributes data across all available PVs unless told otherwise. To make sure there is no data left on our device before we remove it, use the following command:
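pvmove /dev/sdX1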
This might take a long time and once finished, there should be no data left on /dev/sdX1. We first remove the PV from our Volume Group (VG) and then delete the PV itself:
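vgreduce vg0 /dev/sdX1
pvremove /dev/sdX1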
VG (Volume Group)
Volume Groups (VG) consist of one or more Physical Volumes (PV) and show up as /dev/<VG name>/ in the device file system.
The following command creates a Volume Group (VG) named vg0 on two previously initialized Physical Volumes (PV) named /dev/sdX1 and /dev/sdY1:
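vgcreate vg0 /dev/sdX1 /dev/sdY1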
The following command lists all active Volume Groups (VG) in the system:
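vgdisplay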
You can scan for VG in the system, to troubleshoot not properly created or lost VGs:
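vgscan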
With the following command, we extend the existing Volume Group (VG) vg0 onto the Physical Volume (PV) /dev/sdZ1:
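vgextend vg0 /dev/sdZ1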
Before we can remove a Physical Volume (PV), we need to make sure that LVM has no data left on the device. To move all data off that PV and distribute it onto the other available PVs, use the following command:
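pvmove /dev/sdZ1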
This might take a while and once finished, we can remove the PV from our VG:
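vgreduce vg0 /dev/sdZ1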
Before we can remove a Volume Group (VG), we have to remove all existing Snapshots, all Logical Volumes (LV) and all Physical Volumes (PV) but one. The following command removes the VG named vg0:
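vgremove vg0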
LV (Logical Volume)
Logical Volumes (LV) are created and managed in Volume Groups (VG), once created they show up as /dev/<VG name>/<LV name> and can be used like normal partitions.
With the following command, we create a Logical Volume (LV) named lvol1 in Volume Group (VG) vg0 with a size of 150MB:
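lvcreate -L 150M -n lvol1 vg0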
There are other useful options to set the size of a new LV, such as:
- -l 100%FREE = maximum size of the LV within the VG
- -l 50%VG = 50% size of the whole VG
The following command lists all Logical Volumes (LV) in the system:
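lvdisplay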
You can scan for LV in the system, to troubleshoot not properly created or lost LVs:
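lvscan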
With the following command, we can extend the Logical Volume (LV) named lvol1 in Volume Group (VG) vg0 to 500MB:
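lvextend -L 500M /dev/vg0/lvol1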
Once the LV is extended, we need to grow the file system as well (in this example we used ext4 and the LV is mounted to /mnt/data):
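# ext4 can be grown online, so the LV can stay mounted on /mnt/data
resize2fs /dev/vg0/lvol1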
Before we can reduce the size of our Logical Volume (LV) without corrupting existing data, we have to shrink the file system on it. In this example we used ext4, the LV needs to be unmounted to shrink the file system:
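# the target size (300M here) is an example; it must not be smaller than the used space
umount /mnt/data
e2fsck -f /dev/vg0/lvol1
resize2fs /dev/vg0/lvol1 300M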
Now we are ready to reduce the size of our LV:
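# must not be smaller than the shrunken file system (300M in the example above)
lvreduce -L 300M /dev/vg0/lvol1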
Logical Volumes (LV) can be set to be read-only storage devices:
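lvchange -p r /dev/vg0/lvol1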
The LV needs to be remounted for the changes to take effect:
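mount -o remount /mnt/data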
To set the LV to be read/write again:
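lvchange -p rw /dev/vg0/lvol1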
Before we remove a Logical Volume (LV), we should unmount and deactivate it, so that no further write activity can take place:
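umount /mnt/data
lvchange -a n /dev/vg0/lvol1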
The following command removes the LV named lvol1 from VG named vg0:
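lvremove /dev/vg0/lvol1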
Thin metadata, pool, and LV
Recent versions of LVM2 (2.02.89 and later) support "thin" volumes. Thin volumes are to block devices what sparse files are to filesystems. Thus, a thin LV within a pool can be "overcommitted": it can even be larger than the pool itself. Just like a sparse file, the "holes" are filled as the block device gets populated. If the filesystem has "discard" support, the "holes" can be recreated as files are deleted, reducing utilization of the thin pool.
Create thin pool
Each thin pool has some metadata associated with it, which is added to the thin pool size. You can specify it explicitly; otherwise lvm2 will compute one based on the size of the thin pool, as pool_chunks * 64 bytes or 2MiB, whichever is larger.
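lvcreate -L 150M --thinpool thin_pool vg0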
This create a thin pool named "thin_pool" with a size of 150MB (actually, it slightly bigger than 150MB because of the metadata).
This create a thin pool named "thin_pool" with a size of 150MB and an explicit metadata size of 2MiB.
Unfortunately, because the metadata size is added to the thin pool size, the intuitive way of filling a VG with a thin pool doesn't work:
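# this fails: 100%FREE leaves no room in the VG for the pool metadata
lvcreate -l 100%FREE --thinpool thin_pool vg0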
Note that the thin pool does not have an associated device node like other LVs.
Create a thin LV
A thin LV is somewhat unusual in LVM: the thin pool itself is an LV, so a thin LV is an "LV within an LV". Since the volumes are sparse, a virtual size instead of a physical size is specified:
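# the name thin_lv and the 300M virtual size are example values
lvcreate -T vg0/thin_pool -V 300M -n thin_lv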
Note how the LV is larger than the pool it is created in. It is also possible to create the thin metadata, pool and LV with a single command:
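# creates the 150M pool and the 300M thin LV in one step
lvcreate -L 150M -T vg0/thin_pool -V 300M -n thin_lv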
List thin pool and thin LV
Thin LVs are just like any other LV: they are displayed using lvdisplay and scanned using lvscan.
Extend thin pool
The thin pool is expanded like a non-thin LV:
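# the new total size is an example
lvextend -L 500M vg0/thin_pool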
Extend thin LV
A Thin LV is expanded just like a regular LV:
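# the new virtual size is an example; note -L, not -V
lvextend -L 1G vg0/thin_lv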
Note that this is asymmetric with creation, where the virtual size was specified with -V instead of -L/-l. The filesystem can then be expanded using that filesystem's tools.
Reduce thin pool
Currently, LVM cannot reduce the size of the thin pool.
Reduce thin LV
Before shrinking an LV, shrink the filesystem first using that filesystem's tools. Some filesystems do not support shrinking. A Thin LV is reduced just like a regular LV:
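# the target virtual size is an example; shrink the filesystem first
lvreduce -L 200M vg0/thin_lv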
Note that this is asymmetric with creation, where the virtual size was specified with -V instead of -L/-l.
Thin pool Permissions
It is not possible to change the permissions on the thin pool (nor would it make any sense to).
Thin LV Permissions
A thin LV can be set read-only/read-write the same way a regular LV is:
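lvchange -p r vg0/thin_lv
lvchange -p rw vg0/thin_lv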
Thin pool Removal
The thin pool cannot be removed until all the thin LVs within it are removed. Once that is done, it can be removed:
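lvremove vg0/thin_pool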
Thin LV Removal
A thin LV is removed like a regular LV:
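lvremove vg0/thin_lv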
We can create some scenarios using loopback devices, so no real storage devices are used.
First we need to make sure the loopback module is loaded. If you want to play around with partitions, use the following option:
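# max_part allows partitions on loop devices; the value is an example
modprobe loop max_part=63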
Now we need to either tell LVM to not use udev to scan for devices or change the filters in /etc/lvm/lvm.conf. In this case we just temporarily do not use udev:
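# one way to do this: in the devices section of /etc/lvm/lvm.conf (restore it afterwards)
obtain_device_list_from_udev = 0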
We create some image files that will become our storage devices (this uses ~6GB of real hard drive space):
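# file names and location are arbitrary; three 2GB images, ~6GB in total
dd if=/dev/zero of=lvm_image0.img bs=1M count=2048
dd if=/dev/zero of=lvm_image1.img bs=1M count=2048
dd if=/dev/zero of=lvm_image2.img bs=1M count=2048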
Check which loopback devices are available:
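losetup -a   # lists loop devices already in use
losetup -f   # prints the first unused loop device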
We assume all loopback devices are available and create our hard drives:
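losetup /dev/loop0 lvm_image0.img
losetup /dev/loop1 lvm_image1.img
losetup /dev/loop2 lvm_image2.img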
Now we can use /dev/loop[0-2] as we would use any other hard drive in the system.
Two Hard Drives
In this example, we will initialize two hard drives as PVs and then create the VG vg0:
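pvcreate /dev/loop0 /dev/loop1
vgcreate vg0 /dev/loop0 /dev/loop1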
Now let's create the LV lvol1 in our VG vg0 and take the maximum space available:
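lvcreate -l 100%FREE -n lvol1 vg0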
Create the file system and mount it to /mnt/data:
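mkfs.ext4 /dev/vg0/lvol1
mkdir -p /mnt/data
mount /dev/vg0/lvol1 /mnt/data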
Now we have the capacity of 2GB from each hard drive available in /mnt/data as one 4GB device.
Here is an example of an entry in fstab (using ext4):
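# mount options are an example
/dev/vg0/lvol1    /mnt/data    ext4    noatime    0 2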
For thin volumes, add the discard option:
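# thin_lv follows the naming used in the thin examples above
/dev/vg0/thin_lv    /mnt/data    ext4    noatime,discard    0 2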
We use two hard drives and create our LV lvol1 like in the first example. This time we use 40% of the size of our VG vg0, because we need some space in the VG for the MIRROR and log files:
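pvcreate /dev/loop0 /dev/loop1
vgcreate vg0 /dev/loop0 /dev/loop1
lvcreate -l 40%VG -n lvol1 vg0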
To create our copy of /dev/vg0/lvol1 on the PV /dev/loop1, use the following command:
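lvconvert -m 1 /dev/vg0/lvol1 /dev/loop1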
LVM will now ensure that a full copy (MIRROR) of /dev/vg0/lvol1 exists on /dev/loop1 and is not distributed between other PVs.
To remove the MIRROR:
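lvconvert -m 0 /dev/vg0/lvol1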
If one half of the MIRROR fails, the other one will automatically be converted into a non-mirrored LV (it loses the mirror attribute). LVM differs from Linux RAID1 in that it does not read from both mirrored images, so there is no performance increase.
Thin pools and their volumes are currently incompatible with mirroring.
A snapshot is an LV that acts as a copy of another LV; it records the changes made to the original LV so that the content of that LV can be shown in an earlier state. We once again use our two hard drives and create LV lvol1, this time with 60% of VG vg0:
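lvcreate -l 60%VG -n lvol1 vg0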
Now we create a snapshot of lvol1 named 08092011_lvol1 and give it 10% of VG vg0:
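lvcreate -l 10%VG -s -n 08092011_lvol1 /dev/vg0/lvol1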
Mount our snapshot somewhere else:
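# the mountpoint is arbitrary
mkdir -p /mnt/08092011_data
mount /dev/vg0/08092011_lvol1 /mnt/08092011_data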
We could now access data in lvol1 from a previous state.
LVM2 snapshots are writable LVs; we could use them to let a project branch in two different directions:
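# a second, separately writable snapshot (name assumed), giving the original plus two snapshots
lvcreate -l 10%VG -s -n 10092011_lvol1 /dev/vg0/lvol1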
Now we have three different versions of LV lvol1: the original and two snapshots, which can be used in parallel, with changes written to the snapshots.
LVM2 Thin Snapshots
Creating a thin snapshot is simple:
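# names follow the thin examples above; the snapshot name is an example
lvcreate -s vg0/thin_lv -n thin_snapshot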
Note how no size is specified with -l/-L, nor a virtual size with -V. Snapshots have the same virtual size as their origin and a physical size of 0, like all new thin volumes. This also means it is not possible to limit the physical size of the snapshot. Thin snapshots are writable just like regular snapshots. It is also possible to have efficient recursive snapshots (snapshots of snapshots).
Unlike regular LVM snapshots, which get slower with each level of indirection, thin snapshots do not suffer this penalty.
LVM2 Rollback Snapshots
To rollback the logical volume to the version of the snapshot, use the following command:
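# if the origin is still in use, the merge starts the next time it is activated
lvconvert --merge /dev/vg0/08092011_lvol1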
This might take a couple of minutes, depending on the size of the volume.
LVM2 Thin Rollback Snapshots
For thin volumes, lvconvert --merge does not work. Instead, delete the origin and rename the snapshot:
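# names follow the thin examples above; unmount the origin first
lvremove vg0/thin_lv
lvrename vg0 thin_snapshot thin_lv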
A STRIPESET is the same as RAID0: data is written to several devices at the same time to increase performance. In LVM2 it is possible to distribute an LV over several PVs for the same effect. We create three PVs and then VG vg0:
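pvcreate /dev/loop0 /dev/loop1 /dev/loop2
vgcreate vg0 /dev/loop0 /dev/loop1 /dev/loop2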
VG vg0 consists of three different hard drives and now we can create our LV and spread it over them:
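# total size is an example chosen so that 400MB lands on each of the three PVs
lvcreate -i 3 -L 1200M -n lvm_stripe vg0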
The option -i 3 indicates that we want to spread it over 3 PVs in our VG vg0.
On each PV, 400MB is reserved for our LV lvm_stripe in VG vg0.
A thin pool can be striped like any other LV. All thin volumes created from the pool inherit that setting; do not specify it manually when creating a thin volume.
LVM2 can use its internal mechanisms to create stripesets with parity in a similar way to RAID5, but in this case you need at least 3 different PVs.
VG vg0 consists of three different hard drives and now we can create our LV and spread it over them:
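# 2 data stripes + 1 parity across three PVs; 1200MB usable means roughly 600MB per PV
lvcreate --type raid5 -i 2 -L 1200M -n lvm_raid5 vg0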
The option -i 2 indicates that we want to create 2 data stripes plus 1 parity stripe (so we need at least 3 devices).
On each PV, 600MB is reserved for our LV lvm_raid5 in VG vg0.
Thin pools and their volumes are currently incompatible with RAID5.
LVM has only the MIRROR and snapshots to provide some level of redundancy. However, there are certain situations where one might be able to restore a lost PV or LV.
In /etc/lvm/archive and /etc/lvm/backup are files which contain logs about metadata changes in LVM. To see what states of the VG are available to be restored:
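vgcfgrestore --list vg0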
In this example we removed the LV lvol1 by accident and want it back in our VG vg0:
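# <archive file> is one of the files listed by vgcfgrestore --list vg0
vgcfgrestore -f /etc/lvm/archive/<archive file> vg0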
We want to replace a PV and then restore the metadata to a new one, so that we reach the same state as before the device stopped working. To display all PVs in a VG (even lost ones), use the following command:
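vgdisplay --partial --verbose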
In this example we let /dev/loop1 fail; it shows up as an unknown device.
Using the UUID, we can tell LVM to set up the new hardware so that it takes the place of the old PV within the VG:
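# use the UUID reported for the missing PV; /dev/loop1 stands in for the replacement device
pvcreate --restorefile /etc/lvm/backup/vg0 --uuid "<UUID of the failed PV>" /dev/loop1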
Then we restore the VG to the state before the PV failed:
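vgcfgrestore vg0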
Now you can replay your file backup if you haven't already restored the PV itself.
You can deactivate an LV with the following command:
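lvchange -a n /dev/vg0/lvol1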
You will not be able to mount the LV anywhere until it is reactivated:
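lvchange -a y /dev/vg0/lvol1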