User:Plamen/LVMbg


LVM (Logical Volume Manager) is software that allows physical devices, PVs (Physical Volumes), to be combined into one or more VGs (Volume Groups). A physical device can be part of a disk, a whole disk, or any other device for which the kernel provides a way to read and write data. The devices can be combined in various ways, from a simple set of disks to a full RAID.

Installation

To be able to use LVM, you need device mapper support enabled in the kernel:

KERNEL
Device Drivers  --->
   Multiple devices driver support (RAID and LVM)  --->
       <*> Device mapper support
           <M> Crypt target support
           <M> Snapshot target
           <M> Mirror target
            <M> Multipath target
                <M> I/O Path Selector based on the number of in-flight I/Os
                <M> I/O Path Selector based on the service time
Note
Which modules you select depends on what you plan to use; some of the modules are required for #LVM2_Snapshots, #LVM2_MIRROR, #LVM2_Stripeset and encryption.
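
To check whether the currently running kernel already provides these options, one way (assuming the kernel was built with CONFIG_IKCONFIG_PROC; otherwise inspect /usr/src/linux/.config instead) is:

root #zgrep -E 'CONFIG_DM_(CRYPT|SNAPSHOT|MIRROR|MULTIPATH)' /proc/config.gz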

sys-fs/lvm2 is supported by Gentoo Linux and has the following USE flags:

  • clvm = Allows the creation of lvm2 clusters.
  • cman = Cman support for lvm clusters.
  • lvm1 = Enables lvm1 support.
  • static = Installs a statically linked lvm2, to be used in an initramfs.
  • readline = Enables support for libreadline, the GNU line-editing library. You most likely want this.
  • selinux = Enables Security Enhanced Linux (SELinux) support.
  • static-libs = Installs static libraries.

To install lvm2, run the following command:

root #emerge lvm2

You also need to add the package's init script to the boot runlevel:

root #/etc/init.d/lvm start && rc-update add lvm boot
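
To verify that the service is now part of the boot runlevel (an optional check):

root #rc-update show boot | grep lvm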

The configuration file is located at:

CODE Configuration Files
/etc/lvm/lvm.conf
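
Among other things, the filter setting in this file controls which devices LVM scans. A minimal illustration (not the shipped defaults; the exact contents depend on the installed lvm2 version) could look like:

FILE /etc/lvm/lvm.conf
devices {
    # Example only: scan /dev/sd* devices and reject everything else
    filter = [ "a|/dev/sd.*|", "r|.*|" ]
}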

Usage

LVM organizes disk space in three layers:

  • whole disks, disk partitions, RAID systems and so on are initialized as Physical Volumes (PV)
  • the Physical Volumes (PV) are grouped into Volume Groups (VG)
  • the Volume Groups (VG) are divided into Logical Volumes (LV)

PV (Physical Volume)

The Physical Volumes are the actual hardware or storage systems that LVM uses to store the data.

Partition

The partition type for LVM is 8e (Linux LVM):

root #fdisk /dev/sdX

With fdisk, you can create a partition with the n key and then change its type with t to 8e. This will create a primary partition of type 8e (Linux LVM) on the disk /dev/sdX.
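
A session might look roughly like the following sketch (the prompt wording varies between fdisk versions, and /dev/sdX is a placeholder):

Command (m for help): n      (create a new partition; accept the defaults for a single primary partition)
Command (m for help): t      (change the partition type)
Hex code (type L to list all codes): 8e
Command (m for help): w      (write the partition table and exit)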

Note
This step is not required, since LVM can work directly with the whole disk. In fact, due to some limitations of the MBR and GPT partition tables, it is recommended to use the whole disk rather than creating a partition

Create PV

The following command will create a Physical Volume (PV) on each of the disks /dev/sdX and /dev/sdY:

root #pvcreate /dev/sd[X-Y]1

List PV

The following command lists all active PVs in the system:

root #pvdisplay

You can also scan for PVs, in case of problems with their initialization:

root #pvscan

Remove PV

LVM automatically distributes the data across all PVs, unless explicitly configured otherwise. To move the data that may reside on a Physical Volume, run:

root #pvmove -v /dev/sdX1

This operation can take a long time; when it finishes, all data from /dev/sdX1 will have been moved to other Physical Volumes. First you have to remove the PV from the Volume Group (VG) and then you can remove it:

root #vgreduce vg0 /dev/sdX1 && pvremove /dev/sdX1
Note
If you used a whole disk, you must remove the PV from the disk before you can create a partition table on it

VG (Volume Group)

A Volume Group (VG) consists of one or more Physical Volumes (PV) and appears as /dev/<VG name>/ in the device file system.

Create VG

The following command creates a Volume Group (VG) named vg0 on the two previously created Physical Volumes (PV) /dev/sdX1 and /dev/sdY1:

root #vgcreate vg0 /dev/sd[X-Y]1

List VG

The following command lists all active Volume Groups in the system:

root #vgdisplay

You can also scan for Volume Groups:

root #vgscan

Extend VG

With the following command you can add a Physical Volume (PV) to an existing Volume Group; vg0 is the name of the group and /dev/sdZ1 is the device to be added:

root #vgextend vg0 /dev/sdZ1

Reduce VG

Before removing a Physical Volume (PV) from the group, you must move its data to other devices with the following command:

root #pvmove -v /dev/sdX1

Once the command above finishes, you can remove the device from the Volume Group:

root #vgreduce vg0 /dev/sdX1

Remove VG

Before you can remove a Volume Group, you must remove all Logical Volumes (LV) and snapshots in it. Only one Physical Volume may remain in the Volume Group. The following command removes the Volume Group vg0:

root #vgremove vg0

LV (Logical Volume)

Logical Volumes (LV) are created and managed in Volume Groups (VG). Once created, they show up as /dev/<VG name>/<LV name> and can be used like normal partitions.

Create LV

With the following command, we create a Logical Volume (LV) named lvol1 in Volume Group (VG) vg0 with a size of 150MB:

root #lvcreate -L 150M -n lvol1 vg0

There are other useful options to set the size of a new LV (see the examples after this list):

  • -l 100%FREE = use the maximum free space available in the VG
  • -l 50%VG = use 50% of the total size of the VG
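
For instance, the following sketch creates an LV that takes half of VG vg0 and a second LV that fills the remaining free space (lvol2 and lvol3 are made-up names for illustration):

root #lvcreate -l 50%VG -n lvol2 vg0
root #lvcreate -l 100%FREE -n lvol3 vg0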

List LV

The following command lists all Logical Volumes (LV) in the system:

root #lvdisplay

You can also scan for LVs in the system, to troubleshoot improperly created or lost LVs:

root #lvscan

Extend LV

With the following command, we can extend the Logical Volume (LV) named lvol1 in Volume Group (VG) vg0 to 500MB:

root #lvextend -L500M /dev/vg0/lvol1
Note
Use -L+350M to increase the current size of an LV by 350MB

Once the LV is extended, we need to grow the file system as well (in this example we used ext4 and the LV is mounted to /mnt/data):

Note
Some file systems, like ext4, support online resizing; otherwise you have to unmount the file system first
root #resize2fs /dev/vg0/lvol1 500M

Reduce LV

Before we can reduce the size of our Logical Volume (LV) without corrupting existing data, we have to shrink the file system on it. In this example we use ext4; the LV needs to be unmounted to shrink the file system:

root #umount /mnt/data
root #e2fsck -f /dev/vg0/lvol1
root #resize2fs /dev/vg0/lvol1 150M

Now we are ready to reduce the size of our LV:

root #lvreduce -L150M /dev/vg0/lvol1
Note
Use -L-350M to reduce the current size of an LV by 350MB

LV Permissions

Logical Volumes (LV) can be set to be read-only storage devices:

root #lvchange -p r /dev/vg0/lvol1

The LV needs to be remounted for the changes to take effect:

root #mount -o remount /dev/vg0/lvol1

To set the LV to be read/write again:

root #lvchange -p rw /dev/vg0/lvol1 && mount -o remount /dev/vg0/lvol1

Remove LV

Before we remove a Logical Volume (LV), we should unmount and deactivate it, so that no further write activity can take place:

root #umount /dev/vg0/lvol1 && lvchange -a n /dev/vg0/lvol1

The following command removes the LV named lvol1 from VG named vg0:

root #lvremove /dev/vg0/lvol1

Examples

We can create some scenarios using loopback devices, so no real storage devices are used.

Preparation

First we need to make sure the loopback module is loaded. If you want to play around with partitions, use the following option:

root #modprobe -r loop && modprobe loop max_part=63
Note
You cannot reload the module if it is built into the kernel

Now we need to either tell LVM not to use udev to scan for devices, or change the filters in /etc/lvm/lvm.conf. In this case we just temporarily do not use udev:

FILE /etc/lvm/lvm.conf
obtain_device_list_from_udev = 0
Important
This is for testing only; change the setting back when dealing with real devices, since using udev is much faster

We create some image files that will become our storage devices (this uses ~6GB of real hard drive space):

root #mkdir /var/lib/lvm_img
root #dd if=/dev/zero of=/var/lib/lvm_img/lvm0.img bs=1024 count=2097152
root #dd if=/dev/zero of=/var/lib/lvm_img/lvm1.img bs=1024 count=2097152
root #dd if=/dev/zero of=/var/lib/lvm_img/lvm2.img bs=1024 count=2097152

Check which loopback devices are available:

root #losetup -a

We assume all loopback devices are available and create our hard drives:

root #losetup /dev/loop0 /var/lib/lvm_img/lvm0.img
root #losetup /dev/loop1 /var/lib/lvm_img/lvm1.img
root #losetup /dev/loop2 /var/lib/lvm_img/lvm2.img

Now we can use /dev/loop[0-2] as we would use any other hard drive in the system.

Note
On the next reboot, all the loopback devices will be released and the folder /var/lib/lvm_img can be deleted
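
To clean up manually without a reboot, a rough sketch (assuming the example VG vg0 created in the sections below is no longer needed and nothing on it is still mounted) is:

root #vgchange -a n vg0
root #losetup -d /dev/loop0 /dev/loop1 /dev/loop2
root #rm -r /var/lib/lvm_img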

Two Hard Drives

In this example, we will initialize two hard drives as PVs and then create the VG vg0:

root #pvcreate /dev/loop[0-1]
root #vgcreate vg0 /dev/loop[0-1]

Now let's create the LV lvol1 in our VG vg0 and take the maximum space available:

root #lvcreate -l 100%FREE -n lvol1 vg0

Create the file system and mount it to /mnt/data:

root #mkfs.ext4 /dev/vg0/lvol1
root #mount /dev/vg0/lvol1 /mnt/data

Now we have the capacity of 2GB from each hard drive available in /mnt/data as one 4GB device.

Note
The same applies to RAID systems; if you want to create one VG, use /dev/md[X-Y] instead

/etc/fstab

Here is an example of an entry in /etc/fstab (using ext4):

FILE /etc/fstab
/dev/vg0/lvol1  /mnt/data  ext4  noatime  0 2

LVM2 MIRROR

We use two hard drives and create our LV lvol1 like in the first example. This time we use 40% of the size of our VG vg0, because we need some space in the VG for the MIRROR and log files:

root #pvcreate /dev/loop[0-1]
root #vgcreate vg0 /dev/loop[0-1]
root #lvcreate -l 40%VG -n lvol1 vg0
root #mkfs.ext4 /dev/vg0/lvol1
root #mount /dev/vg0/lvol1 /mnt/data

To create our copy of /dev/vg0/lvol1 on the PV /dev/loop1, use the following command:

root #lvconvert -m1 /dev/vg0/lvol1 --corelog /dev/loop1

LVM will now ensure that a full copy (MIRROR) of /dev/vg0/lvol1 exists on /dev/loop1 and is not distributed between other PVs.

Note
This is very I/O intensive; --corelog keeps the mirror log in memory instead of on a separate device
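
To verify that the mirror was created and to see on which PVs its images reside, the LV layout can be inspected (a rough check; the exact columns and sub-LV names depend on the LVM version):

root #lvs -a -o +devices vg0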

To remove the MIRROR:

root #lvconvert -m0 /dev/vg0/lvol1

If one half of the MIRROR fails, the other one will automatically be converted into a non-mirrored LV (it loses the mirror attribute). LVM is different from Linux RAID1 in that it does not read from both mirrored images, so there is no performance increase.

LVM2 Snapshots

A snapshot is an LV that acts as a copy of another LV: it records the changes made to the original LV so that it can still present the content of that LV as it was in an earlier state. We once again use our two hard drives and create LV lvol1, this time with 60% of VG vg0:

root #pvcreate /dev/loop[0-1]
root #vgcreate vg0 /dev/loop[0-1]
root #lvcreate -l 60%VG -n lvol1 vg0
root #mkfs.ext4 /dev/vg0/lvol1
root #mount /dev/vg0/lvol1 /mnt/data

Now we create a snapshot of lvol1 named 08092011_lvol1 and give it 10% of VG vg0:

root #lvcreate -l 10%VG -s -n 08092011_lvol1 /dev/vg0/lvol1
Important
If a snapshot exceeds its maximum size, it disappears
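
To keep an eye on how full a snapshot is, check its allocation regularly; in lvs output the Data% column reports how much of the snapshot space is used (a simple check, output varies by LVM version):

root #lvs vg0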

Mount our snapshot somewhere else:

root #mkdir /mnt/08092011_data
root #mount /dev/vg0/08092011_lvol1 /mnt/08092011_data

We can now access the data of lvol1 as it was in a previous state.
LVM2 snapshots are writeable LVs, so we could use them to let a project go into two different directions:

root #lvcreate -l 10%VG -s -n project1_lvol1 /dev/vg0/lvol1
root #lvcreate -l 10%VG -s -n project2_lvol1 /dev/vg0/lvol1
root #mkdir /mnt/project1 /mnt/project2
root #mount /dev/vg0/project1_lvol1 /mnt/project1
root #mount /dev/vg0/project2_lvol1 /mnt/project2

Now we have three different versions of LV lvol1: the original and two snapshots, which can be used in parallel, with changes written to the snapshots.

Note
the original LV lvol1 cannot be reduced in size or removed if snapshots of it exist. Snapshots can be increased in size without growing the file system on them, but they cannot exceed the size of the original LV

LVM2 Stripeset

A stripeset is the same as RAID0: data is written to several devices at the same time to increase performance. In LVM2 it is possible to distribute an LV over several PVs for the same effect. We create three PVs and then the VG vg0:

root #pvcreate /dev/loop[0-2]
root #vgcreate vg0 /dev/loop[0-2]

VG vg0 consists of three different hard drives and now we can create our LV and spread it over them:

root #lvcreate -i 3 -l 20%VG -n lvm_stripe vg0

The option -i 3 indicates that we want to spread it over 3 PVs in our VG vg0:

root #pvscan
Logging initialised at Thu Sep  8 22:19:27 2011
    Set umask from 0022 to 0077
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
    Walking through all physical volumes
  PV /dev/loop0   VG vg0   lvm2 [2.00 GiB / 1.60 GiB free]
  PV /dev/loop1   VG vg0   lvm2 [2.00 GiB / 1.60 GiB free]
  PV /dev/loop2   VG vg0   lvm2 [2.00 GiB / 1.60 GiB free]
  Total: 3 [5.99 GiB] / in use: 3 [5.99 GiB] / in no VG: 0 [0   ]
    Wiping internal VG cache

On each PV, 400MB was reserved for our LV lvm_stripe in VG vg0.
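
To confirm how the LV is striped across the PVs, the segment layout can be displayed (a quick check; the output format varies with the LVM version):

root #lvs --segments vg0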

LVM2 RAID5

LVM2 can use its internal mechanisms to create stripesets with parity in a similar way as RAID5 does, but in this case you need at least 3 different PVs:

root #pvcreate /dev/loop[0-2]
root #vgcreate vg0 /dev/loop[0-2]

VG vg0 consists of three different hard drives and now we can create our LV and spread it over them:

root #lvcreate --type raid5 -l 20%VG -i 2 -I 64 -n lvm_raid5 vg0

The option -i 2 indicates that we want to create 2 stripes + 1 parity stripe (so we need at least 3 devices):

root #pvscan
Logging initialised at Thu Sep  8 22:19:27 2011
    Set umask from 0022 to 0077
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
    Walking through all physical volumes  
  PV /dev/loop0   VG vg0      lvm2 [2,00 GiB / 1,39 GiB free]
  PV /dev/loop1   VG vg0      lvm2 [2,00 GiB / 1,39 GiB free]
  PV /dev/loop2   VG vg0      lvm2 [2,00 GiB / 1,39 GiB free]
  Total: 3 [5.99 GiB] / in use: 3 [5.99 GiB] / in no VG: 0 [0   ]
    Wiping internal VG cache

On each PV, 600MB was reserved for our LV lvm_raid5 in VG vg0.
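
As with the stripeset, the layout of the RAID5 LV and its internal sub-LVs can be examined (a rough check; the rimage/rmeta sub-LV names are generated by LVM and depend on its version):

root #lvs -a -o +segtype,devices vg0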

Troubleshooting

LVM has only MIRROR and snapshots to provide some level of redundancy. However, there are certain situations where one might be able to restore a lost PV or LV.

vgcfgrestore

The directories /etc/lvm/archive and /etc/lvm/backup contain files that record the metadata changes in LVM. To see which states of the VG are available to be restored:

root #vgcfgrestore --list vg0
File:         /etc/lvm/archive/vg0_00002-923283887.vg
  VG name:      vg0
  Description:  Created *before* executing 'lvremove /dev/vg0/lvol1'
  Backup Time:  Sat Sep 10 20:02:05 2011

  File:         /etc/lvm/backup/vg0
  VG name:      vg0
  Description:  Created *after* executing 'lvremove /dev/vg0/lvol1'
  Backup Time:  Sat Sep 10 20:02:05 2011

In this example we removed the LV lvol1 by accident and want it back in our VG vg0:

root #vgcfgrestore -f /etc/lvm/archive/vg0_00002-923283887.vg vg0
Important
It is not the data in the LV itself that is restored; the LV is just recreated with the same layout on the PVs as before. There is a good chance that the files are still on the PVs if they have not been overwritten yet

Replace PV

We want to replace a PV and then restore the metadata to a new one, so that we reach the same state as before the device stopped working. To display all PV in a VG (even lost ones) use the following command:

root #vgdisplay --partial --verbose

In this example I let /dev/loop1 (unknown device) fail:

root #vgdisplay --partial --verbose
--- Physical volumes ---
  PV Name               unknown device
  PV UUID               3B0yFN-zKDY-yICo-fT0M-nJwS-3wZf-UEjvd1
  PV Status             allocatable
  Total PE / Free PE    511 / 0

  PV Name               /dev/loop2
  PV UUID               4AuFVX-PWnX-qpWl-fpoS-euCV-SISW-IX6ceF
  PV Status             allocatable
  Total PE / Free PE    511 / 511

Using the UUID, we can initialize the new hardware so that it is integrated into the VG exactly as the old device was:

root #pvcreate --restorefile /etc/lvm/archive/<METADATA-FILE> --uuid <UUID> /dev/loop3

Then we restore the VG to the state before the PV failed:

root #vgcfgrestore -f /etc/lvm/archive/<METADATA-FILE> vg0

Now you can replay your file backup if you haven't already restored the PV itself.

Deactivate LV

You can deactivate an LV with the following command:

root #umount /dev/vg0/lvol1
root #lvchange -a n /dev/vg0/lvol1

You will not be able to mount the LV anywhere until it is reactivated:

root #lvchange -a y /dev/vg0/lvol1

Links