LVM

LVM (Logical Volume Manager) allows administrators to create meta-devices that provide an abstraction layer between a file system and the physical storage used underneath. The meta-devices (on which the file systems reside) are logical volumes, which draw their storage from storage pools called volume groups. A volume group is provisioned with one or more physical volumes, which are the actual devices on which the data is stored.

Physical volumes can be partitions, whole SATA disks grouped as JBOD (Just a Bunch Of Disks), RAID systems, iSCSI, Fibre Channel, eSATA, etc.

Installation
LVM is managed by kernel-level drivers and by user-space applications used to manage the LVM configuration.

Kernel
Activate the following kernel options:
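As a sketch, LVM relies on the device-mapper options; the exact menu locations vary between kernel versions:

```
Device Drivers --->
    Multiple devices driver support (RAID and LVM) --->
        <*> Device mapper support
        <*>   Snapshot target
        <*>   Mirror target
        <*>   Thin provisioning target
```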

Software
Install:
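On Gentoo, LVM is provided by the sys-fs/lvm2 package:

```
root # emerge --ask sys-fs/lvm2
```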

Configuration
Configuring LVM is done on several levels:
 * 1) Management of LVs, PVs and VGs through the management utilities
 * 2) Fine-tuning of the LVM subsystem through the configuration file
 * 3) Service management at the distribution level
 * 4) Setup through an initial RAM file system

Management of the physical and logical volumes, as well as of the volume groups, is covered in the Usage chapter.

LVM configuration file
LVM has an extensive configuration file. Most users will not need to modify anything in this file in order to start using LVM.

Service management
Gentoo provides the LVM service to automatically detect volume groups and logical volumes.

The service can be managed through the init system.

openrc
To start LVM manually:

To start LVM at boot time:
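Assuming the OpenRC service is named lvm (as shipped by sys-fs/lvm2), the two operations above would be:

```
root # /etc/init.d/lvm start
root # rc-update add lvm boot
```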

systemd
To start LVM manually:

To start LVM at boot time:
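Assuming the unit name is lvm2-monitor.service, the two operations above would be:

```
root # systemctl start lvm2-monitor.service
root # systemctl enable lvm2-monitor.service
```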

Using LVM in an initramfs
Most bootloaders cannot boot directly from LVM. Neither GRUB Legacy nor LILO can. GRUB2 CAN boot from a linear LVM logical volume, from a mirrored logical volume and probably from some kinds of RAID logical volumes. No bootloader currently supports thin logical volumes.

Because of this, it is recommended to use a non-LVM boot partition and mount the LVM root partition from an initramfs. This initramfs can be generated automatically with genkernel, genkernel-next or dracut:


 * genkernel can boot from all types except thin volumes (as it neither builds nor copies the thin-provisioning-tools binaries from the build host) and possibly RAID10 (RAID10 support requires LVM2 2.02.98, but genkernel builds version 2.02.89; however, if static binaries are available, those can be copied in)
 * genkernel-next can boot from all volume types, but needs a recent app-misc/pax-utils, otherwise the thin-volume binaries will be broken (see the bug)
 * dracut should be able to boot from all types, but only includes thin-volume support in the initramfs if the host it runs on has a thin root volume.

Genkernel/Genkernel-next
Emerge one of the two packages. The static USE flag can also be enabled on the package so that genkernel uses the system binaries (otherwise it builds its own private copy). The following example builds only an initramfs (not a complete kernel) and enables LVM support.
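A sketch of such an invocation (see the genkernel man page for the exact options available in the installed version):

```
root # genkernel --lvm initramfs
```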

The genkernel man page describes other options, which depend on the system's requirements.

The initrd requires parameters to tell it how to start LVM. They are passed like any other kernel parameter. For example:
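With a genkernel initramfs, the dolvm flag enables LVM support; the root volume path here is illustrative:

```
dolvm real_root=/dev/vg0/rootlv
```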

Dracut
The package was ported from the RedHat project and is a similar tool for generating an initramfs. Since it is currently in testing (~arch), users will need to accept its keywords in order to emerge it. Before doing so, the appropriate variable should be added to the configuration. Other modules may also be worth installing; please have a look at Dracut. The following command usually generates a working default initramfs.
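A minimal sketch, building an initramfs for the running kernel with the defaults:

```
root # dracut
```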

The initrd requires parameters to tell it how to start LVM. They are passed like any other kernel parameter. For example:
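dracut uses the rd.lvm.* kernel parameters; for example, to activate the volume group vg0 at boot (names are illustrative):

```
root=/dev/vg0/rootlv rd.lvm.vg=vg0
```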

For a complete list of dracut options, please read the corresponding section in the Dracut manual.

Usage
LVM organizes storage on three different levels as follows:
 * hard drives, partitions, RAID systems and other means of storage are initialized as physical volumes (PVs)
 * physical volumes (PVs) are grouped into volume groups (VGs)
 * logical volumes (LVs) are managed within volume groups (VGs)

PV (Physical Volume)
Physical volumes are the actual hardware or storage system LVM builds upon.

Partitioning
The partition type for LVM is 8e (Linux LVM).

For example, to set the type for a partition on a disk using fdisk:

In fdisk, create partitions using the n key, then change the partition type with the t key to 8e.
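A hypothetical fdisk session on /dev/sda (the device name is an assumption) would look roughly like:

```
root # fdisk /dev/sda
Command (m for help): n
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): 8e
Command (m for help): w
```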

Creating a PV
Physical volumes can be created/initialized with the pvcreate command.

For example, the following command creates a physical volume on the first primary partition of two disks:
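For instance, assuming the disks are /dev/sda and /dev/sdb (device names are assumptions):

```
root # pvcreate /dev/sda1 /dev/sdb1
```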

Listing PVs
With the pvdisplay command, information about all active physical volumes on the system can be obtained.

If more physical volumes should show up, pvscan can detect inactive physical volumes and activate them.

Removing a PV
LVM automatically distributes data across all available physical volumes (unless told otherwise) using a linear approach. If a requested logical volume (inside a volume group) is smaller than the amount of free space on a single physical volume, then all the space for it is claimed from that (single) physical volume in a contiguous fashion. This is done for performance reasons.

If a physical volume needs to be removed from a volume group, its data first has to be moved away. With the pvmove command, all data on a physical volume is moved to other physical volumes within the same volume group.

This operation can take a while depending on the amount of data to be moved. Once it has finished, there should be no data left on the device. Verify with pvdisplay that no logical volume uses the physical volume anymore.

The next step is to remove the physical volume from the volume group with vgreduce, after which the device can be "unselected" as a physical volume using pvremove:
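The whole sequence, assuming /dev/sdb1 is the physical volume being retired from vg0 (names are assumptions):

```
root # pvmove -v /dev/sdb1
root # vgreduce vg0 /dev/sdb1
root # pvremove /dev/sdb1
```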

VG (Volume Group)
A volume group (VG) groups a number of physical volumes and shows up in the device file system. The name of the volume group is chosen by the administrator.

Creating a VG
The following command creates a volume group called vg0 with two physical volumes assigned to it:
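For instance, assuming the two physical volumes are /dev/sda1 and /dev/sdb1 (device names are assumptions):

```
root # vgcreate vg0 /dev/sda1 /dev/sdb1
```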

Listing VGs
To list all active volume groups, use the vgdisplay command:

If a volume group is missing, use the vgscan command to locate it:

Extending a VG
Volume groups pool physical volumes, allowing administrators to use a pool of storage resources to create file systems. When a volume group runs low on storage resources, it needs to be extended with additional physical volumes.

The following example extends the volume group vg0 with an additional physical volume:

Remember that the physical volume first needs to be initialized!
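A sketch, with /dev/sdc1 as the assumed new partition:

```
root # pvcreate /dev/sdc1
root # vgextend vg0 /dev/sdc1
```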

Reducing a VG
If physical volumes need to be removed from a volume group, all the data still in use on those physical volumes has to be moved to other physical volumes in the same volume group. As seen before, this is handled with the pvmove command, after which the physical volume can be removed from the volume group using vgreduce:

Removing a VG
If a volume group is no longer needed (or, in other words, the pool of storage resources it represents is no longer used and the physical volumes in it need to be freed for other purposes), then the volume group can be removed with vgremove. This only works if no logical volumes are defined in the volume group and all but one physical volume have already been removed from the pool.
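Removing the now-empty volume group vg0:

```
root # vgremove vg0
```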

LV (Logical Volume)
Logical volumes are the final meta-devices offered to the system, usually to create file systems on. They are created and managed inside volume groups and show up in the device file system. As with volume groups, the name is chosen by the administrator.

Creating an LV
To create a logical volume, the lvcreate command is used. The parameters to the command consist of the requested size of the logical volume (which cannot be larger than the free space in the volume group), the volume group from which the space is claimed, and the name of the logical volume to create.

In the example below, a logical volume named lvol1 is created in the volume group called vg0, with a size of 150MB:

It is possible to tell lvcreate to use all the free space in the volume group. This is done through the -l parameter, which selects a number of extents rather than a (human-readable) size. Logical volumes are split into logical extents, which are chunks of data inside a volume group. All extents in a volume group have the same size. With the -l parameter, lvcreate can be asked to use all available extents:

Next to FREE, the VG key can be used to denote the total size of the volume group.
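The command for this example would look like:

```
root # lvcreate -L 150M -n lvol1 vg0
```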
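For instance, to claim all remaining free extents of vg0:

```
root # lvcreate -l 100%FREE -n lvol1 vg0
```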

Listing LVs
To list all logical volumes, use the lvdisplay command:

If a logical volume is missing, the lvscan command can be used to scan all volume groups for logical volumes.

Extending an LV
When a logical volume needs more space, the lvextend command can be used to grow the space allocated to it.

For instance, to extend the logical volume lvol1 to a total of 500 MB:

It is also possible to use the size to be added rather than the total size:
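Both forms, using the assumed volume vg0/lvol1 (the first sets a total size, the second adds to the current size):

```
root # lvextend -L 500M /dev/vg0/lvol1
root # lvextend -L +350M /dev/vg0/lvol1
```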

An extended logical volume does not immediately provide the additional storage to the end users. For that, the file system on top of the logical volume needs to be grown as well. Not all file systems allow online resizing, so check the documentation of the file system in question for more information.

For instance, to resize an ext4 file system to become 500MB in size:
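A sketch, assuming the file system lives on vg0/lvol1:

```
root # resize2fs /dev/vg0/lvol1 500M
```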

Reduce LV
If a logical volume needs to be reduced in size, first shrink the file system itself. Not all file systems support online shrinking.

For instance, ext4 does not support online shrinking so the file system needs to be unmounted first. It is also recommended to do a file system check to make sure there are no inconsistencies:
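A sketch, assuming vg0/lvol1 is mounted at /srv/data (both names are assumptions):

```
root # umount /srv/data
root # e2fsck -f /dev/vg0/lvol1
root # resize2fs /dev/vg0/lvol1 150M
```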

With a reduced file system, it is now possible to reduce the logical volume as well:
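For instance, shrinking the logical volume to match the reduced file system:

```
root # lvreduce -L 150M /dev/vg0/lvol1
```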

LV Permissions
LVM supports permission states on the logical volumes.

For instance, a logical volume can be set to read-only using the lvchange command:
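A sketch, with the volume and mount point names assumed:

```
root # lvchange -p r /dev/vg0/lvol1
root # mount -o remount /srv/data
```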

The remount is needed as the change is not enforced immediately.

To mark the logical volume as writable again, use the rw permission bit:
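Again with assumed names:

```
root # lvchange -p rw /dev/vg0/lvol1
root # mount -o remount /srv/data
```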

Remove LV
Before removing a logical volume, make sure it is no longer mounted:

Deactivate the logical volume so that no further write activity can take place:

With the volume unmounted and deactivated, it can now be removed, freeing the extents allocated to it for use by other logical volumes in the volume group:
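The three steps above, assuming vg0/lvol1 is mounted at /srv/data (both names are assumptions):

```
root # umount /srv/data
root # lvchange -a n /dev/vg0/lvol1
root # lvremove /dev/vg0/lvol1
```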

Features
LVM provides quite a few interesting features for storage administrators, including (but not limited to)
 * thin provisioning (over-committing storage)
 * snapshot support
 * volume types with different storage allocation methods

Thin provisioning
Recent versions of LVM2 (2.02.89 and later) support "thin" volumes. Thin volumes are to block devices what sparse files are to file systems. Thus, a thin logical volume within a pool can be "over-committed": its presented size can be larger than the allocated size - it can even be larger than the pool itself. Just like with a sparse file, the extents are allocated as the block device gets populated. If the file system has discard support, extents are freed again as files are removed, reducing the space utilization of the pool.

Within LVM, such a thin pool is a special type of logical volume, which itself can host logical volumes.

Creating a thin pool
Each thin pool has metadata associated with it, which counts towards the thin pool size. LVM computes the size of the metadata from the size of the thin pool as pool_chunks * 64 bytes or 2 MiB, whichever is larger. The administrator can select a different metadata size as well.

To create a thin pool, add the --type thin-pool --thinpool thin_pool parameters to lvcreate:
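A sketch, with the pool and volume group names taken from the surrounding text:

```
root # lvcreate -L 150M --type thin-pool --thinpool thin_pool vg0
```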

The above example creates a thin pool called thin_pool with a total size of 150 MB. This is the real allocated size for the thin pool (and thus the total amount of actual storage that can be used).

To explicitly ask for a certain metadata size, use the --metadatasize parameter:

Due to the metadata that is added to the thin pool, the intuitive way of using all available space in a volume group for a logical volume does not work (see LVM bug 812726):

Note the thin pool does not have an associated device node like other LVs.

Creating a thin logical volume
A thin logical volume is a logical volume inside the thin pool (which itself is a logical volume). As thin logical volumes are sparse, a virtual size instead of a physical size is specified using the -V parameter:
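A sketch, using the thin pool from the previous section:

```
root # lvcreate -T vg0/thin_pool -V 300M -n lvol1
```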

In this example, the (thin) logical volume lvol1 is exposed as a 300MB-sized device, even though the underlying pool only holds 150MB of real allocated storage.

It is also possible to create both the thin pool as well as the logical volume inside the thin pool in one command:

Listing thin pools and thin logical volumes
Thin pools and thin logical volumes are special types of logical volumes and, as such, are displayed by the lvdisplay command. The lvscan command will also detect these logical volumes.

Extending a thin pool
The thin pool is expanded like a non-thin logical volume, using lvextend. For instance:
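Using the assumed pool vg0/thin_pool:

```
root # lvextend -L 500M vg0/thin_pool
```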

Extending a thin logical volume
A thin logical volume is expanded just like a regular one:

Note that the lvextend command uses the -L option (or -l if extent counts are used) and not the "virtual size" option that was used during creation.

Reducing a thin pool
Currently, LVM cannot reduce the size of the thin pool. See LVM bug 812731.

Reducing a thin logical volume
Thin logical volumes are reduced just like regular logical volumes.

For instance:

Note that the lvreduce command uses the -L option (or -l if extent counts are used) and not the "virtual size" option that was used during creation.

Removing thin pools
Thin pools cannot be removed until all the thin logical volumes inside it are removed.

When a thin pool no longer serves any thin logical volume, it can be removed with the lvremove command:
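Using the assumed pool vg0/thin_pool:

```
root # lvremove vg0/thin_pool
```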

LVM2 snapshots and thin snapshots
A snapshot is a logical volume that acts as copy of another logical volume. It displays the state of the original logical volume at the time of snapshot creation.

Creating a snapshot logical volume
A snapshot logical volume is created using the -s option of lvcreate. Snapshot logical volumes are still given allocated storage, as LVM "registers" all changes made to the original logical volume and stores these changes in the storage allocated for the snapshot. When the snapshot state is queried, LVM starts from the original logical volume and then checks all the registered changes, "undoing" them before showing the result to the user.

A snapshot logical volume therefore "grows" at the rate that changes are made to the original logical volume. When the storage allocated for the snapshot is completely used up, the snapshot is automatically removed from the system.
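For instance, to create such a snapshot of vg0/lvol1, reserving 10% of the volume group's extents for it:

```
root # lvcreate -l 10%VG -s -n 20140412_lvol1 /dev/vg0/lvol1
```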

The above example creates a snapshot logical volume called 20140412_lvol1, based on the logical volume lvol1 in volume group vg0. It uses 10% of the space (extents actually) allocated to the volume group.

Accessing a snapshot logical volume
Snapshot logical volumes can be mounted like regular logical volumes. They are not even restricted to read-only operations - it is possible to modify snapshots and thus use them for things such as testing changes before applying them to a "production" file system.

As long as snapshot logical volumes exist, the regular/original logical volume cannot be reduced in size or removed.

LVM thin snapshots
To create a thin snapshot, the lvcreate command is used with the -s option. No size declaration needs to be passed on:
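A sketch, with the snapshot name assumed:

```
root # lvcreate -s vg0/lvol1 -n 20140413_lvol1
```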

Thin logical volume snapshots have the same size as their original thin logical volume, and use a physical allocation of 0 just like all other thin logical volumes.

It is also possible to take snapshots of snapshots:

Thin snapshots have several advantages over regular snapshots. First, thin snapshots are independent of their original logical volume once created. The original logical volume can be shrunk or deleted without affecting the snapshot. Second, thin snapshots can be efficiently created recursively (snapshots of snapshots) without the "chaining" overhead of regular recursive LVM snapshots.

Rolling back to snapshot state
To rollback the logical volume to the version of the snapshot, use the following command:
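Assuming the snapshot from the earlier example, merging it back into its origin:

```
root # lvconvert --merge /dev/vg0/20140412_lvol1
```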

This might take a couple of minutes, depending on the size of the volume.

Rolling back thin snapshots
For thin volumes, the merge operation does not work. Instead, delete the original logical volume and rename the snapshot:
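A sketch, with the names taken from the earlier snapshot example:

```
root # lvremove vg0/lvol1
root # lvrename vg0 20140412_lvol1 lvol1
```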

Different storage allocation methods
LVM supports different allocation methods for storage:
 * linear volumes (which is the default)
 * mirrored volumes (in a more-or-less active/standby setup)
 * striping (RAID0)
 * mirrored volumes (RAID1 - which is more an active/active setup)
 * striping with parity (RAID4 and RAID5)
 * striping with double parity (RAID6)
 * striping and mirroring (RAID10)

Linear volumes
Linear volumes are the most common kind of LVM volumes. LVM will attempt to allocate the logical volume to be as physically contiguous as possible. If there is a physical volume large enough to hold the entire logical volume, then LVM will allocate it there, otherwise it will split it up into as few pieces as possible.

The commands introduced earlier on to create volume groups and logical volumes create linear volumes.

Because linear volumes have no special requirements, they are the easiest to manipulate and can be resized and relocated at will. If a logical volume is allocated across multiple physical volumes, and any of the physical volumes become unavailable, then that logical volume cannot be started anymore and will be unusable.

Mirrored volumes
LVM supports mirrored volumes, which provide fault tolerance in the event of drive failure. Unlike RAID1, there is no performance benefit - all reads and writes are delivered to a single side of the mirror.

To keep track of the mirror state, LVM requires a log to be kept. It is recommended (and often even mandatory) to place this log on a physical volume that does not contain any of the mirrored logical volumes. There are three kinds of logs that can be used for mirrors:


 * 1) Disk is the default log type. All changes made are logged into extra metadata extents, which LVM manages. If a device fails, then the changes are kept in the log until the mirror can be restored again.
 * 2) Mirror logs are disk logs that are themselves mirrored.
 * 3) Core mirror logs record the state of the mirror in memory only. LVM will have to rebuild the mirror every time it is activated. This type is useful for temporary mirrors.

To create a logical volume with a single mirror, pass the -m 1 argument (to select standard mirroring) with optionally --mirrorlog to select a particular log type:
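A sketch, with the size and names assumed:

```
root # lvcreate -L 150M -m 1 --mirrorlog disk --nosync -n lvol1 vg0
```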

The -m 1 tells LVM to create one (additional) mirror, thus requiring 2 physical volumes. The --nosync option is an optimization - without it, LVM will try to synchronize the mirror by copying empty sectors from one logical volume to another.

It is possible to create a mirror of an existing logical volume:

The -b option does the conversion in the background as this can take quite a while.

To remove a mirror, set the number of mirrors (back) to 0:
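For instance, on the assumed volume vg0/lvol1:

```
root # lvconvert -m 0 vg0/lvol1
```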

If part of the mirror is unavailable (usually because the disk containing the physical volume has failed), the volume group will need to be brought up in degraded mode:
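A sketch of activating the volume group with missing devices (the exact flag may differ between LVM versions):

```
root # vgchange -ay --partial vg0
```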

On the first write, LVM will notice the mirror is broken. The default policy ("remove") is to automatically reduce/break the mirror according to the number of pieces available. A 3-way mirror with a missing physical volume will be reduced to 2-way mirror; a 2-way mirror will be reduced to a regular linear volume. If the failure is only transient, and the missing physical volume returns after LVM has broken the mirror, the mirrored logical volume will need to be recreated on it.

To recover the mirror, the failed physical volume needs to be removed from the volume group, and a replacement physical volume needs to be added (or if the volume group has a free physical volume, it can be created on that one). Then the mirror can be recreated with lvconvert at which point the old physical volume can be removed from the volume group:
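A sketch of the recovery, with /dev/sdd1 as the assumed replacement physical volume:

```
root # vgextend vg0 /dev/sdd1
root # lvconvert -m 1 vg0/lvol1 /dev/sdd1
root # vgreduce --removemissing vg0
```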

It is possible to have LVM recreate the mirror with free extents on a different physical volume if one side fails. To accomplish that, set mirror_image_fault_policy to allocate in the LVM configuration file.

Thin mirrors
It is not (yet) possible to create a mirrored thin pool or thin volume directly. It is possible to create a mirrored thin pool by creating a normal mirrored logical volume and then converting the logical volume to a thin pool with lvconvert. 2 logical volumes are required: one for the thin pool and one for the thin metadata; the conversion process will merge them into a single logical volume.

Striping (RAID0)
Instead of a linear volume, where multiple contiguous physical volumes are appended, it is possible to create a striped or RAID0 volume for better performance. This alternates storage allocations across the available physical volumes.

To create a striped volume over three physical volumes:
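A sketch, with the size and names assumed:

```
root # lvcreate -L 150M -i 3 -n lvol1 vg0
```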

The -i option indicates over how many physical volumes the striping should be done.

It is possible to mirror a stripe set. The -i and -m options can be combined to create a striped mirror:
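For instance (size and names assumed):

```
root # lvcreate -L 150M -i 2 -m 1 -n lvol1 vg0
```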

This creates a 2 physical volume stripe set and mirrors it on 2 different physical volumes, for a total of 4 physical volumes. An existing stripe set can be mirrored with lvconvert.

A thin pool can be striped like any other logical volume. All the thin volumes created from the pool inherit that setting - do not specify it manually when creating a thin volume.

It is not possible to stripe an existing volume, nor reshape the stripes across more/less physical volumes, nor to convert to a different RAID level/linear volume. A stripe set can be mirrored. It is possible to extend a stripe set across additional physical volumes, but they must be added in multiples of the original stripe set (which will effectively linearly append a new stripe set).

Mirroring (RAID1)
Unlike RAID 0, which is striping, RAID 1 is mirroring, but implemented differently than the original LVM mirror. Under RAID1, reads are spread out across physical volumes, improving performance. RAID1 mirror failures do not cause I/O to block because LVM does not need to break it on write.

Any place where an LVM mirror could be used, a RAID1 mirror can be used instead. It is possible to have LVM create RAID1 mirrors instead of regular mirrors implicitly by setting mirror_segtype_default to raid1 in the LVM configuration file.

To create a logical volume with a single mirror:
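A sketch, with the size and names assumed:

```
root # lvcreate -L 150M --type raid1 -m 1 -n lvol1 vg0
```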

Note the difference from creating a mirror: no mirror log is specified, because RAID1 logical volumes do not have an explicit mirror log - it is built into the logical volume.

It is possible to convert an existing logical volume to RAID 1:

To remove a RAID 1 mirror, set the number of mirrors to 0:

If part of the RAID1 is unavailable (usually because the disk containing the physical volume has failed), the volume group will need to be brought up in degraded mode:

Unlike an LVM mirror, writing does NOT break the mirroring. If the failure is only transient and the missing physical volume returns, LVM will resync the mirror by copying over only the out-of-date segments instead of the entire logical volume. If the failure is permanent, then the failed physical volume needs to be removed from the volume group, and a replacement physical volume needs to be added (or, if the volume group has a free physical volume, the mirror can be recreated on a different PV). The mirror can then be repaired with lvconvert, and the old physical volume can be removed from the volume group:
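A sketch of the repair, with /dev/sdd1 as the assumed replacement physical volume:

```
root # vgextend vg0 /dev/sdd1
root # lvconvert --repair vg0/lvol1
root # vgreduce --removemissing vg0
```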

Thin RAID1
It is not (yet) possible to create a RAID 1 thin pool or thin volume. It is possible to create a RAID 1 thin pool by creating a normal mirrored logical volume and then converting the logical volume to a thin pool with lvconvert. 2 logical volumes are required: one for the thin pool and one for the thin metadata; the conversion process will then merge them into a single logical volume.

Striping with parity (RAID4 and RAID5)
RAID 0 is not fault-tolerant - if any of the physical volumes fail then the logical volume is unusable. By adding a parity stripe to RAID 0 the logical volume can still function if a physical volume is missing. A new physical volume can then be added to restore fault tolerance.

Stripe sets with parity come in 2 flavors: RAID4 and RAID5. Under RAID4, all the parity stripes are stored on the same physical volume. This can become a bottleneck because all writes hit that physical volume, and it gets worse the more physical volumes there are in the array. With RAID5, the parity data is distributed evenly across the physical volumes so none of them becomes a bottleneck. For that reason, RAID4 is rare and considered obsolete/historical. In practice, all stripe sets with parity are RAID5.

Only the data physical volumes are specified with -i; LVM adds one to it automatically for the parity. So for a 3 physical volume RAID5, -i 2 is passed on and not -i 3.
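A sketch of such a 3 physical volume RAID5 (size and names assumed):

```
root # lvcreate -L 150M --type raid5 -i 2 -n lvol1 vg0
```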

When a physical volume fails, then the volume group will need to be brought up in degraded mode:

The volume will work normally at this point, however this degrades the array to RAID 0 until a replacement physical volume is added. Performance is unlikely to be affected while the array is degraded - although it does need to recompute its missing data via parity, it only requires simple XOR for the parity block with the remaining data. The overhead is negligible compared to the disk I/O.

To repair the RAID5:
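A sketch, with /dev/sdd1 as the assumed replacement physical volume:

```
root # vgextend vg0 /dev/sdd1
root # lvconvert --repair vg0/lvol1
root # vgreduce --removemissing vg0
```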

It is possible to replace a still working physical volume in RAID5 as well:

The same restrictions of stripe sets apply to stripe sets with parity as well: it is not possible to enable striping with parity on an existing volume, nor reshape the stripes with parity across more/less physical volumes, nor to convert to a different RAID level/linear volume. A stripe set with parity can be mirrored. It is possible to extend a stripe set with parity across additional physical volumes, but they must be added in multiples of the original stripe set with parity (which will effectively linearly append a new stripe set with parity).

Thin RAID5 logical volumes
It is not (yet) possible to create a stripe set with parity (RAID5) thin pool or thin logical volume directly. It is possible to create a RAID5 thin pool by creating a normal RAID5 logical volume and then converting the logical volume into a thin pool with lvconvert. 2 logical volumes are required: one for the thin pool and one for the thin metadata; the conversion process will merge them into a single logical volume.

Striping with double parity (RAID6)
RAID 6 is similar to RAID 5, however RAID 6 can survive up to two physical volume failures, thus offering more fault tolerance than RAID5 at the expense of extra physical volumes.

Like RAID5, the -i option is used to specify the number of physical volumes to stripe across, excluding the 2 physical volumes used for parity. So for a 5 physical volume RAID6, pass -i 3 and not -i 5.
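A sketch of such a 5 physical volume RAID6 (size and names assumed):

```
root # lvcreate -L 150M --type raid6 -i 3 -n lvol1 vg0
```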

Recovery for RAID6 is the same as RAID5.

Thin RAID6 logical volumes
It is not (yet) possible to create a RAID6 thin pool or thin volumes directly. It is possible to create a RAID6 thin pool by creating a normal RAID6 logical volume and then converting the logical volume into a thin pool with lvconvert. 2 logical volumes are required: one for the thin pool and one for the thin metadata; the conversion process will merge them into a single logical volume.

LVM RAID10
RAID10 is a combination of RAID0 and RAID1. It is more powerful than RAID0+RAID1 as the mirroring is done at the stripe level instead of the logical volume level, and therefore the layout doesn't need to be symmetric. A RAID10 volume can tolerate at least a single missing physical volume, and possibly more.

Both the -i and -m options are specified: -i is the number of stripes and -m is the number of mirrors. Two stripes and 1 mirror requires 4 physical volumes.
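A sketch of such a 4 physical volume RAID10 (size and names assumed):

```
root # lvcreate -L 150M --type raid10 -i 2 -m 1 -n lvol1 vg0
```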

Thin RAID10
It is not (yet) possible to create a RAID10 thin pool or thin volumes directly. It is possible to create a RAID10 thin pool by creating a normal RAID10 logical volume and then converting the logical volume into a thin pool with lvconvert. 2 logical volumes are required: one for the thin pool and one for the thin metadata; the conversion process will merge them into a single logical volume.

Experimenting with LVM
It is possible to experiment with LVM without using real storage devices. To accomplish this, loopback devices are created.

First make sure to have the loopback module loaded.

Next configure LVM to not use udev to scan for devices:

Create some image files that will become the storage devices. The next example uses five files for a total of ~10 GB of real hard drive space:
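As a sketch (paths are arbitrary), the files can be created with truncate. Note this differs from a preallocating approach: sparse files consume real disk space only as LVM writes to them:

```shell
# create five sparse 2 GB image files to back the loopback devices
for i in 0 1 2 3 4; do
    truncate -s 2G /tmp/lvm_image_$i.img
done
ls -lh /tmp/lvm_image_*.img
```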

Check which loopback devices are available:

Assuming all loopback devices are available, next create the devices:
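A sketch, assuming the image files from above are named /tmp/lvm_image_0.img through /tmp/lvm_image_4.img (paths are assumptions) and /dev/loop0 through /dev/loop4 are free:

```
root # for i in 0 1 2 3 4; do losetup /dev/loop$i /tmp/lvm_image_$i.img; done
```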

The devices are now available for use like any other hard drive in the system (and are thus perfect as physical volumes).

Troubleshooting
LVM has a few features that already provide some level of redundancy. However, there are situations where it is possible to restore lost physical volumes or logical volumes.

vgcfgrestore utility
By default, on any change to an LVM physical volume, volume group or logical volume, LVM2 creates a backup file of the metadata. These files can be used to recover from an accidental change (like deleting the wrong logical volume). LVM also keeps a backup copy of the most recent metadata. These can be used to restore metadata to a replacement disk, or to repair corrupted metadata.

To see what states of the volume group are available to be restored (partial output to improve readability):
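For instance, for the volume group vg0:

```
root # vgcfgrestore --list vg0
```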

Recovering an accidentally deleted logical volume
Assuming the logical volume lvm_raid1 was accidentally removed from volume group vg0, it is possible to recover it as follows:
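A sketch; the archive file name is a placeholder that vgcfgrestore --list would reveal:

```
root # vgcfgrestore -f /etc/lvm/archive/vg0_00002-1234567890.vg vg0
root # lvchange -a y vg0/lvm_raid1
```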

Replacing a failed physical volume
It is possible to do a true "replace" and recreate the metadata on the new physical volume to be the same as on the old physical volume:

The important line here is the UUID "unknown device".

This recreates the physical volume metadata, but not the missing logical volume or volume group data on the physical volume.

This now reconstructs all the missing metadata on the physical volume, including the logical volume and volume group data. However it doesn't restore the data, so the mirror is out of sync.

This will resync the mirror. This works with RAID 4,5 and 6 as well.
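The steps described above can be sketched as follows (device names and the UUID are placeholders):

```
root # vgdisplay --partial --verbose
root # pvcreate --uuid "<UUID of the unknown device>" --restorefile /etc/lvm/backup/vg0 /dev/sdb1
root # vgcfgrestore vg0
root # lvchange --resync vg0/lvol1
```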

Deactivating a logical volume
It is possible to deactivate a logical volume with the following command:
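For instance, on the assumed volume vg0/lvol1:

```
root # lvchange -a n /dev/vg0/lvol1
```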

It is not possible to mount the logical volume anywhere before it gets reactivated:

External resources

 * LVM2 sourceware.org
 * LVM tldp.org
 * LVM2 Wiki redhat.com