ZFS is an advanced file system originally developed by Sun Microsystems.
ZFS includes many features:
- Manage storage hardware as vdevs in zpools
- Manage volumes in zpools (like LVM)
- Redundancy with support for RAIDZ1 (RAID5), RAIDZ2 (RAID6) and MIRROR (RAID1)
- File system resilvering
- Data deduplication
- Data compression with zle (fast) or gzip (higher compression)
- Snapshots (like differential backups)
- NFS export of volumes
There are out-of-tree Linux kernel modules available from the ZFSOnLinux Project. The current release is version 0.6.0_rc8 (zpool version 28).
Installing the modules requires keywording the live ebuilds:
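A sketch, assuming the package split used by ZFSOnLinux at the time (sys-kernel/spl for the Solaris Porting Layer, sys-fs/zfs-kmod for the kernel modules; the exact package names may differ in your tree):

```shell
# Accept the unkeyworded live ebuilds in package.accept_keywords
echo "sys-kernel/spl **" >> /etc/portage/package.accept_keywords
echo "sys-fs/zfs-kmod **" >> /etc/portage/package.accept_keywords
echo "sys-fs/zfs **" >> /etc/portage/package.accept_keywords
```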
Then install sys-fs/zfs:
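For example:

```shell
emerge -av sys-fs/zfs
```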
Add zfs to the boot runlevel to mount all zpools on boot:
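With OpenRC:

```shell
rc-update add zfs boot
```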
ZFS already includes all programs needed to manage the hardware and the file systems; no additional tools are required.
To go through the different commands and scenarios we can create virtual hard drives using loopback devices.
First we need to make sure the loopback module is loaded. If you want to play around with partitions, use the following option:
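A sketch; the max_part option sets how many partition minors each loop device gets (the value 15 is an assumption, any reasonable value works):

```shell
# Load the loopback module with partition support
modprobe loop max_part=15
```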
The following commands create 2GB image files in /var/lib/zfs_img/ that we use as our hard drives (uses ~8GB disk space):
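One possible way to create them (file names are an assumption):

```shell
# Create four fully allocated 2GB image files (~8GB total)
mkdir -p /var/lib/zfs_img
for i in 0 1 2 3; do
    dd if=/dev/zero of=/var/lib/zfs_img/zfs${i}.img bs=1M count=2048
done
```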
Now we check which loopback devices are in use:
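For example:

```shell
losetup -a
```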
We assume that all loopback devices are available and create our hard drives:
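Assuming the image files created above:

```shell
# Attach each image file to a loopback device
losetup /dev/loop0 /var/lib/zfs_img/zfs0.img
losetup /dev/loop1 /var/lib/zfs_img/zfs1.img
losetup /dev/loop2 /var/lib/zfs_img/zfs2.img
losetup /dev/loop3 /var/lib/zfs_img/zfs3.img
```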
We now have /dev/loop[0-3] available as four hard drives.
The program /usr/sbin/zpool is used for all operations on zpools.
To export (unmount) an existing zpool named zfs_test, use the following command:
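For example:

```shell
zpool export zfs_test
```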
To import (mount) the zpool named zfs_test use this command:
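For example:

```shell
zpool import zfs_test
```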
One Hard Drive
Create a new zpool named zfs_test with one hard drive:
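For example:

```shell
zpool create zfs_test /dev/loop0
```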
The zpool will automatically be mounted; the default mountpoint is directly under the root file system, i.e. /zfs_test.
To delete a zpool use this command:
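For example:

```shell
zpool destroy zfs_test
```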
MIRROR Two Hard Drives
In ZFS you can have several hard drives in a MIRROR, where equal copies of the data exist on each drive. This increases both read performance and redundancy. To create a new zpool named zfs_test with two hard drives as MIRROR:
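For example:

```shell
zpool create zfs_test mirror /dev/loop0 /dev/loop1
```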
To delete the zpool:
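For example:

```shell
zpool destroy zfs_test
```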
RAIDZ1 Three Hard Drives
RAIDZ1 is the equivalent of RAID5: data is striped across the drives together with one parity block per stripe. You need at least three hard drives; one can fail and the zpool stays ONLINE, but the faulty drive should be replaced as soon as possible.
To create a pool with RAIDZ1 and three hard drives:
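For example:

```shell
zpool create zfs_test raidz1 /dev/loop0 /dev/loop1 /dev/loop2
```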
To delete the zpool:
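For example:

```shell
zpool destroy zfs_test
```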
RAIDZ2 Four Hard Drives
RAIDZ2 is the equivalent of RAID6: data is striped across the drives together with two parity blocks per stripe. You need at least four hard drives; two can fail and the zpool stays ONLINE, but the faulty drives should be replaced as soon as possible.
To create a pool with RAIDZ2 and four hard drives:
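For example:

```shell
zpool create zfs_test raidz2 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
```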
To delete the zpool:
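For example:

```shell
zpool destroy zfs_test
```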
You can add hot-spares to your zpool. In case of a failure, these are already installed and available to replace faulty vdevs. In this example, we use RAIDZ1 with three hard drives and a zpool named zfs_test:
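For example:

```shell
zpool create zfs_test raidz1 /dev/loop0 /dev/loop1 /dev/loop2 spare /dev/loop3
zpool status zfs_test
```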
The status of /dev/loop3 will stay AVAIL until it is brought online. Now we let /dev/loop0 fail:
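One way to simulate the failure is to take the device offline:

```shell
zpool offline zfs_test /dev/loop0
```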
We replace /dev/loop0 with our spare /dev/loop3:
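For example:

```shell
zpool replace zfs_test /dev/loop0 /dev/loop3
```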
Now we remove the failed vdev /dev/loop0 and start a manual scrubbing:
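For example:

```shell
zpool detach zfs_test /dev/loop0
zpool scrub zfs_test
```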
Zpool Version Update
With every update of sys-fs/zfs you are likely to get a more recent zpool version as well. The status of your zpools will then show a warning that a new version is available and that the zpools can be upgraded.
To display the current version on a zpool:
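For example (version is a pool property):

```shell
zpool get version zfs_test
```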
To upgrade the version of zpool zfs_test:
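For example:

```shell
zpool upgrade zfs_test
```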
To upgrade the version of all zpools in the system:
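For example:

```shell
zpool upgrade -a
```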
- You cannot shrink a zpool or remove vdevs after its initial creation.
- It is possible to add more vdevs to a MIRROR after its initial creation. Use the following command (/dev/loop0 is the first drive in the MIRROR):
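For example, attaching a new device to the mirror that contains /dev/loop0 (here /dev/loop2 is assumed to be a free device):

```shell
zpool attach zfs_test /dev/loop0 /dev/loop2
```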
- More than 9 vdevs in one RAIDZ can degrade performance. For example, it is better to use two RAIDZ groups with five vdevs each rather than one RAIDZ with 10 vdevs in a zpool.
- RAIDZ1 and RAIDZ2 cannot be resized after initial creation (you can only add additional hot spares). You can, however, replace the hard drives with bigger ones (one at a time), e.g. replace 1T drives with 2T drives to double the available space in the zpool.
- It is possible to mix MIRROR, RAIDZ1 and RAIDZ2 in a zpool. For example, given a zpool named zfs_test with a RAIDZ1, to add two more vdevs as a MIRROR use:
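For example (the two additional free devices /dev/loop3 and /dev/loop4 are assumptions):

```shell
zpool add zfs_test mirror /dev/loop3 /dev/loop4
```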
- It is possible to restore a destroyed zpool by re-importing it straight after the accident happened:
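For example; the -D option searches destroyed pools (adding -f may be needed if the pool was not exported):

```shell
zpool import -D zfs_test
```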
The program /usr/sbin/zfs is used for all operations on volumes. To control the size of a volume you can set a quota, and you can reserve a certain amount of storage within a zpool; by default a volume can use the complete storage size of the zpool.
We use our zpool zfs_test to create a new volume called volume1:
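For example:

```shell
zfs create zfs_test/volume1
```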
The volume will be mounted automatically as /zfs_test/volume1/
Volumes can be mounted with the following command; the mountpoint is defined by the property mountpoint of the volume:
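For example:

```shell
zfs mount zfs_test/volume1
```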
To unmount the volume:
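For example:

```shell
zfs umount zfs_test/volume1
```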
The folder /zfs_test/volume1 stays behind without the volume backing it. If you write data into it and then try to mount the volume again, the mount fails because the mountpoint directory is not empty.
To remove volume volume1 from zpool zfs_test:
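For example:

```shell
zfs destroy zfs_test/volume1
```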
Properties for volumes are inherited from the zpool. So you can either change a property on the zpool for all volumes, set it individually per volume, or use a mix of both.
To set a property for a volume:
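For example, setting a 20MB quota:

```shell
zfs set quota=20m zfs_test/volume1
```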
To show the setting for a particular property on a volume:
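For example:

```shell
zfs get quota zfs_test/volume1
```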
You can get a list of all properties set on any zpool with the following command:
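For example:

```shell
zfs get all zfs_test
```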
This is a partial list of properties that can be set on either zpools or volumes; for the full list, see man zfs:
|quota=||20m,none||sets a quota of 20MB for the volume|
|reservation=||20m,none||reserves 20MB for the volume within its zpool|
|compression=||zle,gzip,on,off||uses the given compression method; on selects the default algorithm|
|sharenfs=||on,off,ro,nfsoptions||shares the volume via NFS|
|exec=||on,off||controls whether programs can be executed on the volume|
|setuid=||on,off||controls whether SUID or SGID bits are honored on the volume|
|readonly=||on,off||sets the read-only attribute on or off|
|atime=||on,off||controls whether access times for files in the volume are updated|
|dedup=||on,off||turns deduplication on or off|
|mountpoint=||none,path||sets the mountpoint for the volume below the zpool or elsewhere in the file system; a mountpoint set to none prevents the volume from being mounted|
To set the mountpoint for a volume, use the following command:
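For example:

```shell
zfs set mountpoint=/mnt/data zfs_test/volume1
```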
The volume will automatically be remounted at /mnt/data
Create a volume as NFS share:
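For example, setting the sharenfs property at creation time:

```shell
zfs create -o sharenfs=on zfs_test/volume1
```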
Check what file systems are shared via NFS:
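For example (querying the export list on the server itself is an assumption):

```shell
zfs get sharenfs zfs_test/volume1
showmount -e localhost
```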
By default the volume is shared to all networks; to specify share options:
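A sketch restricting the share to one network (the network address is an assumption):

```shell
zfs set sharenfs="rw=@192.168.1.0/24" zfs_test/volume1
```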
To stop sharing the volume:
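For example:

```shell
zfs set sharenfs=off zfs_test/volume1
```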
Snapshots preserve the state of a volume at a point in time. A snapshot initially occupies no space; as the original volume diverges from it, the snapshot grows, because it keeps the old data.
To create a snapshot of a volume, use the following command:
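For example (the snapshot name snap1 is an assumption):

```shell
zfs snapshot zfs_test/volume1@snap1
```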
Every time a file in volume1 changes, the old data of the file is kept in the snapshot.
List all available snapshots:
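For example:

```shell
zfs list -t snapshot
```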
To rollback a full volume to a previous state:
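For example, assuming a snapshot named snap1:

```shell
zfs rollback zfs_test/volume1@snap1
```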
ZFS can clone snapshots to new volumes, so you can access the files from previous states individually:
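For example, assuming a snapshot named snap1:

```shell
zfs clone zfs_test/volume1@snap1 zfs_test/volume1_restore
```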
The folder /zfs_test/volume1_restore now contains the files in the state of the snapshot and can be worked on.
Remove snapshots of a volume with the following command:
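For example, assuming a snapshot named snap1:

```shell
zfs destroy zfs_test/volume1@snap1
```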
Start a scrubbing for zpool zfs_test:
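For example:

```shell
zpool scrub zfs_test
```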
To check the history of commands that were executed:
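For example:

```shell
zpool history zfs_test
```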
Monitor I/O activity on all zpools (refreshes every 6 seconds):
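For example; the trailing number is the refresh interval in seconds:

```shell
zpool iostat -v 6
```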