ZFS

ZFS is a next-generation filesystem created by Matthew Ahrens and Jeff Bonwick. It was designed around a few key ideas:


 * Administration of storage should be simple.
 * Redundancy should be handled by the filesystem.
 * File-systems should never be taken offline for repair.
 * Automated simulation of worst-case scenarios before shipping code is important.
 * Data integrity is paramount.

Development of ZFS started in 2001 at Sun Microsystems. It was released under the CDDL in 2005 as part of OpenSolaris. Pawel Jakub Dawidek ported ZFS to FreeBSD in 2007. Brian Behlendorf at LLNL started the ZFSOnLinux project in 2008 to port ZFS to Linux for High Performance Computing. Oracle purchased Sun Microsystems in 2010 and discontinued OpenSolaris later that year.

The Illumos project was started to replace OpenSolaris, and roughly 2/3 of the core ZFS team resigned, including Matthew Ahrens and Jeff Bonwick. Most of them took jobs at other companies and continue to develop open source ZFS as part of the Illumos project. The 1/3 of the ZFS core team at Oracle that did not resign continues development of an incompatible proprietary branch of ZFS in Oracle Solaris. The first Solaris release after the split included a few innovative changes that were under development prior to the mass resignation; subsequent releases have included fewer and less ambitious changes. Significant innovation continues in the open source branch of ZFS developed in Illumos. Today, a growing community continues development of the open source branch of ZFS across multiple platforms, including FreeBSD, Illumos, Linux and Mac OS X.

Features
A detailed list of features can be found in a separate article.

Kernel
ZFS requires Zlib kernel support (module or builtin).

Notes:

 * If the build complains about the CONFIG_ZLIB_DEFLATE option missing: that option is enabled automatically by the Cryptographic API option CONFIG_CRYPTO_DEFLATE; CONFIG_ZLIB_DEFLATE cannot be enabled directly.
 * If spl fails in the configure phase while checking for memory shrinker support, stopping at the line '->count_objects callback exists... configure: error: error', unset the CONFIG_FORTIFY_SOURCE kernel config option (verified with v0.7.5).
 * Randomizing the layout of sensitive kernel structures (CONFIG_GCC_PLUGIN_RANDSTRUCT) will cause "unsupported stack pointer realignment" errors when compiling zfs-kmod (verified with v0.7.5).
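As a rough sketch, the kernel options involved in the notes above look like this in .config (option names only; verify against your kernel version):

  CONFIG_CRYPTO_DEFLATE=y
  # CONFIG_FORTIFY_SOURCE is not set
  # CONFIG_GCC_PLUGIN_RANDSTRUCT is not set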

Modules
There are out-of-tree Linux kernel modules available from the ZFSOnLinux project. Since version 0.6.1, the ZFSOnLinux project considers ZFS stable and "ready for wide scale deployment on everything from desktops to super computers".

Installing ZFS on Gentoo Linux requires keywording sys-fs/zfs (starting at 0.8.0 zfs-kmod will provide spl) and its dependencies.
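A sketch of the keywording and installation, assuming amd64 (adjust the keyword and file layout to your setup):

  root # echo 'sys-fs/zfs ~amd64' >> /etc/portage/package.accept_keywords/zfs
  root # echo 'sys-fs/zfs-kmod ~amd64' >> /etc/portage/package.accept_keywords/zfs
  root # emerge --ask sys-fs/zfs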

The latest upstream versions require keywording the live ebuilds (optional):
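A sketch of the corresponding entries for the live (9999) ebuilds:

  root # echo '=sys-fs/zfs-9999 **' >> /etc/portage/package.accept_keywords/zfs
  root # echo '=sys-fs/zfs-kmod-9999 **' >> /etc/portage/package.accept_keywords/zfs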

Add the zfs scripts to the run levels to do initialization at boot:
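A sketch for OpenRC; the exact service names depend on the installed version (recent versions split the old zfs init script into several services):

  root # rc-update add zfs-import boot
  root # rc-update add zfs-mount boot
  root # rc-update add zfs-share default
  root # rc-update add zfs-zed default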

ARC
ZFSOnLinux uses the Adaptive Replacement Cache (ARC) algorithm instead of the Least Recently Used (LRU) page replacement algorithm used by other filesystems. This has a better hit rate and therefore provides better performance. The implementation of ARC in ZFS differs from the original paper in that the amount of memory used as cache can vary. This permits memory used by ARC to be reclaimed when the system is under memory pressure (via the kernel's shrinker mechanism) and to grow when the system has memory to spare. The minimum and maximum amount of memory allocated to ARC varies based on your system memory. The default minimum is 1/32 of all memory, or 64MB, whichever is more. The default maximum is the larger of 1/2 of system memory or 64MB.

The manner in which Linux accounts for memory used by ARC differs from memory used by the page cache. Specifically, memory used by ARC is included under "used" rather than "cached" in the output used by the `free` program. This in no way prevents the memory from being released when the system is low on memory. However, it can give the impression that ARC (and by extension ZFS) will use all of system memory if given the opportunity.

Adjusting ARC memory usage
The minimum and maximum memory usage of ARC is tunable via the zfs_arc_min and zfs_arc_max module parameters respectively. These can be set in any of three ways. The first is at runtime (new in 0.6.2):
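For example, a 512MB cap (the value is given in bytes):

  root # echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max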

The second is via a modprobe configuration file such as /etc/modprobe.d/zfs.conf:
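The same 512MB cap as a persistent module option:

/etc/modprobe.d/zfs.conf:
  options zfs zfs_arc_max=536870912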

The third is on the kernel commandline by specifying "zfs.zfs_arc_max=536870912" (for 512MB).

Similarly, the same can be done to adjust zfs_arc_min.

Systemd
Enable the service so it is automatically started at boot time:
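Assuming the service meant here is the ZFS Event Daemon (zed) shipped by ZFSOnLinux:

  root # systemctl enable zfs-zed.service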

To manually start the daemon:
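Again assuming the ZFS Event Daemon:

  root # systemctl start zfs-zed.service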

In order to mount zfs pools automatically on boot you need to enable the following services and targets:
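A sketch using the unit names shipped by ZFSOnLinux (use zfs-import-scan.service instead of zfs-import-cache.service if no cachefile is maintained):

  root # systemctl enable zfs-import-cache.service
  root # systemctl enable zfs-mount.service
  root # systemctl enable zfs-import.target
  root # systemctl enable zfs.target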

Installing into the kernel directory (for static installs)
This example uses 9999, but just change it to the latest ~arch or stable version (when that happens) and you should be good. The only issue you may run into is having zfs and zfs-kmod out of sync with each other. Just try to avoid that :D

This will generate the needed files, and copy them into the kernel sources directory.

For versions of zfs < 0.8.0
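A sketch, assuming the live ebuilds and the repository under /usr/portage (adjust paths, version numbers and work directories to your system):

  root # env EXTRA_ECONF='--enable-linux-builtin' ebuild /usr/portage/sys-kernel/spl/spl-9999.ebuild clean configure
  root # (cd /var/tmp/portage/sys-kernel/spl-9999/work/spl-9999 && ./copy-builtin /usr/src/linux)
  root # env EXTRA_ECONF='--with-config=kernel --enable-linux-builtin' ebuild /usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild clean configure
  root # (cd /var/tmp/portage/sys-fs/zfs-kmod-9999/work/zfs-* && ./copy-builtin /usr/src/linux)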

For versions of zfs >= 0.8.0
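The same sketch for 0.8.0 and later, where only zfs-kmod is needed:

  root # env EXTRA_ECONF='--enable-linux-builtin' ebuild /usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild clean configure
  root # (cd /var/tmp/portage/sys-fs/zfs-kmod-9999/work/zfs-* && ./copy-builtin /usr/src/linux)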

After this, you just need to edit the kernel config to enable CONFIG_SPL and CONFIG_ZFS and emerge the zfs binaries.
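A sketch of that last step; the kernel-builtin USE flag tells the sys-fs/zfs ebuild not to pull in the module packages (the package.use.mask line is only needed if your profile masks the flag):

  root # echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask
  root # echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use
  root # emerge --oneshot --verbose sys-fs/zfs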

The echo commands only need to be run once, but the emerge needs to be run every time you install a new version of zfs.

Usage
ZFS already includes all the programs needed to manage the hardware and the file systems; no additional tools are needed.

Preparation
ZFS supports the use of either block devices or files. Administration is the same in both cases, but for production use, the ZFS developers recommend the use of block devices (preferably whole disks). To take full advantage of block devices on Advanced Format disks, it is highly recommended to read the ZFS on Linux FAQ before creating your pool. To go through the different commands and scenarios we can use files in place of block devices.

The following commands create 2GB sparse image files in /var/lib/zfs_img/ that we use as our hard drives. This uses at most 8GB disk space, but in practice will use very little because only written areas are allocated:
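A sketch using truncate to create four sparse 2GB images:

  root # mkdir -p /var/lib/zfs_img
  root # truncate -s 2G /var/lib/zfs_img/zfs{0..3}.img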

Now we check which loopback devices are in use:
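For example; the images are also attached to loop devices here, since the examples below refer to /dev/loop0 through /dev/loop3:

  root # losetup -a
  root # losetup /dev/loop0 /var/lib/zfs_img/zfs0.img
  root # losetup /dev/loop1 /var/lib/zfs_img/zfs1.img
  root # losetup /dev/loop2 /var/lib/zfs_img/zfs2.img
  root # losetup /dev/loop3 /var/lib/zfs_img/zfs3.img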

Zpools
The program /usr/sbin/zpool is used for any operation regarding zpools.

import/export Zpool
To export (unmount) an existing zpool named zfs_test, use the following command:
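For example:

  root # zpool export zfs_test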

To import (mount) the zpool named zfs_test use this command:
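For example (add -d /var/lib/zfs_img if the pool sits directly on image files):

  root # zpool import zfs_test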

The root mountpoint of zfs_test is a property and can be changed the same way as for volumes. To import (mount) the zpool named zfs_test root on /mnt/gentoo, use this command:
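For example; -R sets the alternate root for the pool:

  root # zpool import -R /mnt/gentoo zfs_test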

To search for all zpools available in the system issue the command:
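Without a pool name, zpool import lists everything it can find (add -d <dir> to search a specific directory):

  root # zpool import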

One hard drive
Create a new zpool named zfs_test with one hard drive:
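For example, using the first loop device:

  root # zpool create zfs_test /dev/loop0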

The zpool will be mounted automatically; the default mountpoint is the pool name under the root file system, i.e. /zfs_test.

To delete a zpool use this command:
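For example:

  root # zpool destroy zfs_test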

Two hard drives (MIRROR)
In ZFS you can have several hard drives in a MIRROR, where an identical copy of the data is kept on each drive. This increases redundancy and read performance. To create a new zpool named zfs_test with two hard drives as MIRROR:
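For example:

  root # zpool create zfs_test mirror /dev/loop0 /dev/loop1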

To delete the zpool:

Three hard drives (RAIDZ1)
RAIDZ1 is the equivalent of RAID5: with three drives, two drives' worth of capacity hold data and one drive's worth holds parity, distributed across all disks. You need at least three hard drives; one can fail and the zpool stays ONLINE, but the faulty drive should be replaced as soon as possible.

To create a pool with RAIDZ1 and three hard drives:
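For example:

  root # zpool create zfs_test raidz1 /dev/loop0 /dev/loop1 /dev/loop2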

To delete the zpool:

Four hard drives (RAIDZ2)
RAIDZ2 is the equivalent of RAID6: with four drives, two drives' worth of capacity hold data and two drives' worth hold parity, distributed across all disks. You need at least four hard drives; two can fail and the zpool stays ONLINE, but the faulty drives should be replaced as soon as possible.

To create a pool with RAIDZ2 and four hard drives:
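For example:

  root # zpool create zfs_test raidz2 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3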

To delete the zpool:

Four hard drives (STRIPED MIRROR)
A STRIPED MIRROR is the equivalent of RAID10: data is striped across a set of mirror vdevs. You need at least four hard drives; this configuration provides redundancy and an increase in read speed. Per mirror, all disks but one can be lost.

To create a STRIPED MIRRORED pool with four hard drives:
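For example, two mirrored pairs striped together:

  root # zpool create zfs_test mirror /dev/loop0 /dev/loop1 mirror /dev/loop2 /dev/loop3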

To delete the zpool:

Spares/Replace vdev
You can add hot spares to your zpool. In case of a failure, these are already installed and available to replace faulty vdevs.

In this example, we use RAIDZ1 with three hard drives and a zpool named zfs_test:
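For example, with /dev/loop3 as the hot spare:

  root # zpool create zfs_test raidz1 /dev/loop0 /dev/loop1 /dev/loop2 spare /dev/loop3
  root # zpool status zfs_test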

The status of /dev/loop3 will stay AVAIL until it is brought into use. Now we let /var/lib/zfs_img/zfs0.img fail:
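A sketch: one way to simulate the failure of the disk backed by zfs0.img (/dev/loop0) is to take it offline:

  root # zpool offline zfs_test /dev/loop0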

We replace /var/lib/zfs_img/zfs0.img with our spare /var/lib/zfs_img/zfs3.img:
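For example (/dev/loop0 is backed by zfs0.img, /dev/loop3 by zfs3.img):

  root # zpool replace zfs_test /dev/loop0 /dev/loop3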

The original vdev will automatically get removed asynchronously. If this is not the case, the old vdev may need to be detached with the "zpool detach" command. Later you will see it leave the zpool status output:
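Check the pool state, and detach the old device if it is still listed:

  root # zpool status zfs_test
  root # zpool detach zfs_test /dev/loop0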

Now start a manual scrub:
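For example:

  root # zpool scrub zfs_test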

Zpool version update
With every update of sys-fs/zfs, you are likely to also get a more recent ZFS version. The status of your zpools will then indicate that a new version is available and that the zpools can be upgraded. To display the current version of a zpool:
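For example (zpool upgrade without arguments also lists pools that are not at the current version):

  root # zpool get version zfs_test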

To upgrade the version of zpool zfs_test:
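For example:

  root # zpool upgrade zfs_test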

To upgrade the version of all zpools in the system:
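For example:

  root # zpool upgrade -a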

Zpool tips/tricks

 * You cannot shrink a zpool or remove vdevs after its initial creation.
 * It is possible to add more devices to a MIRROR after its initial creation. Use the following command (/dev/loop0 is the first drive in the MIRROR):
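A sketch; zpool attach pairs the new device with an existing member of the mirror:

  root # zpool attach zfs_test /dev/loop0 /dev/loop2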


 * More than 9 disks in one RAIDZ vdev can cause performance regressions. For example, it is better to use two RAIDZ vdevs with five disks each rather than one RAIDZ vdev with 10 disks in a zpool.
 * RAIDZ1 and RAIDZ2 vdevs cannot be resized after initial creation (you may only add additional hot spares). You can, however, replace the hard drives with bigger ones (one at a time), e.g. replace 1TB drives with 2TB drives to double the available space in the zpool.
 * It is possible to mix MIRROR, RAIDZ1 and RAIDZ2 vdevs in a zpool. For example, to add two more drives as a MIRROR to a RAIDZ1 zpool named zfs_test, use:
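A sketch; /dev/loop4 stands in for an additional device, and -f may be required because the new vdev's redundancy level differs from the existing RAIDZ1:

  root # zpool add -f zfs_test mirror /dev/loop3 /dev/loop4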


 * It is possible to restore a destroyed zpool by re-importing it soon after the accident:
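A sketch; -D restricts the search to destroyed pools (add -d /var/lib/zfs_img for file-backed pools):

  root # zpool import -D -f zfs_test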

Volumes
The program /usr/sbin/zfs is used for any operation regarding volumes. To control the size of a volume you can set a quota, and you can reserve a certain amount of storage within a zpool. By default a volume may use all the free space of the zpool.

Create Volumes
We use our zpool zfs_test to create a new volume called volume1:
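For example:

  root # zfs create zfs_test/volume1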

The volume will be mounted automatically as /zfs_test/volume1/

Mount/umount volumes
Volumes can be mounted with the following command; the mountpoint is defined by the mountpoint property of the volume:
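For example:

  root # zfs mount zfs_test/volume1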

To unmount the volume:
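For example:

  root # zfs umount zfs_test/volume1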

The folder /zfs_test/volume1 remains, but without the volume behind it. If you write data into it and then try to mount the volume again, the mount will fail with an error because the mountpoint directory is no longer empty.

Remove volumes
To remove the volume volume1 from zpool zfs_test:
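For example:

  root # zfs destroy zfs_test/volume1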

Properties
Properties for volumes are inherited from the zpool. You can either change a property on the zpool for all volumes, set it individually per volume, or use a mix of both.

To set a property for a volume:
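For example, using the quota property (any property/value pair works the same way):

  root # zfs set quota=1G zfs_test/volume1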

To show the setting for a particular property on a volume:
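For example:

  root # zfs get quota zfs_test/volume1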

You can get a list of all properties set on any zpool with the following command:
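For example:

  root # zfs get all zfs_test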

Many properties can be set on either zpools or volumes; for a full list see man zfs and man zpool.

Set mountpoint
Set the mountpoint for a volume, use the following command:
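For example:

  root # zfs set mountpoint=/mnt/data zfs_test/volume1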

The volume will be automatically moved to /mnt/data

NFS volume share
Activate NFS share on volume:
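For example:

  root # zfs set sharenfs=on zfs_test/volume1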

By default (sharenfs=on) the volume is shared with default options via the exportfs command. See exportfs(8) and exports(5) for more information.

If the property is set to anything other than on or off, exportfs is invoked with options equivalent to the contents of the property:
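For example (the network range is purely illustrative):

  root # zfs set sharenfs='rw=@192.168.1.0/24' zfs_test/volume1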

To stop sharing the volume:
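For example:

  root # zfs set sharenfs=off zfs_test/volume1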

Snapshots
Snapshots are read-only copies of a volume that initially take up no space. As the original volume changes, the snapshot retains the old data and therefore grows in size.

Creating
To create a snapshot of a volume, use the following command:
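For example (snap1 is an arbitrary snapshot name):

  root # zfs snapshot zfs_test/volume1@snap1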

Every time a file in volume1 changes, the old data of the file is retained by the snapshot.

Listing
List all available snapshots:
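For example:

  root # zfs list -t snapshot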

Rollback
To rollback a full volume to a previous state:
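For example:

  root # zfs rollback zfs_test/volume1@snap1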

Cloning
ZFS can clone snapshots to new volumes, so you can access the files from previous states individually:
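For example:

  root # zfs clone zfs_test/volume1@snap1 zfs_test/volume1_restore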

The files in /zfs_test/volume1_restore can now be worked on in the state of the previous snapshot.

Removal
Remove snapshots of a volume with the following command:
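For example:

  root # zfs destroy zfs_test/volume1@snap1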

Scrubbing
Start a scrubbing for zpool zfs_test:
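For example; progress can be followed with zpool status:

  root # zpool scrub zfs_test
  root # zpool status zfs_test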

Log Files
To check the history of commands that were executed:
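For example:

  root # zpool history zfs_test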

Monitor I/O
Monitor I/O activity on all zpools (refreshes every 6 seconds):
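For example:

  root # zpool iostat -v 6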

ZFS root
Booting from a ZFS volume as the root filesystem requires a ZFS-capable kernel and an initial ramdisk (initramfs) that contains the ZFS userspace utilities. The easiest way to set this up is as follows.

First, make sure to have compiled a kernel with ZFS support, copied the kernel image to /boot (e.g. with make install), and run make modules_install to make the modules available at boot time.

Install and configure genkernel.
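A sketch, assuming a genkernel version that supports the --zfs option:

  root # emerge --ask sys-kernel/genkernel
  root # genkernel --zfs initramfs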

Install a bootloader, for example GRUB2.
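A sketch: on Gentoo, GRUB2 gains ZFS support via the libzfs USE flag on sys-boot/grub:

  root # echo 'sys-boot/grub libzfs' >> /etc/portage/package.use/grub
  root # emerge --ask sys-boot/grub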

Configure grub to use ZFS, and which volume to boot from.
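A sketch of /etc/default/grub for a genkernel-built initramfs; zfs_test/rootfs stands in for the actual root dataset:

/etc/default/grub:
  GRUB_CMDLINE_LINUX="dozfs root=ZFS=zfs_test/rootfs"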

Finally, install grub to your boot device and create the grub configuration.
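Assuming /dev/sda is the boot device:

  root # grub-install /dev/sda
  root # grub-mkconfig -o /boot/grub/grub.cfg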

Caveats

 * Swap: On systems with extremely high memory pressure, using a zvol for swap can result in a lockup, regardless of how much swap is still available. This issue is being investigated upstream; please check the current ZFSOnLinux documentation on swap on GitHub.
 * Memory fragmentation: Memory fragmentation on Linux can cause memory allocations to consume more memory than is actually requested, which means that actual ARC memory usage can exceed zfs_arc_max by a constant factor. This effect will be dramatically reduced once zfsonlinux/zfs#75 is fixed. Recent versions of ZFS on Linux include the arcstat.py script, which allows you to monitor ARC usage.

External resources

 * zfs-fuse.net
 * ZFS on Linux
 * ZFS Best Practices Guide
 * ZFS Evil Tuning Guide
 * article about ZFS on Linux/Gentoo (German)
 * ZFS Administration