ZFS

ZFS is a next generation filesystem created by Matthew Ahrens and Jeff Bonwick. It was designed around a few key ideas:


 * Administration of storage should be simple.
 * Redundancy should be handled by the filesystem.
 * File-systems should never be taken offline for repair.
 * Automated simulations of worst case scenarios before shipping code is important.
 * Data integrity is paramount.

Development of ZFS started in 2001 at Sun Microsystems. It was released under the CDDL in 2005 as part of OpenSolaris. Pawel Jakub Dawidek ported ZFS to FreeBSD in 2007. Brian Behlendorf at LLNL started the ZFSOnLinux project in 2008 to port ZFS to Linux for High Performance Computing. Oracle purchased Sun Microsystems in 2010 and discontinued OpenSolaris later that year.

The Illumos project was started to replace OpenSolaris, and roughly 2/3 of the core ZFS team resigned, including Matthew Ahrens and Jeff Bonwick. Most of them took jobs at companies which continue to develop OpenZFS, initially as part of the Illumos project. The 1/3 of the ZFS core team at Oracle that did not resign continues development of an incompatible proprietary branch of ZFS in Oracle Solaris.

The first release of Oracle's proprietary ZFS branch in Solaris included a few innovative changes that were under development prior to the mass resignation. Subsequent Solaris releases have included fewer and less ambitious changes. Today, a growing community continues development of OpenZFS across multiple platforms, including FreeBSD, Illumos, Linux and Mac OS X.

Features
A detailed list of features can be found in a separate article.

Kernel
ZFS requires Zlib kernel support (module or builtin).

Modules
There are out-of-tree Linux kernel modules available from the ZFSOnLinux Project.

Since version 0.6.1, ZFS is considered "ready for wide scale deployment on everything from desktops to super computers" by the OpenZFS project.

Emerge
To install ZFS, run:
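Presumably via the sys-fs/zfs package:
emerge --ask sys-fs/zfs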

Add the zfs scripts to runlevels for initialization at boot:
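For OpenRC, something like the following should work (the exact script names may vary with the installed version):
rc-update add zfs-import boot
rc-update add zfs-mount boot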

ARC
OpenZFS uses the ARC (Adaptive Replacement Cache) page replacement algorithm instead of the Least Recently Used (LRU) page replacement algorithm used by other filesystems. This has a better hit rate, therefore providing better performance. The implementation of ARC in ZFS differs from the original paper in that the amount of memory used as cache can vary. This permits memory used by ARC to be reclaimed when the system is under memory pressure (via the kernel's shrinker mechanism) and to grow when the system has memory to spare. The minimum and maximum amount of memory allocated to ARC varies based on your system memory. The default minimum is 1/32 of all memory, or 64MB, whichever is more. The default maximum is the larger of 1/2 of system memory or 64MB.

The manner in which Linux accounts for memory used by ARC differs from memory used by the page cache. Specifically, memory used by ARC is included under "used" rather than "cached" in the output used by the `free` program. This in no way prevents the memory from being released when the system is low on memory. However, it can give the impression that ARC (and by extension ZFS) will use all of system memory if given the opportunity.

Adjusting ARC memory usage
The minimum and maximum memory usage of ARC is tunable via zfs_arc_min and zfs_arc_max respectively. These properties can be set any of three ways. The first is at runtime (new in 0.6.2):
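For example, to cap the ARC at 512MB at runtime (assuming the zfs module is loaded):
echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max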

The second is via a module option, typically in /etc/modprobe.d/zfs.conf:
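A plausible entry for a 512MB maximum would be:
options zfs zfs_arc_max=536870912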

The third is on the kernel commandline by specifying "zfs.zfs_arc_max=536870912" (for 512MB).

Similarly, the same can be done to adjust zfs_arc_min.

systemd
Enable the service so it is automatically started at boot time:
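Likely via the umbrella target installed by the ZFS package (unit names may differ between versions):
systemctl enable zfs.target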

To manually start the daemon:
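Again assuming the zfs.target unit:
systemctl start zfs.target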

In order to mount zfs pools automatically on boot you need to enable the following services and targets:
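These are probably the import/mount units shipped by OpenZFS, e.g.:
systemctl enable zfs-import-cache.service zfs-mount.service zfs-import.target zfs.target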

Installing into the kernel directory (for static installs)
This example uses 0.8.4, but just change it to the latest ~arch or stable version (when that happens) and you should be good. The only issue you may run into is having zfs and zfs-kmod out of sync with each other - avoid that.
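A sketch of the usual approach, assuming the Gentoo repository lives at /var/db/repos/gentoo and the default PORTAGE_TMPDIR is used (adjust the version to match):
env EXTRA_ECONF='--enable-linux-builtin' ebuild /var/db/repos/gentoo/sys-fs/zfs-kmod/zfs-kmod-0.8.4.ebuild clean configure
(cd /var/tmp/portage/sys-fs/zfs-kmod-0.8.4/work/zfs-0.8.4/ && ./copy-builtin /usr/src/linux)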

This will generate the needed files, and copy them into the kernel sources directory.

After this, you just need to edit the kernel config to enable CONFIG_SPL and CONFIG_ZFS and emerge the zfs binaries.
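The echo and emerge steps would look roughly like this (the kernel-builtin USE flag tells sys-fs/zfs not to pull in zfs-kmod; the keyword entry is only needed for ~arch versions):
echo "sys-fs/zfs kernel-builtin" >> /etc/portage/package.use/zfs
echo "sys-fs/zfs ~amd64" >> /etc/portage/package.accept_keywords/zfs
emerge --ask sys-fs/zfs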

The echo commands only need to be run once, but the emerge needs to be run every time you install a new version of zfs.

Alternative steps
Be sure to read through the steps above; the following steps only replace some of the above steps. The following was done on an amd64 gentoo install with llvm-12.0.1/clang-12.0.1/musl-1.2.2-r3 without binutils/gcc/glibc. needs to be lightly patched. On the line defining, add after. On the line calling, add.

The patch should look something like this:

You only have to go through the patching steps again if the patch stops working. Now proceed as usual:

Usage
ZFS already includes all the programs needed to manage its pools and file systems; no additional tools are needed.

Preparation
ZFS supports the use of either block devices or files. Administration is the same in both cases, but for production use, the ZFS developers recommend the use of block devices (preferably whole disks). To take full advantage of block devices on Advanced Format disks, it is highly recommended to read the ZFS on Linux FAQ before creating your pool. To go through the different commands and scenarios we can use files in place of block devices.

The following commands create 2GB sparse image files in /var/lib/zfs_img/ that we use as our hard drives. This uses at most 8GB disk space, but in practice will use very little because only written areas are allocated:
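A minimal way to create such files, and (since some later examples refer to /dev/loopN devices) optionally attach them to loop devices, might be:
mkdir -p /var/lib/zfs_img
for i in 0 1 2 3; do truncate -s 2G /var/lib/zfs_img/zfs$i.img; done
for i in 0 1 2 3; do losetup /dev/loop$i /var/lib/zfs_img/zfs$i.img; done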

Zpools
The program /usr/sbin/zpool is used with any operation on zpools.

One hard drive
Create a new zpool named zfs_test with one hard drive:
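For example, using the first image file as the only vdev:
zpool create zfs_test /var/lib/zfs_img/zfs0.img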

The zpool will be mounted automatically; the default mountpoint is at the root of the file system, i.e. /zfs_test.

To delete a zpool use this command:
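Here zfs_test is the pool created above:
zpool destroy zfs_test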

Two hard drives (MIRROR)
In ZFS you can have several hard drives in a MIRROR vdev, where equal copies exist on each disk. This increases the performance and redundancy. To create a new zpool named zfs_test with two hard drives as a MIRROR:
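Using two of the image files from the preparation step:
zpool create zfs_test mirror /var/lib/zfs_img/zfs0.img /var/lib/zfs_img/zfs1.img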

To delete the zpool:

Three hard drives (RAIDZ1)
RAIDZ1 is the redundancy equivalent of RAID5, where (roughly) data is written to two drives and parity onto the third. You need at least three hard drives; one can fail and the zpool is still functional but DEGRADED, and the faulty drive should be replaced as soon as possible.

To create a pool with a RAIDZ1 vdev on three hard drives:
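For example, with three of the image files:
zpool create zfs_test raidz1 /var/lib/zfs_img/zfs0.img /var/lib/zfs_img/zfs1.img /var/lib/zfs_img/zfs2.img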

To delete the zpool:

Four hard drives (RAIDZ2)
RAIDZ2 is the redundancy equivalent of RAID6, where (roughly) data is written to the first two drives and parity onto the other two. You need at least four hard drives; two can fail and the zpool is still ONLINE, but the faulty drives should be replaced as soon as possible.

To create a pool with a RAIDZ2 vdev on four hard drives:
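For example, with all four image files:
zpool create zfs_test raidz2 /var/lib/zfs_img/zfs0.img /var/lib/zfs_img/zfs1.img /var/lib/zfs_img/zfs2.img /var/lib/zfs_img/zfs3.img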

To delete the zpool:

Four hard drives (STRIPED MIRROR)
STRIPED MIRRORs are the redundancy equivalent to RAID10, where data is striped across multiple mirror vdevs. You need at least four hard drives; this configuration provides redundancy and an increase in read speed. You can lose all disks but one per mirror.

To create a STRIPED MIRRORED pool with four hard drives:
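For example, two mirror vdevs of two image files each:
zpool create zfs_test mirror /var/lib/zfs_img/zfs0.img /var/lib/zfs_img/zfs1.img mirror /var/lib/zfs_img/zfs2.img /var/lib/zfs_img/zfs3.img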

To delete the zpool:

Import/Export zpool
To import (mount) the zpool named zfs_test use this command:
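For file-backed vdevs, zpool import needs to be pointed at the directory containing the images with -d:
zpool import -d /var/lib/zfs_img zfs_test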

The root mountpoint of zfs_test is a property and can be changed the same way as for datasets. To import (mount) the zpool named zfs_test root on /mnt/gentoo, use this command:
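Presumably via the -R (altroot) option:
zpool import -d /var/lib/zfs_img -R /mnt/gentoo zfs_test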

To search for and list all zpools available in the system issue the command:
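Running zpool import without a pool name lists everything it can find (again using -d for file-backed pools):
zpool import -d /var/lib/zfs_img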

To export (unmount) an existing zpool named zfs_test, you can use the following command:
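The export subcommand takes just the pool name:
zpool export zfs_test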

Spares/Replace vdev
You can add hot spares to your zpool. In case of a failure, those are already installed and available to replace faulty disks.

In this example, we use a RAIDZ1 with three hard drives in a zpool named zfs_test:
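A plausible setup, adding the fourth image file as a spare:
zpool create zfs_test raidz1 /var/lib/zfs_img/zfs0.img /var/lib/zfs_img/zfs1.img /var/lib/zfs_img/zfs2.img
zpool add zfs_test spare /var/lib/zfs_img/zfs3.img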

The status of /dev/loop3 will stay AVAIL until it is brought into use. Now we let /var/lib/zfs_img/zfs0.img fail:
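One way to simulate the failure is to force-fault the device:
zpool offline -f zfs_test /var/lib/zfs_img/zfs0.img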

We replace /var/lib/zfs_img/zfs0.img with our spare /var/lib/zfs_img/zfs3.img:
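Using zpool replace with the old and new device:
zpool replace zfs_test /var/lib/zfs_img/zfs0.img /var/lib/zfs_img/zfs3.img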

The original disk will automatically get removed asynchronously. If this is not the case, the old disk may need to be detached with the "zpool detach" command. Later you will see it leave the zpool status output:
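Check with:
zpool status zfs_test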

Now start a manual scrub:
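For the test pool:
zpool scrub zfs_test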

Zpool version update
With every update of the ZFS packages, you are likely to also get a more recent ZFS version. The status of your zpools will then show a notice that a newer version is available and that the zpools could be upgraded. To display the current version of a zpool:
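One way is to query the version property (feature-flag pools show "-" here):
zpool get version zfs_test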

To upgrade the version of zpool zfs_test (and enable all feature flags):
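The upgrade subcommand with the pool name:
zpool upgrade zfs_test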

To upgrade the version of all zpools in the system:
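Using the -a flag:
zpool upgrade -a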

Zpool tips/tricks

 * You sometimes cannot shrink a zpool after initial creation. If the pool has no raidz vdevs and all vdevs have the same ashift, the "device removal" feature in 0.8 and above can be used; there are performance implications to doing this, however, so always be careful when creating pools or adding vdevs/disks!
 * It is possible to add more disks to a MIRROR after its initial creation. Use the following command (/dev/loop0 is the first drive in the MIRROR):
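A plausible command, attaching another image file to the mirror that contains /dev/loop0:
zpool attach zfs_test /dev/loop0 /var/lib/zfs_img/zfs2.img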


 * Sometimes, a wider RAIDZ vdev can be less suitable than two (or more) smaller RAIDZ vdevs. Try testing your intended use before settling on one and moving all your data onto it.
 * RAIDZ vdevs cannot (currently) be resized after initial creation (you may only add additional hot spares). You can, however, replace the hard drives with bigger ones (one at a time), e.g. replace 1T drives with 2T drives to double the available space in the zpool.
 * It is possible to mix MIRROR and RAIDZ vdevs in a zpool. For example to add two more disks as a MIRROR vdev in a zpool with a RAIDZ1 vdev named zfs_test, use:
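A sketch, assuming two further image files (zfs4.img and zfs5.img are hypothetical extra files):
zpool add zfs_test mirror /var/lib/zfs_img/zfs4.img /var/lib/zfs_img/zfs5.img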


 * It is possible to restore a destroyed zpool by reimporting it straight after the accident happened:
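The -D flag of zpool import lists and imports destroyed pools, e.g.:
zpool import -d /var/lib/zfs_img -D -f zfs_test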

File systems and datasets
The program /usr/sbin/zfs is used for any operation regarding datasets (which encompasses filesystems, volumes, snapshots, and bookmarks).


 * Filesystems are a way of logically grouping data with shared properties on a pool - data you might want to set the same compression type, or recordsize, or snapshot all together, would be good examples of a use case for a separate filesystem.


 * Volumes are a way of exposing some space from a pool as a block device, which can be useful e.g. for VM storage, or iSCSI export to some other host.
 * Snapshots are read-only point-in-time representations of a filesystem/volume - which implies that if you take snapshots of a filesystem or volume, space that was used at the point of the snapshot is not actually freed when later deleted/overwritten until all snapshots referencing that data are destroyed. Snapshot names are formatted like pool/fs1@snapname. For filesystems, you can commonly access them at [FS mountpoint]/.zfs/snapshot/[snapshot name]/; for volumes, /dev/ nodes for snapshots default to hidden (since, for example, if you're doing an FS mount by UUID, and you see 30+ copies of the same FS, it may end poorly), but you can adjust the snapdev property of the volume to change that, or clone the volume snapshot you want to examine. Snapshots are very useful both for later reference of earlier states, and for use in zfs send+receive for backup/restore/transfer. (See also bookmarks and clones later.)


 * Bookmarks are a very minimal kind of dataset - you create them with zfs bookmark [snapshot name] [bookmark name], and their purpose in life is to be used as the source of an incremental zfs send without having to keep the snapshot around - that is, if you have pool/fs1@snap3, @snap4, and @snap5, and you already used zfs send|recv to copy pool/fs1@snap3 somewhere, you could make pool/fs1#snap3 and destroy pool/fs1@snap3, and later be able to do zfs send -i pool/fs1#snap3 pool/fs1@snap4.
 * Clones are not really a separate type of dataset, but merit mentioning here. Whenever you have a snapshot of a filesystem or volume, but want to make a read-write version of it, you could clone it with zfs clone pool/fs1@snap1 pool/clonefs1, and you'll have a filesystem at pool/clonefs1 that is read-write and starts out identical to the snapshot state at pool/fs1@snap1.

To control the size of a filesystem or volume, you can set a quota as a maximum, and/or you can reserve a certain amount of storage within a zpool so that no other dataset on the pool can use that space first. Filesystems default to being able to use all unreserved space on the pool and have no reservation. Volumes have a size (which can be adjusted) set at creation time, which acts as an implicit quota, and unless created sparse they also set a reservation for their whole size, to avoid running out of space when trying to overwrite a block.

Create a filesystem
We use our zpool zfs_test to create a new filesystem called dataset1:
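The dataset is named relative to its pool:
zfs create zfs_test/dataset1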

The filesystem will be mounted automatically as /zfs_test/dataset1/

Mount/umount filesystem
Datasets can be mounted with the following command; the mountpoint is defined by the mountpoint property of the dataset:
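For the dataset created above:
zfs mount zfs_test/dataset1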

To unmount the dataset:
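Similarly:
zfs umount zfs_test/dataset1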

The directory /zfs_test/dataset1 remains, but without the dataset behind it. If you write data to it and then try to mount the dataset again (and have the overlay property set to off, which is not the default), you will see the following error message:
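A quick way to reproduce this (the exact error text may vary between versions):
touch /zfs_test/dataset1/somefile
zfs mount zfs_test/dataset1
# typically fails with: cannot mount '/zfs_test/dataset1': directory is not empty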

Remove datasets
To remove the filesystem dataset1 from zpool zfs_test:
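Assuming no snapshots or child datasets exist:
zfs destroy zfs_test/dataset1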

Properties
Properties for datasets are inherited from their parent dataset, all the way up to the "root" dataset with the same name as the pool. So you can change properties by changing them on a dataset, or on the parent it inherits from, and so on up to the "root", depending on how widely you want the change to apply.

To set a property for a dataset:
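For example, to enable lz4 compression (compression is just one of many properties):
zfs set compression=lz4 zfs_test/dataset1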

To show the setting for a particular property on a dataset:
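Continuing the example:
zfs get compression zfs_test/dataset1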

You can get a list of all properties set on every dataset with the following command:
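The get subcommand with the special property name all:
zfs get all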

This is a partial list of properties that can be set on either zpools or datasets; for a full list see zfsprops(7):

Set mountpoint
To set the mountpoint for a filesystem, use the following command:
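For example, to move dataset1 to /mnt/data:
zfs set mountpoint=/mnt/data zfs_test/dataset1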

The dataset1 mount will be automatically moved to /mnt/data.

NFS filesystem share
Activate NFS share on a filesystem:
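Via the sharenfs property:
zfs set sharenfs=on zfs_test/dataset1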

By default the filesystem is shared using the exportfs command in the following manner. See exportfs(8) and exports(5) for more information.

Otherwise, the command is invoked with options equivalent to the contents of this property:

To stop sharing the filesystem:
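Set the property back to off:
zfs set sharenfs=off zfs_test/dataset1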

Creating a snapshot
To create a snapshot of a dataset, use the following command:
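For example (snapshot1 is an arbitrary snapshot name):
zfs snapshot zfs_test/dataset1@snapshot1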

Whenever data is overwritten or outright deleted on the filesystem, it starts counting against the space only referenced by snapshots instead - if it's only referenced by one snapshot, it'll show up in the USED property for that snapshot.

Listing
List all available snapshots:
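Using the -t option of zfs list:
zfs list -t snapshot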

Rollback
To roll back a full dataset to a previous state:
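Continuing the example above:
zfs rollback zfs_test/dataset1@snapshot1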

Removal
Remove snapshots of dataset1 with the following command:
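For the example snapshot:
zfs destroy zfs_test/dataset1@snapshot1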

Scrubbing
To start a scrub for the zpool zfs_test:
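The scrub subcommand takes the pool name:
zpool scrub zfs_test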

Log files
To check the history of commands that were executed:
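For the test pool:
zpool history zfs_test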

Monitor I/O
Monitor I/O activity on all zpools (refreshes every 6 seconds):
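Passing the interval in seconds to zpool iostat:
zpool iostat 6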

ZFS root
Booting from a ZFS filesystem as the root filesystem requires a ZFS-capable kernel and an initial ramdisk (initramfs) that contains the ZFS userspace utilities. The easiest way to set this up is as follows.

First, make sure to have compiled a kernel with ZFS support and installed both the kernel image and its modules so they are available at boot time.

Install and configure genkernel.
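A plausible sequence (genkernel's --zfs option includes ZFS support in the generated initramfs):
emerge --ask sys-kernel/genkernel
genkernel --zfs initramfs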

Install a bootloader, for example GRUB2.
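For GRUB2 on Gentoo:
emerge --ask sys-boot/grub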

Configure grub to use ZFS, and which dataset to boot from.
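With a genkernel-built initramfs this is typically done by passing dozfs and root=ZFS=<dataset> on the kernel command line, e.g. in /etc/default/grub (zfs_test/rootfs is a hypothetical root dataset; adjust to your layout):
GRUB_CMDLINE_LINUX="dozfs root=ZFS=zfs_test/rootfs"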

Finally, install grub to your boot device and create the grub configuration.
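For example (/dev/sda is a placeholder for your boot device):
grub-install /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg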

Caveats

 * Swap: On systems with extremely high memory pressure, using a zvol for swap can result in lockup, regardless of how much swap is still available. This issue is still being investigated upstream. Please check the current OpenZFS documentation on swap.

External resources

 * ZFS on Linux
 * OpenZFS
 * ZFS Best Practices Guide
 * ZFS Evil Tuning Guide
 * Article about ZFS on Linux/Gentoo (German)
 * ZFS Administration