Install Gentoo Linux on OpenZFS using EFIStub Boot
Author: Michael Crawford (ali3nx)
Contact: mcrawford@eliteitminds.com
Preface
This guide will show you how to install Gentoo Linux on AMD64 with:
- UEFI-GPT (EFI System Partition) - on an unencrypted FAT32 partition, as per the UEFI specification
- /, /home/username on segregated ZFS datasets
- /home, /usr, /var, /var/lib zfs dataset containers created for pool dataset structure
- swap on a regular partition
- OpenZFS 0.8.5
- efistub boot without grub
- genkernel initramfs
- systemd or openrc
- Gentoo Stable (amd64)
Why efistub boot!? grub works for everyone!
- UEFI motherboard firmware has been the default on all modern computer hardware since around 2013, entirely deprecating the legacy BIOS.
- The modernization and wide availability of UEFI motherboards has removed the mandatory requirement for software bootloaders such as grub.
- When UEFI booted, grub itself uses efistub to boot both itself and the Linux installs it manages. This additional layer of indirection is unnecessary for booting Linux.
- Intel has publicly stated that legacy BIOS CSM compatibility support will be removed entirely from new hardware manufactured after 2020, forcing the use of true UEFI boot modes.
Why not use grub with zfs!?
- The zfsroot wiki guides from zfsonlinux and many distros all advise using the grub bootloader. This can work, but grub does not fully support the newest zfs pool feature flags, so using it adds risk and complication that can be entirely avoided by using a UEFI efistub configuration to boot your zfs root pool directly.
- The risk of using grub with zfs arises from grub's lack of support for modern pool features. The administrator must tread carefully to ensure that a global zpool upgrade is never run; otherwise the legacy pool feature flags grub requires will be upgraded and the zfsroot configuration becomes unbootable. Such an upgrade cannot be undone, and recovery would require major surgery from a livecd.
- Building a new system install around a legacy configuration means accepting additional ongoing maintenance to keep that legacy configuration working.
- zfs rootfs dataset encryption is easier to configure utilizing efistub boot.
Required Tools
Download the System Rescue CD + ZFS ISO
LiveUSB Creation
We will be creating a UEFI Bootable USB since this guide will be showing you how to install Gentoo Linux on ZFS with UEFI Enabled.
For the following commands, we will assume that your USB is /dev/sdg.
Format the USB
root #
parted -a optimal /dev/sdg
(parted)
mklabel msdos
(parted)
mkpart primary 1 -1
(parted)
set 1 boot on
(parted)
print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdg: 8192MiB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start    End      Size     Type     File system  Flags
 1      1.00MiB  8191MiB  8190MiB  primary               boot, lba
(parted)
quit
Create the FAT32 Filesystem on the USB
We will now create the FAT32 filesystem on the USB. This needs to be FAT32 since this is the filesystem used in the UEFI Specification. The label we will use for this partition will be in the following format SYSRCDXYZ, where XYZ is the version number of the System Rescue CD you downloaded.
For example, if you are using System Rescue CD 6.1.3, the label will be SYSRCD613.
root #
mkfs.fat -F32 -n SYSRCD613 /dev/sdg1
Make folders to mount USB and ISO
root #
mkdir /tmp/usb
root #
mkdir /tmp/sysresccd
Mount your USB and your ISO
root #
mount -o loop,ro sysresccd-6.1.3_zfs_0.8.4.iso /tmp/sysresccd
root #
mount /dev/sdg1 /tmp/usb
Copy files over from ISO to USB
root #
rsync -avP /tmp/sysresccd/ /tmp/usb/
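Once the copy completes, it is good practice to flush pending writes and unmount both mounts before removing the USB. A small optional tidy-up, using standard util-linux commands:
root #
sync
root #
umount /tmp/usb /tmp/sysresccd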
And that's it! You now have a Bootable UEFI USB.
Windows
Rufus is the USB utility I recommend on Windows for writing the Fearedbliss ZFS ISO. You can download Rufus here.
- Start Rufus
- Select your USB Device from the Device drop down.
- Select the GPT partition scheme and the UEFI (non-CSM) target system option
- Do not use MBR, or UEFI boot will not function
- Select the FAT32 filesystem
- Select your ISO by clicking SELECT.
- Click START.
This should be all that's necessary to have a Bootable UEFI USB.
Assumptions
- Only installing Gentoo on one drive called /dev/sda (or /dev/nvme0n1, etc)
- Fearedbliss System Rescue CD + ZFS iso is being used.
- genkernel is being used as your initramfs.
- gentoo-sources is being used as your kernel.
Boot your system into the zfs LiveUSB
Since this is highly computer dependent, you will need to figure out how to boot your USB on your system and get to the live environment. You may need to disable Secure Boot if that causes your USB to be rejected. Make sure your system BIOS/UEFI is set up to boot UEFI devices, rather than BIOS devices (Legacy).
Confirm that you booted in UEFI Mode
After booting into the live environment, make sure that you booted in UEFI mode by typing the following:
root #
ls /sys/firmware/efi
If the above directory is empty or doesn't exist, you are not in UEFI mode. Reboot and boot into UEFI mode.
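On a UEFI-booted system the directory will be populated. The exact entries vary by firmware and kernel version, but the listing may resemble the following:
config_table  efivars  fw_platform_size  fw_vendor  runtime  runtime-map  systab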
Continuing the installation without being in UEFI mode will most likely yield an unbootable system. If you want to install in BIOS mode, you will need a different setup.
Partition
We will now partition the drive and aim to create the following layout:
/dev/sda1 | 512 MB       | EFI System Partition | /efi
/dev/sda2 | 32768 MB     | swap                 | swap
/dev/sda3 | Rest of Disk | ZFS                  | /, /home/username ...
Many UEFI motherboard firmwares are extremely buggy. We will use a 512 MiB FAT32 partition configuration to increase the chance of success.
A 512 MB ESP is also beneficial to provide adequate space for the roughly 250 MB genkernel initramfs file.
Open up your drive in GNU parted and tell it to use optimal alignment:
root #
parted -a optimal /dev/sda
Keep in mind that all of the following operations will affect the disk immediately. GNU parted does not stage changes like fdisk or gdisk.
Create GPT partition layout
This will delete all partitions and create a new GPT table.
A larger swap will accommodate hibernation, should that be desired. 32 GB of swap is used in the example below to accommodate many different hardware configurations.
(parted)
mklabel gpt
Create and label your partitions
(parted)
mkpart esp fat32 1 513
(parted)
mkpart swap linux-swap 513 33280
(parted)
mkpart rootfs ext4 33280 100%
parted does not offer a zfs filesystem type, so ext4 is used temporarily. The filesystem type is only a label here; it is autodetected later and becomes irrelevant after zpool creation.
Set the bootable flag on the ESP partition
(parted)
set 1 boot on
Final View
(parted)
print
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name    Flags
 1      1049kB  513MB   512MB   fat32           esp     boot, esp
 2      513MB   33.3GB  32.8GB  linux-swap(v1)  swap
 3      33.3GB  500GB   467GB                   rootfs
Exit the application
(parted)
quit
Format your drives
Format your UEFI ESP partition
root #
mkfs.vfat -F32 /dev/sda1
This partition needs to be FAT32, as this is a UEFI requirement. If it isn't, your system will not boot!
Create your swap
root #
mkswap -f /dev/sda2
root #
swapon /dev/sda2
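To confirm the swap area is active, swapon can list it. The output will resemble the following (sizes will match your partition):
root #
swapon --show
NAME      TYPE      SIZE USED PRIO
/dev/sda2 partition  32G   0B   -2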
- Do not put your swap inside a zvol. System lockups are possible when RAM is 100% full and the system starts swapping while the swap is on ZFS. There is an open, unresolved OpenZFS bug regarding this, so it is best avoided. Swap under memory pressure does not crash when the swap is on a normal partition.
Determine the /dev/disk/by-id identifier
Using traditional block device identifiers such as /dev/sda or /dev/nvme0n1 with zfs can work, but it can also be undesirable due to the possibility of a block device name changing. Something as simple as connecting a USB storage device can cause this to occur.
Should this ever happen, the zfs pool is unaware that the change occurred, which can render the pool inoperable. Using non-generic, device-specific disk identifiers, which include the disk serial number, is therefore more desirable with zfs. It also provides added utility for identifying a faulty disk in larger zfs pools.
To determine the non-generic ata disk identifier, type the following:
root #
ls -l /dev/disk/by-id
lrwxrwxrwx 1 root root 9 Mar 2 11:28 ata-Samsung_SSD_860_EVO_500GB_serialnum -> ../../sda
lrwxrwxrwx 1 root root 10 Mar 2 11:28 ata-Samsung_SSD_860_EVO_500GB_serialnum-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Mar 2 11:28 ata-Samsung_SSD_860_EVO_500GB_serialnum-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Mar 2 11:28 ata-Samsung_SSD_860_EVO_500GB_serialnum-part3 -> ../../sda3
NVMe storage devices would resemble this example:
root #
ls -l /dev/disk/by-id
lrwxrwxrwx 1 root root 13 Mar 2 11:28 nvme-Samsung_SSD_960_PRO_512GB_serialnum -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Mar 2 11:28 nvme-Samsung_SSD_960_PRO_512GB_serialnum-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Mar 2 11:28 nvme-Samsung_SSD_960_PRO_512GB_serialnum-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Mar 2 11:28 nvme-Samsung_SSD_960_PRO_512GB_serialnum-part3 -> ../../nvme0n1p3
Generally, using /dev/disk/by-id/ata-* or /dev/disk/by-id/nvme-* identifiers is more desirable to ensure the disk block device reference is specific to the disk.
There may also be /dev/disk/by-id/wwn-* or /dev/disk/by-id/nvme-eui.* identifiers, shown in the examples below. Avoid using these with this guide if possible.
root #
ls -l /dev/disk/by-id/wwn*
lrwxrwxrwx 1 root root 10 Mar 2 11:28 wwn-0x5002538e40aba28d-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Mar 2 11:28 wwn-0x5002538e40aba28d-part2 -> ../../sda2
root #
ls -l /dev/disk/by-id/nvme*
lrwxrwxrwx 1 root root 13 Mar 2 11:28 nvme-eui.0025385971b064dd -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Mar 2 11:28 nvme-eui.0025385971b064dd-part1 -> ../../nvme0n1p1
Create your zpool
Create your zpool which will contain your drives and datasets:
xattrs and posixacl are enabled to provide support for modern filesystem security features. Relative atime updates, which are a global default in ext4, are enabled as well.
xattr support is necessary for the proper functionality of systemd-journald.
Substitute ata-disk1-part3 for nvme-disk1-part3 if you have an nvme ssd disk.
root #
zpool create -f -o ashift=12 -o cachefile=/etc/zfs/zpool.cache -O compression=lz4 -O xattr=sa -O relatime=on -O acltype=posixacl -O dedup=off -m none -R /mnt/gentoo rpool /dev/disk/by-id/ata-disk1-part3
Create your rootfs zfs datasets
Create the dataset container structure and dataset necessary for /.
root #
zfs create -o mountpoint=none -o canmount=off rpool/ROOT
root #
zfs create -o mountpoint=/ rpool/ROOT/gentoo
Set the bootfs property for the zfs root dataset
root #
zpool set bootfs=rpool/ROOT/gentoo rpool
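You can verify the property was applied before continuing, using standard zpool syntax:
root #
zpool get bootfs rpool
NAME   PROPERTY  VALUE              SOURCE
rpool  bootfs    rpool/ROOT/gentoo  local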
Create /usr, /var, /var/lib and /home zfs dataset containers
Creating several unmounted dataset containers is necessary to provide dataset structure for the zfs pool. Creating these containers after the install is complete is disruptive and involved; it is best done before filesystem contents are written to disk, to ensure the system will boot.
Dataset containers for /usr and /var especially benefit from this having been completed in advance.
This structures datasets within the pool for correct dataset segregation.
The /var/lib dataset container is created to allow for easy creation of /var/lib/foo datasets for system or network services, if desired at a later date.
The rpool/home dataset container is created to segregate user home directory datasets from the rootfs dataset. This improves incremental rootfs snapshot size management and ensures that rootfs snapshots do not fill the available pool storage space.
Additional accommodation must be made when using systemd with zfs: the /home dataset container must not be configured with a mountpoint, as systemd may attempt to create a new /home directory on system boot, causing the user home directory datasets to fail to mount due to a pool import mountpoint conflict.
Creating the rpool/home dataset container with the canmount=off option and no directory mountpoint ensures this complication is unlikely to occur.
root #
zfs create -o canmount=off rpool/usr
root #
zfs create -o canmount=off rpool/var
root #
zfs create -o canmount=off rpool/var/lib
root #
zfs create -o canmount=off rpool/home
Create user home directory dataset
Replace username with the desired user name
root #
zfs create -o mountpoint=/home/username rpool/home/username
Verify everything looks good
You can verify that all of these things worked by running the following:
root #
zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          vda3      ONLINE       0     0     0

errors: No known data errors
I created a qemu vm to provide the zpool status representation. qemu and the livecd I used did not provide /dev/disk/by-id for qemu virtual disks. If installing on bare metal hardware this should not be a complication.
root #
zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
rpool                1.20M   418G    96K  none
rpool/ROOT            192K   418G    96K  none
rpool/ROOT/gentoo      96K   418G    96K  /mnt/gentoo
rpool/home            192K   418G    96K  none
rpool/home/username    96K   418G    96K  /mnt/gentoo/home/username
rpool/usr              96K   418G    96K  none
rpool/var             192K   418G    96K  none
rpool/var/lib          96K   418G    96K  none
Now we are ready to install Gentoo!
Installing Gentoo
Set your date and time
We use ntpdate to set an accurate time, date, and hardware clock, mitigating clock skew that can cause software compilation to malfunction.
root #
ntpdate -u pool.ntp.org
2 Mar 19:32:19 ntpdate[12777]: adjust time server 216.232.132.31 offset 0.454897 sec
Preparing to chroot
First let's mount our efi boot partition in our chroot directory:
root #
cd /mnt/gentoo
root #
mkdir efi
root #
mount /dev/sda1 efi
We'll use the Oregon State University Gentoo Linux mirror.
If you desire, use a different regional mirror from the official Gentoo Linux mirror list.
Download the systemd amd64 stage3 system archive and extract it
root #
wget <file>
root #
tar xJpvf stage3-*.tar.xz --xattrs-include='*.*' --numeric-owner
Copy zpool cache
root #
mkdir etc/zfs
root #
cp /etc/zfs/zpool.cache etc/zfs
Copy network settings
root #
cp --dereference /etc/resolv.conf /mnt/gentoo/etc/
Mounting the necessary filesystems
root #
mount --types proc /proc /mnt/gentoo/proc
root #
mount --rbind /sys /mnt/gentoo/sys
root #
mount --make-rslave /mnt/gentoo/sys
root #
mount --rbind /dev /mnt/gentoo/dev
root #
mount --make-rslave /mnt/gentoo/dev
Entering the new environment
root #
chroot /mnt/gentoo /bin/bash
root #
source /etc/profile
root #
export PS1="(chroot) ${PS1}"
Inside the chroot
Edit fstab
Using disk UUIDs to denote block device entries in fstab has become the more desirable default. It ensures that an unexpected block device name change never renders a filesystem unmountable as a result of fstab becoming inaccurate.
Something as simple as connecting a USB storage device to a booted system has been known to cause this to occur.
The blkid command reveals the disk identifiers available for partitions created on GPT disk labels.
Despite the disk partition names we created, disk UUIDs are more specific.
root #
blkid
/dev/loop0: TYPE="squashfs"
/dev/vda1: UUID="9E40-2218" TYPE="vfat" PARTLABEL="esp" PARTUUID="ce3ca4f8-bf90-42ae-9ed3-fbd34a718fd9"
/dev/vda2: UUID="fac87c68-50ef-424b-9673-dfd0a9890aff" TYPE="swap" PARTLABEL="swap" PARTUUID="5475ac59-f72a-40eb-80f1-7a634bc04f5c"
/dev/vda3: LABEL="rpool" UUID="3195477004188779862" UUID_SUB="13330732843625778565" TYPE="zfs_member" PARTLABEL="rootfs" PARTUUID="7997947d-1530-4c4e-be93-c76b6c966822"
/dev/sr0: UUID="2019-09-27-14-03-43-10" LABEL="Gentoo amd64 latest" TYPE="iso9660" PTUUID="2db7a891" PTTYPE="dos"
Everything is on zfs, so we don't need anything in here except for the boot and swap entries. fstab should resemble the following example. Substitute the UUIDs provided by your blkid command:
root #
nano /etc/fstab
UUID=9E40-2218                              /efi  vfat  noauto,defaults  1 2
UUID=fac87c68-50ef-424b-9673-dfd0a9890aff   none  swap  sw               0 0
Modify make.conf
Let's modify our /etc/portage/make.conf so we can start installing stuff with a good base (Change it to what you need):
root #
nano /etc/portage/make.conf
USE="caps cgroup-hybrid" # This should be your number of processors + 1 MAKEOPTS="-j5" EMERGE_DEFAULT_OPTS="--with-bdeps y --complete-graph y" # knight rider rides again! FEATURES="candy fixlafiles unmerge-orphans" ACCEPT_LICENSE="*"
Get the portage tree
Copy the default example portage config
root #
mkdir /etc/portage/repos.conf
root #
cp /usr/share/portage/config/repos.conf /etc/portage/repos.conf/gentoo.conf
root #
emerge-webrsync
Install required applications
Now install the initial apps:
root #
emerge bash-completion eix gentoolkit genkernel efibootmgr dosfstools gentoo-sources linux-firmware cronie intel-microcode parted
Kernel Configuration
Reviewing the current gentoo-sources Linux kernel version
Gentoo provides eselect to manage many core system environment variables including the active /usr/src/linux symlink.
root #
eselect kernel list
Available kernel symlink targets:
[1] linux-5.4.72-gentoo *
The result of eselect should match the active Linux kernel symlink:
root #
ls -l /usr/src/
total 9
lrwxrwxrwx 1 root root 20 Mar 3 00:20 linux -> linux-5.4.72-gentoo
drwxr-xr-x 26 root root 39 Mar 3 00:20 linux-5.4.72-gentoo
Necessary kernel configuration features
efistub boot relies on a key Linux kernel configuration feature to function
Processor type and features  --->
    [*] EFI runtime service support
sys-fs/zfs requires Zlib kernel support (module or builtin).
General architecture-dependent options  --->
    GCC plugins  --->
        [ ] Randomize layout of sensitive kernel structures
Cryptographic API  --->
    <*> Deflate compression algorithm
Security options  --->
    [ ] Harden common str/mem functions against buffer overflows
sys-apps/systemd relies on the following menu options provided by sys-kernel/gentoo-sources
Gentoo Linux  --->
    [*] Gentoo Linux support
    [*]   Linux dynamic and persistent device naming (userspace devfs) support
    [*]   Select options required by Portage features
          Support for init systems, system and service managers  --->
              [*] systemd
              [*] openrc
The Linux kernel provides a console based configuration menu. Select the required configuration features in addition to necessary configuration features for your hardware.
root #
cd /usr/src/linux
root #
make menuconfig
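After saving the configuration, a quick grep of the generated .config can confirm the key options took effect. The option names below are taken from the mainline kernel sources (CONFIG_EFI_STUB is the EFI stub itself, which efistub boot also requires; CONFIG_ZLIB_DEFLATE may be y or m). The output will resemble:
root #
grep -E 'CONFIG_EFI=|CONFIG_EFI_STUB|CONFIG_ZLIB_DEFLATE' /usr/src/linux/.config
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_ZLIB_DEFLATE=y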
Compile the Linux kernel
root #
cd /usr/src/linux
root #
make && make modules_install install
Install zfs software and kernel module
sys-fs/zfs and sys-fs/zfs-kmod must be installed after kernel configuration is complete
Install ZFS software
root #
emerge sys-fs/zfs-kmod sys-fs/zfs
Enable zfs systemd services - systemd only
root #
systemctl enable zfs.target
root #
systemctl enable zfs-import-cache
root #
systemctl enable zfs-mount
root #
systemctl enable zfs-import.target
Enable zfs openrc services - openrc only
root #
rc-update add zfs-import boot
root #
rc-update add zfs-mount boot
root #
rc-update add zfs-share default
root #
rc-update add zfs-zed default
Generate and verify the zfs hostid file
This is necessary for genkernel initramfs generation and zfs pool import integrity verification
root #
zgenhostid
root #
file /etc/hostid
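file should report a small binary data file. The output will resemble the following:
/etc/hostid: data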
Installing the gentoo-sources kernel binary
Install the kernel
root #
mkdir -p /efi/efi/gentoo
root #
cd /efi/efi/gentoo
root #
cp /boot/vmlinuz-5.4.72-gentoo vmlinuz-5.4.72-gentoo.efi
Generate and copy initramfs file to its correct location
root #
genkernel initramfs --zfs --firmware --compress-initramfs --microcode-initramfs --kernel-config=/usr/src/linux/.config
root #
cd /efi/efi/gentoo
root #
cp /boot/initramfs-5.4.72-gentoo.img .
Installing the bootloader onto your drive
We will need to create a bootloader entry in the UEFI firmware to direct-boot the Linux kernel and initramfs.
The following command will install the UEFI bootloader entry in firmware, referencing the kernel and initramfs located at /efi/efi/gentoo.
Edit the Linux kernel version to the desired current version used.
root #
efibootmgr --disk /dev/sda --part 1 --create --label "Gentoo ZFS 5.4.72" --loader "\efi\gentoo\vmlinuz-5.4.72-gentoo.efi" --unicode 'root=ZFS=rpool/ROOT/gentoo ro initrd=\efi\gentoo\initramfs-5.4.72-gentoo.img'
efibootmgr will print the UEFI firmware loader table contents upon success, also revealing the newly altered boot order:
root #
efibootmgr
BootCurrent: 0001
Timeout: 0 seconds
BootOrder: 0003,0001,0000,0002
Boot0000* UiApp
Boot0001* UEFI QEMU DVD-ROM QM00001
Boot0002* EFI Internal Shell
Boot0003* Gentoo ZFS 5.4.72
Final steps before reboot
root #
passwd
root #
exit
root #
reboot
After you reboot
Take a snapshot of your new system
Since we now have a working system, we will snapshot it in case we ever want to go back or recover files:
root #
zfs snapshot rpool/ROOT/gentoo@2020-03-02-0000-01-INSTALL
root #
zfs snapshot rpool/home/username@2020-03-02-0000-01-INSTALL
You can view the status of these snapshots using the zfs command
root #
zfs list -t snapshot
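The snapshot listing will resemble the following (sizes illustrative):
NAME                                             USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/gentoo@2020-03-02-0000-01-INSTALL       0B      -  1.01G  -
rpool/home/username@2020-03-02-0000-01-INSTALL     0B      -    96K  -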
ZFS dataset snapshot automation
sys-fs/zfs-auto-snapshot can be installed and configured to provide dataset snapshot automation.
root #
emerge sys-process/cronie sys-fs/zfs-auto-snapshot
Enable and start the cronie cron daemon, as required for zfs-auto-snapshot to function.
root #
systemctl enable cronie.service
root #
systemctl start cronie.service
Configure daily and weekly snapshot generation for rpool/ROOT/gentoo
root #
zfs set com.sun:auto-snapshot:daily=true rpool/ROOT/gentoo
root #
zfs set com.sun:auto-snapshot:weekly=true rpool/ROOT/gentoo
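You can verify both properties were set using standard zfs get syntax; the output will resemble:
root #
zfs get com.sun:auto-snapshot:daily,com.sun:auto-snapshot:weekly rpool/ROOT/gentoo
NAME               PROPERTY                      VALUE  SOURCE
rpool/ROOT/gentoo  com.sun:auto-snapshot:daily   true   local
rpool/ROOT/gentoo  com.sun:auto-snapshot:weekly  true   local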
Limiting the ARC size
If you want to cap the ZFS ARC from growing past a certain point, you can put the number of bytes inside the /etc/modprobe.d/zfs.conf file, and then remake your initramfs. When the system starts up, and the module is loaded, these options will be passed to the zfs kernel module.
ARC cache memory usage will vary depending on zfs pool sizes. I've had a 50TB single-vdev raidz2 pool consume 24GB of memory at system idle when unlimited; however, zfs will generally default to using 50% of available system memory for the ARC cache.
(Temporary) Change the ARC max for the running system to 4 GB
root #
echo 4294967296 >> /sys/module/zfs/parameters/zfs_arc_max
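Reading the parameter back confirms the change took effect immediately:
root #
cat /sys/module/zfs/parameters/zfs_arc_max
4294967296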
(Permanent) Save the 4 GB ARC cap as a loadable kernel parameter
root #
echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf
Once we have the above file created, let's regenerate the initramfs. genkernel will automatically detect that this file exists and copy it into the initramfs. When you reboot your machine, the initramfs will load up the zfs kernel module with the parameters found in the file.
root #
genkernel initramfs --zfs --firmware --compress-initramfs --microcode-initramfs --kernel-config=/usr/src/linux/.config
root #
mount /efi
root #
cd /efi/efi/gentoo
root #
cp /boot/initramfs-5.4.72-gentoo.img .
root #
cd
root #
umount /efi
Limiting maximum trim I/Os active to each device (optional)
Some hard disk controllers or SSDs may exhibit disk controller resets when zpool trim <poolname> is run, due to either the disk controller or the disk being unable to process multiple simultaneous trim commands issued by the disk controller driver.
A known workaround is to reduce the default value of zfs_vdev_trim_max_active from the default value of 2 to 1 using a zfs driver parameter in the /etc/modprobe.d/zfs.conf file, and then remake your initramfs. When the system starts up, and the module is loaded, these options will be passed to the zfs kernel module.
I've had this behavior or symptom occur using an LSI 9305-16i HBA controller which relies on the mpt3sas kernel driver with Samsung 860 evo ssd's.
There is an open bug on openzfs git discussing this issue.
If this symptom occurred while a sysadmin had zpool trim configured to run from a crontab schedule, a zfs pool scrub may be required; pool desync or, at the very worst, data corruption may occur. zfs has always detected the controller reset behavior as an unrecoverable error affecting the pool or a disk within it, prompting either zpool replace to be used or zpool clear to clear the error state.
(Temporary) Change maximum trim I/Os active to each device.
root #
echo 1 > /sys/module/zfs/parameters/zfs_vdev_trim_max_active
(Permanent) Save the maximum trim I/Os active to each device as a loadable kernel parameter
root #
echo "options zfs zfs_vdev_trim_max_active=1" >> /etc/modprobe.d/zfs.conf
Once we have the above file created, let's regenerate the initramfs. genkernel will automatically detect that this file exists and copy it into the initramfs. When you reboot your machine, the initramfs will load up the zfs kernel module with the parameters found in the file.
root #
genkernel initramfs --zfs --firmware --compress-initramfs --microcode-initramfs --kernel-config=/usr/src/linux/.config
root #
mount /efi
root #
cd /efi/efi/gentoo
root #
cp /boot/initramfs-5.4.72-gentoo.img .
root #
cd
root #
umount /efi
Successful Installations
- My custom gentoo zfs HTPC nas server
- Gentoo HTPC zfs NAS Neofetch
- TdDF Gentoo zfs nas server - Austin Texas USA. Installed remotely 12/2019. i9-9900k, 32GB DDR4, 7x10TB WD Red's raidz2, Adata SSD root mirror pool.
Credit and Thanks
- Fearedbliss and Rayo - zfs and Gentoo wouldn't be what they have become without their generous dedication and contributions.
- Everyone who helped me learn in 16 years of using Gentoo. I promise to pay it forward.
- Kerframil for the Low latency coffee! Go Kerf :)