Zram

zram (previously called compcache) is a Linux kernel feature and set of userspace tools for creating compressed RAM-based block devices. It has been included as a module of the mainline Linux kernel since version 3.14. Starting with kernel version 3.15, zram supports multiple compression streams and the ability to change the compression algorithm without a system restart.

Introduction

The zram kernel module enables support for creating compressed block devices in RAM. These block devices can then be used for swap or general purpose RAM disks. Popular uses for it on Gentoo are extending the amount of RAM available to processes (swap space) and virtualizing /tmp and /var/tmp/portage, Portage's temporary directory used for software compilation.

The RAM used for the block device is dynamically obtained and released up to its predefined uncompressed maximum size. The way it extends the amount of available RAM to a system is by using a portion of the RAM as compressed swap. It can therefore hold more pages of memory in the compressed swap than the amount of actual memory used.

Typically it compresses at a 3:1 ratio, so 1 GiB of swap uses only about 333 MiB of RAM on average. The effective ratio, including memory used for device overhead, varies with how much of the maximum space is utilized: from around 1.5:1 for a 1.5 GiB device with only 5% of its space used, to over 3:1 when nearly full. Swapping pages to zram is also much faster than swapping to a hard disk.

Combining zram with a correctly tuned Portage configuration should keep a desktop system running in a responsive manner, even during intensive software compilation.

Caveats/Cons

Prior to kernel 3.15, each zram device contained its own compression buffer, memory pools, and other metadata, as well as per-device locks. This could become a serious bottleneck on multi-core machines. To work around this problem, zram is capable of initializing multiple devices. For this reason, the recommended number of swap devices for kernels prior to 3.15 is one per CPU core.

Another caveat: on systems with limited memory, non-swap use of zram can reduce the amount of memory available to run applications.

When using zram alongside a really fast SSD (e.g. Samsung 840 Pro), avoid setting rc_parallel=YES in /etc/rc.conf. Depending on the size of the zram devices and the speed of the RAM, some swap partitions and filesystems might not be ready when the swap and localmount services are started.

In such a case, if parallel startup absolutely must be used, consider removing these services from the boot runlevel and adding them to default instead.
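As a sketch of that runlevel change for the swap and localmount services (confirm the exact service names present in /etc/runlevels/boot before removing anything):

root #rc-update del swap boot
root #rc-update del localmount boot
root #rc-update add swap default
root #rc-update add localmount default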

Enabling zram support in the kernel

The following options need to be enabled in the kernel's configuration file:

KERNEL Enabling zram support
CONFIG_ZSMALLOC=y
CONFIG_ZSMALLOC_STAT=y
CONFIG_ZRAM=m
CONFIG_ZRAM_WRITEBACK=y
CONFIG_ZRAM_MEMORY_TRACKING=y

It is recommended that zram be built as a loadable module. This allows the number of zram devices to be changed without a reboot, by deactivating the zram devices and re-loading the module with new parameters. If zram is built in, the number of devices can only be changed at boot time by using the kernel boot parameter:

zram.num_devices=#
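When zram is built as a module, a minimal sketch of reconfiguring the device count at runtime looks as follows; this assumes every zram device has already been deactivated (swapoff/umount) first:

root #modprobe -r zram
root #modprobe zram num_devices=4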

In order to use the LZ4 compression algorithm, you must also enable it in kernel config:

KERNEL Enabling LZ4 support
CONFIG_CRYPTO_LZ4=y

Initialization

Using zram-init service

By far the easiest method of utilizing zram disk(s) is by using Martin Väth's zram-init script.

Note that version 2.7 is fully compatible with kernels < 3.15. If version >= 3.0 is used, the maxs (maximum concurrent streams) and algo (compression algorithm selection) options are only functional for kernels >= 3.15.

root #emerge --ask sys-block/zram-init

OpenRC

Edit the /etc/conf.d/zram-init file and create/configure the desired zram devices. There are plenty of comments and instructions in the file, so proceed with editing and be sure to save it when the appropriate modifications have been made.

Note
  1. For multicore systems, set maxs equal to the number of cores. When using an old kernel (< 3.15), configure separate swap devices per core.
  2. Set the priority of hard drive swap to low, e.g. via fstab.

An example:

  • Specs: Dual core CPU, 2GiB total RAM.
  • Configure 512 MiB of two-stream swap and 512 MiB for /tmp
FILE /etc/conf.d/zram-init
load_on_start="yes"

unload_on_stop="yes"
 
num_devices="2"

type0="swap"
flag0=
size0="512"
maxs0=2
algo0=lz4

type1="/tmp"
flag1="ext4"
size1="512"

Then, add the init script to the desired runlevel, usually boot, and start the service:

root #rc-config add zram-init boot
root #/etc/init.d/zram-init start

In this case the boot runlevel is preferable to the default runlevel because zram provides temporary storage filesystems at /tmp or /var/tmp, which are prerequisites for other services that start during the default runlevel.

systemd

The sys-block/zram-init package provides systemd units with self-explanatory names:

  • zram_swap.service
  • zram_tmp.service
  • zram_var_tmp.service

These should be copied to the /etc/systemd/system directory in order to be edited, then enabled:

root #cp /usr/lib/systemd/system/zram* /etc/systemd/system/

For example, to enable the /var/tmp directory (which includes Portage's temporary directory used for compiling packages) in zram:

root #systemctl enable zram_var_tmp
Created symlink /etc/systemd/system/local-fs-pre.target.wants/zram_var_tmp.service → /etc/systemd/system/zram_var_tmp.service.

Note, the size should be adjusted to a value large enough to hold the working directories of the packages that will be compiled. Consider raising the default value from 2048 to at least 16384 (16 GiB). See this table for estimates of the uncompressed space necessary for successful compilation.

Using OpenRC

For manual creation, create two /etc/local.d files: zram.start and zram.stop. OpenRC will run these at the appropriate points of the service run process when booting or changing runlevels.

An example:

  • Specs: 4 cpu cores, 4G RAM total
  • Configure 6G zram swap and activate.
  • Estimated maximum RAM used: 2G @ 3:1 compression ratio.
FILE /etc/local.d/zram.start
#!/bin/bash

# Load the zram module (a single device, zram0, by default)
modprobe zram

# Set the uncompressed device size to 6 GiB (value is in bytes)
SIZE=6144
echo $(($SIZE*1024*1024)) > /sys/block/zram0/disksize

# Format the device as swap and activate it with a high priority
mkswap /dev/zram0

swapon /dev/zram0 -p 10
FILE /etc/local.d/zram.stop
#!/bin/bash

# Deactivate the zram swap device
swapoff /dev/zram0

# Reset the device, freeing the memory it holds
echo 1 > /sys/block/zram0/reset

# Unload the module
modprobe -r zram
Note
Disksize may also be specified using mem suffixes (K, M, G): echo 6144M > /sys/block/zram0/disksize

Using udev

Another possibility is to use udev rules and existing configuration files. This option works on vanilla Gentoo without the need to install additional software, and it is also useful when using systemd instead of OpenRC. The first example can be implemented using:

FILE /etc/udev/rules.d/10-zram.rules
KERNEL=="zram0", SUBSYSTEM=="block", DRIVER=="", ACTION=="add", ATTR{disksize}=="0", ATTR{disksize}="512M", RUN+="/sbin/mkswap $env{DEVNAME}"
KERNEL=="zram1", SUBSYSTEM=="block", DRIVER=="", ACTION=="add", ATTR{disksize}=="0", ATTR{disksize}="512M", RUN+="/sbin/mkswap $env{DEVNAME}"
KERNEL=="zram2", SUBSYSTEM=="block", DRIVER=="", ACTION=="add", ATTR{disksize}=="0", ATTR{disksize}="512M", RUN+="/sbin/mkfs.ext4 $env{DEVNAME}"
# if you want lz4 support (since kernel 3.15) and without ext4 journaling 
# KERNEL=="zram2", SUBSYSTEM=="block", DRIVER=="", ACTION=="add", ATTR{initstate}=="0", ATTR{comp_algorithm}="lz4", ATTR{disksize}="512M", RUN+="/sbin/mkfs.ext4 -O ^has_journal -L $name $env{DEVNAME}"
FILE /etc/fstab
/dev/zram0              swap                    swap            pri=16383                                                       0 0
/dev/zram1              swap                    swap            pri=16383                                                       0 0
/dev/zram2              /tmp                    ext4            defaults                                                        0 0
FILE /etc/modprobe.d/zram.conf
options zram num_devices=3

Additionally, the zramctl utility is part of sys-apps/util-linux and can be used to configure zram devices. See man zramctl for examples of usage.
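A minimal sketch of creating an additional swap device with zramctl; the device node is allocated and printed by --find and may differ, so it is shown here as the placeholder /dev/zramN:

root #zramctl --find --size 1G --algorithm lz4
root #mkswap /dev/zramN
root #swapon -p 10 /dev/zramN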

systemd

If using systemd with this method, you must ensure that the zram module is loaded by systemd. The simplest way to achieve that is to include a file in /etc/modules-load.d/ like this one:

FILE /etc/modules-load.d/zram.conf
zram

Checking that zram is used

Check if zram is mounted as swap:

user $grep zram /proc/swaps
/dev/zram0                              partition       2097148 2816    16383

Check if zram is mounted as directories:

user $grep zram /proc/mounts
/dev/zram1 /var/tmp/portage ext4 rw,nosuid,nodev,block_validity,discard,delalloc,barrier,user_xattr,acl 0 0
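When sys-apps/util-linux is installed, running zramctl with no arguments also lists each active device along with its configured size and the amount of compressed data it currently holds:

root #zramctl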

Troubleshooting

OpenRC: zram already mounted or mount point busy

Problem: "* Mounting local filesystems ..." does not wait for mkfs.ext4 to finish, so the zram device fails to mount at boot time.

Solution: Set rc_need="udev-settle" in /etc/conf.d/localmount.

This situation occurs when running mkfs.ext4 from /etc/udev/rules.d/10-zram.rules. It is caused by mkfs.ext4 not having finished when /etc/init.d/localmount runs (even with rc_parallel="NO" in /etc/rc.conf), which causes mount to fail with mount: /tmp: /dev/zram1 already mounted or mount point busy.
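The setting from the solution above, added to the localmount configuration file (any existing settings in that file stay as they are):

FILE /etc/conf.d/localmount
rc_need="udev-settle"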

See also

  • Portage TMPDIR on tmpfs — Building packages in tmpfs both speeds up emerge times and reduces HDD/SSD wear.
  • Zswap — a lightweight compressed cache for swap pages.

External resources

  • zram in the official kernel documentation.