zram (previously called compcache) creates RAM-based block devices. It has been an experimental (staging) module of the Linux kernel since 3.2.
What it does is create a compressed block device in RAM. That block device can then be used for swap or as a general-purpose RAM disk. The two most popular uses are swap, to extend the amount of memory available to processes, and /tmp. The RAM used for the block device is dynamically allocated and released, up to its predefined uncompressed maximum size. Used as swap, it extends the amount of available memory by keeping a portion of RAM as compressed swap, so it can hold more pages of memory in that compressed swap than the amount of actual RAM it consumes. Typically it compresses at about a 3:1 ratio, so 1G of swap uses only about 333MB of RAM on average. The effective ratio, including memory used for device overhead, varies with how full the device is: I found it to range from 1.5:1 for a 1.5G disk with only 5% of its space used to over 3:1 when nearly full. It is also much faster at swapping pages than typical hard disk swap.
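To see the compression actually achieved on a running system, the zramctl tool from sys-apps/util-linux can be used, or the raw counters under sysfs can be read directly (the mm_stat file only exists on reasonably recent kernels):

root # zramctl                            # shows DISKSIZE vs. DATA vs. COMPR per device
root # cat /sys/block/zram0/mm_stat       # raw counters: original size, compressed size, memory used, ...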
My experience with using it: my system remained fully functional, with only slight slowdowns at times. This was an Xfce4 desktop with several applications open and emerge running with PORTAGE_NICENESS=10, with memory and swap space nearly maxed out, on an Intel Core 2 Quad 2.6GHz with 4G of RAM. I had four 1.5G zram disks for swap, plus a 1G hard drive swap partition as backup. At one point, while linking chromium, the system was using just over 5G of zram swap, about 1.2G of RAM and about 100MB of hard disk swap, and the desktop was still responsive :)
Each zram device contains its own compression buffer, memory pools and other metadata, as well as per-device locks. This can become a serious bottleneck on multi-core machines. To work around this problem, zram can initialize multiple devices. The recommended number of devices for swap is one per CPU core.
For systems with limited memory, non-swap use can reduce the amount of memory available to run applications.
I recommend enabling zram as a loadable module, because the number of devices is set when the module is loaded. If you compile the module into the kernel, it will default to one device sized at 25% of total memory. It is possible to reconfigure the size later, but the number of devices can then only be changed at boot time with a kernel boot parameter.
When it is a loadable module, the number of devices can be changed later without a reboot. You will, however, have to deactivate any existing devices before the number of devices can be increased or decreased.
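As a minimal sketch, assuming the devices are currently unused, the device count can be changed by reloading the module; num_devices is the zram module's own parameter:

root # swapoff /dev/zram0              # first deactivate everything that uses the devices
root # rmmod zram
root # modprobe zram num_devices=4     # reload with four devices

To make the count persistent, a line such as options zram num_devices=4 can go into /etc/modprobe.d/zram.conf; for a built-in driver the equivalent kernel boot parameter is zram.num_devices=4.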
Gentoo/OpenRC init script
By far the easiest method of using zram disk(s) is Martin Väth's zram-init script, which is currently available in the main tree:
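root # emerge --ask sys-block/zram-init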
Edit the /etc/conf.d/zram-init file and create/configure your desired zram devices. There are plenty of comments and instructions in the file, so proceed with editing and save it when you're done. A possible configuration for the example below is sketched after the list.
- Specs: dual core CPU, 2G total RAM
- Configure a total of 1G of swap and 512MB of /tmp
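One way this example could look in /etc/conf.d/zram-init; the variable names follow the comments shipped with recent versions of the file and may differ between zram-init versions, so treat this as a sketch and verify against your own copy:

# /etc/conf.d/zram-init (sketch: two 512MB swap devices, one per core, plus 512MB /tmp)
load_on_start="yes"
unload_on_stop="yes"
num_devices="3"

type0="swap"
size0="512"        # sizes are uncompressed sizes in MB

type1="swap"
size1="512"

type2="/tmp"
flag2="ext4"       # filesystem to create on the device
size2="512"
mode2="1777"       # permissions of the mount point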
Add the init script to the desired runlevel, usually "default":
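The usual OpenRC commands for that are:

root # rc-update add zram-init default
root # rc-service zram-init start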
sys-block/zram-init also provides systemd units with self-explanatory names:
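The exact unit names can vary between versions, so list what the package installed and enable the units you need; zram_swap.service below is only an illustration:

root # systemctl list-unit-files 'zram*'
root # systemctl enable --now zram_swap.service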
For manual creation, you can use /etc/local.d and supply it with two files, zram.start and zram.stop. OpenRC will run these as appropriate as part of its normal operation. A sketch of both files for the example below follows the list.
- Specs: 4 CPU cores, 4G RAM total
- Configure four 1.5G zram swap devices and activate them.
- Estimated maximum RAM used: 2G at 3:1 compression
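A minimal sketch of the two files for this example, using the stock zram sysfs interface; the swap priority value is arbitrary and error handling is left out. Remember to make both files executable (chmod +x /etc/local.d/zram.*).

/etc/local.d/zram.start:

#!/bin/sh
# create four 1.5G zram devices and use them as high-priority swap
modprobe zram num_devices=4
for i in 0 1 2 3; do
    echo 1536M > /sys/block/zram${i}/disksize
    mkswap /dev/zram${i}
    swapon -p 16383 /dev/zram${i}
done

/etc/local.d/zram.stop:

#!/bin/sh
# deactivate the zram swap devices and unload the module
for i in 0 1 2 3; do
    swapoff /dev/zram${i}
done
rmmod zram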
Using existing tools
Another possibility is to use existing configuration files. This option works on vanilla Gentoo without installing additional software, and it is also useful if you are using systemd instead of OpenRC. The first example can be implemented using:
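A sketch of how the first example (1G of swap, 512MB of /tmp) might be wired up with standard configuration files only. The file names are the conventional locations, and the udev rule in particular is an assumption to adapt to your system; under OpenRC the module can be listed in /etc/conf.d/modules instead of modules-load.d.

# /etc/modules-load.d/zram.conf -- have systemd load the module at boot
zram

# /etc/modprobe.d/zram.conf -- one device for swap, one for /tmp
options zram num_devices=2

# /etc/udev/rules.d/10-zram.rules -- size each device and prepare it when it appears
KERNEL=="zram0", ATTR{disksize}="1024M", RUN+="/sbin/mkswap /dev/zram0"
KERNEL=="zram1", ATTR{disksize}="512M", RUN+="/sbin/mkfs.ext4 -L ztmp /dev/zram1"

# /etc/fstab -- activate the devices at boot
/dev/zram0   none   swap   sw                 0 0
/dev/zram1   /tmp   ext4   defaults,noatime   0 0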
Usage with an SSD
When using this with a really fast SSD (e.g. a Samsung 840 Pro), avoid setting rc_parallel="YES" in /etc/rc.conf. Depending on the size of the zram partitions and the speed of your RAM, some swap partitions and filesystems might not be ready when the swap and localmount services are started. In such a case, if you absolutely have to use parallel startup, consider removing these services from the boot runlevel and adding them to default instead.
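That runlevel change can be made with rc-update; check where the services currently live with rc-update show first:

root # rc-update del swap boot
root # rc-update del localmount boot
root # rc-update add swap default
root # rc-update add localmount default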