SSD

This article covers the usage of SSDs (Solid State Drives) on Linux. It presumes the user has a basic understanding of partitioning and formatting disk drives.

Introduction
The term Solid State Drive (SSD) is commonly used for flash-based block devices. Compared to conventional HDDs, flash-based technology offers much faster access times, lower latency, silent operation, power savings (no moving parts), and more. However, flash-based technology brings a few issues which require special system attention and care.

Dealing with empty blocks
Generally, traditional filesystems do not erase deleted data blocks but only flag them as such. Due to the nature of flash memory cells, any write operation has to be done to empty cells only. Thus writing to physically non-empty cells that a filesystem has flagged as deleted requires erasing them first, which makes the operation slower than writing to empty cells. This problem is further amplified by hardware limitations: cells can only be erased in large erase blocks, not individually.

Modern kernels make it possible to hint to the SSD which data blocks are deleted (no longer used). The described mechanism is called discard. Names of the implementations differ: TRIM for ATA and UNMAP for SCSI. Filesystem support is required in order to use discard; the majority of modern filesystems (like ext4, XFS or Btrfs) support it. There are also filesystems developed primarily for flash-based devices, such as F2FS.

There are two basic approaches to issue the discard command: using the discard mount option for continuous discard, or periodic calls of the fstrim utility.

Slowing wear out
Each write operation performed on a NAND flash cell causes wear. This fact limits the SSD's lifespan. The cell endurance varies with the technology used. Read operations, on the other hand, are straightforward and do not cause cell wear.

A basic method of increasing SSD lifespan is to distribute writes uniformly across all blocks. This method is called wear leveling and is implemented by the SSD firmware.

From the system's point of view, it is generally appropriate to reduce the amount of writes.

Discard (trim) support
The device's support of discard (sometimes referred to as trim) should be verified before performing any form of discarding on the drive.

It is possible to use the lsblk utility from sys-apps/util-linux:
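A minimal example with illustrative output (here sda stands for an SSD and sdb for a conventional HDD; actual values vary by device):

    lsblk --discard
    NAME   DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
    sda           0      512B       2G         0
    ├─sda1        0      512B       2G         0
    └─sda2        0      512B       2G         0
    sdb           0        0B       0B         0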

A device supporting discard has non-zero values in the DISC-GRAN (discard granularity) and DISC-MAX (discard max bytes) columns. In the example listing above, sda supports discard and sdb does not.

Partitioning
Sizes of SSD internal data structures (blocks and pages) vary across different devices. Filesystems operate on data structures of different sizes. For optimal performance, filesystem data structures should not cross the boundaries of the underlying SSD internal data structures, effectively minimizing the number of required internal SSD operations. This can be achieved by aligning the start of each partition; the common alignment is to 1 MiB.

Both fdisk and parted partitioning utilities support partition alignment. For parted, there is the --align option. Recent versions of fdisk should use optimal alignment by default.

It is possible to easily check the alignment of a given partition using parted:
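For example, to check whether the first partition on /dev/sda (a placeholder device) is optimally aligned:

    parted /dev/sda align-check optimal 1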

For further details about partitioning, follow the dedicated Handbook chapter.

blkdiscard
The blkdiscard utility from sys-apps/util-linux (version 2.23 or later) discards all data blocks on a given device.
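A minimal sketch; this irreversibly erases all data, and /dev/sdX is a placeholder for the target SSD:

    blkdiscard /dev/sdX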

LVM
LVM aligns to 1 MiB boundaries and passes discards to underlying devices by default. No additional configuration is required.

In order to discard all unused space in a Volume Group (VG), use the blkdiscard utility:
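One way to do this, sketched with a hypothetical VG named vg0: allocate all free extents into a temporary Logical Volume, discard it, then remove it.

    lvcreate -l100%FREE -n discard_me vg0
    blkdiscard /dev/vg0/discard_me
    lvremove -y vg0/discard_me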

Alternatively, there is the issue_discards option in /etc/lvm/lvm.conf which makes LVM discard an entire Logical Volume (LV) on lvremove, lvreduce and other actions that free Physical Extents (PE) in a VG.
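The setting lives in the devices section of lvm.conf:

    # /etc/lvm/lvm.conf
    devices {
        issue_discards = 1
    }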

dm-crypt/LUKS
For discards to pass through encrypted LUKS devices, they have to be opened with the --allow-discards option.
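For example, when opening the device manually (device path and mapping name are placeholders):

    cryptsetup luksOpen --allow-discards /dev/sda2 root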

When the root device exists on LUKS, enabling discards depends on the initramfs implementation. When using genkernel for creating the initramfs, pass the following kernel command-line option:
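Assuming genkernel's LUKS support, the relevant option is:

    root_trim=yes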

When using dracut for creating the initramfs, pass the following kernel option:
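dracut documents the following option for this purpose:

    rd.luks.allow-discards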

To evaluate if discard is enabled on a LUKS device, check if the output of the following command contains the string allow_discards:
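For a mapping named root (a placeholder):

    dmsetup table /dev/mapper/root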

Formatting
Similarly to partitions, performance can be improved if a filesystem is configured so that its data structures align with the device's internal structure sizes, namely its erase block size.

This configuration becomes important in the case of a software RAID, where one really should know the erase block size. Consider this information when making a purchase.

Configuring for erase block size
When the device's erase block size is known, it can be used when creating a filesystem.

For example, for ext4, using mkfs.ext4 on an average-sized partition will apply 4 KiB blocks. Using the stride and stripe-width extended options (-E), it is possible to set the alignment to the erase block size. Both options should be set to erase block size / filesystem block size.

For a drive with a 512 KiB erase block size, that makes 512 KiB / 4 KiB = 128:
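A sketch with a placeholder partition path:

    mkfs.ext4 -E stride=128,stripe-width=128 /dev/sda1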

List of devices with known erase block sizes

 * OCZ drives; stride and stripe-width are 128
 * Crucial M500 240GB; stride and stripe-width are 2048
 * SanDisk z400s; stride and stripe-width are 4096

Mounting
For the root filesystem it is usually recommended to periodically use the fstrim utility. Using the discard mount option results in continuous discard, which could potentially cause degradation of older or poor-quality SSDs.

The following command can be used manually or be set up as a periodic job to run once a week:
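For example, for the root filesystem:

    fstrim -v /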

For mount points on an SSD that see a low amount of disk writes, it should be safe to use the discard option in /etc/fstab. Using the option is also recommended where sustained performance is required.

Given the considerations above, a discard-enabled /etc/fstab could look like this:
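An illustrative fragment; device names, filesystems and the remaining options are examples only:

    /dev/sda1   /boot   ext4   noauto,noatime,discard   1 2
    /dev/sda3   /       ext4   noatime,discard          0 1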

Once /etc/fstab has been modified, remount all filesystems mentioned there via:
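mount's --all mode combined with remount re-applies the options of all already mounted fstab filesystems:

    mount -a -o remount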

Periodic fstrim jobs
There are multiple ways to set up a periodic block discarding process. As of 2018, the default recommended frequency is once a week.

cron
Run fstrim on all mounted devices that support discard on a weekly basis:
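For example, via a small script in /etc/cron.weekly (the --all flag requires a reasonably recent util-linux):

    #!/bin/sh
    # /etc/cron.weekly/fstrim: discard unused blocks on all supported mounts
    /sbin/fstrim --all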

Similarly, it is possible to run fstrim only for a selected mount point:
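For example, a crontab entry trimming /home every Sunday at 03:00 (schedule and path are examples):

    0 3 * * 0 /sbin/fstrim -v /home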

fstrimDaemon
If the system is powered off at the time cron scheduled its job, fstrim would not be called at all. fstrimDaemon can be installed to solve this problem.

SSDcronTRIM
There is also a semi-automatic cron job available on GitHub called SSDcronTRIM which has the following features:


 * Distribution-independent script (developed on a Gentoo system).
 * The script decides, based on current disk usage, how often (monthly, weekly, daily, hourly) each partition has to be trimmed.
 * Recognizes whether it should install itself into one of the cron.{hourly,daily,weekly,monthly} directories, or any other defined directory, and whether it should make an entry in the crontab instead.
 * Checks if the kernel meets the requirements, if the filesystem is able to issue trim, and if the SSD supports trimming.

systemd timer
When running a system with systemd version 212 or newer, a persistent systemd timer can be created that will run fstrim weekly. Thanks to the timer's persistency, the job will be issued immediately if a scheduled run was missed.

Two systemd unit files need to be created in the /etc/systemd/system directory:

A service called fstrim.service which actually executes fstrim:
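A minimal sketch of such a unit:

    # /etc/systemd/system/fstrim.service
    [Unit]
    Description=Discard unused filesystem blocks

    [Service]
    Type=oneshot
    ExecStart=/sbin/fstrim -av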

A timer which wakes up the fstrim.service weekly:
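Persistent=true is what makes a missed run fire as soon as possible after the next boot:

    # /etc/systemd/system/fstrim.timer
    [Unit]
    Description=Run fstrim weekly

    [Timer]
    OnCalendar=weekly
    Persistent=true

    [Install]
    WantedBy=timers.target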

Make sure the permissions are correct:
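For example:

    chmod 644 /etc/systemd/system/fstrim.service /etc/systemd/system/fstrim.timer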

Tell systemd to reload its unit files, then enable and start the timer:
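For example:

    systemctl daemon-reload
    systemctl enable fstrim.timer
    systemctl start fstrim.timer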

It is now possible to see when it was last run and when it will next be run by issuing:
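For example:

    systemctl list-timers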

The systemctl status fstrim.timer command can also be used to verify the timer is running successfully.

Reducing amount of writes
Flash-based SSDs have a limited write lifetime, i.e. a limited number of writes that can be performed. Thus when using an SSD, administrators generally want to reduce the amount of writes.

Portage TMPDIR on tmpfs
When building packages via Portage, it is possible to perform the operations on tmpfs and gain tmpfs' benefits. See the Portage TMPDIR on tmpfs guide.

Temporary files on tmpfs
It is possible to mount the desired mount points as tmpfs. Since tmpfs stores files in volatile memory, all the I/O operations directed to the given mount points are not performed on the solid state drive. This reduces the amount of writes and also improves performance.

This is an example of both /tmp and /var/tmp being mounted as tmpfs:
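Illustrative /etc/fstab entries; sizes and options are examples, and note that the contents of /var/tmp will no longer survive a reboot:

    tmpfs   /tmp      tmpfs   size=4G,noatime,nodev,nosuid,mode=1777   0 0
    tmpfs   /var/tmp  tmpfs   size=2G,noatime,nodev,nosuid,mode=1777   0 0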

XDG cache on tmpfs
When running a Gentoo desktop, many programs using the X Window System (Chromium, Firefox, Skype, etc.) perform frequent disk I/O to their cache, often every few seconds.

The cache directory location usually complies with the XDG Base Directory Specification, namely the XDG_CACHE_HOME environment variable. The default cache location is ~/.cache, which is usually backed by the main drive and could be moved to tmpfs.

To remap the cache directory location, set the XDG_CACHE_HOME variable to a tmpfs-backed path:
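A sketch using the user's shell startup file; the chosen path is an example only:

    # e.g. in ~/.bashrc
    export XDG_CACHE_HOME="/tmp/${USER}/.cache"
    mkdir -p "${XDG_CACHE_HOME}"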

Web browser profile/s and cache on tmpfs
The web browser profile(s), cache, etc. can be relocated to tmpfs. The corresponding I/O associated with using the browser gets redirected from the SSD to tmpfs' volatile memory, resulting in reduced wear to the physical drive and also improved browser speed and responsiveness.

The browser components mentioned above can be relocated with the profile-sync-daemon (psd) utility:
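Assuming the package is available as app-admin/profile-sync-daemon:

    emerge --ask app-admin/profile-sync-daemon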

Next, add the users whose browser profile(s) will get symlinked to tmpfs or another mountpoint to the USERS variable:
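Assuming the system-wide configuration file is /etc/psd.conf:

    USERS="larry"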

Finally, close all the browsers, then start and enable the psd daemon.

On systemd:
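Recent psd versions ship a systemd user service; assuming that layout:

    systemctl --user enable psd.service
    systemctl --user start psd.service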

On OpenRC:
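Assuming an init script named psd:

    rc-update add psd default
    rc-service psd start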

Now it is possible to view all the symlinks by printing the status of the running daemon:
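psd's preview mode prints the managed browsers, profiles and symlink targets:

    psd preview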

More info about Profile-sync-daemon can be found on the Arch Wiki.

External resources

 * Aligning an SSD on Linux — drive internal structures explained.
 * Aligning filesystems to an SSD's erase block size — alignment explained by Ted Ts'o.
 * Magic soup: ext4 with SSD, stripes and strides — ext4 alignment discussion.