Project:Security/Vulnerabilities/MDS - Microarchitectural Data Sampling aka ZombieLoad

From Gentoo Wiki

Summary

Logo designed by Natascha Eibl.

Four new microprocessor flaws have been discovered. These flaws, if exploited by an attacker with local shell access to a system, could allow data in the CPU's cache to be exposed to unauthorized processes. While these attacks are difficult to execute, a skilled attacker could use the flaws to read memory from a virtual or containerized instance, or from the underlying host system.

Issue details and background information

Gentoo Linux has been made aware of a series of microarchitectural (hardware) implementation issues that could allow an unprivileged local attacker to bypass conventional memory security restrictions and gain read access to privileged memory that would otherwise be inaccessible. These flaws could also be exploited by malicious code running within a container. The issues affect many modern Intel microprocessors and require updates to the Linux kernel, the virtualization stack, and CPU microcode. CVE-2018-12130 has been rated as Important severity; CVE-2018-12126, CVE-2018-12127, and CVE-2019-11091 are considered Moderate severity.

At this time, these specific flaws are only known to affect Intel-based processors.

Flaws were found in the manner in which Intel microprocessor designs implement several performance micro-optimizations. Exploitation of the vulnerabilities provides attackers a side channel to access recently used data on the system belonging to other processes, containers, virtual machines, or to the kernel.

These vulnerabilities are referred to as Microarchitectural Data Sampling (MDS) because they rely on leveraging speculation to obtain state left within internal CPU structures.

CVE-2018-12126 - Microarchitectural Store Buffer Data Sampling (MSBDS)

A flaw was found in many Intel microprocessor designs related to a possible information leak of the processor store buffer structure which contains recent stores (writes) to memory.

Modern Intel microprocessors implement hardware-level micro-optimizations to improve the performance of writing data back to CPU caches. The write operation is split into STA (STore Address) and STD (STore Data) sub-operations, which lets the processor decouple address generation from the data write for optimized writes. Both of these sub-operations write to a shared, distributed processor structure called the 'processor store buffer'.

The processor store buffer is conceptually a table of address, value, and "is valid" entries. As the sub-operations can execute independently of each other, each can update the address and/or value columns of the table on its own. This means that at different points in time the address or value may be invalid.

The processor may speculatively forward entries from the store buffer. This split design allows such forwarding to speculatively use stale values, such as the wrong address, returning data from a previous unrelated store. Since this only occurs for loads that will be reissued following the fault/assist resolution, the program is not architecturally impacted, but store buffer state can be leaked to malicious code carefully crafted to retrieve this data via side-channel analysis.

The processor store buffer entries are equally divided between the number of active Hyper-Threads. Conditions such as power-state change can reallocate the processor store buffer entries in a half-updated state to another thread without ensuring that the entries have been cleared.

This issue is referred to by the researchers as Fallout.

CVE-2018-12127 - Microarchitectural Load Port Data Sampling (MLPDS)

Microprocessors use "load ports" to perform load operations from memory or IO. During a load operation, the load port receives data from the memory or IO subsystem and then provides the data to the CPU registers and operations in the CPU’s pipelines.

In some implementations, the writeback data bus within each load port can retain data values from older load operations until newer load operations overwrite that data.

MLPDS can reveal stale load port data to malicious actors when:

  • A faulting/assisting SSE/AVX/AVX-512 load that is more than 64 bits in size occurs, or
  • A faulting/assisting load spans a 64-byte boundary.

In either case, the load operation speculatively provides stale data values from the internal structures to dependent operations. Speculatively forwarding this data does not end up modifying architectural program execution, but it can be used as a widget to infer the contents of a victim process's data values through timing access to the load port.

CVE-2018-12130 - Microarchitectural Fill Buffer Data Sampling (MFBDS)

This issue carries the highest risk of the four. Researchers found a flaw in the implementation of the fill buffers used by Intel microprocessors.

A fill buffer holds data that has missed in the processor's L1 data cache as a result of an attempt to use a value that is not present. When a Level 1 data cache miss occurs within an Intel core, the fill buffer design allows the processor to continue with other operations while the value to be accessed is loaded from higher levels of cache. The design also allows the result to be forwarded directly to the execution unit that requested the load, without first being written into the Level 1 data cache.

A load operation is not decoupled in the same way that a store is, but it does involve an Address Generation Unit (AGU) operation. If the AGU generates a fault (#PF, etc.) or an assist (A/D bits) then the classical Intel design would block the load and later reissue it. In contemporary designs, it instead allows subsequent speculation operations to temporarily see a forwarded data value from the fill buffer slot prior to the load actually taking place. Thus it is possible to read data that was recently accessed by another thread if the fill buffer entry is not overwritten.

This issue is referred to by researchers as RIDL.

CVE-2019-11091 - Microarchitectural Data Sampling Uncacheable Memory (MDSUM)

A flaw was found in the implementation of the "fill buffer," a mechanism used by modern CPUs when a cache-miss is made on L1 CPU cache. If an attacker can generate a load operation that would create a page fault, the execution will continue speculatively with incorrect data from the fill buffer, while the data is fetched from higher-level caches. This response time can be measured to infer data in the fill buffer.

Resolution

The issues identified above share the same mitigations:

  • CPU microcode updates, combined with
  • kernel and VMM (hypervisor) mitigations delivered via software updates.

Microcode updates

The first batch of CPU microcode updates is available in the >=sys-firmware/intel-microcode-20190514_p20190512 package. To install the latest version, please run:

root #emerge --ask --oneshot --verbose ">=sys-firmware/intel-microcode-20190514_p20190512"
Important
Please keep in mind that, depending on how you apply microcode updates, a reboot might be necessary. See the dedicated Microcode page for further information on how to update microcode in Gentoo.

To learn more about affected products and their microcode status, see the PDF document Intel has published showing the current status of available microcode updates.
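
You can check which microcode revision is currently loaded by reading /proc/cpuinfo and comparing it against Intel's document; the revision value shown below is only illustrative:

user $grep -m1 microcode /proc/cpuinfo
microcode : 0xb4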

Once the new microcode is loaded, the Intel CPU instruction "VERW" is enhanced such that it flushes all affected buffers and ports. Patched kernels and hypervisors call VERW during task switches and VM switches.
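
The kernel reports the availability of this enhanced VERW behavior as the md_clear CPU feature flag, provided the running kernel is recent enough to know about the flag. A quick check:

user $grep -o -m1 md_clear /proc/cpuinfo
md_clear

If nothing is printed, either the updated microcode is not loaded or the running kernel predates the flag.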

Kernel updates

LTS branch | Version with complete MDS mitigation  | Recommended version (stabilization candidate)
4.4        | >=sys-kernel/gentoo-sources-4.4.180   | =sys-kernel/gentoo-sources-4.4.180
4.9        | >=sys-kernel/gentoo-sources-4.9.176   | =sys-kernel/gentoo-sources-4.9.177
4.14       | >=sys-kernel/gentoo-sources-4.14.119  | =sys-kernel/gentoo-sources-4.14.120
4.19       | >=sys-kernel/gentoo-sources-4.19.43   | =sys-kernel/gentoo-sources-4.19.44
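
For example, to move to the recommended stabilization candidate on the 4.19 branch:

root #emerge --ask --oneshot --verbose "=sys-kernel/gentoo-sources-4.19.44"

As usual, the new kernel sources must be configured, built, installed, and booted before the mitigation takes effect.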

You can subscribe to bug #685984 to get notified.

Once you have updated your kernel to a version with MDS mitigation, a new kernel command line parameter, mds, becomes available:

mds=off
The mitigation is fully disabled.
mds=full
Enables all available mitigations for the MDS vulnerability: CPU buffer clearing on exit to userspace and when entering a VM. Idle transitions are protected as well if SMT is enabled.
It does not automatically disable SMT.
Note
This is the default if no option is given.
mds=full,nosmt
Enables the same as mds=full, with SMT (HyperThreading) disabled on vulnerable CPUs. This is the complete mitigation.
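
To make the chosen option persistent, set it in your bootloader configuration. As a minimal sketch, assuming GRUB2 (adapt for other bootloaders), edit /etc/default/grub so that the kernel command line includes the option:

GRUB_CMDLINE_LINUX_DEFAULT="mds=full,nosmt"

Then regenerate the GRUB configuration and reboot:

root #grub-mkconfig -o /boot/grub/grub.cfg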

The new mds parameter is also included in the new generic mitigations kernel command line parameter, which has the following settings:

mitigations=off
All CPU side channel mitigations are disabled. This setting gives the highest performance but the least security, and should only be used in environments where no untrusted code runs.
mitigations=auto
All CPU side channel mitigations are enabled, as detected based on the CPU type. The auto-detection handles both unaffected older CPUs and unaffected newly released CPUs, and transparently disables mitigations on them.
Note
This option leaves SMT enabled.
mitigations=auto,nosmt
The same as the auto option, but additionally the CPU's symmetric multi-threading (SMT) is disabled if necessary, for instance to mitigate the L1 Terminal Fault side channel issue.
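
After rebooting with one of these options, you can verify that it was picked up by inspecting the live kernel command line (the output below is illustrative):

user $cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.19.44-gentoo root=/dev/sda2 ro mds=full,nosmt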

SMT

Disabling SMT on affected systems will reduce some of the attack surface, but will not completely eliminate all threats from these vulnerabilities. To mitigate the risks these vulnerabilities introduce, systems will need updated microcode, an updated kernel, and virtualization patches, and administrators will need to evaluate whether disabling SMT/HT is the right choice for their deployments. Additionally, applications may see a performance impact.

Warning
Since we cannot fully prevent cross-thread attacks, complete mitigation of MDS may require that some users disable Intel Hyper-Threading Technology. This is typically the case when running untrusted workloads, especially containers or virtual machines in a multi-tenant environment, such as in a public cloud. In this case, part of the mitigation advice is to specify a kernel command line option mds=full,nosmt.
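
On kernels that expose the SMT control interface in sysfs, SMT can also be disabled at runtime without a reboot; note that, unlike the nosmt kernel parameter, this setting does not persist across reboots:

root #echo off > /sys/devices/system/cpu/smt/control
user $cat /sys/devices/system/cpu/smt/active
0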

QEMU updates

Please upgrade to >=app-emulation/qemu-4.0.0-r3:

root #emerge --ask --oneshot --verbose ">=app-emulation/qemu-4.0.0-r3"

See bug #686026 for more details.

Xen updates

Please upgrade to >=app-emulation/xen-4.12.0-r1:

root #emerge --ask --oneshot --verbose ">=app-emulation/xen-4.12.0-r1"

See bug #686024 for more details.

Check status

The state of the vulnerability and its mitigations can be found in /sys/devices/system/cpu/vulnerabilities/mds.

Note
If this file is missing, you are not running a patched kernel!
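
For example, on an affected system with updated microcode but SMT still enabled, the file might read:

user $cat /sys/devices/system/cpu/vulnerabilities/mds
Mitigation: Clear CPU buffers; SMT: vulnerable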

The following values can appear there:

Not affected
The processor is not affected by these issues.
Vulnerable
There is no mitigation enabled for this issue.
Vulnerable: Clear CPU buffers attempted, no microcode
The microcode the kernel needs to clear the CPU buffers is not present.
Mitigation: Clear CPU buffers
The microcode is present and used to clear CPU buffers.

The output will also include the SMT mitigation state, appended and separated by ';':

SMT: vulnerable
SMT is enabled and the CPU is affected by the load port and fill buffer issues.
SMT: disabled
SMT is disabled and so not affected by cross-thread information leakage.
SMT Host state unknown
Kernel runs in a VM, and the host's SMT state is unknown.
SMT: mitigated
This will be displayed if the CPU is only affected by the store buffer issue (CVE-2018-12126), and the mitigation is enabled.

A fully mitigated system will show output similar to: Mitigation: Clear CPU buffers; SMT: disabled.
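
To review all of the kernel's CPU vulnerability reports at once, not only MDS, read every file in the directory (one line per known issue; the line shown below is illustrative, and output will vary by CPU and kernel):

user $grep . /sys/devices/system/cpu/vulnerabilities/*
/sys/devices/system/cpu/vulnerabilities/mds:Mitigation: Clear CPU buffers; SMT: disabled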

Known performance impact

The MDS mitigations have been shown to cause a performance impact. The impact is felt most in applications with high rates of user-kernel-user space transitions, for example via system calls, NMIs, and interrupts.

Although there is no way to say what the impact will be for any given workload, the following impacts have been reported:

  • Applications that spend a lot of time in user mode tended to show the smallest slowdown, usually in the 0-5% range.
  • Applications that did a lot of small block or small packet network I/O showed slowdowns in the 10-25% range.
  • Some microbenchmarks that did nothing other than enter and return from user space to kernel space showed higher slowdowns.

The performance impact of the MDS mitigation can be measured by running your application with the mitigation enabled and then disabled. The MDS mitigation is enabled by default. It can be fully enabled, with SMT also disabled, by adding the mds=full,nosmt flag to the kernel command line, and fully disabled by adding the mds=off flag. There is no way to disable it at runtime.
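
As a crude way to observe this overhead yourself, time a command that performs a very large number of tiny system calls on a kernel booted normally and again with mds=off; with bs=1, every byte copied costs one read() and one write() system call, so user-kernel transitions dominate the runtime (absolute numbers will vary widely between systems):

user $time dd if=/dev/zero of=/dev/null bs=1 count=1000000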
