Kernel/Gentoo Kernel Configuration Guide

This document aims to introduce the concepts of manual kernel configuration and details some of the most common configuration pitfalls.

Introduction
Gentoo provides two ways for users to handle kernel configuration, installation, and upgrades: automatic (genkernel) and manual. Although the automatic method can be regarded as easier for most users, there are a number of reasons why a large proportion of Gentoo users choose to configure their kernels manually:


 * 1) Greater flexibility
 * 2) Smaller (kernel) sizes
 * 3) Shorter compilation times
 * 4) The learning experience
 * 5) Severe boredom
 * 6) Absolute knowledge of kernel configuration, and/or
 * 7) Complete control

This guide does not cover the automatic method (genkernel). If genkernel is the preferred method of handling kernel-related matters, head over to the Genkernel article for details.

This guide does not attempt to document the manual configuration process from start to finish — the configuration process relies upon a large degree of common sense and a relatively high level of technical knowledge about the system being used. Instead it will introduce the concepts of manual configuration and detail the most common pitfalls which users face.

At this point, the user is presumed to have the Linux kernel sources unpacked on the hard disk, and is expected to know how to enter the menuconfig configuration utility and navigate its ncurses-based menu system. If the user is not at this stage, other documentation is available to help. Read the following articles, then return to this guide:


 * The Kernel sources overview article contains information on the various kernel source packages available in the Portage tree.
 * The Kernel upgrade article explains how to upgrade a kernel or switch from one kernel to another.
 * The Gentoo Handbook's kernel configuration section covers some aspects of kernel installation. Select the appropriate architecture, then navigate to the section titled "Configuring the Linux kernel".

The basics
The general process is actually rather simple: a series of options, categorized into individual menus and sub-menus, are presented and the desired hardware support and kernel features relevant to the system are selected.

The kernel includes a default configuration, which is presented the first time menuconfig is run on a particular set of sources. The defaults are generally broad and sensible, which means that the majority of users will only have to make a small number of changes to the base configuration. Before disabling an option that is enabled in the kernel's default configuration, make sure to gain a good understanding of exactly what that option does and of the consequences of disabling it.

When configuring a Linux kernel for the first time, aim to be conservative: do not be too adventurous, and try to make as few modifications to the default settings as possible. At the same time, keep in mind that certain parts of a system's setup must be customized for the system to boot at all.

Built-in vs modular
Most configuration options are tristate: they can be not built at all, built directly into the kernel, or built as a module. Modules are stored externally on the filesystem, whereas built-in items are compiled into the kernel image itself.

There is an important difference between built-in and modular: with a few exceptions, the kernel makes no attempt whatsoever to load external modules when the system needs them; it is left up to the user to decide when, or when not to, load a module. While certain parts of the system have load-on-demand facilities, and some automatic module loading utilities are available, it is recommended to build hardware support and kernel features directly into the kernel. The kernel can then ensure the functionality and hardware support is available whenever needed. This is done by setting each kernel feature to (Y). For this setup to be coherent, it is also necessary to include any required firmware in the kernel image by setting the appropriate firmware options in the kernel configuration.
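As a sketch, a .config configured this way might contain lines such as the following. The option names are real kernel symbols, but the firmware filename is a hypothetical example and the exact firmware-related symbols vary between kernel versions:

```
# Feature built into the kernel image (Y) rather than as a module (M):
CONFIG_EXT4_FS=y
# Embed needed firmware blobs into the kernel image as well
# (symbol names vary by kernel version; the .fw name is hypothetical):
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE="rtl_nic/rtl8168g-2.fw"
CONFIG_EXTRA_FIRMWARE_DIR="/lib/firmware"
```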

For other parts of the configuration, built-in is an absolute requirement. For example, if the root partition is a btrfs filesystem, the system will not boot if btrfs was built as a module: the system would have to look on the root partition to find the btrfs module (since modules are stored on the root partition), but it cannot look on the root partition unless it already has btrfs support loaded! If btrfs has not been built in, the init process will fail to find the root device.
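For the btrfs-root example, this means the filesystem option must be set to Y, not M:

```
# Root filesystem support must be built in, never modular:
CONFIG_BTRFS_FS=y
```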

Hardware support
Beyond detecting the architecture type of the system, the configuration utility makes no attempt to identify what hardware is actually present in the system. While there are default settings for some hardware support, users almost certainly need to find and select the configuration options relevant to each system's hardware configuration.

Selecting the proper configuration options requires knowledge of the components inside and connected to the computer. Most of the time these components can be identified without taking the lid off the system. For most internal components, users need to identify the chipset used on each device rather than the retail product name: many expansion cards are sold under a certain brand name but use another manufacturer's chipset.

There are some utilities available to help users determine which kernel configuration options to use. lspci will identify PCI-based and AGP-based hardware, including components built onto the motherboard itself. lsusb will identify devices connected to the system's USB ports.
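To illustrate the chipset-versus-brand distinction, here is a sketch that processes a hypothetical line of lspci output. The device and chipset shown are invented for illustration; the point is that the text after the device class, not the name on the box, is what to search for in menuconfig:

```shell
# A hypothetical line of lspci output; the chipset (RTL8111/8168),
# not the retail brand of the card, determines the driver to select.
line='02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168 PCI Express Gigabit Ethernet controller'
# Strip the bus address and device class, leaving the chipset description:
echo "${line#*: }"
```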

The situation is somewhat confused by varying degrees of standardization in the hardware world. Unless the configuration deviates drastically from the defaults, IDE hard disks should "just work", as will a PS/2 or USB keyboard and mouse. Basic VGA display support is also included. However, some devices, such as Ethernet adapters, are hardly standardized at all; for these, users will have to identify the Ethernet chipset and select the appropriate hardware support for the specific card to get network access.

In addition, while some things just-about-work with the default settings, more specialized options may need to be selected to get the full potential from the system. For example, if support for the appropriate IDE chipset has not been enabled, the IDE hard disks will run very slowly.

Kernel features
In addition to hardware support, users need to consider the software features that will be required in the kernel. One important example of such a feature is filesystem support: users must select support for the filesystems in use on their hard disks, as well as any filesystems they might use on external storage devices (e.g. VFAT on USB drives).

Another common example is advanced network functionality: in order to do any kind of routing or firewalling, the relevant configuration items must be included in the kernel configuration.

Ready?
Now that the concepts have been introduced, it should be easy to start identifying the system hardware, browsing through the menuconfig interface, and selecting the required kernel options for the system.

The rest of this guide should clear up common areas of confusion, and provide advice for how to avoid common problems which users often run into. Best wishes!

SATA disks are SCSI
Most modern desktop systems ship with storage devices (hard disk and CD/DVD drives) on a Serial ATA bus, rather than the older IDE (ribbon cable) bus type.

SATA support in Linux is implemented in a layer referred to as libata, which sits below the SCSI subsystem. For this reason, SATA drivers are found in the SCSI driver section of the configuration. Additionally, the system's storage devices will be treated as SCSI devices, which means SCSI disk/CD-ROM support will also be required; the first SATA hard disk and the first SATA CD/DVD drive will therefore be given SCSI-style device names.
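As a sketch (the exact option set depends on the SATA controller in use; AHCI is shown only as a common example), a SATA system typically needs options like these:

```
# libata core plus a controller driver (AHCI as an example):
CONFIG_ATA=y
CONFIG_SATA_AHCI=y
# SCSI disk and CD/DVD support, since libata devices appear as SCSI:
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_BLK_DEV_SR=y
```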

Although the majority of these drivers are for SATA controllers, libata was not designed to be SATA-specific. All common IDE drivers will also be ported to libata in the near future, and at that point the above considerations will apply for IDE users as well.

IDE chipsets and DMA
Despite the introduction of SATA, IDE devices are still very common and depended upon by many systems. IDE is a fairly generic technology, and as such, Linux supports almost all IDE controllers out-of-the-box without any controller-specific options selected.

However, IDE is an old technology, and in its original Programmed Input/Output (PIO) incarnation it cannot provide the transfer rates required for speedy access to modern storage devices. The generic IDE driver is limited to these PIO transfer modes, which results in slow data transfers and significantly higher CPU usage while data is being transferred to and from disk.

Unless a user is dealing with a pre-1995 system, the IDE controller will also support an alternative transfer mode known as Direct Memory Access (DMA). DMA is much, much faster, and CPU utilization is barely affected while data transfers take place. If a system using an IDE disk suffers from really poor general performance, chances are that DMA is not being used and should be enabled.

When not using libata for IDE disks, check whether DMA is in use and enable it if it is not. The following command can be used to determine if DMA is being used:

To enable DMA for older IDE devices (which is a deprecated setting), enable the following kernel features.
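As a hedged sketch (the legacy IDE subsystem's symbol names changed several times across kernel versions, so treat these names as illustrative), the relevant settings looked something like:

```
# Deprecated IDE-subsystem DMA options (names vary by kernel version):
CONFIG_BLK_DEV_IDEDMA_PCI=y
CONFIG_IDEDMA_AUTO=y
```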

USB host controllers
USB is a widely adopted bus for connecting external peripherals to a computer. One of the reasons behind the success of USB is that it is a standardized protocol; however, the USB host controller devices (HCDs) implemented on the host computer vary a little. There are four main types:


 * 1) UHCI is the Universal Host Controller Interface. It supports USB 1.1, and is usually found on motherboards based on a VIA or Intel chipset.
 * 2) OHCI is the Open Host Controller Interface. It supports USB 1.1 and is usually found on motherboards based on an Nvidia or SiS chipset.
 * 3) EHCI is the Enhanced Host Controller Interface. It is the only common host controller to support USB 2.0, and can typically be found on any computer that supports USB 2.0.
 * 4) XHCI is the eXtensible Host Controller Interface. It is the host controller for USB 3.0 and is compatible with USB 1.0, 1.1, 2.0, 3.0 and future speeds. Enable this feature when the board supports USB 3.0.

Most systems come with two of the above interface types: XHCI (USB 3.0) and EHCI (USB 2.0). To use USB devices it is no longer strictly necessary to select both options, since XHCI is compatible with slower USB controllers. Users can still enable EHCI to be "extra" safe; it does no harm if USB 2.0 controllers are unavailable.

If the options corresponding to the USB HCD types present on the system are not selected, the result may be 'dead' USB ports. This situation can be recognized when a known-working USB device is plugged in but does not get power or respond in any way.

A neat lspci trick makes it relatively easy to detect which HCDs are present on the system. Ignoring the SATA controller which was also matched, it is easy to spot that this system requires EHCI and XHCI support:

Select the HCDs present on the system. If the correct option is uncertain, select all of them for maximum support:
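A sketch of the corresponding options (all real kernel symbols; enabling those not present on the system is harmless):

```
# USB core support:
CONFIG_USB_SUPPORT=y
CONFIG_USB=y
# Host controller drivers; select those matching the lspci output:
CONFIG_USB_XHCI_HCD=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_UHCI_HCD=y
```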

In Linux kernel 3.12.13 and later, OHCI support for PCI-bus USB controllers has to be enabled if the USB controller is OHCI and a USB keyboard or mouse is used.

Multiprocessor, Hyper-Threading and multi-core systems
Many computer systems are based on multiple processors, but not always in an immediately obvious way.


 * Many of Intel's CPUs support a technology which they call hyper-threading. This technology enables a single CPU to be viewed by the system as two logical processors.
 * Most Intel/AMD CPUs actually consist of multiple processor cores inside a single package; such CPUs are known as multi-core processors.
 * Some high-end computer systems actually have multiple physical processors installed on specialized motherboards to provide a significant performance increase over a uniprocessor system. System users will probably know if they have such a system, since they are not cheap.

In all of these cases, the appropriate kernel options must be selected to obtain optimum performance from these setups:
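A sketch of the usual options (real kernel symbols, though their menu locations vary by kernel version):

```
# Symmetric multiprocessing support; required for any of the above:
CONFIG_SMP=y
# Schedulers aware of hyper-threading and multi-core topology:
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
```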

The next option not only enables power management features, but might also be a requirement for making all CPUs available to the system:

x86 High Memory support
Due to limitations in the 32-bit address space of the x86 architecture, a kernel with default configuration can only support up to 896MB RAM. If a system has more memory, only the first 896MB will be visible, unless high memory support has been enabled.

High memory support is not enabled by default because it introduces a small system overhead. Do not be put off by this: the overhead is insignificant compared to the benefit of having the additional memory available!

Choose the 4GB option, unless your system has more than 4GB of RAM:
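A sketch of the relevant 32-bit x86 options (choose exactly one):

```
# For systems with up to 4GB of RAM:
CONFIG_HIGHMEM4G=y
# For systems with more than 4GB of RAM, use this instead:
# CONFIG_HIGHMEM64G=y
```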

Compressed kernel modules
Since kernel version 3.18, kernel modules can be compressed; gzip and xz compression are available. It is important to emerge kmod with the proper USE flags before compiling a kernel with compressed modules:

Re-emerge kmod:

Enable module compression and select a preferred compression method:
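A sketch of the corresponding kernel options (pick one compression method, matching the USE flags kmod was built with):

```
CONFIG_MODULE_COMPRESS=y
# One of:
CONFIG_MODULE_COMPRESS_XZ=y
# CONFIG_MODULE_COMPRESS_GZIP=y
```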

Usually make modules_install runs depmod. If kmod did not have the proper USE flags set (see the re-emerge step above) the first time depmod was run, the module dependency list will be empty, and the system will therefore be unable to load any modules that were built compressed.

After kmod has been recompiled, re-run depmod to solve this problem:

Introduction
When reading about kernel configuration, settings are often described in a short-hand CONFIG_FOO notation. This short-hand notation is what the kernel configuration system actually uses internally, and it is what will be found in the kernel configuration file. Of course, the short-hand notation does not do much good unless it can be translated to the real location in the kernel configuration. The make menuconfig tool makes this possible.

Translating CONFIG_FOO to the real configuration location
Suppose a particular feature needs to be enabled. Launch the kernel configuration menu (make menuconfig) and press the / key. This will open a search box. In the search box, type the name of the option to search for.

The following is an example of the output of such a search:
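The exact output is version-specific; as a hedged stand-in, a search for TMPFS produces something along these lines (field layout varies by kernel version):

```
Symbol: TMPFS [=y]
Type  : boolean
Prompt: Tmpfs virtual memory file system support (former shm fs)
  Location:
    -> File systems
      -> Pseudo filesystems
  Defined at fs/Kconfig
  Depends on: SHMEM
```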

This output yields lots of interesting information.

With this information, it should be possible to translate any CONFIG_FOO requirement fairly easily. In short, a user must:


 * 1) Enable the settings described in the Depends on field;
 * 2) Navigate where Location: points;
 * 3) Toggle the value referred to by Prompt:.

Other kernel configuration documentation
So far, only general concepts and specific problems related to kernel configuration have been discussed; precise details have been left up to the user to discover. However, other parts of the Gentoo documentation collection provide specialized details on the topics at hand.

Such documents may be helpful while configuring specific areas of the kernel. Although this warning was mentioned previously in this guide, it bears repeating: users who are new to kernel configuration should not be adventurous. Start by getting a basic system up and running; support for audio, printing, etc. can always be added at a later date.

Getting the basics of a kernel operational will help users in later configuration steps because the user will know what is breaking their system and what is not. It is always wise to save the base (working) kernel configuration in a folder other than the kernel's sources folder before attempting to add new features or hardware.


 * The ALSA article details the configuration options required for sound card support. Note that ALSA is an exception to the suggested scheme of not building things as modules: ALSA is actually much easier to configure when the components are modular.


 * The Bluetooth article details the options needed in order to use Bluetooth devices.


 * The IPv6 router guide describes how to configure the kernel for routing using the next generation network addressing scheme.


 * If the closed-source nVidia graphics drivers will be used for improved 3D graphics performance, the nVidia Guide lists the options that should and should not be selected on such a system.


 * Amongst other things, the Power Management guide explains how to configure the kernel for CPU frequency scaling, and for suspend and hibernate functionality.


 * If running a PowerPC system, the PPC FAQ has a few sections about PPC kernel configuration.


 * The Printing guide lists the kernel options needed to support printing in Linux.


 * The USB Guide details the configuration settings required to use common USB devices such as keyboards, mice, storage devices, and USB printers.

Configuration changes do not take effect
It is very common for users to make a configuration change but then make a small mistake in the process of actually booting into the newly configured kernel. They reboot into a kernel image that is not the one they just reconfigured, observe that the problem they were trying to solve is still present, and conclude that the configuration change did not solve it.

The process of compiling and installing kernels is outside the scope of this document; refer to the Kernel Upgrade Guide for general guidance. In short, the process for getting a modified kernel running is: 1) configure, 2) compile, 3) mount the boot partition (if not already mounted), 4) copy the new kernel image into place, 5) make sure the bootloader references the new kernel, 6) reboot. If any of these final stages is missed, the changes will not take effect.

It is possible to verify that the kernel that has booted matches the newly compiled kernel image on the hard disk by examining the date and time of the kernel's compilation. Assuming the system architecture is x86, the following command can be used:

The above command will display the date and time the currently booted kernel was compiled.

The above command displays the date and time that the kernel image on the hard disk was last compiled.

If the time stamps from the above commands differ by more than 2 minutes, it indicates a mistake was made during kernel reinstallation and the system has not booted from the newly modified kernel image.
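The comparison above can be sketched with standard tools. The image path is an assumption for illustration; adjust it for the actual bootloader layout on the system:

```shell
# Build timestamp of the currently running kernel:
uname -v
# Timestamp of the installed kernel image (hypothetical path; adjust):
# ls -l /boot/vmlinuz
```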

Modules do not get loaded automatically
As mentioned earlier in this document, the kernel configuration system hides a large behavioral change when selecting a kernel component as a module (M) rather than built-in (Y). It is worth repeating this again because so many users fall into this trap.

When selecting a component as built-in, the code is built into the kernel image (bzImage). When the kernel needs to use that component, it can initialize and load it automatically, without any user intervention.

When selecting a component as a module, the code is built into a kernel module file and installed on the filesystem. In general, when the kernel needs to use that component, it will not be able to find it. With some exceptions, the kernel makes no effort to actually load these modules — this task is left up to the user.

If support for a network card is built as a module and the network turns out to be inaccessible, it is probably because the module has not been loaded: either load it manually or configure the system to autoload it at boot time.
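On Gentoo systems using OpenRC, boot-time autoloading is configured in /etc/conf.d/modules. The module name below is a hypothetical example; substitute the driver for the actual hardware:

```
# /etc/conf.d/modules
modules="r8169"
```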

Unless there is a reason to do otherwise, some time can be saved by building these components directly into the kernel image, so that the kernel can take care of them automatically.