NVIDIA/nvidia-drivers

Article description::The package contains the proprietary graphics driver for NVIDIA graphics cards. An open source alternative is nouveau.

This proprietary driver contains some wrapper functions that will compile against the Linux kernel and a binary blob that does the heavy lifting for talking to the card. The driver consists of two parts: a kernel module and an X11 driver. Both parts are included in a single package. Due to the way the drivers are packaged, it is necessary to make some choices before installing the drivers.

The package contains the latest drivers from NVIDIA with support for most NVIDIA graphics cards, with several versions available depending on the age of the card. It uses an eclass to detect what kind of card the system is running so that it installs the proper version.

USE flags
For versions newer than x11-drivers/nvidia-drivers-418.74, do not enable the compat USE flag, otherwise X will crash with a segmentation fault. If compat is needed, use a driver version no higher than 418.74.

Hardware compatibility
The package supports a range of available NVIDIA cards. Multiple versions are available for installation, depending on the card(s) that the system has. See the official NVIDIA documentation, What's a legacy driver?, to find out which driver version should be used. It provides an interactive form: enter the graphics card that is used by the system (mind the Legacy option in the 'Product Type' field) and the form will report the best supported version.

Legacy hardware
If the card has been identified as a legacy card, then mask the more recent releases of x11-drivers/nvidia-drivers, e.g.:
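A sketch of /etc/portage/package.mask, assuming (hypothetically) that the card is only supported up to the 340.xx legacy branch; the actual version boundary depends on the card:

```
# /etc/portage/package.mask
# Mask nvidia-drivers releases newer than the 340.xx legacy branch
>x11-drivers/nvidia-drivers-341
```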

Note that Gentoo does not provide the 71.86.xx versions. If the system has a card that needs these drivers then it is recommended to use the nouveau driver.

Kernel
As mentioned above, the NVIDIA kernel driver installs and runs against the current kernel. It builds as a module, so the kernel must support the loading of kernel modules (see below).

The kernel module consists of a proprietary part (commonly known as the "binary blob") which drives the graphics chip(s), and an open source part (the "glue") which at runtime acts as intermediary between the proprietary part and the kernel. These all need to work nicely together as otherwise the user might be faced with data loss (through kernel panics, X servers crashing with unsaved data in X applications) and even hardware failure (overheating and other power management related issues should spring to mind).

Kernel compatibility
From time to time, a new kernel release changes the internal ABI for drivers, which means all drivers that use those ABIs must be changed accordingly. For open source drivers, especially those distributed with the kernel, these changes are nearly trivial to fix since the entire chain of calls between drivers and other parts of the kernel can be reviewed quite easily. For proprietary drivers like nvidia.ko, it doesn't work quite the same. When the internal ABIs change, then it is not possible to merely fix the "glue", because nobody knows how the glue is used by the proprietary part. Even after managing to patch things up to have things seem to work nicely, the user still risks that running nvidia.ko in the new, unsupported kernel will lead to data loss and hardware failure.

When a new, incompatible kernel version is released, it is probably best to stick with the newest supported kernel for a while. NVIDIA usually takes a few weeks to prepare a new proprietary release they think is fit for general use. Just be patient. If absolutely necessary, then it is possible to use the epatch_user command with the nvidia-drivers ebuilds: this allows the user to patch nvidia-drivers to somehow fit in with the latest, unsupported kernel release. Do note that neither the nvidia-drivers maintainers nor NVIDIA will support this situation. The hardware warranty will most likely be void, Gentoo's maintainers cannot begin to fix the issues since it's a proprietary driver that only NVIDIA can properly debug, and the kernel maintainers (both Gentoo's and upstream) will certainly not support proprietary drivers, or indeed any "tainted" system that happens to run into trouble.

If genkernel was used to configure the kernel, then everything is all set. If not, double check the kernel configuration so that loadable module support is enabled:
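Loadable module support corresponds to this top-level menuconfig option (CONFIG_MODULES):

```
[*] Enable loadable module support --->
```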

Also enable Memory Type Range Register in the kernel:
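In menuconfig, this option (CONFIG_MTRR) is found here:

```
Processor type and features --->
    [*] MTRR (Memory Type Range Register) support
```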

With at least some if not all driver versions it may also be required to enable VGA Arbitration and the IPMI message handler:
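A sketch of the corresponding options (CONFIG_VGA_ARB and CONFIG_IPMI_HANDLER); exact menu locations may differ between kernel versions:

```
Device Drivers --->
    Graphics support --->
        [*] VGA Arbitration
    Character devices --->
        <*> IPMI top-level message handler
```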

If the system has an AGP graphics card, then optionally enable agpgart support in the kernel, either compiled in or as a module. If the in-kernel agpgart module is not used, then the driver will use its own agpgart implementation, called NvAGP. On certain systems, this performs better than the in-kernel agpgart, and on others, it performs worse. Evaluate either choice on the system to get the best performance. When uncertain what to do, use the in-kernel agpgart:
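The in-kernel agpgart option (CONFIG_AGP, plus the chipset-specific driver for the system's chipset) lives here; a sketch, as menu layout varies by kernel version:

```
Device Drivers --->
    Graphics support --->
        <*> /dev/agpgart (AGP Support) --->
```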

A framebuffer alternative is uvesafb, which can be installed in parallel with nvidia-drivers.

For (U)EFI systems, uvesafb will not work. Be warned that enabling efifb support in the kernel causes intermittent problems with the initialization of the NVIDIA drivers. There are reports of success from marking legacy framebuffers as generic and enabling the simple framebuffer while disabling all others:
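A sketch of one such configuration; the option names below (CONFIG_X86_SYSFB, CONFIG_FB_SIMPLE, CONFIG_FB_EFI, CONFIG_FB_VESA) vary somewhat between kernel versions:

```
Processor type and features --->
    [*] Mark VGA/VBE/EFI FB as generic system framebuffer
Device Drivers --->
    Graphics support --->
        Frame buffer Devices --->
            [*] Simple framebuffer support
            [ ] EFI-based Framebuffer Support
            [ ] VESA VGA graphics support
```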

The nvidia-drivers ebuild automatically discovers the kernel version based on the /usr/src/linux symlink. Please ensure that this symlink is pointing to the correct sources and that the kernel is correctly configured. Please refer to the "Configuring the Kernel" section of the Gentoo Handbook for details on configuring the kernel.

First, choose the right kernel source using eselect. When using version 3.7.10, for instance, the kernel listing might look something like this:
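A sketch of such a listing; the entries shown follow the 3.7.10 example, and the actual list depends on the installed kernel sources:

```shell
# eselect kernel list
Available kernel symlink targets:
  [1]   linux-3.7.10-gentoo *

# If the symlink points at the wrong sources, select the right entry:
# eselect kernel set 1
```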

In the above output, notice that the linux-3.7.10-gentoo kernel is marked with an asterisk to show that it is the kernel that the symbolic link points to.

If the symlink is not pointing to the correct sources, update the link by selecting the number of the desired kernel sources, as in the example above.

Drivers
Now it's time to install the drivers. First follow the X Server Configuration Guide and set VIDEO_CARDS="nvidia" in /etc/portage/make.conf. During the installation of the X server, it will then install the right version of nvidia-drivers.
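A minimal sketch of the relevant /etc/portage/make.conf line:

```
VIDEO_CARDS="nvidia"
```

With this set, (re)installing x11-base/xorg-server pulls in the matching x11-drivers/nvidia-drivers package.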

Once the installation has finished, run modprobe nvidia to load the kernel module into memory. If this is an upgrade, remove the previous module first.
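Both steps as commands:

```shell
# When upgrading, unload the previous module first
modprobe -r nvidia
# Load the (new) kernel module into memory
modprobe nvidia
```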

To avoid having to load the module manually on every boot, have this done automatically each time the system is booted: edit /etc/conf.d/modules (for OpenRC) and add nvidia to it.
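A sketch for OpenRC; on systemd, a file such as /etc/modules-load.d/nvidia.conf containing just the word nvidia serves the same purpose:

```
# /etc/conf.d/modules
modules="nvidia"
```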

Kernel module signing (optional)
If secure boot kernel signing is used, then the NVIDIA kernel modules need to be signed before they can be loaded.

This can be accomplished by using the kernel-provided script as follows.
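A sketch using the kernel's sign-file tool; the hash algorithm, key, and certificate paths below are assumptions and must match the kernel's module signing configuration, and the module path varies with the kernel and driver version:

```shell
# Sign the nvidia module with the kernel-provided sign-file tool
/usr/src/linux/scripts/sign-file sha512 \
    /usr/src/linux/certs/signing_key.pem \
    /usr/src/linux/certs/signing_key.x509 \
    /lib/modules/$(uname -r)/video/nvidia.ko
```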

As of driver version 358.09, a new module, nvidia-modeset, has been added to handle monitor mode setting; for this driver version, this module must also be signed.

Once the modules are signed, the driver will load as expected on boot up. This module signing method can be used to sign other modules too - not only the nvidia-drivers. Just modify the path and corresponding module accordingly.

The X server
Once the appropriate drivers are installed, configure the X server to use the nvidia driver instead of the default open source driver.

Run eselect so that the X server uses the NVIDIA GLX libraries:
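On profiles that use the eselect opengl module, the command is:

```shell
eselect opengl set nvidia
```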

Enabling global nvidia support
Some tools, such as media-video/mplayer and media-video/xine-lib, use a local USE flag called xvmc, which enables XvMC NVIDIA support, useful when watching high resolution movies. Add xvmc to the USE variable in /etc/portage/make.conf, or add it as a USE flag for media-video/mplayer and/or media-video/xine-lib in /etc/portage/package.use.

GeForce 8 series and later GPUs come with VDPAU support, which supersedes XvMC NVIDIA support. See the VDPAU article for enabling VDPAU support.

There are also some applications that use the vdpau USE flag, so it might be a good idea to add it to /etc/portage/make.conf.

Then, run emerge --ask --changed-use --deep @world to rebuild the applications that benefit from the USE flag change.

Using the nVidia settings tool
NVIDIA also provides a settings tool, nvidia-settings. This tool allows the user to monitor and change graphical settings without restarting the X server and is available through Portage as part of x11-drivers/nvidia-drivers with the tools USE flag set.

Enable OpenGL/OpenCL
To enable OpenGL and OpenCL through the device, run:
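Assuming the eselect opengl and opencl modules are available, a sketch:

```shell
eselect opengl set nvidia
eselect opencl set nvidia
```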

Make sure that the Xorg server is not running during these changes.

Testing the card
To test the NVIDIA card, fire up X and run glxinfo, which is part of the x11-apps/mesa-progs package. It should say that direct rendering is activated:
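With an X session running, the status line can be extracted like so:

```shell
# glxinfo is part of x11-apps/mesa-progs
glxinfo | grep "direct rendering"
# A working setup reports: direct rendering: Yes
```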

To monitor the FPS, run glxgears.

Troubleshooting
For an overview of the currently open bugs reported against the package, take a look at its entries in Gentoo's Bugzilla.

Blinking console cursor and compat use flag
If a blinking console cursor appears instead of X when using the compat USE flag with nvidia-drivers-430 or newer, there may be a segmentation fault when Xorg starts.

Looking for the segfault:
 * 1) Boot to the blinking prompt screen
 * 2) Switch to a tty with Ctrl + Alt + F1
 * 3) Stop the display manager. For OpenRC: rc-service xdm stop. For systemd: systemctl stop gdm
 * 4) Launch X to see the output: startx

If a segmentation fault error occurs on the nvidia module during startx, rebuild nvidia-drivers without the compat USE flag.

FATAL: modpost: GPL-incompatible module *.ko uses GPL-only symbol
When the ebuild is complaining about the 'mutex_destroy' GPL-only symbol:

Be sure to disable CONFIG_DEBUG_MUTEXES in the kernel's .config file, as suggested by this forum thread.

Driver fails to initialize when MSI interrupts are enabled
The Linux NVIDIA driver uses Message Signaled Interrupts (MSI) by default. This provides compatibility and scalability benefits, mainly due to the avoidance of IRQ sharing. Some systems have been seen to have problems supporting MSI, while working fine with virtual wire interrupts. These problems manifest as an inability to start X with the NVIDIA driver, or CUDA initialization failures.

MSI interrupts can be disabled via the NVreg_EnableMSI kernel module parameter. This can be set on the command line when loading the module, or more appropriately via the distribution's kernel module configuration files (such as those under /etc/modprobe.d/).

For instance:
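For example, in a modprobe configuration file (the filename is arbitrary):

```
# /etc/modprobe.d/nvidia.conf
# Disable Message Signaled Interrupts for the nvidia module
options nvidia NVreg_EnableMSI=0
```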

Getting 2D acceleration to work on machines with 4GB memory or more
When NVIDIA 2D acceleration is giving problems, it is likely that the system is unable to set up a write-combining range with MTRR. To verify, check the contents of /proc/mtrr:

Every line should contain write-back or write-combining. When a line shows up with uncachable in it, then it is necessary to change a BIOS setting to fix this.

Reboot and enter the BIOS, then find the MTRR settings (probably under "CPU Settings"). Change the setting from continuous to discrete and boot back into Linux. There will no longer be an uncachable entry, and 2D acceleration will now work without any glitches.

Alternatively, it might be necessary to enable MTRR cleanup support (CONFIG_MTRR_SANITIZER=Y) in the Linux kernel:
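In menuconfig, this appears beneath the MTRR option:

```
Processor type and features --->
    [*] MTRR (Memory Type Range Register) support
    [*]   MTRR cleanup support            # CONFIG_MTRR_SANITIZER=y
```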

"no such device" appears when trying to load the kernel module
This is usually caused by one of the following issues:


 * 1) The system does not have an NVIDIA card at all. Check the lspci output to confirm that the system has an NVIDIA graphics card installed and detected.
 * 2) The currently installed version of nvidia-drivers does not support the installed graphics card model. Check the README file in /usr/share/nvidia-drivers-*/ for a list of supported devices, or use the driver search at http://www.geforce.com/drivers.
 * 3) Another kernel driver has control of the hardware. Check the lspci -k output to see if another driver like "nouveau" or "efifb" is bound to the graphics card. If so, disable or blacklist this driver.

Xorg says it can't find any screens
When the system boots to a black screen or a console prompt instead of the GUI, press Ctrl + Alt + F1 to bring up a virtual console. Next, run startx to see the output of Xorg. If one of the first errors is that Xorg can't find any screens, follow the steps below to resolve the issue.

It should be enough to run the following command before rebooting:

But if that doesn't work, run lspci and notice that the entry for the video card starts off like this:

Take the first bit, the PCI bus address, and put it in the /etc/X11/xorg.conf file with the BusID option:
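A sketch of the relevant Device section, assuming (hypothetically) that the card sits at PCI address 01:00.0; note that the BusID option uses decimal values:

```
Section "Device"
    Identifier "nvidia card"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"
EndSection
```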

Direct rendering is not enabled
If direct rendering does not work, it may be because the kernel has the Direct Rendering Manager enabled, which conflicts with the driver. Check the direct rendering status by following the instructions in the section Testing the card.

First, disable the Direct Rendering Manager in the kernel:
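In menuconfig, this is CONFIG_DRM, which must be disabled:

```
Device Drivers --->
    Graphics support --->
        < > Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)
```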

Next, rebuild nvidia-drivers, since the driver may have been built against the kernel DRM symbols. This should fix the problem.

Video playback stuttering or slow
Lately there has been some breakage in playback of certain types of video with the NVIDIA binary drivers, causing slow video playback or significant stuttering. The problem appears to occur when the Intel cpuidle driver is used in place of the common ACPI CPU idling method on certain CPUs.

Disable the Intel CPU idling method by adding intel_idle.max_cstate=0 to the kernel command line; the kernel then automatically falls back to the older ACPI CPU idling method. Disabling the NVIDIA PowerMizer feature, or setting PowerMizer to maximum performance within nvidia-settings, has also been reported to help. The Intel CPU idling method, which was recently introduced as the default for i5 and i7 CPUs (versus ACPI CPU idling), appears to be the root cause here. Disabling it largely solves the problem, although some minimal stuttering or slow video may still be encountered when deinterlacing is enabled on video that is likely already deinterlaced; in that case, configuring the video player not to deinterlace works around the issue.

If you're using GRUB2 as your bootloader, you can add this kernel parameter to GRUB_CMDLINE_LINUX in /etc/default/grub like so:
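For example, appending the parameter to any existing GRUB_CMDLINE_LINUX contents:

```
# /etc/default/grub
GRUB_CMDLINE_LINUX="intel_idle.max_cstate=0"
```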

Don't forget to run grub-mkconfig -o /boot/grub/grub.cfg after making the change, so that the new configuration is generated (see the GRUB2 page for further details).

After you have rebooted, verify that the change is active by checking that the parameter appears in /proc/cmdline:

No vertical synchronization (no VSync, tearing) in OpenGL applications
Adding the following option to the screen section prevents tearing on GTX 660, 660 Ti, and probably some other GPUs (reference):
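One commonly cited form of this option, assuming a single, automatically selected display, forces the full composition pipeline via the metamodes setting:

```
Section "Screen"
    Identifier "Screen0"
    Option "metamodes" "nvidia-auto-select +0+0 {ForceFullCompositionPipeline=On}"
EndSection
```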

Documentation
The package also comes with comprehensive documentation. This is installed into /usr/share/doc/nvidia-drivers-*/ and can be viewed with the following command:

Kernel module parameters
The nvidia kernel module accepts a number of parameters (options) which can be used to tweak the behavior of the driver. Most of these are mentioned in the documentation. To add or change the values of these parameters, edit the file /etc/modprobe.d/nvidia.conf. Remember to update the module information after modifying this file, and bear in mind that the nvidia module has to be reloaded before the new settings take effect.

Edit /etc/modprobe.d/nvidia.conf, and afterwards update the module information:

Unload the nvidia module...

...and load it once again:
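Both steps as commands:

```shell
modprobe -r nvidia   # unload the module
modprobe nvidia      # load it again so the new parameters take effect
```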

Advanced X configuration
The GLX layer also has a plethora of options which can be configured. These control the configuration of TV out, dual displays, monitor frequency detection, etc. Again, all of the available options are detailed in the documentation.

To use any of these options, list them in the relevant Device section of the X config file (usually /etc/X11/xorg.conf). For example, to disable the splash logo:
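A sketch of a Device section with the NoLogo option set:

```
Section "Device"
    Identifier "nvidia card"
    Driver     "nvidia"
    Option     "NoLogo" "true"
EndSection
```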