NVIDIA/nvidia-drivers

x11-drivers/nvidia-drivers is the proprietary graphics driver for nVidia graphics cards. An open source alternative is nouveau.

The drivers in the tree are released by nVidia and are built against the Linux kernel. They contain a binary blob that does the heavy lifting for talking to the card. The drivers consist of two parts: a kernel module and an X11 driver. Both parts are included in a single package. Due to the way nVidia has been packaging their drivers, it is necessary to make some choices before installing them.

The nvidia-drivers package contains the latest drivers from nVidia with support for all cards, with several versions available depending on how old the card is. It uses an eclass to detect what kind of card the system is running so that it installs the proper version.

Hardware compatibility
The nvidia-drivers package supports a range of available nVidia cards. Multiple versions are available for installation, depending on the card(s) that the system has. See the official nVidia documentation, What's a legacy driver?, to find out which version of nvidia-drivers should be used. A pretty decent way to find this out is through the interactive form on that page: enter the graphics card that is used by the system (mind the Legacy option in the 'Product Type' field) and the form will report the best supported version.

If the card has been identified as a legacy card, then mask the more recent releases of nvidia-drivers, e.g.:
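For example, to keep a system on a legacy branch, a mask like the following could be used (the 304.xx branch shown here is only illustrative; use the branch that the nVidia form reports for the card):

```shell
# /etc/portage/package.mask
# Mask nvidia-drivers releases newer than the legacy branch this card needs
# (304 is an example branch number)
>x11-drivers/nvidia-drivers-304
```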

Note that Gentoo does not provide the 71.86.xx versions. If the system has a card that needs these drivers then it is recommended to use the nouveau driver.

Kernel
As mentioned above, the nVidia kernel driver installs and runs against the current kernel. It builds as a module, so the kernel must support the loading of kernel modules (see below).

The kernel module (nvidia.ko) consists of a proprietary part (commonly known as the "binary blob") which drives the graphics chip(s), and an open source part (the "glue") which at runtime acts as intermediary between the proprietary part and the kernel. These all need to work nicely together as otherwise the user might be faced with data loss (through kernel panics, X servers crashing with unsaved data in X applications) and even hardware failure (overheating and other power management related issues should spring to mind).

Kernel compatibility
From time to time, a new kernel release changes the internal ABI for drivers, which means all drivers that use those ABIs must be changed accordingly. For open source drivers, especially those distributed with the kernel, these changes are nearly trivial to fix since the entire chain of calls between drivers and other parts of the kernel can be reviewed quite easily. For proprietary drivers like nvidia.ko, it doesn't work quite the same. When the internal ABIs change, then it is not possible to merely fix the "glue", because nobody knows how the glue is used by the proprietary part. Even after managing to patch things up to have things seem to work nicely, the user still risks that running nvidia.ko in the new, unsupported kernel will lead to data loss and hardware failure.

When a new, incompatible kernel version is released, it is probably best to stick with the newest supported kernel for a while. Nvidia usually takes a few weeks to prepare a new proprietary release they think is fit for general use. Just be patient. If absolutely necessary, then it is possible to use the epatch_user command with the nvidia-drivers ebuilds: this allows the user to patch nvidia-drivers to somehow fit in with the latest, unsupported kernel release. Do note that neither the nvidia-drivers maintainers nor Nvidia will support this situation. The hardware warranty will most likely be void, Gentoo's maintainers cannot begin to fix the issues since it's a proprietary driver that only Nvidia can properly debug, and the kernel maintainers (both Gentoo's and upstream) will certainly not support proprietary drivers, or indeed any "tainted" system that happens to run into trouble.

Required kernel options
If genkernel all was used to configure the kernel, then everything is all set. If not, double-check the kernel configuration so that loadable module support is enabled:
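In menuconfig this is the top-level option (the exact wording may vary slightly between kernel versions):

```
[*] Enable loadable module support  --->
```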

Also enable Memory Type Range Register in the kernel:
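In menuconfig, MTRR support is found under the processor options (menu placement may differ slightly between kernel versions):

```
Processor type and features  --->
    [*] MTRR (Memory Type Range Register) support
```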

If the system has an AGP graphics card, then optionally enable agpgart support to the kernel, either compiled in or as a module. If the in-kernel agpgart module is not used, then the drivers will use its own agpgart implementation, called NvAGP. On certain systems, this performs better than the in-kernel agpgart, and on others, it performs worse. Evaluate either choice on the system to get the best performance. When uncertain what to do, use the in-kernel agpgart:
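When opting for the in-kernel agpgart, enable it built-in or as a module (menu placement may differ slightly between kernel versions):

```
Device Drivers  --->
    Graphics support  --->
        <*> /dev/agpgart (AGP Support)  --->
```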

A framebuffer alternative is uvesafb, which can be installed in parallel to nvidia-drivers.

The nvidia-drivers ebuild automatically discovers the kernel version based on the /usr/src/linux symlink. Please ensure that this symlink is pointing to the correct sources and that the kernel is correctly configured. Refer to the "Configuring the Kernel" section of the Gentoo Handbook for details on configuring the kernel.

First, choose the right kernel source using eselect. When using gentoo-sources-3.7.10, the kernel listing might look something like this:
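With two source trees installed, the listing might resemble the following (the version numbers and the output shown in comments are illustrative):

```shell
eselect kernel list
# Available kernel symlink targets:
#   [1]   linux-3.7.10-gentoo *
#   [2]   linux-3.6.11-gentoo
```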

In the above output, notice that the linux-3.7.10-gentoo kernel is marked with an asterisk to show that it is the symlinked kernel.

If the symlink is not pointing to the correct sources, update the link by selecting the number of the desired kernel sources, as in the example above.
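For example, to point the symlink at the first entry in the listing:

```shell
eselect kernel set 1
```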

Drivers
Now it's time to install the drivers. First follow the X Server Configuration Guide and set VIDEO_CARDS="nvidia" in /etc/portage/make.conf. During the installation of the X server, it will then install the right version of nvidia-drivers.
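The standard mechanism looks like this (the emerge invocation is the usual way to pull in the X server; adjust to taste):

```shell
# In /etc/portage/make.conf:
#   VIDEO_CARDS="nvidia"
# Then install the X server, which pulls in nvidia-drivers:
emerge --ask x11-base/xorg-server
```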

Once the installation has finished, run modprobe nvidia to load the kernel module into memory. If this is an upgrade, remove the previous module first.
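For an upgrade, the sequence looks like:

```shell
# Unload a previously loaded module first (only needed on upgrades)
modprobe -r nvidia
# Load the freshly built module
modprobe nvidia
```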

To avoid having to load the module manually on every bootup, have this done automatically each time the system is booted: edit /etc/conf.d/modules and add nvidia to it.
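On an OpenRC system, the entry would look like (assuming /etc/conf.d/modules is the module-loading configuration in use):

```
# /etc/conf.d/modules
modules="nvidia"
```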

The X server
Once the appropriate drivers are installed, configure the X server to use the nvidia driver instead of the default nv driver.

Run eselect so that the X server uses the nVidia GLX libraries:
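The GLX implementation is switched with:

```shell
eselect opengl set nvidia
```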

Testing the card
To test the nVidia card, fire up X and run glxinfo, which is part of the x11-apps/mesa-progs package. It should say that direct rendering is activated:
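For example, filtering the output for the relevant line:

```shell
glxinfo | grep "direct rendering"
# should report: direct rendering: Yes
```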

To monitor the FPS, run glxgears.

Enabling nvidia support
Some tools, such as media-video/mplayer and media-libs/xine-lib, use a local USE flag called xvmc which enables XvMCNVIDIA support, useful when watching high resolution movies. Add xvmc to the USE variable in /etc/portage/make.conf, or add it as a USE flag to media-video/mplayer and/or media-libs/xine-lib in /etc/portage/package.use.
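Per-package entries would look like this (the two packages shown are examples of xvmc consumers):

```
# /etc/portage/package.use
media-video/mplayer xvmc
media-libs/xine-lib xvmc
```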

GeForce 8 series and later GPUs come with VDPAU support, which supersedes XvMCNVIDIA. See the VDPAU article for enabling VDPAU support.

There are also some applications that use the vdpau USE flag, so it might be a good idea to add it to /etc/portage/make.conf.

Then, run emerge -uD --newuse @world to rebuild the applications that benefit from the USE flag change.

Using the nVidia settings tool
nVidia also provides a settings tool. This tool allows the user to monitor and change graphical settings without restarting the X server and is available through Portage as media-video/nvidia-settings. As mentioned earlier, it will be pulled in automatically when installing the drivers with the tools USE flag set in /etc/portage/make.conf or in /etc/portage/package.use.

Enable OpenGL/OpenCL
To enable OpenGL and OpenCL, select the nVidia implementations with eselect:
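Assuming both eselect modules are installed:

```shell
eselect opengl set nvidia
eselect opencl set nvidia
```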

Make sure that the Xorg server is not running during these changes.

Driver fails to initialize when MSI interrupts are enabled
The Linux NVIDIA driver uses Message Signaled Interrupts (MSI) by default. This provides compatibility and scalability benefits, mainly due to the avoidance of IRQ sharing. Some systems have been seen to have problems supporting MSI, while working fine with virtual wire interrupts. These problems manifest as an inability to start X with the NVIDIA driver, or CUDA initialization failures.

MSI interrupts can be disabled via the NVreg_EnableMSI kernel module parameter. This can be set on the command line when loading the module, or more appropriately via the distribution's kernel module configuration files (such as those under /etc/modprobe.d/).

For instance:
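A modprobe configuration entry disabling MSI (the file name is arbitrary):

```
# /etc/modprobe.d/nvidia.conf
options nvidia NVreg_EnableMSI=0
```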

Getting 2D acceleration to work on machines with 4GB memory or more
When nVidia 2D acceleration is giving problems, it is likely that the system is unable to set up a write-combining range with MTRR. To verify, check the contents of /proc/mtrr:
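The output in the comments below is illustrative; actual base addresses and sizes depend on the system:

```shell
cat /proc/mtrr
# reg00: base=0x000000000 (    0MB), size= 2048MB: write-back, count=1
# reg01: base=0x0d0000000 ( 3328MB), size=  256MB: write-combining, count=1
```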

Every line should contain write-back or write-combining. When a line shows up with uncachable in it, then it is necessary to change a BIOS setting to fix this.

Reboot and enter the BIOS, then find the MTRR settings (probably under "CPU Settings"). Change the setting from continuous to discrete and boot back into Linux. There should now be no uncachable entry anymore, and 2D acceleration should work without any glitches.

"no such device" appears when trying to load the kernel module
This is usually caused by one of the following issues:


 1. The system does not have an nVidia card at all. Check lspci output to confirm that the system has an nVidia graphics card installed and detected.
 2. The currently installed version of nvidia-drivers does not support the installed graphics card model. Check the README file in /usr/share/nvidia-drivers-*/ for a list of supported devices, or use the driver search at http://www.geforce.com/drivers.
 3. Another kernel driver has control of the hardware. Check lspci -k to see if another driver like "nouveau" is bound to the graphics card. If so, disable or blacklist this driver.

Xorg says it can't find any screens
When, after booting the system, it ends up with a black screen or a console prompt instead of the GUI, press Ctrl+Alt+F1 to bring up a virtual console. Next, run:
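Assuming the default log location:

```shell
less /var/log/Xorg.0.log
```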

to see the output of Xorg. If one of the first errors is that Xorg can't find any screens, then take the following steps to resolve the issue.

It should be enough to run the following command before rebooting:
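Given that the X server must use the nVidia GLX libraries, re-selecting the nVidia OpenGL implementation is the usual fix:

```shell
eselect opengl set nvidia
```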

But if that doesn't work, run lspci and note the line for the video card, which starts off like this:
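The bus address in the comment below is only an example; it will differ per system:

```shell
lspci | grep -i vga
# 01:00.0 VGA compatible controller: nVidia Corporation ...
```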

Take the first bit of that line (the PCI bus address, e.g. 01:00.0) and put it in the /etc/X11/xorg.conf file with the BusID option:
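A Device section using a bus address of 01:00.0 would look like this (note that BusID uses the decimal PCI:bus:device:function form, and the Identifier string is arbitrary; adjust both to the actual system):

```
# /etc/X11/xorg.conf
Section "Device"
    Identifier "nvidia"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"
EndSection
```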

Direct rendering is not enabled
If direct rendering does not work, it may be because the kernel has Direct Rendering Manager enabled, which conflicts with the driver. See the direct rendering status by following instructions in the section Testing the card.

First, disable Direct Rendering Manager in the kernel:
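In menuconfig (the exact menu label varies between kernel versions):

```
Device Drivers  --->
    Graphics support  --->
        < > Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)
```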

Next, rebuild nvidia-drivers, since the driver may have been built against the kernel DRM symbols. That should fix the problem.
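After rebuilding and booting the new kernel, reinstall the driver package:

```shell
emerge --oneshot x11-drivers/nvidia-drivers
```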

Video playback stuttering or slow
Lately there have been problems with playback of some types of video with the nVidia binary drivers, causing slow video playback or significant stuttering. The problem appears to lie in the Intel CPU idle driver, which replaces the common ACPI CPU idling method on certain CPUs.

Disable the Intel CPU idling method by passing intel_idle.max_cstate=0 on the kernel command line, which causes the kernel to fall back to the normal, older ACPI CPU idling method. The Intel idling method was recently introduced as the default for i5 and i7 CPUs (versus ACPI CPU idling) and appears to be the root cause here; reverting to ACPI idling largely solves the problem. Additionally, disabling the nVidia PowerMizer feature, or setting PowerMizer to maximum performance within nvidia-settings, has been reported to help. Some minimal stuttering or slow video may still be encountered if deinterlacing is enabled on video that is likely already deinterlaced; as a workaround, alias mplayer-nodeint to something similar to mplayer -vo vdpau:deint=0:denoise=0:nochroma-deint:colorspace=0:hqscaling=1 video.mpg.
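With GRUB legacy, the parameter is appended to the kernel line (the file path, kernel image name, and root device below are illustrative; adjust for the bootloader in use):

```
# /boot/grub/grub.conf
kernel /boot/kernel-3.7.10-gentoo root=/dev/sda3 intel_idle.max_cstate=0
```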

Documentation
The nvidia-drivers package also comes with comprehensive documentation. This is installed into /usr/share/doc and can be viewed with the following command:
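For instance (the exact file name depends on the installed driver version and on how Portage compressed the documentation):

```shell
less /usr/share/doc/nvidia-drivers-*/README.bz2
```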

Kernel module parameters
The nvidia kernel module accepts a number of parameters (options) which can be used to tweak the behaviour of the driver. Most of these are mentioned in the documentation. To add or change the values of these parameters, edit the /etc/modprobe.d/nvidia.conf file. Remember to run update-modules after modifying this file, and bear in mind that the nvidia module needs to be reloaded before the new settings take effect.

Edit /etc/modprobe.d/nvidia.conf:
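A minimal example entry (the parameter shown is illustrative; see the driver documentation for the full list):

```
# /etc/modprobe.d/nvidia.conf
options nvidia NVreg_EnableMSI=0
```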

Update module information:
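Using the legacy tool mentioned above:

```shell
update-modules
```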

Unload the nvidia module...
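```shell
modprobe -r nvidia
```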

...and load it once again:
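```shell
modprobe nvidia
```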

Advanced X configuration
The GLX layer also has a plethora of options which can be configured. These control the configuration of TV out, dual displays, monitor frequency detection, etc. Again, all of the available options are detailed in the documentation.

To use any of these options, list them in the relevant Device section of the X config file (usually /etc/X11/xorg.conf). For example, to disable the splash logo:
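A Device section with the NoLogo option set (the Identifier string is arbitrary; keep whatever the existing configuration uses):

```
# /etc/X11/xorg.conf
Section "Device"
    Identifier "nvidia"
    Driver     "nvidia"
    Option     "NoLogo" "true"
EndSection
```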