From Gentoo Wiki

Network File System (NFS) is a file system protocol that allows client machines to connect to network-attached file shares. The newest version is version 4.


USE flags

Optional USE flags for net-fs/nfs-utils:
USE flag   Default   Description
caps No Use Linux capabilities library to control privilege
ipv6 Yes Add support for IP version 6
kerberos No Add kerberos support
libmount Yes Link mount.nfs with libmount
nfsdcld No Enable nfsdcld NFSv4 clientid tracking daemon
nfsidmap Yes Enable support for newer nfsidmap helper
nfsv4 Yes Enable support for NFSv4
nfsv41 No Enable support for NFSv4.1
tcpd No Add support for TCP wrappers
uuid Yes Support UUID lookups in rpc.mountd
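Non-default flags can be set per-package before emerging. A minimal sketch, assuming flags are managed in /etc/portage/package.use (the kerberos selection here is only an illustration, not a recommendation):

```shell
# /etc/portage/package.use/nfs-utils
# Example only: enable Kerberos support in addition to the default NFSv4 support.
net-fs/nfs-utils kerberos nfsv4
```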


Install net-fs/nfs-utils:

root #emerge --ask net-fs/nfs-utils


The NFSv4 server configuration is mostly like configuring a version 3 server, with one major change: all NFS shares are exported from one virtual root directory.

In this example the server has two NFS shares:

  • /export/home - directory with user homes
  • /export/data - directory with example data

These two shares are mounted on the server system at the following points in the tree:

user $df -h | egrep 'File|home|data'
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1        20G  977M   19G   5% /home
/dev/sdc1       200G   91G  110G  46% /data

The client now uses TCP as the default protocol to mount NFS shares. Previous NFS versions used UDP as the default protocol.


Verify that NFS version 4 is enabled in the currently running Linux kernel. This has to be ensured on the server as well as on the client installations:

root #cd /usr/src/linux
root #make menuconfig
File systems  --->
   Network File Systems  --->
      <*>   NFS client support
      [*]     NFS client support for NFS version 4
      <*>   NFS server support
      [*]     NFS server support for NFS version 4 (EXPERIMENTAL)
NFS server support is not needed on an NFS client installation, and NFS client support is not necessarily needed on an NFS server.

Optionally, NFSv4 support can be built as a kernel module. After NFSv4 support has been enabled, a new Linux kernel must be built and installed in order to access the features. Restart the system after installing the kernel.
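The kernel options can also be checked non-interactively. A minimal sketch, assuming the running kernel exposes its configuration via /proc/config.gz (CONFIG_IKCONFIG_PROC) or that /usr/src/linux/.config matches the running kernel:

```shell
# Check the running kernel's configuration, if exposed via procfs:
zgrep -E 'CONFIG_NFS_V4=|CONFIG_NFSD_V4=' /proc/config.gz

# Alternatively, check the configuration file in the kernel build tree:
grep -E 'CONFIG_NFS_V4=|CONFIG_NFSD_V4=' /usr/src/linux/.config
```

On a correctly configured system both options should report =y (built in) or =m (module).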



A virtual NFS root directory needs to be created:

root #cd /
root #mkdir export
It is possible to substitute the name and location of the virtual root (/export in this example) with any other directory. For example, /nfsroot or /home/NFSv4root could be used instead of /export.

Create two sub-directories in the /export directory for the NFS shares. This can be done in one fell swoop from the command line by using the && operator:

root #cd /export && mkdir {home,data}

NFS shares

Mount the shares to their mount points:

root #mount --bind /home /export/home && mount --bind /data /export/data

Add the following two lines to /etc/fstab, so the NFS shares will still be available after a system reboot:

FILE /etc/fstab
/home    /export/home   none    bind  0  0
/data    /export/data   none    bind  0  0
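The bind mounts can be verified right away; findmnt from util-linux, for example, lists everything mounted below the virtual root:

```shell
# Show all mounts below /export, including the two bind mounts:
findmnt -R /export

# Or filter the classic mount output:
mount | grep /export
```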


NFS shares are configured in the /etc/exports file. This file has the following structure:

source1         target1(option1,option2)
source2         target2(option1,option3)
  • source : the directory to export, either the virtual root itself or a particular NFS share, e.g. /export/home.
  • target : a single host (e.g. larrysPC), a network, or a wildcard such as *, which means the share can be accessed from all networks by all hosts on all interfaces.

For the available /etc/exports options consult the following table:

Option Explanation
ro (default) The directory is shared read-only; the client machine will not be able to write to it.
rw The client machine will have read and write access to the directory.
no_root_squash If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server. This can have serious security implications. You should not specify this option without a good reason.
no_subtree_check If only part of a volume is exported, a routine called subtree checking verifies that a file that is requested from the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers.
sync (default) The server replies to requests only after changes have been committed to stable storage. It is possible to switch to async, which is faster but risks data loss if the server crashes.
insecure Tells the NFS server to accept connections from unprivileged client ports (ports above 1024). This may be needed to allow mounting the NFS share from Mac OS X or through the nfs:/ kioslave in KDE.
fsid=0 The NFS server identifies each file system that it exports with a number. For the NFSv4 server there is a virtual root file system which is the root of all exported file systems; this root is identified with fsid=0.

Specify the virtual root /export as the first entry, then define the specific shares. In this particular case the file will look like the example below:

FILE /etc/exports
/export,fsid=0,no_subtree_check)
/export/home,nohide,insecure,no_subtree_check)
/export/data,nohide,insecure,no_subtree_check)
Substitute the target network used in this example ( with your own network.
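After editing /etc/exports on a running server, the changes can be applied without restarting the service; exportfs is part of net-fs/nfs-utils:

```shell
# Re-export all entries from /etc/exports, removing stale exports:
exportfs -ra
# Display the active export table with all effective options:
exportfs -v
```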


To provide NFSv4 protocol access only, specify in the /etc/conf.d/nfs file which version of the NFS protocol the server has to use (-V 4) and which versions are not supported (-N 3 -N 2).

FILE /etc/conf.d/nfs
# Number of servers to be started up by default
OPTS_RPC_NFSD="8 -V 4.1 -V 4 -N 4.2 -N 3 -N 2"
To additionally provide NFS version 3 support, consider building it as a separate kernel module that can be loaded alongside the version 4 support.

Starting service daemon

Finally start the configured NFS daemon:

root #/etc/init.d/nfs start
 * Starting rpcbind ...                                                                            [ ok ]
 * Starting NFS statd ...                                                                          [ ok ]
 * Starting idmapd ...                                                                             [ ok ]
 * Use of the opts variable is deprecated and will be
 * removed in the future.
 * Please use extra_commands, extra_started_commands or extra_stopped_commands.
 * Exporting NFS directories ...                                                                   [ ok ]
 * Starting NFS mountd ...                                                                         [ ok ]
 * Starting NFS daemon ...                                                                         [ ok ]
 * Starting NFS smnotify ...                                                                       [ ok ]

As shown, several services are started in a specific order, with rpcbind started first. If there is a need to stop NFS, the easiest way to stop all NFS services at once is to stop the rpcbind service itself.

The following command will shut down each service shown in the startup example above:

root #/etc/init.d/rpcbind stop

Add the nfs script to the default runlevel so it is started automatically after a reboot:

root #rc-update add nfs default
* service nfs added to runlevel default


Mounting remote directories

Before mounting remote directories, a few daemons must be started first. This is the job of the nfsclient service.

root #/etc/init.d/nfsclient start
 * Starting rpcbind                                                       [ ok ]
 * Starting NFS statd                                                     [ ok ]
 * Starting NFS sm-notify                                                 [ ok ]

The directories can then be mounted with the following command:

root #mount server:/home /home
Be sure to substitute server with the IP address or DNS name of your own NFS server, and home with the names of the remote share and local mount point, respectively.
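If the server also offers NFSv3, the negotiated protocol version can be pinned on the client. A sketch using standard nfs(5) mount options; server and the paths are placeholders as above:

```shell
# Request NFSv4 explicitly via the file system type:
mount -t nfs4 server:/home /home

# Or via a mount option (roughly equivalent with current nfs-utils):
mount -t nfs -o vers=4 server:/home /home
```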

Mounting at boot time

Add NFS shares to the /etc/fstab file.

Option 1

Mount the NFS virtual root including all exported shares at once.

The server's exported virtual root (/export in this example) is recognized as the root of the file system on the client side, so it is referenced simply as / in the example below. This is a different approach compared to NFSv3.
FILE /etc/fstab
server:/         /mnt     nfs     rw,_netdev,auto   0  0

Option 2

Define each NFS share individually to be able to mount them at different local mount points:

FILE /etc/fstab
server:/home      /home     nfs     rw,_netdev,auto   0  0
server:/data      /data     nfs     rw,_netdev,auto   0  0

Finally start the netmount service:

root #/etc/init.d/netmount start
 * Mounting network filesystems ...                                       [ ok ]

Add nfsclient and netmount services to the default runlevel:

root #rc-update add nfsclient default
root #rc-update add netmount default

At this point the NFS shares should be mounted on the client. This can be verified with the following command:

user $netstat -tn | egrep '2049|Active|Pro'
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0        ESTABLISHED

Or run:

user $df -h


Shutdown process hangs when trying to unmount NFS shares

If the system shutdown hangs at

* Unmounting network filesystems ...

then make sure the NFS shares are unmounted properly before udev tries to stop. One way to work around this is to create local.d scripts that unmount the NFS file systems:

root #echo "umount -a -t nfs4 -f" > /etc/local.d/nfs4.stop
root #chmod a+x /etc/local.d/nfs4.stop
root #echo "umount -a -t nfs -f" > /etc/local.d/nfs.stop
root #chmod a+x /etc/local.d/nfs.stop

Additional troubleshooting tricks and tips

Verifying that the NFS server is running and listening for connections:

root #netstat -tupan | egrep 'rpc|Active|Proto'
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 *               LISTEN      1891/rpc.statd
tcp        0      0   *               LISTEN      1875/rpcbind
udp        0      0   *                           1875/rpcbind
udp        0      0 *                           1891/rpc.statd
udp        0      0   *                           1875/rpcbind
udp        0      0   *                           1891/rpc.statd

Verifying which NFS-specific daemons are running:

root #rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  57655  status
    100024    1   tcp  34950  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100021    1   udp  44208  nlockmgr
    100021    3   udp  44208  nlockmgr
    100021    4   udp  44208  nlockmgr
    100021    1   tcp  44043  nlockmgr
    100021    3   tcp  44043  nlockmgr
    100021    4   tcp  44043  nlockmgr

Showing exported NFS shares on the server side:

root #exportfs -v
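The exports can also be probed remotely from a client. rpcinfo and showmount are standard RPC/NFS tools; replace server with the server's address:

```shell
# Check that the server answers NFS version 4 requests over TCP:
rpcinfo -t server nfs 4

# List the exports visible to clients (relies on mountd; an NFSv4-only
# server may not answer this request):
showmount -e server
```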

Verifying current open connections to the NFS server:

user $netstat -tn | egrep '2049|Active|Proto'
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0        ESTABLISHED

For more specific troubleshooting examples, visit the following links:

External resources