From Gentoo Wiki


Install net-fs/nfs-utils:

→ Information about USE flags
USE flag   Default  Recommended  Description
nfsv3      No                    Enable support for NFSv3
nfsv4      No       Yes          Enable support for NFSv4
tcpd       Yes                   Adds support for TCP wrappers
caps       No                    Use Linux capabilities library to control privilege
ipv6       Yes                   Adds support for IP version 6
kerberos   No                    Adds kerberos support
root # emerge --ask nfs-utils
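To match the recommended flags from the table above, the USE flags can be set in /etc/portage/package.use before emerging (a sketch; the file name under package.use is arbitrary):

```
# /etc/portage/package.use/nfs-utils
net-fs/nfs-utils nfsv4
```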


This section explains how to set up a simple NFS server and NFS client configuration. Configuring an NFSv4 server is mostly like configuring a version 3 server, with one major change: all NFS shares are exported from one virtual root directory.

The server exports two NFS shares:

  • /export/home - directory with user homes
  • /export/data - directory with example data

These two shares are mounted on the server system at the following points in the tree:

user $ df -h | egrep 'File|home|data'
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1        20G  977M   19G   5% /home
/dev/sdc1       200G   91G  110G  46% /data

The client now uses TCP as the default protocol to mount NFS shares; previous NFS versions used UDP as the default protocol.


Verify that NFS version 4 support is enabled in the currently running Linux kernel; this has to be ensured on the server as well as on the client installations:

root # cd /usr/src/linux
root # make nconfig
Kernel configuration

File systems  --->
   Network File Systems  --->
      <*>   NFS client support
      [*]     NFS client support for NFS version 4
      <*>   NFS server support
      [*]     NFS server support for NFS version 4 (EXPERIMENTAL)
NFS server support is not needed on an NFS client installation; likewise, NFS client support is not necessarily needed on an NFS server.

Optionally, NFSv4 support can be built as a kernel module. After NFSv4 support has been enabled, the new Linux kernel needs to be built and installed, and the system has to be restarted.
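Whether the required options are already enabled can also be checked directly against the kernel configuration file (a sketch; this assumes the sources in /usr/src/linux were used to build the running kernel):

```
root # grep -E 'CONFIG_NFS_V4|CONFIG_NFSD_V4' /usr/src/linux/.config
```

Values of y (built-in) or m (module) indicate that the respective NFSv4 client or server support is enabled.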



A virtual NFS root directory needs to be created:

root # cd /
root # mkdir export
You can substitute the name and the location of the virtual root (here: /export) with anything else, e.g. /nfsroot or /home/NFSv4root.

Create two directories in the /export directory for the NFS shares:

root # cd /export && mkdir {home,data}

NFS shares

Mount the shares to their mount points:

root # mount --bind /home /export/home && mount --bind /data /export/data

Add the following two lines to /etc/fstab, so the NFS shares will still be available after a system reboot:


/home    /export/home   none    bind  0  0
/data    /export/data   none    bind  0  0
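Whether the bind mounts are in place can be verified with findmnt from sys-apps/util-linux (a sketch; the output will reflect your own devices):

```
root # findmnt /export/home
root # findmnt /export/data
```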


NFS shares are configured in the /etc/exports file. This file has the following structure:

source1         target1(option1,option2)
source2         target2(option1,option3)
  • source : a directory to export, either the virtual root itself or a particular NFS share, e.g. /export/home.
  • target : a single host, e.g. larrysPC, or a network, or a wildcard such as *, which means the share can be accessed from all networks by all hosts on all interfaces.

For the options consult the following table:

/etc/exports options
Option Explanation
ro (default) The directory is shared read-only; the client machine will not be able to write to it.
rw The client machine will have read and write access to the directory.
no_root_squash If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server. This can have serious security implications. You should not specify this option without a good reason.
no_subtree_check If only part of a volume is exported, a routine called subtree checking verifies that a file that is requested from the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers.
sync The server replies to requests only after changes have been committed to stable storage (default). It is possible to switch to async.
insecure Tells the NFS server to accept connections from unprivileged ports (ports above 1024). This may be needed to allow mounting the NFS share from Mac OS X or through the nfs:/ kioslave in KDE.
fsid=0 The NFS server identifies each file system that it exports with a number. For the NFSv4 server there is a virtual root filesystem which is the root of all exported file systems; this root is identified with fsid=0.

Specify the virtual root /export as the first entry, then define the specific shares. In this particular case the file will look like the example below:


/export       192.168.1.0/24(rw,fsid=0,no_subtree_check)
/export/home  192.168.1.0/24(rw,nohide,insecure,no_subtree_check)
/export/data  192.168.1.0/24(rw,nohide,insecure,no_subtree_check)
Substitute the target network used in this example (192.168.1.0/24 is shown as a placeholder) with your own network.
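Instead of a whole network, a share can also be exported to a single host, such as the larrysPC example mentioned above (a sketch; use your own client's hostname or IP address):

```
/export/data  larrysPC(rw,nohide,insecure,no_subtree_check)
```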


To provide NFSv4 protocol access only, specify in the /etc/conf.d/nfs file which version of the NFS protocol the server has to use (-V 4) and which versions are not supported (-N 3 -N 2):


# Number of servers to be started up by default
OPTS_RPC_NFSD="8 -V 4.1 -V 4 -N 4.2 -N 3 -N 2"
To additionally provide NFS version 3 support, remove the -N 3 option and make sure NFSv3 support is enabled in the kernel (or built as a module).

Starting service daemon

Finally start the configured NFS daemon:

root # /etc/init.d/nfs start
 * Starting rpcbind ...                                                                            [ ok ]
 * Starting NFS statd ...                                                                          [ ok ]
 * Starting idmapd ...                                                                             [ ok ]
 * Use of the opts variable is deprecated and will be
 * removed in the future.
 * Please use extra_commands, extra_started_commands or extra_stopped_commands.
 * Exporting NFS directories ...                                                                   [ ok ]
 * Starting NFS mountd ...                                                                         [ ok ]
 * Starting NFS daemon ...                                                                         [ ok ]
 * Starting NFS smnotify ...                                                                       [ ok ]

As shown, several services are started in a specific order; rpcbind is started first. If there is a need to stop NFS, the easiest way to stop all NFS services at once is to stop the rpcbind service itself.

This command will shut down each service shown in the startup example above:

root # /etc/init.d/rpcbind stop

Add nfs to a runlevel, so it is available again after a reboot:

root # rc-update add nfs default
* service nfs added to runlevel default


Mounting remote directories

Before mounting remote directories, two daemons have to be started first:

  • rpcbind
  • rpc.statd
root # /etc/init.d/rpc.statd start
 * Starting rpcbind ...                                                   [ ok ]
 * Starting NFS statd ...                                                 [ ok ]

The directories can be mounted with the following command:

root # mount server:/home /home
Substitute the name server with the IP address or DNS name of your own NFS server.
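The NFS protocol version can also be requested explicitly at mount time, which is useful to verify that the server really speaks NFSv4 (again, server is a placeholder for your own NFS server):

```
root # mount -t nfs4 server:/home /home
```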

Mounting at boot time

Add NFS shares to the /etc/fstab file.

  • The first option is to mount the NFS virtual root, including all exported shares, at once:
The /export directory, the virtual root exported by the server, is recognized as the root of a file system on the client side, which is simply "/" in the example below. This is a different approach compared to NFSv3.

server:/         /mnt     nfs     rw,_netdev,auto   0  0
  • The second option is to define each NFS share individually, to be able to mount them at different local mount points:

server:/home      /home     nfs     rw,_netdev,auto   0  0
server:/data      /data     nfs     rw,_netdev,auto   0  0

Finally, start the nfsmount service:

root # /etc/init.d/nfsmount start
 * Starting rpcbind ...                                                   [ ok ]
 * Starting NFS statd ...                                                 [ ok ]
 * Starting idmapd ...                                                    [ ok ]
 * Starting NFS sm-notify ...                                             [ ok ]
 * Mounting NFS filesystems ...                                           [ ok ]

Add nfsmount to the default runlevel:

root # rc-update add nfsmount default

At this point the NFS shares should be mounted on the client. This can be verified with the following command:

user $ netstat -tn | egrep '2049|Active|Pro'
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0        ESTABLISHED

Or simply run:

user $ df -h
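df can also be limited to NFS mounts only; NFSv4 mounts show up with the nfs4 filesystem type:

```
user $ df -hT -t nfs4
```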



Verifying that the NFS server is running and listening for connections:

root # netstat -tupan | egrep 'rpc|Active|Proto'
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 *               LISTEN      1891/rpc.statd
tcp        0      0   *               LISTEN      1875/rpcbind
udp        0      0   *                           1875/rpcbind
udp        0      0 *                           1891/rpc.statd
udp        0      0   *                           1875/rpcbind
udp        0      0   *                           1891/rpc.statd

Verifying which NFS specific daemons are running:

root # rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  57655  status
    100024    1   tcp  34950  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100021    1   udp  44208  nlockmgr
    100021    3   udp  44208  nlockmgr
    100021    4   udp  44208  nlockmgr
    100021    1   tcp  44043  nlockmgr
    100021    3   tcp  44043  nlockmgr
    100021    4   tcp  44043  nlockmgr

Showing exported NFS shares on the server side:

root # exportfs -v
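After changing /etc/exports, the export table can be refreshed without restarting the whole NFS service:

```
root # exportfs -ra
```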

Verifying the current open connections to the NFS server:

user $ netstat -tn | egrep '2049|Active|Proto'
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0        ESTABLISHED

For more specific troubleshooting examples, visit the following links:

External resources