|nfsv3||No||Enable support for NFSv3|
|nfsv4||Yes||Enable support for NFSv4|
|tcpd||Yes||Adds support for TCP wrappers|
|caps||No||Use Linux capabilities library to control privilege|
|ipv6||Yes||Adds support for IP version 6|
|kerberos||No||Adds Kerberos support|
This section explains how to set up a simple NFS server and NFS client configuration. The NFSv4 server configuration is mostly like configuring version 3, with one major change: all NFS shares are exported from a single virtual root directory.
The server has two NFS shares:
- /export/home - directory with user homes
- /export/data - directory with example data
These two shares are mounted on the server system at the following points in the directory tree:
The client now uses the TCP protocol by default to mount NFS shares; previous NFS versions used UDP as the default protocol.
Verify that NFS version 4 support is enabled in the currently running Linux kernel. This has to be ensured on the server as well as on the client installations:
Optionally, NFSv4 support can be built as a kernel module. After NFSv4 support has been enabled, the new Linux kernel needs to be built and installed, and the system has to be restarted.
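One way to check is to grep the kernel configuration. This is a sketch assuming the kernel exposes its configuration via CONFIG_IKCONFIG_PROC; otherwise grep the .config file in the kernel source tree instead:

```shell
# CONFIG_NFS_V4 is the client side, CONFIG_NFSD_V4 the server side
# (assumes /proc/config.gz is available, i.e. CONFIG_IKCONFIG_PROC=y)
zgrep 'NFS_V4\|NFSD_V4' /proc/config.gz
```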
A virtual NFS root directory needs to be created:
Create two directories in /export for the NFS shares:
Mount the shares to their mount points:
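Put together, the steps above could be sketched as follows; /home and /data are assumed here to be the real locations of the user homes and the example data, substitute your actual directories:

```shell
# create the virtual NFS root together with the two share directories
mkdir -p /export/home /export/data

# bind-mount the real directories into the virtual root
# (/home and /data are assumed source locations)
mount --bind /home /export/home
mount --bind /data /export/data
```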
Add the following two lines to /etc/fstab so the NFS shares will still be available after a system reboot:
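The two fstab entries could look like this (again assuming /home and /data as the source directories):

```
/home    /export/home    none    bind    0 0
/data    /export/data    none    bind    0 0
```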
NFS shares are configured in the /etc/exports file. This file has the following structure:
source1 target1(option1,option2)
source2 target2(option1,option3)
- source: a directory, either the virtual root itself or a particular exported NFS share, e.g. /export/home.
- target: a single host, e.g. larrysPC, a network such as 192.168.0.0/28, or a wildcard like *, which means the share can be accessed by all hosts on all networks and interfaces.
For the options, consult the following table:
|ro||The directory is shared read-only (the default); the client machine will not be able to write to it.|
|rw||The client machine will have read and write access to the directory.|
|no_root_squash||If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server. This can have serious security implications. You should not specify this option without a good reason.|
|no_subtree_check||If only part of a volume is exported, a routine called subtree checking verifies that a file that is requested from the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers.|
|sync||Reply to requests only after changes have been committed to stable storage (the default). It is possible to switch to async.|
|insecure||Tells the NFS server to use unprivileged ports (ports above 1024). This may be needed to allow mounting the NFS share from MacOS X or through the nfs:/ kioslave in KDE.|
|fsid=0||The NFS server identifies each file system it exports with a number. The NFSv4 server has a virtual root filesystem which is the root of all exported file systems; this root is identified with fsid=0.|
Specify the virtual root /export as the first entry, then define the specific shares. In this particular case the file will look like the example below:
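A possible /etc/exports along these lines; the network 192.168.0.0/28 and the chosen options are illustrative only:

```
/export        192.168.0.0/28(rw,fsid=0,no_subtree_check)
/export/home   192.168.0.0/28(rw,no_subtree_check)
/export/data   192.168.0.0/28(rw,no_subtree_check)
```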
To provide NFSv4 protocol access only, specify which version of the NFS protocol the server has to use (-V 4) and which versions are not supported (-N 3 -N 2) in the /etc/conf.d/nfs file:
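This is typically done through the rpc.nfsd options variable; the variable name OPTS_RPC_NFSD and the thread count of 8 below are assumptions, check the comments in your own /etc/conf.d/nfs:

```
# pass protocol flags to rpc.nfsd: serve NFSv4 only, with 8 server threads
OPTS_RPC_NFSD="8 -V 4 -N 3 -N 2"
```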
Starting the service daemon
Finally start the configured NFS daemon:
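On an OpenRC-based system (assuming the init script is named nfs):

```shell
/etc/init.d/nfs start
```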
As shown, several services are started in a specific order, with rpcbind started first. If there is a need to stop the NFS service, the easiest way to stop all NFS services at once is to stop the rpcbind service itself.
This command will shut down each service shown in the startup example above:
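Assuming the same OpenRC setup as above:

```shell
# stopping rpcbind also takes down the NFS services that depend on it
/etc/init.d/rpcbind stop
```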
Add nfs to a runlevel to be able to use it after a reboot:
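With OpenRC:

```shell
rc-update add nfs default
```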
Mounting remote directories
Before mounting remote directories, two daemons should be started first:
The directories can be mounted with the following command:
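A minimal example; the hostname server and the mount point /mnt/nfs are placeholders. Note that with NFSv4 the exported tree is addressed relative to the virtual root, so server:/ refers to /export on the server:

```shell
mount -t nfs4 server:/ /mnt/nfs
```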
Mounting at boot time
Add NFS shares to the /etc/fstab file.
- The first option is to mount the NFS virtual root, including all exported shares, at once:
- The second option is to define each NFS share individually, which allows mounting them at different local mount points:
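Sketches of both variants; the hostname and the local mount points are placeholders, and the remote paths are relative to the virtual root (/export):

```
# option 1: the whole virtual root at once
server:/       /mnt/nfs        nfs4    rw    0 0

# option 2: each share individually, at its own mount point
server:/home   /mnt/nfs/home   nfs4    rw    0 0
server:/data   /mnt/nfs/data   nfs4    rw    0 0
```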
Finally, start the nfsmount service:
Add nfsmount to the default runlevel:
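With OpenRC, both steps look like this:

```shell
/etc/init.d/nfsmount start
rc-update add nfsmount default
```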
At this point the NFS shares should be mounted on the client. This can be verified with the following command:
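For example, by filtering the mount table:

```shell
mount | grep nfs4
```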
Or run just:
Verifying the NFS server is running and listening for connections:
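For example, by querying the portmapper; localhost assumes the command is run on the server itself:

```shell
rpcinfo -p localhost | grep nfs
```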
Verifying which NFS-specific daemons are running:
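For example:

```shell
# the [n] / [r] brackets keep grep from matching its own process
ps ax | grep -E '[n]fsd|[r]pc'
```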
Showing exported NFS shares on the server side:
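showmount is part of nfs-utils; localhost assumes it is run on the server:

```shell
showmount -e localhost
```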
Verifying current open connections to the NFS server:
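For example, NFS listens on TCP port 2049:

```shell
netstat -tn | grep :2049
```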
For more specific troubleshooting examples, visit the following links: