Set up NFSv4 on Ubuntu or Debian

NFSv4 is a tried and tested method of allowing client servers to access files over a network, in much the same way they would be accessed on a local file system. A very mature piece of software, it has been developed and used in production environments for over 15 years, and it remains widely supported with a long life ahead of it.

Setting it up is straightforward. As this is a network file system, it is strongly recommended to set up a private switch or private network between the servers, both for the lowest latency and for better security.

NFS Server – Installation

Install the required packages on the NFS server:

# Ubuntu and Debian
[root@nfs01 ~]# apt-get update
[root@nfs01 ~]# apt-get install rpcbind nfs-common nfs4-acl-tools nfs-kernel-server

NFS Server – Configuration

Out of the box, NFSv4 ships with defaults that are showing their age:
– It assigns random ephemeral ports when the daemons start.
– It enables only 8 NFS server threads.

To make it easier for admins to lock down their firewalls, we are going to pin the daemons to static ports, and also raise the limit to 64 NFS threads; the default of 8 was meant for much older systems, and you will most likely run into I/O problems before you hit the higher limit.
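Whether the thread count is actually a bottleneck can be judged later from the `th` line of /proc/net/rpc/nfsd once nfsd is running. A small sketch parsing a sample line (the sample values below are made up for illustration):

```shell
# The "th" line of /proc/net/rpc/nfsd begins with the configured thread
# count, followed by a counter of how often all threads were in use at
# once. Parsing a sample line here; on a live server, read the file itself.
sample="th 64 0 12.430 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000"
threads=$(echo "$sample" | awk '{print $2}')
echo "configured nfsd threads: $threads"
```

If the second field climbs steadily, every thread was busy at some point and raising RPCNFSDCOUNT may help.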

Stop the services so we can unload the lockd kernel module and configure the services. This step cannot be skipped!

# Ubuntu 12.04 and Ubuntu 14.04
[root@nfs01 ~]# service nfs-kernel-server stop
[root@nfs01 ~]# service statd stop
[root@nfs01 ~]# service idmapd stop
[root@nfs01 ~]# service rpcbind stop
[root@nfs01 ~]# modprobe -r nfsd nfs lockd

# Debian 7
[root@nfs01 ~]# service nfs-kernel-server stop
[root@nfs01 ~]# service nfs-common stop
[root@nfs01 ~]# service rpcbind stop
[root@nfs01 ~]# modprobe -r nfsd nfs lockd

Update the NFS thread count by:

[root@nfs01 ~]# vim /etc/default/nfs-kernel-server
...
RPCNFSDCOUNT=64
RPCNFSDPRIORITY=0
RPCMOUNTDOPTS="--manage-gids --port 892"
NEED_SVCGSSD="no"
RPCSVCGSSDOPTS=""
...
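If you would rather scale the thread count with the hardware than use a flat 64, one rough heuristic (our assumption, not an official rule) is 8 threads per CPU with a floor of 8:

```shell
# Derive a candidate RPCNFSDCOUNT value from the number of online CPUs.
cpus=$(getconf _NPROCESSORS_ONLN)
threads=$((cpus * 8))
# Never go below the historical default of 8 threads.
if [ "$threads" -lt 8 ]; then
    threads=8
fi
echo "RPCNFSDCOUNT=$threads"
```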

Next, set the domain, as all servers and clients should reside within the same domain:

[root@nfs01 ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Update nfs-common to disable statd and rpc.gssd (and, on Debian, to require idmapd):

# Ubuntu 12.04 and Ubuntu 14.04
[root@nfs01 ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_GSSD=no

# Debian 7
[root@nfs01 ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_IDMAPD=yes
NEED_GSSD=no

Open the firewall to allow your private network to reach the NFS services. You may have to adjust these rules, as in this example the private network resides on eth2. Do not allow this on a public interface without restricting the source IPs accordingly!

[root@nfs01 ~]# ufw allow in on eth2 from 192.168.1.0/24 to any proto tcp
[root@nfs01 ~]# ufw allow in on eth2 from 192.168.1.0/24 to any proto udp

NFSv4 uses a pseudo filesystem for the exports. A pseudo filesystem allows NFS clients to browse the hierarchy of exported file systems, but remains limited to paths that are actually exported. There are a number of ways to go about this, but for this guide, we’ll assume the pseudo filesystem root will be /exports, and we’ll simply bind mount the desired directories into the /exports folder.
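To make the mapping concrete: with fsid=0 on /exports, client paths are resolved relative to that pseudo root. A tiny illustration of the path arithmetic, using this guide's layout:

```shell
# Under an NFSv4 pseudo filesystem, the export marked fsid=0 becomes "/"
# from the client's point of view, so server:/data maps to /exports/data
# on the server.
pseudo_root=/exports
client_path=/data
server_path="${pseudo_root}${client_path}"
echo "client mounts server:${client_path} -> server-side ${server_path}"
```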

For this example, I am looking to export /data. So we'll bind mount it into the /exports folder as follows:

[root@nfs01 ~]# mkdir /data
[root@nfs01 ~]# touch /data/test-file
[root@nfs01 ~]# mkdir /exports
[root@nfs01 ~]# mkdir /exports/data
[root@nfs01 ~]# echo "/data  /exports/data  none  bind  0 0" >> /etc/fstab
[root@nfs01 ~]# mount -a
[root@nfs01 ~]# ls -al /exports/data
total 8
drwxr-xr-x 2 root     root     4096 Jan 11 22:19 .
drwxr-xr-x 3 root     root     4096 Jan 11 22:03 ..
-rw-r--r-- 1 root     root        0 Jan 11 22:03 test-file

If you can see test-file within /exports/data, everything is set up properly.

Export the directory to be shared, along with its permissions, in /etc/exports:

[root@nfs01 ~]# vim /etc/exports

/exports      192.168.1.0/24(ro,no_subtree_check,fsid=0,crossmnt)
/exports/data 192.168.1.0/24(rw,no_subtree_check,no_root_squash)

Now start the services, and ensure they will start at boot time:

# Ubuntu 12.04 and Ubuntu 14.04
[root@nfs01 ~]# service rpcbind start
[root@nfs01 ~]# service idmapd start
[root@nfs01 ~]# service nfs-kernel-server start; update-rc.d nfs-kernel-server enable

# Debian 7
[root@nfs01 ~]# service rpcbind start; insserv rpcbind
[root@nfs01 ~]# service nfs-common start; insserv nfs-common
[root@nfs01 ~]# service nfs-kernel-server start; insserv nfs-kernel-server

Check to make sure the services are running:

[root@nfs01 ~]# showmount -e
Export list for nfs01:
/exports/data 192.168.1.0/24
/exports     192.168.1.0/24

[root@nfs01 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  39482  nlockmgr
    100021    3   udp  39482  nlockmgr
    100021    4   udp  39482  nlockmgr
    100021    1   tcp  60237  nlockmgr
    100021    3   tcp  60237  nlockmgr
    100021    4   tcp  60237  nlockmgr
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd

NFS Client – Installation

Now that the NFS server is ready, the NFS clients need to be set up to connect. Install the required packages on the NFS clients:

# Ubuntu or Debian
[root@client01 ~]# apt-get update
[root@client01 ~]# apt-get install rpcbind nfs-common nfs4-acl-tools

NFS Client – Configuration

Stop the services so we can unload the lockd kernel module and configure the services. This step cannot be skipped!

# Ubuntu 12.04 and Ubuntu 14.04
[root@client01 ~]# service statd stop
[root@client01 ~]# service idmapd stop
[root@client01 ~]# service rpcbind stop
[root@client01 ~]# modprobe -r nfs lockd

# Debian 7
[root@client01 ~]# service nfs-common stop
[root@client01 ~]# service rpcbind stop
[root@client01 ~]# modprobe -r nfs lockd

Next, set the domain, as all servers and clients should reside within the same domain:

[root@client01 ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Update nfs-common to disable statd and rpc.gssd (and, on Debian, to require idmapd):

# Ubuntu 12.04 and Ubuntu 14.04
[root@client01 ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_GSSD=no

# Debian 7
[root@client01 ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_IDMAPD=yes
NEED_GSSD=no

Now start the services, and ensure they will start at boot time:

# Ubuntu 12.04 and Ubuntu 14.04
[root@client01 ~]# service rpcbind start
[root@client01 ~]# service idmapd start

# Debian 7
[root@client01 ~]# service rpcbind start; insserv rpcbind
[root@client01 ~]# service nfs-common start; insserv nfs-common

Confirm the NFS clients can see the NFS server:

[root@client01 ~]# showmount -e 192.168.1.1
Export list for 192.168.1.1:
/exports/data 192.168.1.0/24
/exports      192.168.1.0/24

[root@client01 ~]# rpcinfo -p 192.168.1.1
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd

Configure the mount point in /etc/fstab:

[root@client01 ~]# vim /etc/fstab

192.168.1.1:/data  /data  nfs4  sec=sys,noatime  0  0

Now create the placeholder directory on the client, mount, and verify it works:

[root@client01 ~]# mkdir /data
[root@client01 ~]# mount -a
[root@client01 ~]# df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/xvda1           20G  1.2G   18G   7% /
none                4.0K     0  4.0K   0% /sys/fs/cgroup
udev                484M  8.0K  484M   1% /dev
tmpfs                99M  404K   99M   1% /run
none                5.0M     0  5.0M   0% /run/lock
none                495M     0  495M   0% /run/shm
none                100M     0  100M   0% /run/user
192.168.1.1:/data    20G  1.2G   18G   7% /data
[root@client01 ~]#
[root@client01 ~]# grep /data /proc/mounts 
192.168.1.1:/data /data nfs4 rw,noatime,vers=4,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,local_lock=none,addr=192.168.1.1 0 0
[root@client01 ~]#
[root@client01 ~]# touch /data/test-file
[root@client01 ~]# ls -al /data/test-file 
-rw-r--r-- 1 root root 0 Dec 20 01:45 /data/test-file

Finally, confirm the user mapping is the same on both sides. You can verify that the server and the clients show the same UIDs by:

# Create a user on the NFS server:
[root@nfs01 ~]# useradd -u 6000 testuser

# Create the same user on the NFS client:
[root@client01 ~]# useradd -u 6000 testuser

# Set the ownership of a test file on the NFS server:
[root@nfs01 ~]# touch /data/test-file01
[root@nfs01 ~]# chown testuser:testuser /data/test-file01

# Check the ownership of the test file on the NFS server:
[root@nfs01 ~]# ls -al /data/test-file01

# Confirm the client sees the same ownership:
[root@client01 ~]# ls -al /data/test-file01
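The comparison above boils down to checking that `id -u` returns the same number on both machines. A minimal sketch (run it on each host; `root` is used here only so the example works anywhere — substitute `testuser` once it exists on both machines):

```shell
# Print the numeric UID for a user; the output from the server and the
# client must match for NFS file ownership to line up.
user=root          # substitute: user=testuser
uid=$(id -u "$user")
echo "$user has UID $uid"
```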