Setup NFSv4 on CentOS

NFSv4 is a tried and tested way of allowing client servers to access files over a network in much the same way they would be accessed on a local file system. As a very mature piece of software, it has been developed and used in production environments for over 15 years, and it remains widely supported with a long life ahead of it.

Setting it up is pretty easy and straightforward. As this is a network file system, it is strongly recommended to set up a private switch or private network between the servers to ensure the lowest latency, as well as better security.

NFS Server – Installation

Install the required packages on the NFS server:

# CentOS 5
[root@nfs01 ~]# yum install portmap nfs-utils nfs4-acl-tools -y

# CentOS 6 and CentOS 7
[root@nfs01 ~]# yum install rpcbind nfs-utils nfs4-acl-tools -y

NFS Server – Configuration

Out of the box, the NFS server ships with a default that was sized for much older systems: it starts only 8 NFS threads.

We are going to raise this to 64 NFS threads, since on modern hardware you will most likely run into I/O problems long before you hit that limit.

[root@nfs01 ~]# vim /etc/sysconfig/nfs
RPCNFSDCOUNT=64
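
Once the NFS services are running (they are started later in this guide), you can sanity check that the new thread count took effect. The proc file below only exists while the NFS server is running:

[root@nfs01 ~]# cat /proc/fs/nfsd/threads
64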

Next, set the NFSv4 ID mapping domain, as all servers and clients should reside within the same domain:

[root@nfs01 ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Open the firewall to allow your private network access to the NFS services. You may have to adjust the interface and source network in the rules below, as my private network resides on eth2. Do not allow this on the public interface without adjusting the source IPs accordingly!

# CentOS 5 and CentOS 6
[root@nfs01 ~]# vim /etc/sysconfig/iptables
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 662 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 662 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 892 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 892 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 2049 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 32803 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 32769 -j ACCEPT
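
Note that on CentOS 5 and 6, mountd, statd, and lockd pick random ports at boot by default, so the rules above for ports 662, 892, 32803, and 32769 only line up if those daemons are pinned to fixed ports. A minimal sketch of the relevant settings in /etc/sysconfig/nfs (these variable names match the commented-out defaults shipped on RHEL/CentOS), followed by reloading iptables:

[root@nfs01 ~]# vim /etc/sysconfig/nfs
STATD_PORT=662
MOUNTD_PORT=892
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769

[root@nfs01 ~]# service iptables restart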

# CentOS 7
[root@nfs01 ~]# vim /etc/firewalld/services/nfs.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>NFS</short>
  <description>NFS service</description>
  <port protocol="tcp" port="111"/>
  <port protocol="udp" port="111"/>
  <port protocol="tcp" port="662"/>
  <port protocol="udp" port="662"/>
  <port protocol="tcp" port="892"/>
  <port protocol="udp" port="892"/>
  <port protocol="tcp" port="2049"/>
  <port protocol="udp" port="2049"/>
  <port protocol="tcp" port="32803"/>
  <port protocol="udp" port="32803"/>
  <port protocol="tcp" port="38467"/>
  <port protocol="udp" port="38467"/>
  <port protocol="tcp" port="32769"/>
  <port protocol="udp" port="32769"/>
</service>

Then apply the rules:
[root@nfs01 ~]# systemctl reload firewalld.service

Now set PEERDNS to no on the private network interface and assign it to the internal zone:
[root@nfs01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
...
PEERDNS=no
ZONE=internal
...

Then apply the changes:
[root@nfs01 ~]# ifdown eth2 && ifup eth2

Now add the NFS rules to the private network interface:
[root@nfs01 ~]# firewall-cmd --zone=internal --add-interface eth2
[root@nfs01 ~]# firewall-cmd --zone=internal --add-service=nfs
[root@nfs01 ~]# firewall-cmd --zone=internal --add-interface eth2 --permanent
[root@nfs01 ~]# firewall-cmd --zone=internal --add-service=nfs --permanent
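
To confirm the zone picked up both the interface and the nfs service, list the zone's configuration and check that eth2 and nfs appear in the output:

[root@nfs01 ~]# firewall-cmd --zone=internal --list-all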

NFSv4 uses a pseudo filesystem for the exports. A pseudo filesystem allows NFS clients to browse the hierarchy of exported file systems, but remains limited to paths that are actually exported. There are a number of ways to go about this, but for this guide, we’ll assume the pseudo filesystem root will be /exports, and we’ll simply bind mount the desired directories into the /exports folder.

For this example, I am looking to export /data. So we’ll bind mount it into the /exports folder as follows:

[root@nfs01 ~]# touch /data/test-file
[root@nfs01 ~]# mkdir /exports
[root@nfs01 ~]# mkdir /exports/data
[root@nfs01 ~]# echo "/data  /exports/data  none  bind  0 0" >> /etc/fstab
[root@nfs01 ~]# mount -a
[root@nfs01 ~]# ls -al /exports/data
total 8
drwxr-xr-x 2 root     root     4096 Jan 11 22:19 .
drwxr-xr-x 3 root     root     4096 Jan 11 22:03 ..
-rw-r--r-- 1 root     root        0 Jan 11 22:03 test-file

If you can see the file, test-file, within /exports/data, then everything is set up properly.

Export the directory to be shared, along with its permissions, in /etc/exports:

[root@nfs01 ~]# vi /etc/exports
/exports      192.168.1.0/24(ro,no_subtree_check,fsid=0,crossmnt)
/exports/data 192.168.1.0/24(rw,no_subtree_check,no_root_squash)
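
If you need to change /etc/exports later while NFS is already running, there is no need to restart the service; you can re-export everything and review the active exports and their options with:

[root@nfs01 ~]# exportfs -ra
[root@nfs01 ~]# exportfs -v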

Now start the services, and enable them to start at boot time:

# CentOS 5
[root@nfs01 ~]# service portmap start; chkconfig portmap on
[root@nfs01 ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@nfs01 ~]# service nfs start; chkconfig nfs on

# CentOS 6
[root@nfs01 ~]# service rpcbind start; chkconfig rpcbind on
[root@nfs01 ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@nfs01 ~]# service nfs start; chkconfig nfs on

# CentOS 7
[root@nfs01 ~]# systemctl start rpcbind nfs-idmap nfs-server
[root@nfs01 ~]# systemctl enable rpcbind nfs-idmap nfs-server

Check to make sure the services are running:

[root@nfs01 ~]# showmount -e
Export list for nfs01:
/exports/data 192.168.1.0/24
/exports     192.168.1.0/24

[root@nfs01 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp  41418  mountd
    100005    1   tcp  50139  mountd
    100005    2   udp  50228  mountd
    100005    2   tcp  52070  mountd
    100005    3   udp  33496  mountd
    100005    3   tcp  54673  mountd
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049  nfs_acl
    100227    3   tcp   2049  nfs_acl
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049  nfs_acl
    100227    3   udp   2049  nfs_acl
    100021    1   udp  38895  nlockmgr
    100021    3   udp  38895  nlockmgr
    100021    4   udp  38895  nlockmgr
    100021    1   tcp  39908  nlockmgr
    100021    3   tcp  39908  nlockmgr
    100021    4   tcp  39908  nlockmgr

NFS Client – Installation

Now that the NFS server is ready, the NFS clients need to be set up to connect. Install the required packages on the NFS clients:

# CentOS 5
[root@client01 ~]# yum install portmap nfs-utils nfs4-acl-tools -y

# CentOS 6 and CentOS 7
[root@client01 ~]# yum install rpcbind nfs-utils nfs4-acl-tools -y

Next, set the NFSv4 ID mapping domain, as all servers and clients should reside within the same domain:

[root@client01 ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com
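
If you change the Domain after a mount has already been used, the client can hold on to stale ID mappings. On CentOS 7 the keyring-based mapping cache can be cleared with the nfsidmap utility from nfs-utils; on older releases, restarting rpcidmapd has a similar effect (both shown here as optional steps):

[root@client01 ~]# nfsidmap -c                   # CentOS 7
[root@client01 ~]# service rpcidmapd restart     # CentOS 5 and 6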

Now start the services, and enable them to start at boot time.

# CentOS 5
[root@client01 ~]# service portmap start; chkconfig portmap on
[root@client01 ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@client01 ~]# chkconfig netfs on

# CentOS 6
[root@client01 ~]# service rpcbind start; chkconfig rpcbind on
[root@client01 ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@client01 ~]# chkconfig netfs on

# CentOS 7
[root@client01 ~]# systemctl start rpcbind nfs-idmap
[root@client01 ~]# systemctl enable rpcbind nfs-idmap

NFS Client – Configuration

Confirm the NFS clients can see the NFS server:

[root@client01 ~]# showmount -e 192.168.1.1
Export list for 192.168.1.1:
/exports/data 192.168.1.0/24
/exports      192.168.1.0/24

Configure the mount point in /etc/fstab:

[root@client01 ~]# vim /etc/fstab
192.168.1.1:/data  /data  nfs4  sec=sys,noatime  0  0
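
If you would like to test the export by hand before relying on /etc/fstab, you can mount it temporarily to a scratch location (/mnt is used here purely as a throwaway mount point) and unmount it afterwards:

[root@client01 ~]# mount -t nfs4 -o sec=sys,noatime 192.168.1.1:/data /mnt
[root@client01 ~]# ls /mnt
[root@client01 ~]# umount /mnt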

Now create the placeholder directory on the client, mount, and verify it works:

[root@client01 ~]# mkdir /data
[root@client01 ~]# mount -a
[root@client01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       14G  1.8G   11G  15% /
tmpfs                 939M     0  939M   0% /dev/shm
/dev/sda1             477M   74M  378M  17% /boot
192.168.1.1:/data      14G  1.9G   11G  15% /data
[root@client01 ~]#
[root@client01 ~]# grep /data /proc/mounts
192.168.1.1:/data/ /data nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.164,minorversion=0,local_lock=none,addr=192.168.1.1 0 0
[root@client01 ~]#
[root@client01 ~]# touch /data/test-file01
[root@client01 ~]# ls -al /data/test-file01
-rw-r--r-- 1 root root 0 Dec 19 17:57 /data/test-file01

Finally, confirm the user mapping is the same on both machines. You can verify that the server and the client show the same UIDs as follows:

# Create a user on the NFS server:
[root@nfs01 ~]# useradd -u 6000 testuser

# Create the same user on the NFS client:
[root@client01 ~]# useradd -u 6000 testuser

# Set the ownership of a test file on the NFS server:
[root@nfs01 ~]# touch /data/test-file01
[root@nfs01 ~]# chown testuser:testuser /data/test-file01

# Check the ownership of the test file on the NFS server:
[root@nfs01 ~]# ls -al /data/test-file01

# Confirm the client sees the same ownership:
[root@client01 ~]# ls -al /data/test-file01
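
Another quick way to double check the mapping is to compare the id output on both machines; the uid (and any gid you rely on for permissions) should be identical. If the client instead shows files owned by nobody:nobody, the Domain lines in /etc/idmapd.conf most likely do not match, so correct them and restart the ID mapping service on both sides.

# Run on both the NFS server and the NFS client; the uid= value should match:
[root@nfs01 ~]# id testuser
[root@client01 ~]# id testuser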