Setup NFSv4 on CentOS

NFSv4 is a tried and tested method of allowing client servers to access files over a network, in a very similar fashion to how the files would be accessed on a local file system. As a very mature piece of software, it has been successfully developed and used in production environments for over 15 years, and it is still widely accepted and supported with a long life ahead of it.

Setting it up is pretty easy and straightforward. As this is a network file system, it is strongly recommended to set up a private switch or private network between the servers to ensure the lowest latency, as well as better security.

NFS Server – Installation

Install the required packages on the NFS server:

# CentOS 5
[root@nfs01 ~]# yum install portmap nfs-utils nfs4-acl-tools -y

# CentOS 6 and CentOS 7
[root@nfs01 ~]# yum install rpcbind nfs-utils nfs4-acl-tools -y

NFS Server – Configuration

Out of the box, NFSv4 ships with one default that is sorely outdated at this point:
– Only 8 NFS threads are enabled

We are going to enable 64 NFS threads instead, since the 8-thread default was meant for much older systems and you will most likely run into I/O problems long before you hit this limit.

[root@nfs01 ~]# vim /etc/sysconfig/nfs
RPCNFSDCOUNT=64
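After restarting the nfs service, you can confirm the new thread count took effect: the kernel reports the running thread count as the first number on the "th" line of /proc/net/rpc/nfsd. The sketch below parses a sample line so the format is clear; on the live server the check is simply the awk command shown in the comment:

```shell
# Live check (run on the NFS server):
#   awk '/^th/ {print $2}' /proc/net/rpc/nfsd
# Parsing a sample "th" line for illustration; the second field is
# the number of nfsd threads currently running:
sample='th 64 0 431.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00'
threads=$(printf '%s\n' "$sample" | awk '/^th/ {print $2}')
echo "$threads"
```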

Next, set the domain, as all servers and clients should reside within the same domain:

[root@nfs01 ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Open the firewall to allow your private network access to the NFS services. You may have to adjust these rules, as my private network resides on eth2. Do not allow this on the public interface without adjusting the source IPs accordingly!

# CentOS 5 and CentOS 6
[root@nfs01 ~]# vim /etc/sysconfig/iptables
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 662 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 662 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 892 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 892 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 2049 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 32803 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 32769 -j ACCEPT

Then apply the rules by:
[root@nfs01 ~]# service iptables restart

# CentOS 7
[root@nfs01 ~]# vim /etc/firewalld/services/nfs.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>NFS</short>
  <description>NFS service</description>
  <port protocol="tcp" port="111"/>
  <port protocol="udp" port="111"/>
  <port protocol="tcp" port="662"/>
  <port protocol="udp" port="662"/>
  <port protocol="tcp" port="892"/>
  <port protocol="udp" port="892"/>
  <port protocol="tcp" port="2049"/>
  <port protocol="udp" port="2049"/>
  <port protocol="tcp" port="32803"/>
  <port protocol="udp" port="32803"/>
  <port protocol="tcp" port="38467"/>
  <port protocol="udp" port="38467"/>
  <port protocol="tcp" port="32769"/>
  <port protocol="udp" port="32769"/>
</service>

Then apply the rules by:
[root@nfs01 ~]# systemctl reload firewalld.service

Now add a zone to the private network interface and set PEERDNS to no:
[root@nfs01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
...
PEERDNS=no
ZONE=internal
...

Then apply the changes by:
[root@nfs01 ~]# ifdown eth2 && ifup eth2

Now add the NFS rules to the private network interface:
[root@nfs01 ~]# firewall-cmd --zone=internal --add-interface eth2
[root@nfs01 ~]# firewall-cmd --zone=internal --add-service=nfs
[root@nfs01 ~]# firewall-cmd --zone=internal --add-interface eth2 --permanent
[root@nfs01 ~]# firewall-cmd --zone=internal --add-service=nfs --permanent
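To verify the zone picked up both the interface and the service, list the zone's full configuration (a standard firewalld query):

```shell
# Show everything assigned to the internal zone; the output should list
# eth2 under "interfaces" and nfs under "services":
firewall-cmd --zone=internal --list-all
```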

NFSv4 uses a pseudo filesystem for the exports. A pseudo filesystem allows NFS clients to browse the hierarchy of exported file systems, but remains limited to paths that are actually exported. There are a number of ways to go about this, but for this guide, we’ll assume the pseudo filesystem root will be /exports, and we’ll simply bind mount the desired directories into the /exports folder.

For this example, I am looking to export /data. So we’ll bind mount that into the /exports folder as follows:

[root@nfs01 ~]# mkdir /data
[root@nfs01 ~]# touch /data/test-file
[root@nfs01 ~]# mkdir /exports
[root@nfs01 ~]# mkdir /exports/data
[root@nfs01 ~]# echo "/data  /exports/data  none  bind  0 0" >> /etc/fstab
[root@nfs01 ~]# mount -a
[root@nfs01 ~]# ls -al /exports/data
total 8
drwxr-xr-x 2 root     root     4096 Jan 11 22:19 .
drwxr-xr-x 3 root     root     4096 Jan 11 22:03 ..
-rw-r--r-- 1 root     root        0 Jan 11 22:03 test-file

If you can see the file test-file within /exports/data, then everything is set up properly.

Export the directory to be shared, along with its permissions, in /etc/exports:

[root@nfs01 ~]# vi /etc/exports
/exports      192.168.1.0/24(ro,no_subtree_check,fsid=0,crossmnt)
/exports/data 192.168.1.0/24(rw,no_subtree_check,no_root_squash)
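If you later edit /etc/exports while the NFS server is already running, there is no need to restart it; exportfs (part of nfs-utils) can re-read the file and print the active export table:

```shell
# Re-export everything listed in /etc/exports without restarting the
# server, then print the current export table with its options:
exportfs -ra
exportfs -v
```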

Now start the services, and enable them to start at boot time:

# CentOS 5
[root@nfs01 ~]# service portmap start; chkconfig portmap on
[root@nfs01 ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@nfs01 ~]# service nfs start; chkconfig nfs on

# CentOS 6
[root@nfs01 ~]# service rpcbind start; chkconfig rpcbind on
[root@nfs01 ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@nfs01 ~]# service nfs start; chkconfig nfs on

# CentOS 7
[root@nfs01 ~]# systemctl start rpcbind nfs-idmap nfs-server
[root@nfs01 ~]# systemctl enable rpcbind nfs-idmap nfs-server

Check to make sure the services are running:

[root@nfs01 ~]# showmount -e
Export list for nfs01:
/exports/data 192.168.1.0/24
/exports     192.168.1.0/24

[root@nfs01 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp  41418  mountd
    100005    1   tcp  50139  mountd
    100005    2   udp  50228  mountd
    100005    2   tcp  52070  mountd
    100005    3   udp  33496  mountd
    100005    3   tcp  54673  mountd
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049  nfs_acl
    100227    3   tcp   2049  nfs_acl
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049  nfs_acl
    100227    3   udp   2049  nfs_acl
    100021    1   udp  38895  nlockmgr
    100021    3   udp  38895  nlockmgr
    100021    4   udp  38895  nlockmgr
    100021    1   tcp  39908  nlockmgr
    100021    3   tcp  39908  nlockmgr
    100021    4   tcp  39908  nlockmgr

NFS Client – Installation

Now that the NFS server is ready, the NFS clients need to be set up to connect. Install the required packages on the NFS clients:

# CentOS 5
[root@web01 ~]# yum install portmap nfs-utils nfs4-acl-tools -y

# CentOS 6 and CentOS 7
[root@web01 ~]# yum install rpcbind nfs-utils nfs4-acl-tools -y

Next, set the domain, as all servers and clients should reside within the same domain:

[root@web01 ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Now start the services, and enable them to start at boot time.

# CentOS 5
[root@web01 ~]# service portmap start; chkconfig portmap on
[root@web01 ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@web01 ~]# chkconfig netfs on

# CentOS 6
[root@web01 ~]# service rpcbind start; chkconfig rpcbind on
[root@web01 ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@web01 ~]# chkconfig netfs on

# CentOS 7
[root@web01 ~]# systemctl start rpcbind nfs-idmap
[root@web01 ~]# systemctl enable rpcbind nfs-idmap

NFS Client – Configuration

Confirm the NFS clients can see the NFS server:

[root@web01 ~]# showmount -e 192.168.1.1
Export list for 192.168.1.1:
/exports/data 192.168.1.0/24
/exports      192.168.1.0/24

Configure the mount point in /etc/fstab:

[root@web01 ~]# vim /etc/fstab
192.168.1.1:/data  /data  nfs4  sec=sys,noatime  0  0
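Before relying on the fstab entry, you can test an NFSv4 mount by hand. Note that with fsid=0 set on /exports, client paths are relative to the pseudo root, so the server's /exports/data is mounted as /data (IP and paths follow the examples above):

```shell
# Manually mount the NFSv4 export, then unmount once verified:
mkdir -p /data
mount -t nfs4 -o sec=sys,noatime 192.168.1.1:/data /data
umount /data
```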

Now create the placeholder directory on the client, mount, and verify it works:

[root@web01 ~]# mkdir /data
[root@web01 ~]# mount -a
[root@web01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       14G  1.8G   11G  15% /
tmpfs                 939M     0  939M   0% /dev/shm
/dev/sda1             477M   74M  378M  17% /boot
192.168.1.1:/data      14G  1.9G   11G  15% /data
[root@web01 ~]#
[root@web01 ~]# grep /data /proc/mounts 
192.168.1.1:/data/ /data nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.164,minorversion=0,local_lock=none,addr=192.168.1.1 0 0
[root@web01 ~]#
[root@web01 ~]# touch /data/test-file01
[root@web01 ~]# ls -al /data/test-file01
-rw-r--r-- 1 root root 0 Dec 19 17:57 /data/test-file01

Finally, confirm the user mapping is the same on both servers. You can verify that both the server and the client show the same UIDs by:

# Create a user on the NFS server:
[root@nfs01 ~]# useradd -u 6000 testuser

# Create the same user on the NFS client:
[root@web01 ~]# useradd -u 6000 testuser

# Set the ownership of a test file on the NFS server:
[root@nfs01 ~]# touch /data/test-file01
[root@nfs01 ~]# chown testuser:testuser /data/test-file01

# Check the ownership of the test file on the NFS server:
[root@nfs01 ~]# ls -al /data/test-file01

# Confirm the client sees the same ownership:
[root@web01 ~]# ls -al /data/test-file01
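The ownership comparison above can also be done numerically: `ls -ln` prints raw UIDs, so matching the third field on both machines proves the mapping agrees. The sketch below parses sample `ls -ln` output; on a live system you would substitute the real output captured on each machine:

```shell
# Extract the numeric UID (third field) from `ls -ln` style output.
# Sample lines stand in for the output captured on each machine:
server_line='-rw-r--r-- 1 6000 6000 0 Jan 11 22:03 /data/test-file01'
client_line='-rw-r--r-- 1 6000 6000 0 Jan 11 22:03 /data/test-file01'
server_uid=$(printf '%s\n' "$server_line" | awk '{print $3}')
client_uid=$(printf '%s\n' "$client_line" | awk '{print $3}')
if [ "$server_uid" = "$client_uid" ]; then
    echo "UIDs match: $server_uid"
else
    echo "UID mismatch: server=$server_uid client=$client_uid"
fi
```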

Setup NFSv4 on Ubuntu or Debian

NFSv4 is a tried and tested method of allowing client servers to access files over a network, in a very similar fashion to how the files would be accessed on a local file system. As a very mature piece of software, it has been successfully developed and used in production environments for over 15 years, and it is still widely accepted and supported with a long life ahead of it.

Setting it up is pretty easy and straightforward. As this is a network file system, it is strongly recommended to set up a private switch or private network between the servers to ensure the lowest latency, as well as better security.

NFS Server – Installation

Install the required packages on the NFS server:

# Ubuntu and Debian
[root@nfs01 ~]# apt-get update
[root@nfs01 ~]# apt-get install rpcbind nfs-common nfs4-acl-tools nfs-kernel-server

NFS Server – Configuration

Out of the box, NFSv4 ships with defaults that are sorely outdated at this point:
– Random ephemeral ports are assigned when the extra daemons start.
– Only 8 NFS threads are enabled

Since NFSv4 does not need the extra daemons that grab random ports, we are going to disable them below, and also enable 64 NFS threads, since the 8-thread default was meant for much older systems and you will most likely run into I/O problems long before you hit this limit.

Stop the services so we can unload the lockd kernel module and configure the services. This step cannot be skipped!

# Ubuntu 12.04 and Ubuntu 14.04
[root@nfs01 ~]# service nfs-kernel-server stop
[root@nfs01 ~]# service statd stop
[root@nfs01 ~]# service idmapd stop
[root@nfs01 ~]# service rpcbind stop
[root@nfs01 ~]# modprobe -r nfsd nfs lockd

# Debian 7
[root@nfs01 ~]# service nfs-kernel-server stop
[root@nfs01 ~]# service nfs-common stop
[root@nfs01 ~]# service rpcbind stop
[root@nfs01 ~]# modprobe -r nfsd nfs lockd

Update the NFS thread count by:

[root@nfs01 ~]# vim /etc/default/nfs-kernel-server
...
RPCNFSDCOUNT=64
RPCNFSDPRIORITY=0
RPCMOUNTDOPTS="--manage-gids"
NEED_SVCGSSD="no"
RPCSVCGSSDOPTS=""
...

Next, set the domain, as all servers and clients should reside within the same domain:

[root@nfs01 ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Update nfs-common to disable statd and rpc.gssd (and, on Debian, to require idmapd):

# Ubuntu 12.04 and Ubuntu 14.04
[root@nfs01 ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_GSSD=no

# Debian 7
[root@nfs01 ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_IDMAPD=yes
NEED_GSSD=no

Open the firewall to allow your private network access to the NFS services. You may have to adjust these rules, as my private network resides on eth2. Do not allow this on the public interface without adjusting the source IPs accordingly!

[root@nfs01 ~]# ufw allow in on eth2 to 192.168.1.0/24 proto tcp
[root@nfs01 ~]# ufw allow in on eth2 to 192.168.1.0/24 proto udp

NFSv4 uses a pseudo filesystem for the exports. A pseudo filesystem allows NFS clients to browse the hierarchy of exported file systems, but remains limited to paths that are actually exported. There are a number of ways to go about this, but for this guide, we’ll assume the pseudo filesystem root will be /exports, and we’ll simply bind mount the desired directories into the /exports folder.

For this example, I am looking to export /data. So we’ll bind mount that into the /exports folder as follows:

[root@nfs01 ~]# mkdir /data
[root@nfs01 ~]# touch /data/test-file
[root@nfs01 ~]# mkdir /exports
[root@nfs01 ~]# mkdir /exports/data
[root@nfs01 ~]# echo "/data  /exports/data  none  bind  0 0" >> /etc/fstab
[root@nfs01 ~]# mount -a
[root@nfs01 ~]# ls -al /exports/data
total 8
drwxr-xr-x 2 root     root     4096 Jan 11 22:19 .
drwxr-xr-x 3 root     root     4096 Jan 11 22:03 ..
-rw-r--r-- 1 root     root        0 Jan 11 22:03 test-file

If you can see the file test-file within /exports/data, then everything is set up properly.

Export the directory to be shared, along with its permissions, in /etc/exports:

[root@nfs01 ~]# vim /etc/exports

/exports      192.168.1.0/24(ro,no_subtree_check,fsid=0,crossmnt)
/exports/data 192.168.1.0/24(rw,no_subtree_check,no_root_squash)

Now start the services, and ensure they will start at boot time:

# Ubuntu 12.04 and Ubuntu 14.04
[root@nfs01 ~]# service rpcbind start
[root@nfs01 ~]# service idmapd start
[root@nfs01 ~]# service nfs-kernel-server start; update-rc.d nfs-kernel-server enable

# Debian 7
[root@nfs01 ~]# service rpcbind start; insserv rpcbind
[root@nfs01 ~]# service nfs-common start; insserv nfs-common
[root@nfs01 ~]# service nfs-kernel-server start; insserv nfs-kernel-server

Check to make sure the services are running:

[root@nfs01 ~]# showmount -e
Export list for nfs01:
/exports/data 192.168.1.0/24
/exports     192.168.1.0/24

[root@nfs01 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  39482  nlockmgr
    100021    3   udp  39482  nlockmgr
    100021    4   udp  39482  nlockmgr
    100021    1   tcp  60237  nlockmgr
    100021    3   tcp  60237  nlockmgr
    100021    4   tcp  60237  nlockmgr
    100005    1   udp  39160  mountd
    100005    1   tcp  34995  mountd
    100005    2   udp  34816  mountd
    100005    2   tcp  56338  mountd
    100005    3   udp  49147  mountd
    100005    3   tcp  51938  mountd

NFS Client – Installation

Now that the NFS server is ready, the NFS clients need to be set up to connect. Install the required packages on the NFS clients:

# Ubuntu or Debian
[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install rpcbind nfs-common nfs4-acl-tools

NFS Client – Configuration

Stop the services so we can unload the lockd kernel module and configure the services. This step cannot be skipped!

# Ubuntu 12.04 and Ubuntu 14.04
[root@web01 ~]# service nfs-kernel-server stop
[root@web01 ~]# service statd stop
[root@web01 ~]# service idmapd stop
[root@web01 ~]# service rpcbind stop
[root@web01 ~]# modprobe -r nfsd nfs lockd

# Debian 7
[root@web01 ~]# service nfs-kernel-server stop
[root@web01 ~]# service nfs-common stop
[root@web01 ~]# service rpcbind stop
[root@web01 ~]# modprobe -r nfsd nfs lockd

Next, set the domain, as all servers and clients should reside within the same domain:

[root@web01 ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Update nfs-common to disable statd and rpc.gssd (and, on Debian, to require idmapd):

# Ubuntu 12.04 and Ubuntu 14.04
[root@web01 ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_GSSD=no

# Debian 7
[root@web01 ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_IDMAPD=yes
NEED_GSSD=no

Now start the services, and ensure they will start at boot time:

# Ubuntu 12.04 and Ubuntu 14.04
[root@web01 ~]# service rpcbind start
[root@web01 ~]# service idmapd start

# Debian 7
[root@web01 ~]# service rpcbind start; insserv rpcbind
[root@web01 ~]# service nfs-common start; insserv nfs-common

Confirm the NFS clients can see the NFS server:

[root@web01 ~]# showmount -e 192.168.1.1
Export list for 192.168.1.1:
/exports/data 192.168.1.0/24
/exports      192.168.1.0/24

[root@web01 ~]# rpcinfo -p 192.168.1.1
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd

Configure the mount point in /etc/fstab:

[root@web01 ~]# vim /etc/fstab

192.168.1.1:/data  /data  nfs4  sec=sys,noatime  0  0

Now create the placeholder directory on the client, mount, and verify it works:

[root@web01 ~]# mkdir /data
[root@web01 ~]# mount -a
[root@web01 ~]# df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/xvda1           20G  1.2G   18G   7% /
none                4.0K     0  4.0K   0% /sys/fs/cgroup
udev                484M  8.0K  484M   1% /dev
tmpfs                99M  404K   99M   1% /run
none                5.0M     0  5.0M   0% /run/lock
none                495M     0  495M   0% /run/shm
none                100M     0  100M   0% /run/user
192.168.1.1:/data    20G  1.2G   18G   7% /data
[root@web01 ~]#
[root@web01 ~]# grep /data /proc/mounts 
192.168.1.1:/data /data nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,local_lock=none,addr=192.168.1.1 0 0
[root@web01 ~]#
[root@web01 ~]# touch /data/test-file
[root@web01 ~]# ls -al /data/test-file 
-rw-r--r-- 1 root root 0 Dec 20 01:45 /data/test-file

Finally, confirm the user mapping is the same on both servers. You can verify that both the server and the client show the same UIDs by:

# Create a user on the NFS server:
[root@nfs01 ~]# useradd -u 6000 testuser

# Create the same user on the NFS client:
[root@web01 ~]# useradd -u 6000 testuser

# Set the ownership of a test file on the NFS server:
[root@nfs01 ~]# touch /data/test-file01
[root@nfs01 ~]# chown testuser:testuser /data/test-file01

# Check the ownership of the test file on the NFS server:
[root@nfs01 ~]# ls -al /data/test-file01

# Confirm the client sees the same ownership:
[root@web01 ~]# ls -al /data/test-file01

Setup NFSv3 on Ubuntu or Debian

NFSv3 is a tried and tested method of allowing client servers to access files over a network, in a very similar fashion to how the files would be accessed on a local file system. As a very mature piece of software, it has been successfully developed and used in production environments for almost 20 years, and it is still widely accepted and supported with a long life ahead of it. Some could make the argument that NFSv4.1 is faster now that pNFS is available, but I personally still prefer NFSv3 in many environments.

Setting it up is pretty easy and straightforward. As this is a network file system, it is strongly recommended to set up a private switch or private network between the servers to ensure the lowest latency, as well as better security.

NFS Server – Installation

Install the required packages on the NFS server:

# Ubuntu and Debian
[root@nfs01 ~]# apt-get update
[root@nfs01 ~]# apt-get install rpcbind nfs-common nfs-kernel-server

NFS Server – Configuration

Out of the box, NFSv3 ships with defaults that are sorely outdated at this point:
– Random ephemeral ports are assigned when the daemons start.
– Only 8 NFS threads are enabled

To make it easier for admins to lock down their firewalls, we are going to set static ports, and also enable 64 NFS threads, since the 8-thread default was meant for much older systems and you will most likely run into I/O problems long before you hit this limit.

Stop the services so we can unload the lockd kernel module and configure static ports. This step cannot be skipped!

# Ubuntu 12.04 and Ubuntu 14.04
service nfs-kernel-server stop
service statd stop
service idmapd stop
service rpcbind stop
service portmap stop
modprobe -r nfsd nfs lockd

# Debian 7
service nfs-kernel-server stop
service nfs-common stop
service rpcbind stop
modprobe -r nfsd nfs lockd

Configure STATD and define the static ports:

# Ubuntu 12.04 and Ubuntu 14.04
echo "manual" > /etc/init/idmapd.override

vim /etc/default/nfs-common
NEED_STATD=yes
STATDOPTS="-p 662 -o 2020"
NEED_GSSD=no

# Debian 7
vim /etc/default/nfs-common
NEED_STATD=yes
STATDOPTS="-p 662 -o 2020"
NEED_IDMAPD=no
NEED_GSSD=no

Set the static port for LOCKD:

echo "options lockd nlm_udpport=32769 nlm_tcpport=32803" > /etc/modprobe.d/nfs-lockd.conf
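Once the lockd module is loaded again, you can confirm it picked up the static ports through sysfs (these parameter files exist whenever the module is loaded):

```shell
# Both values should match /etc/modprobe.d/nfs-lockd.conf
# (nlm_tcpport=32803 and nlm_udpport=32769 in this guide):
cat /sys/module/lockd/parameters/nlm_tcpport
cat /sys/module/lockd/parameters/nlm_udpport
```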

Finally, update the NFS thread count by:

vim /etc/default/nfs-kernel-server
...
RPCNFSDCOUNT=64
RPCNFSDPRIORITY=0
RPCMOUNTDOPTS="--manage-gids -p 892"
NEED_SVCGSSD=no
RPCSVCGSSDOPTS=
...

Open the firewall to allow your private network access to the NFS services. You may have to adjust these rules, as my private network resides on eth2. Do not allow this on the public interface without adjusting the source IPs accordingly!

[root@nfs01 ~]# ufw allow in on eth2 to 192.168.1.0/24 proto tcp
[root@nfs01 ~]# ufw allow in on eth2 to 192.168.1.0/24 proto udp

Export the directory to be shared, along with its permissions, in /etc/exports:

[root@nfs01 ~]# vim /etc/exports

/data 192.168.1.0/24(rw,no_root_squash,no_subtree_check)

Now start the services, and ensure they will start at boot time:

# Ubuntu 12.04 and Ubuntu 14.04
service rpcbind start
service statd start
service nfs-kernel-server start; update-rc.d nfs-kernel-server enable

# Debian 7
service rpcbind start; insserv rpcbind
service nfs-common start; insserv nfs-common
service nfs-kernel-server start; insserv nfs-kernel-server

Check to make sure the services are running:

[root@nfs01 ~]# showmount -e
Export list for nfs01.domain.com:
/data 192.168.1.0/24

[root@nfs01 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd

NFS Client – Installation

Now that the NFS server is ready, the NFS clients need to be set up to connect. Install the required packages on the NFS clients:

# Ubuntu or Debian
[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install rpcbind nfs-common

Now start the services:

# Ubuntu 12.04 and Ubuntu 14.04
service rpcbind start
service statd start
service idmapd stop
echo "manual" > /etc/init/idmapd.override

# Debian 7
service rpcbind start; insserv rpcbind
service nfs-common start; insserv nfs-common
insserv mountnfs.sh

NFS Client – Configuration

Confirm the NFS clients can see the NFS server:

[root@web01 ~]# showmount -e 192.168.1.1
Export list for 192.168.1.1:
/data 192.168.1.0/24

[root@web01 ~]# rpcinfo -p 192.168.1.1
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd

Configure the mount point in /etc/fstab:

[root@web01 ~]# vim /etc/fstab

192.168.1.1:/data  /data  nfs  vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,noatime  0  0

Now create the placeholder directory on the client, mount, and verify it works:

[root@web01 ~]# mkdir /data
[root@web01 ~]# mount -a
[root@web01 ~]# df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/xvda1           20G  1.2G   18G   7% /
none                4.0K     0  4.0K   0% /sys/fs/cgroup
udev                484M  8.0K  484M   1% /dev
tmpfs                99M  404K   99M   1% /run
none                5.0M     0  5.0M   0% /run/lock
none                495M     0  495M   0% /run/shm
none                100M     0  100M   0% /run/user
192.168.1.1:/data    20G  1.2G   18G   7% /data
[root@web01 ~]#
[root@web01 ~]# grep /data /proc/mounts 
192.168.1.1:/data /data nfs rw,noatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.1,mountvers=3,mountport=892,mountproto=tcp,local_lock=none,addr=192.168.1.1 0 0
[root@web01 ~]#
[root@web01 ~]# touch /data/test-file
[root@web01 ~]# ls -al /data/test-file 
-rw-r--r-- 1 root root 0 Dec 20 01:45 /data/test-file

Setup NFSv3 on CentOS

NFSv3 is a tried and tested method of allowing client servers to access files over a network, in a very similar fashion to how the files would be accessed on a local file system. As a very mature piece of software, it has been successfully developed and used in production environments for almost 20 years, and it is still widely accepted and supported with a long life ahead of it. Some could make the argument that NFSv4.1 is faster now that pNFS is available, but I personally still prefer NFSv3 in many environments.

Setting it up is pretty easy and straightforward. As this is a network file system, it is strongly recommended to set up a private switch or private network between the servers to ensure the lowest latency, as well as better security.

NFS Server – Installation

Install the required packages on the NFS server:

# CentOS 5
[root@nfs01 ~]# yum install portmap nfs-utils -y

# CentOS 6 and CentOS 7
[root@nfs01 ~]# yum install rpcbind nfs-utils -y

NFS Server – Configuration

Out of the box, NFSv3 ships with defaults that are sorely outdated at this point:
– Random ephemeral ports are assigned when the daemons start.
– Only 8 NFS threads are enabled

To make it easier for admins to lock down their firewalls, we are going to set static ports, and also enable 64 NFS threads, since the 8-thread default was meant for much older systems and you will most likely run into I/O problems long before you hit this limit.

Uncomment or add the following variables in /etc/sysconfig/nfs:

[root@nfs01 ~]# vim /etc/sysconfig/nfs

# CentOS 5 and CentOS 6
RPCNFSDCOUNT=64
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
STATD_OUTGOING_PORT=2020

# CentOS 7
RPCRQUOTADOPTS="-p 875"
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
RPCNFSDCOUNT=64
RPCMOUNTDOPTS="-p 892"
STATDARG="-p 662 -o 2020"
GSS_USE_PROXY="no"

Open the firewall to allow your private network access to the NFS services. You may have to adjust these rules, as my private network resides on eth2. Do not allow this on the public interface without adjusting the source IPs accordingly!

# CentOS 5 and CentOS 6
[root@nfs01 ~]# vim /etc/sysconfig/iptables
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 662 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 662 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 892 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 892 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 2049 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 32803 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 32769 -j ACCEPT

Then apply the rules by:
[root@nfs01 ~]# service iptables restart

# CentOS 7
vim /etc/firewalld/services/nfs.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>NFS</short>
  <description>NFS service</description>
  <port protocol="tcp" port="111"/>
  <port protocol="udp" port="111"/>
  <port protocol="tcp" port="662"/>
  <port protocol="udp" port="662"/>
  <port protocol="tcp" port="892"/>
  <port protocol="udp" port="892"/>
  <port protocol="tcp" port="2049"/>
  <port protocol="udp" port="2049"/>
  <port protocol="tcp" port="32803"/>
  <port protocol="udp" port="32803"/>
  <port protocol="tcp" port="38467"/>
  <port protocol="udp" port="38467"/>
  <port protocol="tcp" port="32769"/>
  <port protocol="udp" port="32769"/>
</service>

Then apply the rules by:
systemctl reload firewalld.service

Now add a zone to the private network interface and set PEERDNS to no:
vim /etc/sysconfig/network-scripts/ifcfg-eth2
...
PEERDNS=no
ZONE=internal
...

Then apply the changes by:
ifdown eth2 && ifup eth2

Now add the NFS rules to the private network interface:
firewall-cmd --zone=internal --add-interface eth2
firewall-cmd --zone=internal --add-service=nfs
firewall-cmd --zone=internal --add-interface eth2 --permanent
firewall-cmd --zone=internal --add-service=nfs --permanent

Export the directory to be shared, along with its permissions, in /etc/exports:

[root@nfs01 ~]# vim /etc/exports

/data 192.168.1.0/24(rw,no_root_squash)

Now start the services, and enable them to start at boot time:

# CentOS 5
[root@nfs01 ~]# service portmap start; chkconfig portmap on
[root@nfs01 ~]# service nfslock start; chkconfig nfslock on
[root@nfs01 ~]# service nfs start; chkconfig nfs on

# CentOS 6
[root@nfs01 ~]# service rpcbind start; chkconfig rpcbind on
[root@nfs01 ~]# service nfslock start; chkconfig nfslock on
[root@nfs01 ~]# service nfs start; chkconfig nfs on

# CentOS 7
[root@nfs01 ~]# systemctl start rpcbind nfs-lock nfs-server
[root@nfs01 ~]# systemctl enable rpcbind nfs-lock nfs-server

Check to make sure the services are running:

[root@nfs01 ~]# showmount -e
Export list for nfs01.domain.com:
/data 192.168.1.0/24

[root@nfs01 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049  nfs_acl
    100227    3   tcp   2049  nfs_acl
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049  nfs_acl
    100227    3   udp   2049  nfs_acl
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr

NFS Client – Installation

Now that the NFS server is ready, the NFS clients now need to be setup to connect. Install the required packages on the NFS clients by:

# CentOS 5
[root@web01 ~]# yum install portmap nfs-utils -y

# CentOS 6 and CentOS 7
[root@web01 ~]# yum install rpcbind nfs-utils -y

Now start the services, and enable them to start at boot time.

# CentOS 5
[root@web01 ~]# service portmap start; chkconfig portmap on
[root@web01 ~]# service nfslock start; chkconfig nfslock on
[root@web01 ~]# chkconfig netfs on

# CentOS 6
[root@web01 ~]# service rpcbind start; chkconfig rpcbind on
[root@web01 ~]# service nfslock start; chkconfig nfslock on
[root@web01 ~]# chkconfig netfs on

# CentOS 7
[root@web01 ~]# systemctl start rpcbind nfs-lock
[root@web01 ~]# systemctl enable rpcbind nfs-lock

NFS Client – Configuration

Confirm the NFS clients can see the NFS server:

[root@web01 ~]# showmount -e 192.168.1.1
Export list for 192.168.1.1:
/var/www/vhosts 192.168.1.0/24

[root@web01 ~]# rpcinfo -p 192.168.1.1
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049  nfs_acl
    100227    3   tcp   2049  nfs_acl
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049  nfs_acl
    100227    3   udp   2049  nfs_acl
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr

Configure the mount point in /etc/fstab:

[root@web01 ~]# vim /etc/fstab

192.168.1.1:/data  /data  nfs  vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,noatime  0  0

Now create the placeholder directory on the client, mount, and verify it works:

[root@web01 ~]# mkdir /data
[root@web01 ~]# mount -a
[root@web01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       14G  1.8G   11G  15% /
tmpfs                 939M     0  939M   0% /dev/shm
/dev/sda1             477M   74M  378M  17% /boot
192.168.1.1:/data      14G  1.9G   11G  15% /data
[root@web01 ~]#
[root@web01 ~]# grep /data /proc/mounts 
192.168.1.1:/data /data nfs rw,noatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.1,mountvers=3,mountport=892,mountproto=tcp,local_lock=none,addr=192.168.1.1 0 0
[root@web01 ~]#
[root@web01 ~]# touch /data/test-file
[root@web01 ~]# ls -al /data/test-file 
-rw-r--r-- 1 root root 0 Dec 19 17:57 /data/test-file

RHCSA Study Guide – Objective 7 : File Sharing

############################
Everything below are my raw notes that I took while attending an unofficial RHCSA training session.  I am posting them here in hopes they will assist others who may be preparing to take this exam.  

My notes are my own interpretation of the lectures, and are certainly not a replacement to classroom training either through your company, or by taking the official RHCSA classes offered through Red Hat.  If you are new to the Red Hat world, I strongly suggest looking into their training courses over at Red Hat.
############################

NFS

The network file service (NFS) is used to share data with other servers.

To see if the NFS server has the ports listening:

[root@web01 ~]# rpcinfo -p server1

To see what shares are setup on the NFS server:

[root@web01 ~]# showmount -e server1

To mount the NFS share:

[root@web01 ~]# mount x.x.x.x:/share1 /mnt

To make it persistent across reboots:

[root@web01 ~]# vi /etc/fstab
...
x.x.x.x:/share /mnt nfs defaults 0 0
...

EXAM NOTE: You just need to know how to mount a share for the rhcsa. No real nfs configuration needed

Lab

Mount the /share NFS share from server1, and add it to your fstab for persistence across reboots
[root@web01 ~]# mount -t nfs server1:/share /mnt
[root@web01 ~]# vim /etc/fstab
...
server1:/share  /mnt nfs defaults 0 0
...

VSFTPD

The default FTP server is vsftpd. The primary configuration file is:

/etc/vsftpd/vsftpd.conf

Two types of access are allowed:

1.  Anonymous : By default, these users are chrooted to /var/ftp for security.  (NOTE for SElinux), could use that --reference flag if changing dir
2.  User :  By default, users do not get chrooted.

Indivudual users can be denied by placing their names in:

[root@web01 ~]# vim /etc/vsftpd/ftpusers

Lab

1.  Configure VSFTPd to only allow the user 'richard' to ftp to your server
[root@web01 ~]# yum install vsftpd
[root@web01 ~]# chkconfig vsftpd on

# Now, need to set selinux to allow users to write to their homedir
[root@web01 ~]# getsebool -a |grep ftp
[root@web01 ~]# setsebool -P ftp_home_dir on
[root@web01 ~]# setsebool -P sftpd_enable_homedirs on

# EXAM NOTE: DO NOT FORGET TO SPECIFY THE -P SO THE CHANGE IS PERSISTENT ACROSS REBOOTS!

# Now, set vsftpd to only allow richard in:
[root@web01 ~]# vi /etc/vsftpd/vsftpd.conf
...
userlist_enable=NO
...

[root@web01 ~]# vi /etc/vsftpd/user_list
# Remove everything and add
richard

# Test by:
[root@web01 ~]# ftp localhost

2.  Browse through the man page on vsftpd.conf
[root@web01 ~]# man vsftpd.conf

3.  Make sure vsftpd is started at boot time
[root@web01 ~]# chkconfig vsftpd on