Exporting and importing compressed MySQL Dumps

Typically when creating MySQL dumps, I prefer to use Holland. However sometimes you just need to be able to dump and import a database for various reasons.

Below are two quick commands for doing this with compressed files. If you have a 30G database dump, you probably don't want to waste time uncompressing the whole thing first and using up valuable disk space. And you certainly don't want to dump a raw 30G file onto your file system.

Export to a compressed MySQL dump

This command will allow you to compress the database dump as you export it. Just be sure to remove the {} when you run the command, and replace user and database with your MySQL user and database name:

mysqldump -u {user} -p {database} | gzip > {database}.sql.gz

Or if you prefer to add a timestamp:

mysqldump -u {user} -p {database} | gzip > `date -I`.{database}.sql.gz
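
If the server has multiple CPU cores, you could swap gzip for pigz (a parallel gzip implementation) to speed up the compression; a minimal variation, assuming pigz is installed:

mysqldump -u {user} -p {database} | pigz > {database}.sql.gz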

Import a compressed MySQL dump

This single command will allow you to import the compressed dump directly into MySQL without having to uncompress it first. This assumes you have already created the database within MySQL.

The command to import it is as follows:

gzip -dc < {database}.sql.gz | mysql -u {user} -p {database}
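
If the target database does not exist yet, or you want to sanity-check the archive first, here is a minimal sketch using the same placeholders:

# Verify the archive integrity (no output means the file is fine)
gzip -t {database}.sql.gz

# Create the database if it does not already exist
mysql -u {user} -p -e "CREATE DATABASE IF NOT EXISTS {database}"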

Bind mounts

While no one likes opening up /etc/fstab and seeing 30 bind mounts (a sure sign of an over-caffeinated sysadmin), bind mounts do have their place in the world of system administration.

To put it in plain English, a bind mount is simply the process of mounting a specific directory from one path onto another. Put another way, it allows you to mount part of a filesystem in a second location, making it accessible from both paths.

Some common uses for bind mounts include:
– Providing access to log files from within a user's home directory
– Providing access to files that reside outside a chroot
– Setting up NFSv4 /exports
– Setting up a link that crosses filesystems, for example exposing a directory that resides on /dev/xvda1 at a path on /dev/xvdb1

Basic bind mount example

Setting up a bind mount is simple. In the example below, we are going to bind mount our website's folder /var/www/vhosts/domain.com into our user's folder, /home/bob:

[root@server01 ~]# mkdir /home/bob/domain.com
[root@server01 ~]# mount --bind /var/www/vhosts/domain.com /home/bob/domain.com

Then set the mount point to be persistent at boot:

[root@server01 ~]# vim /etc/fstab
/var/www/vhosts/domain.com  /home/bob/domain.com  none  bind  0 0
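
To confirm the fstab entry is valid without rebooting, you can remount everything and inspect the result with findmnt; for example:

[root@server01 ~]# mount -a
[root@server01 ~]# findmnt /home/bob/domain.com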

Read only bind mounts

Here is an example of setting up a read-only bind mount. In this example, we are going to bind mount our Apache logs in /var/log/httpd into our user's folder, /home/bob, and only allow them read-only access so they can troubleshoot their website:

[root@server01 ~]# mkdir /home/bob/logs
[root@server01 ~]# mount --bind /var/log/httpd /home/bob/logs
[root@server01 ~]# mount --bind -o remount,ro /home/bob/logs

Then set the mount point to be persistent at boot, retaining the read-only permissions. This one is a bit more tricky, as you cannot set the read-only flag in /etc/fstab. Therefore, you need to run the remount command at startup by adding it to /etc/rc.local so it'll run at boot time:

[root@server01 ~]# mkdir /home/bob/logs
[root@server01 ~]# vim /etc/fstab
/var/log/httpd  /home/bob/logs  none  bind  0 0

[root@server01 ~]# chmod +x /etc/rc.local
[root@server01 ~]# vim /etc/rc.local
mount --bind -o remount,ro /home/bob/logs

Then consider rebooting your system to ensure the directory mounts properly in read-only mode. You will know it's successful when you cannot delete a file or write a new file to that directory, even as the root user. For example:

[root@server01 ~]# touch /home/bob/logs/mylogfile
touch: cannot touch `/home/bob/logs/mylogfile': Read-only file system

[root@server01 ~]# rm -f /home/bob/logs/messages
rm: cannot remove `/home/bob/logs/messages': Read-only file system

Looking at what's underneath a mount point

Sometimes a sysadmin may have set up a new drive because the existing one was full. They then transferred a large directory over to the new disk and symlinked it back to the original location. However, they forgot to clean up the directory on the original disk, so how does one find what's using up all the space, since du -sh will not be able to see what's in there?

In other words, if a filesystem is mounted on a directory that contains files, any files below the mount point are hidden by the mounted file system. So how can we easily see what's in that directory without unmounting the filesystem?

You can use bind mounts to essentially 'peek' under a mount point. Let's say the mount point you want to peek under is in /home/mydirectory. You can peek under the /home/mydirectory mount point by:

[root@server01 ~]# mkdir -p /tmp/mnt-peek
[root@server01 ~]# mount --bind /home/mydirectory /tmp/mnt-peek
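
With the bind mount in place, a quick disk usage check against it will show how much space the hidden files are consuming; for example:

[root@server01 ~]# du -sh /tmp/mnt-peek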

Now you can see what files are 'hidden' by checking in /tmp/mnt-peek. Once you're done, be sure to unmount it by:

[root@server01 ~]# umount /tmp/mnt-peek

Setup NFSv4 on CentOS

NFSv4 is a tried and tested method of allowing client servers to access files over a network, in a very similar fashion to how the files would be accessed on a local file system. As a very mature piece of software, it has been successfully developed and used on production environments for over 15 years, and it is still widely accepted and supported with a long life ahead of it.

Setting it up is pretty easy and straightforward. As this is a network file system, it is strongly recommended to set up a private switch or private network between the servers to ensure the lowest latency, as well as better security.

NFS Server – Installation

Install the required packages on the NFS server:

# CentOS 5
[root@nfs01 ~]# yum install portmap nfs-utils nfs4-acl-tools -y

# CentOS 6 and CentOS 7
[root@nfs01 ~]# yum install rpcbind nfs-utils nfs4-acl-tools -y

NFS Server – Configuration

Out of the box, NFSv4 has the following option set, which is sorely outdated at this point:
– Enables only 8 NFS threads

We are going to enable 64 NFS threads, since the default of 8 was meant for much older systems and you will most likely run into IO problems before you ever hit the higher limit.

[root@nfs01 ~]# vim /etc/sysconfig/nfs
RPCNFSDCOUNT=64

Next, set the domain, as all servers and clients should reside within the same domain:

[root@nfs01 ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Open the firewall to allow your private network access to the NFS services. You may have to adjust your rules as my private network resides on eth2. Do not allow this on the public interface without adjusting the source IPs accordingly!

# CentOS 5 and CentOS 6
[root@nfs01 ~]# vim /etc/sysconfig/iptables
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 662 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 662 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 892 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 892 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 2049 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 32803 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 32769 -j ACCEPT

# CentOS 7
[root@nfs01 ~]# vim /etc/firewalld/services/nfs.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>NFS</short>
  <description>NFS service</description>
  <port protocol="tcp" port="111"/>
  <port protocol="udp" port="111"/>
  <port protocol="tcp" port="662"/>
  <port protocol="udp" port="662"/>
  <port protocol="tcp" port="892"/>
  <port protocol="udp" port="892"/>
  <port protocol="tcp" port="2049"/>
  <port protocol="udp" port="2049"/>
  <port protocol="tcp" port="32803"/>
  <port protocol="udp" port="32803"/>
  <port protocol="tcp" port="38467"/>
  <port protocol="udp" port="38467"/>
  <port protocol="tcp" port="32769"/>
  <port protocol="udp" port="32769"/>
</service>

Then apply the rules by:
[root@nfs01 ~]# systemctl reload firewalld.service

Now add a zone to the private network interface and set PEERDNS to no:
[root@nfs01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
...
PEERDNS=no
ZONE=internal
...

Then apply the changes by:
[root@nfs01 ~]# ifdown eth2 && ifup eth2

Now add the NFS rules to the private network interface:
[root@nfs01 ~]# firewall-cmd --zone=internal --add-interface eth2
[root@nfs01 ~]# firewall-cmd --zone=internal --add-service=nfs
[root@nfs01 ~]# firewall-cmd --zone=internal --add-interface eth2 --permanent
[root@nfs01 ~]# firewall-cmd --zone=internal --add-service=nfs --permanent
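
You can confirm the internal zone picked up both the interface and the nfs service by listing its active configuration; for example:

[root@nfs01 ~]# firewall-cmd --zone=internal --list-all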

NFSv4 uses a pseudo filesystem for the exports. A pseudo filesystem allows NFS clients to browse the hierarchy of exported file systems, but remains limited to paths that are actually exported. There are a number of ways to go about this, but for this guide, we’ll assume the pseudo filesystem root will be /exports, and we’ll simply bind mount the desired directories into the /exports folder.

For this example, I am looking to export /data. So we'll bind mount that into the /exports folder as follows:

[root@nfs01 ~]# touch /data/test-file
[root@nfs01 ~]# mkdir /exports
[root@nfs01 ~]# mkdir /exports/data
[root@nfs01 ~]# echo "/data  /exports/data  none  bind  0 0" >> /etc/fstab
[root@nfs01 ~]# mount -a
[root@nfs01 ~]# ls -al /exports/data
total 8
drwxr-xr-x 2 root     root     4096 Jan 11 22:19 .
drwxr-xr-x 3 root     root     4096 Jan 11 22:03 ..
-rw-r--r-- 1 root     root        0 Jan 11 22:03 test-file

If you can see the file, test-file, within /exports/data, then everything is setup properly.

Export the directory to be shared, along with its permissions, in /etc/exports:

[root@nfs01 ~]# vi /etc/exports
/exports      192.168.1.0/24(ro,no_subtree_check,fsid=0,crossmnt)
/exports/data 192.168.1.0/24(rw,no_subtree_check,no_root_squash)
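
Note that whenever you change /etc/exports later on, you can re-read it and review the active exports without restarting the NFS services; for example:

[root@nfs01 ~]# exportfs -ra
[root@nfs01 ~]# exportfs -v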

Now start the services, and enable them to start at boot time:

# CentOS 5
[root@nfs01 ~]# service portmap start; chkconfig portmap on
[root@nfs01 ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@nfs01 ~]# service nfs start; chkconfig nfs on

# CentOS 6
[root@nfs01 ~]# service rpcbind start; chkconfig rpcbind on
[root@nfs01 ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@nfs01 ~]# service nfs start; chkconfig nfs on

# CentOS 7
[root@nfs01 ~]# systemctl start rpcbind nfs-idmap nfs-server
[root@nfs01 ~]# systemctl enable rpcbind nfs-idmap nfs-server
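
Once the NFS server is running, you can optionally confirm the thread count change took effect through the nfsd proc interface (this assumes the nfsd filesystem is mounted under /proc/fs/nfsd, which the service takes care of):

[root@nfs01 ~]# cat /proc/fs/nfsd/threads
64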

Check to make sure the services are running:

[root@nfs01 ~]# showmount -e
Export list for nfs01:
/exports/data 192.168.1.0/24
/exports     192.168.1.0/24

[root@nfs01 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp  41418  mountd
    100005    1   tcp  50139  mountd
    100005    2   udp  50228  mountd
    100005    2   tcp  52070  mountd
    100005    3   udp  33496  mountd
    100005    3   tcp  54673  mountd
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049  nfs_acl
    100227    3   tcp   2049  nfs_acl
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049  nfs_acl
    100227    3   udp   2049  nfs_acl
    100021    1   udp  38895  nlockmgr
    100021    3   udp  38895  nlockmgr
    100021    4   udp  38895  nlockmgr
    100021    1   tcp  39908  nlockmgr
    100021    3   tcp  39908  nlockmgr
    100021    4   tcp  39908  nlockmgr

NFS Client – Installation

Now that the NFS server is ready, the NFS clients need to be set up to connect. Install the required packages on the NFS clients by:

# CentOS 5
[root@web01 ~]# yum install portmap nfs-utils nfs4-acl-tools -y

# CentOS 6 and CentOS 7
[root@web01 ~]# yum install rpcbind nfs-utils nfs4-acl-tools -y

Next, set the domain, as all servers and clients should reside within the same domain:

[root@web01 ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Now start the services, and enable them to start at boot time.

# CentOS 5
[root@web01 ~]# service portmap start; chkconfig portmap on
[root@web01 ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@web01 ~]# chkconfig netfs on

# CentOS 6
[root@web01 ~]# service rpcbind start; chkconfig rpcbind on
[root@web01 ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@web01 ~]# chkconfig netfs on

# CentOS 7
[root@web01 ~]# systemctl start rpcbind nfs-idmap
[root@web01 ~]# systemctl enable rpcbind nfs-idmap

NFS Client – Configuration

Confirm the NFS clients can see the NFS server:

[root@web01 ~]# showmount -e 192.168.1.1
Export list for 192.168.1.1:
/exports/data 192.168.1.0/24
/exports     192.168.1.0/24

Configure the mount point in /etc/fstab:

[root@web01 ~]# vim /etc/fstab
192.168.1.1:/data  /data  nfs4  sec=sys,noatime  0  0

Now create the placeholder directory on the client, mount, and verify it works:

[root@web01 ~]# mkdir /data
[root@web01 ~]# mount -a
[root@web01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       14G  1.8G   11G  15% /
tmpfs                 939M     0  939M   0% /dev/shm
/dev/sda1             477M   74M  378M  17% /boot
192.168.1.1:/data      14G  1.9G   11G  15% /data
[root@web01 ~]#
[root@web01 ~]# grep /data /proc/mounts 
192.168.1.1:/data/ /data nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.164,minorversion=0,local_lock=none,addr=192.168.1.1 0 0
[root@web01 ~]#
[root@web01 ~]# touch /data/test-file01
[root@web01 ~]# ls -al /data/test-file01
-rw-r--r-- 1 root root 0 Dec 19 17:57 /data/test-file01
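
You can also review the mount options the client actually negotiated by using nfsstat; for example:

[root@web01 ~]# nfsstat -m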

Finally, confirm the user mapping is the same on both servers. You can test to ensure both the server and the client show the same UIDs by:

# Create a user on the NFS server:
[root@nfs01 ~]# useradd -u 6000 testuser

# Create the same user on the NFS client:
[root@web01 ~]# useradd -u 6000 testuser

# Set the ownership of a test file on the NFS server:
[root@nfs01 ~]# touch /data/test-file01
[root@nfs01 ~]# chown testuser:testuser /data/test-file01

# Check the ownership of the test file on the NFS server:
[root@nfs01 ~]# ls -al /data/test-file01

# Confirm the client sees the same ownership:
[root@web01 ~]# ls -al /data/test-file01

Setup NFSv4 on Ubuntu or Debian

NFSv4 is a tried and tested method of allowing client servers to access files over a network, in a very similar fashion to how the files would be accessed on a local file system. As a very mature piece of software, it has been successfully developed and used on production environments for over 15 years, and it is still widely accepted and supported with a long life ahead of it.

Setting it up is pretty easy and straightforward. As this is a network file system, it is strongly recommended to set up a private switch or private network between the servers to ensure the lowest latency, as well as better security.

NFS Server – Installation

Install the required packages on the NFS server:

# Ubuntu and Debian
[root@nfs01 ~]# apt-get update
[root@nfs01 ~]# apt-get install rpcbind nfs-common nfs4-acl-tools nfs-kernel-server

NFS Server – Configuration

Out of the box, NFSv4 has the following options set, which are sorely outdated at this point:
– Sets random ephemeral ports upon daemon startup.
– Enables only 8 NFS threads

To make it easier for admins to lock down the firewalls, we are going to set static ports, and also enable 64 NFS threads, since the default of 8 was meant for much older systems and you will most likely run into IO problems before you ever hit the higher limit.

Stop the services so we can unload the lockd kernel module and configure the services. This step cannot be skipped!

# Ubuntu 12.04 and Ubuntu 14.04
[root@nfs01 ~]# service nfs-kernel-server stop
[root@nfs01 ~]# service statd stop
[root@nfs01 ~]# service idmapd stop
[root@nfs01 ~]# service rpcbind stop
[root@nfs01 ~]# modprobe -r nfsd nfs lockd

# Debian 7
[root@nfs01 ~]# service nfs-kernel-server stop
[root@nfs01 ~]# service nfs-common stop
[root@nfs01 ~]# service rpcbind stop
[root@nfs01 ~]# modprobe -r nfsd nfs lockd

Update the NFS thread count by:

[root@nfs01 ~]# vim /etc/default/nfs-kernel-server
...
RPCNFSDCOUNT=64
RPCNFSDPRIORITY=0
RPCMOUNTDOPTS="--manage-gids"
NEED_SVCGSSD="no"
RPCSVCGSSDOPTS=""
...

Next, set the domain, as all servers and clients should reside within the same domain:

[root@nfs01 ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Update nfs-common to disable statd and rpc.gssd, then require idmapd:

# Ubuntu 12.04 and Ubuntu 14.04
[root@nfs01 ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_GSSD=no

# Debian 7
[root@nfs01 ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_IDMAPD=yes
NEED_GSSD=no

Open the firewall to allow your private network access to the NFS services. You may have to adjust your rules as my private network resides on eth2. Do not allow this on the public interface without adjusting the source IPs accordingly!

[root@nfs01 ~]# ufw allow in on eth2 to 192.168.1.0/24 proto tcp
[root@nfs01 ~]# ufw allow in on eth2 to 192.168.1.0/24 proto udp
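
You can verify the rules were added, and that ufw is active, by checking its status; for example:

[root@nfs01 ~]# ufw status verbose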

NFSv4 uses a pseudo filesystem for the exports. A pseudo filesystem allows NFS clients to browse the hierarchy of exported file systems, but remains limited to paths that are actually exported. There are a number of ways to go about this, but for this guide, we’ll assume the pseudo filesystem root will be /exports, and we’ll simply bind mount the desired directories into the /exports folder.

For this example, I am looking to export /data. So we'll bind mount that into the /exports folder as follows:

[root@nfs01 ~]# mkdir /data
[root@nfs01 ~]# touch /data/test-file
[root@nfs01 ~]# mkdir /exports
[root@nfs01 ~]# mkdir /exports/data
[root@nfs01 ~]# echo "/data  /exports/data  none  bind  0 0" >> /etc/fstab
[root@nfs01 ~]# mount -a
[root@nfs01 ~]# ls -al /exports/data
total 8
drwxr-xr-x 2 root     root     4096 Jan 11 22:19 .
drwxr-xr-x 3 root     root     4096 Jan 11 22:03 ..
-rw-r--r-- 1 root     root        0 Jan 11 22:03 test-file

If you can see the file, test-file, within /exports/data, then everything is setup properly.

Export the directory to be shared, along with its permissions, in /etc/exports:

[root@nfs01 ~]# vim /etc/exports

/exports      192.168.1.0/24(ro,no_subtree_check,fsid=0,crossmnt)
/exports/data 192.168.1.0/24(rw,no_subtree_check,no_root_squash)

Now start the services, and ensure they will start at boot time:

# Ubuntu 12.04 and Ubuntu 14.04
[root@nfs01 ~]# service rpcbind start
[root@nfs01 ~]# service idmapd start
[root@nfs01 ~]# service nfs-kernel-server start; update-rc.d nfs-kernel-server enable

# Debian 7
[root@nfs01 ~]# service rpcbind start; insserv rpcbind
[root@nfs01 ~]# service nfs-common start; insserv nfs-common
[root@nfs01 ~]# service nfs-kernel-server start; insserv nfs-kernel-server

Check to make sure the services are running:

[root@nfs01 ~]# showmount -e
Export list for nfs01:
/exports/data 192.168.1.0/24
/exports     192.168.1.0/24

[root@nfs01 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  39482  nlockmgr
    100021    3   udp  39482  nlockmgr
    100021    4   udp  39482  nlockmgr
    100021    1   tcp  60237  nlockmgr
    100021    3   tcp  60237  nlockmgr
    100021    4   tcp  60237  nlockmgr
    100005    1   udp  39160  mountd
    100005    1   tcp  34995  mountd
    100005    2   udp  34816  mountd
    100005    2   tcp  56338  mountd
    100005    3   udp  49147  mountd
    100005    3   tcp  51938  mountd

NFS Client – Installation

Now that the NFS server is ready, the NFS clients need to be set up to connect. Install the required packages on the NFS clients by:

# Ubuntu or Debian
[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install rpcbind nfs-common nfs4-acl-tools

NFS Client – Configuration

Stop the services so we can unload the lockd kernel module and configure the services. This step cannot be skipped!

# Ubuntu 12.04 and Ubuntu 14.04
[root@web01 ~]# service nfs-kernel-server stop
[root@web01 ~]# service statd stop
[root@web01 ~]# service idmapd stop
[root@web01 ~]# service rpcbind stop
[root@web01 ~]# modprobe -r nfsd nfs lockd

# Debian 7
[root@web01 ~]# service nfs-kernel-server stop
[root@web01 ~]# service nfs-common stop
[root@web01 ~]# service rpcbind stop
[root@web01 ~]# modprobe -r nfsd nfs lockd

Next, set the domain, as all servers and clients should reside within the same domain:

[root@web01 ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Update nfs-common to disable statd and rpc.gssd, then require idmapd:

# Ubuntu 12.04 and Ubuntu 14.04
[root@web01 ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_GSSD=no

# Debian 7
[root@web01 ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_IDMAPD=yes
NEED_GSSD=no

Now start the services, and ensure they will start at boot time:

# Ubuntu 12.04 and Ubuntu 14.04
[root@web01 ~]# service rpcbind start
[root@web01 ~]# service idmapd start

# Debian 7
[root@web01 ~]# service rpcbind start; insserv rpcbind
[root@web01 ~]# service nfs-common start; insserv nfs-common

Confirm the NFS clients can see the NFS server:

[root@web01 ~]# showmount -e 192.168.1.1
Export list for 192.168.1.1:
/exports/data 192.168.1.0/24
/exports     192.168.1.0/24

[root@web01 ~]# rpcinfo -p 192.168.1.1
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd

Configure the mount point in /etc/fstab:

[root@web01 ~]# vim /etc/fstab

192.168.1.1:/data  /data  nfs4  sec=sys,noatime  0  0

Now create the placeholder directory on the client, mount, and verify it works:

[root@web01 ~]# mkdir /data
[root@web01 ~]# mount -a
[root@web01 ~]# df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/xvda1           20G  1.2G   18G   7% /
none                4.0K     0  4.0K   0% /sys/fs/cgroup
udev                484M  8.0K  484M   1% /dev
tmpfs                99M  404K   99M   1% /run
none                5.0M     0  5.0M   0% /run/lock
none                495M     0  495M   0% /run/shm
none                100M     0  100M   0% /run/user
192.168.1.1:/data    20G  1.2G   18G   7% /data
[root@web01 ~]#
[root@web01 ~]# grep /data /proc/mounts 
192.168.1.1:/data /data nfs rw,noatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.1,mountvers=3,mountport=892,mountproto=tcp,local_lock=none,addr=192.168.1.1 0 0
[root@web01 ~]#
[root@web01 ~]# touch /data/test-file
[root@web01 ~]# ls -al /data/test-file 
-rw-r--r-- 1 root root 0 Dec 20 01:45 /data/test-file

Finally, confirm the user mapping is the same on both servers. You can test to ensure both the server and the client show the same UIDs by:

# Create a user on the NFS server:
[root@nfs01 ~]# useradd -u 6000 testuser

# Create the same user on the NFS client:
[root@web01 ~]# useradd -u 6000 testuser

# Set the ownership of a test file on the NFS server:
[root@nfs01 ~]# touch /data/test-file01
[root@nfs01 ~]# chown testuser:testuser /data/test-file01

# Check the ownership of the test file on the NFS server:
[root@nfs01 ~]# ls -al /data/test-file01

# Confirm the client sees the same ownership:
[root@web01 ~]# ls -al /data/test-file01

Determine table size in MySQL

Understanding which tables within your database contain large amounts of data can be very useful. Tables that hold a ton of data could indicate that some pruning may be needed.

Perhaps the space is taken up by stats data, lead generation, or some other type of data which dates back 10 years. It will be up to the DBAs to determine whether any of that data can be archived, of course.

Very large tables can carry a performance penalty if the queries are not well written and return too many rows. They can also slow down your database dumps, or make them use more disk space. Disk space in general is a very real concern as well!

You can easily determine what tables within your database(s) are using up significant space by running:

mysql> SELECT table_schema as `Database`, table_name AS `Table`,
round(((data_length + index_length) / 1024 / 1024), 2) `Size in MB`
FROM information_schema.TABLES
ORDER BY (data_length + index_length) DESC;

And the returned results look something like the following:

+---------------------+---------------------------------------+------------+
| Database            | Table                                 | Size in MB |
+---------------------+---------------------------------------+------------+
| example_one         | tracker                               |    2854.70 |
| example_one         | corehits                              |    1321.00 |
| example_two         | reg_submissions                       |    1313.14 |
| example_one         | unsubscribes                          |    1312.00 |
| example_one         | members                               |     930.08 |
| example_two         | admin                                 |     939.55 |
| example_one         | submissions                           |     829.91 |
| example_two         | tracker_archive                       |     813.17 |
| example_two         | tracker_archive2                      |     757.56 |
| example_two         | affiliates                            |     749.56 |
| example_two         | post_data                             |     712.92 |
| example_one         | signups                               |     711.52 |
| example_one         | guests                                |     703.31 |
| example_one         | success                               |     586.41 |
| example_two         | tracker                               |     574.70 |
| example_one         | e_data                                |     434.28 |
| example_one         | reg_data                              |     426.88 |
+---------------------+---------------------------------------+------------+
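
If you only care about a single database, the same query can be restricted to one schema from the shell. A minimal variation is below; example_one is just the sample schema name from the output above, so substitute your own:

mysql -u {user} -p -e "SELECT table_schema AS 'Database', table_name AS 'Table', \
ROUND(((data_length + index_length) / 1024 / 1024), 2) AS 'Size in MB' \
FROM information_schema.TABLES \
WHERE table_schema = 'example_one' \
ORDER BY (data_length + index_length) DESC;"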

Another possible way to run this that I recently found on https://www.percona.com/blog/2008/02/04/finding-out-largest-tables-on-mysql-server/ is posted below.

This one shows more detailed information about each table including how many rows it has, how much data, index size, total size, and the last column shows how much data the index takes compared to the data. If you see a large index size (idx column), it could be worth checking the indexes to see if you have any redundant indexes in place.

The command is:

mysql> SELECT CONCAT(table_schema, '.', table_name),
       CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows,
       CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA,
       CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') idx,
       CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size,
       ROUND(index_length / data_length, 2) idxfrac
       FROM information_schema.TABLES
       ORDER BY data_length + index_length DESC
       LIMIT 10;

I shortened the column titles a bit so it will fit on this blog, but the output will look something like:

+-----------------------+---------+-------+-------+------------+---------+
| CONCAT(.. table_name) | rows    | DATA  | idx   | total_size | idxfrac |
+-----------------------+---------+-------+-------+------------+---------+
| page_cache            | 38.58M  | 3.09G | 2.97G | 6.06G      |    0.96 |
| exit_pages            | 106.84M | 4.78G | 1.02G | 5.80G      |    0.21 |
| shipping_log004       | 33.94M  | 4.06G | 1.16G | 5.22G      |    0.29 |
| visitor_data          | 9.55M   | 4.61G | 0.09G | 4.70G      |    0.02 |
| orders                | 2.40M   | 1.57G | 0.30G | 1.87G      |    0.19 |
| marketing_info        | 25.16M  | 1.85G | 0.00G | 1.85G      |    0.00 |
| clients               | 1.27M   | 1.29G | 0.13G | 1.42G      |    0.10 |
| directory             | 5.87M   | 0.93G | 0.17G | 1.10G      |    0.18 |
| promos                | 2.31M   | 0.79G | 0.05G | 0.84G      |    0.07 |
| product_information   | 9.33M   | 0.39G | 0.35G | 0.74G      |    0.89 |
+-----------------------+---------+-------+-------+------------+---------+
10 rows in set (0.26 sec)

Maybe you need to see if you can set a retention policy on the data stored in a table if it's used for caching or logging purposes.

Perhaps you may need to run OPTIMIZE TABLE to reclaim the physical storage space. Just keep in mind that OPTIMIZE TABLE will lock your tables, so it may be best to run it during a low-traffic time. See the following URL for more information:
http://dev.mysql.com/doc/refman/5.7/en/optimize-table.html
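
As a hedged example of reclaiming space from one of the larger tables above (example_one.tracker is just a placeholder name taken from the sample output), you could run something like the following during a low-traffic window:

mysql -u {user} -p -e "OPTIMIZE TABLE example_one.tracker;"

On InnoDB tables this rebuilds the table, so make sure you have roughly the table's size in free disk space before running it.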

Setup NFSv3 on Ubuntu or Debian

NFSv3 is a tried and tested method of allowing client servers to access files over a network, in a very similar fashion to how the files would be accessed on a local file system. As a very mature piece of software, it has been successfully developed and used in production environments for almost 20 years, and it is still widely accepted and supported with a long life ahead of it. Some could make the argument that NFSv4.1 is faster now that pNFS is available, but I personally still prefer NFSv3 in many environments.

Setting it up is pretty easy and straightforward. As this is a network file system, it is strongly recommended to set up a private switch or private network between the servers to ensure the lowest latency, as well as better security.

NFS Server – Installation

Install the required packages on the NFS server:

# Ubuntu and Debian
[root@nfs01 ~]# apt-get update
[root@nfs01 ~]# apt-get install rpcbind nfs-common nfs-kernel-server

NFS Server – Configuration

Out of the box, NFSv3 has the following options set, which are sorely outdated at this point:
– Sets random ephemeral ports upon daemon startup.
– Enables only 8 NFS threads

To make it easier for admins to lock down the firewalls, we are going to set static ports, and also enable 64 NFS threads, since the default of 8 was meant for much older systems and you will most likely run into IO problems before you ever hit the higher limit.

Stop the services so we can unload the lockd kernel module and configure static ports. This step cannot be skipped!

# Ubuntu 12.04 and Ubuntu 14.04
service nfs-kernel-server stop
service statd stop
service idmapd stop
service rpcbind stop
service portmap stop
modprobe -r nfsd nfs lockd

# Debian 7
service nfs-kernel-server stop
service nfs-common stop
service rpcbind stop
modprobe -r nfsd nfs lockd

Configure STATD and define the static ports:

# Ubuntu 12.04 and Ubuntu 14.04
echo "manual" > /etc/init/idmapd.override

vim /etc/default/nfs-common
NEED_STATD=yes
STATDOPTS="-p 662 -o 2020"
NEED_GSSD=no

# Debian 7
vim /etc/default/nfs-common
NEED_STATD=yes
STATDOPTS="-p 662 -o 2020"
NEED_IDMAPD=no
NEED_GSSD=no

Set the static port for LOCKD:

echo "options lockd nlm_udpport=32769 nlm_tcpport=32803" > /etc/modprobe.d/nfs-lockd.conf

Finally, update the NFS thread count by:

vim /etc/default/nfs-kernel-server
...
RPCNFSDCOUNT=64
RPCNFSDPRIORITY=0
RPCMOUNTDOPTS="--manage-gids -p 892"
NEED_SVCGSSD=no
RPCSVCGSSDOPTS=
...

Open the firewall to allow your private network access to the NFS services. You may have to adjust your rules as my private network resides on eth2. Do not allow this on the public interface without adjusting the source IPs accordingly!

[root@nfs01 ~]# ufw allow in on eth2 to 192.168.1.0/24 proto tcp
[root@nfs01 ~]# ufw allow in on eth2 to 192.168.1.0/24 proto udp

Export the directory to be shared, along with its permissions, in /etc/exports:

[root@nfs01 ~]# vim /etc/exports

/data 192.168.1.0/24(rw,no_root_squash,no_subtree_check)

Now start the services, and ensure they will start at boot time:

# Ubuntu 12.04 and Ubuntu 14.04
service rpcbind start
service statd start
service nfs-kernel-server start; update-rc.d nfs-kernel-server enable

# Debian 7
service rpcbind start; insserv rpcbind
service nfs-common start; insserv nfs-common
service nfs-kernel-server start; insserv nfs-kernel-server
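
Since lockd's static ports were pinned through /etc/modprobe.d/nfs-lockd.conf, you can optionally confirm the module picked them up once it is loaded again (the /sys paths below assume the lockd module exposes its parameters, which current kernels do):

[root@nfs01 ~]# cat /sys/module/lockd/parameters/nlm_tcpport
32803
[root@nfs01 ~]# cat /sys/module/lockd/parameters/nlm_udpport
32769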

Check to make sure the services are running:

[root@nfs01 ~]# showmount -e
Export list for nfs01.domain.com:
/data 192.168.1.0/24

[root@nfs01 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd

NFS Client – Installation

Now that the NFS server is ready, the NFS clients need to be set up to connect. Install the required packages on the NFS clients by:

# Ubuntu or Debian
[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install rpcbind nfs-common

Now start the services:

# Ubuntu 12.04 and Ubuntu 14.04
service rpcbind start
service statd start
service idmapd stop
echo "manual" > /etc/init/idmapd.override

# Debian 7
service rpcbind start; insserv rpcbind
service nfs-common start; insserv nfs-common
insserv mountnfs.sh

NFS Client – Configuration

Confirm the NFS clients can see the NFS server:

[root@web01 ~]# showmount -e 192.168.1.1
Export list for 192.168.1.1:
/data 192.168.1.0/24

[root@web01 ~]# rpcinfo -p 192.168.1.1
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd

Configure the mount point in /etc/fstab:

[root@web01 ~]# vim /etc/fstab

192.168.1.1:/data  /data  nfs  vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,noatime  0  0

Now create the placeholder directory on the client, mount, and verify it works:

[root@web01 ~]# mkdir /data
[root@web01 ~]# mount -a
[root@web01 ~]# df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/xvda1           20G  1.2G   18G   7% /
none                4.0K     0  4.0K   0% /sys/fs/cgroup
udev                484M  8.0K  484M   1% /dev
tmpfs                99M  404K   99M   1% /run
none                5.0M     0  5.0M   0% /run/lock
none                495M     0  495M   0% /run/shm
none                100M     0  100M   0% /run/user
192.168.1.1:/data    20G  1.2G   18G   7% /data
[root@web01 ~]#
[root@web01 ~]# grep /data /proc/mounts 
192.168.1.1:/data /data nfs rw,noatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.1,mountvers=3,mountport=892,mountproto=tcp,local_lock=none,addr=192.168.1.1 0 0
[root@web01 ~]#
[root@web01 ~]# touch /data/test-file
[root@web01 ~]# ls -al /data/test-file 
-rw-r--r-- 1 root root 0 Dec 20 01:45 /data/test-file

Setup NFSv3 on CentOS

NFSv3 is a tried and tested method of allowing client servers to access files over a network, in a very similar fashion to how the files would be accessed on a local file system. As a very mature piece of software, it has been successfully developed and used in production environments for almost 20 years, and it is still widely accepted and supported with a long life ahead of it. Some could make the argument that NFSv4.1 is faster now that pNFS is available, but I personally still prefer NFSv3 in many environments.

Setting it up is pretty easy and straightforward. As this is a network file system, it is strongly recommended to set up a private switch or private network between the servers to ensure the lowest latency, as well as better security.

NFS Server – Installation

Install the required packages on the NFS server:

# CentOS 5
[root@nfs01 ~]# yum install portmap nfs-utils -y

# CentOS 6 and CentOS 7
[root@nfs01 ~]# yum install rpcbind nfs-utils -y

NFS Server – Configuration

Out of the box, NFSv3 has the following options set, which are sorely outdated at this point:
– Sets random ephemeral ports upon daemon startup.
– Enables only 8 NFS threads

To make it easier for admins to lock down the firewalls, we are going to set static ports, and also enable 64 NFS threads, since the default of 8 was meant for much older systems and you will most likely run into IO problems before you ever hit the higher limit.

Uncomment or add the following variables in /etc/sysconfig/nfs

[root@nfs01 ~]# vim /etc/sysconfig/nfs

# CentOS 5 and CentOS 6
RPCNFSDCOUNT=64
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
STATD_OUTGOING_PORT=2020

# CentOS 7
RPCRQUOTADOPTS="-p 875"
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
RPCNFSDCOUNT=64
RPCMOUNTDOPTS="-p 892"
STATDARG="-p 662 -o 2020"
GSS_USE_PROXY="no"

Open the firewall to allow your private network access to the NFS services. You may have to adjust your rules as my private network resides on eth2. Do not allow this on the public interface without adjusting the source IPs accordingly!

# CentOS 5 and CentOS 6
[root@nfs01 ~]# vim /etc/sysconfig/iptables
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 662 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 662 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 892 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 892 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 2049 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 32803 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 32769 -j ACCEPT

# CentOS 7
vim /etc/firewalld/services/nfs.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>NFS</short>
  <description>NFS service</description>
  <port protocol="tcp" port="111"/>
  <port protocol="udp" port="111"/>
  <port protocol="tcp" port="662"/>
  <port protocol="udp" port="662"/>
  <port protocol="tcp" port="892"/>
  <port protocol="udp" port="892"/>
  <port protocol="tcp" port="2049"/>
  <port protocol="udp" port="2049"/>
  <port protocol="tcp" port="32803"/>
  <port protocol="udp" port="32803"/>
  <port protocol="tcp" port="38467"/>
  <port protocol="udp" port="38467"/>
  <port protocol="tcp" port="32769"/>
  <port protocol="udp" port="32769"/>
</service>

Then apply the rules by:
systemctl reload firewalld.service

Now add a zone to the private network interface and set PEERDNS to no:
vim /etc/sysconfig/network-scripts/ifcfg-eth2
...
PEERDNS=no
ZONE=internal
...

Then apply the changes by:
ifdown eth2 && ifup eth2

Now add the NFS rules to the private network interface:
firewall-cmd --zone=internal --add-interface eth2
firewall-cmd --zone=internal --add-service=nfs
firewall-cmd --zone=internal --add-interface eth2 --permanent
firewall-cmd --zone=internal --add-service=nfs --permanent

Export the directory to be shared, along with its permissions, in /etc/exports:

[root@nfs01 ~]# vim /etc/exports

/data 192.168.1.0/24(rw,no_root_squash)

Now start the services, and enable them to start at boot time:

# CentOS 5
[root@nfs01 ~]# service portmap start; chkconfig portmap on
[root@nfs01 ~]# service nfslock start; chkconfig nfslock on
[root@nfs01 ~]# service nfs start; chkconfig nfs on

# CentOS 6
[root@nfs01 ~]# service rpcbind start; chkconfig rpcbind on
[root@nfs01 ~]# service nfslock start; chkconfig nfslock on
[root@nfs01 ~]# service nfs start; chkconfig nfs on

# CentOS 7
[root@nfs01 ~]# systemctl start rpcbind nfs-lock nfs-server
[root@nfs01 ~]# systemctl enable rpcbind nfs-lock nfs-server

Check to make sure the services are running:

[root@nfs01 ~]# showmount -e
Export list for nfs01.domain.com:
/data 192.168.1.0/24

[root@nfs01 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049  nfs_acl
    100227    3   tcp   2049  nfs_acl
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049  nfs_acl
    100227    3   udp   2049  nfs_acl
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr

NFS Client – Installation

Now that the NFS server is ready, the NFS clients need to be set up to connect. Install the required packages on the NFS clients by:

# CentOS 5
[root@web01 ~]# yum install portmap nfs-utils -y

# CentOS 6 and CentOS 7
[root@web01 ~]# yum install rpcbind nfs-utils -y

Now start the services, and enable them to start at boot time.

# CentOS 5
[root@web01 ~]# service portmap start; chkconfig portmap on
[root@web01 ~]# service nfslock start; chkconfig nfslock on
[root@web01 ~]# chkconfig netfs on

# CentOS 6
[root@web01 ~]# service rpcbind start; chkconfig rpcbind on
[root@web01 ~]# service nfslock start; chkconfig nfslock on
[root@web01 ~]# chkconfig netfs on

# CentOS 7
[root@web01 ~]# systemctl start rpcbind nfs-lock
[root@web01 ~]# systemctl enable rpcbind nfs-lock

NFS Client – Configuration

Confirm the NFS clients can see the NFS server:

[root@web01 ~]# showmount -e 192.168.1.1
Export list for 192.168.1.1:
/data 192.168.1.0/24

[root@web01 ~]# rpcinfo -p 192.168.1.1
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049  nfs_acl
    100227    3   tcp   2049  nfs_acl
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049  nfs_acl
    100227    3   udp   2049  nfs_acl
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr

Configure the mount point in /etc/fstab:

[root@web01 ~]# vim /etc/fstab

192.168.1.1:/data  /data  nfs  vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,noatime  0  0

Now create the placeholder directory on the client, mount, and verify it works:

[root@web01 ~]# mkdir /data
[root@web01 ~]# mount -a
[root@web01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       14G  1.8G   11G  15% /
tmpfs                 939M     0  939M   0% /dev/shm
/dev/sda1             477M   74M  378M  17% /boot
192.168.1.1:/data      14G  1.9G   11G  15% /data
[root@web01 ~]#
[root@web01 ~]# grep /data /proc/mounts 
192.168.1.1:/data /data nfs rw,noatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.1,mountvers=3,mountport=892,mountproto=tcp,local_lock=none,addr=192.168.1.1 0 0
[root@web01 ~]#
[root@web01 ~]# touch /data/test-file
[root@web01 ~]# ls -al /data/test-file 
-rw-r--r-- 1 root root 0 Dec 19 17:57 /data/test-file

Setup central log server on CentOS 6

For security purposes, it is best practice to store all critical logs on a secured, centralized syslog server, and to keep the retention set to at least 1 year.

In this example, there are 4 servers:
syslog01 (192.168.1.200)
web01-local (192.168.1.201)
web02-local (192.168.1.202)
web03-local (192.168.1.203)

The goal here is to store all the system’s logs, as well as the individual Apache vhost’s logs to the central log server.

Install and perform initial configuration of rsyslog

On syslog01, confirm rsyslog is installed and running by:

[root@syslog01 ~]# yum install rsyslog
[root@syslog01 ~]# chkconfig rsyslog on
[root@syslog01 ~]# service rsyslog start

Open the firewall to allow inbound connections over port 514 to syslog01 by:

[root@syslog01 ~]# vim /etc/sysconfig/iptables
...
-A INPUT -s 192.168.1.0/24 -p tcp -m tcp --dport 514 -j ACCEPT
...
[root@syslog01 ~]# service iptables restart

Finally, create the directory that will be storing the logs:

[root@syslog01 ~]# mkdir -p /var/log/remote
[root@syslog01 ~]# chown root:root /var/log/remote
[root@syslog01 ~]# chmod 700 /var/log/remote

Determine log storage style on central log server

Here is where you need to decide how to store the logs on the central logging server. I am unaware of any recommended best practices, as there are many ways to go about this. I'll outline 4 common ways below:
– Option 1: Store each server’s logs in its own directory.
– Option 2: Store each server’s logs in its own directory with natural log rotation
– Option 3: Store each server’s logs in its own directory, broken down by program name
– Option 4: Store all server’s logs in a single file

Option 1: Store each server’s logs in its own directory

This will configure rsyslog to store each server's logs in a directory named after the remote server's hostname. Log rotation will happen daily via logrotate.

To implement this, first create a backup of the existing /etc/rsyslog.conf, and create a new one as shown below. The parts I added or modified are mainly the template and the RuleSet-related directives:

[root@syslog01 ~]# mv /etc/rsyslog.conf /etc/rsyslog.orig
[root@syslog01 ~]# vim /etc/rsyslog.conf
# rsyslog v5 configuration file

# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html

#### MODULES ####

$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imklog   # provides kernel logging support (previously done by rklogd)
#$ModLoad immark  # provides --MARK-- message capability

# Provides UDP syslog reception
$ModLoad imudp
# Moved below for RuleSets
#$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
# Moved below for RuleSets
#$InputTCPServerRun 514


#### GLOBAL DIRECTIVES ####

# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on

# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf

$template RemoteHost, "/var/log/remote/%HOSTNAME%/syslog.log"

#### RULES ####

$RuleSet local

# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                                 /dev/console

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none                /var/log/messages

# The authpriv file has restricted access.
authpriv.*                                              /var/log/secure

# Log all the mail messages in one place.
mail.*                                                  -/var/log/maillog


# Log cron stuff
cron.*                                                  /var/log/cron

# Everybody gets emergency messages
*.emerg                                                 *

# Save news errors of level crit and higher in a special file.
uucp,news.crit                                          /var/log/spooler

# Save boot messages also to boot.log
local7.*                                                /var/log/boot.log

local0.*                                                /var/log/local0.log
local1.*                                                /var/log/local1.log
local2.*                                                /var/log/local2.log
local3.*                                                /var/log/local3.log
local4.*                                                /var/log/local4.log
local5.*                                                /var/log/local5.log
local6.*                                                /var/log/local6.log

# Log all messages also to the central log directory
*.* ?RemoteHost

# Bind the above ruleset as default
$DefaultRuleset local

# This uses the template above
$RuleSet remote
*.* ?RemoteHost

# Bind the above ruleset for remote logs
$InputUDPServerBindRuleset remote
$UDPServerRun 514
$InputTCPServerBindRuleset remote
$InputTCPServerRun 514


# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$WorkDirectory /var/lib/rsyslog # where to place spool files
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g   # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList   # run asynchronously
#$ActionResumeRetryCount -1    # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* @@remote-host:514
# ### end of the forwarding rule ###

Then restart rsyslog:

[root@syslog01 ~]# service rsyslog restart

Now setup logrotate to rotate out the logs, compress them, and keep them on file for 1 year as shown below:

[root@syslog01 ~]# vim /etc/logrotate.d/centralrsyslog
/var/log/remote/*/*.log
{
    daily
    rotate 365
    missingok
    compress
    delaycompress
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}

Finally, test the log rotation by running:

[root@syslog01 ~]# logrotate -f /etc/logrotate.d/centralrsyslog
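
To test Option 1 end to end, point one of the web servers at syslog01 and send a test message. Below is a minimal client-side sketch, assuming rsyslog is already installed on the client; the forwarding rule is simply appended to the client's /etc/rsyslog.conf:

[root@web01-local ~]# vim /etc/rsyslog.conf
# Forward everything to the central log server over TCP
*.* @@192.168.1.200:514

[root@web01-local ~]# service rsyslog restart
[root@web01-local ~]# logger "central logging test from web01-local"

Then confirm the message arrived on the central log server:

[root@syslog01 ~]# grep "central logging test" /var/log/remote/web01-local/syslog.log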

Option 2: Store each server’s logs in its own directory with natural log rotation

Both the local and remote system logs will be stored in their own directories as follows:

/var/log/remote/2015-12_syslog01/syslog.log
/var/log/remote/2015-12_web01-local/syslog.log
/var/log/remote/2015-12_web02-local/syslog.log
/var/log/remote/2015-12_web03-local/syslog.log

As the months change, syslog will automatically create a new folder accordingly for that month, creating a type of natural log rotation.

To implement this, first create a backup of the existing /etc/rsyslog.conf, and create a new one as shown below. The parts I added or modified are mainly the template and the RuleSet-related directives:

[root@syslog01 ~]# mv /etc/rsyslog.conf /etc/rsyslog.orig
[root@syslog01 ~]# vim /etc/rsyslog.conf
# rsyslog v5 configuration file

# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html

#### MODULES ####

$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imklog   # provides kernel logging support (previously done by rklogd)
#$ModLoad immark  # provides --MARK-- message capability

# Provides UDP syslog reception
$ModLoad imudp
# Moved below for RuleSets
#$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
# Moved below for RuleSets
#$InputTCPServerRun 514


#### GLOBAL DIRECTIVES ####

# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on

# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf

$template RemoteHost, "/var/log/remote/%$YEAR%-%$MONTH%_%HOSTNAME%/syslog.log"

#### RULES ####

$RuleSet local

# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                                 /dev/console

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none                /var/log/messages

# The authpriv file has restricted access.
authpriv.*                                              /var/log/secure

# Log all the mail messages in one place.
mail.*                                                  -/var/log/maillog


# Log cron stuff
cron.*                                                  /var/log/cron

# Everybody gets emergency messages
*.emerg                                                 *

# Save news errors of level crit and higher in a special file.
uucp,news.crit                                          /var/log/spooler

# Save boot messages also to boot.log
local7.*                                                /var/log/boot.log

local0.*                                                /var/log/local0.log
local1.*                                                /var/log/local1.log
local2.*                                                /var/log/local2.log
local3.*                                                /var/log/local3.log
local4.*                                                /var/log/local4.log
local5.*                                                /var/log/local5.log
local6.*                                                /var/log/local6.log

# Log all messages also to the central log directory
*.* ?RemoteHost

# Bind the above ruleset as default
$DefaultRuleset local

# This uses the template above
$RuleSet remote
*.* ?RemoteHost

# Bind the above ruleset for remote logs
$InputUDPServerBindRuleset remote
$UDPServerRun 514
$InputTCPServerBindRuleset remote
$InputTCPServerRun 514


# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$WorkDirectory /var/lib/rsyslog # where to place spool files
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g   # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList   # run asynchronously
#$ActionResumeRetryCount -1    # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* @@remote-host:514
# ### end of the forwarding rule ###

Then restart rsyslog:

[root@syslog01 ~]# service rsyslog restart
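
Because the local ruleset also logs to the ?RemoteHost template, you can sanity check the new layout from the log server itself. A minimal check (the exact directory name depends on the current month and your hostname):

[root@syslog01 ~]# logger "central syslog test"
[root@syslog01 ~]# tail /var/log/remote/$(date +%Y-%m)_$(hostname -s)/syslog.log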

Option 3: Store each server’s logs in its own directory, broken down by program name

This option stores each server’s logs in a directory named after the remote server’s hostname, and furthermore breaks them down by program name.

To implement this, first create a backup of the existing /etc/rsyslog.conf, then create a new one as shown below. The parts I added or modified are called out in the comments (the module inputs, the RemoteHost template, and the RuleSet bindings):

[root@syslog01 ~]# mv /etc/rsyslog.conf /etc/rsyslog.orig
[root@syslog01 ~]# vim /etc/rsyslog.conf
# rsyslog v5 configuration file

# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html

#### MODULES ####

$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imklog   # provides kernel logging support (previously done by rklogd)
#$ModLoad immark  # provides --MARK-- message capability

# Provides UDP syslog reception
$ModLoad imudp
# Moved below for RuleSets
#$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
# Moved below for RuleSets
#$InputTCPServerRun 514


#### GLOBAL DIRECTIVES ####

# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on

# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf

$template RemoteHost, "/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log"

#### RULES ####

$RuleSet local

# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                                 /dev/console

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none                /var/log/messages

# The authpriv file has restricted access.
authpriv.*                                              /var/log/secure

# Log all the mail messages in one place.
mail.*                                                  -/var/log/maillog


# Log cron stuff
cron.*                                                  /var/log/cron

# Everybody gets emergency messages
*.emerg                                                 *

# Save news errors of level crit and higher in a special file.
uucp,news.crit                                          /var/log/spooler

# Save boot messages also to boot.log
local7.*                                                /var/log/boot.log

local0.*                                                /var/log/local0.log
local1.*                                                /var/log/local1.log
local2.*                                                /var/log/local2.log
local3.*                                                /var/log/local3.log
local4.*                                                /var/log/local4.log
local5.*                                                /var/log/local5.log
local6.*                                                /var/log/local6.log

# Log all messages also to the central log directory
*.* ?RemoteHost

# Bind the above ruleset as default
$DefaultRuleset local

# This uses the template above
$RuleSet remote
*.* ?RemoteHost

# Bind the above ruleset for remote logs
$InputUDPServerBindRuleset remote
$UDPServerRun 514
$InputTCPServerBindRuleset remote
$InputTCPServerRun 514


# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$WorkDirectory /var/lib/rsyslog # where to place spool files
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g   # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList   # run asynchronously
#$ActionResumeRetryCount -1    # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* @@remote-host:514
# ### end of the forwarding rule ###

Then restart rsyslog:

[root@syslog01 ~]# service rsyslog restart
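
With this template the file name comes from the syslog tag, so a quick local test shows how the per-program files are created. The tag below is just an example:

[root@syslog01 ~]# logger -t testapp "program name logging test"
[root@syslog01 ~]# tail /var/log/remote/$(hostname -s)/testapp.log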

Now set up logrotate to rotate the logs, compress them, and keep them on file for 1 year, as shown below:

[root@syslog01 ~]# vim /etc/logrotate.d/centralrsyslog
/var/log/remote/*/*.log
{
    daily
    rotate 365
    missingok
    compress
    delaycompress
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}

Finally, test the log rotation by running:

[root@syslog01 ~]# logrotate -f /etc/logrotate.d/centralrsyslog

Option 4: Store all servers’ logs in a single file

This is useful if you want to be able to use grep to find the information you need from a single file. Just keep in mind that if you have 10-20+ servers, this log file could get very large, even with a daily log rotation in place.

This will tell rsyslog to store ALL of your servers’ logs in /var/log/remote/syslog.log.

To implement this, first create a backup of the existing /etc/rsyslog.conf, then create a new one as shown below. The parts I added or modified are called out in the comments (the module inputs, the RemoteHost template, and the RuleSet bindings):

[root@syslog01 ~]# mv /etc/rsyslog.conf /etc/rsyslog.orig
[root@syslog01 ~]# vim /etc/rsyslog.conf
# rsyslog v5 configuration file

# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html

#### MODULES ####

$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imklog   # provides kernel logging support (previously done by rklogd)
#$ModLoad immark  # provides --MARK-- message capability

# Provides UDP syslog reception
$ModLoad imudp
# Moved below for RuleSets
#$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
# Moved below for RuleSets
#$InputTCPServerRun 514


#### GLOBAL DIRECTIVES ####

# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on

# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf

$template RemoteHost, "/var/log/remote/syslog.log"

#### RULES ####

$RuleSet local

# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                                 /dev/console

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none                /var/log/messages

# The authpriv file has restricted access.
authpriv.*                                              /var/log/secure

# Log all the mail messages in one place.
mail.*                                                  -/var/log/maillog


# Log cron stuff
cron.*                                                  /var/log/cron

# Everybody gets emergency messages
*.emerg                                                 *

# Save news errors of level crit and higher in a special file.
uucp,news.crit                                          /var/log/spooler

# Save boot messages also to boot.log
local7.*                                                /var/log/boot.log

local0.*                                                /var/log/local0.log
local1.*                                                /var/log/local1.log
local2.*                                                /var/log/local2.log
local3.*                                                /var/log/local3.log
local4.*                                                /var/log/local4.log
local5.*                                                /var/log/local5.log
local6.*                                                /var/log/local6.log

# Log all messages also to the central log directory
*.* ?RemoteHost

# Bind the above ruleset as default
$DefaultRuleset local

# This uses the template above
$RuleSet remote
*.* ?RemoteHost

# Bind the above ruleset for remote logs
$InputUDPServerBindRuleset remote
$UDPServerRun 514
$InputTCPServerBindRuleset remote
$InputTCPServerRun 514


# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$WorkDirectory /var/lib/rsyslog # where to place spool files
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g   # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList   # run asynchronously
#$ActionResumeRetryCount -1    # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* @@remote-host:514
# ### end of the forwarding rule ###

Then restart rsyslog:

[root@syslog01 ~]# service rsyslog restart

Now set up logrotate to rotate the logs, compress them, and keep them on file for 1 year, as shown below:

[root@syslog01 ~]# vim /etc/logrotate.d/centralrsyslog
/var/log/remote/syslog.log
{
    daily
    rotate 365
    missingok
    compress
    delaycompress
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}

Finally, test the log rotation by running:

[root@syslog01 ~]# logrotate -f /etc/logrotate.d/centralrsyslog
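
Once the clients are forwarding their logs (covered in the next section), pulling out everything for a single host or program from the combined file is a quick grep away. The hostname below is just an example:

[root@syslog01 ~]# grep 'web01' /var/log/remote/syslog.log | less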

Set up clients to forward their logs to your central log server

For this, I am opting to use TCP. Append the following at the bottom of /etc/rsyslog.conf on each client server:

[root@syslog01 ~]# vim /etc/rsyslog.conf
*.* @@192.168.1.200:514

Then restart rsyslog on each client:

[root@syslog01 ~]# service rsyslog restart
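
If you want the clients to spool messages to disk whenever the log server is unreachable, the commented queue directives that ship in the stock rsyslog.conf (shown in the forwarding rule block earlier) can be enabled ahead of the forwarding line. A minimal sketch of the client-side block:

[root@syslog01 ~]# vim /etc/rsyslog.conf
$WorkDirectory /var/lib/rsyslog
$ActionQueueFileName fwdRule1
$ActionQueueMaxDiskSpace 1g
$ActionQueueSaveOnShutdown on
$ActionQueueType LinkedList
$ActionResumeRetryCount -1
*.* @@192.168.1.200:514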

Apache logs

If there was a requirement within the organization to also send the Apache error and access logs to the central syslog server, that can be done pretty easily. However, as these can get very large, make sure you have enough disk space on your central syslog server to handle them.

Apache does not have the ability to natively send its logs over syslog, so logger must be used. The following would need to be done in each vhost, for both ports 80 and 443. Be sure to change the tags for each site accordingly in the logger entries below:

[root@syslog01 ~]# vim /etc/httpd/vhost.d/example.com.conf
...
CustomLog "|/usr/bin/logger -p local0.info -t example.com-access" combined
ErrorLog "|/usr/bin/logger -p local0.error -t example.com-error"
...

Restart Apache when done:

[root@syslog01 ~]# service httpd restart
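
To confirm the plumbing end to end, you can fake an Apache log line by piping a message into logger with the same facility and tag configured in the vhost. This is only a sanity check; adjust the tag to match your site:

[root@syslog01 ~]# echo "apache logger test" | /usr/bin/logger -p local0.info -t example.com-access

The entry should then appear on the central syslog server, for example in the client's directory (or in example.com-access.log if you used the program name template from Option 3).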

Additional RemoteHost templates

As seen in the examples above, there are a number of ways you can work with the directories created. Some common ones that I know of are below:

Each folder is labeled with the hostname it came from:

$template RemoteHost, "/var/log/remote/%HOSTNAME%/syslog.log"

Each folder is labeled with the hostname and IP address it came from:

$template RemoteHost, "/var/log/remote/%HOSTNAME%-%fromhost-ip%/syslog.log"

Each folder is labeled with the hostname, and the logs within are broken down by program name:

$template RemoteHost, "/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log"

Each folder is labeled with the year, month and hostname, creating a type of natural log rotation:

$template RemoteHost, "/var/log/remote/%$YEAR%-%$MONTH%_%HOSTNAME%/syslog.log"

Backing up MySQL with Holland

Taken directly from the vendor’s website: Holland is an Open Source backup framework originally developed at Rackspace and written in Python. Its goal is to help facilitate backing up databases with greater configurability, consistency, and ease. Holland is capable of backing up other types of data, too. Because of its plugin structure, Holland can be used to back up anything you want by whatever means you want.

Notable Features
– Pluggable Framework
– Supports Multiple Backup Sets
– Database and Table Filtering (Using GLOBs)
– Auto-Detection of Transactional DBs
– Safe use of –single-transaction with mysqldump
– In-Line and Pluggable Compression
– Backups Suitable for Point-In-Time Recovery / Replication
– MySQL + LVM Snapshot and Logical Backups
– PostgreSQL backups using pgdump

Website: http://hollandbackup.org
Documentation: http://docs.hollandbackup.org

How to install Holland on Ubuntu / Debian

As the packages don’t exist in the distros’ repositories, pull them from the project’s official repositories.

This works on the following OSes:
– Ubuntu 12.04
– Ubuntu 14.04
– Ubuntu 14.10
– Ubuntu 16.04
– Debian 7
– Debian 8

You can set up the repo by running the following on the command line as root:

eval $(cat /etc/os-release)
 
DIST="xUbuntu_${VERSION_ID}"
[ "$ID" = "debian" ] && DIST="Debian_${VERSION_ID}.0"
 
curl -s http://download.opensuse.org/repositories/home:/holland-backup/${DIST}/Release.key | sudo apt-key add -
echo "deb http://download.opensuse.org/repositories/home:/holland-backup/${DIST}/ ./" > /etc/apt/sources.list.d/holland.list

Then install Holland:

apt-get update
apt-get install holland holland-mysqldump holland-common

How to install Holland on CentOS and RedHat

The holland packages exist in the EPEL repositories. Using the list below, install the EPEL repo for your distro:

# CentOS 5 / RedHat 5
rpm -ivh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
 
# CentOS 6 / RedHat 6
rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

# CentOS 7 / RedHat 7
rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

Then install Holland by running:

yum install holland holland-mysqldump holland-common

How to configure Holland

Now that Holland is installed, it’s time to configure it.

First create the backup directory where you will be storing your database backups:

mkdir -p /var/lib/mysqlbackup

Then configure holland to store the backups in the directory created above:

vim /etc/holland/holland.conf
...
backup_directory = /var/lib/mysqlbackup
backupsets = default
...

Next, set up the default backupset within Holland, which controls the retention period, compression, credentials, etc.:

cat << EOF > /etc/holland/backupsets/default.conf
 
## Default Backup-Set
##
## Backs up all MySQL databases in a one-file-per-database fashion using
## lightweight in-line compression and engine auto-detection. This backup-set
## is designed to provide reliable backups "out of the box", however it is
## generally advisable to create additional custom backup-sets to suit
## one's specific needs.
##
## For more information about backup-sets, please consult the online Holland
## documentation. Fully-commented example backup-sets are also provided, by
## default, in /etc/holland/backupsets/examples.
 
[holland:backup]
plugin = mysqldump
backups-to-keep = 7
auto-purge-failures = yes
purge-policy = after-backup
estimated-size-factor = 1.0
 
# This section defines the configuration options specific to the backup
# plugin. In other words, the name of this section should match the name
# of the plugin defined above.
[mysqldump]
file-per-database       = yes
#lock-method        = auto-detect
#databases          = "*"
#tables             = "*"
#stop-slave         = no
#bin-log-position   = no
 
# The following section is for compression. The default, unless the
# mysqldump provider has been modified, is to use inline fast gzip
# compression (which is identical to the commented section below).
[compression]
method = gzip
options = "--rsyncable"
 
[mysql:client]
defaults-extra-file       = /root/.my.cnf
EOF

In order for Holland to back up the databases, set up /root/.my.cnf by running:

cat << EOF > /root/.my.cnf
[client]
user=root
password=your_root_mysql_password_here
EOF
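
As this file contains the MySQL root password, it is worth tightening its permissions (a precaution, not something Holland strictly requires):

chmod 600 /root/.my.cnf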

Now set up the nightly cronjob that will run Holland:

cat << EOF > /etc/cron.d/holland
30 3 * * * root /usr/sbin/holland -q bk
EOF

Finally, run the backup job to ensure Holland works. Please note this will lock your tables when it runs, so do not run this during peak times as it could cause downtime for your site or application!

/usr/sbin/holland -q bk

You can verify Holland ran successfully by checking the logs:

tail -f /var/log/holland/holland.log
2015-12-16 03:58:01,789 [INFO] Wrote backup manifest /var/lib/mysqlbackup/default/20151216_035801/backup_data/MANIFEST.txt
2015-12-16 03:58:01,793 [INFO] Executing: /usr/bin/mysqldump --defaults-file=/var/lib/mysqlbackup/default/20151216_035801/my.cnf --flush-privileges --max-allowed-packet=128M --lock-tables mysql
2015-12-16 03:58:01,888 [ERROR] /usr/bin/mysqldump[25783]: -- Warning: Skipping the data of table mysql.event. Specify the --events option explicitly.
2015-12-16 03:58:01,888 [INFO] Final on-disk backup size 181.77KB
2015-12-16 03:58:01,889 [INFO] 26.47% of estimated size 686.67KB
2015-12-16 03:58:01,889 [INFO] Backup completed in 0.24 seconds
2015-12-16 03:58:01,902 [INFO] Purged default/20151209_035801
2015-12-16 03:58:01,902 [INFO] 1 backups purged
2015-12-16 03:58:01,909 [INFO] Released lock /etc/holland/backupsets/default.conf
2015-12-16 03:58:01,909 [INFO] --- Ending backup run ---
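
You can also confirm the dumps landed on disk. With the settings above, each run creates a timestamped directory under the backup_directory:

ls /var/lib/mysqlbackup/default/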

Holland Tips and Tricks

Below are some of the more common tips and tricks for working with Holland. They are broken down as follows:
1. Alternate username for Holland
2. Change Default Retention
3. Hourly Backups
4. Backing up 2 or more database servers

1. Alternate username for Holland

For security purposes, a client may want to have a user other than root performing the Holland backups. To configure this, first create a new MySQL user for Holland:

mysql
GRANT SELECT, RELOAD, SUPER, LOCK TABLES, REPLICATION CLIENT, SHOW VIEW ON *.* TO 'holland_backup'@'localhost' IDENTIFIED BY 'secretpassword';
flush privileges;

Now update /etc/holland/backupsets/default.conf to reflect the new credentials:

vi /etc/holland/backupsets/default.conf
...
[mysql:client]
# defaults-extra-file  = /root/.my.cnf
user = holland_backup
password = secretpassword
...
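
To verify the new credentials without waiting for the nightly cron, most Holland versions support a dry run, which walks through the backupset without writing a backup:

/usr/sbin/holland backup --dry-run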

2. Change Default Retention

This guide sets Holland to keep 7 backups of your databases. Please note that 7 backups only equals 7 days of retention if you are running a single Holland backup job each night.

With that said, if you wanted to configure a retention of 14 days, you would update “backups-to-keep” in /etc/holland/backupsets/default.conf as shown below:

vi /etc/holland/backupsets/default.conf
...
[holland:backup]
plugin = mysqldump
backups-to-keep = 14
auto-purge-failures = yes
...

3. Hourly Backups

You can configure Holland to be more aggressive with how often it runs. Assuming we have a MySQL slave server with a small database, and we want:
– Holland backups every 4 hours
– 7 day retention

You would first update the cronjob as follows:

vi /etc/cron.d/holland
30 */4 * * * root /usr/sbin/holland -q bk

Now update the “backups-to-keep” in /etc/holland/backupsets/default.conf to keep 42 backups (6 backups a day * 7 days), as shown below:

vi /etc/holland/backupsets/default.conf
...
[holland:backup]
plugin = mysqldump
backups-to-keep = 42
auto-purge-failures = yes
...

4. Backing up 2 or more database servers

An environment may have two or more database servers. Perhaps they reside on another server, or perhaps you are using something like Rackspace Cloud Databases or Amazon RDS. Below is how you can configure Holland to back up multiple targets:

First, create a copy of the default.conf we configured in this guide to serve as a starting point:

cd /etc/holland/backupsets/
cp default.conf clouddatabase.conf

Now update the clouddatabase.conf to configure the remote database server’s hostname and credentials. You must comment out the defaults-extra-file line as shown below:

vim clouddatabase.conf

...
[mysql:client]
#defaults-extra-file = /root/.my.cnf

# define external database information

host     = xxxxxxxxx.rackspaceclouddb.com
user     = holland
password = your_holland_mysql_password_here
port     = 3306
...

Now update your retention settings:

vi /etc/holland/backupsets/clouddatabase.conf
...
[holland:backup]
plugin = mysqldump
backups-to-keep = 7
auto-purge-failures = yes
...

Then update the holland.conf to include your second backupset:

vim /etc/holland/holland.conf
...
backupsets = default, clouddatabase
...

Finally, run the backup job to ensure Holland works. Please note this will lock your tables while it runs, so do not run it during peak times as it could cause downtime for your site or application!

/usr/sbin/holland -q bk

How to install and configure Lsyncd

People looking to create a load balanced web server solution often ask how they can keep their web servers in sync with each other. There are many ways to go about this: NFS, lsyncd, rsync, etc. This guide will discuss using Lsyncd to keep the slave web servers in sync every 20 seconds.

Taken directly from the vendor’s website: Lsyncd watches a local directory tree’s event monitor interface (inotify or fsevents). It aggregates and combines events for a few seconds and then spawns one (or more) process(es) to synchronize the changes. By default this is rsync. Lsyncd is thus a light-weight live mirror solution that is comparatively easy to install, not requiring new filesystems or block devices, and does not hamper local filesystem performance.

This article is broken down by operating system, as each has some minor catches. Simply scroll down to your desired operating system listed below:
CentOS 6 – How To Install And Configure Lsyncd
CentOS 7 – How To Install And Configure Lsyncd
Ubuntu 12.04 – How To Install And Configure Lsyncd
Ubuntu 14.04 / 16.04 – How To Install And Configure Lsyncd

CentOS 6 – How To Install And Configure Lsyncd
Install Lsyncd via yum. Please note, this will automatically set up:
– Lsyncd 2.1.5
– /etc/logrotate.d/lsyncd
– /etc/init.d/lsyncd
– A placeholder for /etc/lsyncd.conf

Install it by running:

yum -y install lsyncd
chkconfig lsyncd on

Now set up the lsyncd configuration:

vim /etc/lsyncd.conf
 
settings {
   logfile = "/var/log/lsyncd/lsyncd.log",
   statusFile = "/var/log/lsyncd/lsyncd-status.log",
   statusInterval = 20
}
servers = {
 "x.x.x.x",
 "x.x.x.x",
 "x.x.x.x"
}
 
for _, server in ipairs(servers) do
sync {
    default.rsyncssh,
    source="/var/www/",
    host=server,
    targetdir="/var/www/",
    excludeFrom="/etc/lsyncd-excludes.txt",
    rsync = {
        compress = true,
        archive = true,
        verbose = true,
        rsh = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
    }
}
end

Create the placeholder for lsyncd-excludes.txt:

touch /etc/lsyncd-excludes.txt
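
One catch with the rsyncssh setup: lsyncd pushes changes to each server over SSH as root, so the master needs passwordless key-based access to every slave before the service will sync. A minimal sketch, assuming root SSH logins are allowed on the slaves:

ssh-keygen -t rsa
ssh-copy-id root@x.x.x.x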

Finally, start the service:

service lsyncd start
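
Once it is running, the log and status files defined in the settings block are the quickest way to confirm syncs are firing:

tail -f /var/log/lsyncd/lsyncd.log
cat /var/log/lsyncd/lsyncd-status.log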

CentOS 7 – How To Install And Configure Lsyncd

Install Lsyncd via yum. Please note, this will automatically set up:
– Lsyncd 2.1.5
– /etc/logrotate.d/lsyncd
– The lsyncd systemd service
– A placeholder for /etc/lsyncd.conf

Install it by running:

yum -y install lsyncd
systemctl enable lsyncd.service

Now set up the lsyncd configuration:

vim /etc/lsyncd.conf
 
settings {
   logfile = "/var/log/lsyncd/lsyncd.log",
   statusFile = "/var/log/lsyncd/lsyncd-status.log",
   statusInterval = 20
}
servers = {
 "x.x.x.x",
 "x.x.x.x",
 "x.x.x.x"
}
 
for _, server in ipairs(servers) do
sync {
    default.rsyncssh,
    source="/var/www/",
    host=server,
    targetdir="/var/www/",
    excludeFrom="/etc/lsyncd-excludes.txt",
    rsync = {
        compress = true,
        archive = true,
        verbose = true,
        rsh = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
    }
}
end

Create the placeholder for lsyncd-excludes.txt and the /var/log/lsyncd directory:

touch /etc/lsyncd-excludes.txt
mkdir /var/log/lsyncd

Finally, start the service

systemctl start lsyncd.service

Ubuntu 12.04 – How To Install Lsyncd

Install Lsyncd via apt. Please note, this will automatically set up:
– Lsyncd 2.0.4
– /etc/init.d/lsyncd

But it will not set up:
– /etc/logrotate.d/lsyncd
– /etc/lsyncd/lsyncd.conf.lua

Install it by running:

apt-get update
apt-get install lsyncd

Now set up the lsyncd configuration:

mkdir /etc/lsyncd
vim /etc/lsyncd/lsyncd.conf.lua
 
settings = {
logfile = "/var/log/lsyncd/lsyncd.log",
statusFile = "/var/log/lsyncd/lsyncd-status.log",
statusInterval = 20
}

servers = {
 "x.x.x.x",
 "x.x.x.x",
 "x.x.x.x"
}

for _, server in ipairs(servers) do
sync { 
    default.rsync, 
    source="/var/www/",
    target=server..":/var/www/",
    excludeFrom="/etc/lsyncd/lsyncd-excludes.txt",
    rsyncOps={"-e", "/usr/bin/ssh -o StrictHostKeyChecking=no", "-avz"}
}
end

Create the placeholder for lsyncd-excludes.txt and the logs directory:

touch /etc/lsyncd/lsyncd-excludes.txt
mkdir /var/log/lsyncd

Set up the logrotate script:

vim /etc/logrotate.d/lsyncd

/var/log/lsyncd/*log {
    missingok
    notifempty
    sharedscripts
    postrotate
    if [ -f /var/lock/lsyncd ]; then
      /usr/sbin/service lsyncd restart > /dev/null 2>/dev/null || true
    fi
    endscript
}

Finally, start the service

service lsyncd start

Ubuntu 14.04 and 16.04 – How To Install Lsyncd

Install Lsyncd via apt. Please note, this will automatically set up:
– Lsyncd 2.1.5
– /etc/init.d/lsyncd

But it will not set up:
– /etc/logrotate.d/lsyncd
– /etc/lsyncd/lsyncd.conf.lua

Install it by running:

apt-get update
apt-get install lsyncd

Now set up the lsyncd configuration:

mkdir /etc/lsyncd
vim /etc/lsyncd/lsyncd.conf.lua
 
settings {
   logfile = "/var/log/lsyncd/lsyncd.log",
   statusFile = "/var/log/lsyncd/lsyncd-status.log",
   statusInterval = 20
}
servers = {
 "x.x.x.x",
 "x.x.x.x",
 "x.x.x.x"
}
 
for _, server in ipairs(servers) do
sync {
    default.rsyncssh,
    source="/var/www/",
    host=server,
    targetdir="/var/www/",
    excludeFrom="/etc/lsyncd/lsyncd-excludes.txt",
    rsync = {
        compress = true,
        archive = true,
        verbose = true,
        rsh = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
    }
}
end

Create the placeholder for lsyncd-excludes.txt and the logs directory:

touch /etc/lsyncd/lsyncd-excludes.txt
mkdir /var/log/lsyncd

Set up the logrotate script:

vim /etc/logrotate.d/lsyncd

/var/log/lsyncd/*log {
    missingok
    notifempty
    sharedscripts
    postrotate
    if [ -f /var/lock/lsyncd ]; then
      /usr/sbin/service lsyncd restart > /dev/null 2>/dev/null || true
    fi
    endscript
}

Finally, start the service

service lsyncd start

Note: On Ubuntu 16.04, if ‘service lsyncd start’ does not start the daemon, manually start lsyncd once by running the commands below; systemd will then work with it from here on out:

/usr/bin/lsyncd /etc/lsyncd/lsyncd.conf.lua
service lsyncd stop
service lsyncd start