Exporting and importing compressed MySQL Dumps

Typically when creating MySQL dumps, I prefer to use Holland. However, sometimes you just need to be able to dump and import a database for various reasons.

Below are two quick commands for doing this with compressed files. If you have a 30G database dump, you probably don’t want to waste time uncompressing the whole thing first and using up valuable disk space. And you certainly don’t want to dump an uncompressed 30G file onto your file system in the first place.

Export to a compressed MySQL dump

This command will allow you to compress a database dump as you dump the database. Just be sure to remove the {} when you run the command, and replace {user} and {database} with your MySQL user and database name:

mysqldump -u {user} -p {database} | gzip > {database}.sql.gz

Or if you prefer to add a timestamp:

mysqldump -u {user} -p {database} | gzip > `date -I`.{database}.sql.gz
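Once the dump completes, it is cheap to verify that the compressed file is not truncated or corrupt before relying on it, since gzip exits non-zero on a damaged archive:

gzip -t {database}.sql.gz && echo "dump archive OK"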

Import a compressed MySQL dump

This single command will allow you to import the compressed dump directly into MySQL without having to uncompress it first. This assumes you have already created the database within MySQL.

The command to import it is as follows:

gzip -dc < {database}.sql.gz | mysql -u {user} -p {database}
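If the dump is large and you would like a progress bar while it loads, the pv utility (assuming it is installed on your system) can feed the pipe instead:

pv {database}.sql.gz | gzip -dc | mysql -u {user} -p {database}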

Bind mounts

While no one likes opening up /etc/fstab and seeing 30 bind mounts (a sure sign of an over-caffeinated sysadmin), bind mounts can have their place in the world of system administration.

To put it in plain English, a bind mount is simply the process of mounting a specific directory from one path into another. To describe it another way, it allows you to mount part of a filesystem in a second location, and have it accessible from both locations.

Some common uses for bind mounts include:
– Providing access to log files from within a user’s home directory
– Providing access to files that reside outside a chroot
– Setting up NFSv4 /exports
– Setting up a link that crosses filesystems, for example, making a directory that resides on drive /dev/xvda1 appear on another drive, /dev/xvdb1

Basic bind mount example

Setting up a bind mount is simple. In the example below, we are going to bind mount our website folder /var/www/vhosts/domain.com into our user’s folder: /home/bob

[root@server ~]# mkdir /home/bob/domain.com
[root@server ~]# mount --bind /var/www/vhosts/domain.com /home/bob/domain.com

Then set the mount point to be persistent at boot:

[root@server ~]# vim /etc/fstab
/var/www/vhosts/domain.com  /home/bob/domain.com  none  bind  0 0
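To confirm the fstab entry is valid without rebooting, you can unmount it, remount everything, and inspect the result (findmnt ships with util-linux on reasonably modern distributions):

[root@server ~]# umount /home/bob/domain.com
[root@server ~]# mount -a
[root@server ~]# findmnt /home/bob/domain.com   # should show the bind source and target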

Read-only bind mounts

Here is an example for setting up a read-only bind mount. In this example, we are going to bind mount our Apache logs in /var/log/httpd into our user’s folder: /home/bob, and only allow them read-only permissions so they can troubleshoot their website:

[root@server ~]# mkdir /home/bob/logs
[root@server ~]# mount --bind /var/log/httpd /home/bob/logs
[root@server ~]# mount --bind -o remount,ro /home/bob/logs

Then set the mount point to be persistent at boot, retaining the read-only permissions. This one is a bit more tricky, as you cannot set the read-only flag in /etc/fstab. Therefore, you need to force the remount at startup by adding the command to /etc/rc.local so it’ll run at boot time:

[root@server ~]# mkdir /home/bob/logs
[root@server ~]# vim /etc/fstab
/var/log/httpd  /home/bob/logs  none  bind  0 0

[root@server ~]# chmod +x /etc/rc.local
[root@server ~]# vim /etc/rc.local
mount --bind -o remount,ro /home/bob/logs
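As a side note, newer versions of util-linux (2.27 and later, an assumption worth verifying with mount --version on your system) can apply the read-only flag in a single step, and also accept it directly in /etc/fstab:

[root@server ~]# mount --bind -o ro /var/log/httpd /home/bob/logs

# Equivalent /etc/fstab entry on those newer versions:
/var/log/httpd  /home/bob/logs  none  bind,ro  0 0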

Then consider rebooting your system to ensure the directory mounts properly in read-only mode. You will know it’s successful when you cannot delete a file, or write a new file to that directory, even as the root user. For example:

[root@server ~]# touch /home/bob/logs/mylogfile
touch: cannot touch `/home/bob/logs/mylogfile': Read-only file system

[root@server ~]# rm -f /home/bob/logs/messages
rm: cannot remove `/home/bob/logs/messages': Read-only file system

Looking at what’s underneath a mount point

Sometimes a sysadmin may have set up a new drive because the existing one was full. They then transferred a large directory over to the new disk and symlinked it back to the original location. However, they forgot to clean up the directory on the original disk, so how does one find what’s using up all the space when du -sh is not able to see what’s in there?

In other words, if a filesystem is mounted on a directory that contains files, any files below the mount point are hidden by the mounted filesystem. So how can we easily see what’s in that directory without unmounting the filesystem?

You can use bind mounts to essentially ‘peek’ under a mount point. Let’s say the mount point you want to peek under is /home/mydirectory. You can peek under it by:

[root@server ~]# mkdir /tmp/mnt-peek
[root@server ~]# mount --bind /home/mydirectory /tmp/mnt-peek

Now you can see what files are ‘hidden’ by checking in /tmp/mnt-peek.
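At this point, du can finally account for the hidden files; for example:

[root@server ~]# du -sh /tmp/mnt-peek/*   # sizes of everything hiding under the mount point

Once you’re done, be sure to unmount it: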

[root@server ~]# umount /tmp/mnt-peek

Setup NFSv4 on CentOS

NFSv4 is a tried and tested method of allowing client servers to access files over a network, in a very similar fashion to how the files would be accessed on a local file system. As a very mature piece of software, it has been successfully developed and used in production environments for over 15 years, and it is still widely accepted and supported with a long life ahead of it.

Setting it up is pretty easy and straightforward. As this is a network file system, it is strongly recommended to set up a private switch or private network between the servers to ensure the lowest latency, as well as better security.

NFS Server – Installation

Install the required packages on the NFS server:

# CentOS 5
[root@nfs01 ~]# yum install portmap nfs-utils nfs4-acl-tools -y

# CentOS 6 and CentOS 7
[root@nfs01 ~]# yum install rpcbind nfs-utils nfs4-acl-tools -y

NFS Server – Configuration

Out of the box, NFSv4 ships with a default that is sorely outdated at this time:
– It enables only 8 NFS threads

We are going to enable 64 NFS threads instead, since the default was meant for much older systems and you will most likely run into IO problems before you hit this limit.

[root@nfs01 ~]# vim /etc/sysconfig/nfs
RPCNFSDCOUNT=64
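Once the NFS services are started later in this guide, you can confirm the new thread count took effect; the first number on the th line is the running thread count:

[root@nfs01 ~]# grep ^th /proc/net/rpc/nfsd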

Next, set the domain, as all servers and clients should reside within the same domain:

[root@nfs01 ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Open the firewall to allow your private network access to the NFS services. You may have to adjust these rules, as my private network resides on eth2. Do not allow this on the public interface without adjusting the source IPs accordingly!

# CentOS 5 and CentOS 6
[root@nfs01 ~]# vim /etc/sysconfig/iptables
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 662 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 662 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 892 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 892 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 2049 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p tcp -m tcp --dport 32803 -j ACCEPT
-A INPUT -i eth2 -s 192.168.1.0/24 -p udp -m udp --dport 32769 -j ACCEPT

# CentOS 7
[root@nfs01 ~]# vim /etc/firewalld/services/nfs.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>NFS</short>
  <description>NFS service</description>
  <port protocol="tcp" port="111"/>
  <port protocol="udp" port="111"/>
  <port protocol="tcp" port="662"/>
  <port protocol="udp" port="662"/>
  <port protocol="tcp" port="892"/>
  <port protocol="udp" port="892"/>
  <port protocol="tcp" port="2049"/>
  <port protocol="udp" port="2049"/>
  <port protocol="tcp" port="32803"/>
  <port protocol="udp" port="32803"/>
  <port protocol="tcp" port="38467"/>
  <port protocol="udp" port="38467"/>
  <port protocol="tcp" port="32769"/>
  <port protocol="udp" port="32769"/>
</service>

Then apply the rules by:
[root@nfs01 ~]# systemctl reload firewalld.service

Now add a zone to the private network interface and set PEERDNS to no:
[root@nfs01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
...
PEERDNS=no
ZONE=internal
...

Then apply the changes by:
[root@nfs01 ~]# ifdown eth2 && ifup eth2

Now add the NFS rules to the private network interface:
[root@nfs01 ~]# firewall-cmd --zone=internal --add-interface eth2
[root@nfs01 ~]# firewall-cmd --zone=internal --add-service=nfs
[root@nfs01 ~]# firewall-cmd --zone=internal --add-interface eth2 --permanent
[root@nfs01 ~]# firewall-cmd --zone=internal --add-service=nfs --permanent

NFSv4 uses a pseudo filesystem for the exports. A pseudo filesystem allows NFS clients to browse the hierarchy of exported file systems, but remains limited to paths that are actually exported. There are a number of ways to go about this, but for this guide, we’ll assume the pseudo filesystem root will be /exports, and we’ll simply bind mount the desired directories into the /exports folder.

For this example, I am looking to export /data. So we’ll bind mount that into the /exports folder as follows:

[root@nfs01 ~]# mkdir /data
[root@nfs01 ~]# touch /data/test-file
[root@nfs01 ~]# mkdir /exports
[root@nfs01 ~]# mkdir /exports/data
[root@nfs01 ~]# echo "/data  /exports/data  none  bind  0 0" >> /etc/fstab
[root@nfs01 ~]# mount -a
[root@nfs01 ~]# ls -al /exports/data
total 8
drwxr-xr-x 2 root     root     4096 Jan 11 22:19 .
drwxr-xr-x 3 root     root     4096 Jan 11 22:03 ..
-rw-r--r-- 1 root     root        0 Jan 11 22:03 test-file

If you can see the file test-file within /exports/data, then everything is set up properly.

Export the directory to be shared, along with its permissions, in /etc/exports:

[root@nfs01 ~]# vi /etc/exports
/exports      192.168.1.0/24(ro,no_subtree_check,fsid=0,crossmnt)
/exports/data 192.168.1.0/24(rw,no_subtree_check,no_root_squash)
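Later on, if you modify /etc/exports while the NFS services are running, the export table can be re-read without a restart:

[root@nfs01 ~]# exportfs -ra   # re-export everything listed in /etc/exports
[root@nfs01 ~]# exportfs -v    # verify the active exports and their options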

Now start the services, and enable them to start at boot time:

# CentOS 5
[root@nfs01 ~]# service portmap start; chkconfig portmap on
[root@nfs01 ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@nfs01 ~]# service nfs start; chkconfig nfs on

# CentOS 6
[root@nfs01 ~]# service rpcbind start; chkconfig rpcbind on
[root@nfs01 ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@nfs01 ~]# service nfs start; chkconfig nfs on

# CentOS 7
[root@nfs01 ~]# systemctl start rpcbind nfs-idmap nfs-server
[root@nfs01 ~]# systemctl enable rpcbind nfs-idmap nfs-server

Check to make sure the services are running:

[root@nfs01 ~]# showmount -e
Export list for nfs01:
/exports/data 192.168.1.0/24
/exports     192.168.1.0/24

[root@nfs01 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp  41418  mountd
    100005    1   tcp  50139  mountd
    100005    2   udp  50228  mountd
    100005    2   tcp  52070  mountd
    100005    3   udp  33496  mountd
    100005    3   tcp  54673  mountd
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049  nfs_acl
    100227    3   tcp   2049  nfs_acl
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049  nfs_acl
    100227    3   udp   2049  nfs_acl
    100021    1   udp  38895  nlockmgr
    100021    3   udp  38895  nlockmgr
    100021    4   udp  38895  nlockmgr
    100021    1   tcp  39908  nlockmgr
    100021    3   tcp  39908  nlockmgr
    100021    4   tcp  39908  nlockmgr

NFS Client – Installation

Now that the NFS server is ready, the NFS clients need to be set up to connect. Install the required packages on the NFS clients by:

# CentOS 5
[root@client ~]# yum install portmap nfs-utils nfs4-acl-tools -y

# CentOS 6 and CentOS 7
[root@client ~]# yum install rpcbind nfs-utils nfs4-acl-tools -y

Next, set the domain, as all servers and clients should reside within the same domain:

[root@client ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Now start the services, and enable them to start at boot time:

# CentOS 5
[root@client ~]# service portmap start; chkconfig portmap on
[root@client ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@client ~]# chkconfig netfs on

# CentOS 6
[root@client ~]# service rpcbind start; chkconfig rpcbind on
[root@client ~]# service rpcidmapd start; chkconfig rpcidmapd on
[root@client ~]# chkconfig netfs on

# CentOS 7
[root@client ~]# systemctl start rpcbind nfs-idmap
[root@client ~]# systemctl enable rpcbind nfs-idmap

NFS Client – Configuration

Confirm the NFS clients can see the NFS server:

[root@client ~]# showmount -e 192.168.1.1
Export list for 192.168.1.1:
/exports/data 192.168.1.0/24
/exports      192.168.1.0/24

Configure the mount point in /etc/fstab:

[root@client ~]# vim /etc/fstab
192.168.1.1:/data  /data  nfs4  sec=sys,noatime  0  0

Now create the placeholder directory on the client, mount, and verify it works:

[root@client ~]# mkdir /data
[root@client ~]# mount -a
[root@client ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       14G  1.8G   11G  15% /
tmpfs                 939M     0  939M   0% /dev/shm
/dev/sda1             477M   74M  378M  17% /boot
192.168.1.1:/data      14G  1.9G   11G  15% /data
[root@client ~]#
[root@client ~]# grep /data /proc/mounts
192.168.1.1:/data/ /data nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.164,minorversion=0,local_lock=none,addr=192.168.1.1 0 0
[root@client ~]#
[root@client ~]# touch /data/test-file01
[root@client ~]# ls -al /data/test-file01
-rw-r--r-- 1 root root 0 Dec 19 17:57 /data/test-file01

Finally, confirm the user mapping is the same on both servers. You can test to ensure both the server and the client show the same UIDs by:

# Create a user on the NFS server:
[root@nfs01 ~]# useradd -u 6000 testuser

# Create the same user on the NFS client:
[root@client ~]# useradd -u 6000 testuser

# Set the ownership of a test file on the NFS server:
[root@nfs01 ~]# touch /data/test-file01
[root@nfs01 ~]# chown testuser:testuser /data/test-file01

# Check the ownership of the test file on the NFS server:
[root@nfs01 ~]# ls -al /data/test-file01

# Confirm the client sees the same ownership:
[root@client ~]# ls -al /data/test-file01
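If the ownership shows up as nobody:nobody on the client, the most common culprit is a Domain mismatch in /etc/idmapd.conf on one side or the other. After correcting it, restart the idmap service and clear the client’s cached mappings; a quick sketch (the nfsidmap cache flush assumes a reasonably recent nfs-utils):

# CentOS 6
[root@client ~]# service rpcidmapd restart

# CentOS 7
[root@client ~]# systemctl restart nfs-idmap
[root@client ~]# nfsidmap -c   # clear the client's ID-mapping keyring cache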

Setup NFSv4 on Ubuntu or Debian

NFSv4 is a tried and tested method of allowing client servers to access files over a network, in a very similar fashion to how the files would be accessed on a local file system. As a very mature piece of software, it has been successfully developed and used in production environments for over 15 years, and it is still widely accepted and supported with a long life ahead of it.

Setting it up is pretty easy and straightforward. As this is a network file system, it is strongly recommended to set up a private switch or private network between the servers to ensure the lowest latency, as well as better security.

NFS Server – Installation

Install the required packages on the NFS server:

# Ubuntu and Debian
[root@nfs01 ~]# apt-get update
[root@nfs01 ~]# apt-get install rpcbind nfs-common nfs4-acl-tools nfs-kernel-server

NFS Server – Configuration

Out of the box, NFSv4 ships with a couple of defaults that are sorely outdated at this time:
– It assigns random ephemeral ports when the daemons start up
– It enables only 8 NFS threads

To make it easier for admins to lock down their firewalls, we are going to set static ports, and we will also enable 64 NFS threads, since the default was meant for much older systems and you will most likely run into IO problems before you hit this limit.

Stop the services so we can unload the lockd kernel module and configure the services. This step cannot be skipped!

# Ubuntu 12.04 and Ubuntu 14.04
[root@nfs01 ~]# service nfs-kernel-server stop
[root@nfs01 ~]# service statd stop
[root@nfs01 ~]# service idmapd stop
[root@nfs01 ~]# service rpcbind stop
[root@nfs01 ~]# modprobe -r nfsd nfs lockd

# Debian 7
[root@nfs01 ~]# service nfs-kernel-server stop
[root@nfs01 ~]# service nfs-common stop
[root@nfs01 ~]# service rpcbind stop
[root@nfs01 ~]# modprobe -r nfsd nfs lockd

Update the NFS thread count by:

[root@nfs01 ~]# vim /etc/default/nfs-kernel-server
...
RPCNFSDCOUNT=64
RPCNFSDPRIORITY=0
RPCMOUNTDOPTS="--manage-gids"
NEED_SVCGSSD="no"
RPCSVCGSSDOPTS=""
...
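The static ports promised above are not set anywhere by default. The rpcinfo output later in this guide shows mountd on port 892 and nlockmgr on 32769/32803, which suggests pinning them roughly as follows; this is a sketch reconstructing that step, so verify the file names and options on your release:

# /etc/default/nfs-kernel-server: pin mountd to a static port
# (replaces the RPCMOUNTDOPTS value shown above)
RPCMOUNTDOPTS="-p 892 --manage-gids"

# /etc/modprobe.d/lockd.conf: pin the NFS lock manager ports
options lockd nlm_udpport=32769 nlm_tcpport=32803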

Next, set the domain, as all servers and clients should reside within the same domain:

[root@nfs01 ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Update nfs-common to disable statd and rpc.gssd, then require idmapd:

# Ubuntu 12.04 and Ubuntu 14.04
[root@nfs01 ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_GSSD=no

# Debian 7
[root@nfs01 ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_IDMAPD=yes
NEED_GSSD=no

Open the firewall to allow your private network access to the NFS services. You may have to adjust these rules, as my private network resides on eth2. Do not allow this on the public interface without adjusting the source IPs accordingly!

[root@nfs01 ~]# ufw allow in on eth2 from 192.168.1.0/24 to any proto tcp
[root@nfs01 ~]# ufw allow in on eth2 from 192.168.1.0/24 to any proto udp

NFSv4 uses a pseudo filesystem for the exports. A pseudo filesystem allows NFS clients to browse the hierarchy of exported file systems, but remains limited to paths that are actually exported. There are a number of ways to go about this, but for this guide, we’ll assume the pseudo filesystem root will be /exports, and we’ll simply bind mount the desired directories into the /exports folder.

For this example, I am looking to export /data. So we’ll bind mount that into the /exports folder as follows:

[root@nfs01 ~]# mkdir /data
[root@nfs01 ~]# touch /data/test-file
[root@nfs01 ~]# mkdir /exports
[root@nfs01 ~]# mkdir /exports/data
[root@nfs01 ~]# echo "/data  /exports/data  none  bind  0 0" >> /etc/fstab
[root@nfs01 ~]# mount -a
[root@nfs01 ~]# ls -al /exports/data
total 8
drwxr-xr-x 2 root     root     4096 Jan 11 22:19 .
drwxr-xr-x 3 root     root     4096 Jan 11 22:03 ..
-rw-r--r-- 1 root     root        0 Jan 11 22:03 test-file

If you can see the file test-file within /exports/data, then everything is set up properly.

Export the directory to be shared, along with its permissions, in /etc/exports:

[root@nfs01 ~]# vim /etc/exports

/exports      192.168.1.0/24(ro,no_subtree_check,fsid=0,crossmnt)
/exports/data 192.168.1.0/24(rw,no_subtree_check,no_root_squash)

Now start the services, and ensure they will start at boot time:

# Ubuntu 12.04 and Ubuntu 14.04
[root@nfs01 ~]# service rpcbind start
[root@nfs01 ~]# service idmapd start
[root@nfs01 ~]# service nfs-kernel-server start; update-rc.d nfs-kernel-server enable

# Debian 7
[root@nfs01 ~]# service rpcbind start; insserv rpcbind
[root@nfs01 ~]# service nfs-common start; insserv nfs-common
[root@nfs01 ~]# service nfs-kernel-server start; insserv nfs-kernel-server

Check to make sure the services are running:

[root@nfs01 ~]# showmount -e
Export list for nfs01:
/exports/data 192.168.1.0/24
/exports     192.168.1.0/24

[root@nfs01 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  39482  nlockmgr
    100021    3   udp  39482  nlockmgr
    100021    4   udp  39482  nlockmgr
    100021    1   tcp  60237  nlockmgr
    100021    3   tcp  60237  nlockmgr
    100021    4   tcp  60237  nlockmgr
    100005    1   udp  39160  mountd
    100005    1   tcp  34995  mountd
    100005    2   udp  34816  mountd
    100005    2   tcp  56338  mountd
    100005    3   udp  49147  mountd
    100005    3   tcp  51938  mountd

NFS Client – Installation

Now that the NFS server is ready, the NFS clients need to be set up to connect. Install the required packages on the NFS clients by:

# Ubuntu or Debian
[root@client ~]# apt-get update
[root@client ~]# apt-get install rpcbind nfs-common nfs4-acl-tools

NFS Client – Configuration

Stop the services so we can unload the lockd kernel module and configure the services. This step cannot be skipped!

# Ubuntu 12.04 and Ubuntu 14.04
[root@client ~]# service statd stop
[root@client ~]# service idmapd stop
[root@client ~]# service rpcbind stop
[root@client ~]# modprobe -r nfsd nfs lockd

# Debian 7
[root@client ~]# service nfs-common stop
[root@client ~]# service rpcbind stop
[root@client ~]# modprobe -r nfsd nfs lockd

Next, set the domain, as all servers and clients should reside within the same domain:

[root@client ~]# vim /etc/idmapd.conf
[General]
Domain = yourdomain.com

Update nfs-common to disable statd and rpc.gssd, then require idmapd:

# Ubuntu 12.04 and Ubuntu 14.04
[root@client ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_GSSD=no

# Debian 7
[root@client ~]# vim /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_IDMAPD=yes
NEED_GSSD=no

Now start the services, and ensure they will start at boot time:

# Ubuntu 12.04 and Ubuntu 14.04
[root@client ~]# service rpcbind start
[root@client ~]# service idmapd start

# Debian 7
[root@client ~]# service rpcbind start; insserv rpcbind
[root@client ~]# service nfs-common start; insserv nfs-common

Confirm the NFS clients can see the NFS server:

[root@client ~]# showmount -e 192.168.1.1
Export list for 192.168.1.1:
/exports/data 192.168.1.0/24
/exports      192.168.1.0/24

[root@client ~]# rpcinfo -p 192.168.1.1
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd

Configure the mount point in /etc/fstab:

[root@client ~]# vim /etc/fstab

192.168.1.1:/data  /data  nfs4  sec=sys,noatime  0  0

Now create the placeholder directory on the client, mount, and verify it works:

[root@client ~]# mkdir /data
[root@client ~]# mount -a
[root@client ~]# df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/xvda1           20G  1.2G   18G   7% /
none                4.0K     0  4.0K   0% /sys/fs/cgroup
udev                484M  8.0K  484M   1% /dev
tmpfs                99M  404K   99M   1% /run
none                5.0M     0  5.0M   0% /run/lock
none                495M     0  495M   0% /run/shm
none                100M     0  100M   0% /run/user
192.168.1.1:/data    20G  1.2G   18G   7% /data
[root@client ~]#
[root@client ~]# grep /data /proc/mounts
192.168.1.1:/data /data nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,minorversion=0,local_lock=none,addr=192.168.1.1 0 0
[root@client ~]#
[root@client ~]# touch /data/test-file
[root@client ~]# ls -al /data/test-file
-rw-r--r-- 1 root root 0 Dec 20 01:45 /data/test-file

Finally, confirm the user mapping is the same on both servers. You can test to ensure both the server and the client show the same UIDs by:

# Create a user on the NFS server:
[root@nfs01 ~]# useradd -u 6000 testuser

# Create the same user on the NFS client:
[root@client ~]# useradd -u 6000 testuser

# Set the ownership of a test file on the NFS server:
[root@nfs01 ~]# touch /data/test-file01
[root@nfs01 ~]# chown testuser:testuser /data/test-file01

# Check the ownership of the test file on the NFS server:
[root@nfs01 ~]# ls -al /data/test-file01

# Confirm the client sees the same ownership:
[root@client ~]# ls -al /data/test-file01

Determine table size in MySQL

Understanding which tables within your database contain large amounts of data can be very useful. Tables that hold a ton of data could indicate that some pruning may be needed.

Perhaps the space is taken up by stats data, lead generation, or some other type of data that dates back 10 years. It will be up to the DBAs to determine if any of that data can be archived, of course.

Very large tables can carry a performance penalty if queries are not well written and return too many rows. They can also slow down your database dumps, or make the dumps use up more disk space. And disk space in general is a very real concern!

You can easily determine what tables within your database(s) are using up significant space by running:

mysql> SELECT table_schema as `Database`, table_name AS `Table`,
round(((data_length + index_length) / 1024 / 1024), 2) `Size in MB`
FROM information_schema.TABLES
ORDER BY (data_length + index_length) DESC;

And the returned results look something like the following:

+---------------------+---------------------------------------+------------+
| Database            | Table                                 | Size in MB |
+---------------------+---------------------------------------+------------+
| example_one         | tracker                               |    2854.70 |
| example_one         | corehits                              |    1321.00 |
| example_two         | reg_submissions                       |    1313.14 |
| example_one         | unsubscribes                          |    1312.00 |
| example_one         | members                               |     930.08 |
| example_two         | admin                                 |     939.55 |
| example_one         | submissions                           |     829.91 |
| example_two         | tracker_archive                       |     813.17 |
| example_two         | tracker_archive2                      |     757.56 |
| example_two         | affiliates                            |     749.56 |
| example_two         | post_data                             |     712.92 |
| example_one         | signups                               |     711.52 |
| example_one         | guests                                |     703.31 |
| example_one         | success                               |     586.41 |
| example_two         | tracker                               |     574.70 |
| example_one         | e_data                                |     434.28 |
| example_one         | reg_data                              |     426.88 |
+---------------------+---------------------------------------+------------+
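If you only care about a single database, the same query filters easily on table_schema. For example, using the example_one database from the output above:

mysql> SELECT table_name AS `Table`,
round(((data_length + index_length) / 1024 / 1024), 2) `Size in MB`
FROM information_schema.TABLES
WHERE table_schema = 'example_one'
ORDER BY (data_length + index_length) DESC
LIMIT 10;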

Another possible way to run this, which I recently found on https://www.percona.com/blog/2008/02/04/finding-out-largest-tables-on-mysql-server/, is posted below.

This one shows more detailed information about each table, including how many rows it has, how much data it holds, the index size, the total size, and, in the last column, how large the index is compared to the data. If you see a large index size (the idx column), it could be worth checking whether you have any redundant indexes in place.

The command is:

mysql> SELECT CONCAT(table_schema, '.', table_name),
CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows,
CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA,
CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') idx,
CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size,
ROUND(index_length / data_length, 2) idxfrac
FROM information_schema.TABLES
ORDER BY data_length + index_length DESC
LIMIT 10;

I shortened the column titles a bit so it will fit on this blog, but the output will look something like:

+-----------------------+---------+-------+-------+------------+---------+
| CONCAT(.. table_name) | rows    | DATA  | idx   | total_size | idxfrac |
+-----------------------+---------+-------+-------+------------+---------+
| page_cache            | 38.58M  | 3.09G | 2.97G | 6.06G      |    0.96 |
| exit_pages            | 106.84M | 4.78G | 1.02G | 5.80G      |    0.21 |
| shipping_log004       | 33.94M  | 4.06G | 1.16G | 5.22G      |    0.29 |
| visitor_data          | 9.55M   | 4.61G | 0.09G | 4.70G      |    0.02 |
| orders                | 2.40M   | 1.57G | 0.30G | 1.87G      |    0.19 |
| marketing_info        | 25.16M  | 1.85G | 0.00G | 1.85G      |    0.00 |
| clients               | 1.27M   | 1.29G | 0.13G | 1.42G      |    0.10 |
| directory             | 5.87M   | 0.93G | 0.17G | 1.10G      |    0.18 |
| promos                | 2.31M   | 0.79G | 0.05G | 0.84G      |    0.07 |
| product_information   | 9.33M   | 0.39G | 0.35G | 0.74G      |    0.89 |
+-----------------------+---------+-------+-------+------------+---------+
10 rows in set (0.26 sec)

You may also want to see if you can set a retention policy on the data stored in a table if it’s used for caching or logging purposes.

Perhaps you may need to run OPTIMIZE TABLE to reclaim the physical storage space. Just keep in mind that the OPTIMIZE TABLE command will lock your tables, so it may be best to run it during a low-traffic time. See the following URL for more information:
http://dev.mysql.com/doc/refman/5.7/en/optimize-table.html
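If you do prune a large amount of data, a minimal sketch of that cleanup step, using one of the table names from the sample output above, would be:

mysql> OPTIMIZE TABLE example_one.tracker;

Keep in mind that on InnoDB, OPTIMIZE TABLE maps to a full table rebuild (ALTER TABLE ... FORCE), so it needs enough free disk space to hold a temporary copy of the table while it runs.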