Determine table size in MySQL

Understanding which tables in your database contain large amounts of data can be very useful. Tables holding a ton of data could indicate that some pruning is needed.

Perhaps the space is taken up by stats data, lead generation, or some other type of data dating back 10 years. It will be up to the DBAs to determine whether any of that data can be archived, of course.

Very large tables can incur a performance penalty if queries are not well written and return too many rows. They can also slow down your database dumps, or make them use more disk space. Disk space in general is a very real concern as well!

You can easily determine which tables within your database(s) are using significant space by running:

mysql> SELECT table_schema as `Database`, table_name AS `Table`,
round(((data_length + index_length) / 1024 / 1024), 2) `Size in MB`
FROM information_schema.TABLES
ORDER BY (data_length + index_length) DESC;
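The size math in the query is simply (data_length + index_length) converted from bytes to MB and rounded to two decimals. As a quick sanity check outside of MySQL (the byte counts below are made-up values):

```shell
# Hypothetical data_length and index_length values, in bytes
data_length=2993487872
index_length=0

# Same arithmetic as the query: (data + index) / 1024 / 1024, rounded
awk -v d="$data_length" -v i="$index_length" \
    'BEGIN { printf "%.2f MB\n", (d + i) / 1024 / 1024 }'
# → 2854.81 MB
```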

And the returned results look something like the following:

+---------------------+---------------------------------------+------------+
| Database            | Table                                 | Size in MB |
+---------------------+---------------------------------------+------------+
| example_one         | tracker                               |    2854.70 |
| example_one         | corehits                              |    1321.00 |
| example_two         | reg_submissions                       |    1313.14 |
| example_one         | unsubscribes                          |    1312.00 |
| example_two         | admin                                 |     939.55 |
| example_one         | members                               |     930.08 |
| example_one         | submissions                           |     829.91 |
| example_two         | tracker_archive                       |     813.17 |
| example_two         | tracker_archive2                      |     757.56 |
| example_two         | affiliates                            |     749.56 |
| example_two         | post_data                             |     712.92 |
| example_one         | signups                               |     711.52 |
| example_one         | guests                                |     703.31 |
| example_one         | success                               |     586.41 |
| example_two         | tracker                               |     574.70 |
| example_one         | e_data                                |     434.28 |
| example_one         | reg_data                              |     426.88 |
+---------------------+---------------------------------------+------------+

Another way to run this, which I recently found on https://www.percona.com/blog/2008/02/04/finding-out-largest-tables-on-mysql-server/, is posted below.

This one shows more detailed information about each table, including how many rows it has, the data size, the index size, the total size, and, in the last column, how large the index is relative to the data. If you see a large index size (idx column), it could be worth checking whether you have any redundant indexes in place.

The command is:

mysql> SELECT CONCAT(table_schema, '.', table_name),
       CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows,
       CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA,
       CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') idx,
       CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size,
       ROUND(index_length / data_length, 2) idxfrac
       FROM information_schema.TABLES
       ORDER BY data_length + index_length DESC
       LIMIT 10;

I shortened the column titles a bit so it will fit on this blog, but the output will look something like:

+-----------------------+---------+-------+-------+------------+---------+
| CONCAT(.. table_name) | rows    | DATA  | idx   | total_size | idxfrac |
+-----------------------+---------+-------+-------+------------+---------+
| page_cache            | 38.58M  | 3.09G | 2.97G | 6.06G      |    0.96 |
| exit_pages            | 106.84M | 4.78G | 1.02G | 5.80G      |    0.21 |
| shipping_log004       | 33.94M  | 4.06G | 1.16G | 5.22G      |    0.29 |
| visitor_data          | 9.55M   | 4.61G | 0.09G | 4.70G      |    0.02 |
| orders                | 2.40M   | 1.57G | 0.30G | 1.87G      |    0.19 |
| marketing_info        | 25.16M  | 1.85G | 0.00G | 1.85G      |    0.00 |
| clients               | 1.27M   | 1.29G | 0.13G | 1.42G      |    0.10 |
| directory             | 5.87M   | 0.93G | 0.17G | 1.10G      |    0.18 |
| promos                | 2.31M   | 0.79G | 0.05G | 0.84G      |    0.07 |
| product_information   | 9.33M   | 0.39G | 0.35G | 0.74G      |    0.89 |
+-----------------------+---------+-------+-------+------------+---------+
10 rows in set (0.26 sec)
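The idxfrac column is just index_length divided by data_length; a value near 1 means the indexes take up about as much space as the data itself. For example, using the page_cache row above (2.97G of index against 3.09G of data):

```shell
# idxfrac for the page_cache row: index size / data size
awk 'BEGIN { printf "%.2f\n", 2.97 / 3.09 }'
# → 0.96
```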

If a table holds caching or logging data, see whether you can set a retention policy on it.

Perhaps you need to run OPTIMIZE TABLE to reclaim the physical storage space. Just keep in mind that OPTIMIZE TABLE locks your tables while it runs, so it is best done during a low traffic period. See the following URL for more information:
http://dev.mysql.com/doc/refman/5.7/en/optimize-table.html

Backing up MySQL with Holland

Taken directly from the vendor's website: Holland is an Open Source backup framework originally developed at Rackspace and written in Python. Its goal is to help facilitate backing up databases with greater configurability, consistency, and ease. Holland is capable of backing up other types of data, too. Because of its plugin structure, Holland can be used to back up anything you want by whatever means you want.

Notable Features
– Pluggable Framework
– Supports Multiple Backup Sets
– Database and Table Filtering (Using GLOBs)
– Auto-Detection of Transactional DBs
– Safe use of --single-transaction with mysqldump
– In-Line and Pluggable Compression
– Backups Suitable for Point-In-Time Recovery / Replication
– MySQL + LVM Snapshot and Logical Backups
– PostgreSQL backups using pg_dump

Website: http://hollandbackup.org
Documentation: http://docs.hollandbackup.org

How to install Holland on Ubuntu / Debian

As the packages don't exist in the distros' default repositories, pull them from the official Holland repositories.

This works on the following OSes:
– Ubuntu 12.04
– Ubuntu 14.04
– Ubuntu 14.10
– Ubuntu 16.04
– Debian 7
– Debian 8

You can set up the repo by running the following on the command line as root:

. /etc/os-release

DIST="xUbuntu_${VERSION_ID}"
[ "$ID" = "debian" ] && DIST="Debian_${VERSION_ID}.0"

curl -s http://download.opensuse.org/repositories/home:/holland-backup/${DIST}/Release.key | apt-key add -
echo "deb http://download.opensuse.org/repositories/home:/holland-backup/${DIST}/ ./" > /etc/apt/sources.list.d/holland.list

Then install Holland:

apt-get update
apt-get install holland holland-mysqldump holland-common

How to install Holland on CentOS and RedHat

The Holland packages exist in the EPEL repositories. Using the list below, install the EPEL repo for your distro:

# CentOS 5 / RedHat 5
rpm -ivh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
 
# CentOS 6 / RedHat 6
rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

# CentOS 7 / RedHat 7
rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

Then install Holland by:

yum install holland holland-mysqldump holland-common

How to configure Holland

Now that Holland is installed, it is time to configure it.

First create the backup directory where you will be storing your database backups:

mkdir -p /var/lib/mysqlbackup

Then configure holland to store the backups in the directory created above:

vim /etc/holland/holland.conf
...
backup_directory = /var/lib/mysqlbackup
backupsets = default
...

Next, set up the default backupset within Holland; this controls the retention period, compression, credentials, etc.:

cat << EOF > /etc/holland/backupsets/default.conf
 
## Default Backup-Set
##
## Backs up all MySQL databases in a one-file-per-database fashion using
## lightweight in-line compression and engine auto-detection. This backup-set
## is designed to provide reliable backups "out of the box", however it is
## generally advisable to create additional custom backup-sets to suit
## one's specific needs.
##
## For more information about backup-sets, please consult the online Holland
## documentation. Fully-commented example backup-sets are also provided, by
## default, in /etc/holland/backupsets/examples.
 
[holland:backup]
plugin = mysqldump
backups-to-keep = 7
auto-purge-failures = yes
purge-policy = after-backup
estimated-size-factor = 1.0
 
# This section defines the configuration options specific to the backup
# plugin. In other words, the name of this section should match the name
# of the plugin defined above.
[mysqldump]
file-per-database       = yes
#lock-method        = auto-detect
#databases          = "*"
#tables             = "*"
#stop-slave         = no
#bin-log-position   = no
 
# The following section is for compression. The default, unless the
# mysqldump provider has been modified, is to use inline fast gzip
# compression (which is identical to the commented section below).
[compression]
method = gzip
options = "--rsyncable"
 
[mysql:client]
defaults-extra-file       = /root/.my.cnf
EOF

In order for Holland to back up the databases, set up /root/.my.cnf by running:

cat << EOF > /root/.my.cnf
[client]
user=root
password=your_root_mysql_password_here
EOF

Now set up the nightly cronjob that will run Holland:

cat << EOF > /etc/cron.d/holland
30 3 * * * root /usr/sbin/holland -q bk
EOF

Finally, run the backup job to ensure Holland works. Please note this will lock your tables when it runs, so do not run this during peak times as it could cause downtime for your site or application!

/usr/sbin/holland -q bk

You can verify Holland ran successfully by checking the logs:

tail -f /var/log/holland/holland.log
2015-12-16 03:58:01,789 [INFO] Wrote backup manifest /var/lib/mysqlbackup/default/20151216_035801/backup_data/MANIFEST.txt
2015-12-16 03:58:01,793 [INFO] Executing: /usr/bin/mysqldump --defaults-file=/var/lib/mysqlbackup/default/20151216_035801/my.cnf --flush-privileges --max-allowed-packet=128M --lock-tables mysql
2015-12-16 03:58:01,888 [ERROR] /usr/bin/mysqldump[25783]: -- Warning: Skipping the data of table mysql.event. Specify the --events option explicitly.
2015-12-16 03:58:01,888 [INFO] Final on-disk backup size 181.77KB
2015-12-16 03:58:01,889 [INFO] 26.47% of estimated size 686.67KB
2015-12-16 03:58:01,889 [INFO] Backup completed in 0.24 seconds
2015-12-16 03:58:01,902 [INFO] Purged default/20151209_035801
2015-12-16 03:58:01,902 [INFO] 1 backups purged
2015-12-16 03:58:01,909 [INFO] Released lock /etc/holland/backupsets/default.conf
2015-12-16 03:58:01,909 [INFO] --- Ending backup run ---
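Beyond eyeballing the log, a simple monitoring check is to look at the timestamped directory names Holland creates under the backup directory (names like 20151216_035801) and flag when the newest one is too old. The helper below is a hypothetical sketch of mine, not part of Holland itself, and assumes the backupset directory only contains Holland's timestamped directories:

```shell
# Return 0 if the newest backup directory under $1 is newer than $2 hours.
# Directory names are assumed to follow Holland's YYYYMMDD_HHMMSS format.
backup_fresh() {
    dir=$1
    max_hours=$2
    newest=$(ls -1 "$dir" | sort | tail -n 1)
    [ -n "$newest" ] || return 1
    # Convert YYYYMMDD_HHMMSS into a "YYYY-MM-DD HH:MM:SS" string date(1) can parse
    ts=$(echo "$newest" | sed 's/\(....\)\(..\)\(..\)_\(..\)\(..\)\(..\)/\1-\2-\3 \4:\5:\6/')
    newest_epoch=$(date -d "$ts" +%s)
    now_epoch=$(date +%s)
    age_hours=$(( (now_epoch - newest_epoch) / 3600 ))
    [ "$age_hours" -le "$max_hours" ]
}

# e.g. backup_fresh /var/lib/mysqlbackup/default 26 || echo "backup is stale"
```

Hooking something like this into your monitoring catches the case where the cronjob silently stops running.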

Holland Tips and Tricks

Below are some of the more common tips and tricks for working with Holland. They are broken down as follows:
1. Alternate username for Holland
2. Change Default Retention
3. Hourly Backups
4. Backing up 2 or more database servers

1. Alternate username for Holland

For security purposes, a client may want to have a user other than root performing the Holland backups. To configure this, first create a new MySQL user for Holland:

mysql
GRANT SELECT, RELOAD, SUPER, LOCK TABLES, REPLICATION CLIENT, SHOW VIEW ON *.* TO 'holland_backup'@'localhost' IDENTIFIED BY 'secretpassword';
flush privileges;

Now update /etc/holland/backupsets/default.conf to reflect the new credentials:

vi /etc/holland/backupsets/default.conf
...
[mysql:client]
# defaults-extra-file       = /root/.my.cnf
user = holland_backup
password = secretpassword
...

2. Change Default Retention

This guide sets Holland to keep 7 backups of your databases. Note that this does not necessarily mean 7 days; it only works out to 7 days because the guide runs a single Holland backup job each night.

With that said, if you wanted to configure a retention of 14 days, you would update “backups-to-keep” in /etc/holland/backupsets/default.conf as shown below:

vi /etc/holland/backupsets/default.conf
...
[holland:backup]
plugin = mysqldump
backups-to-keep = 14
auto-purge-failures = yes
...

3. Hourly Backups

You can configure Holland to be more aggressive with how often it runs. Assume we have a MySQL Slave server with a small database, and we want:
– Holland backups every 4 hours
– 7 day retention

You would first update the cronjob as follows:

vi /etc/cron.d/holland
30 */4 * * * root /usr/sbin/holland -q bk

Now update the “backups-to-keep” in /etc/holland/backupsets/default.conf to keep 42 backups (6 backups a day * 7 days) as shown below

vi /etc/holland/backupsets/default.conf
...
[holland:backup]
plugin = mysqldump
backups-to-keep = 42
auto-purge-failures = yes
...
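The retention arithmetic generalizes to backups per day multiplied by days to keep. A quick shell sanity check of the 42 above:

```shell
backups_per_day=$((24 / 4))   # cron fires every 4 hours
days_to_keep=7
echo $((backups_per_day * days_to_keep))
# → 42
```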

4. Backing up 2 or more database servers

An environment may have two or more database servers. Perhaps they exist on another server, or perhaps they are something like Rackspace Cloud Databases or Amazon RDS. Below is how you can configure Holland to back up multiple targets:

First, create a copy of the default.conf we configured in this guide to serve as a starting point:

cd /etc/holland/backupsets/
cp default.conf clouddatabase.conf

Now update the clouddatabase.conf to configure the remote database server IP and credentials. You must comment out the defaults-extra-file as shown below:

vim clouddatabase.conf

...
[mysql:client]
#defaults-extra-file = /root/.my.cnf

# define external database information

host     = xxxxxxxxx.rackspaceclouddb.com
user     = holland
password = your_holland_mysql_password_here
port     = 3306
...

Now update your retention settings:

vi /etc/holland/backupsets/clouddatabase.conf
...
[holland:backup]
plugin = mysqldump
backups-to-keep = 7
auto-purge-failures = yes
...

Finally, update the holland.conf to include your second config file:

vim /etc/holland/holland.conf
...
backupsets = default, clouddatabase
...

Finally, run the backup job to ensure Holland works. Please note this will lock your tables while it runs, so do not run this during peak times as it could cause downtime for your site or application!

/usr/sbin/holland -q bk

Detecting if MySQLdump was running

Ever have those times when your site was down for a few minutes, and you have no way to explain why?

One possible thing to check is whether one of your developers accidentally ran a mysqldump against your production databases during that time. If you suspect it is currently running, you can spot it pretty easily:

mysql
mysql> show processlist;
...
SELECT /*!40001 SQL_NO_CACHE */ * FROM `your_table`;

You can then verify it by running the following query:

mysql> select COUNT(1) mysqldumpThreads
from information_schema.processlist
where info like 'SELECT /*!40001 SQL_NO_CACHE */%';
+------------------+
| mysqldumpThreads |
+------------------+
|                1 |
+------------------+
1 row in set (0.00 sec)

If you get something other than 0, like what is shown above, then mysqldump is running.
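The same check can be scripted from the shell by counting mysqldump's signature query in processlist output. The helper below (the function name is mine) just counts matching lines on stdin, so you could feed it something like `mysql -e 'SHOW FULL PROCESSLIST'`:

```shell
# Count processlist lines carrying mysqldump's SQL_NO_CACHE signature
count_mysqldump_threads() {
    grep -c 'SELECT /\*!40001 SQL_NO_CACHE \*/' || true
}

# e.g. mysql -e 'SHOW FULL PROCESSLIST' | count_mysqldump_threads
```

Anything other than 0 means a mysqldump-style dump is in flight.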

If the issue has already passed and you are just trying to do a root cause analysis, check your MySQL slow query log for queries like the one posted below around the timestamp in question:

# Time: 151112 10:34:12
# User@Host: root[root] @ localhost []
# Query_time: 539.418400  Lock_time: 0.000041 Rows_sent: 60547211  Rows_examined: 60547211
SET timestamp=1420389252;
SELECT /*!40001 SQL_NO_CACHE */ * FROM `your_table`;

Increase open-files-limit in MariaDB on CentOS 7 with systemd

In the new age of Linux known as systemd, a word that many still prepend colorful words to, I stumbled across another gotcha: changing MariaDB's open files limit.

Typically you would configure this in /etc/my.cnf or /etc/security/limits.conf, but that no longer appears to be the case, as systemd wants to control it.

I ran across having to do this because I was receiving the following errors from the database when running mysqldump, mysqlcheck, etc.:

mysqldump: Error: 'Out of resources when opening file '/tmp/#sql_7f96_2.MAI' (Errcode: 24)' when trying to dump tablespaces

mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE 'EXAMPLE'': Can't read dir of './DB_NAME/' (errno: 24) (1018)

mysqldump: Got error: 23: "Out of resources when opening file './DB_NAME/TABLE.MYD' (Errcode: 24)" when using LOCK TABLES

So to increase the open files limit, which is the accepted fix for this, you need to do the following on OSes that use systemd:

First, check to see what your current open files is set to in MariaDB:

MariaDB [(none)]> SHOW VARIABLES LIKE 'Open_files_limit';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| open_files_limit | 1024  |
+------------------+-------+
1 row in set (0.00 sec)

Create a directory to store the MariaDB service changes for systemd:

mkdir -p /etc/systemd/system/mariadb.service.d/
cat << EOF > /etc/systemd/system/mariadb.service.d/limits.conf
[Service]
LimitNOFILE=20000
EOF

Now reload the systemd daemon so it is aware of the change. This does not restart any services:

systemctl daemon-reload

Finally, restart the MariaDB service so it will apply the new setting:

systemctl stop mariadb
systemctl start mariadb

You can now verify that your setting went into place:

MariaDB [(none)]> SHOW VARIABLES LIKE 'Open_files_limit';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| open_files_limit | 20000 |
+------------------+-------+
1 row in set (0.00 sec)

More details on the reasoning behind this can be found in the unit file itself:

cat /usr/lib/systemd/system/mariadb.service
# It's not recommended to modify this file in-place, because it will be
# overwritten during package upgrades.  If you want to customize, the
# best way is to create a file "/etc/systemd/system/mariadb.service",
# containing
#	.include /lib/systemd/system/mariadb.service
#	...make your changes here...
# or create a file "/etc/systemd/system/mariadb.service.d/foo.conf",
# which doesn't need to include ".include" call and which will be parsed
# after the file mariadb.service itself is parsed.
#
# For more info about custom unit files, see systemd.unit(5) or
# http://fedoraproject.org/wiki/Systemd#How_do_I_customize_a_unit_file.2F_add_a_custom_unit_file.3F

# For example, if you want to increase mariadb's open-files-limit to 10000,
# you need to increase systemd's LimitNOFILE setting, so create a file named
# "/etc/systemd/system/mariadb.service.d/limits.conf" containing:
#	[Service]
#	LimitNOFILE=10000

# Note: /usr/lib/... is recommended in the .include line though /lib/...
# still works.
# Don't forget to reload systemd daemon after you change unit configuration:
# root> systemctl --system daemon-reload

MySQL recovery mode

InnoDB tables rarely become corrupted, but it can happen when MySQL is killed off unexpectedly by power outages, hardware problems, or the OOM killer. When this happens, you will see some scary looking entries in your MySQL error log, such as:

InnoDB: Assertion failure in thread 114256846 in file blah.c line 5423
InnoDB: Failing assertion: page_get_n_recs(page) > 1
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
...
obscure backtrace message
...
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.

Usually, you can get MySQL back up in recovery mode pretty quickly by setting the following variable in /etc/my.cnf under the [mysqld] section:

vim /etc/my.cnf

[mysqld]
innodb_force_recovery = 1

Then try to restart MySQL:

service mysql restart

Once you have MySQL running, you can now check to see what tables may be corrupted:

mysqlcheck -c --all-databases

After you find the database that is corrupted, you can try to repair it by running:

mysqlcheck -r your_database_name
e.g. mysqlcheck -r drupal

Now confirm that the databases and tables are showing ‘OK’:

mysqlcheck -c --all-databases

If all is well, take MySQL out of recovery mode by removing the following variable from /etc/my.cnf:

vim /etc/my.cnf
# Remove
innodb_force_recovery = 1

Finally, restart MySQL and check your logs to ensure that there are no further errors being recorded:

service mysql restart

MySQL slave promotion

There are times when you may need to promote a MySQL Slave server to a Master server. This task can be done pretty easily, but it is important to note that once it is done, you will need to rebuild your old Master server, as it is no longer the point of truth for your databases.

Stop the ‘IO_THREAD’ on the MySQL SLAVE:

mysql> stop slave io_thread;

Check the ‘processlist’ to make sure that the SLAVE has read the entire relay log:

mysql> show processlist;
+----+-------------+-----------+------+---------+------+-----------------------------------------------------------------------------+------------------+
| Id | User        | Host      | db   | Command | Time | State                                                                       | Info             |
+----+-------------+-----------+------+---------+------+-----------------------------------------------------------------------------+------------------+
|  2 | root        | localhost | NULL | Query   |    0 | NULL                                                                        | SHOW PROCESSLIST |
|  4 | system user |           | NULL | Connect |  124 | Slave has read all relay log; waiting for the slave I/O thread to update it | NULL             |
+----+-------------+-----------+------+---------+------+-----------------------------------------------------------------------------+------------------+
2 rows in set (0.00 sec)

Stop the MySQL Slave process on MySQL 5.0 and MySQL 5.1

mysql> stop slave;
mysql> reset slave;

Stop the MySQL Slave process on MySQL 5.5 and MySQL 5.6

mysql> stop slave;
mysql> reset slave all;

Confirm that the following line is commented out in your /etc/my.cnf

# read-only = 1
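If the line is still active, a sed one-liner can comment it out. This is a sketch (the helper name is mine), and you should back up my.cnf first:

```shell
# Comment out any active read-only / read_only line in a my.cnf-style file ($1)
disable_read_only() {
    sed -i 's/^[[:space:]]*read[-_]only/# &/' "$1"
}

# e.g. disable_read_only /etc/my.cnf
```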

Restart MySQL to re-read your my.cnf:

service mysql restart

Finally, don’t forget to update your web applications or anything else using the databases to point to the MySQL Slave IP address since it is now the new MySQL Master server.

Setting up MySQL Master Slave Replication using mysqldump

This article is part of a series of setting up MySQL replication. As with most things, there is always more than one way to do something. In the case of setting up MySQL replication, or rebuilding it, some options are better than others depending on your use case.

The articles in the series are below:
Setting up MySQL Replication using mysqldump
Setting up MySQL Replication using Percona XtraBackup
Setting up MySQL Replication using Rsync
Setting up MySQL Replication using LVM snapshots

This guide will document how to set up MySQL Master / Slave Replication using mysqldump.

MySQL Master / Slave Replication is useful for a number of situations, such as:
– Perform hourly backups on the Slave without locking the Master database
– Provide a degree of redundancy as you can promote the Slave
– Splitting the reads between the Master and Slave

Setting up MySQL Replication is pretty straightforward. Keep in mind that this guide uses mysqldump to create a backup of your existing databases, which will lock your tables while it runs. So be sure to schedule it accordingly!

Setup the Master MySQL server

Configure the my.cnf as shown below:

log-bin=/var/lib/mysql/db01-binary-log
expire-logs-days=5
server-id=1

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db01 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db01 ~]# service mysql restart

Finally, grant access to the Slave so it has access to communicate with the Master:

mysql> GRANT REPLICATION SLAVE ON *.* to 'repl'@'10.x.x.x' IDENTIFIED BY 'your_password';

Setup the Slave MySQL server

Configure the my.cnf as shown below:

relay-log=/var/lib/mysql/db02-relay-log
relay-log-space-limit = 4G
read-only=1
server-id=2

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db02 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db02 ~]# service mysql restart

Finally, prep the slave server to connect to the Master MySQL server by:

# For MySQL 5.0 and 5.1
mysql> STOP SLAVE; RESET SLAVE;

# For MySQL 5.5 and 5.6
mysql> STOP SLAVE; RESET SLAVE ALL;

# Then run this command for all MySQL versions
mysql> CHANGE MASTER TO MASTER_HOST='10.x.x.x', MASTER_USER='repl', MASTER_PASSWORD='your_password';

RESET SLAVE removes any previously configured replication settings. Also, MASTER_LOG_FILE and MASTER_LOG_POS are not needed here, as the next section dumps the databases with the --master-data flag.

Importing the databases onto the Slave MySQL server

A dump of the Master MySQL server is needed in order to get the Slave MySQL server in sync. Please note that this will lock your tables while this runs!

Perform this on the master server by:

[root@db01 ~]# mysqldump --all-databases --master-data | gzip -1 > /root/all.sql.gz
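Because --master-data writes a CHANGE MASTER TO statement carrying the binlog coordinates near the top of the dump, you can confirm what was recorded before importing. A small sketch (the helper name is mine):

```shell
# Print the CHANGE MASTER statement embedded in a gzipped dump ($1)
show_master_coords() {
    zcat "$1" | grep -m 1 'CHANGE MASTER TO'
}

# e.g. show_master_coords /root/all.sql.gz
```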

Now transfer over the database dump to the Slave MySQL server:

[root@db01 ~]# scp /root/all.sql.gz root@10.x.x.x:/root/

On the MySQL Slave server, import the dump by running:

[root@db02 ~]# zcat /root/all.sql.gz | mysql
[root@db02 ~]# mysql
mysql> FLUSH PRIVILEGES;
mysql> quit

As this dump contains the internal ‘mysql’ database, all the passwords, including root, will be the same as they are on the Master MySQL server. So be sure to update /root/.my.cnf with the proper credentials:

[root@db02 ~]# vi /root/.my.cnf
[client]
user=root
password=db01_root_mysql_password

Also be sure to update any passwords your database backup scripts may use to reflect the new credentials!

Start and test replication

Start replication on the Slave MySQL server by:

[root@db02 ~]# mysql
mysql> START SLAVE;

And verify replication is online and working by confirming the following output:

mysql> SHOW SLAVE STATUS\G
...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
...

If those values are the same as what is shown above, then replication is working properly! Perform a final test by creating a test database on the Master MySQL server, then check to ensure it shows up on the Slave MySQL server. Afterwards, feel free to drop that test database on the Master MySQL server.

From here, you should be good to go! Just be sure to setup a monitoring check to ensure that replication is always running and doesn’t encounter any errors. A very basic MySQL Replication check can be found here:
https://github.com/stephenlang/system-health-check

Memory backed filesystem for the MySQL tmpdir

If your queries are constantly writing to disk, and you cannot fix the query (which is strongly recommended) or properly adjust your buffer sizes for one reason or another, you can create a memory backed filesystem for the tmpdir, which allows significantly faster reads and writes.

This should only be viewed as a temporary fix until you can get your query to stop writing its temp tables to disk.

Create the mount point:

mkdir -p /tmp/mysqltmp

Now give MySQL access to it:

chown mysql:mysql /tmp/mysqltmp

Set up the entry in /etc/fstab so it will mount on reboot:
* NOTE: Change the uid and gid to whatever MySQL runs as in /etc/passwd and /etc/group

vi /etc/fstab
# Add
tmpfs /tmp/mysqltmp tmpfs rw,gid=27,uid=27,size=256M,nr_inodes=10k,mode=0700 0 0

Mount the memory backed file system:

mount -a
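To verify the mount actually took, you can check the mount table. The helper below is a small sketch (the function name is mine) that reads a mounts file such as /proc/mounts:

```shell
# Return 0 if the given path ($2) appears as a tmpfs mount in the mounts file ($1)
is_tmpfs() {
    awk -v p="$2" '$2 == p && $3 == "tmpfs" { found = 1 } END { exit !found }' "$1"
}

# e.g. is_tmpfs /proc/mounts /tmp/mysqltmp
```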

Update /etc/my.cnf and add the variable under the [mysqld] section:

tmpdir = /tmp/mysqltmp

Now, restart MySQL:

service mysql restart

Finally, confirm MySQL is now utilizing the new memory backed filesystem:

mysql
mysql> show variables like '%tmpdir%';
+-------------------+----------------+
| Variable_name     | Value          |
+-------------------+----------------+
| slave_load_tmpdir | /tmp           |
| tmpdir            | /tmp/mysqltmp  |
+-------------------+----------------+
2 rows in set (0.00 sec)

How to dynamically enable general log in MySQL

Ever wonder what the exact queries are that constantly spike your MySQL CPU? Here is a quick and easy way to enable the general log within MySQL without restarting it.

First, confirm your initial variables before changing anything:

mysql> show variables like '%general%';
+------------------+--------------------------+
| Variable_name    | Value                    |
+------------------+--------------------------+
| general_log      | OFF                      |
| general_log_file | /var/lib/mysql/web01.log |
+------------------+--------------------------+
2 rows in set (0.00 sec)

So in this case, the log will be recorded to /var/lib/mysql/web01.log. If you have nothing set for ‘general_log_file’, you can set it as shown below; just make sure it is in a directory that MySQL can write to:

mysql> set global general_log_file='/var/lib/mysql/web01.log';

With the log file ready, enable the general log by:

SET global general_log = 1;

Verify it is enabled by running:

mysql> show variables like '%general%';
+------------------+--------------------------+
| Variable_name    | Value                    |
+------------------+--------------------------+
| general_log      | On                       |
| general_log_file | /var/lib/mysql/web01.log |
+------------------+--------------------------+
2 rows in set (0.00 sec)

You can now view the queries being run by looking at your general_log_file. Please note, your log will likely be in a different location than the one I have below:

tail -f /var/lib/mysql/web01.log

On busy MySQL servers, leaving general_log enabled will quickly fill up your hard drive. So after a few seconds or minutes, disable it by running:

SET global general_log = 0;

Then verify the setting is now off by:

mysql> show variables like '%general%';
+------------------+--------------------------+
| Variable_name    | Value                    |
+------------------+--------------------------+
| general_log      | OFF                      |
| general_log_file | /var/lib/mysql/web01.log |
+------------------+--------------------------+
2 rows in set (0.00 sec)

You can now go through your log file to see exactly what queries have been running.
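To get a quick feel for which statements dominate, you can tally the leading keyword of each logged query. This is a rough sketch of mine: it scans each line for the "Query" command token the general log records and counts the keyword that follows it, so it will miss multi-line queries:

```shell
# Tally the leading keyword (SELECT, UPDATE, ...) of each "Query" line in $1
summarize_general_log() {
    awk '{ for (i = 1; i <= NF; i++) if ($i == "Query") { print toupper($(i + 1)); next } }' "$1" |
        sort | uniq -c | sort -rn
}

# e.g. summarize_general_log /var/lib/mysql/web01.log
```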

Logrotate for MySQL slow query logs

Enabling the MySQL slow query log is extremely useful for finding out which queries take too long to execute and either need a table index created, or perhaps need to be rewritten a bit more efficiently.

But oftentimes it is forgotten that this log file can grow very large and needs to be rotated out. Below is a very quick and easy way to set up log rotation for the MySQL slow query log:

vim /etc/logrotate.d/mysqllogs
/var/lib/mysql/slow-log {
        missingok
        rotate 2
        size 125M
        create 640 mysql mysql
}

That last line is critical. The file must be recreated with the owner and group set to ‘mysql’; otherwise, MySQL will not have permission to write to it.