Summarizing the MySQL Slow Query log

The MySQL slow query log is great for showing you which queries are hammering the CPUs, which is most likely impacting your website's performance. At first glance, the slow query log can look like an unorganized mess of random queries. So how do you summarize this list so you know which queries to start fixing first? Enter mysqldumpslow!

By default, MySQL comes with a tool called mysqldumpslow. Its primary purpose is to parse the slow query log and group similar queries together. This allows you to quickly see which queries need your attention in the slow query log.

If the slow query log is not already enabled, follow the older article I have here:
https://www.stephenrlang.com/2016/02/mysql-slow-query-log/

Assuming the slow query log is already enabled, locate the slow query log by running:

[root@db01 ~]# grep slow-query-log-file /etc/my.cnf
slow-query-log-file = /var/lib/mysql/slow-log
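
If you prefer to check from within MySQL itself, the same information is exposed through server variables (the values below are simply what this example server reports):

[root@db01 ~]# mysql
mysql> show variables like 'slow_query_log%';
+---------------------+-------------------------+
| Variable_name       | Value                   |
+---------------------+-------------------------+
| slow_query_log      | ON                      |
| slow_query_log_file | /var/lib/mysql/slow-log |
+---------------------+-------------------------+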

Queries that return a lot of rows and run often are typically the cause of CPU contention within MySQL. These CPU-heavy queries may benefit from a LIMIT clause or a table index, or they may simply need to be rewritten so they return fewer rows.

To display the top 10 queries that returned the highest number of rows, run:

[root@db01 ~]# mysqldumpslow -a -s r -t 10 /var/lib/mysql/slow-log

Sometimes you just want to see which queries show up in the slow query log most often. Many times these are something unexpected and easily fixed by the developers.

To display the top 10 queries sorted by how many times they were logged, run:

[root@db01 ~]# mysqldumpslow -a -s c -t 10 /var/lib/mysql/slow-log

The data returned will show:

Count - How many times the query has been logged
Time - The average query time, with the total time in parentheses
Lock - The average table lock time, with the total in parentheses
Rows - The average number of rows returned, with the total in parentheses

Below is one entry returned by the mysqldumpslow command, which should give you a rough idea of the information presented:

[root@db01 ~]# mysqldumpslow -a -s c -t 10 /var/lib/mysql/slow-log
...
Count: 4201  Time=68.34s (278868s)  Lock=0.00s (0s)  Rows=998512 (4194748912), root[root]@localhost
  SELECT PL.pl_title, P.page_title
  FROM page P
  INNER JOIN pagelinks PL
  ON PL.pl_namespace = P.page_namespace
  WHERE P.page_namespace = N
...
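
mysqldumpslow can also sort on other columns. For example, sorting by total query time is handy when a handful of queries dominate the overall execution time:

[root@db01 ~]# mysqldumpslow -a -s t -t 10 /var/lib/mysql/slow-log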

MySQL 1071 (42000): Specified key was too long

Just recording a basic MySQL error I ran across recently. On MySQL 5.5, the following error was being reported:

ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

To see the error in action, run the following on MySQL 5.5:

[root@db01 ~]# mysql
mysql> create database example;
mysql> use example;
mysql> create table if not exists utf8_test (
day date not null,
product_id int not null,
dimension1 varchar(500) character set utf8 collate utf8_bin not null,
dimension2 varchar(500) character set utf8 collate utf8_bin not null,
unique index unique_index (day, product_id, dimension1, dimension2)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

This was curious because the exact same query ran without problems on MySQL 5.7. When researching this, I found that MySQL 5.7 has innodb_large_prefix enabled by default, while MySQL 5.5 and 5.6 do not, as described in the official MySQL documentation, in order to maintain backwards compatibility with MySQL 5.1. The error itself makes sense once you do the math: utf8 in MySQL uses up to 3 bytes per character, so each varchar(500) column needs up to 1500 bytes in the index, which is well past the 767-byte limit.

So to get this to work, you have to enable innodb_large_prefix and also change innodb_file_format to Barracuda. You can see the temporary fix in action by running the following; just be sure to also add ROW_FORMAT=DYNAMIC to the end of the CREATE TABLE statement:

[root@db01 ~]# mysql
mysql> set global innodb_file_format = BARRACUDA;
mysql> set global innodb_large_prefix = ON;
mysql> create table if not exists utf8_test (
day date not null,
product_id int not null,
dimension1 varchar(500) character set utf8 collate utf8_bin not null,
dimension2 varchar(500) character set utf8 collate utf8_bin not null,
unique index unique_index (day, product_id, dimension1, dimension2)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC;

Query OK, 0 rows affected (0.00 sec)
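
You can confirm both settings took effect by checking the global variables:

mysql> show global variables like 'innodb_large_prefix';
+---------------------+-------+
| Variable_name       | Value |
+---------------------+-------+
| innodb_large_prefix | ON    |
+---------------------+-------+

mysql> show global variables like 'innodb_file_format';
+--------------------+-----------+
| Variable_name      | Value     |
+--------------------+-----------+
| innodb_file_format | Barracuda |
+--------------------+-----------+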

You can make the two changes persistent across MySQL restarts by adding the following to the my.cnf:

[root@db01 ~]# vim /etc/my.cnf
[mysqld]
...
innodb-file-format = BARRACUDA
innodb-large-prefix = ON
...

IO Scheduler tuning

What is an I/O scheduler? The I/O scheduler is a kernel level tunable whose purpose is to optimize disk access requests. Traditionally this is critical for spinning disks as I/O requests can be grouped together to avoid “seeking”.

Different I/O schedulers have their pros and cons, so choosing which one to use depends on the type of environment and workload. There is no single right I/O scheduler; it all simply ‘depends’. Benchmarking your application before and after the change is usually your best indicator. The good news is that the I/O scheduler can be changed at runtime and configured to persist across reboots.

The three common I/O schedulers are:
– noop
– deadline
– cfq
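
Before changing anything, you can see which scheduler each block device is currently using; the active scheduler is the one shown in brackets (device names and defaults will vary between systems):

[root@db01 ~]# grep "" /sys/block/*/queue/scheduler
/sys/block/sda/queue/scheduler:noop deadline [cfq]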

noop

The noop I/O scheduler is meant for systems that don’t need the kernel to schedule I/O, such as VMware, AWS EC2, Google Cloud, Rackspace public cloud, etc. Since the hypervisor already handles the I/O scheduling, it doesn’t make sense for the VM to waste CPU cycles doing it again. The noop I/O scheduler simply works as a FIFO (First In, First Out) queue.

You can update the I/O scheduler to noop by:

## CentOS 6

# Change at runtime
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq] 
[root@db01 ~]# echo 'noop' > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
[noop] anticipatory deadline cfq

# Change at boot time by appending 'elevator=noop' to the end of the kernel line:
[root@db01 ~]# vim /boot/grub/grub.conf
kernel /vmlinuz-2.6.9-67.EL ro root=/dev/vg0/lv0 elevator=noop


## CentOS 7

# Change at run time
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@db01 ~]# echo 'noop' > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
[noop] deadline cfq

# Change at boot time by appending 'elevator=noop' to the end of the following line, then rebuild the grub config:
[root@db01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel00/root rd.lvm.lv=rhel00/swap elevator=noop"
...
[root@db01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg


## Ubuntu 14.04

# Change at runtime
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@db01 ~]# echo noop > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
[noop] deadline cfq

# Change at boot time by appending 'elevator=noop' to the end of the following line, then rebuild the grub config:
[root@db01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=noop"
...
[root@db01 ~]# grub-mkconfig -o /boot/grub/grub.cfg


## Ubuntu 16.04

# Change at runtime
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@db01 ~]# echo noop > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
[noop] deadline cfq

# Change at boot time by appending 'elevator=noop' to the end of the following line, then rebuild the grub config:
[root@db01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=noop"
...
[root@db01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

deadline

The deadline I/O scheduler is optimized by default for read heavy workloads like MySQL. It attempts to optimize I/O by placing each request in a read queue or a write queue and assigning it a deadline. Requests in the read queue have 500ms (by default) to execute before they are given the highest priority to run, while requests entering the write queue have 5000ms.

This deadline assigned to each I/O request is what makes the deadline I/O scheduler optimal for read heavy workloads like MySQL.
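
If you want to confirm or tune those expiry values on a system already running the deadline scheduler, they are exposed through sysfs in milliseconds (these files only exist while deadline is the active scheduler):

[root@db01 ~]# cat /sys/block/sda/queue/iosched/read_expire
500
[root@db01 ~]# cat /sys/block/sda/queue/iosched/write_expire
5000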

You can update the I/O scheduler to deadline by:

## CentOS 6

# Change at runtime
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq] 
[root@db01 ~]# echo 'deadline' > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq

# Change at boot time by appending 'elevator=deadline' to the end of the kernel line:
[root@db01 ~]# vim /boot/grub/grub.conf
kernel /vmlinuz-2.6.9-67.EL ro root=/dev/vg0/lv0 elevator=deadline


## CentOS 7

# Change at run time
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@db01 ~]# echo 'deadline' > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq

# Change at boot time by appending 'elevator=deadline' to the end of the following line, then rebuild the grub config:
[root@db01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel00/root rd.lvm.lv=rhel00/swap elevator=deadline"
...
[root@db01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg


## Ubuntu 14.04

# Change at runtime
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@db01 ~]# echo deadline > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq

# Change at boot time by appending 'elevator=deadline' to the end of the following line, then rebuild the grub config:
[root@db01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=deadline"
...
[root@db01 ~]# grub-mkconfig -o /boot/grub/grub.cfg


## Ubuntu 16.04

# Change at runtime
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@db01 ~]# echo deadline > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq

# Change at boot time by appending 'elevator=deadline' to the end of the following line, then rebuild the grub config:
[root@db01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=deadline"
...
[root@db01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

cfq

The cfq I/O scheduler is probably best geared towards systems running GUIs (like a desktop) where each process needs a fast response. The goal of the cfq I/O scheduler (Complete Fairness Queueing) is to give a fair allocation of disk I/O bandwidth to all processes that request an I/O operation.

You can update the I/O scheduler to cfq by:

## CentOS 6

# Change at runtime
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq 
[root@server01 ~]# echo 'cfq' > /sys/block/sda/queue/scheduler
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]

# Change at boot time by appending 'elevator=cfq' to the end of the kernel line:
[root@server01 ~]# vim /boot/grub/grub.conf
kernel /vmlinuz-2.6.9-67.EL ro root=/dev/vg0/lv0 elevator=cfq


## CentOS 7

# Change at run time
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@server01 ~]# echo 'cfq' > /sys/block/sda/queue/scheduler
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

# Change at boot time by appending 'elevator=cfq' to the end of the following line, then rebuild the grub config:
[root@server01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel00/root rd.lvm.lv=rhel00/swap elevator=cfq"
...
[root@server01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg


## Ubuntu 14.04

# Change at runtime
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@server01 ~]# echo cfq > /sys/block/sda/queue/scheduler
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

# Change at boot time by appending 'elevator=cfq' to the end of the following line, then rebuild the grub config:
[root@server01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=cfq"
...
[root@server01 ~]# grub-mkconfig -o /boot/grub/grub.cfg


## Ubuntu 16.04

# Change at runtime
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@server01 ~]# echo cfq > /sys/block/sda/queue/scheduler
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

# Change at boot time by appending 'elevator=cfq' to the end of the following line, then rebuild the grub config:
[root@server01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=cfq"
...
[root@server01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

As with any performance tuning recommendation, there is never a one-size-fits-all solution! Always benchmark your application to establish a baseline before making the change. After the change has been made, run the same benchmark again and compare the results to ensure it had the desired outcome.
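
As a rough starting point for that benchmark, a simple random read test with fio, run before and after the scheduler change, gives you directly comparable numbers. This assumes the fio package is installed and that /data sits on the disk you are tuning:

[root@db01 ~]# fio --name=randread-test --directory=/data --rw=randread --bs=4k --size=1G --numjobs=4 --runtime=60 --time_based --group_reporting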

Disabling Transparent Huge Pages in Linux

Transparent Huge Pages (THP) is a Linux memory management system that reduces the overhead of Translation Lookaside Buffer (TLB) lookups on machines with large amounts of memory by using larger memory pages.

However, database workloads often perform poorly with THP because they tend to have sparse rather than contiguous memory access patterns. The overall recommendation for MySQL, MongoDB, Oracle, etc. is to disable THP on Linux machines to ensure the best performance.

You can check to see if THP is enabled or not by running:

[root@db01 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
[root@db01 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never

If the result shows [never], then THP is disabled. However, if the result shows [always], then THP is enabled.

You can disable THP at runtime on CentOS 6/7 and Ubuntu 14.04/16.04 by running:

[root@db01 ~]# echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
[root@db01 ~]# echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag

However, once the system reboots, it will revert to its default value. To make the setting persistent on CentOS 7 and Ubuntu 16.04, you can disable THP at system startup by creating a systemd unit file:

# CentOS 7 / Ubuntu 16.04:
[root@db01 ~]# vim /etc/systemd/system/disable-thp.service
[Unit]
Description=Disable Transparent Huge Pages (THP)

[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"

[Install]
WantedBy=multi-user.target

[root@db01 ~]# systemctl daemon-reload
[root@db01 ~]# systemctl start disable-thp
[root@db01 ~]# systemctl enable disable-thp
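
Once the unit is running, confirm that THP is now reporting [never]:

[root@db01 ~]# systemctl status disable-thp
[root@db01 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
[root@db01 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]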

On CentOS 6 and Ubuntu 14.04, you can disable THP on system startup by adding the following to /etc/rc.local. If this is on Ubuntu 14.04, make sure it’s added before the ‘exit 0’:

# CentOS 6 / Ubuntu 14.04
[root@db01 ~]# vim /etc/rc.local
...
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
   echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
   echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
...

MySQL 5.1 to MySQL 5.7 upgrade on CentOS

Upgrading MySQL is pretty straightforward; however, there are some catches. Per MySQL’s official documentation, you must upgrade from 5.1 to 5.5, then from 5.5 to 5.6, and finally from 5.6 to 5.7. You cannot jump directly from 5.1 to 5.7.

This guide will outline how to upgrade MySQL 5.1 to MySQL 5.7 in sequence and will assume you are using the IUS repositories for MySQL. If the IUS repositories are not already set up, you can install them by running:

# CentOS 6
[root@db01 ~]# yum install epel-release
[root@db01 ~]# rpm -ivh https://dl.iuscommunity.org/pub/ius/stable/CentOS/6/x86_64/ius-release-1.0-15.ius.centos6.noarch.rpm
# CentOS 7
[root@db01 ~]# yum install epel-release
[root@db01 ~]# rpm -ivh https://dl.iuscommunity.org/pub/ius/stable/CentOS/7/x86_64/ius-release-1.0-15.ius.centos7.noarch.rpm
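
You can quickly confirm both repositories are available to yum before continuing:

[root@db01 ~]# yum repolist enabled | egrep -i 'epel|ius'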

Upgrade from MySQL 5.1 to MySQL 5.5

Create a restore point. This backs up the databases, the my.cnf, and a listing of the users and databases. This can be done by:

[root@db01 ~]# mkdir -p /root/mysqlupgrade/mysql51
[root@db01 ~]# mysql -e 'show databases;' > /root/mysqlupgrade/mysql51/mysql-5.1.databases
[root@db01 ~]# mysql --silent --skip-column-names --execute "select concat('\'',User,'\'@\'',Host,'\'') as User from mysql.user;" | sort | while read u; do echo "-- $u"; mysql --silent --skip-column-names --execute "show grants for $u" | sed 's/$/;/'; done > /root/mysqlupgrade/mysql51/mysql-5.1.grants
[root@db01 ~]# cp /etc/my.cnf /root/mysqlupgrade/mysql51/mysql-5.1.cnf.orig
[root@db01 ~]# yum install xz
[root@db01 ~]# mysqldump --routines --all-databases | xz > /root/mysqlupgrade/mysql51/mysql-5.1.dump.sql.xz
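
Since this dump is your rollback point, it is worth confirming the compressed file is intact before touching any packages:

[root@db01 ~]# xz -t /root/mysqlupgrade/mysql51/mysql-5.1.dump.sql.xz
[root@db01 ~]# ls -lh /root/mysqlupgrade/mysql51/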

Now stop MySQL and upgrade it to MySQL 5.5:

[root@db01 ~]# service mysqld stop
[root@db01 ~]# yum --disableexcludes=all shell
remove mysql mysql-server mysql-libs
install mysql55 mysql55-server mysql55-libs mysqlclient16
ts solve
ts run
exit

Generate a version of the my.cnf that is valid for MySQL 5.5. Some options get deprecated or changed between versions, so it is important that this is not skipped. To help with this, we’ll be using dbsake as shown below:

[root@db01 ~]# wget -O /root/mysqlupgrade/dbsake http://get.dbsake.net; chmod u+x /root/mysqlupgrade/dbsake
[root@db01 ~]# /root/mysqlupgrade/dbsake upgrade-mycnf --config /root/mysqlupgrade/mysql51/mysql-5.1.cnf.orig --target 5.5 > /root/mysqlupgrade/mysql-5.5.cnf
[root@db01 ~]# mv /etc/my.cnf /etc/mysql-5.5.cnf.orig
[root@db01 ~]# mv -f /root/mysqlupgrade/mysql-5.5.cnf /etc/my.cnf

Start MySQL with the grant tables and networking disabled, then verify it is running MySQL 5.5 and that all the databases loaded:

[root@db01 ~]# sed -i 's/\[mysqld\]/[mysqld]\nskip-grant-tables\nskip-networking/' /etc/my.cnf
[root@db01 ~]# service mysqld start
[root@db01 ~]# mysql -sse "select @@version"
[root@db01 ~]# mysql -e "show databases;" > /root/mysqlupgrade/mysql51/mysql-5.5.databases
[root@db01 ~]# diff -U0 /root/mysqlupgrade/mysql51/mysql-5.1.databases /root/mysqlupgrade/mysql51/mysql-5.5.databases

Now update the database schema. If mysql_upgrade says that a table needs to be repaired, repair it with mysqlcheck -A before moving on:

[root@db01 ~]# mysql_upgrade
[root@db01 ~]# sed -i '/\(skip-grant-tables\|skip-networking\)/d' /etc/my.cnf
[root@db01 ~]# service mysqld restart
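
If mysql_upgrade did report tables needing repair, they can be checked and repaired across all databases in one pass; the --auto-repair flag tells mysqlcheck to repair any table it finds corrupted:

[root@db01 ~]# mysqlcheck -A --auto-repair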

Finally, confirm MySQL is running version 5.5:

[root@db01 ~]# mysqladmin version

Upgrade from MySQL 5.5 to MySQL 5.6

Create a restore point. This backs up the databases, the my.cnf, and a listing of the users and databases. This can be done by:

[root@db01 ~]# mkdir -p /root/mysqlupgrade/mysql55
[root@db01 ~]# mysql -e 'show databases;' > /root/mysqlupgrade/mysql55/mysql-5.5.databases
[root@db01 ~]# mysql --silent --skip-column-names --execute "select concat('\'',User,'\'@\'',Host,'\'') as User from mysql.user;" | sort | while read u; do echo "-- $u"; mysql --silent --skip-column-names --execute "show grants for $u" | sed 's/$/;/'; done > /root/mysqlupgrade/mysql55/mysql-5.5.grants
[root@db01 ~]# cp /etc/my.cnf /root/mysqlupgrade/mysql55/mysql-5.5.cnf.orig
[root@db01 ~]# yum install xz
[root@db01 ~]# mysqldump --routines --all-databases | xz > /root/mysqlupgrade/mysql55/mysql-5.5.dump.sql.xz

Now stop MySQL and upgrade it to MySQL 5.6:

[root@db01 ~]# service mysqld stop
[root@db01 ~]# yum --disableexcludes=all shell
remove mysql55 mysql55-server mysql55-libs
install mysql56u mysql56u-server mysql56u-libs mysqlclient16
ts solve
ts run
exit

Generate a version of the my.cnf that is valid for MySQL 5.6. Some options get deprecated or changed between versions, so it is important that this is not skipped. To help with this, we’ll be using dbsake as shown below:

[root@db01 ~]# wget -O /root/mysqlupgrade/dbsake http://get.dbsake.net; chmod u+x /root/mysqlupgrade/dbsake
[root@db01 ~]# /root/mysqlupgrade/dbsake upgrade-mycnf --config /root/mysqlupgrade/mysql55/mysql-5.5.cnf.orig --target 5.6 > /root/mysqlupgrade/mysql-5.6.cnf
[root@db01 ~]# mv -f /root/mysqlupgrade/mysql-5.6.cnf /etc/my.cnf

Start MySQL with the grant tables and networking disabled, then verify it is running MySQL 5.6 and that all the databases loaded:

[root@db01 ~]# sed -i 's/\[mysqld\]/[mysqld]\nskip-grant-tables\nskip-networking/' /etc/my.cnf
[root@db01 ~]# service mysqld start
[root@db01 ~]# mysql -sse "select @@version"
[root@db01 ~]# mysql -e "show databases;" > /root/mysqlupgrade/mysql55/mysql-5.6.databases
[root@db01 ~]# diff -U0 /root/mysqlupgrade/mysql55/mysql-5.5.databases /root/mysqlupgrade/mysql55/mysql-5.6.databases

Now update the database schema. If mysql_upgrade says that a table needs to be repaired, repair it with mysqlcheck -A before moving on:

[root@db01 ~]# mysql_upgrade
[root@db01 ~]# sed -i '/\(skip-grant-tables\|skip-networking\)/d' /etc/my.cnf
[root@db01 ~]# service mysqld restart

Finally, confirm MySQL is running version 5.6:

[root@db01 ~]# mysqladmin version

Upgrade from MySQL 5.6 to MySQL 5.7

Create a restore point. This backs up the databases, the my.cnf, and a listing of the users and databases. This can be done by:

[root@db01 ~]# mkdir -p /root/mysqlupgrade/mysql56
[root@db01 ~]# mysql -e 'show databases;' > /root/mysqlupgrade/mysql56/mysql-5.6.databases
[root@db01 ~]# mysql --silent --skip-column-names --execute "select concat('\'',User,'\'@\'',Host,'\'') as User from mysql.user;" | sort | while read u; do echo "-- $u"; mysql --silent --skip-column-names --execute "show grants for $u" | sed 's/$/;/'; done > /root/mysqlupgrade/mysql56/mysql-5.6.grants
[root@db01 ~]# cp /etc/my.cnf /root/mysqlupgrade/mysql56/mysql-5.6.cnf.orig
[root@db01 ~]# yum install xz
[root@db01 ~]# mysqldump --routines --all-databases | xz > /root/mysqlupgrade/mysql56/mysql-5.6.dump.sql.xz

Now stop MySQL and upgrade to MySQL 5.7:

[root@db01 ~]# service mysqld stop
[root@db01 ~]# yum --disableexcludes=all shell
remove mysql56u mysql56u-server mysql56u-libs mysql56u-common
install mysql57u mysql57u-server mysql57u-libs mysqlclient16
ts solve
ts run
exit

Generate a version of the my.cnf that is valid for MySQL 5.7. Some options get deprecated or changed between versions, so it is important that this is not skipped. To help with this, we’ll be using dbsake as shown below:

[root@db01 ~]# wget -O /root/mysqlupgrade/dbsake http://get.dbsake.net; chmod u+x /root/mysqlupgrade/dbsake
[root@db01 ~]# /root/mysqlupgrade/dbsake upgrade-mycnf --config /root/mysqlupgrade/mysql56/mysql-5.6.cnf.orig --target 5.7 > /root/mysqlupgrade/mysql-5.7.cnf
[root@db01 ~]# mv -f /root/mysqlupgrade/mysql-5.7.cnf /etc/my.cnf

Start MySQL with the grant tables and networking disabled, then verify it is running MySQL 5.7 and that all the databases loaded:

[root@db01 ~]# sed -i 's/\[mysqld\]/[mysqld]\nskip-grant-tables\nskip-networking/' /etc/my.cnf
[root@db01 ~]# service mysqld start
[root@db01 ~]# mysql -sse "select @@version"
[root@db01 ~]# mysql -e "show databases;" > /root/mysqlupgrade/mysql56/mysql-5.7.databases
[root@db01 ~]# diff -U0 /root/mysqlupgrade/mysql56/mysql-5.6.databases /root/mysqlupgrade/mysql56/mysql-5.7.databases

If MySQL fails to start, check the logs as it may be due to MySQL looking for a /var/lib/mysqltmp directory. You can verify and correct it by:

[root@db01 ~]# cat /var/log/mysqld.log |grep ERROR
[ERROR] InnoDB: Unable to create temporary file; errno: 2

[root@db01 ~]# cat /etc/my.cnf |grep tmpdir
tmpdir                          = /var/lib/mysqltmp

[root@db01 ~]# mkdir /var/lib/mysqltmp
[root@db01 ~]# chown mysql:mysql /var/lib/mysqltmp
[root@db01 ~]# service mysqld start

Now update the database schema. If mysql_upgrade says that a table needs to be repaired, repair it with mysqlcheck -A before moving on:

[root@db01 ~]# mysql_upgrade
[root@db01 ~]# sed -i '/\(skip-grant-tables\|skip-networking\)/d' /etc/my.cnf
[root@db01 ~]# service mysqld restart

Finally, confirm MySQL is running version 5.7:

[root@db01 ~]# mysqladmin version

Rollback plan

What happens if you need to roll back? If you followed the instructions in this article to create the backups, restoration is simple. Just keep in mind that the databases themselves are restored from the original backups, so anything that changed in the databases since the upgrade will be lost. If that is not acceptable, do not use these rollback instructions!

This section assumes you are rolling all the way back from MySQL 5.7 to MySQL 5.1. Simply adjust the instructions below if you are rolling back to a different version.

Stop MySQL and roll back to MySQL 5.1 by:

[root@db01 ~]# service mysqld stop
[root@db01 ~]# yum --disableexcludes=all shell
remove mysql*
install mysql mysql-server mysql-libs mysqlclient16
ts solve
ts run
exit

Restore the original /etc/my.cnf by:

[root@db01 ~]# cp /root/mysqlupgrade/mysql51/mysql-5.1.cnf.orig /etc/my.cnf

Start up MySQL and restore the 5.1-formatted databases:

[root@db01 ~]# mv /var/lib/mysql /var/lib/mysql.bak
[root@db01 ~]# mkdir /var/lib/mysql
[root@db01 ~]# chown mysql:mysql /var/lib/mysql
[root@db01 ~]# service mysqld start
[root@db01 ~]# mysql_secure_installation
[root@db01 ~]# unxz -c /root/mysqlupgrade/mysql51/mysql-5.1.dump.sql.xz | mysql
[root@db01 ~]# service mysqld restart
[root@db01 ~]# mysqladmin version
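
As a final sanity check, compare the restored database listing against the listing taken before the upgrade:

[root@db01 ~]# mysql -e 'show databases;' > /root/mysqlupgrade/mysql51/mysql-5.1.databases.restored
[root@db01 ~]# diff -U0 /root/mysqlupgrade/mysql51/mysql-5.1.databases /root/mysqlupgrade/mysql51/mysql-5.1.databases.restored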

Convert MyISAM tables to InnoDB in MySQL

Converting MyISAM tables over to InnoDB comes with several benefits, such as row-level locking, the ability to commit and roll back transactions, and transactional logs for automatic crash recovery.

While converting MyISAM over to InnoDB is usually straightforward, you must use caution:

- Always create a backup of the database before making any changes!
- Confirm you are running MySQL 5.6.4 or above if any of the tables use full-text indexes, as that was when full-text indexing was introduced to InnoDB (a quick version check is shown below).
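
Checking the server version only takes a moment:

[root@db01 ~]# mysql -sse "select @@version"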

You can check to see what tables are using MyISAM in your database by running the following:

[root@db01 ~]# mysql
mysql> SELECT TABLE_SCHEMA,TABLE_NAME,ENGINE,TABLE_ROWS FROM information_schema.TABLES WHERE ENGINE = 'MyISAM' AND TABLE_ROWS > 0;
+--------------+-----------------------+--------+------------+
| TABLE_SCHEMA | TABLE_NAME            | ENGINE | TABLE_ROWS |
+--------------+-----------------------+--------+------------+
| mysql        | db                    | MyISAM |          3 |
| mysql        | help_category         | MyISAM |         39 |
| mysql        | help_keyword          | MyISAM |        464 |
| mysql        | help_relation         | MyISAM |       1028 |
| mysql        | help_topic            | MyISAM |        508 |
| mysql        | proxies_priv          | MyISAM |          2 |
| mysql        | user                  | MyISAM |          5 |
| example01    | wp_commentmeta        | MyISAM |         15 |
| example01    | wp_comments           | MyISAM |         72 |
| example01    | wp_links              | MyISAM |         11 |
| example01    | wp_options            | MyISAM |        224 |
| example01    | wp_postmeta           | MyISAM |        260 |
| example01    | wp_posts              | MyISAM |        277 |
| example01    | wp_term_relationships | MyISAM |        178 |
| example01    | wp_term_taxonomy      | MyISAM |         62 |
| example01    | wp_terms              | MyISAM |         62 |
| example01    | wp_usermeta           | MyISAM |         22 |
| example01    | wp_users              | MyISAM |          1 |
+--------------+-----------------------+--------+------------+

Alternatively, if you wanted to see MyISAM tables that have more than 5,000 rows, run:

mysql> SELECT TABLE_SCHEMA,TABLE_NAME,ENGINE,TABLE_ROWS FROM information_schema.TABLES WHERE ENGINE = 'MyISAM' AND TABLE_ROWS > 5000;
+--------------+---------------------+--------+------------+
| TABLE_SCHEMA | TABLE_NAME          | ENGINE | TABLE_ROWS |
+--------------+---------------------+--------+------------+
| example02    | employees           | MyISAM |      80257 |
| example02    | templates           | MyISAM |     155341 |
| example02    | zipcodes            | MyISAM |      33178 |
| example02    | providers           | MyISAM |      34833 |
| example02    | portal              | MyISAM |      25184 |
+--------------+---------------------+--------+------------+

Converting the tables is pretty simple, but keep in mind that the ALTER will lock the table while it runs. Make sure you back up your database BEFORE converting any tables! Things can go wrong at the worst times, so be sure to have a solid rollback plan before proceeding!

Convert the desired table from MyISAM to InnoDB by running:

[root@db01 ~]# mysql
mysql> use example02;
mysql> ALTER TABLE providers ENGINE=InnoDB;
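
Afterwards, confirm the table is now using InnoDB:

mysql> SELECT TABLE_SCHEMA,TABLE_NAME,ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'example02' AND TABLE_NAME = 'providers';
+--------------+------------+--------+
| TABLE_SCHEMA | TABLE_NAME | ENGINE |
+--------------+------------+--------+
| example02    | providers  | InnoDB |
+--------------+------------+--------+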

Purging MySQL binary logs

Binary logs contain a record of all changes to the databases, both data and structure. They do not keep track of simple SELECT statements. Binary logs are required for MySQL replication and can also be useful for point-in-time recovery after a nightly database dump has been restored.

If binary logs are enabled within MySQL but you are not expiring them after a set number of days, they will eventually cause your server to run out of disk space. Typically I see this value set to 5 days.

As a quick note before we begin, if the server is completely out of disk space, you may need to free up some space so you can work with the system. You can temporarily free up space by lowering the filesystem's reserved block percentage from the default 5% to 2% by:

[root@db01 ~]# df -h /
/dev/xvda1       79G   76G     0 100% /
[root@db01 ~]# tune2fs -m 2 /dev/xvda1
[root@db01 ~]# df -h /
/dev/xvda1       79G   76G  1.8G  98% /

To determine how many days' worth of logs you can purge, first identify whether this server is a MySQL master replicating to a slave. If it is, log onto the slave MySQL server and see which binary log it is currently reading from:

[root@db02 ~]# mysql
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.xx.xx.xx
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000244
...

Based on that, we could safely purge any binary logs prior to mysql-bin.000244. Back on the master server, check the binary log status by:

[root@db01 ~]# mysql
mysql> show binary logs;
+------------------+------------+
| Log_name         | File_size  |
+------------------+------------+
| mysql-bin.000230 |  454230880 |
| mysql-bin.000231 |  511519497 |
| mysql-bin.000232 |  483552032 |
| mysql-bin.000233 |  472847181 |
| mysql-bin.000234 |  443236582 |
| mysql-bin.000235 |  408021824 |
| mysql-bin.000236 |  531519875 |
| mysql-bin.000237 |  468583798 |
| mysql-bin.000238 |  495423661 |
| mysql-bin.000239 |  474475274 |
| mysql-bin.000240 |  689162898 |
| mysql-bin.000241 | 1073749465 |
| mysql-bin.000242 |  939200512 |
| mysql-bin.000243 |  702552334 |
| mysql-bin.000244 |   40656827 |
+------------------+------------+

In my example, as I only want to keep 5 days' worth of binary logs, I can purge everything prior to mysql-bin.000240 by:

[root@db01 ~]# mysql
mysql> purge binary logs to 'mysql-bin.000240';
mysql> show binary logs;
+------------------+------------+
| Log_name         | File_size  |
+------------------+------------+
| mysql-bin.000240 |  689162898 |
| mysql-bin.000241 | 1073749465 |
| mysql-bin.000242 |  939200512 |
| mysql-bin.000243 |  702552334 |
| mysql-bin.000244 |   40656827 |
+------------------+------------+
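
Alternatively, if you would rather purge by age than by file name, the binary logs can be purged up to a point in time:

mysql> purge binary logs before date_sub(now(), interval 5 day);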

To keep a 5-day retention going forward, you can set this live without restarting MySQL by:

[root@db01 ~]# mysql
mysql> show variables like '%expire_logs_days%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| expire_logs_days | 0     |
+------------------+-------+

mysql> SET GLOBAL expire_logs_days = 5;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like '%expire_logs_days%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| expire_logs_days | 5     |
+------------------+-------+
1 row in set (0.00 sec)

Then make the setting persistent across MySQL restarts by adding it to the /etc/my.cnf:

[root@db01 ~]# vim /etc/my.cnf
[mysqld]
...
expire_logs_days = 5
...

If you had to modify the filesystem's reserved block percentage, change it back to how it was beforehand, which is typically 5%. You can do this by:

[root@db01 ~]# tune2fs -m 5 /dev/xvda1

Setting up MySQL Master Slave Replication with LVM snapshots

This article is part of a series on setting up MySQL replication. As with most things, there is always more than one way to do something. In the case of setting up MySQL replication, or rebuilding it, some options are better than others depending on your use case.

The articles in the series are below:
Setting up MySQL Replication using mysqldump
Setting up MySQL Replication using Percona XtraBackup
Setting up MySQL Replication using Rsync
Setting up MySQL Replication using LVM snapshots

This guide will document how to set up MySQL Master / Slave Replication using LVM snapshots. So why use LVM snapshots for setting up or rebuilding MySQL replication? If your databases and tables are large, LVM snapshots let you greatly limit the downtime felt by the application. This should still be performed during a scheduled maintenance window, as you will be flushing the tables with a READ LOCK.

Some prerequisites before proceeding are below:

1. Confirming that your datadir is indeed configured on a partition running LVM:

[root@db01 ~]# lvs

2. Confirming that you have enough free space in your Volume Group for the LVM snapshot (example output shown below):

[root@db01 ~]# vgs
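
The VFree column in the vgs output is what needs to be large enough to hold the snapshot you plan to create (10G in the script used later). The output will look something like the following, with your own volume group names and sizes:

  VG        #PV #LV #SN Attr   VSize   VFree
  vglocal00   1   2   0 wz--n- 299.51g 40.00g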

So in the sections below, we’ll configure the Master and Slave MySQL server for replication, then we’ll use an LVM snapshot for syncing the databases over to db02.

Setup the Master MySQL server

Configure the my.cnf as shown below:

log-bin=/var/lib/mysql/db01-binary-log
expire-logs-days=5
server-id=1

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db01 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db01 ~]# service mysql restart

Finally, grant access to the Slave so it has access to communicate with the Master:

mysql> GRANT REPLICATION SLAVE ON *.* to 'repl'@'10.x.x.x' IDENTIFIED BY 'your_password';

Setup the Slave MySQL server

Configure the my.cnf as shown below:

relay-log=/var/lib/mysql/db02-relay-log
relay-log-space-limit = 4G
read-only=1
server-id=2

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db02 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db02 ~]# service mysql restart

Use LVM snapshots for syncing over the databases

For reference, the rest of this guide will refer to the servers as follows:

db01 - Master MySQL Server
db02 - Slave MySQL Server

On db02 only, rename the existing MySQL datadir, and create a fresh folder:

[root@db02 ~]# service mysqld stop
[root@db02 ~]# mv /var/lib/mysql /var/lib/mysql.old
[root@db02 ~]# mkdir /var/lib/mysql
[root@db02 ~]# chown mysql:mysql /var/lib/mysql

On db01 only, create a snapshot script for MySQL so things move quickly and downtime is kept to a minimum:

[root@db01 ~]# vim /root/lvmscript.sql
FLUSH LOCAL TABLES;
FLUSH LOCAL TABLES WITH READ LOCK;
SHOW MASTER STATUS;
SYSTEM lvcreate -L 10G -s vglocal00/mysql00 -n mysqlsnapshot00 3>&-
SHOW MASTER STATUS;
UNLOCK TABLES;

On db01 only, during a scheduled maintenance window, run the script to create the LVM snapshot, and be sure to take note of the master status information, as that will be needed later:

[root@db01 ~]# mysql -t < /root/lvmscript.sql 

On db01 only, mount the snapshot, sync over the contents to db02, and then remove the snapshot since it will no longer be needed:

[root@db01 ~]# mount /dev/mapper/vglocal00-mysqlsnapshot00 /mnt
[root@db01 ~]# rsync -axvz --delete -e ssh /mnt/ root@db02:/var/lib/mysql/
[root@db01 ~]# umount /mnt
[root@db01 ~]# lvremove vglocal00/mysqlsnapshot00

On db02 only, remove the stale mysql.sock file, start up MySQL, configure db02 to connect to db01 using the information from the SHOW MASTER STATUS output you recorded on db01 earlier, and start replication:

[root@db02 ~]# rm /var/lib/mysql/mysql.sock
[root@db02 ~]# service mysqld start
[root@db02 ~]# mysql
mysql> CHANGE MASTER TO MASTER_HOST='10.x.x.x', MASTER_USER='repl', MASTER_PASSWORD='your_password', MASTER_LOG_FILE='db01-binary-log.000001', MASTER_LOG_POS=1456783;
mysql> start slave;
mysql> show slave status\G
...
        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
...

If those values are the same as what is shown above, then replication is working properly! Perform a final test by creating a test database on the Master MySQL server, then check to ensure it shows up on the Slave MySQL server. Afterwards, feel free to drop that test database on the Master MySQL server.
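
A quick way to run that test from the command line (replication_test is just a throwaway name used here for illustration):

[root@db01 ~]# mysql -e "create database replication_test;"
[root@db02 ~]# mysql -e "show databases like 'replication_test';"
[root@db01 ~]# mysql -e "drop database replication_test;"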

From here, you should be good to go! Just be sure to setup a monitoring check to ensure that replication is always running and doesn’t encounter any errors. A very basic MySQL Replication check can be found here:
https://github.com/stephenlang/system-health-check

Setting up MySQL Master Slave Replication with rsync

This article is part of a series on setting up MySQL replication. As with most things, there is always more than one way to do something. In the case of setting up MySQL replication, or rebuilding it, some options are better than others depending on your use case.

The articles in the series are below:
Setting up MySQL Replication using mysqldump
Setting up MySQL Replication using Percona XtraBackup
Setting up MySQL Replication using Rsync
Setting up MySQL Replication using LVM snapshots

This guide will document how to set up MySQL Master / Slave Replication using rsync. So why use rsync for setting up or rebuilding MySQL replication? If your databases and tables are large but fairly quiet, you can limit the downtime felt by the application: the majority of the content is synced over live with rsync, and a final rsync is then performed during a scheduled maintenance window, under a READ LOCK, to catch any tables that changed. This is also very useful when you do not have enough disk space available on db01 to perform a traditional backup using mysqldump or Percona XtraBackup, as the data is rsync'ed directly over to db02.

This is a fairly simple method of setting up MySQL replication, or rebuilding it. In the sections below, we'll configure the Master and Slave MySQL servers for replication, then sync over the databases.

Setup the Master MySQL server

Configure the my.cnf as shown below:

log-bin=/var/lib/mysql/db01-binary-log
expire-logs-days=5
server-id=1

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db01 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db01 ~]# service mysql restart

Finally, grant access to the Slave so it has access to communicate with the Master:

mysql> GRANT REPLICATION SLAVE ON *.* to 'repl'@'10.x.x.x' IDENTIFIED BY 'your_password';

Setup the Slave MySQL server

Configure the my.cnf as shown below:

relay-log=/var/lib/mysql/db02-relay-log
relay-log-space-limit = 4G
read-only=1
server-id=2

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db02 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db02 ~]# service mysql restart

Rsync the databases

For reference, the rest of this guide will refer to the servers as follows:

db01 - Master MySQL Server
db02 - Slave MySQL Server

On db02 only, rename the existing MySQL datadir, and create a fresh folder:

[root@db02 ~]# service mysqld stop
[root@db02 ~]# mv /var/lib/mysql /var/lib/mysql.old
[root@db02 ~]# mkdir /var/lib/mysql

On db01 only, perform the initial sync of data over to db02:

[root@db01 ~]# rsync -axvz /var/lib/mysql/ root@db02:/var/lib/mysql/

Now that the majority of the data has been moved over, it's time to perform the final sync during a scheduled maintenance window, as you will be flushing the tables with a READ LOCK before syncing over the remaining changes.

On db01 only, flush the tables with a READ LOCK, and grab the master status information, as we'll need that later. It is critical that you do NOT exit MySQL while the READ LOCK is in place; once you exit MySQL, the READ LOCK is released. Therefore, the example below runs this in a screen session so it continues to run in the background:

[root@db01 ~]# screen -S mysql
[root@db01 ~]# mysql
mysql> FLUSH TABLES WITH READ LOCK;
mysql> SHOW MASTER STATUS;
(detach screen session with ctrl a d)

On db01 only, perform the final rsync of the databases to db02, then release the READ LOCK on db01:

[root@db01 ~]# rsync -axvz --delete /var/lib/mysql/ root@db02:/var/lib/mysql/
[root@db01 ~]# screen -dr mysql
mysql> quit
[root@db01 ~]# exit

On db02 only, remove the stale mysql.sock file, start up MySQL, configure db02 to connect to db01 using the information from the SHOW MASTER STATUS output you recorded on db01 earlier, and start replication:

[root@db02 ~]# rm /var/lib/mysql/mysql.sock
[root@db02 ~]# rm /var/lib/mysql/auto.cnf
[root@db02 ~]# service mysqld start
[root@db02 ~]# mysql
mysql> CHANGE MASTER TO MASTER_HOST='10.x.x.x', MASTER_USER='repl', MASTER_PASSWORD='your_password', MASTER_LOG_FILE='db01-binary-log.000001', MASTER_LOG_POS=1456783;
mysql> start slave;
mysql> show slave status\G
...
        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
...

If those values are the same as what is shown above, then replication is working properly! Perform a final test by creating a test database on the Master MySQL server, then check to ensure it shows up on the Slave MySQL server. Afterwards, feel free to drop that test database on the Master MySQL server.

From here, you should be good to go! Just be sure to setup a monitoring check to ensure that replication is always running and doesn’t encounter any errors. A very basic MySQL Replication check can be found here:
https://github.com/stephenlang/system-health-check

Setting up MySQL Master Slave Replication with Percona XtraBackup

This article is part of a series on setting up MySQL replication. As with most things, there is always more than one way to do something. In the case of setting up MySQL replication, or rebuilding it, some options are better than others depending on your use case.

The articles in the series are below:
Setting up MySQL Replication using mysqldump
Setting up MySQL Replication using Percona XtraBackup
Setting up MySQL Replication using Rsync
Setting up MySQL Replication using LVM snapshots

This guide will document how to set up MySQL Master / Slave Replication using Percona XtraBackup. I strongly recommend reviewing the official documentation on Percona's site at:
How innobackupex works
Official guide for setting up replication with Percona XtraBackup

So why use Percona XtraBackup for setting up or rebuilding MySQL replication? Percona XtraBackup performs hot backups on unmodified versions of MySQL, MariaDB, and Percona Server, versions 5.1 and above.

This basically means that you can generally run the backup against InnoDB tables without the interruption/downtime associated with table locking on your site, like you would normally experience using mysqldump. However, it is critical to note that table locking WILL still occur on MyISAM and other non-InnoDB tables.

Some important prerequisites before proceeding:

1. It is recommended to use Percona XtraBackup with MySQL, MariaDB, and Percona Server 5.5 and above. Example:

[root@db01 ~]# mysql -V
mysql  Ver 14.14 Distrib 5.5.49, for Linux (x86_64) using readline 5.1

2. Confirm the MySQL client libraries are installed.

3. Confirm the MySQL master and slave servers both have the same innodb-log-file-size defined in the my.cnf, and that it is at least 48M in size.

[root@db01 ~]# grep innodb-log-file-size /etc/my.cnf 
innodb-log-file-size = 128M

4. The master does not have symlinks in the MySQL datadir that could cause space to be underestimated. You can check for this by running:

[root@db01 ~]# du -sch /var/lib/mysql/ $(for i in $(find /var/lib/mysql/ -type l); do readlink $i; done)

5. Confirm there is only one instance of MySQL running on the master. When you have multiple versions of MySQL running, it's easy to back up the wrong data.

6. Confirm the event scheduler is not enabled on the slave. You can check for this by running:

[root@db01 ~]# mysql
mysql> show variables where Variable_name like 'event_scheduler';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| event_scheduler | OFF   |
+-----------------+-------+
1 row in set (0.00 sec)

7. Confirm that no tables are using the MEMORY engine on the master. You can check for this by running:

[root@db01 ~]# mysql
mysql> select concat('ALTER TABLE `',table_schema,'`.`',table_name,'` ENGINE=INNODB;') from information_schema.tables where engine='memory' and table_schema not in ('information_schema','performance_schema');
Empty set (0.03 sec)

8. Confirm that both the master and slave server have NTP enabled and running, and that both servers are using the same timezone. Example:

[root@db01 ~]# ps waux |grep ntp
ntp       7276  0.0  0.0  30740  1684 ?        Ss   Aug09   0:02 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
[root@db01 ~]# date
Tue Aug 30 05:06:50 UTC 2016

9. Confirm the master and slave have the same binlog_format and expire_logs_days variables in the my.cnf when binary logging is enabled on the slave. Example:

[root@db01 ~]# egrep 'binlog-format|expire-logs-days' /etc/my.cnf 
expire-logs-days = 5
# binlog-format = STATEMENT

Now that the prerequisites are out of the way, proceed with setting up MySQL replication.

Setup the Master MySQL server

Configure the my.cnf as shown below:

log-bin=/var/lib/mysql/db01-binary-log
expire-logs-days=5
server-id=1

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db01 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db01 ~]# service mysql restart

Finally, grant access to the Slave so it has access to communicate with the Master:

mysql> GRANT REPLICATION SLAVE ON *.* to 'repl'@'10.x.x.x' IDENTIFIED BY 'your_password';

Setup the Slave MySQL server

Configure the my.cnf as shown below:

relay-log=/var/lib/mysql/db02-relay-log
relay-log-space-limit = 4G
read-only=1
server-id=2

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db02 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db02 ~]# service mysql restart

Install Percona XtraBackup

For reference, the rest of this guide will refer to the servers as follows:

db01 - Master MySQL Server
db02 - Slave MySQL Server

On db01 only, install Percona XtraBackup using Percona’s repos:

[root@db01 ~]# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
[root@db01 ~]# yum install percona-xtrabackup-24

On db01 only, confirm it installed XtraBackup version 2.3 or newer:

[root@db01 ~]# xtrabackup --version

On db01 only, to prevent future issues, it is extremely important to remove or disable the yum repo for Percona:

[root@db01 ~]# yum remove percona-release

Setup MySQL replication using Percona XtraBackup

On db02 only, rename the existing MySQL datadir, and create a fresh folder:

[root@db02 ~]# service mysqld stop
[root@db02 ~]# mv /var/lib/mysql /var/lib/mysql.old
[root@db02 ~]# mkdir /var/lib/mysql

On db01 only, create the backup, make the snapshot consistent by applying the log, and rsync it over to db02:

[root@db01 ~]# mkdir /root/perconabackup
[root@db01 ~]# innobackupex /root/perconabackup
[root@db01 ~]# innobackupex --apply-log /root/perconabackup/TIMESTAMP/
[root@db01 ~]# rsync -axvz -e ssh /root/perconabackup/TIMESTAMP/ root@db02:/var/lib/mysql/

On db02, fix the ownership of the datadir, start up MySQL, and grab the binlog name and position:

[root@db02 ~]# chown -R mysql:mysql /var/lib/mysql
[root@db02 ~]# service mysqld start
[root@db02 ~]# cat /var/lib/mysql/xtrabackup_binlog_info
db01-binary-log.000001     1456783

On db02, start slave replication:

[root@db02 ~]# mysql
mysql> CHANGE MASTER TO MASTER_HOST='10.x.x.x', MASTER_USER='repl', MASTER_PASSWORD='your_password', MASTER_LOG_FILE='db01-binary-log.000001', MASTER_LOG_POS=1456783;
mysql> start slave;
mysql> show slave status\G
...
        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
...

If those values are the same as what is shown above, then replication is working properly! Perform a final test by creating a test database on the Master MySQL server, then check to ensure it shows up on the Slave MySQL server. Afterwards, feel free to drop that test database on the Master MySQL server.

From here, you should be good to go! Just be sure to setup a monitoring check to ensure that replication is always running and doesn’t encounter any errors. A very basic MySQL Replication check can be found here:
https://github.com/stephenlang/system-health-check