Setting up MySQL Master Slave Replication using mysqldump

This article is part of a series of setting up MySQL replication. As with most things, there is always more than one way to do something. In the case of setting up MySQL replication, or rebuilding it, some options are better than others depending on your use case.

The articles in the series are below:
Setting up MySQL Replication using mysqldump
Setting up MySQL Replication using Percona XtraBackup
Setting up MySQL Replication using Rsync
Setting up MySQL Replication using LVM snapshots

This guide will document how to set up MySQL Master / Slave Replication using mysqldump.

MySQL Master / Slave Replication is useful for a number of situations, such as:
– Performing hourly backups on the Slave without locking the Master database
– Providing a degree of redundancy, since the Slave can be promoted if the Master fails
– Splitting reads between the Master and Slave

Setting up MySQL Replication is pretty straightforward. Keep in mind that this guide uses mysqldump to create a backup of your existing databases, which will lock your tables while it runs, so be sure to schedule it accordingly!

Setup the Master MySQL server

Configure my.cnf by adding the following under the [mysqld] section:

log-bin=/var/lib/mysql/db01-binary-log
expire-logs-days=5
server-id=1

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db01 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db01 ~]# service mysql restart

Finally, grant access to the Slave so it has access to communicate with the Master:

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.x.x.x' IDENTIFIED BY 'your_password';
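
If you want to confirm that binary logging is active before moving on, you can check the Master's current binary log file and position (the file name and position below are just illustrative):

mysql> SHOW MASTER STATUS;
+------------------------+----------+--------------+------------------+
| File                   | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------------+----------+--------------+------------------+
| db01-binary-log.000001 |      120 |              |                  |
+------------------------+----------+--------------+------------------+
1 row in set (0.00 sec)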

Setup the Slave MySQL server

Configure my.cnf by adding the following under the [mysqld] section:

relay-log=/var/lib/mysql/db02-relay-log
relay-log-space-limit = 4G
read-only=1
server-id=2

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db02 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db02 ~]# service mysql restart

Finally, prep the Slave server to connect to the Master MySQL server:

# For MySQL 5.0, 5.1, and 5.5
mysql> STOP SLAVE; RESET SLAVE;

# For MySQL 5.6
mysql> STOP SLAVE; RESET SLAVE ALL;

# Then run this command for all MySQL versions
mysql> CHANGE MASTER TO MASTER_HOST='10.x.x.x', MASTER_USER='repl', MASTER_PASSWORD='your_password';

RESET SLAVE removes any previously configured replication settings. Also, MASTER_LOG_FILE and MASTER_LOG_POS are not needed here, as the following section dumps the databases with the --master-data flag, which embeds those coordinates in the dump.

Importing the databases onto the Slave MySQL server

A dump of the Master MySQL server is needed in order to get the Slave MySQL server in sync. Please note that this will lock your tables while this runs!

Perform this on the master server by:

[root@db01 ~]# mysqldump --all-databases --master-data | gzip -1 > /root/all.sql.gz
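
If you are curious, the --master-data flag writes a CHANGE MASTER TO statement containing the binary log coordinates near the top of the dump, which is how the Slave picks them up automatically. You can peek at it like this (file name from the example above; the coordinates shown are just illustrative):

[root@db01 ~]# zcat /root/all.sql.gz | head -n 40 | grep "CHANGE MASTER"
CHANGE MASTER TO MASTER_LOG_FILE='db01-binary-log.000001', MASTER_LOG_POS=120;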

Now transfer over the database dump to the Slave MySQL server:

[root@db01 ~]# scp /root/all.sql.gz [email protected]:/root/

On the MySQL Slave server, import the dump by running:

[root@db02 ~]# zcat /root/all.sql.gz | mysql
[root@db02 ~]# mysql
mysql> FLUSH PRIVILEGES;
mysql> quit

As this dump contains the internal database ‘mysql’, all the passwords, including root, will be the same as they are on the Master MySQL server. So be sure to update /root/.my.cnf with the proper credentials:

[root@db02 ~]# vi /root/.my.cnf
[client]
user=root
password=db01_root_mysql_password

Also be sure to update any passwords your database backup scripts may use to reflect the new credentials!

Start and test replication

Start replication on the Slave MySQL server by:

[root@db02 ~]# mysql
mysql> START SLAVE;

And verify replication is online and working by confirming the following output:

mysql> SHOW SLAVE STATUS\G
...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
...

If those values are the same as what is shown above, then replication is working properly! Perform a final test by creating a test database on the Master MySQL server, then check to ensure it shows up on the Slave MySQL server. Afterwards, feel free to drop that test database on the Master MySQL server.
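
For example (the database name below is arbitrary):

# On the Master MySQL server:
mysql> CREATE DATABASE repl_test;

# On the Slave MySQL server:
mysql> SHOW DATABASES LIKE 'repl_test';

# Then clean up on the Master MySQL server:
mysql> DROP DATABASE repl_test;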

From here, you should be good to go! Just be sure to set up a monitoring check to ensure that replication is always running and doesn’t encounter any errors. A very basic MySQL Replication check can be found here:
https://github.com/stephenlang/system-health-check
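
If you just want something minimal to run from cron while you evaluate that check, a rough sketch along these lines works (the mail recipient is a placeholder, and it assumes credentials are available via /root/.my.cnf as set up earlier):

#!/bin/bash
# Minimal replication check sketch: emails the full slave status if either thread is down.
STATUS=$(mysql -e 'SHOW SLAVE STATUS\G')
if ! echo "$STATUS" | grep -q "Slave_IO_Running: Yes" || \
   ! echo "$STATUS" | grep -q "Slave_SQL_Running: Yes"; then
    echo "$STATUS" | mail -s "Replication problem on $(hostname)" admin@example.com
fi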

Memory backed filesystem for the MySQL tmpdir

If your queries are constantly writing temporary tables to disk, and you cannot fix your query (which is strongly recommended) or properly adjust your buffer sizes for one reason or another, you can create a memory backed file system for the tmpdir, which should allow for significantly faster reads and writes.

This should only be viewed as a temporary fix until you can get your query to stop writing its temp tables to disk.
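
To gauge how often MySQL is actually spilling temporary tables to disk in the first place, compare the on-disk counter against the total number of temporary tables created (the values below are just illustrative):

mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 1042  |
| Created_tmp_files       | 6     |
| Created_tmp_tables      | 11501 |
+-------------------------+-------+
3 rows in set (0.00 sec)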

Create the mount point:

mkdir -p /tmp/mysqltmp

Now give MySQL access to it:

chown mysql:mysql /tmp/mysqltmp

Set up the entry in /etc/fstab so it will mount on reboot:
* NOTE: Change the uid and gid to whatever MySQL runs as in /etc/passwd and /etc/group

vi /etc/fstab
# Add the following as a single line:
tmpfs /tmp/mysqltmp tmpfs rw,gid=27,uid=27,size=256M,nr_inodes=10k,mode=0700 0 0

Mount the memory backed file system:

mount -a

Update /etc/my.cnf and add the variable under the [mysqld] section:

tmpdir = /tmp/mysqltmp

Now, restart MySQL:

# CentOS / RHEL:
service mysqld restart

# Ubuntu / Debian:
service mysql restart

Finally, confirm MySQL is now utilizing the new memory backed filesystem:

mysql
mysql> show variables like '%tmpdir%';
+-------------------+----------------+
| Variable_name     | Value          |
+-------------------+----------------+
| slave_load_tmpdir | /tmp           |
| tmpdir            | /tmp/mysqltmp  |
+-------------------+----------------+
2 rows in set (0.00 sec)

How to dynamically enable general log in MySQL

Ever wonder what the exact queries are that cause MySQL to spike your CPU constantly? Here is a quick and easy way to enable the general log within MySQL without restarting MySQL.

First, confirm your initial variables before changing anything:

mysql> show variables like '%general%';
+------------------+--------------------------+
| Variable_name    | Value                    |
+------------------+--------------------------+
| general_log      | OFF                      |
| general_log_file | /var/lib/mysql/web01.log |
+------------------+--------------------------+
2 rows in set (0.00 sec)

So in this case, the queries will be recorded to /var/lib/mysql/web01.log. If you have nothing set for ‘general_log_file’, you can set it as shown below, but just make sure it’s in a directory that MySQL can write to:

mysql> set global general_log_file='/var/lib/mysql/web01.log';

With the log file ready, enable the general log by:

mysql> SET global general_log = 1;

Verify it is enabled by running:

mysql> show variables like '%general%';
+------------------+--------------------------+
| Variable_name    | Value                    |
+------------------+--------------------------+
| general_log      | ON                       |
| general_log_file | /var/lib/mysql/web01.log |
+------------------+--------------------------+
2 rows in set (0.00 sec)

You can now view the queries being run by looking at your general_log_file. Please note, your log may be in a different location than the one shown below:

tail -f /var/lib/mysql/web01.log

On busy MySQL servers, leaving the general_log setting enabled will quickly cause your hard drive to fill up. So after a few seconds or minutes, disable it by running:

mysql> SET global general_log = 0;

Then verify the setting is now off by:

mysql> show variables like '%general%';
+------------------+--------------------------+
| Variable_name    | Value                    |
+------------------+--------------------------+
| general_log      | OFF                      |
| general_log_file | /var/lib/mysql/web01.log |
+------------------+--------------------------+
2 rows in set (0.00 sec)

You can now go through your log file to see exactly what queries have been running.
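
If you want a rough idea of which command types show up most often in the log, a quick one-liner like this can summarize it (log path from the example above; this is only a rough count, as the keywords can also appear inside query text):

grep -oE "Connect|Query|Quit|Init DB" /var/lib/mysql/web01.log | sort | uniq -c | sort -rn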

Logrotate for MySQL slow query logs

Enabling the MySQL slow query log is extremely useful for finding out which queries are taking too long to execute and either need a table index created or perhaps need to be rewritten a bit more efficiently.

But oftentimes it is forgotten that this log file can grow very large and needs to be rotated out. Below is a very quick and easy way to set up log rotation for the MySQL slow query log:

vim /etc/logrotate.d/mysqllogs
/var/lib/mysql/slow-log {
        missingok
        rotate 2
        size 125M
        create 640 mysql mysql
}

That last line is critical. The file must be recreated with the owner and group set to ‘mysql’, otherwise, MySQL will not have permissions to write to that file.
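
You can sanity check the new configuration without actually rotating anything by running logrotate in debug mode, or force an immediate rotation to confirm the new file comes out with the right ownership:

# Dry run - shows what logrotate would do without touching the logs
logrotate -d /etc/logrotate.d/mysqllogs

# Force an immediate rotation to test it for real
logrotate -f /etc/logrotate.d/mysqllogs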

How to increment MySQL replication counters

Sometimes, MySQL replication may break, displaying an error in ‘show slave status’ such as:

mysql> show slave status\G
********************* 1. row *********************
                 Last_Errno: 1062
                 Last_Error: Error 'Duplicate entry '123456' for key 1' on query. Default database: 'XXX'. Query: 'YYY'
               Skip_Counter: 0

When you see this, you can try to increment the counter by:

mysql> stop slave;
mysql> set global sql_slave_skip_counter=1; start slave;

Then rerun ‘show slave status’ to see if replication is working again. If you start seeing many of these, it could indicate that the data is no longer consistent between the Master and Slave MySQL servers, and you may need to consider rebuilding replication.
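
A quick way to keep an eye on just the relevant fields while working through these errors is to filter the output, for example:

mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_Errno|Last_Error'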

Scrutiny

Being asked at 9AM to determine what caused a system to have problems at 2:30AM can be a wearying task. If the normal system logs do not give us any real hints about what may have caused the issue, we oftentimes get trapped having to give the really poor answer of “We cannot replicate the issue that you experienced during the overnight, and the logs are not giving us enough information to go on. So we’ll have to watch for it tonight to see if it re-occurs.” Times like that make a sysadmin feel completely helpless.

What if you could see what processes were running on the system at prescribed intervals? And not just processes, but also what queries were running, how many people were hitting Apache, perhaps what types of network connections you were getting, on top of a bunch of other information that can be gathered from tools like vmstat, iostat, etc.? Now you can draw better conclusions because you will know what was happening at that single point in time.

Welcome Scrutiny! Located over on GitHub, it is a tool based on recap, rewritten to suit my own needs for portability between Red Hat, Debian, and FreeBSD based systems, as well as to allow simple modifications of the metrics gathered to best suit your own environment.

Features

– Simple code base for quick customizations
– Ability to enable/disable groups of checks
– Easy to add/modify/remove individual metric gathering
– Uses tools such as ps, top, df, vmstat, iostat, netstat, mysqladmin, and Apache’s server-status module to help create a point-in-time snapshot of the system’s events

Configuration

The currently configurable options and thresholds are listed below:

# Enable / Disable Statistics
process_log=on
resource_log=on
network_log=on
mysql_log=on
apache_log=on

# Retention Days
retension=2

# Logs
basedir=/var/log/scrutiny

Implementation

Download script to desired directory and set it to be executable:

# Linux based systems
cd /root
git clone https://github.com/stephenlang/scrutiny
chmod 755 scrutiny/linux/scrutiny.sh

# FreeBSD based systems
cd /root
git clone https://github.com/stephenlang/scrutiny
chmod 755 scrutiny/freebsd/scrutiny.sh

After configuring the tunables in the script (see above), create a cron job to execute the script every 10 minutes, pointing at the linux or freebsd copy as appropriate:

crontab -e
*/10 * * * * /root/scrutiny/linux/scrutiny.sh

Now, days later, if a problem was reported during the overnight and you were able to narrow it down to a specific timeframe, you will be able to look at the point-in-time snapshots of system events that occurred:

ls /var/log/scrutiny

Server monitoring script

Without an agent-based monitoring system, monitoring your servers’ internals, such as CPU, memory, storage, and processes, becomes very difficult without manually checking. There are many reputable monitoring services on the web such as Pingdom (www.pingdom.com), and most hosting providers offer a monitoring system, but they do not provide an agent. Therefore, you can only do basic external checks such as ping, port, and HTTP content checks. There is no way to report if your MySQL replication has failed, a critical process has stopped running, or if you’re about to max out your / partition.

This simple bash script, located on GitHub, is meant to complement these types of monitoring services. Just drop the script into a web accessible directory, configure a few options and thresholds, set up a URL content check that looks at the status page searching for the string ‘OK’, and then you can rest easy at night knowing that your monitoring service will alert you if any of the script’s conditions are triggered.

Security note: To avoid revealing information about your system, it is strongly recommended that you place this and all web based monitoring scripts behind an htaccess file that requires authentication, whitelisting your monitoring servers’ IP addresses if they are known.

Features

– Memory Check
– Swap Check
– Load Check
– Storage Check
– Process Check
– Replication Check

Configuration

The currently configurable options and thresholds are listed below:

# Status page
status_page=/var/www/system-health-check.html

# Enable / Disable Checks
memory_check=off
swap_check=on
load_check=on
storage_check=on
process_check=on
replication_check=off

# Configure partitions for storage check
partitions=( / )

# Configure process(es) to check
process_names=( httpd mysqld postfix )

# Configure Thresholds
memory_threshold=99
swap_threshold=80
load_threshold=10
storage_threshold=80

Implementation

Download script to desired directory and set it to be executable:

cd /root
git clone https://github.com/stephenlang/system-health-check
chmod 755 system-health-check/system-health-check.sh

After configuring the tunables in the script (see above), create a cron job to execute the script every 5 minutes:

crontab -e
*/5 * * * * /root/system-health-check/system-health-check.sh

Now configure a URL content check with your monitoring provider’s tools to check the status page for the string “OK”. Below are two examples:

http://1.1.1.1/system-health-check.html
http://www.example.com/system-health-check.html
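
You can also verify the status page manually from a shell before wiring it into your monitoring provider (URL from the examples above):

curl -s http://www.example.com/system-health-check.html | grep -q "OK" && echo "Status page reports OK"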

Testing

It is critical that you test this monitoring script before you rely on it. Bugs always exist somewhere, so test this before you implement it on your production systems! Here are some basic ways to test:

1. Configure all the thresholds really low so they will create an alarm. Manually run the script or wait for the cronjob to fire it off, then check the status page to see if it reports your checks are now in alarm.

2. To test out the process monitoring (assuming the system is not in production), configure the processes you want the script to check, then stop the process you are testing, and check the status page after the script runs to see if it reports your process is not running.

3. To test out the replication monitoring (assuming the system is not in production), log onto your MySQL slave server and run ‘stop slave;’. Then check the status page after the script runs to see if it reports an error on replication.

Resetting a MySQL root password

Resetting a forgotten MySQL root password is a pretty straightforward task to complete, assuming you have sudo or root access to the server. It is important to note that by performing this procedure, MySQL will be down until you complete everything, so be sure to do this during a time when it will not impact your business.

First, stop the MySQL service

service mysqld stop # RHEL clones
service mysql stop # Ubuntu / Debian distro

Now it’s time to bring MySQL back up in safe mode. This means we will start MySQL, but simply skip the user privileges table:

sudo mysqld_safe --skip-grant-tables &

Time to log into MySQL and switch to the MySQL database:

mysql -uroot
use mysql;

Now reset the root password and flush privileges:

update user set password=PASSWORD("enternewpasswordhere") where User='root';
flush privileges;
quit

Once you have completed that, it’s time to take MySQL out of safe mode and start it back up normally. NOTE: Be sure MySQL is fully stopped before trying to start the service again. You may need to kill -9 the process if it’s being stubborn.

Stop MySQL:

service mysqld stop # RHEL clones
service mysql stop # Ubuntu / Debian distro

Verify that the MySQL processes are no longer running. If they are, you may have to kill -9 them:

ps aux | grep mysql

Start MySQL back up:

service mysqld start # RHEL clones
service mysql start # Ubuntu / Debian distro

Finally, test logging in to ensure it’s now working properly:

mysql -uroot -p

Creating table indexes in MySQL

You may ask, what is a table index and how will it help performance? Table indexes provide MySQL a more efficient way to retrieve records. I often like to use the following example to explain it:

Imagine you have a phone book in front of you, and there are no letters in the top right corner that you can reference if you are looking up a last name. Therefore, you have to search page by page through hundreds of pages that have tens of thousands of results. Very inefficient and intensive. Think of this as a full table scan.

Now picture the phone book having the letter references in the top right corner. You can flip right to section “La – Lf” and only have to search through a smaller result set. Finding the results is much faster and easier.

Common symptoms where this logic can be applied are when you log onto a server and see MySQL frequently chewing up a lot of CPU time, either constantly or in spikes. The slow query log is also a great indicator, because if a query is taking a long time to execute, chances are it is making MySQL work too hard performing full table scans.

The information below will provide you with the tools to help identify these inefficient queries and how to help speed them up.

There are 2 common ways to identify queries that are very inefficient and may be creating CPU contention issues:

View MySQL’s process list:

When in the MySQL CLI, you will want to look for any queries that you often see running, so you can evaluate them. You can see the queries by:

mysql
show processlist;

View slow queries log:

To view this, first check to ensure the slow-query-log variables are enabled in the my.cnf:

log-slow-queries=/var/lib/mysqllogs/slow-log
long_query_time=5
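
You can also confirm at runtime whether the slow query log is active and where it is writing. The exact variable names vary a bit between MySQL versions; the output below is from a 5.1-era server and is just illustrative:

mysql> show variables like '%slow%';
+---------------------+-----------------------------+
| Variable_name       | Value                       |
+---------------------+-----------------------------+
| log_slow_queries    | ON                          |
| slow_launch_time    | 2                           |
| slow_query_log      | ON                          |
| slow_query_log_file | /var/lib/mysqllogs/slow-log |
+---------------------+-----------------------------+
4 rows in set (0.00 sec)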

Now, let’s look at an example of a slow query that got logged. Please note, these queries get logged here because they took longer to run than the number of seconds defined in long_query_time:

# Time: 110404 22:45:25
# User@Host: wpadmin[wordpressdb] @ localhost []
# Query_time: 14.609104  Lock_time: 0.000054 Rows_sent: 4  Rows_examined: 83532
SET timestamp=1301957125;
SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';

Here is a query that we know runs often and takes over 5 seconds to execute:

SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';

Within the MySQL CLI, run the following to view some more details about this query:

explain SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';
+----+-------------+------------+------+---------------+------+---------+------+-------+-------------+
| id | select_type | table      | type | possible_keys | key  | key_len | ref  | rows  | Extra       |
+----+-------------+------------+------+---------------+------+---------+------+-------+-------------+
|  1 | SIMPLE      | wp_table   | ALL  | NULL          | NULL |    NULL | NULL | 83532 | Using where |
+----+-------------+------------+------+---------------+------+---------+------+-------+-------------+

The 2 important fields here are:

- type: When you see "ALL", MySQL is performing a full table scan, which is a very CPU intensive operation.
- rows: This is the number of rows MySQL estimates it has to examine to satisfy the query, so over 83,000 rows is a lot to sort through.

In general, when you are creating an index, you want to choose a field that has the highest number of unique values (the highest cardinality). In our case, we are going to use the field ‘key’ as shown below:

create index key_idx on wp_table(`key`);

Now, let’s rerun our explain to see if the query now examines fewer rows:

explain SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table    | type | possible_keys | key     | key_len | ref   | rows | Extra       |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
|  1 | SIMPLE      | wp_table | ref  | key_idx       | key_idx | 194     | const |   13 | Using where |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+

This is much better. Now each time that common query runs, MySQL will only have to examine 13 rows instead of checking through all 83,000.
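
If this query always filters on both columns, a composite index covering them is another option worth testing (the index name below is arbitrary):

create index key_carrier_idx on wp_table(`key`, `carrier`);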

Important note: Each time a table is updated, MySQL has to update the indexes, which can add some overhead. Therefore, it’s recommended to keep the number of indexes per table low, perhaps in the 4-6 range.

How to see what indexes already exist on a table and their cardinality:

show indexes from wp_table;

How to remove a table index:

drop index key_idx on wp_table;