Upgrade MySQL on CentOS

Sometimes you may run across a scenario where you have to upgrade MySQL. This is easy enough to do; however, you should always test the upgrade on a dev server before applying it to production, just in case you run into problems.

As a critical note: before performing the upgrade, make sure you have a working mysqldump of all your databases. This cannot be stressed enough! There are many ways of performing a mysqldump, and be sure you can actually restore from those backups as well! One possible method, which backs up all the databases into a single large file (and locks the tables, creating possible downtime), would be:

[root@db01 ~]# mysqldump --all-databases --master-data | gzip -1 > /root/all.sql.gz
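
Before moving on, it is worth a quick sanity check that the dump is at least readable. A minimal sketch, assuming the dump was written to /root/all.sql.gz as shown above:

[root@db01 ~]# gzip -t /root/all.sql.gz && echo "archive OK"
[root@db01 ~]# zcat /root/all.sql.gz | head -n 5

This only proves the archive is intact and begins with valid SQL; a full test restore into a scratch MySQL instance is the only real proof the backup is usable.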

On CentOS, I prefer to use the IUS repos, as they are actively maintained and, importantly, do not overwrite stock packages.

So to get started, first set up the IUS repo if it isn't already installed on your server:

# CentOS 6
[root@db01 ~]# rpm -ivh http://dl.iuscommunity.org/pub/ius/stable/CentOS/6/x86_64/ius-release-1.0-14.ius.centos6.noarch.rpm

# CentOS 7
[root@db01 ~]# rpm -ivh http://dl.iuscommunity.org/pub/ius/stable/CentOS/7/x86_64/ius-release-1.0-14.ius.centos7.noarch.rpm

To upgrade MySQL, yum has a plugin called 'yum-plugin-replace', which provides a 'yum replace' command for swapping one package with another of your choosing. This simplifies the process of upgrading MySQL.

First, confirm that you are not already running another custom version of MySQL:

[root@db01 ~]# rpm -qa |grep -i mysql
mysql55-server-5.5.45-1.ius.el6.6.z.x86_64
mysql55-5.5.45-1.ius.el6.6.z.x86_64
...

Using the output from above, it looks like we just have MySQL 5.5 installed. I want to upgrade from MySQL 5.5 to MySQL 5.6. Here is how you would run it:

[root@db01 ~]# yum install yum-plugin-replace
[root@db01 ~]# yum replace mysql55 --replace-with mysql56u
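
Once the replace completes, a quick way to confirm the new packages landed (the package names below assume the IUS naming scheme shown earlier):

[root@db01 ~]# rpm -qa | grep -i mysql56u
[root@db01 ~]# mysql --version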

During the upgrade process, I noticed that I could no longer log in with the root MySQL user. So to reset the root MySQL password:

[root@db01 ~]# service mysqld stop
[root@db01 ~]# mysqld_safe --skip-grant-tables &
[root@db01 ~]# mysql -uroot
mysql> use mysql;
mysql> update user set password=PASSWORD("enternewpasswordhere") where User='root';
mysql> flush privileges;
mysql> quit
[root@db01 ~]# service mysqld stop
[root@db01 ~]# service mysqld start

Once the version has been updated, be sure to run mysql_upgrade. It examines all tables in all databases for incompatibilities with the current version of MySQL Server, and it also upgrades the system tables so that you can take advantage of new privileges or capabilities that might have been added.

[root@db01 ~]# mysql_upgrade

If you find that the upgrade is not going to work for your environment, you can roll back to the original version:

[root@db01 ~]# yum replace mysql56u --replace-with mysql55

The yum-plugin-replace plugin makes upgrading and downgrading MySQL fast and simple. But to reiterate an earlier point: make sure you test this on a development server before applying it to production. It is always possible that something may not be compatible with the new version of MySQL, so always test first so you know what to expect!

Benchmark MySQL with Sysbench

Tuning your MySQL configuration day in and day out, without any idea of what the server's hardware can actually do in a perfect world, can be a bit frustrating. This is where a tool like sysbench comes into play. Sysbench lets you get an idea of how MySQL will perform on your chosen server under load, using a basic set of tests.

It is important to note that this guide will not show you how to benchmark your existing MySQL dataset; instead, it shows how your overall server will react to a generic MySQL dataset under heavy load.

This becomes useful when you want to swap those SAS drives for SSDs, or when comparing MySQL on your own server against something like Amazon RDS or Rackspace Cloud Databases. It lets you get a feel for where the bottlenecks may come into play: IO, network saturation, CPU, and so on.
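
When you run the benchmarks later in this article, it also helps to watch the database server from a second terminal to see which resource saturates first. A minimal sketch, assuming the sysstat package (which provides iostat) is installed on the database server:

[root@db01 ~]# iostat -x 5    # extended per-device IO statistics every 5 seconds
[root@db01 ~]# vmstat 5       # CPU, memory, and run-queue pressure every 5 seconds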

Getting started with sysbench is pretty straightforward. I'll outline how to create the test dataset, then perform a few benchmarks against that dataset. For the purposes of this article, I am most concerned with how many transactions per second MySQL can handle on my server in a perfect world.

First, log into your database server and create a new test database. Do not attempt to use an existing database with content, as sysbench will populate it with its own tables. Two GRANT statements are posted on purpose: one for remote access and one for localhost. Set the access, username, and password as needed for your environment:

[root@db01 ~]# mysql
mysql> create database sbtest;
mysql> grant all on sbtest.* to 'sysbench'@'%' identified by 'your_uber_secure_password';
mysql> grant all on sbtest.* to 'sysbench'@'localhost' identified by 'your_uber_secure_password';
mysql> flush privileges;

Next, log into your server running sysbench, and install it:

# CentOS 6
[root@sysbench01 ~]#  rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@sysbench01 ~]#  yum install sysbench

# CentOS 7
[root@sysbench01 ~]#  rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
[root@sysbench01 ~]#  yum install sysbench

# Ubuntu 12.04 / Ubuntu 14.04
[root@sysbench01 ~]#  apt-get update
[root@sysbench01 ~]#  apt-get install sysbench

On the sysbench server, run sysbench with the prepare statement so it can generate a table with data to be used during the benchmark. This command will populate a table in the sbtest database with 1,000,000 rows of data, and force the InnoDB storage engine:

[root@sysbench01 ~]# sysbench --test=oltp --oltp-table-size=1000000 --mysql-host=192.168.1.1 --mysql-db=sbtest --mysql-user=sysbench --mysql-password=your_uber_secure_password --db-driver=mysql --mysql-table-engine=innodb prepare

You can verify the table was written properly on your database server by:

[root@db01 ~]# mysql
mysql> use sbtest;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+------------------+
| Tables_in_sbtest |
+------------------+
| sbtest           |
+------------------+
1 row in set (0.00 sec)

mysql> select count(*) from sbtest;
+----------+
| count(*) |
+----------+
|  1000000 |
+----------+
1 row in set (0.13 sec)

Back on the server you are running sysbench on, we are going to run a benchmark using a read/write test (--oltp-read-only=off), for a max time of 60 seconds using 64 threads, with the test mode set to complex (range queries, range SUM, range ORDER BY, inserts and updates on indexed as well as non-indexed columns, and row deletes).

[root@sysbench01 ~]# sysbench --test=oltp --oltp-table-size=1000000 --mysql-host=192.168.1.1 --mysql-db=sbtest --mysql-user=sysbench --mysql-password=your_uber_secure_password --max-time=60 --oltp-test-mode=complex --oltp-read-only=off --max-requests=0 --num-threads=64 --db-driver=mysql run

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 64

Doing OLTP test.
Running mixed OLTP test
Using Special distribution (12 iterations,  1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Threads started!
Time limit exceeded, exiting...
(last message repeated 63 times)
Done.

OLTP test statistics:
    queries performed:
        read:                            1932084
        write:                           690030
        other:                           276012
        total:                           2898126
    transactions:                        138006 (2299.32 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 2622114 (43687.09 per sec.)
    other operations:                    276012 (4598.64 per sec.)

Test execution summary:
    total time:                          60.0203s
    total number of events:              138006
    total time taken by event execution: 3839.0815
    per-request statistics:
         min:                                  8.76ms
         avg:                                 27.82ms
         max:                                313.65ms
         approx.  95 percentile:              50.64ms

Threads fairness:
    events (avg/stddev):           2156.3438/34.49
    execution time (avg/stddev):   59.9856/0.01

Let's say you want to run the same test, but perform it using read-only queries:

[root@sysbench01 ~]# sysbench --test=oltp --oltp-table-size=1000000 --mysql-host=192.168.1.1 --mysql-db=sbtest --mysql-user=sysbench --mysql-password=your_uber_secure_password --max-time=60 --oltp-test-mode=complex --oltp-read-only=on --max-requests=0 --num-threads=64 --db-driver=mysql run

Here is an example of running the test in read/write mode, and disconnecting and reconnecting after each query:

[root@sysbench01 ~]# sysbench --test=oltp --oltp-table-size=1000000 --mysql-host=192.168.1.1 --mysql-db=sbtest --mysql-user=sysbench --mysql-password=your_uber_secure_password --max-time=60 --oltp-test-mode=complex --oltp-read-only=off --max-requests=0 --num-threads=64 --db-driver=mysql --oltp-reconnect-mode=query run

Once you are done with your testing, you can clean up the database by:

[root@db01 ~]# mysql
mysql> drop database sbtest;
mysql> DROP USER 'sysbench'@'localhost';
mysql> DROP USER 'sysbench'@'%';
mysql> flush privileges;
mysql> quit

MySQL Slow Query Log

Ever wonder why MySQL is spiking the CPU on your database server during the worst possible times? And when you log into MySQL and run 'show processlist;', you can never seem to catch exactly which query was running? The MySQL slow query log can help!

In short, the slow query log is a recording of all queries that took longer than a specified period of time to run. This information is valuable, as it helps identify which queries are the most intensive; that may lead you to create a table index, or it may expose a plugin for your site that is performing queries in a very inefficient manner.

On most systems, the slow query log is disabled by default. However, enabling it is very simple and can be done without restarting MySQL.

Getting started, first create the log file, and set the proper permissions:

[root@db01 ~]# touch /var/lib/mysql/slow-log
[root@db01 ~]# chown mysql:mysql /var/lib/mysql/slow-log
[root@db01 ~]# chmod 640 /var/lib/mysql/slow-log

Now enable the slow query log without restarting MySQL. The commands below will instruct MySQL to log any query that takes longer than 2 seconds:

[root@db01 ~]# mysql
mysql> SET GLOBAL slow_query_log=1;
mysql> SET GLOBAL slow_query_log_file="/var/lib/mysql/slow-log";
mysql> SET GLOBAL long_query_time=2;
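
You can confirm the changes took effect immediately, without a restart:

mysql> SHOW VARIABLES LIKE 'slow_query_log%';
mysql> SHOW VARIABLES LIKE 'long_query_time';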

Now, update the system's my.cnf so the changes will persist if MySQL is restarted in the future:

[root@db01 ~]# vim /etc/my.cnf
[mysqld]
...
#log-output = FILE
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/slow-log
long-query-time = 2
#log-queries-not-using-indexes = 0
...

Finally, don’t forget to set up log rotation for the slow query log, as it can grow very large.

[root@db01 ~]# vim /etc/logrotate.d/mysqllogs
/var/lib/mysql/slow-log {
        missingok
        rotate 2
        size 125M
        create 640 mysql mysql
}
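
You can dry-run the new logrotate config to confirm it parses cleanly; the -d flag prints what logrotate would do without actually rotating anything:

[root@db01 ~]# logrotate -d /etc/logrotate.d/mysqllogs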

Now, with the slow query log enabled, simply check the log file to see which queries took longer than 2 seconds to run. Here is an example:

[root@db01 ~]# cat /var/lib/mysql/slow-log
# Time: 160210 22:45:25
# User@Host: wpadmin[wordpressdb] @ localhost []
# Query_time: 14.609104  Lock_time: 0.000054 Rows_sent: 4  Rows_examined: 83532
SET timestamp=1301957125;
SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';

If the above query is shown repeatedly within the log file, perhaps running several times a minute, this could be a possible culprit for our CPU issues with MySQL. The key fields to pay attention to are “Rows_examined:” and the actual query itself.

If the “Rows_examined:” field reports more than a few hundred rows, you may need to create a table index. The query itself is also important: if a table index is not possible for one reason or another, your developer can review the query and try optimizing or rewriting it so that it returns fewer rows to MySQL, therefore making the query more CPU friendly.
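
Once the log starts filling up, scanning it by eye gets tedious. The mysqldumpslow utility that ships with MySQL can summarize it; for example, to show the top 5 queries sorted by total query time:

[root@db01 ~]# mysqldumpslow -s t -t 5 /var/lib/mysql/slow-log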

Exporting and importing compressed MySQL Dumps

Typically when creating MySQL dumps, I prefer to use Holland. However, sometimes you just need to be able to dump and import a database for various reasons.

Below are two quick commands for doing this with compressed files. If you have a 30G database dump, you probably don't want to waste time uncompressing the whole thing first and using up valuable disk space. And you certainly don't want to dump an uncompressed 30G file onto your file system in the first place.

Export to a compressed MySQL dump

This command will compress the database dump as it is being created. Just be sure to remove the {} when you run the command, and replace {database} with the name of your database:

mysqldump -u {user} -p {database} | gzip > {database}.sql.gz

Or if you prefer to add a timestamp:

mysqldump -u {user} -p {database} | gzip > `date -I`.{database}.sql.gz

Import a compressed MySQL dump

This single command will allow you to import the compressed dump directly into MySQL without having to uncompress it first. This assumes you have already created the database within MySQL.

The command to import it is as follows:

gzip -dc < {database}.sql.gz | mysql -u {user} -p {database}
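
If the pv utility is installed (an assumption; it is not present by default on most distros), you can drop it in front of the pipe to get a progress bar during a long import:

pv {database}.sql.gz | gzip -dc | mysql -u {user} -p {database}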

Determine table size in MySQL

Understanding which tables within your database contain large amounts of data can be very useful. Tables holding a ton of data could indicate that some pruning is needed.

Perhaps the space is taken up by stats data, lead generation, or some other type of data dating back 10 years. It will be up to the DBAs to determine whether any of that data can be archived, of course.

Very large tables can incur a performance penalty if queries against them are not well written and return too many rows. They can also slow down your database dumps, or make them consume more disk space. And disk space in general is a very real concern as well!

You can easily determine what tables within your database(s) are using up significant space by running:

mysql> SELECT table_schema as `Database`, table_name AS `Table`,
round(((data_length + index_length) / 1024 / 1024), 2) `Size in MB`
FROM information_schema.TABLES
ORDER BY (data_length + index_length) DESC;

And the returned results look something like the following:

+---------------------+---------------------------------------+------------+
| Database            | Table                                 | Size in MB |
+---------------------+---------------------------------------+------------+
| example_one         | tracker                               |    2854.70 |
| example_one         | corehits                              |    1321.00 |
| example_two         | reg_submissions                       |    1313.14 |
| example_one         | unsubscribes                          |    1312.00 |
| example_two         | admin                                 |     939.55 |
| example_one         | members                               |     930.08 |
| example_one         | submissions                           |     829.91 |
| example_two         | tracker_archive                       |     813.17 |
| example_two         | tracker_archive2                      |     757.56 |
| example_two         | affiliates                            |     749.56 |
| example_two         | post_data                             |     712.92 |
| example_one         | signups                               |     711.52 |
| example_one         | guests                                |     703.31 |
| example_one         | success                               |     586.41 |
| example_two         | tracker                               |     574.70 |
| example_one         | e_data                                |     434.28 |
| example_one         | reg_data                              |     426.88 |
+---------------------+---------------------------------------+------------+

Another possible way to run this, which I recently found at https://www.percona.com/blog/2008/02/04/finding-out-largest-tables-on-mysql-server/, is posted below.

This one shows more detailed information about each table including how many rows it has, how much data, index size, total size, and the last column shows how much data the index takes compared to the data. If you see a large index size (idx column), it could be worth checking the indexes to see if you have any redundant indexes in place.

The command is:

mysql> SELECT CONCAT(table_schema, '.', table_name), CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows, CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA, CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') idx, CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size, ROUND(index_length / data_length, 2) idxfrac FROM information_schema.TABLES ORDER  BY data_length + index_length DESC LIMIT  10;

I shortened the column titles a bit so it will fit on this blog, but the output will look something like:

+-----------------------+---------+-------+-------+------------+---------+
| CONCAT(.. table_name) | rows    | DATA  | idx   | total_size | idxfrac |
+-----------------------+---------+-------+-------+------------+---------+
| page_cache            | 38.58M  | 3.09G | 2.97G | 6.06G      |    0.96 |
| exit_pages            | 106.84M | 4.78G | 1.02G | 5.80G      |    0.21 |
| shipping_log004       | 33.94M  | 4.06G | 1.16G | 5.22G      |    0.29 |
| visitor_data          | 9.55M   | 4.61G | 0.09G | 4.70G      |    0.02 |
| orders                | 2.40M   | 1.57G | 0.30G | 1.87G      |    0.19 |
| marketing_info        | 25.16M  | 1.85G | 0.00G | 1.85G      |    0.00 |
| clients               | 1.27M   | 1.29G | 0.13G | 1.42G      |    0.10 |
| directory             | 5.87M   | 0.93G | 0.17G | 1.10G      |    0.18 |
| promos                | 2.31M   | 0.79G | 0.05G | 0.84G      |    0.07 |
| product_information   | 9.33M   | 0.39G | 0.35G | 0.74G      |    0.89 |
+-----------------------+---------+-------+-------+------------+---------+
10 rows in set (0.26 sec)

Maybe you need to see if you can set a retention policy on the data stored in a table if it's used for caching or logging purposes.

Or perhaps you need to run OPTIMIZE TABLE to reclaim the physical storage space. Just keep in mind that OPTIMIZE TABLE will lock your tables while it runs, so it may be best to run it during a low traffic period. See the following URL for more information:
http://dev.mysql.com/doc/refman/5.7/en/optimize-table.html
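
As a quick sketch, using one of the hypothetical table names from the sample output above:

mysql> OPTIMIZE TABLE example_one.tracker;

Note that for InnoDB tables, MySQL implements this as a table rebuild (recreate + analyze), so expect it to take a while on large tables.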

Backing up MySQL with Holland

Taken directly from the vendor's website: Holland is an open source backup framework originally developed at Rackspace and written in Python. Its goal is to help facilitate backing up databases with greater configurability, consistency, and ease. Holland is capable of backing up other types of data, too. Because of its plugin structure, Holland can be used to back up anything you want by whatever means you want.

Notable Features
– Pluggable Framework
– Supports Multiple Backup Sets
– Database and Table Filtering (Using GLOBs)
– Auto-Detection of Transactional DBs
– Safe use of –single-transaction with mysqldump
– In-Line and Pluggable Compression
– Backups Suitable for Point-In-Time Recovery / Replication
– MySQL + LVM Snapshot and Logical Backups
– PostgreSQL backups using pgdump

Website: http://hollandbackup.org
Documentation: http://docs.hollandbackup.org

How to install Holland on Ubuntu / Debian

As the packages don't exist in the distros' own repositories, pull them from the project's official repositories.

This works on the following OSes:
– Ubuntu 12.04
– Ubuntu 14.04
– Ubuntu 14.10
– Ubuntu 16.04
– Debian 7
– Debian 8

You can set up the repo by running the following on the command line as root:

eval $(cat /etc/os-release)
 
DIST="xUbuntu_${VERSION_ID}"
[ $ID == "debian" ] && DIST="Debian_${VERSION_ID}.0"
 
curl -s http://download.opensuse.org/repositories/home:/holland-backup/${DIST}/Release.key | sudo apt-key add -
echo "deb http://download.opensuse.org/repositories/home:/holland-backup/${DIST}/ ./" > /etc/apt/sources.list.d/holland.list

Then install Holland:

apt-get update
apt-get install holland holland-mysqldump holland-common

How to install Holland on CentOS and RedHat

The Holland packages exist in the EPEL repositories. Using the list below, install the EPEL repo for your distro:

# CentOS 5 / RedHat 5
rpm -ivh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
 
# CentOS 6 / RedHat 6
rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

# CentOS 7 / RedHat 7
rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

Then install Holland by:

yum install holland holland-mysqldump holland-common
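
After installation, you should be able to confirm that the mysqldump plugin was picked up via Holland's listplugins subcommand:

holland listplugins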

How to configure Holland

Now that Holland is installed, it's time to configure it.

First create the backup directory where you will be storing your database backups:

mkdir -p /var/lib/mysqlbackup

Then configure Holland to store the backups in the directory created above:

vim /etc/holland/holland.conf
...
backup_directory = /var/lib/mysqlbackup
backupsets = default
...

Next, we set up the default backup set within Holland, which controls the retention period, compression, credentials, etc.:

cat << EOF > /etc/holland/backupsets/default.conf
 
## Default Backup-Set
##
## Backs up all MySQL databases in a one-file-per-database fashion using
## lightweight in-line compression and engine auto-detection. This backup-set
## is designed to provide reliable backups "out of the box", however it is
## generally advisable to create additional custom backup-sets to suit
## one's specific needs.
##
## For more information about backup-sets, please consult the online Holland
## documentation. Fully-commented example backup-sets are also provided, by
## default, in /etc/holland/backupsets/examples.
 
[holland:backup]
plugin = mysqldump
backups-to-keep = 7
auto-purge-failures = yes
purge-policy = after-backup
estimated-size-factor = 1.0
 
# This section defines the configuration options specific to the backup
# plugin. In other words, the name of this section should match the name
# of the plugin defined above.
[mysqldump]
file-per-database       = yes
#lock-method        = auto-detect
#databases          = "*"
#tables             = "*"
#stop-slave         = no
#bin-log-position   = no
 
# The following section is for compression. The default, unless the
# mysqldump provider has been modified, is to use inline fast gzip
# compression (which is identical to the commented section below).
[compression]
method = gzip
options = "--rsyncable"
 
[mysql:client]
defaults-extra-file       = /root/.my.cnf
EOF

In order for Holland to back up the databases, set up the /root/.my.cnf by running:

cat << EOF > /root/.my.cnf
[client]
user=root
password=your_root_mysql_password_here
EOF
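
Since this file contains the root MySQL password, it is worth locking down its permissions so only root can read it:

chmod 600 /root/.my.cnf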

Now set up the nightly cronjob that will run Holland:

cat << EOF > /etc/cron.d/holland
30 3 * * * root /usr/sbin/holland -q bk
EOF

Finally, run the backup job to ensure Holland works. Please note this will lock your tables when it runs, so do not run this during peak times as it could cause downtime for your site or application!

/usr/sbin/holland -q bk

You can verify Holland ran successfully by checking the logs:

tail -f /var/log/holland/holland.log
2015-12-16 03:58:01,789 [INFO] Wrote backup manifest /var/lib/mysqlbackup/default/20151216_035801/backup_data/MANIFEST.txt
2015-12-16 03:58:01,793 [INFO] Executing: /usr/bin/mysqldump --defaults-file=/var/lib/mysqlbackup/default/20151216_035801/my.cnf --flush-privileges --max-allowed-packet=128M --lock-tables mysql
2015-12-16 03:58:01,888 [ERROR] /usr/bin/mysqldump[25783]: -- Warning: Skipping the data of table mysql.event. Specify the --events option explicitly.
2015-12-16 03:58:01,888 [INFO] Final on-disk backup size 181.77KB
2015-12-16 03:58:01,889 [INFO] 26.47% of estimated size 686.67KB
2015-12-16 03:58:01,889 [INFO] Backup completed in 0.24 seconds
2015-12-16 03:58:01,902 [INFO] Purged default/20151209_035801
2015-12-16 03:58:01,902 [INFO] 1 backups purged
2015-12-16 03:58:01,909 [INFO] Released lock /etc/holland/backupsets/default.conf
2015-12-16 03:58:01,909 [INFO] --- Ending backup run ---

Holland Tips and Tricks

Below are some of the more common tips and tricks for working with Holland. They are broken down as follows:
1. Alternate username for Holland
2. Change Default Retention
3. Hourly Backups
4. Backing up 2 or more database servers

1. Alternate username for Holland

For security purposes, a client may want a user other than root performing the Holland backups. To configure this, first create a new MySQL user for Holland:

mysql
GRANT SELECT, RELOAD, SUPER, LOCK TABLES, REPLICATION CLIENT, SHOW VIEW ON *.* TO 'holland_backup'@'localhost' IDENTIFIED BY 'secretpassword';
flush privileges;

Now update /etc/holland/backupsets/default.conf to reflect the new credentials:

vi /etc/holland/backupsets/default.conf
...
[mysql:client]
# defaults-extra-file       = /root/.my.cnf
user = holland_backup
password = secretpassword
...

2. Change Default Retention

This guide sets Holland to keep 7 backups of your databases. Please note that this only equals 7 days of retention if you run a single Holland backup job each night.

With that said, if you wanted to configure a retention of 14 days, you would update “backups-to-keep” in /etc/holland/backupsets/default.conf as shown below:

vi /etc/holland/backupsets/default.conf
...
[holland:backup]
plugin = mysqldump
backups-to-keep = 14
auto-purge-failures = yes
...

3. Hourly Backups

You can configure Holland to be more aggressive with how often it runs. Assuming we have a MySQL Slave server with a small database, and we want:
– Holland backups every 4 hours
– 7 day retention

You would first update the cronjob as follows:

vi /etc/cron.d/holland
30 */4 * * * root /usr/sbin/holland -q bk

Now update “backups-to-keep” in /etc/holland/backupsets/default.conf to keep 42 backups (6 backups a day * 7 days) as shown below:

vi /etc/holland/backupsets/default.conf
...
[holland:backup]
plugin = mysqldump
backups-to-keep = 42
auto-purge-failures = yes
...

4. Backing up 2 or more database servers

An environment may have two or more database servers. Perhaps they exist on another server, or perhaps you are using something like Rackspace Cloud Databases or Amazon RDS. Below is how you can configure Holland to back up multiple targets:

First, create a copy of the default.conf we configured in this guide to serve as a starting point:

cd /etc/holland/backupsets/
cp default.conf clouddatabase.conf

Now update the clouddatabase.conf to configure the remote database server IP and credentials. You must comment out the defaults-extra-file as shown below:

vim clouddatabase.conf

...
[mysql:client]
#defaults-extra-file = /root/.my.cnf

# define external database information

host     = xxxxxxxxx.rackspaceclouddb.com
user     = holland
password = your_holland_mysql_password_here
port     = 3306
...

Now update your retention settings:

vi /etc/holland/backupsets/clouddatabase.conf
...
[holland:backup]
plugin = mysqldump
backups-to-keep = 7
auto-purge-failures = yes
...

Finally, update the holland.conf to include your second config file:

vim /etc/holland/holland.conf
...
backupsets = default, clouddatabase
...

Once again, run the backup job to ensure Holland works. Please note this will lock your tables while it runs, so do not run it during peak times as it could cause downtime for your site or application!

/usr/sbin/holland -q bk

Detecting if MySQLdump was running

Ever have those times when your site was down for a few minutes, and you have no way to explain why?

One possible thing to check is whether one of your developers accidentally ran a mysqldump against your production databases during that time. If you suspect it is currently running, you can spot it pretty easily by:

mysql
mysql> show processlist;
...
SELECT /*!40001 SQL_NO_CACHE */ * FROM `your_table`;

You can then verify it by running the following query:

mysql> select COUNT(1) mysqldumpThreads
from information_schema.processlist
where info like 'SELECT /*!40001 SQL_NO_CACHE */%';
+------------------+
| mysqldumpThreads |
+------------------+
|                1 |
+------------------+
1 row in set (0.00 sec)

If you get something other than 0, like what is shown above, then mysqldump is running.
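
If you want to catch it in the act, one option is to poll that query on an interval. A small sketch using watch (quoting is easier if the query lives in a file; the file path here is just an example):

[root@db01 ~]# cat << 'EOF' > /root/check_mysqldump.sql
SELECT COUNT(1) AS mysqldumpThreads
FROM information_schema.processlist
WHERE info LIKE 'SELECT /*!40001 SQL_NO_CACHE */%';
EOF
[root@db01 ~]# watch -n 5 'mysql < /root/check_mysqldump.sql'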

If the issue has already passed and you are just trying to do a root cause analysis, check your MySQL slow query log for queries like the one posted below, around the timestamp in question:

# Time: 151112 10:34:12
# User@Host: root[root] @ localhost []
# Query_time: 539.418400  Lock_time: 0.000041 Rows_sent: 60547211  Rows_examined: 60547211
SET timestamp=1420389252;
SELECT /*!40001 SQL_NO_CACHE */ * FROM `your_table`;

Increase open-files-limit in MariaDB on CentOS 7 with systemd

In the new age of Linux known as systemd, a word that many still prepend colorful words to, I stumbled across another gotcha: changing MariaDB's open files limit.

Typically you would configure this in /etc/my.cnf or /etc/security/limits.conf, but this no longer appears to be the case, as systemd wants to control it.

I ran across having to do this because I was receiving the following errors from the database when running mysqldump, mysqlcheck, etc.:

mysqldump: Error: 'Out of resources when opening file '/tmp/#sql_7f96_2.MAI' (Errcode: 24)' when trying to dump tablespaces

mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE 'EXAMPLE'': Can't read dir of './DB_NAME/' (errno: 24) (1018)

mysqldump: Got error: 23: "Out of resources when opening file './DB_NAME/TABLE.MYD' (Errcode: 24)" when using LOCK TABLES

So to increase the open files limit, which is the accepted fix for this, you need to do the following on OSes that use systemd:

First, check what your current open files limit is set to in MariaDB:

MariaDB [(none)]> SHOW VARIABLES LIKE 'Open_files_limit';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| open_files_limit | 1024  |
+------------------+-------+
1 row in set (0.00 sec)

Create a directory to store the MariaDB service changes for systemd, then create the override file:

mkdir -p /etc/systemd/system/mariadb.service.d/
cat << EOF > /etc/systemd/system/mariadb.service.d/limits.conf
[Service]
LimitNOFILE=20000
EOF

Now reload the systemd daemon so it is aware of the new override. This does not restart any services:

systemctl daemon-reload
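
Before restarting anything, you can confirm that systemd picked up the override:

systemctl show mariadb | grep LimitNOFILE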

Finally, restart the MariaDB service so it will apply the new setting:

systemctl stop mariadb
systemctl start mariadb

You can now verify that your setting went into place:

MariaDB [(none)]> SHOW VARIABLES LIKE 'Open_files_limit';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| open_files_limit | 20000 |
+------------------+-------+
1 row in set (0.00 sec)

More details, and the reasoning behind this, can be found in the stock unit file itself:

cat /usr/lib/systemd/system/mariadb.service
# It's not recommended to modify this file in-place, because it will be
# overwritten during package upgrades.  If you want to customize, the
# best way is to create a file "/etc/systemd/system/mariadb.service",
# containing
#	.include /lib/systemd/system/mariadb.service
#	...make your changes here...
# or create a file "/etc/systemd/system/mariadb.service.d/foo.conf",
# which doesn't need to include ".include" call and which will be parsed
# after the file mariadb.service itself is parsed.
#
# For more info about custom unit files, see systemd.unit(5) or
# http://fedoraproject.org/wiki/Systemd#How_do_I_customize_a_unit_file.2F_add_a_custom_unit_file.3F

# For example, if you want to increase mariadb's open-files-limit to 10000,
# you need to increase systemd's LimitNOFILE setting, so create a file named
# "/etc/systemd/system/mariadb.service.d/limits.conf" containing:
#	[Service]
#	LimitNOFILE=10000

# Note: /usr/lib/... is recommended in the .include line though /lib/...
# still works.
# Don't forget to reload systemd daemon after you change unit configuration:
# root> systemctl --system daemon-reload

MySQL recovery mode

InnoDB tables rarely get corrupted, but corruption can occur when MySQL is killed off unexpectedly by power outages, hardware problems, or the OOM killer. When this happens, you will see some scary looking log entries in your MySQL error log, such as:

InnoDB: Assertion failure in thread 114256846 in file blah.c line 5423
InnoDB: Failing assertion: page_get_n_recs(page) > 1
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
...
obscure backtrace message
...
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.

Usually, you can get MySQL back up in recovery mode pretty quickly by setting the following variable in /etc/my.cnf under the [mysqld] section. Valid values range from 1 to 6; start with 1, the least intrusive level, and only increase it if MySQL still fails to start (levels of 4 and above can permanently lose data):

vim /etc/my.cnf

[mysqld]
innodb_force_recovery = 1

Then try to restart MySQL:

service mysql restart

Once you have MySQL running, you can now check to see what tables may be corrupted:

mysqlcheck -c --all-databases

After you find the corrupted database, you can try to repair it by running the command below. Keep in mind that REPAIR TABLE does not support InnoDB tables; for those, the usual path is to dump the affected tables with mysqldump and reload them:

mysqlcheck -r your_database_name
e.g.:  mysqlcheck -r drupal

Now confirm that the databases and tables are showing ‘OK’:

mysqlcheck -c --all-databases

If all is well, take MySQL out of recovery mode by removing the following variable from /etc/my.cnf:

vim /etc/my.cnf
# Remove
innodb_force_recovery = 1

Finally, restart MySQL and check your logs to ensure that there are no further errors being recorded:

service mysql restart

MySQL slave promotion

There are times when you may need to promote a MySQL Slave server to a Master server. This task can be done pretty easily, but it is important to note that once this is done, you will need to rebuild your old Master server, as it is no longer the point of truth for your databases.

Stop the ‘IO_THREAD’ on the MySQL SLAVE:

mysql> stop slave io_thread;

Check the ‘processlist’ to make sure that the SLAVE has read the entire relay log:

mysql> show processlist;
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------------+------------------+
| Id | User        | Host      | db   | Command | Time | State                                                                        | Info             |
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------------+------------------+
|  2 | root        | localhost | NULL | Query   |    0 | NULL                                                                         | SHOW PROCESSLIST |
|  4 | system user |           | NULL | Connect |  124 | Slave has read all relay log; waiting for the slave I/O thread to update it | NULL             |
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------------+------------------+
2 rows in set (0.00 sec)

Stop the MySQL Slave process on MySQL 5.0 and MySQL 5.1

mysql> stop slave;
mysql> reset slave;

Stop the MySQL Slave process on MySQL 5.5 and MySQL 5.6

mysql> stop slave;
mysql> reset slave all;

Confirm that the following line is commented out in your /etc/my.cnf:

# read-only = 1

Restart MySQL to re-read your my.cnf:

service mysql restart
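
Once MySQL is back up, you can verify the server is writable:

mysql> SHOW VARIABLES LIKE 'read_only';

The value should now report OFF, confirming the former slave will accept writes.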

Finally, don’t forget to update your web applications or anything else using the databases to point to the MySQL Slave IP address since it is now the new MySQL Master server.