Benchmark MySQL with Sysbench

Tuning your MySQL configuration day in and day out without knowing what the server’s hardware can actually do in a perfect world can be frustrating. This is where a tool like sysbench comes into play. Sysbench gives you an idea of how MySQL will perform on your chosen server under load, using a basic set of tests.

It is important to note that this guide will not show you how to benchmark your existing MySQL dataset, but instead, it shows how your overall server will react to a generic MySQL dataset under heavy load.

This becomes useful when you want to swap those SAS drives for SSDs, or perhaps compare running MySQL on a server against something like Amazon RDS or Rackspace Cloud Databases. It lets you get a feel for where the bottlenecks may come into play, whether from I/O, network saturation, CPU, etc.

Getting started with sysbench is pretty straightforward. I’ll outline how to create the test dataset, then perform a few benchmarks off that dataset. For the purposes of this article, I am most concerned with how many transactions per second MySQL can handle on my server in a perfect world.

First, log into your database server and create a new test database. Do not use an existing database with content, as sysbench will populate it with its own tables. Two grant statements are posted on purpose, one for remote access and one for localhost. Set the access, username, and password as needed for your environment:

[root@server ~]# mysql
mysql> create database sbtest;
mysql> grant all on sbtest.* to 'sysbench'@'%' identified by 'your_uber_secure_password';
mysql> grant all on sbtest.* to 'sysbench'@'localhost' identified by 'your_uber_secure_password';
mysql> flush privileges;

Next, log into your server running sysbench, and install it:

# CentOS 6
[root@server ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@server ~]# yum install sysbench

# CentOS 7
[root@server ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
[root@server ~]# yum install sysbench

# Ubuntu 12.04 / Ubuntu 14.04
[root@server ~]# apt-get update
[root@server ~]# apt-get install sysbench

On the sysbench server, run sysbench with the prepare statement so it can generate a table with data to be used during the benchmark. This command will populate a table in the sbtest database with 1,000,000 rows of data, forcing the InnoDB storage engine:

[root@server ~]# sysbench --test=oltp --oltp-table-size=1000000 --mysql-host=192.168.1.1 --mysql-db=sbtest --mysql-user=sysbench --mysql-password=your_uber_secure_password --db-driver=mysql --mysql-table-engine=innodb prepare

You can verify the table was written properly on your database server:

[root@server ~]# mysql
mysql> use sbtest;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+------------------+
| Tables_in_sbtest |
+------------------+
| sbtest           |
+------------------+
1 row in set (0.00 sec)

mysql> select count(*) from sbtest;
+----------+
| count(*) |
+----------+
|  1000000 |
+----------+
1 row in set (0.13 sec)

Back on the server you are running sysbench on, we are going to run a benchmark using a read/write test (--oltp-read-only=off) for a max time of 60 seconds using 64 threads, with the test mode set to complex (range queries, range SUM, range ORDER BY, inserts, updates on index and non-index columns, and row deletes).

[root@server ~]# sysbench --test=oltp --oltp-table-size=1000000 --mysql-host=192.168.1.1 --mysql-db=sbtest --mysql-user=sysbench --mysql-password=your_uber_secure_password --max-time=60 --oltp-test-mode=complex --oltp-read-only=off --max-requests=0 --num-threads=64 --db-driver=mysql run

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 64

Doing OLTP test.
Running mixed OLTP test
Using Special distribution (12 iterations,  1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Threads started!
Time limit exceeded, exiting...
(last message repeated 63 times)
Done.

OLTP test statistics:
    queries performed:
        read:                            1932084
        write:                           690030
        other:                           276012
        total:                           2898126
    transactions:                        138006 (2299.32 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 2622114 (43687.09 per sec.)
    other operations:                    276012 (4598.64 per sec.)

Test execution summary:
    total time:                          60.0203s
    total number of events:              138006
    total time taken by event execution: 3839.0815
    per-request statistics:
         min:                                  8.76ms
         avg:                                 27.82ms
         max:                                313.65ms
         approx.  95 percentile:              50.64ms

Threads fairness:
    events (avg/stddev):           2156.3438/34.49
    execution time (avg/stddev):   59.9856/0.01
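If you capture several runs to files, for example while testing different my.cnf settings, a small awk sketch can pull out the transactions-per-second figure for easy comparison. The `extract_tps` helper name is my own; the field layout matches the sysbench 0.4 OLTP statistics shown above:

```shell
# Pull the transactions/sec figure out of captured sysbench 0.4 output.
extract_tps() {
  awk '/transactions:/ { gsub(/\(/, "", $3); print $3 }'
}

# Demonstrated against the line format sysbench prints:
echo '    transactions:                        138006 (2299.32 per sec.)' | extract_tps
# prints: 2299.32
```

On a live benchmark you would pipe the whole report through it, e.g. `sysbench ... run | extract_tps`.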

Let’s say you want to run the same test, but using read-only queries:

[root@server ~]# sysbench --test=oltp --oltp-table-size=1000000 --mysql-host=192.168.1.1 --mysql-db=sbtest --mysql-user=sysbench --mysql-password=your_uber_secure_password --max-time=60 --oltp-test-mode=complex --oltp-read-only=on --max-requests=0 --num-threads=64 --db-driver=mysql run

Here is an example of running the test in read/write mode, and disconnecting and reconnecting after each query:

[root@server ~]# sysbench --test=oltp --oltp-table-size=1000000 --mysql-host=192.168.1.1 --mysql-db=sbtest --mysql-user=sysbench --mysql-password=your_uber_secure_password --max-time=60 --oltp-test-mode=complex --oltp-read-only=off --max-requests=0 --num-threads=64 --db-driver=mysql --oltp-reconnect-mode=query run

Once you are done with your testing, you can clean up the database:

[root@server ~]# mysql
mysql> drop database sbtest;
mysql> DROP USER 'sysbench'@'localhost';
mysql> DROP USER 'sysbench'@'%';
mysql> flush privileges;
mysql> quit

Load testing with Siege

Taken directly from the author’s site at https://www.joedog.org/siege-home: Siege is an http load testing and benchmarking utility. It was designed to let web developers measure their code under duress, to see how it will stand up to load on the internet. Siege supports basic authentication, cookies, HTTP, HTTPS and FTP protocols. It lets its user hit a server with a configurable number of simulated clients. Those clients place the server “under siege.”

This tool becomes extremely useful when you need to get a feel for how a solution will hold up under normal or high traffic events. During these simulated traffic events, it may help expose inefficient database queries, CPU-intensive code, or opportunities for caching, or simply demonstrate the need to add additional web servers to increase overall site capacity.

Unlike many other load testers out there, Siege allows you to populate a file with a listing of your URLs to help generate a more realistic simulation. While Siege supports a number of different tests, I generally keep it simple and basic. I’ll outline how I utilize it below.

First, install siege:

# CentOS 6 / RedHat 6
[root@server ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@server ~]# yum install siege

# CentOS 7 / RedHat 7
[root@server ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
[root@server ~]# yum install siege

# Ubuntu 12.04 / Ubuntu 14.04
[root@server ~]# apt-get update
[root@server ~]# apt-get install siege

Now click around your site, recording 10 or more URLs. Make sure the list includes various sections of your site that people are likely to visit. For example, if running an e-commerce site, include the base domain several times since it is usually accessed most often, but also include several landing pages, a couple of products, and maybe the shopping cart. For example:

[root@server ~]# vim /root/site.txt
http://example-store.com
http://example-store.com
http://example-store.com/cameras
http://example-store.com/electronics
http://example-store.com/home-decor/decorative-accents
http://example-store.com/home-decor/decorative-accents/herald-glass-vase
http://example-store.com/apparel/shirts
http://example-store.com/home-decor/books-music
http://example-store.com/home-decor/books-music/a-tale-of-two-cities.html
http://example-store.com/sale.html
http://example-store.com
http://example-store.com
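Rather than clicking around by hand, you can also seed the URL file from an existing access log, which naturally weights pages by how often they are actually requested. This sketch assumes a combined-format Apache/nginx log where the request path is field 7; the hostname and log filename are illustrative:

```shell
# Build a siege URL file from the 10 most requested paths in an access log.
awk '{ print $7 }' access.log \
  | sort | uniq -c | sort -rn | head -10 \
  | awk '{ print "http://example-store.com" $2 }' > /root/site.txt
```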

You should now be able to run your load test:

[root@server ~]# siege -c50 -d3 -t30M -i -f /root/site.txt

This load test will send 50 concurrent connections, with a random delay between 1 and 3 seconds, lasting for 30 minutes, against the URLs listed in /root/site.txt.

A couple quick notes about the flags:

-c, --concurrent=NUM      CONCURRENT users, default is 10
-d, --delay=NUM           Time DELAY, random delay before each request
                            between 1 and NUM. (NOT COUNTED IN STATS)
-t, --time=NUMm           TIMED testing where "m" is modifier S, M, or H
                            ex: --time=1H, one hour test.
-i, --internet            INTERNET user simulation, hits URLs randomly.
-f, --file=FILE           FILE, select a specific URLS FILE.

While the simulated traffic test is running, things you will want to watch for include:
– Observe the CPU, memory, and disk I/O usage of the servers.
– Check the database for intensive queries consistently running, which may indicate the need for Redis or Memcached.
– Check the MySQL slow query log for queries that may need a table index, or otherwise need to be optimized.
– Check that any caching software you have installed is returning a good hit rate.
– Ensure the site remains online during the tests.
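When the run completes, siege prints a summary block. Here is a small awk sketch to pull the headline numbers out of a captured summary; the sample lines mirror siege 3.x output, so verify the labels against your version:

```shell
# Extract the headline metrics from a saved siege summary.
parse_siege() {
  awk -F':' '/^(Transactions|Availability|Transaction rate)/ {
    gsub(/^[ \t]+/, "", $2)
    print $1 " = " $2
  }'
}

printf '%s\n' \
  'Transactions:              25819 hits' \
  'Availability:              100.00 %' \
  'Transaction rate:          14.33 trans/sec' | parse_siege
```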

How to resize ext3 or ext4 filesystems

You have a 50G Amazon Elastic Block Storage device, or maybe a 50G Rackspace SSD Cloud Block Storage device that is running low on space. So you clone it and create a 100G drive. But when you mount it on your system, it still only shows 50G. What the #$%&!

This is because the partition and filesystem need to be expanded to know they have more space!

Assuming that you have the larger drive already attached to your system, first verify the drive size vs the partition size:

[root@server ~]# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   20G  0 disk 
└─xvda1 202:1    0   20G  0 part /
xvdb    202:16   0  100G  0 disk 
└─xvdb1 202:17   0   50G  0 part

Checking the output above, you will see that xvdb has 100G available, but the partition, xvdb1, has only 50G. I’ll outline how to expand the filesystem to use all the new space on this ext4 volume.

Unmount the disk if it’s currently mounted:

[root@server ~]# umount /dev/xvdb1

Confirm the drive does not have any filesystem errors:

[root@server ~]# e2fsck /dev/xvdb1

Now comes the nail-biting part. We need to remove the partition, then recreate it so it will see the entire disk. This guide assumes there is only one partition on this drive. This will not remove the data; however, never trust notes that you didn’t verify yourself. Make sure you have backups before proceeding!

[root@server ~]# fdisk /dev/xvdb
d        # delete the existing partition
n        # create a new partition
p        # make it a primary partition
1        # partition number 1
enter    # accept the default first sector
enter    # accept the default last sector (end of the disk)
t        # set the partition type
83       # 83 = Linux
w        # write the changes to disk

Now, once the partition is recreated, you need to run another filesystem check:

[root@server ~]# e2fsck -f /dev/xvdb1
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/xvdb1: 13/3276800 files (0.0% non-contiguous), 251700/13107024 blocks

Then resize the filesystem:

[root@server ~]# resize2fs /dev/xvdb1
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/xvdb1 to 26214055 (4k) blocks.
The filesystem on /dev/xvdb1 is now 26214055 blocks long.
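As a sanity check, the block count resize2fs reports should line up with the size of the device. With 4k blocks, a little shell arithmetic confirms the number above works out to roughly 100G (integer division rounds down):

```shell
# Convert the resize2fs block count (4 KiB blocks) back into GiB.
blocks=26214055
echo "$(( blocks * 4 / 1024 / 1024 )) GiB"
# prints: 99 GiB (just under 100G due to integer rounding)
```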

Finally, mount the volume and confirm it shows the correct space:

[root@server ~]# mount -a
[root@server ~]# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   20G  0 disk 
└─xvda1 202:1    0   20G  0 part /
xvdb    202:16   0  100G  0 disk 
└─xvdb1 202:17   0  100G  0 part /mnt

[root@server ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       20G  1.2G   18G   7% /
tmpfs           496M     0  496M   0% /dev/shm
/dev/xvdb1       99G   60M   94G   1% /mnt

Logrotate examples

Logrotate is a useful application for automatically rotating your log files. If you choose to store certain logs in directories that logrotate doesn’t know about, you need to create a definition for this.

I have posted a few articles about this for various scenarios, but I wanted to include one that just contains examples for reference.

Typically, entries for logrotate should be stored inside of: /etc/logrotate.d/

To rotate out the MySQL slow query log after it reaches 125M in size, with a retention of two rotated logs, use the following:

[root@server ~]# vim /etc/logrotate.d/mysqllogs
/var/lib/mysql/slow-log {
        missingok
        rotate 2
        size 125M
        create 640 mysql mysql
}
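One caveat worth hedging on: with only `create` in the definition, mysqld can keep writing to the rotated file’s old inode until the log is reopened. A variant using a `postrotate` script to flush the logs handles this; it assumes mysqladmin can authenticate, for example via credentials in root’s ~/.my.cnf:

```
/var/lib/mysql/slow-log {
        missingok
        rotate 2
        size 125M
        create 640 mysql mysql
        postrotate
                # Ask mysqld to reopen its log files so writes go
                # to the newly created slow-log, not the old inode.
                /usr/bin/mysqladmin flush-logs
        endscript
}
```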

To rotate out a custom log file for your application daily, keep seven days’ worth of logs, compress them, and ensure the log stays owned by apache:

[root@server ~]# vim /etc/logrotate.d/applicationname
/var/www/vhosts/example.com/application/logs/your_app.log {
        missingok
        daily
        rotate 7
        compress
        create 644 apache apache
}

If you would like to rotate your Holland backup logs weekly, keeping one month’s worth of logs, compress them, and ensure the log stays owned by root:

[root@server ~]# vim /etc/logrotate.d/holland
/var/log/holland/holland.log {
    rotate 4
    weekly
    compress
    missingok
    create root adm
}

If you would like to rotate out two logs using one definition, simply add the second path to the first line as shown below:

[root@server ~]# vim /etc/logrotate.d/holland
/var/log/holland.log /var/log/holland/holland.log {
    rotate 4
    weekly
    compress
    missingok
    create root adm
}

Postfix – Flush mail queue

I have seen servers where the Postfix mail queue is jammed up with mail, perhaps from a programming error in the application, or maybe spam if the site was hacked. During these times, you may just want to purge the queue since you don’t want the messages going out. Below are some very simple methods of doing this:

How to remove all mail in the Postfix queue:

[root@server ~]# postsuper -d ALL

How to remove all email in the deferred queue:

[root@server ~]# postsuper -d ALL deferred

Remove all email in the queue for a specific domain:

[root@server ~]# postqueue -p | tail -n +2 | awk 'BEGIN { RS = "" } /@example\.com/ { print $1 }' | tr -d '*!' | postsuper -d -

Remove all email in the queue from a specific email address:

[root@server ~]# postqueue -p | tail -n +2 | awk 'BEGIN { RS = "" } /user@example\.com/ { print $1 }' | tr -d '*!' | postsuper -d -
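Before deleting anything, it can help to see who the queued mail is actually from. This sketch summarizes the queue by sender; the `summarize_queue` helper and the canned queue records are illustrative, matching the paragraph layout that postqueue -p prints (sender is the seventh field of each record):

```shell
# Count queued messages per sender address.
summarize_queue() {
  awk 'BEGIN { RS = "" } { print $7 }' | sort | uniq -c | sort -rn
}

# On a live server: postqueue -p | tail -n +2 | summarize_queue
# Demonstrated here with two canned queue records:
printf '%s\n\n%s\n\n' \
'A1B2C3D4E5*     4513 Tue Feb 16 12:00:01  spammer@example.com
                                         victim@example.org' \
'F6G7H8I9J0      2048 Tue Feb 16 12:00:05  spammer@example.com
                                         other@example.org' | summarize_queue
# each output line: message count, then sender address
```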

Force fsck on next reboot

Ever needed to do a fsck on the root filesystem? I was happy to find that an old method of forcing a file system check on reboot still exists on most modern distributions. I tested this successfully on the following operating systems:

CentOS 6
Ubuntu 12.04
Ubuntu 14.04

Before proceeding, it is a really good idea to have a monitor and keyboard hooked up to the server, just in case the fsck requires manual intervention due to bad blocks or similar, which would prevent a normal boot!

First, check to see when the server last had a file system check run. This command should be run against the device for /, such as /dev/sda1, or if using LVM, something like /dev/mapper/ubuntu--vg-root.

[root@server ~]# tune2fs -l /dev/mapper/ubuntu--vg-root |grep -iE "last|state"
Last mounted on:          /
Filesystem state:         clean
Last mount time:          Thu Sep 10 23:48:18 2015
Last write time:          Sun Mar  1 16:02:06 2015
Last checked:             Sun Mar  1 16:02:04 2015

As shown in the output above, a fsck has not been run in quite some time! To force a fsck at the next boot, simply type:

[root@server ~]# touch /forcefsck

Now reboot your server:

[root@server ~]# shutdown -r now

Once the server comes back online, confirm the fsck ran by:

[root@server ~]# tune2fs -l /dev/mapper/ubuntu--vg-root |grep -iE "last|state"
Last mounted on:          /
Filesystem state:         clean
Last mount time:          Thu Feb 18 18:40:34 2016
Last write time:          Thu Feb 18 18:40:32 2016

Finally, check to confirm the system removed the file /forcefsck:

[root@server ~]# ls -al /forcefsck
ls: cannot access /forcefsck: No such file or directory

Varnish 3 – Installation and configuration

Varnish is an HTTP reverse proxy that can be installed in front of the web server to provide caching. If the VCLs are properly configured for your site, Varnish can offset the backend server load many times over.

In this guide, it is assumed you already have a running LAMP stack, and Apache’s vhost configurations are stored in /etc/httpd/vhost.d/. At the end of this guide if all goes well, you will be running Varnish 3, with Varnish listening for inbound connections on port 80, and passing any backend connections that cannot be served via cache to Apache on port 8080.

CentOS 6 – Installation and initial configuration

Install the varnish-release package repository, then install:

[root@server ~]# rpm --nosignature -ivh https://repo.varnish-cache.org/redhat/varnish-3.0.el6.rpm
[root@server ~]# yum -y install varnish

Now update your Apache ports and vhosts to 8080 since Varnish will be listening on port 80:

[root@server ~]# sed -i "s/Listen 80\$/Listen 8080/g" /etc/httpd/ports.conf
[root@server ~]# sed -i "s/NameVirtualHost \*:80\$/NameVirtualHost \*:8080/g" /etc/httpd/ports.conf
[root@server ~]# sed -i "s/:80>/:8080>/g" /etc/httpd/vhost.d/*

Configure Varnish to pass connections back to Apache on port 8080:

[root@server ~]# sed -i 's/port = "80"/port = "8080"/g' /etc/varnish/default.vcl

Then update Varnish so it listens on port 80:

[root@server ~]# sed -i 's/VARNISH_LISTEN_PORT=6081$/VARNISH_LISTEN_PORT=80/g' /etc/sysconfig/varnish

Finally, restart Apache and Varnish:

[root@server ~]# service httpd restart
[root@server ~]# service varnish start
[root@server ~]# chkconfig varnish on

Ubuntu 12.04 / 14.04 – Installation and initial configuration

First, setup the Varnish repos:

# Ubuntu 12.04
[root@server ~]# curl -sL http://repo.varnish-cache.org/debian/GPG-key.txt | apt-key add -
[root@server ~]# echo "deb http://repo.varnish-cache.org/ubuntu/ precise varnish-3.0" > /etc/apt/sources.list.d/varnish.list

# Ubuntu 14.04
[root@server ~]# curl -sL http://repo.varnish-cache.org/debian/GPG-key.txt | apt-key add -
[root@server ~]# echo "deb http://repo.varnish-cache.org/ubuntu/ trusty varnish-3.0" > /etc/apt/sources.list.d/varnish.list

Now install Varnish:

[root@server ~]# apt-get update
[root@server ~]# apt-get install varnish

Next update your Apache ports and vhosts to 8080 since Varnish will be listening on port 80:

[root@server ~]# sed -i "s/Listen 80\$/Listen 8080/g" /etc/apache2/ports.conf
[root@server ~]# sed -i "s/NameVirtualHost \*:80\$/NameVirtualHost \*:8080/g" /etc/apache2/ports.conf
[root@server ~]# sed -i "s/:80>/:8080>/g" /etc/apache2/sites-available/*

Configure Varnish to pass connections back to Apache on port 8080:

[root@server ~]# sed -i 's/port = "80"/port = "8080"/g' /etc/varnish/default.vcl

Then update Varnish so it listens on port 80:

[root@server ~]# sed -i 's/^DAEMON_OPTS="-a :6081/DAEMON_OPTS="-a :80/g' /etc/default/varnish
[root@server ~]# sed -i 's/START=no/START=yes/' /etc/default/varnish
[root@server ~]# service apache2 restart
[root@server ~]# service varnish restart

Varnish VCL configuration examples

All of the tuning takes place within the VCL files. For the purpose of this guide, we are going to use the default Varnish configuration file, /etc/varnish/default.vcl, for our examples.

How to enable basic caching of static resources:

sub vcl_recv {
...
    if (req.url ~ "\.(html|gif|jpg|jpeg|png|js|css)$") {
        unset req.http.cookie;
        return(lookup);
    }
    return(pass);
}
...

If the request is coming in from CloudFlare or a load balancer, here is how to set the real IP of the client:

sub vcl_recv {
...
     if (req.restarts == 0) {
        if (req.http.x-forwarded-for) {
            set req.http.X-Forwarded-For =
                req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
     }
...

Here is an example of how to exclude things like phpmyadmin, apc.php, and server-status from being cached:

sub vcl_recv {
...
    if (req.url ~ "(?i)/(phpmyadmin|apc\.php|server-status)") {
        return(pass);
    }
...

Here is how you can exclude a specific URL from being cached:

sub vcl_recv {
...
    if (req.url ~ "^/example") {
        return (pass);
    }
...

Perhaps you have 30 domains on the server and need one of them excluded from the cache. Or maybe you’re actively working on the site. Here is how you can prevent the domain from being served through Varnish:

sub vcl_recv {
...
    if (req.http.host ~ "^(www\.)?domain\.com") {
        return (pass);
    }
...

If you find a script running via your browser and suspect it is timing out due to Varnish, you can adjust the timeout on that specific script:

# In Varnish 3, bereq is only available in vcl_miss and vcl_pass,
# so this override goes there instead of vcl_recv.
sub vcl_pass {
...
    if (req.url ~ "^/bigscript.php") {
        set bereq.first_byte_timeout = 10m;
    }
...

Here is an example of how to never cache PUT, POST, or DELETE requests for a domain:

sub vcl_recv {
...
    if (req.http.host == "subdomain.domain.com") {
        # Varnish 3 uses req.request; req.method arrived in Varnish 4.
        if (req.request == "PUT" || req.request == "POST" || req.request == "DELETE") {
            return(pass);
        }
    }
...
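One more vcl_recv snippet worth considering, adapted from the Varnish documentation: normalizing the Accept-Encoding header, so the cache stores one object per page rather than one per distinct header value sent by different browsers:

```
sub vcl_recv {
...
    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # Unknown or identity encoding; cache one plain copy.
            unset req.http.Accept-Encoding;
        }
    }
...
```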

Varnish Troubleshooting

One of the most common errors I see on sites utilizing Varnish is an error message like:

Error 503 Service Unavailable
Guru Meditation:
XID: 1234567

Typically Varnish itself is not the problem; instead, something else such as Apache or PHP-FPM (aka the backend) is not available. If you can replicate the error in your browser, run the following command to see if you can catch the issue in the logs as it happens:

[root@server ~]# varnishlog -d -c -m TxStatus:503

This will return a bunch of output. You are most interested in the lines surrounding ‘FetchError’ as shown below:

   11 SessionOpen  c 192.168.1.56 60015 :80
   11 ReqStart     c 192.168.1.56 60015 311889525
   11 RxRequest    c GET
   11 RxURL        c /
   11 RxProtocol   c HTTP/1.1
   11 RxHeader     c Host: example.com
   11 RxHeader     c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
   11 RxHeader     c Connection: keep-alive
   11 RxHeader     c Cookie: wordpress_test_cookie=WP+Cookie+check; wp-settings-time-1=1455921695
   11 RxHeader     c User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/601.4.4 (KHTML, like Gecko) Version/9.0.3 Safari/601.4.4
   11 RxHeader     c Accept-Language: en-us
   11 RxHeader     c DNT: 1
   11 RxHeader     c Accept-Encoding: gzip, deflate
   11 VCL_call     c recv pass
   11 VCL_call     c hash
   11 Hash         c /
   11 Hash         c example.com
   11 VCL_return   c hash
   11 VCL_call     c pass pass
   11 FetchError   c no backend connection
   11 VCL_call     c error deliver
   11 VCL_call     c deliver deliver
   11 TxProtocol   c HTTP/1.1
   11 TxStatus     c 503
   11 TxResponse   c Service Unavailable
   11 TxHeader     c Server: Varnish
   11 TxHeader     c Content-Type: text/html; charset=utf-8
   11 TxHeader     c Retry-After: 5
   11 TxHeader     c Content-Length: 418
   11 TxHeader     c Accept-Ranges: bytes
   11 TxHeader     c Date: Fri, 19 Feb 2016 23:04:03 GMT
   11 TxHeader     c X-Varnish: 311889525
   11 TxHeader     c Age: 0
   11 TxHeader     c Via: 1.1 varnish
   11 TxHeader     c Connection: close
   11 Length       c 418
   11 ReqEnd       c 311889525 1455923043.127803802 1455923043.128304243 0.000658751 0.000423908 0.000076532

In the example above, the ‘FetchError’ line shows the request returned a 503 because Apache was not running, which is why it says “no backend connection”.

MySQL Slow Query Log

Ever wonder why MySQL is spiking the CPU on your database server during the worst possible times? Then when you log into MySQL and run a ‘show processlist;’, you can never seem to catch exactly what query was running? The MySQL slow query log can help!

In short, the slow query log is a recording of all queries that took longer than a specified period of time to run. This information is valuable, as it helps identify which queries are intensive, which may lead you to create a table index, or reveal a plugin for your site that is performing queries in a very inefficient manner.

On most systems, the slow query log is disabled by default. However, enabling it is very simple and can be done without restarting MySQL.

Getting started, first create the log file, and set the proper permissions:

[root@server ~]# touch /var/lib/mysql/slow-log
[root@server ~]# chown mysql:mysql /var/lib/mysql/slow-log
[root@server ~]# chmod 640 /var/lib/mysql/slow-log

Now enable the slow query log without restarting MySQL. The commands below instruct MySQL to log any query that takes longer than 2 seconds:

[root@server ~]# mysql
mysql> SET GLOBAL slow_query_log=1;
mysql> SET GLOBAL slow_query_log_file="/var/lib/mysql/slow-log";
mysql> SET GLOBAL long_query_time=2;

Now, update the system’s my.cnf so the changes persist if MySQL is restarted in the future:

[root@server ~]# vim /etc/my.cnf
[mysqld]
...
#log-output = FILE
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/slow-log
long-query-time = 2
#log-queries-not-using-indexes = 0
...

Finally, don’t forget to set up log rotation for the slow query log, as it can grow very large.

[root@server ~]# vim /etc/logrotate.d/mysqllogs
/var/lib/mysql/slow-log {
        missingok
        rotate 2
        size 125M
        create 640 mysql mysql
}

Now with the slow query log enabled, simply check the log file to see what queries took longer than 2 seconds to run. I’ll post an example below:

[root@server ~]# cat /var/lib/mysql/slow-log
# Time: 160210 22:45:25
# User@Host: wpadmin[wordpressdb] @ localhost []
# Query_time: 14.609104  Lock_time: 0.000054 Rows_sent: 4  Rows_examined: 83532
SET timestamp=1301957125;
SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';

If the above query is shown repeatedly within the log file, perhaps running several times a minute, this could be a possible culprit for our CPU issues with MySQL. The key fields to pay attention to are “Rows_examined:” and the actual query itself.

If the “Rows_examined:” field reports over a few hundred rows, you may need to create a table index. The query itself is also important, because if a table index is not possible for one reason or another, your developer can review the query and possibly optimize or rewrite it so it returns fewer rows to MySQL, making the query more CPU friendly.
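To get a quick ranking out of a large slow log, a short awk sketch can list the captured Query_time values, slowest first (for a fuller report, mysqldumpslow, which ships with the MySQL server, groups similar queries together). The sample lines below mirror the log format shown above:

```shell
# List Query_time values from a slow query log, slowest first.
slowest() {
  awk '$1 == "#" && $2 == "Query_time:" { print $3 }' | sort -rn
}

printf '%s\n' \
  '# Query_time: 14.609104  Lock_time: 0.000054 Rows_sent: 4  Rows_examined: 83532' \
  '# Query_time: 2.104370  Lock_time: 0.000041 Rows_sent: 1  Rows_examined: 512' | slowest
# prints 14.609104, then 2.104370
```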

Automatic package updates on CentOS 7

Keeping on top of your system’s security updates is one important step in a defense-in-depth strategy for your solution. Security best practices indicate security updates should be applied within 30 days of being released.

While I prefer to test any and all updates in a test environment before deploying to production, just in case the updates break existing functionality, some system administrators prefer to have updates applied automatically nightly or weekly. The school of thought behind this is that automatic updates are applied quickly, more often, and generally more consistently than they would be if done manually.

Also, it’s important to ensure that you are excluding any packages that may be critical to your system, such as the kernel or perhaps MySQL. The package exclusion list is provided in the example below.

Please keep in mind, in rare cases, system updates have been known to cause problems. So you should be cautious with any type of automatic updates, especially on production systems, and fully understand the risks involved before proceeding.

To enable automatic updates, first update yum, then install the yum-cron package:

[root@server ~]# yum -y update yum
[root@server ~]# yum -y install yum-cron
[root@server ~]# systemctl enable yum-cron
[root@server ~]# systemctl start yum-cron

The configuration is pretty simple on CentOS 7, as shown below:

[root@server ~]# vim /etc/yum/yum-cron.conf
...
#  What kind of update to use:
# default                            = yum upgrade
# security                           = yum --security upgrade
# security-severity:Critical         = yum --sec-severity=Critical upgrade
# minimal                            = yum --bugfix update-minimal
# minimal-security                   = yum --security update-minimal
# minimal-security-severity:Critical =  --sec-severity=Critical update-minimal
update_cmd = default

# Whether a message should be emitted when updates are available,
# were downloaded, or applied.
update_messages = yes

# Whether updates should be downloaded when they are available.
download_updates = yes

# Whether updates should be applied when they are available.  Note
# that download_updates must also be yes for the update to be applied.
apply_updates = yes

# The address to send email messages from.
email_from = root@localhost

# List of addresses to send messages to.
email_to = root
...

If you would like to exclude specific packages from receiving automatic updates, you can add exclusions. In this example, we are excluding updates to mysql and the kernel. Please note this must be done in the ‘base’ section of the configuration as shown below:

[root@server ~]# vim /etc/yum/yum-cron.conf
...
[base]
# This section overrides yum.conf
exclude=mysql* kernel*
...

After you make your changes to the configuration file, restart yum-cron by:

[root@server ~]# systemctl restart yum-cron

Once that is complete, no further configuration should be needed as yum-cron will run when the daily cron jobs are set to run.

If you find that you need to roll back a package update, you can do that by:

[root@server ~]# yum history
Loaded plugins: fastestmirror
ID     | Login user               | Date and time    | Action(s)      | Altered
-------------------------------------------------------------------------------
     9 | root               | 2016-02-09 17:47 | Install        |    1   
     8 | root               | 2016-02-09 17:47 | Update         |    1   
     7 | root               | 2015-08-18 03:19 | I, O, U        |  189 EE
     6 | root               | 2015-03-01 16:44 | Install        |    1   
     5 | root               | 2015-03-01 16:31 | Erase          |    1   
     4 | root               | 2015-03-01 16:30 | Install        |   49   
     3 | root               | 2015-03-01 16:28 | Install        |    1   
     2 | root               | 2015-03-01 16:25 | I, U           |   80   
     1 | System            | 2015-03-01 15:52 | Install        |  298  
[root@server ~]# yum history undo ID

Automatic package updates on CentOS 6

Keeping on top of your system’s security updates is one important step in a defense-in-depth strategy for your solution. Security best practices indicate security updates should be applied within 30 days of being released.

While I prefer to test any and all updates in a test environment before deploying to production, just in case the updates break existing functionality, some system administrators prefer to have updates applied automatically nightly or weekly. The school of thought behind this is that automatic updates are applied quickly, more often, and generally more consistently than they would be if done manually.

Also, it’s important to ensure that you are excluding any packages that may be critical to your system, such as the kernel or perhaps MySQL. Configuring the package exclusion list is shown below.

Please keep in mind, in rare cases, system updates have been known to cause problems. So you should be cautious with any type of automatic updates, especially on production systems, and fully understand the risks involved before proceeding.

To enable automatic updates, first install the yum-cron package:

[root@server ~]# yum -y install yum-cron
[root@server ~]# chkconfig yum-cron on
[root@server ~]# service yum-cron start

The main configuration file resides in /etc/sysconfig/yum-cron. Some common options for configuring how yum-cron works are documented below:

# Default - Check for updates, download and install:
[root@server ~]# vim /etc/sysconfig/yum-cron
...
CHECK_ONLY=no
DOWNLOAD_ONLY=no
...

# Download only and send an email report:
[root@server ~]# vim /etc/sysconfig/yum-cron
...
MAILTO=root
CHECK_ONLY=no
DOWNLOAD_ONLY=yes
...
 
# Send email report only, but do not download or install:
[root@server ~]# vim /etc/sysconfig/yum-cron
...
MAILTO=root
CHECK_ONLY=yes
DOWNLOAD_ONLY=no
...

You can also configure this to apply updates on specific days. This example below will check for updates, download and apply them on Sundays:

[root@server ~]# vim /etc/sysconfig/yum-cron
...
CHECK_ONLY=no
DOWNLOAD_ONLY=no
DAYS_OF_WEEK="0"
...

If you would like to exclude specific packages from receiving automatic updates, you can add exclusions. In this example, we are excluding updates to the mysql and the kernel. Please be sure to pay close attention to the single quotes and double quotes!

[root@server ~]# vim /etc/sysconfig/yum-cron
...
YUM_PARAMETER="--exclude='mysql*' --exclude='kernel*'"
...

Once that is complete, no further configuration should be needed as yum-cron will run when the daily cron jobs are set to run.

If you find that you need to roll back a package update, you can do that by:

[root@server ~]# yum history
Loaded plugins: fastestmirror
ID     | Login user               | Date and time    | Action(s)      | Altered
-------------------------------------------------------------------------------
     9 | root               | 2016-02-09 17:47 | Install        |    1   
     8 | root               | 2016-02-09 17:47 | Update         |    1   
     7 | root               | 2015-08-18 03:19 | I, O, U        |  189 EE
     6 | root               | 2015-03-01 16:44 | Install        |    1   
     5 | root               | 2015-03-01 16:31 | Erase          |    1   
     4 | root               | 2015-03-01 16:30 | Install        |   49   
     3 | root               | 2015-03-01 16:28 | Install        |    1   
     2 | root               | 2015-03-01 16:25 | I, U           |   80   
     1 | System            | 2015-03-01 15:52 | Install        |  298  
[root@server ~]# yum history undo ID