Load testing with Siege

Taken directly from the author's site at https://www.joedog.org/siege-home: Siege is an http load testing and benchmarking utility. It was designed to let web developers measure their code under duress, to see how it will stand up to load on the internet. Siege supports basic authentication, cookies, HTTP, HTTPS and FTP protocols. It lets its user hit a server with a configurable number of simulated clients. Those clients place the server “under siege.”

This tool becomes extremely useful when you need a feel for how a solution will hold up under normal or high traffic. These simulated traffic events may help expose inefficient database queries, CPU-intensive code, and opportunities for caching, or simply demonstrate the need for additional web servers to increase overall site capacity.

Unlike many other load testers out there, Siege allows you to populate a file with a list of your URLs to help generate a more realistic simulation. While Siege supports a number of different tests, I generally keep it simple and basic. I’ll outline how I use it below.

First, install siege:

# CentOS 6 / RedHat 6
[root@loadtest01 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@loadtest01 ~]# yum install siege

# CentOS 7 / RedHat 7
[root@loadtest01 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
[root@loadtest01 ~]# yum install siege

# Ubuntu 12.04 / Ubuntu 14.04
[root@loadtest01 ~]# apt-get update
[root@loadtest01 ~]# apt-get install siege

Now click around your site, recording 10 or more URLs. Make sure the list covers the sections of your site people are most likely to visit. For example, if you run an e-commerce site, include the base domain several times, since that is usually the page accessed most often, but also include several landing pages, a couple of products, and maybe the shopping cart. For example:

[root@loadtest01 ~]# vim /root/site.txt
http://example-store.com
http://example-store.com
http://example-store.com/cameras
http://example-store.com/electronics
http://example-store.com/home-decor/decorative-accents
http://example-store.com/home-decor/decorative-accents/herald-glass-vase
http://example-store.com/apparel/shirts
http://example-store.com/home-decor/books-music
http://example-store.com/home-decor/books-music/a-tale-of-two-cities.html
http://example-store.com/sale.html
http://example-store.com
http://example-store.com

You should now be able to run your load test:

[root@loadtest01 ~]# siege -c50 -d3 -t30M -i -f /root/site.txt

This load test sends 50 concurrent connections, with a random delay between 1 and 3 seconds before each request, for 30 minutes, against the URLs listed in /root/site.txt.
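As a rough sanity check on what that run will generate (assuming the random 1-3 second delay averages out to about 2 seconds per client), you can estimate the total number of requests:

```shell
# requests ≈ clients * duration / average delay per request
clients=50
duration=1800    # 30 minutes, in seconds
avg_delay=2      # midpoint of the 1..3 second random delay
echo $(( clients * duration / avg_delay ))
```

That works out to roughly 45,000 requests over the half hour, so be sure the target can be safely hammered before starting.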

A couple quick notes about the flags:

-c, --concurrent=NUM      CONCURRENT users, default is 10
-d, --delay=NUM           Time DELAY, random delay before each request
                            between 1 and NUM. (NOT COUNTED IN STATS)
-t, --time=NUMm           TIMED testing where "m" is modifier S, M, or H
                            ex: --time=1H, one hour test.
-i, --internet            INTERNET user simulation, hits URLs randomly.
-f, --file=FILE           FILE, select a specific URLS FILE.

While the simulated traffic test is running, keep an eye on the following:
– Observe the CPU, memory, and IO usage of the servers.
– Check the database to see if any intensive queries are running consistently, perhaps indicating the need for Redis or Memcached.
– Check the MySQL slow query log for queries that may need a table index, or otherwise need to be optimized.
– Check that any caching software you have installed is returning a good hit rate.
– Ensure the site remains online throughout the test.

Sometimes you want to load test a website that has a username and password prompt provided by an htaccess file. To allow siege to authenticate, do the following:

[root@loadtest01 ~]# auth=`echo -n 'username:password' | openssl base64`
[root@loadtest01 ~]# siege -c50 -d3 -t30M --header="Authorization:Basic $auth" -i -f /root/site.txt
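To sanity-check the header before kicking off a long test, you can decode the token back out (a quick sketch; 'username:password' is a placeholder for your real htaccess credentials):

```shell
# Build the Basic auth token the same way as above, then decode it
# to confirm it round-trips to the original credentials.
auth=$(echo -n 'username:password' | openssl base64)
echo "Authorization: Basic $auth"
echo "$auth" | openssl base64 -d
```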

How to resize ext3 or ext4 filesystems

You have a 50G Amazon Elastic Block Storage device, or maybe a 50G Rackspace SSD Cloud Block Storage device that is running low on space. So you clone it and create a 100G drive. But when you mount it on your system, it still only shows 50G. What the #$%&!

This is because the partition and filesystem need to be expanded before they know about the extra space!

Assuming that you have the larger drive already attached to your system, first verify the drive size vs the partition size:

[root@web01 ~]# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   20G  0 disk 
└─xvda1 202:1    0   20G  0 part /
xvdb    202:16   0  100G  0 disk 
└─xvdb1 202:17   0   50G  0 part

Checking the output above, you will see that xvdb has 100G available, but the partition, xvdb1, has only 50G. I’ll outline how to expand the filesystem to use all the new space on this ext4 volume.

Unmount the disk if it’s currently mounted:

[root@web01 ~]# umount /dev/xvdb1

Confirm the drive does not have any filesystem errors:

[root@web01 ~]# e2fsck /dev/xvdb1

Now comes the nail-biting part. We need to remove the partition, then recreate it so it sees the entire disk. This guide assumes there is only one partition on the drive. This does not remove the data; however, never trust notes that you didn’t verify yourself. Make sure you have backups before proceeding!

[root@web01 ~]# fdisk /dev/xvdb
d        # delete the existing partition
n        # create a new partition
p        # make it a primary partition
1        # partition number 1
enter    # accept the default first sector
enter    # accept the default last sector
t        # change the partition type
83       # 83 = Linux
w        # write the changes and exit
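The same session can be scripted by piping the keystrokes into fdisk. This is only a sketch of the interactive steps above (same assumptions: a single partition spanning the disk); test it on a throwaway volume before relying on it:

```shell
# d=delete, n/p/1=new primary partition 1, two empty lines accept the
# default start/end sectors, t/83=set the type to Linux, w=write and quit.
printf 'd\nn\np\n1\n\n\nt\n83\nw\n' | fdisk /dev/xvdb
```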

Now, once the partition is recreated, you need to run another filesystem check:

[root@web01 ~]# e2fsck -f /dev/xvdb1
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/xvdb1: 13/3276800 files (0.0% non-contiguous), 251700/13107024 blocks

Then resize the filesystem:

[root@web01 ~]# resize2fs /dev/xvdb1
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/xvdb1 to 26214055 (4k) blocks.
The filesystem on /dev/xvdb1 is now 26214055 blocks long.
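Those numbers from resize2fs line up with the disk: the filesystem uses 4k blocks, so 26214055 blocks is just under 100G (integer math rounds down):

```shell
# 26214055 blocks * 4096 bytes per block, converted to GiB
blocks=26214055
echo $(( blocks * 4096 / 1024 / 1024 / 1024 ))
```

This prints 99, i.e. the full 100G device minus a little rounding, so the resize covered the whole disk.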

Finally, mount the volume and confirm it shows the correct space:

[root@web01 ~]# mount -a
[root@web01 ~]# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   20G  0 disk 
└─xvda1 202:1    0   20G  0 part /
xvdb    202:16   0  100G  0 disk 
└─xvdb1 202:17   0  100G  0 part /mnt

[root@web01 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       20G  1.2G   18G   7% /
tmpfs           496M     0  496M   0% /dev/shm
/dev/xvdb1       99G   60M   94G   1% /mnt

Logrotate examples

Logrotate is a useful application for automatically rotating your log files. If you choose to store certain logs in directories that logrotate doesn’t know about, you need to create a definition for them.

I have posted a few articles about this for various scenarios, but I wanted to include one that just contains examples for reference.

Typically, entries for logrotate should be stored inside of: /etc/logrotate.d/

To rotate out the MySQL slow query log after it reaches 125M in size, keeping 2 rotated logs, use the following:

[root@db01 ~]# vim /etc/logrotate.d/mysqllogs
/var/lib/mysql/slow-log {
        missingok
        rotate 2
        size 125M
        create 640 mysql mysql
}

To rotate a custom application log daily, keep 7 days’ worth of logs, compress them, and ensure ownership stays with apache:

[root@web01 ~]# vim /etc/logrotate.d/applicationname
/var/www/vhosts/example.com/application/logs/your_app.log {
        missingok
        daily
        rotate 7
        compress
        create 644 apache apache
}

If you would like to rotate your Holland backup logs weekly, keeping one month’s worth of logs, compressing them, and keeping ownership with root:

[root@db01 ~]# vim /etc/logrotate.d/holland
/var/log/holland/holland.log {
    rotate 4
    weekly
    compress
    missingok
    create root adm
}

If you would like to rotate two logs using one definition, simply list both paths before the opening brace as shown below:

[root@db01 ~]# vim /etc/logrotate.d/holland
/var/log/holland.log
/var/log/holland/holland.log {
    rotate 4
    weekly
    compress
    missingok
    create root adm
}

Postfix – Flush mail queue

I have seen servers where the Postfix mail queue is jammed up with mail, perhaps from a programming error in the application, or maybe spam if the site was hacked. During these times you may just want to purge the queue, since you don’t want the messages going out. Below are some very simple methods for doing this:

How to remove all mail in the postfix queue:

[root@web01 ~]# postsuper -d ALL

How to remove all email in the deferred queue:

[root@web01 ~]# postsuper -d ALL deferred

Remove all email in the queue for a specific domain:

[root@web01 ~]# postqueue -p | tail -n +2 | awk 'BEGIN { RS = "" } /@example\.com/ { print $1 }' | tr -d '*!' | postsuper -d -

Remove all email in the queue from a specific email address:

[root@web01 ~]# postqueue -p | tail -n +2 | awk 'BEGIN { RS = "" } /user@example\.com/ { print $1 }' | tr -d '*!' | postsuper -d -
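To see what the awk stage in those pipelines actually extracts, you can feed it a captured snippet of `postqueue -p` output instead of the live queue (the queue entries below are made-up samples in mailq's format):

```shell
# tail strips the header line, awk reads blank-line-separated records and
# prints the queue ID of any record mentioning the address, and tr strips
# the '*' (active) / '!' (hold) status markers from the ID.
cat <<'EOF' | tail -n +2 | awk 'BEGIN { RS = "" } /user@example\.com/ { print $1 }' | tr -d '*!'
-Queue ID- --Size-- ----Arrival Time---- -Sender/Recipient-------
A1B2C3D4E5*     4122 Tue Feb  9 17:47:12  user@example.com
                                          recipient@example.org

F6G7H8I9J0      1833 Tue Feb  9 17:48:03  other@example.net
                                          someone@example.org
EOF
```

The surviving token is the bare queue ID, which is exactly what `postsuper -d -` expects on stdin.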

Force fsck on next reboot

Ever needed to run an fsck on the root filesystem? I was happy to find that an old method of forcing a file system check on reboot still exists on most modern distributions. I tested this successfully on the following operating systems:

CentOS 6
Ubuntu 12.04
Ubuntu 14.04

Before proceeding, it is a really good idea to have a monitor and keyboard hooked up to the server, just in case the fsck requires manual intervention due to bad blocks or similar, which would prevent a normal boot!

First, check to see when the server last had a file system check run. This command should be run against the device holding /, such as /dev/sda1, or if using LVM, something like /dev/mapper/ubuntu--vg-root.

[root@web01 ~]# tune2fs -l /dev/mapper/ubuntu--vg-root |grep -iE "last|state"
Last mounted on:          /
Filesystem state:         clean
Last mount time:          Thu Sep 10 23:48:18 2015
Last write time:          Sun Mar  1 16:02:06 2015
Last checked:             Sun Mar  1 16:02:04 2015

As shown in the output above, an fsck has not been run in quite some time! To force one at the next boot, simply type:

[root@web01 ~]# touch /forcefsck

Now reboot your server:

[root@web01 ~]# shutdown -r now

Once the server comes back online, confirm the fsck ran by:

[root@web01 ~]# tune2fs -l /dev/mapper/ubuntu--vg-root |grep -iE "last|state"
Last mounted on:          /
Filesystem state:         clean
Last mount time:          Thu Feb 18 18:40:34 2016
Last write time:          Thu Feb 18 18:40:32 2016

Finally, confirm the system removed the /forcefsck file:

[root@web01 ~]# ls -al /forcefsck
ls: cannot access /forcefsck: No such file or directory

Varnish 3 – Installation and configuration

Varnish is an HTTP reverse proxy that can be installed in front of the web server to provide caching. With the VCLs properly configured for your site, Varnish can offload the backend servers many times over.

In this guide, it is assumed you already have a running LAMP stack, and Apache’s vhost configurations are stored in /etc/httpd/vhost.d/. At the end of this guide if all goes well, you will be running Varnish 3, with Varnish listening for inbound connections on port 80, and passing any backend connections that cannot be served via cache to Apache on port 8080.

CentOS 6 – Installation and initial configuration

Install the varnish-release package repository, then install:

[root@web01 ~]# rpm --nosignature -ivh https://repo.varnish-cache.org/redhat/varnish-3.0.el6.rpm
[root@web01 ~]# yum -y install varnish

Now update your Apache ports and vhosts to 8080 since Varnish will be listening on port 80:

[root@web01 ~]# sed -i "s/Listen 80\$/Listen 8080/g" /etc/httpd/conf/httpd.conf
[root@web01 ~]# sed -i "s/NameVirtualHost \*:80\$/NameVirtualHost \*:8080/g" /etc/httpd/conf/httpd.conf
[root@web01 ~]# sed -i "s/:80>/:8080>/g" /etc/httpd/vhost.d/*
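If you want to verify the sed expressions do what you expect before touching live configs, you can run them against a scratch copy first (a quick sketch; the temp file stands in for the real config):

```shell
# Dry-run the port rewrite against a scratch copy of the config lines
tmp=$(mktemp)
printf 'Listen 80\nNameVirtualHost *:80\n' > "$tmp"
sed -i "s/Listen 80\$/Listen 8080/g" "$tmp"
sed -i "s/NameVirtualHost \*:80\$/NameVirtualHost \*:8080/g" "$tmp"
cat "$tmp"
```

Both lines should come back with 8080; only then run the same expressions against the real files.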

Configure Varnish to pass connections back to Apache on port 8080:

[root@web01 ~]# sed -i 's/port = "80"/port = "8080"/g' /etc/varnish/default.vcl

Then update Varnish so it listens on port 80:

[root@web01 ~]# sed -i 's/VARNISH_LISTEN_PORT=6081$/VARNISH_LISTEN_PORT=80/g' /etc/sysconfig/varnish

Finally, restart Apache and Varnish:

[root@web01 ~]# service httpd restart
[root@web01 ~]# service varnish start
[root@web01 ~]# chkconfig varnish on

Ubuntu 12.04 / 14.04 – Installation and initial configuration

First, setup the Varnish repos:

# Ubuntu 12.04
[root@web01 ~]# curl -sL http://repo.varnish-cache.org/debian/GPG-key.txt | apt-key add -
[root@web01 ~]# echo "deb http://repo.varnish-cache.org/ubuntu/ precise varnish-3.0" > /etc/apt/sources.list.d/varnish.list

# Ubuntu 14.04
[root@web01 ~]# curl -sL http://repo.varnish-cache.org/debian/GPG-key.txt | apt-key add -
[root@web01 ~]# echo "deb http://repo.varnish-cache.org/ubuntu/ trusty varnish-3.0" > /etc/apt/sources.list.d/varnish.list

Now install Varnish:

[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install varnish

Next update your Apache ports and vhosts to 8080 since Varnish will be listening on port 80:

[root@web01 ~]# sed -i "s/Listen 80\$/Listen 8080/g" /etc/apache2/ports.conf
[root@web01 ~]# sed -i "s/NameVirtualHost \*:80\$/NameVirtualHost \*:8080/g" /etc/apache2/ports.conf
[root@web01 ~]# sed -i "s/:80>/:8080>/g" /etc/apache2/sites-available/*

Configure Varnish to pass connections back to Apache on port 8080:

[root@web01 ~]# sed -i 's/port = "80"/port = "8080"/g' /etc/varnish/default.vcl

Then update Varnish so it listens on port 80:

[root@web01 ~]# sed -i 's/^DAEMON_OPTS="-a :6081/DAEMON_OPTS="-a :80/g' /etc/default/varnish
[root@web01 ~]# sed -i 's/START=no/START=yes/' /etc/default/varnish
[root@web01 ~]# service apache2 restart
[root@web01 ~]# service varnish restart

Varnish VCL configuration examples

All of the tuning takes place within the VCLs. For the purposes of this guide, we will just use the default Varnish configuration file, /etc/varnish/default.vcl, for our examples.

How to enable basic caching of static resources:

sub vcl_recv {
...
    if (req.url ~ "\.(html|gif|jpg|jpeg|png|js|css)$") {
        unset req.http.cookie;
        return(lookup);
    }
    return(pass);
}
...

If the request is coming in from CloudFlare or a load balancer, here is how to set the real IP of the client:

sub vcl_recv {
...
     if (req.restarts == 0) {
        if (req.http.x-forwarded-for) {
            set req.http.X-Forwarded-For =
                req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
     }
...

Here is an example of how to exclude things like phpmyadmin, apc.php, and server-status from being cached:

sub vcl_recv {
...
if (req.url ~ "(?i)/(phpmyadmin|apc\.php|server-status)") {
      return(pass);
    }
...

Here is how you can exclude a specific URL from being cached:

sub vcl_recv {
...
     if (req.url ~ "^/example") {
     return (pass);
     }
...

Perhaps you have 30 domains on the server and need one of them excluded from the cache. Or maybe you’re actively working on the site. Here is how you can prevent a domain from being served through Varnish:

sub vcl_recv {
...
    if (req.http.host ~ "^(www\.)?domain\.com") {
    return (pass);
    }
...

If a script you run via your browser appears to be timing out behind Varnish, you can raise the timeout for that specific script:

sub vcl_recv {
...
if (req.url ~ "^/bigscript\.php") {
    set bereq.first_byte_timeout = 10m;
}
...

Here is an example of how to never cache PUT, POST, and DELETE requests for a domain:

sub vcl_recv {
...
if ( req.http.host == "subdomain.domain.com" ) {
    if (req.request == "PUT" || req.request == "POST" || req.request == "DELETE")
    {
        return(pass);
    }
}
...

Varnish Troubleshooting

One of the most common errors I see on sites utilizing Varnish is this error message:

Error 503 Service Unavailable
Guru Meditation:
XID: 1234567

Typically Varnish itself is not the problem; instead, it’s something else, such as Apache or PHP-FPM (aka the backend), not being available. If you can replicate the error in your browser, run the following command to see if you can catch the issue in the logs as it happens:

[root@web01 ~]# varnishlog -d -c -m TxStatus:503

This will return a bunch of output. You are most interested in the lines surrounding ‘FetchError’, as shown below:

   11 SessionOpen  c 192.168.1.56 60015 :80
   11 ReqStart     c 192.168.1.56 60015 311889525
   11 RxRequest    c GET
   11 RxURL        c /
   11 RxProtocol   c HTTP/1.1
   11 RxHeader     c Host: example.com
   11 RxHeader     c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
   11 RxHeader     c Connection: keep-alive
   11 RxHeader     c Cookie: wordpress_test_cookie=WP+Cookie+check; wp-settings-time-1=1455921695
   11 RxHeader     c User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/601.4.4 (KHTML, like Gecko) Version/9.0.3 Safari/601.4.4
   11 RxHeader     c Accept-Language: en-us
   11 RxHeader     c DNT: 1
   11 RxHeader     c Accept-Encoding: gzip, deflate
   11 VCL_call     c recv pass
   11 VCL_call     c hash
   11 Hash         c /
   11 Hash         c example.com
   11 VCL_return   c hash
   11 VCL_call     c pass pass
   11 FetchError   c no backend connection
   11 VCL_call     c error deliver
   11 VCL_call     c deliver deliver
   11 TxProtocol   c HTTP/1.1
   11 TxStatus     c 503
   11 TxResponse   c Service Unavailable
   11 TxHeader     c Server: Varnish
   11 TxHeader     c Content-Type: text/html; charset=utf-8
   11 TxHeader     c Retry-After: 5
   11 TxHeader     c Content-Length: 418
   11 TxHeader     c Accept-Ranges: bytes
   11 TxHeader     c Date: Fri, 19 Feb 2016 23:04:03 GMT
   11 TxHeader     c X-Varnish: 311889525
   11 TxHeader     c Age: 0
   11 TxHeader     c Via: 1.1 varnish
   11 TxHeader     c Connection: close
   11 Length       c 418
   11 ReqEnd       c 311889525 1455923043.127803802 1455923043.128304243 0.000658751 0.000423908 0.000076532

In the example above, the ‘FetchError’ line explains the 503: Apache was not running, which is why it says ‘no backend connection’.

MySQL Slow Query Log

Ever wonder why MySQL is spiking the CPU on your database server during the worst possible times? Then when you log into MySQL and run a ‘show processlist;’, you can never seem to catch exactly what query was running? The MySQL slow query log can help!

In short, the slow query log is a record of all queries that took longer than a specified amount of time to run. This information is valuable as it helps identify very intensive queries, which may lead you to create a table index, or may reveal a plugin on your site performing queries in a very inefficient manner.

On most systems, the slow query log is disabled by default. However, enabling it is very simple and can be done without restarting MySQL.

Getting started, first create the log file, and set the proper permissions:

[root@db01 ~]# touch /var/lib/mysql/slow-log
[root@db01 ~]# chown mysql:mysql /var/lib/mysql/slow-log
[root@db01 ~]# chmod 640 /var/lib/mysql/slow-log

Now enable the slow query log without restarting MySQL. The commands below instruct MySQL to log any query that takes longer than 2 seconds:

[root@db01 ~]# mysql
mysql> SET GLOBAL slow_query_log=1;
mysql> SET GLOBAL slow_query_log_file="/var/lib/mysql/slow-log";
mysql> SET GLOBAL long_query_time=2;

Now, update the system’s my.cnf so the changes persist if MySQL is restarted in the future:

[root@db01 ~]# vim /etc/my.cnf
[mysqld]
...
#log-output = FILE
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/slow-log
long-query-time = 2
#log-queries-not-using-indexes = 0
...

Finally, don’t forget to set up log rotation for the slow query log, as it can grow very large:

[root@db01 ~]# vim /etc/logrotate.d/mysqllogs
/var/lib/mysql/slow-log {
        missingok
        rotate 2
        size 125M
        create 640 mysql mysql
}

Now, with the slow query log enabled, simply check the log file to see which queries took longer than 2 seconds to run. Here’s an example:

[root@db01 ~]# cat /var/lib/mysql/slow-log
# Time: 160210 22:45:25
# User@Host: wpadmin[wordpressdb] @ localhost []
# Query_time: 14.609104  Lock_time: 0.000054 Rows_sent: 4  Rows_examined: 83532
SET timestamp=1301957125;
SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';

If the above query shows up repeatedly in the log file, perhaps running several times a minute, it could be the culprit for our CPU issues with MySQL. The key fields to pay attention to are “Rows_examined:” and the actual query itself.

If the “Rows_examined:” field reports more than a few hundred rows, you may need to create a table index. The query itself is also important because, if a table index is not possible for one reason or another, your developer can review the query and possibly optimize or rewrite it so it examines fewer rows, making the query more CPU friendly.
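A quick way to rank entries without reading the whole file is to pull out the Query_time values. Here's a sketch run against the sample entry shown above:

```shell
# Print the Query_time of each slow-log entry, slowest first.
cat <<'EOF' | awk '/^# Query_time:/ { print $3 }' | sort -rn
# Time: 160210 22:45:25
# User@Host: wpadmin[wordpressdb] @ localhost []
# Query_time: 14.609104  Lock_time: 0.000054 Rows_sent: 4  Rows_examined: 83532
SET timestamp=1301957125;
SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';
EOF
```

Point the same awk at /var/lib/mysql/slow-log on a real system. MySQL also ships mysqldumpslow, which summarizes the log properly.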

Automatic package updates on CentOS 7

Keeping on top of your system’s security updates is an important step in a defense-in-depth strategy for your solution. Security best practices indicate security updates should be applied within 30 days of being released.

While I prefer to test any and all updates in a test environment before deploying to production, just in case the updates break existing functionality, some system administrators prefer to have updates applied automatically nightly or weekly. The school of thought is that automatic updates are applied quickly, more often, and generally more consistently than they would be if done manually.

Also, it’s important to ensure that you are excluding any packages that may be critical to your system, such as the kernel or perhaps MySQL. The package exclusion list is provided in the example below.

Please keep in mind that, in rare cases, system updates have been known to cause problems. Be cautious with any type of automatic updates, especially on production systems, and fully understand the risks involved before proceeding.

To enable automatic updates, first update yum, then install the yum-cron package:

[root@web01 ~]# yum -y update yum
[root@web01 ~]# yum -y install yum-cron
[root@web01 ~]# systemctl enable yum-cron
[root@web01 ~]# systemctl start yum-cron

The configuration is pretty simple on CentOS 7, as shown below:

[root@web01 ~]# vim /etc/yum/yum-cron.conf
...
#  What kind of update to use:
# default                            = yum upgrade
# security                           = yum --security upgrade
# security-severity:Critical         = yum --sec-severity=Critical upgrade
# minimal                            = yum --bugfix update-minimal
# minimal-security                   = yum --security update-minimal
# minimal-security-severity:Critical =  --sec-severity=Critical update-minimal
update_cmd = default

# Whether a message should be emitted when updates are available,
# were downloaded, or applied.
update_messages = yes

# Whether updates should be downloaded when they are available.
download_updates = yes

# Whether updates should be applied when they are available.  Note
# that download_updates must also be yes for the update to be applied.
apply_updates = yes

# The address to send email messages from.
email_from = [email protected]

# List of addresses to send messages to.
email_to = [email protected]
...

If you would like to exclude specific packages from receiving automatic updates, you can add exclusions. In this example, we are excluding updates to mysql and the kernel. Please note this must be done in the ‘base’ section of the configuration, as shown below:

[root@web01 ~]# vim /etc/yum/yum-cron.conf
...
[base]
# This section overrides yum.conf
exclude=mysql* kernel*
...

After you make your changes to the configuration file, restart yum-cron by:

[root@web01 ~]# systemctl restart yum-cron

Once that is complete, no further configuration should be needed as yum-cron will run when the daily cron jobs are set to run.

If you find that you need to roll back a package update, you can do that with yum history, using the transaction ID from the first column:

[root@web01 ~]# yum history
Loaded plugins: fastestmirror
ID     | Login user               | Date and time    | Action(s)      | Altered
-------------------------------------------------------------------------------
     9 | root               | 2016-02-09 17:47 | Install        |    1   
     8 | root               | 2016-02-09 17:47 | Update         |    1   
     7 | root               | 2015-08-18 03:19 | I, O, U        |  189 EE
     6 | root               | 2015-03-01 16:44 | Install        |    1   
     5 | root               | 2015-03-01 16:31 | Erase          |    1   
     4 | root               | 2015-03-01 16:30 | Install        |   49   
     3 | root               | 2015-03-01 16:28 | Install        |    1   
     2 | root               | 2015-03-01 16:25 | I, U           |   80   
     1 | System            | 2015-03-01 15:52 | Install        |  298  
[root@web01 ~]# yum history undo ID

Automatic package updates on CentOS 6

Keeping on top of your system’s security updates is an important step in a defense-in-depth strategy for your solution. Security best practices indicate security updates should be applied within 30 days of being released.

While I prefer to test any and all updates in a test environment before deploying to production, just in case the updates break existing functionality, some system administrators prefer to have updates applied automatically nightly or weekly. The school of thought is that automatic updates are applied quickly, more often, and generally more consistently than they would be if done manually.

Also, it’s important to ensure that you are excluding any packages that may be critical to your system, such as the kernel or perhaps MySQL. Configuring the package exclusion list is shown below.

Please keep in mind that, in rare cases, system updates have been known to cause problems. Be cautious with any type of automatic updates, especially on production systems, and fully understand the risks involved before proceeding.

To enable automatic updates, first install the yum-cron package:

[root@web01 ~]# yum -y install yum-cron
[root@web01 ~]# chkconfig yum-cron on
[root@web01 ~]# service yum-cron start

The main configuration file resides in /etc/sysconfig/yum-cron. Some common options for configuring how yum-cron works are documented below:

# Default - Check for updates, download and install:
[root@web01 ~]# vim /etc/sysconfig/yum-cron
...
CHECK_ONLY=no
DOWNLOAD_ONLY=no
...

# Download only and send an email report:
[root@web01 ~]# vim /etc/sysconfig/yum-cron
...
[email protected]
CHECK_ONLY=no
DOWNLOAD_ONLY=yes
...
 
# Send email report only, but do not download or install:
[root@web01 ~]# vim /etc/sysconfig/yum-cron
...
[email protected]
CHECK_ONLY=yes
DOWNLOAD_ONLY=no
...

You can also configure this to apply updates only on specific days. The example below will check for updates, then download and apply them on Sundays:

[root@web01 ~]# vim /etc/sysconfig/yum-cron
...
CHECK_ONLY=no
DOWNLOAD_ONLY=no
DAYS_OF_WEEK="0"
...

If you would like to exclude specific packages from receiving automatic updates, you can add exclusions. In this example, we are excluding updates to mysql and the kernel. Please be sure to pay close attention to the single quotes and double quotes!

[root@web01 ~]# vim /etc/sysconfig/yum-cron
...
YUM_PARAMETER="--exclude='mysql*' --exclude='kernel*'"
...
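The quoting matters: the outer double quotes keep the whole value as one assignment, while the inner single quotes stop the shell from expanding the globs before yum sees them. A quick sketch of what the variable ends up holding:

```shell
# The globs survive intact for yum to interpret, rather than being
# expanded against filenames in the current directory.
YUM_PARAMETER="--exclude='mysql*' --exclude='kernel*'"
echo "$YUM_PARAMETER"
```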

Once that is complete, no further configuration should be needed as yum-cron will run when the daily cron jobs are set to run.

If you find that you need to roll back a package update, you can do that with yum history, using the transaction ID from the first column:

[root@web01 ~]# yum history
Loaded plugins: fastestmirror
ID     | Login user               | Date and time    | Action(s)      | Altered
-------------------------------------------------------------------------------
     9 | root               | 2016-02-09 17:47 | Install        |    1   
     8 | root               | 2016-02-09 17:47 | Update         |    1   
     7 | root               | 2015-08-18 03:19 | I, O, U        |  189 EE
     6 | root               | 2015-03-01 16:44 | Install        |    1   
     5 | root               | 2015-03-01 16:31 | Erase          |    1   
     4 | root               | 2015-03-01 16:30 | Install        |   49   
     3 | root               | 2015-03-01 16:28 | Install        |    1   
     2 | root               | 2015-03-01 16:25 | I, U           |   80   
     1 | System            | 2015-03-01 15:52 | Install        |  298  
[root@web01 ~]# yum history undo ID

Automatic package updates on Ubuntu

Keeping on top of your system’s security updates is an important step in a defense-in-depth strategy for your solution. Security best practices indicate security updates should be applied within 30 days of being released.

While I prefer to test any and all updates in a test environment before deploying to production, just in case the updates break existing functionality, some system administrators prefer to have updates applied automatically nightly or weekly. The school of thought is that automatic updates are applied quickly, more often, and generally more consistently than they would be if done manually.

Also, it’s important to ensure that you are excluding any packages that may be critical to your system, such as the kernel or perhaps MySQL. Configuring the package exclusion list is shown below.

Please keep in mind that, in rare cases, system updates have been known to cause problems. Be cautious with any type of automatic updates, especially on production systems, and fully understand the risks involved before proceeding.

To enable automatic updates, first install the unattended-upgrades package:

[root@web01 ~]# apt-get install unattended-upgrades

Now run the interactive dialog to configure /etc/apt/apt.conf.d/20auto-upgrades. Be sure to answer ‘Yes’ to automatically download and install stable updates.

[root@web01 ~]# dpkg-reconfigure --priority=low unattended-upgrades

The main configuration file resides in /etc/apt/apt.conf.d/50unattended-upgrades. You have a choice of which types of updates you want. The default is ‘security’ updates only, but your choices are:

security - just security related updates (default)
updates - other updates, for example bug fixes that don't affect security
proposed - "beta" channel for updates before they are considered final (not recommended)
backports - selected feature updates from later Ubuntu releases (not recommended)

To enable both security updates and bug fixes, allow both origins:

[root@web01 ~]# vim /etc/apt/apt.conf.d/50unattended-upgrades
...
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
        "${distro_id}:${distro_codename}-updates";
//      "${distro_id}:${distro_codename}-proposed";
//      "${distro_id}:${distro_codename}-backports";
};
...

If you wish to exclude packages from automatic updates, you can add a blacklist to the configuration:

[root@web01 ~]# vim /etc/apt/apt.conf.d/50unattended-upgrades
...
Unattended-Upgrade::Package-Blacklist {
        "mysql55*";
        "linux-image";
};
...

More information on the options available can be found within:

[root@web01 ~]# cat /etc/apt/apt.conf.d/20auto-upgrades
[root@web01 ~]# cat /etc/cron.daily/apt