Using file access control lists (FACLs)

An access control list (ACL) provides a more granular approach to permissions, allowing a system administrator to move past the limitations of standard Linux permissions when the situation warrants it.

For instance, perhaps you have a complex use case that requires granting permissions to multiple individual users, or to more than a single group. Some people get around this by simply using 777 permissions, but that in and of itself is a very poor security practice.

ACLs can help solve this, as they allow you to create a sophisticated permissions scheme that grants access to the users and groups that require it, without the need to open up permissions broadly for everyone.

With all this being said, ACLs should only be used when standard permissions cannot accomplish what you're looking to do. ACLs are often reached for as a 'quick fix', and while they do work, they can overly complicate an environment and cause major headaches while troubleshooting. There is also more room for error, as you are adding an additional layer of complexity when employing ACLs.

Enabling ACLs on the filesystem

Setting up a system for ACLs is pretty simple. If you are running a system that uses systemd, then ACLs should already be enabled, as they are a dependency for systemd. For older systems, the process for installing and enabling ACLs is shown below.

First, check to see if ACLs are already enabled on the filesystem:

[root@web01 ~]# tune2fs -l /dev/sda1 |grep acl
Default mount options:    user_xattr acl

If you do not see 'acl' in the output, you can install the ACL utilities by:

# CentOS / RHEL:
[root@web01 ~]# yum install acl

# Ubuntu / Debian:
[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install acl

Now, for all distros, enable ACLs in your /etc/fstab by putting 'acl' in the mount options as shown below:

[root@web01 ~]# cat /etc/fstab 
/dev/mapper/VolGroup-lv_root / ext4 defaults,acl 1 1

Then remount the filesystem so the new mount option takes effect by running:

[root@web01 ~]# mount -o remount /
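
Note that tune2fs only reports the default mount options stored in the ext filesystem superblock. As a supplementary check, you can read the options the kernel is actually using on the live mount from /proc/mounts:

```shell
# Print the active mount options for the root filesystem
grep ' / ' /proc/mounts | awk '{print $4}'
```

Depending on your kernel and filesystem, 'acl' may appear explicitly in this output or be implied by the defaults (ext4 enables ACL support by default on most modern kernels).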

Then verify that ACLs are now enabled on the filesystem:

[root@web01 ~]# tune2fs -l /dev/sda1 |grep acl
Default mount options:    user_xattr acl

Using ACLs

You can determine whether a file or directory has an ACL in place, as the permissions will be followed by a '+' as shown below:

[root@web01 ~]#  ls -al
...
-rw-rwxr--+  1 root root   93 Jan 12 11:22 test
...

You can see what ACLs have been assigned to the file by:

[root@web01 ~]# getfacl test 
# file: test
# owner: root
# group: root
user::rw-
user:jdoe:rwx
group::r--
mask::rwx
other::r--

So in the example above, we see that a user called jdoe also has rwx permissions to the file called test.

To add or modify a user ACL:

[root@web01 ~]# setfacl -m u:jdoe:permissions /var/www/vhosts/domain.com

To add or modify a group ACL:

[root@web01 ~]# setfacl -m g:devs:permissions /var/www/vhosts/domain.com

To modify the ‘other’ ACL:

[root@web01 ~]# setfacl -m o:permissions /var/www/vhosts/domain.com

To add or modify an ACL recursively, use the -R option. It is considered good practice to use the capital X permission when using recursion: X grants execute on directories (which require it for traversal) and on files that already have execute set, while preventing an admin from accidentally adding the execute permission to regular files. An example of this is below:

[root@web01 ~]# setfacl -R -m u:jdoe:rwX /var/www/vhosts/domain.com

To set a default ACL, so new files and subdirectories created within a directory inherit the ACLs set on the parent directory:

[root@web01 ~]# setfacl -R -m default:u:jdoe:rwX /var/www/vhosts/domain.com
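
Default entries show up in getfacl output prefixed with 'default:', so you can confirm they were applied with a quick grep (same example path as above):

```shell
# List only the default (inherited-by-new-children) ACL entries
getfacl --absolute-names /var/www/vhosts/domain.com | grep '^default'
```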

To remove a user from an ACL:

[root@web01 ~]# setfacl -x u:jdoe /var/www/vhosts/domain.com

To remove a group from an ACL:

[root@web01 ~]# setfacl -x g:devs /var/www/vhosts/domain.com

To remove ALL ACLs on a file/directory:

[root@web01 ~]# setfacl -b /var/www/vhosts/domain.com

To remove ALL ACLs on a file/directory recursively:

[root@web01 ~]# setfacl -R -b /var/www/vhosts/domain.com
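
To spot-check that nothing still carries an ACL afterwards, you can look for the '+' marker in a long listing (same hypothetical path as above; note a '+' in a filename would also match, so treat this as a quick sanity check rather than an authoritative audit):

```shell
# Any remaining ACLs will show a '+' after the permission bits
ls -l /var/www/vhosts/domain.com | grep '+' || echo "No ACLs found"
```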

CVE-2016-2183 SWEET32 Birthday attacks

Lately, vulnerability scanners have been flagging servers that are susceptible to CVE-2016-2183. In a nutshell, you need to disable any TLS ciphers that use 3DES. More detailed information about this vulnerability and why it exists can be found at the links below:
https://access.redhat.com/articles/2548661
https://sweet32.info

Mitigating this vulnerability within Apache is pretty straightforward. Below are the steps to confirm whether you are actually affected by this vulnerability and how to remediate it.

First, confirm your Apache web server is actually vulnerable by seeing if any 3DES ciphers are returned in this nmap test:

[user@workstation ~]# nmap --script ssl-enum-ciphers -p 443 SERVER_IP_ADDRESS_HERE
Starting Nmap 5.51 ( http://nmap.org ) at 2016-11-30 17:57 EST
Nmap scan report for xxxxxxxx (xxx.xxx.xxx.xxx)
Host is up (0.018s latency).
PORT    STATE SERVICE
443/tcp open  https
| ssl-enum-ciphers: 
|   TLSv1.2
|     Ciphers (14)
|       TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA
|       TLS_RSA_WITH_AES_128_CBC_SHA
|       TLS_RSA_WITH_AES_128_CBC_SHA256
|       TLS_RSA_WITH_AES_128_GCM_SHA256
|       TLS_RSA_WITH_AES_256_CBC_SHA
|       TLS_RSA_WITH_AES_256_CBC_SHA256
|       TLS_RSA_WITH_AES_256_GCM_SHA384
|     Compressors (1)
|_      uncompressed

As you can see in the output above, this server is affected by this vulnerability, as it's allowing the following 3DES ciphers:

TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
TLS_RSA_WITH_3DES_EDE_CBC_SHA

Disabling this in Apache is pretty easy. Simply navigate to wherever you have your SSLCipherSuite configuration defined and disable 3DES. Typically this is in /etc/httpd/conf.d/ssl.conf, however some setups may also have this defined in each individual Apache vhost. If you are unsure where it's configured, you should be able to locate it on your server by running:

# CentOS / Red Hat
[root@web01 ~]# egrep -R SSLCipherSuite /etc/httpd/*

# Ubuntu / Debian
[root@web01 ~]# egrep -R SSLCipherSuite /etc/apache2/*

Once you locate the config(s) that contain this directive, simply add !3DES to the end of the SSLCipherSuite line as shown below:

[root@web01 ~]# vim /etc/httpd/conf.d/ssl.conf
...
SSLCipherSuite EECDH+AESGCM:EECDH+AES256:EECDH+AES128:EDH+AES:RSA+AESGCM:RSA+AES:!ECDSA:!NULL:!MD5:!DSS:!3DES
...
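
If you want to preview what that cipher string actually permits before touching Apache, you can expand it locally with openssl ciphers on any machine; with !3DES in place, no 3DES-based entries should be listed:

```shell
# Expand the cipher string and confirm nothing 3DES-based survives
openssl ciphers -v 'EECDH+AESGCM:EECDH+AES256:EECDH+AES128:EDH+AES:RSA+AESGCM:RSA+AES:!ECDSA:!NULL:!MD5:!DSS:!3DES' \
  | grep 3DES || echo "3DES disabled"
```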

Once that is done, restart Apache by:

# CentOS / Red Hat
[root@web01 ~]# service httpd restart

# Ubuntu / Debian
[root@web01 ~]# service apache2 restart

Finally, retest using nmap to confirm no ciphers using 3DES show up:

[user@workstation ~]# nmap --script ssl-enum-ciphers -p 443 SERVER_IP_ADDRESS_HERE
Starting Nmap 5.51 ( http://nmap.org ) at 2016-11-30 18:03 EST
Nmap scan report for xxxxxxxx (xxx.xxx.xxx.xxx)
Host is up (0.017s latency).
PORT    STATE SERVICE
443/tcp open  https
| ssl-enum-ciphers: 
|   TLSv1.2
|     Ciphers (12)
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
|       TLS_RSA_WITH_AES_128_CBC_SHA
|       TLS_RSA_WITH_AES_128_CBC_SHA256
|       TLS_RSA_WITH_AES_128_GCM_SHA256
|       TLS_RSA_WITH_AES_256_CBC_SHA
|       TLS_RSA_WITH_AES_256_CBC_SHA256
|       TLS_RSA_WITH_AES_256_GCM_SHA384
|     Compressors (1)
|_      uncompressed

If no 3DES ciphers are returned, as in the listing above, you should be good to rerun your vulnerability scan!
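
As one last sanity check, if you saved the nmap output to a file (a hypothetical filename scan.txt is assumed here), a quick grep makes it easy to count any lingering 3DES cipher lines; 0 means clean:

```shell
# Count 3DES cipher lines in the saved scan output
grep -c 3DES scan.txt
```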

Quick programming language test scripts

This article contains simple hello world scripts for testing a few languages, along with scripts to test mail functionality (and, for PHP, MySQL access).

PHP

PHP script that will output hello world:

[root@web01 ~]# vim hello_world.php
#!/usr/bin/php
<?php
print "Hello World!\n";
?>

[root@web01 ~]# php hello_world.php 
Hello World!

PHP script to test mail functionality:

[root@web01 ~]# vim email_test.php
#!/usr/bin/php
<?php
$hostname = shell_exec('hostname');
$to = "[email protected]";
$subject = "Test mail";
$message = "Hello! This is a simple email message.";
$from = "root@" . $hostname;
$headers = "From:" . $from;
$parameters = "-f " . $from;
mail($to,$subject,$message,$headers,$parameters);
echo "Mail Sent.\n";
?>

[root@web01 ~]# php email_test.php
Mail Sent.

PHP script to test MySQL access:

[root@web01 ~]# vim db_test.php
<?php
# Fill our vars and run on cli
$dbname = 'DATABASE_NAME';
$dbuser = 'DATABASE_USER';
$dbpass = 'DATABASE_PASS';
$dbhost = 'DATABASE_HOST';
$link = mysqli_connect($dbhost, $dbuser, $dbpass) or die("Unable to Connect to '$dbhost'");
mysqli_select_db($link, $dbname) or die("Could not open the db '$dbname'");
$test_query = "SHOW TABLES FROM $dbname";
$result = mysqli_query($link, $test_query);
$tblCnt = 0;
while ($tbl = mysqli_fetch_array($result)) {
    $tblCnt++;
}
if (!$tblCnt) {
    echo "There are no tables<br />\n";
} else {
    echo "There are $tblCnt tables<br />\n";
}

[root@web01 ~]# php db_test.php
There are no tables

Perl

Perl script that will output hello world:

[root@web01 ~]# vim hello_world.pl
#!/usr/bin/perl
print "Hello World!\n";

[root@web01 ~]# perl hello_world.pl
Hello World!

Perl script to test mail functionality:

[root@web01 ~]# vim email_test.pl
#!/usr/bin/perl
use Sys::Hostname;
$hostname = hostname;
$to = '[email protected]';
$subject = 'Test mail';
$message = 'Hello! This is a simple email message.';
$from = 'root@' . $hostname;
open (MAIL,'|/usr/sbin/sendmail -t');
print MAIL "To: $to\n";
print MAIL "From: $from\n";
print MAIL "Subject: $subject\n\n";
print MAIL $message;
close (MAIL);
print "Mail Sent.\n";

[root@web01 ~]# perl email_test.pl
Mail Sent.

Python
Python script that will output hello world:

[root@web01 ~]# vim hello_world.py
#!/usr/bin/python
print "Hello World!";

[root@web01 ~]# python hello_world.py
Hello World!

Python script to test mail functionality:

[root@web01 ~]# vim email_test.py
#!/usr/bin/python
import smtplib
import socket
hostname = socket.getfqdn()
mailfrom = 'root@' + hostname
to = '[email protected]'
subject = 'Test mail'
message = 'Hello! This is a simple email message.'
smtpserver = smtplib.SMTP("127.0.0.1",25)
header = 'To:' + to + '\n' + 'From: ' + mailfrom + '\n' + 'Subject:' + subject + '\n\n'
smtpserver.sendmail(mailfrom, to, header + message)
print 'Mail Sent.'
smtpserver.close()

[root@web01 ~]# python email_test.py
Mail Sent.

Ruby

Ruby script that will output hello world:

[root@web01 ~]# vim hello_world.rb
#!/usr/bin/ruby
puts 'Hello World!'

[root@web01 ~]# ruby hello_world.rb
Hello World!

Ruby script to test mail functionality:

[root@web01 ~]# vim email_test.rb
#!/usr/bin/ruby
require 'net/smtp'
require 'socket'
hostname = Socket.gethostname
from = 'root@' + hostname
to = '[email protected]'
subject = 'Test mail'
message = 'Hello! This is a simple email message.'
msg = <<EOF
From: #{from}
To: #{to}
Subject: #{subject}

#{message}
EOF
Net::SMTP.start('localhost').send_message msg, from, to
puts 'Mail Sent.'

[root@web01 ~]# ruby email_test.rb
Mail Sent.

Copy disk over network using SSH

There are use cases where you want a block level copy of your server's drives, either for backup/archival purposes or for forensic analysis. In both cases, how can you perform a block level copy of your drives when you don't have enough storage space on the server itself to store it?

This is where dd and ssh come into play. You can perform a block level copy of your disks using dd, and pipe it through ssh to transfer it to your remote workstation.

Things to keep in mind: when you perform a dd copy, it copies the full disk, including both used and unused space. So if you have 2 drives, one a 1.5T drive and the other a 500G drive, you will need a total of 2T of space on your workstation, or wherever the images are being copied to.

In a perfect world, the drives should be unmounted, or if you are copying the / partition, the server should be in rescue mode. However, dd can be run live if you like, as long as you're not concerned about dynamic data like your databases being corrupted in the destination image.

The first step is to determine which partitions you want to copy. You can find this by checking df on the server:

[root@web01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       14G  1.4G   12G  11% /
tmpfs                 939M     0  939M   0% /dev/shm
/dev/sda1             477M   99M  353M  22% /boot
/dev/sdb1             2.0G  3.1M  1.9G   1% /mnt

In this example, I want to copy both my / partition, and my /mnt drive.

The following commands are meant to be run on the destination server or workstation where you will be storing these images. Do not run these on the origin server!

[root@workstation ~]# ssh [email protected] "dd if=/dev/mapper/VolGroup-lv_root " | dd of=192.168.1.100-VolGroup-lv_root.img
[root@workstation ~]# ssh [email protected] "dd if=/dev/sdb1 " | dd of=192.168.1.100-sdb1.img

Depending on how large the drives are, and what your network latency is like, this could take a very long time to complete.
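
If network throughput is the bottleneck, you can compress the stream in flight with gzip; the stream is decompressed on arrival, so the resulting image file is identical to an uncompressed copy (same example host and device as above):

```shell
# dd on the remote side, gzip to shrink the transfer, gunzip locally before writing the image
ssh [email protected] "dd if=/dev/sdb1 | gzip -1 -" | gzip -d | dd of=192.168.1.100-sdb1.img
```

gzip -1 favors speed over compression ratio, which is usually the right trade-off when the CPU is fast and the link is slow.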

Once the process is completed, you can verify each image by mounting it on your destination server as a loopback device:

[root@workstation ~]# mkdir -p /mnt/server/disk1
[root@workstation ~]# mkdir -p /mnt/server/disk2
[root@workstation ~]# mount -o loop 192.168.1.100-VolGroup-lv_root.img /mnt/server/disk1
[root@workstation ~]# mount -o loop 192.168.1.100-sdb1.img /mnt/server/disk2
[root@workstation ~]# ls /mnt/server/disk1
[root@workstation ~]# ls /mnt/server/disk2

And once you have confirmed the disk images look good, unmount the loopback devices by running:

[root@workstation ~]# umount /mnt/server/disk1
[root@workstation ~]# umount /mnt/server/disk2

Basic Apache Hardening

Below are a couple of the more common best practices that should be used when hardening Apache. These are simply some basics for mitigating a few of the more common CVEs that have been cropping up in Apache.

At the very minimum, disable the TRACE method and prevent information disclosure by updating the ServerTokens and ServerSignature directives. This can be done within Apache by modifying the following file:

# CentOS 5 and 6
vim /etc/httpd/conf.d/security.conf

# Ubuntu 12.04
vim /etc/apache2/conf.d/security

Then set it accordingly as shown below:

# Disable access to the entire file system except for the directories that
# are explicitly allowed later.
#
# This currently breaks the configurations that come with some web application
# Debian packages. It will be made the default for the release after lenny.
#
#<Directory />
#       AllowOverride None
#       Order Deny,Allow
#       Deny from all
#</Directory>


# Changing the following options will not really affect the security of the
# server, but might make attacks slightly more difficult in some cases.

#
# ServerTokens
# This directive configures what you return as the Server HTTP response
# Header. The default is 'Full' which sends information about the OS-Type
# and compiled in modules.
# Set to one of:  Full | OS | Minimal | Minor | Major | Prod
# where Full conveys the most information, and Prod the least.
#
ServerTokens Prod 

#
# Optionally add a line containing the server version and virtual host
# name to server-generated pages (internal error documents, FTP directory
# listings, mod_status and mod_info output etc., but not CGI generated
# documents or custom error documents).
# Set to "EMail" to also include a mailto: link to the ServerAdmin.
# Set to one of:  On | Off | EMail
#
ServerSignature Off 

#
# Allow TRACE method
#
# Set to "extended" to also reflect the request body (only for testing and
# diagnostic purposes).
#
# Set to one of:  On | Off | extended
#
TraceEnable Off

Another common area to lock down further from the vendor defaults is the SSL configuration, which is located in:

# CentOS 5 and 6
vim /etc/httpd/conf.d/ssl.conf

# Ubuntu 12.04
vim /etc/apache2/mods-enabled/ssl.conf

The most common ones I see on security reports are:

- Set SSLHonorCipherOrder to 'on'
- Restrict the allowed ciphers in SSLCipherSuite
- Enable only secure protocols

The ciphers can be a bit tricky, especially if you have a WAF or IDS in front of your solution. There is no one-size-fits-all here, so please be sure to test your site after making these changes, as they can cause problems if set incorrectly for your solution. I'll post some scenarios below.

For securing your ssl.conf against many of the current vulnerabilities posted at the time of this writing, disabling TLSv1.0 (which will be a requirement come June 2018), and enabling forward secrecy, you can use:

SSLCipherSuite EECDH+AESGCM:EECDH+AES256:EECDH+AES128:EDH+AES:RSA+AESGCM:RSA+AES:!ECDSA:!NULL:!MD5:!DSS:!3DES
SSLProtocol -ALL +TLSv1.1 +TLSv1.2
SSLHonorCipherOrder On

If you prefer to leave TLSv1.0 enabled for the time being, as you still have clients connecting to your site with unsupported browsers on Windows XP that don't support anything above TLSv1.0, then you can try the following:

SSLCipherSuite ALL:!EXP:!NULL:!ADH:!LOW:!SSLv3:!SSLv2:!MD5:!RC4:!DSS:!3DES
SSLProtocol all -SSLv2 -SSLv3
SSLHonorCipherOrder On

If you have an Imperva WAF or Alert Logic IDS in front of your solution that needs to decrypt the SSL traffic for analysis, you can't use forward secrecy, since they need to perform a man-in-the-middle on the traffic. If you still want to disable the insecure ciphers, modify the variables in ssl.conf as follows:

SSLCipherSuite HIGH:!MEDIUM:!AESGCM:!ECDH:!aNULL:!ADH:!DH:!EDH:!CAMELLIA:!GCM:!KRB5:!IDEA:!EXP:!eNULL:!LOW:!RC4:!3DES
SSLProtocol all -SSLv2 -SSLv3
SSLHonorCipherOrder On

As a final note, Mozilla also put out a config generator for this. It can provide some additional viewpoints on how you can go about choosing the ciphers. The link is here.

Docker quick start guide

I have avoided the whole Docker thing for some time now, mainly because I didn't understand why the world needed another kernel-level virtualization product. I have been using OpenVZ and FreeBSD jails for years, and just never saw the need to add another product to the mix.

But after spending some time with it, I realized that Docker is essentially little more than a wrapper for using LXC on Linux. So if you're already familiar with OpenVZ, LXC, FreeBSD jails, or Solaris Zones, then Docker is actually really simple to pick up.

I am not going to go into the details of what container virtualization is and the pros and cons since there are literally hundreds of good sites online that dive into that.

Instead, since many people learn what questions to ask by actually doing, I'll post a basic example here so you can see it in action and get started quickly with Docker on CentOS 7.

Our end goal for this quick start guide will simply illustrate how you can use Docker on a single CentOS 7 server to spin up multiple containers for a test environment. The Docker host can be a VM, cloud server, vagrant image, or a dedicated server.

So to get started, first configure the basics on your CentOS 7 server, such as setting the hostname, updating the system, installing NTP and sysstat, and rebooting so you're running the latest kernel:

[root@localhost ~]# hostnamectl set-hostname docker01.example.com
[root@docker01 ~]# yum -y update
[root@docker01 ~]# yum install ntp sysstat
[root@docker01 ~]# chkconfig ntpd on
[root@docker01 ~]# service ntpd start
[root@docker01 ~]# reboot

Now setup the official Docker repo on your server:

[root@docker01 ~]# vim /etc/yum.repos.d/docker.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

Then install Docker:

[root@docker01 ~]# yum install docker-engine
[root@docker01 ~]# systemctl enable docker
[root@docker01 ~]# systemctl start docker

Since there is really no reason to run Docker as root, we are going to set up an unprivileged user to run Docker:

[root@docker01 ~]# useradd dockeradmin
[root@docker01 ~]# usermod -aG docker dockeradmin

Now log in as user dockeradmin, and confirm Docker is working:

[root@docker01 ~]# su - dockeradmin
[dockeradmin@docker01 ~]# docker ps
[dockeradmin@docker01 ~]# docker images

If no errors are returned, then Docker is working! So let's pull down some OS images I use often. This is not a requirement, but it just makes deploying new containers a bit faster:

[dockeradmin@docker01 ~]# docker pull centos:centos6
[dockeradmin@docker01 ~]# docker pull centos:centos7
[dockeradmin@docker01 ~]# docker pull ubuntu:precise
[dockeradmin@docker01 ~]# docker pull ubuntu:trusty
[dockeradmin@docker01 ~]# docker pull ubuntu:xenial

Let's spin up our first container:

[dockeradmin@docker01 ~]# docker run --name centos6-test01 -id --restart unless-stopped centos:centos6 /bin/bash

Once it's running, you can get a console by running:

[dockeradmin@docker01 ~]# docker exec -it centos6-test01 /bin/bash

From there, you will be able to configure and use the container as you see fit!

Some other basic commands you will need to know for how to interact with Docker are below.

To see your running containers:

[dockeradmin@docker01 ~]# docker ps

To see both your running and stopped containers:

[dockeradmin@docker01 ~]# docker ps -a

To stop a running container:

[dockeradmin@docker01 ~]# docker stop your_container_name

To start up a stopped container:

[dockeradmin@docker01 ~]# docker start your_container_name

To create an image of a running container that you can use to deploy new containers from:

[dockeradmin@docker01 ~]# docker commit -m "your_commit_message" -a dockeradmin your_container_name dockeradmin/your_new_image_name:v1

To see what images you have setup:

[dockeradmin@docker01 ~]# docker images

To remove an image:

[dockeradmin@docker01 ~]# docker rmi your_image_id

If you would like to view the stats of your containers:

[dockeradmin@docker01 ~]# docker stats

To delete a container:

[dockeradmin@docker01 ~]# docker stop your_container_name
[dockeradmin@docker01 ~]# docker rm your_container_name

If you want to stop all running containers and delete them from the system so you can start fresh:

[dockeradmin@docker01 ~]# docker stop `docker ps -a -q`
[dockeradmin@docker01 ~]# docker rm `docker ps -a -q`

This should get you started in the world of Docker. Once you get a feel for it, treating the containers as their own "VMs", you will start to recognize some of the benefits of Docker.

From there, you will be able to start diving deeper, and instead of using containers as "VMs", you can start thinking about how to break up your application into individual containers to give you a finer degree of control and portability over your application.

Determine how sites react under increased requests with curl

There are times when you test out your site using curl and it loads fine, yet when multiple users or a search bot go through your site, you notice the page request times skyrocket. There could be many causes, though usually it revolves around a lack of caching, since repeated requests for the same page need to first check in with the database, then get passed over to PHP to finish processing the request.

Okay great, cache is king. We know this. But how can we actually confirm this is a problem? Well, this is where curl comes to the rescue.

First, establish a baseline by sending a single request to each website:

[user@workstation01 ~]# for i in www.example1.com www.example2.com www.example3.com; do echo -n "$i "; (time curl -IL $i -XGET) 2>&1 | grep -E "real|HTTP"; echo; done

www.example1.com HTTP/1.1 200 OK
real	0m0.642s

www.example2.com HTTP/1.1 200 OK
real	0m2.234s

www.example3.com HTTP/1.1 200 OK
real	0m0.421s

So based on those results, we established that two of the sites take about half a second to load, and the other (www.example2.com) takes about 2 seconds to load when hit with a single request.

As www.example2.com takes 2 seconds to load with a single request, let's see what happens during increased traffic. This could be from valid users, search bots, or something else. How much will the load times increase for the site? Here is an example sending over 25 requests:

[user@workstation01 ~]# for i in {1..25}; do (time curl -sIL http://www.example2.com -X GET &) 2>&1 | grep -E "real|HTTP" & done

HTTP/1.1 200 OK
real	0m11.297s
HTTP/1.1 200 OK
real	0m11.395s
HTTP/1.1 200 OK
real	0m11.906s
HTTP/1.1 200 OK
real	0m12.079s
...
HTTP/1.1 200 OK
real	0m11.297s

So with the increased requests, the page load times increase from 2 seconds all the way up to 11-12 seconds!

Determining why this happens will involve investigation outside the scope of this article. However, at least now we know which site doesn't perform well under increased requests.

While every site is different, let's say www.example2.com was a WordPress site. Try installing the W3 Total Cache (W3TC) plugin for WordPress. Basic items to enable/disable within W3TC would be:

- Enable Page Cache
- Disable Minify options
- Enable Browser Cache

Next, sign up for a free CloudFlare account, then add the following Page Rules so CloudFlare actually starts caching your WordPress site:

1. *example2.com/wp-admin* : Cache Level (Bypass)
2. *example2.com/wp-login.php* : Cache Level (Bypass)
3. *example2.com/* : Cache Level (Cache Everything)
* The order is extremely critical here.  Make sure the pages you do not wish to cache are above the last line!
** Every site is different.  These rules may or may not work for you.

Then retest to see if your changes helped improve the times:

[user@workstation01 ~]# for i in {1..25}; do (time curl -sIL http://www.example2.com -X GET &) 2>&1 | grep -E "real|HTTP" & done

HTTP/1.1 200 OK
real	0m1.347s
HTTP/1.1 200 OK
real	0m1.342s
HTTP/1.1 200 OK
real	0m1.237s
HTTP/1.1 200 OK
real	0m1.021s
...
HTTP/1.1 200 OK
real	0m1.532s

Vagrant Introduction

What is Vagrant? Taken directly from the vendor's website:

Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team.

For more details about how Vagrant can be beneficial, I strongly encourage you to read the vendor's website at:
https://www.vagrantup.com/docs/why-vagrant

Vagrant can be somewhat difficult to understand without seeing it in action. But in summary: building test servers can take a long time. With Vagrant, you can spin up your test servers with a few simple commands, so you can begin performing the tests you wanted to run without the wait. The environments are also portable, so it's very easy to share them with colleagues.

All Vagrant does is communicate with a provider of your choice, such as VirtualBox, AWS, or Rackspace Cloud, and spin up the boxes using the respective provider's APIs.

Installation

Installing Vagrant with VirtualBox is simple and easy on most desktop OSes running Windows, Mac OS X, or Linux. Simply use the links below to download and install both VirtualBox and Vagrant on your desktop:

Known working versions
https://www.virtualbox.org/wiki/Download_Old_Builds_5_0
https://releases.hashicorp.com/vagrant/1.8.4/ **

** Vagrant v1.8.5 currently has issues with CentOS. Therefore, use v1.8.4 for now. Once v1.8.6 comes out, the bug should be addressed.

Once you have Vagrant and VirtualBox installed on your desktop, you are ready to begin deploying your test servers.

Getting familiar with Vagrant

Some quick terms you need to know:
Vagrantfile: This is the main configuration file for your test server.
Provider: Which API are we using? AWS, Rackspace Cloud, VirtualBox, etc.
Vagrant Boxes: This is the Vagrant base image. Think of it as your golden template that is used to deploy your test servers.

To manage your individual environments or test servers, you simply create the directory on your desktop, and tell Vagrant to deploy the OS to that individual directory.

An example of some of the test environments on my workstation are:

/home/user/vagrant/centos6
/home/user/vagrant/ubuntu1404
/home/user/vagrant/centos6-lamp-environment

Most of the common distros used today have official boxes available. I included puppetlabs, as they have boxes that are up to date with base images, as well as boxes with Puppet Enterprise already installed:
https://atlas.hashicorp.com/centos
https://atlas.hashicorp.com/ubuntu
https://atlas.hashicorp.com/debian
https://atlas.hashicorp.com/freebsd
https://atlas.hashicorp.com/puppetlabs

Quick start to see Vagrant in action

This is just a quick way to see Vagrant in action. It is important to remember that all the vagrant commands apply to whatever directory you are in on your desktop!

First, create the directory on your desktop for your test server:

[user@workstation ~]# mkdir -p vagrant/centos6-test
[user@workstation ~]# cd vagrant/centos6-test

Now initialize a Vagrantfile, which tells Vagrant what box to use and how to configure it:

[user@workstation ~]# vagrant init centos/6

Then start up the test server:

[user@workstation ~]# vagrant up

And that's it! You can now log into your test server and get to work by running:

[user@workstation ~]# vagrant ssh

Once you're done with your testing, you can remove the test server by running:

[user@workstation ~]# vagrant destroy

Common commands

These commands are to be run from inside the directory of your project.

To check the status of a single test server:

[user@workstation ~]# vagrant status

If you made changes to your Vagrantfile and need to reload your settings:

[user@workstation ~]# vagrant reload

If you want to shutdown a test server, but you don’t want to destroy it:

[user@workstation ~]# vagrant halt

To check the status of all your test servers:

[user@workstation ~]# vagrant global-status

How to customize your test server

What makes Vagrant powerful is the Vagrantfile that resides in each of your test server directories. This file allows you to specify IPs, set users and passwords, run bootstrap scripts, or even spin up multiple servers from one file. This greatly speeds up provisioning time so you can focus on what you truly needed to test.

The examples below are full examples of Vagrantfiles you can copy into your project's directory to spin up test servers.

Here is an example for spinning up an Ubuntu 12.04 server with Apache already installed:

[user@workstation ~]# mkdir vagrant/ubuntu1204-test01
[user@workstation ~]# cd vagrant/ubuntu1204-test01
[user@workstation ~]# vim Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise32"
  config.vm.provision "shell", inline: <<-SHELL
    sudo apt-get update
    sudo apt-get install -y apache2
  SHELL
end

[user@workstation ~]# vagrant up

Below is an example for spinning up a CentOS 6 server with the private IP address of 172.16.0.2, accessible only to your workstation:

[user@workstation ~]# mkdir vagrant/centos6-test01
[user@workstation ~]# cd vagrant/centos6-test01
[user@workstation ~]# vim Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "centos/6"
  config.vm.network "private_network", ip: "172.16.0.2"
end

[user@workstation ~]# vagrant up

Below is an example for spinning up a CentOS 6 server with the public IP address of 123.123.123.123, accessible by anyone:

[user@workstation ~]# mkdir vagrant/centos6-test02
[user@workstation ~]# cd vagrant/centos6-test02
[user@workstation ~]# vim Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "centos/6"
  config.vm.network "public_network", ip: "123.123.123.123"
end

[user@workstation ~]# vagrant up

Below is an example for spinning up a CentOS 6 server with Puppet Enterprise, with the private IP address of 10.1.0.3, and adding a bootstrap.sh file that we'll use to automate installing and configuring Nginx:

[user@workstation ~]# mkdir vagrant/centos6-puppet-server01
[user@workstation ~]# cd vagrant/centos6-puppet-server01
[user@workstation ~]# vim Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "puppetlabs/centos-6.6-64-puppet"
  config.vm.network "private_network", ip: "10.1.0.3"
  config.vm.network "forwarded_port", guest: 80, host: 80
  config.vm.provision :shell, path: "bootstrap.sh"
end

[user@workstation ~]# vim bootstrap.sh
#!/usr/bin/env bash

# Sanity checks
if [ ! `whoami` = root ]; then
        echo "This script must be run as root"
        exit 1
fi

# CentOS specific tasks:
if [ -f /etc/redhat-release ]; then
	# Temporarily set SELinux to permissive so provisioning is not blocked
	selinux=`getenforce`
	if [ "$selinux" = "Enforcing" ]; then
		setenforce 0
	fi

	# On RHEL/CentOS 7, install the iptables services package
	if grep -q "Linux release 7" /etc/redhat-release; then
		yum -y install iptables-services
	fi
fi

# Install required Puppet modules from the forge
puppet module install puppetlabs-stdlib
puppet module install jfryman-nginx
puppet module install puppetlabs-firewall

# General default.pp for Puppet
cat << EOF > /root/default.pp

class { 'nginx': }

nginx::resource::vhost { 'example.com':
  www_root    => '/var/www/example.com',
}

package { 'git':
  ensure => present,
}

firewall { '000 accept all icmp':
  proto  => 'icmp',
  action => 'accept',
}

firewall { '100 allow SSH':
  port   => [22],
  proto  => 'tcp',
  action => 'accept',
}

firewall { '101 allow http over port 80':
  port   => [80],
  proto  => 'tcp',
  action => 'accept',
}

EOF

# Execute first run of default.pp
puppet apply /root/default.pp

[user@workstation ~]# vagrant up

Below is an example for spinning up two CentOS 6 servers with private IP addresses, accessible only to your workstation, each with 1G of memory, installing MySQL on the db server and Apache/PHP on the web server:

Vagrant.configure(2) do |config|
  config.vm.define "db01" do |db01|
    db01.vm.box = "centos/6"
    db01.vm.hostname = "db01.example.com"
    db01.vm.network "private_network", ip: "192.168.33.100"
    db01.vm.provision "shell", inline: <<-SHELL
        sudo yum -y update
        sudo yum install -y mysql
    SHELL
    db01.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.memory = "1024"
    end
  end
  config.vm.define "web01" do |web01|
    web01.vm.box = "centos/6"
    web01.vm.hostname = "web01.example.com"
    web01.vm.network "private_network", ip: "192.168.33.101"
    web01.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.memory = "1024"
    end
    web01.vm.provision "shell", inline: <<-SHELL
        sudo yum -y update
        sudo yum install -y httpd php
    SHELL
  end
end

[user@workstation ~]# vagrant up

Setting up MySQL Master Slave Replication with LVM snapshots

This article is part of a series of setting up MySQL replication. As with most things, there is always more than one way to do something. In the case of setting up MySQL replication, or rebuilding it, some options are better than others depending on your use case.

The articles in the series are below:
Setting up MySQL Replication using mysqldump
Setting up MySQL Replication using Percona XtraBackup
Setting up MySQL Replication using Rsync
Setting up MySQL Replication using LVM snapshots

This guide will document how to set up MySQL Master / Slave replication using LVM snapshots. So why use LVM snapshots for setting up or rebuilding MySQL replication? If your databases and tables are large, you can greatly limit the downtime felt by the application by using LVM snapshots. This should still be performed during a scheduled maintenance window, as you will be flushing the tables with a READ LOCK.

Some prerequisites before proceeding are below:

1. Confirming that your datadir is indeed configured on a partition running LVM:

[root@db01 ~]# lvs

2. Confirming that you have enough free space in your Volume Group for the LVM snapshot:

[root@db01 ~]# vgs
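
To script this check, you can ask vgs for the free space directly. This is a sketch assuming a volume group named vglocal00 (matching the lvcreate command later in this guide) and the 10G snapshot size used there:

```shell
# Report free space in the vglocal00 volume group in gigabytes,
# with no header and no unit suffix
FREE=$(vgs --noheadings --nosuffix --units g -o vg_free vglocal00 | tr -d ' ')

# Compare against the planned snapshot size (10G here)
if awk -v free="$FREE" 'BEGIN { exit !(free >= 10) }'; then
    echo "Enough free space in vglocal00 for a 10G snapshot"
else
    echo "Not enough free space in vglocal00 for a 10G snapshot" >&2
fi
```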

So in the sections below, we’ll configure the Master and Slave MySQL server for replication, then we’ll use an LVM snapshot for syncing the databases over to db02.

Setup the Master MySQL server

Configure the my.cnf as shown below:

log-bin=/var/lib/mysql/db01-binary-log
expire-logs-days=5
server-id=1

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db01 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db01 ~]# service mysql restart

Finally, grant access to the Slave so it has access to communicate with the Master:

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.x.x.x' IDENTIFIED BY 'your_password';

Setup the Slave MySQL server

Configure the my.cnf as shown below:

relay-log=/var/lib/mysql/db02-relay-log
relay-log-space-limit = 4G
read-only=1
server-id=2

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db02 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db02 ~]# service mysql restart

Use LVM snapshots for syncing over the databases

For reference, the rest of this guide will refer to the servers as follows:

db01 - Master MySQL Server
db02 - Slave MySQL Server

On db02 only, rename the existing MySQL datadir, and create a fresh folder:

[root@db02 ~]# service mysqld stop
[root@db02 ~]# mv /var/lib/mysql /var/lib/mysql.old
[root@db02 ~]# mkdir /var/lib/mysql
[root@db02 ~]# chown mysql:mysql /var/lib/mysql

On db01 only, create a snapshot script for MySQL so the lock is held for as short a time as possible, limiting downtime:

[root@db01 ~]# vim /root/lvmscript.sql
-- Flush the tables and take a read lock so the snapshot is consistent
FLUSH LOCAL TABLES;
FLUSH LOCAL TABLES WITH READ LOCK;
SHOW MASTER STATUS;
-- Snapshot the mysql00 logical volume while the lock is held
SYSTEM lvcreate -L 10G -s vglocal00/mysql00 -n mysqlsnapshot00 3>&-
SHOW MASTER STATUS;
UNLOCK TABLES;

On db01 only, during a scheduled maintenance window, run the script to create the LVM snapshot, and be sure to take note of the master status information, as it will be needed later:

[root@db01 ~]# mysql -t < /root/lvmscript.sql 

On db01 only, mount the snapshot, sync over the contents to db02, and then remove the snapshot since it will no longer be needed:

[root@db01 ~]# mount /dev/mapper/vglocal00-mysqlsnapshot00 /mnt
[root@db01 ~]# rsync -axvz --delete -e ssh /mnt/ root@db02:/var/lib/mysql/
[root@db01 ~]# umount /mnt
[root@db01 ~]# lvremove vglocal00/mysqlsnapshot00

On db02 only, remove the stale mysql.sock and auto.cnf files (removing auto.cnf forces MySQL to generate a new server UUID on startup), start up MySQL, configure db02 to connect to db01 using the information from the SHOW MASTER STATUS command you ran on db01 previously, and start replication:

[root@db02 ~]# rm /var/lib/mysql/mysql.sock
[root@db02 ~]# rm -f /var/lib/mysql/auto.cnf
[root@db02 ~]# service mysqld start
[root@db02 ~]# mysql
mysql> CHANGE MASTER TO MASTER_HOST='10.x.x.x', MASTER_USER='repl', MASTER_PASSWORD='your_password', MASTER_LOG_FILE='db01-binary-log.000001', MASTER_LOG_POS=1456783;
mysql> start slave;
mysql> show slave status\G
...
        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
...

If those values are the same as what is shown above, then replication is working properly! Perform a final test by creating a test database on the Master MySQL server, then check to ensure it shows up on the Slave MySQL server. Afterwards, feel free to drop that test database on the Master MySQL server.
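
The final test described above can be sketched as follows; the database name repl_test is just a placeholder:

```shell
# On db01 (the Master): create a throwaway database
mysql -e "CREATE DATABASE repl_test;"

# On db02 (the Slave): count schemas with that name; "1" means it replicated
COUNT=$(mysql -N -e "SELECT COUNT(*) FROM information_schema.SCHEMATA WHERE SCHEMA_NAME='repl_test';")
if [ "$COUNT" = "1" ]; then
    echo "Replication confirmed"
fi

# Back on db01: drop the test database (the drop replicates to db02 as well)
mysql -e "DROP DATABASE repl_test;"
```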

From here, you should be good to go! Just be sure to set up a monitoring check to ensure that replication is always running and doesn't encounter any errors. A very basic MySQL replication check can be found here:
https://github.com/stephenlang/system-health-check

Setting up MySQL Master Slave Replication with rsync

This article is part of a series of setting up MySQL replication. As with most things, there is always more than one way to do something. In the case of setting up MySQL replication, or rebuilding it, some options are better than others depending on your use case.

The articles in the series are below:
Setting up MySQL Replication using mysqldump
Setting up MySQL Replication using Percona XtraBackup
Setting up MySQL Replication using Rsync
Setting up MySQL Replication using LVM snapshots

This guide will document how to set up MySQL Master / Slave replication using rsync. So why use rsync for setting up or rebuilding MySQL replication? If your databases and tables are large but fairly quiet, you can limit the downtime by syncing over the majority of the content live with rsync, then performing a final rsync during a scheduled maintenance window, under a READ LOCK, to catch any tables that changed. This is also very useful when you do not have enough disk space available on db01 to perform a traditional backup using mysqldump or Percona XtraBackup, as the data is rsync'ed directly over to db02.

This is a fairly simple method of setting up MySQL replication, or rebuilding it. So in the sections below, we'll configure the Master and Slave MySQL servers for replication, then sync over the databases.

Setup the Master MySQL server

Configure the my.cnf as shown below:

log-bin=/var/lib/mysql/db01-binary-log
expire-logs-days=5
server-id=1

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db01 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db01 ~]# service mysql restart

Finally, grant access to the Slave so it has access to communicate with the Master:

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.x.x.x' IDENTIFIED BY 'your_password';

Setup the Slave MySQL server

Configure the my.cnf as shown below:

relay-log=/var/lib/mysql/db02-relay-log
relay-log-space-limit = 4G
read-only=1
server-id=2

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db02 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db02 ~]# service mysql restart

Rsync the databases

For reference, the rest of this guide will refer to the servers as follows:

db01 - Master MySQL Server
db02 - Slave MySQL Server

On db02 only, rename the existing MySQL datadir, and create a fresh folder:

[root@db02 ~]# service mysqld stop
[root@db02 ~]# mv /var/lib/mysql /var/lib/mysql.old
[root@db02 ~]# mkdir /var/lib/mysql
[root@db02 ~]# chown mysql:mysql /var/lib/mysql

On db01 only, perform the initial sync of data over to db02:

[root@db01 ~]# rsync -axvz /var/lib/mysql/ root@db02:/var/lib/mysql/

Now that you have the majority of the databases moved over, it's time to perform the final sync of data during a scheduled maintenance window, as you will be flushing the tables with a READ LOCK before syncing over the data.

On db01 only, flush the tables with READ LOCK, and grab the master status information as we’ll need that later. It is critical that you do NOT exit MySQL while the READ LOCK is in place. Once you exit MySQL, the READ LOCK is removed. Therefore, the example below will run this in a screen session so it continues to run in the background:

[root@db01 ~]# screen -S mysql
[root@db01 ~]# mysql
mysql> FLUSH TABLES WITH READ LOCK;
mysql> SHOW MASTER STATUS;
(detach screen session with ctrl a d)

On db01 only, perform the final rsync of the databases to db02, then release the READ LOCK on db01:

[root@db01 ~]# rsync -axvz --delete /var/lib/mysql/ root@db02:/var/lib/mysql/
[root@db01 ~]# screen -dr mysql
mysql> quit
[root@db01 ~]# exit

On db02 only, remove the stale mysql.sock and auto.cnf files (removing auto.cnf forces MySQL to generate a new server UUID on startup), start up MySQL, configure db02 to connect to db01 using the information from the SHOW MASTER STATUS command you ran on db01 previously, and start replication:

[root@db02 ~]# rm /var/lib/mysql/mysql.sock
[root@db02 ~]# rm /var/lib/mysql/auto.cnf
[root@db02 ~]# service mysqld start
[root@db02 ~]# mysql
mysql> CHANGE MASTER TO MASTER_HOST='10.x.x.x', MASTER_USER='repl', MASTER_PASSWORD='your_password', MASTER_LOG_FILE='db01-binary-log.000001', MASTER_LOG_POS=1456783;
mysql> start slave;
mysql> show slave status\G
...
        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
...

If those values are the same as what is shown above, then replication is working properly! Perform a final test by creating a test database on the Master MySQL server, then check to ensure it shows up on the Slave MySQL server. Afterwards, feel free to drop that test database on the Master MySQL server.

From here, you should be good to go! Just be sure to set up a monitoring check to ensure that replication is always running and doesn't encounter any errors. A very basic MySQL replication check can be found here:
https://github.com/stephenlang/system-health-check
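
If you prefer to roll your own, a very basic sketch of such a check is below; it only greps SHOW SLAVE STATUS for a stopped thread, and where you send the alert is left as a placeholder:

```shell
# Cron-friendly check: alert if either replication thread is not running
STATUS=$(mysql -e "SHOW SLAVE STATUS\G")
if echo "$STATUS" | grep -qE "Slave_(IO|SQL)_Running: No"; then
    echo "MySQL replication is broken on $(hostname)"
    # Hook your alerting here, e.g. mail or your monitoring agent
fi
```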