Setting up MySQL Master Slave Replication with Percona XtraBackup

This article is part of a series of setting up MySQL replication. As with most things, there is always more than one way to do something. In the case of setting up MySQL replication, or rebuilding it, some options are better than others depending on your use case.

The articles in the series are below:
Setting up MySQL Replication using mysqldump
Setting up MySQL Replication using Percona XtraBackup
Setting up MySQL Replication using Rsync
Setting up MySQL Replication using LVM snapshots

This guide will document how to setup MySQL Master / Slave Replication using Percona XtraBackup. I strongly recommend reviewing the official documentation on Percona’s site at:
How innobackupex works
Official guide for setting up replication with Percona XtraBackup

So why use Percona XtraBackup for setting up or rebuilding MySQL replication? Percona XtraBackup performs hot backups on unmodified versions of MySQL, MariaDB, and Percona Server 5.1 and above.

This basically means that you can generally run the backup on InnoDB tables without the interruption/downtime associated with table locking, like you would normally experience using mysqldump. However, it is critical to note that table locking WILL still occur on MyISAM and other non-InnoDB tables.
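To gauge how much locking to expect before taking the backup, one quick way to list any non-InnoDB tables is a query along these lines:

[root@db01 ~]# mysql
mysql> select table_schema, table_name, engine from information_schema.tables where engine <> 'InnoDB' and table_schema not in ('mysql','information_schema','performance_schema');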

Some important prerequisites before proceeding:

1. It is recommended to use Percona XtraBackup with MySQL, MariaDB, and Percona Server 5.5 and above. Example:

[root@db01 ~]# mysql -V
mysql  Ver 14.14 Distrib 5.5.49, for Linux (x86_64) using readline 5.1

2. Confirm the MySQL client libraries are installed.
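One way to check (library and package names will vary by distro and MySQL flavor, so treat this as a rough example):

[root@db01 ~]# ldconfig -p | grep libmysqlclient
[root@db01 ~]# rpm -qa | grep -iE 'mysql.*libs|mariadb-libs|percona.*shared'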

3. Confirm the MySQL master and slave servers both have the same innodb-log-file-size defined in the my.cnf, and that it is at least 48M in size.

[root@db01 ~]# grep innodb-log-file-size /etc/my.cnf 
innodb-log-file-size = 128M

4. Confirm the master does not have symlinks in the MySQL datadir that could cause the space needed to be underestimated. You can check for this by running:

[root@db01 ~]# du -sch /var/lib/mysql/ $(for i in $(find /var/lib/mysql/ -type l); do readlink $i; done)

5. Confirm there is only one instance of MySQL running on the master. When you have multiple instances of MySQL running, it's easy to back up the wrong data.
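One quick way to confirm only a single mysqld instance is running and listening (just one possible check):

[root@db01 ~]# ps aux | grep -v grep | grep mysqld
[root@db01 ~]# netstat -ntlp | grep mysqld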

6. Confirm the event scheduler is not enabled on the slave. You can check for this by running:

[root@db01 ~]# mysql
mysql> show variables where Variable_name like 'event_scheduler';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| event_scheduler | OFF   |
+-----------------+-------+
1 row in set (0.00 sec)

7. Confirm that no tables are using the MEMORY engine on the master. You can check for this by running:

[root@db01 ~]# mysql
mysql> select concat('ALTER TABLE `',table_schema,'`.`',table_name,'` ENGINE=INNODB;') from information_schema.tables where engine='memory' and table_schema not in ('information_schema','performance_schema');
Empty set (0.03 sec)

8. Confirm that both the master and slave servers have NTP enabled and running, and that both servers are using the same timezone. Example:

[root@db01 ~]# ps waux |grep ntp
ntp       7276  0.0  0.0  30740  1684 ?        Ss   Aug09   0:02 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
[root@db01 ~]# date
Tue Aug 30 05:06:50 UTC 2016

9. Confirm the master and slave have the same binlog_format and expire_logs_days variables in the my.cnf when binary logging is enabled on the slave. Example:

[root@db01 ~]# egrep 'binlog-format|expire-logs-days' /etc/my.cnf 
expire-logs-days = 5
# binlog-format = STATEMENT

Now that the prerequisites are out of the way, proceed with setting up MySQL replication.

Setup the Master MySQL server

Configure the my.cnf as shown below:

log-bin=/var/lib/mysql/db01-bin-log
expire-logs-days=5
server-id=1

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db01 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db01 ~]# service mysql restart

Finally, grant access to the Slave so it has access to communicate with the Master:

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.x.x.x' IDENTIFIED BY 'your_password';

Setup the Slave MySQL server

Configure the my.cnf as shown below:

relay-log=/var/lib/mysql/db02-relay-log
relay-log-space-limit = 4G
read-only=1
server-id=2

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db02 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db02 ~]# service mysql restart

Install Percona XtraBackup

For reference, the rest of this guide will refer to the servers as follows:

db01 - Master MySQL Server
db02 - Slave MySQL Server

On db01 only, install Percona XtraBackup using Percona’s repos:

[root@db01 ~]# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
[root@db01 ~]# yum install percona-xtrabackup-24

On db01 only, confirm it installed XtraBackup version 2.3 or newer:

[root@db01 ~]# xtrabackup --version

On db01 only, to prevent future issues, it is extremely important to remove or disable the yum repo for Percona:

[root@db01 ~]# yum remove percona-release

Setup MySQL replication using Percona XtraBackup

On db02 only, rename the existing MySQL datadir, and create a fresh folder:

[root@db02 ~]# service mysqld stop
[root@db02 ~]# mv /var/lib/mysql /var/lib/mysql.old
[root@db02 ~]# mkdir /var/lib/mysql

On db01 only, create the backup, make the snapshot consistent by applying the log, and rsync it over to db02:

[root@db01 ~]# mkdir /root/perconabackup
[root@db01 ~]# innobackupex /root/perconabackup
[root@db01 ~]# innobackupex --apply-log /root/perconabackup/TIMESTAMP/
[root@db01 ~]# rsync -axvz -e ssh /root/perconabackup/TIMESTAMP/ root@db02:/var/lib/mysql/

On db02, fix the ownership of the datadir, start MySQL, and grab the binlog name and position:

[root@db02 ~]# chown -R mysql:mysql /var/lib/mysql
[root@db02 ~]# service mysqld start
[root@db02 ~]# cat /var/lib/mysql/xtrabackup_binlog_info
db01-bin-log.000001     1456783

On db02, start slave replication:

[root@db02 ~]# mysql
mysql> CHANGE MASTER TO MASTER_HOST='10.x.x.x', MASTER_USER='repl', MASTER_PASSWORD='your_password', MASTER_LOG_FILE='db01-bin-log.000001', MASTER_LOG_POS=1456783;
mysql> start slave;
mysql> show slave status\G
...
        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
...

If those values are the same as what is shown above, then replication is working properly! Perform a final test by creating a test database on the Master MySQL server, then check to ensure it shows up on the Slave MySQL server. Afterwards, feel free to drop that test database on the Master MySQL server.
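As an example of that final test (the database name repl_test below is just a throwaway placeholder):

# On the master:
[root@db01 ~]# mysql -e "create database repl_test;"

# On the slave, confirm it replicated:
[root@db02 ~]# mysql -e "show databases like 'repl_test';"

# Then clean up on the master:
[root@db01 ~]# mysql -e "drop database repl_test;"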

From here, you should be good to go! Just be sure to setup a monitoring check to ensure that replication is always running and doesn’t encounter any errors. A very basic MySQL Replication check can be found here:
https://github.com/stephenlang/system-health-check

VMware disk expansion

There are a number of ways to add more disk space to a VM on VMware. This guide will discuss 5 different ways of handling that, which are:

- Expand existing disk to LVM (Not previously expanded)
- Expand existing disk to LVM (Previously expanded)
- Expand existing disk with LVM not in use (Dangerous)
- Add a new disk into an existing LVM Volume Group
- Add a new disk as a separate mount point

Many VMware solutions set their disk labels to MBR, so for this guide, we’ll be making extensive use of fdisk. If your disk label is set to GPT, please use caution when following this guide!
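If you are not sure which disk label a disk is using, one quick way to check before touching anything (a 'msdos' partition table means MBR, 'gpt' means GPT):

[root@web01 ~]# parted /dev/sda print | grep -i 'partition table'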

As with any disk expansion operations, always be sure you have solid backups in place in case something goes wrong!

Expand existing disk to LVM (Not previously expanded)

Assuming the VM's disk has already been expanded within VMware, you have to rescan the specific SCSI device (sdX) so the new properties are detected. You can do this by:

[root@web01 ~]# echo 1 > /sys/block/sdX/device/rescan
or
[root@web01 ~]# echo 1 > /sys/bus/scsi/drivers/sd/SCSI-ID/block/device/rescan
or
[root@web01 ~]# echo 1 > /sys/bus/scsi/drivers/sd/SCSI-ID/block\:sdX/device/rescan

Whether you added a new disk or expanded an existing one, you can usually detect the change by:

[root@web01 ~]# dmesg|tail
...
sd 2:0:0:0: [sda] 67108864 512-byte logical blocks: (34.3 GB/32.0 GiB)
sd 2:0:0:0: [sda] Cache data unavailable
sd 2:0:0:0: [sda] Assuming drive cache: write through
sda: detected capacity change from 17179869184 to 34359738368

Now you need to confirm the filesystem can actually be grown later. Check for the 'resize_inode' feature flag by running:

[root@web01 ~]# tune2fs -l /dev/vglocal00/lvroot | grep -i "^filesystem features"

Next, check whether the storage has actually increased in size yet by running:

[root@web01 ~]# fdisk -cul /dev/sda
[root@web01 ~]# pvs
[root@web01 ~]# vgs
[root@web01 ~]# lvs
[root@web01 ~]# df -h

Once the root disk has been expanded in VMware, rescan the disk which should now show additional sectors have been added:

[root@web01 ~]# echo 1 > /sys/block/sda/device/rescan
[root@web01 ~]# fdisk -cul /dev/sda

Now we need to add a partition for the new space. As fdisk only allows 4 primary partitions, we are going to use extended partitions so we can create logical partitions to hold the new space:

[root@web01 ~]# fdisk -cu /dev/sda
p
n
e (extended)
3
enter
enter
n
l (logical)
enter
enter
p
w

Now rescan the partitions so the system can detect the new one without rebooting:

[root@web01 ~]# ls /dev/sda*
[root@web01 ~]# partx -l /dev/sda
[root@web01 ~]# partx -v -a /dev/sda # There may be some errors here, ignore.
[root@web01 ~]# partx -l /dev/sda
[root@web01 ~]# ls /dev/sda*

Now set up LVM on the new partition, add it to the existing volume group, and expand the logical volume:

[root@web01 ~]# pvcreate /dev/sda5
[root@web01 ~]# vgs
[root@web01 ~]# vgextend YOUR_VG_NAME /dev/sda5
[root@web01 ~]# pvdisplay /dev/sda5 | grep Free
  Free PE               4095
[root@web01 ~]# lvextend --extents +4095 -n /dev/YOUR_VG_NAME/lv_root

Finally, expand the filesystem doing an online resize:

[root@web01 ~]# resize2fs /dev/YOUR_VG_NAME/lv_root
[root@web01 ~]# df -h |grep root

Expand existing disk to LVM (Previously expanded)

If the VM's disk has already been expanded once before, or it is otherwise already using an extended partition with a single logical partition taking up all the room, then this is the section you want.

Once the root disk has been expanded in VMware, rescan the disk which should now show additional sectors have been added:

# Print out disk information
[root@web01 ~]# fdisk -cul /dev/sda

# Then rescan the device
[root@web01 ~]# echo 1 > /sys/block/sdX/device/rescan
or
[root@web01 ~]# echo 1 > /sys/bus/scsi/drivers/sd/SCSI-ID/block/device/rescan
or
[root@web01 ~]# echo 1 > /sys/bus/scsi/drivers/sd/SCSI-ID/block\:sdX/device/rescan

# Print out disk information to confirm it detected the additional space
[root@web01 ~]# fdisk -cul /dev/sda

Expand the existing extended partition:

[root@web01 ~]# parted /dev/sda
unit s
pri
  Number  Start      End        Size       Type      File system  Flags
   1      2048s      41431039s  41428992s  primary                lvm
   2      41431040s  41943039s  512000s    primary   ext3         boot
   3      41943040s  52428799s  10485760s  extended
   5      41945088s  52428799s  10483712s  logical
resize 3 41943040s -1  (use the extended partition's existing start value shown in the print output, and -1 for the end)
pri
quit

Now partition the new space, set up LVM, extend the volume group and logical volume, and resize the filesystem:

[root@web01 ~]# fdisk -cu /dev/sda
p
n
l (logical)
enter
enter
p
w

[root@web01 ~]# ls -hal /dev/sda*
[root@web01 ~]# partx -l /dev/sda
[root@web01 ~]# partx -v -a /dev/sda # There may be some errors here, ignore.
[root@web01 ~]# partx -l /dev/sda
[root@web01 ~]# ls -hal /dev/sda*

[root@web01 ~]# pvcreate /dev/sda6 # Or whatever the new partition was
[root@web01 ~]# vgs
[root@web01 ~]# vgextend YOUR_VG_NAME /dev/sda6
[root@web01 ~]# pvdisplay /dev/sda6 | grep Free
  Free PE               4607
[root@web01 ~]# lvextend --extents +4607 -n /dev/YOUR_VG_NAME/lv_root
[root@web01 ~]# df -h
[root@web01 ~]# resize2fs /dev/YOUR_VG_NAME/lv_root
[root@web01 ~]# df -h

Expand existing disk with LVM not in use (Dangerous)

This section assumes that LVM was never setup for the disk. Therefore you would need to recreate the partitions to use the new space.

Re-creating partitions is a high-risk operation with the potential for data loss, so make sure you have known good backups you can restore from. And at the very least, snapshot your VM! It also requires a reboot of the VM. Ideally, you should first check to see if an additional disk can simply be mounted to a different mount point instead.

First, list the current partitions:

[root@web01 ~]# fdisk -l

Now within VMware or on the SAN presenting the disk, expand the disk. Once that is done, we need to rescan the volume and confirm the new space:

[root@web01 ~]# echo 1 > /sys/block/sda/device/rescan
[root@web01 ~]# fdisk -l
     Device Boot      Start         End      Blocks   Id  System
  /dev/sda1   *           1          13      104391   83  Linux
  /dev/sda2              14         274     2096482+  83  Linux
  /dev/sda3             275         796     4192965   82  Linux swap / Solaris
  /dev/sda4             797        2610    14570955    5  Extended
  /dev/sda5             797        2610    14570923+  83  Linux

Using the example above, you will notice that the existing extended and logical partitions (4 and 5) end on the same cylinder (2610), so both need to be recreated to use the new space. To help you recover in the event something goes wrong, list out the following information and store it somewhere safe so you can refer to it later:

[root@web01 ~]# fdisk -l /dev/sda
[root@web01 ~]# df -h
[root@web01 ~]# cat /etc/fstab

Now hold on to your butts and expand the disks by deleting the partitions (which *shouldn’t* affect the underlying data), then recreate the partitions with the new sizes:

[root@web01 ~]# fdisk /dev/sda
d
5
d
4
n
e
(Pick the original extended partition starting cylinder; it should be the default, just hit enter)
(Pick the new, much larger ending cylinder; the default uses all space until the end, just hit enter)
n
5 (fdisk should assume logical partition 5 here)
(Pick the original logical partition starting cylinder; it should be the default next cylinder, just hit enter)
(Pick the new, much larger ending cylinder; the default uses all space until the end, just hit enter)
p (Double check everything; ensure extended partition 4 and logical partition 5 have the SAME starting cylinders as before)
w

Now reboot the system so it can use the new space:

[root@web01 ~]# shutdown -r now

Then expand the filesystem:

[root@web01 ~]# df -h | grep sda5
  /dev/sda5              14G  2.6G   11G  21% /
[root@web01 ~]# resize2fs /dev/sda5
[root@web01 ~]# df -h | grep sda5
  /dev/sda5              19G  2.6G   15G  15% /

Add a new disk into an existing LVM Volume Group

This section assumes the new disk is the second disk on the VM, and is enumerated as /dev/sdb. The disk will be added to an existing Volume Group, and we’ll use all the new space on the disk for the volume group and logical volume.

[root@web01 ~]# parted -s -- /dev/sdb mklabel gpt
[root@web01 ~]# parted -s -a optimal -- /dev/sdb mkpart primary 2048s -1
[root@web01 ~]# parted -s -- /dev/sdb align-check optimal 1
[root@web01 ~]# parted /dev/sdb set 1 lvm on
[root@web01 ~]# parted /dev/sdb unit s print
[root@web01 ~]# pvcreate --metadatasize 250k /dev/sdb1
[root@web01 ~]# vgs
[root@web01 ~]# vgextend YOUR_VG_NAME /dev/sdb1
[root@web01 ~]# pvdisplay /dev/sdb1 | grep Free
  Free PE               4095
[root@web01 ~]# lvextend --extents +4095 -n /dev/YOUR_VG_NAME/lv_root
[root@web01 ~]# df -h
[root@web01 ~]# resize2fs /dev/YOUR_VG_NAME/lv_root
[root@web01 ~]# df -h

Add a new disk as a separate mount point

This section assumes the new disk is the second disk on the VM, and is enumerated as /dev/sdb. We are going to use GPT and LVM as a best practice (even if the root disk/partition has the disk label set to MBR or is non-LVM). This example also uses the whole disk in one partition.

# RHEL/CentOS 5:  Scan for new disk, check for existing partitions
# setup gpt, align, and partition:
[root@web01 ~]# for x in /sys/class/scsi_host/host*/scan; do echo "- - -" > ${x}; done
[root@web01 ~]# parted /dev/sdb unit s print
[root@web01 ~]# fdisk -l /dev/sdb
[root@web01 ~]# parted /dev/sdb
mktable gpt
quit
[root@web01 ~]# parted -s -- /dev/sdb mkpart primary 2048s -1
[root@web01 ~]# parted /dev/sdb set 1 lvm on
[root@web01 ~]# parted /dev/sdb unit s print

# RHEL/CentOS 6:  Scan for new disk, check for existing partitions
# setup gpt, align, and partition:
[root@web01 ~]# for x in /sys/class/scsi_host/host*/scan; do echo "- - -" > ${x}; done
[root@web01 ~]# parted /dev/sdb unit s print
[root@web01 ~]# fdisk -l /dev/sdb
[root@web01 ~]# parted -s -- /dev/sdb mklabel gpt
[root@web01 ~]# parted -s -a optimal -- /dev/sdb mkpart primary 2048s -1
[root@web01 ~]# parted -s -- /dev/sdb align-check optimal 1
[root@web01 ~]# parted /dev/sdb set 1 lvm on
[root@web01 ~]# parted /dev/sdb unit s print

Now, on either OS version, set up LVM, format, and mount the volume to /mnt/data:

[root@web01 ~]# VGNAME=vglocal$(date +%Y%m%d)
[root@web01 ~]# LVNAME=lvdata01
[root@web01 ~]# MOUNTPOINT="/mnt/data"
[root@web01 ~]# FILESYSTEM=`mount | egrep "\ \/\ " | awk '{print $5}'`
[root@web01 ~]# pvcreate --metadatasize 250k /dev/sdb1
[root@web01 ~]# vgcreate ${VGNAME} /dev/sdb1
[root@web01 ~]# lvcreate --extents 100%VG -n ${LVNAME} ${VGNAME}
[root@web01 ~]# mkfs.${FILESYSTEM} /dev/mapper/${VGNAME}-${LVNAME}
[root@web01 ~]# mkdir ${MOUNTPOINT}
[root@web01 ~]# echo -e "/dev/mapper/${VGNAME}-${LVNAME}\t${MOUNTPOINT}\t${FILESYSTEM}\tdefaults\t0 0" >> /etc/fstab
[root@web01 ~]# mount -a
[root@web01 ~]# df -hP | grep "${VGNAME}-${LVNAME}"
/dev/mapper/vglocal20160830-lvdata01   16G   44M   15G   1% /mnt/data

LVM basics

Logical Volume Management, or LVM for short, takes entire disks or individual partitions and combines them so the group can act as a single manageable entity.

A few best practices to keep in mind when using LVM:

1. The Volume Group name should represent what kind of storage it exists on, such as vglocal00, vgsan00, vgdas00, vgiscsi00, vgraid00, etc.

2. The Logical Volume name should represent what the LV is being used for where possible, such as nfs00, data00, mysql00, var00, root00, etc. So the end result of a LV for MySQL running on SAN would be: /dev/vgsan00/mysql00

3. Never combine disks coming from different raids. In other words, don’t combine disks from a raid 1 and a raid 5 in the same Volume Group.

4. Never combine disks from different storage mediums, such as local storage and remote (SAN, DAS, iSCSI, etc).

5. Never combine non-partitioned and partitioned devices due to performance issues and general end user confusion.

6. To avoid end user confusion, a partition should be created on the new physical device, as tools like fdisk, parted, and gdisk may not show that data already resides on an unpartitioned physical volume.

Setup new disk

We are going to assume that your new disk is set up on /dev/sdb. First, determine if there is a disk label already set, and check for any existing information. You just want to avoid accidental data loss:

[root@web01 ~]# parted /dev/sdb unit s print | grep Table
[root@web01 ~]# parted /dev/sdb unit s print

Set the disk label on the new disk to GPT:

[root@web01 ~]# parted -s -- /dev/sdb mklabel gpt

Create the first partition, starting it at sector 2048 to follow generally accepted best practices for partition alignment:

[root@web01 ~]# parted -s -a optimal -- /dev/sdb mkpart primary 2048s -1

Now confirm the starting sector of the partition is aligned for the disk:

[root@web01 ~]# parted -s -- /dev/sdb align-check optimal 1

Set the partition to use LVM:

[root@web01 ~]# parted /dev/sdb set 1 lvm on

Now review the disk's newly created partition layout:

[root@web01 ~]# parted /dev/sdb unit s print

Setup the new disk with LVM:

[root@web01 ~]# pvcreate --metadatasize 250k /dev/sdb1

Create the volume group:

[root@web01 ~]# vgcreate vglocal00 /dev/sdb1

And now setup the logical volume to use all available disk space:

[root@web01 ~]# lvcreate -n data00 -l 100%FREE vglocal00

Format the logical volume with your filesystem:

[root@web01 ~]# mkfs.ext4 -v -m2 /dev/vglocal00/data00

And finally, mount the new volume:

[root@web01 ~]# mkdir /mnt/data
[root@web01 ~]# echo "/dev/vglocal00/data00   /mnt/data       ext4    defaults 0 0" >> /etc/fstab
[root@web01 ~]# mount -a
[root@web01 ~]# df -h

Shrink an existing Logical Volume

If you have to shrink an existing volume, there are a few steps that need to be taken. While it's generally safe, you should always ensure that you have known good backups in place before proceeding.

Also note that you cannot shrink an existing volume while it is mounted. So this should be done during a scheduled maintenance window as you will need to stop any services that are using data from that volume.
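Before unmounting, it can help to see which processes still have files open on the volume. This example assumes the /mnt/data mount point used earlier in this article:

[root@web01 ~]# fuser -vm /mnt/data
[root@web01 ~]# lsof /mnt/data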

First, unmount the logical volume:

[root@web01 ~]# umount /mnt/data

Run a file system check on the logical volume:

[root@web01 ~]# e2fsck -f /dev/vglocal00/data00

Now shrink the filesystem. In this example, we're going to shrink it down to 15G in size:

[root@web01 ~]# resize2fs /dev/vglocal00/data00 15G

Then reduce the logical volume to 15G in size:

[root@web01 ~]# lvreduce -L 15G /dev/vglocal00/data00

Finally, mount the filesystem for normal use again:

[root@web01 ~]# mount -a

Shrink the root Logical Volume

As the / logical volume cannot be unmounted while the system is running, you need to boot the server off the distro's CD, or boot into a rescue environment if you're running a Cloud server that supports this. While it's generally safe to resize a volume, you should always ensure that you have known good backups in place before proceeding.

In this example, I’m running my server in VMware, so I can simply boot using a CentOS 6 installation cdrom. When the installation screen comes up, select:

Rescue installed system

When the screen asks if you would like the rescue environment to attempt to find your Linux installation and mount it under the directory /mnt/sysimage, select:

Skip

Now that you're booted into the rescue environment, run the following commands so the system is aware of your LVM setup:

pvscan
vgscan
vgchange -a y
lvscan

In my case, my root logical volume is /dev/vglocal00/lv_root. I want to shrink it from 60G down to 6G. I already confirmed that my data in the / partition does not exceed 6G.

First, run a file system check on the logical volume:

[root@web01 ~]# e2fsck -f /dev/vglocal00/lv_root

Now shrink the filesystem on the root logical volume. In this example, we're going to shrink it down to 6G in size:

[root@web01 ~]# resize2fs /dev/vglocal00/lv_root 6G

Then reduce the logical volume to 6G in size:

[root@web01 ~]# lvreduce -L 6G /dev/vglocal00/lv_root

Finally, eject the CD, reboot the system, and check to ensure your / file system is now at 6G:

[root@web01 ~]# df -h /

Expand an existing Logical Volume

This operation can be done live with LVM2.

First confirm you have enough free space in the volume group by running:

[root@web01 ~]# vgs
[root@web01 ~]# vgdisplay vglocal00

Now let's expand the logical volume 'data00' from 15G to 25G total.

[root@web01 ~]# df -h
[root@web01 ~]# lvextend -L 25G /dev/vglocal00/data00
[root@web01 ~]# resize2fs /dev/vglocal00/data00
[root@web01 ~]# df -h

Add a new Logical Volume to an existing Volume Group

First confirm you have enough free space in the volume group by running:

[root@web01 ~]# vgs
[root@web01 ~]# vgdisplay vglocal00

Now create a new 5G logical volume called mysql00:

[root@web01 ~]# lvcreate -n mysql00 -L 5G vglocal00
[root@web01 ~]# mkfs.ext4 -v -m2 /dev/vglocal00/mysql00

Finally, mount the new logical volume:

[root@web01 ~]# mkdir /mnt/mysql-fs
[root@web01 ~]# echo "/dev/vglocal00/mysql00   /mnt/mysql-fs       ext4    defaults 0 0" >> /etc/fstab
[root@web01 ~]# mount -a
[root@web01 ~]# df -h

Remove a Logical Volume

First, unmount the volume:

[root@web01 ~]# umount /mnt/mysql-fs

Then remove the volume:

[root@web01 ~]# lvremove /dev/vglocal00/mysql00 
Do you really want to remove active logical volume mysql00? [y/n]: y
  Logical volume "mysql00" successfully removed

WordPress – configuration and troubleshooting

This article will contain a number of tips and tricks when working with WordPress.

Working with permalinks

When changing permalinks around in wp-admin, WordPress will warn you if it is unable to make the changes directly to your .htaccess file. This happens when:

- The main .htaccess file for the site is not writable by the web server user
- The Apache vhost setting, AllowOverride, is not set to 'All'
- Apache mod_rewrite may not be enabled
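A few quick ways to check for each of those conditions (the paths shown assume the example.com vhost layout used elsewhere in this article):

[root@web01 ~]# ls -l /var/www/vhosts/example.com/.htaccess
[root@web01 ~]# grep -i AllowOverride /etc/httpd/vhost.d/example.com.conf
[root@web01 ~]# httpd -M 2>/dev/null | grep rewrite_module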

If wp-admin is unable to write the changes into the .htaccess, you can do this manually by:

[root@web01 ~]# vim /var/www/vhosts/example.com/.htaccess
...
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
...

Password protecting wp-admin

As people often use weak passwords for their WordPress users, it is recommended to password protect the entire wp-admin login portal with a strong password as shown below:

First create the htaccess username and password:

[root@web01 ~]# htpasswd -c /etc/httpd/conf.d/example.com-wp-admin-htpasswd your_username

Then update the .htaccess file within the wp-admin directory by:

[root@web01 ~]# vim /var/www/vhosts/example.com/wp-admin/.htaccess
...
<Files admin-ajax.php>
    Order allow,deny
    Allow from all
    Satisfy any
</Files>
AuthType Basic
AuthName "Restricted"
AuthUserFile /etc/httpd/conf.d/example.com-wp-admin-htpasswd
Require valid-user
...

Disable PHP execution in uploads directory

When a site becomes compromised, malware is often uploaded that can be executed easily. Below is a common example for disabling PHP execution within the wp-content/uploads directory to help minimize the impact of a compromise:

[root@web01 ~]# vim /var/www/vhosts/example.com/wp-content/uploads/.htaccess
...
# Prevent PHP execution
<Files *.php>
deny from all
</Files>
...

Blocking xmlrpc.php attacks

XML-RPC is often subjected to brute force attacks within WordPress. These attempts can create severe resource contention issues, causing performance issues for the site.

Before blocking this blindly, be aware that plugins and clients such as Jetpack and the WordPress desktop and mobile apps need XML-RPC enabled, so use caution! Jetpack can also mitigate these brute force attacks if the option is enabled within the plugin.

First determine if xmlrpc.php is being brute forced by checking your site’s access log as shown below. Generally hundreds or thousands of these entries would be found within a short period of time.

[root@web01 ~]# tail /var/log/httpd/example.com-access.log
....
xxx.xxx.xxx.xxx - - [19/May/2016:15:45:02 +0000] "POST /xmlrpc.php HTTP/1.1" 200 247 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1"
xxx.xxx.xxx.xxx - - [19/May/2016:15:45:02 +0000] "POST /xmlrpc.php HTTP/1.1" 200 247 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1"
xxx.xxx.xxx.xxx - - [19/May/2016:15:45:03 +0000] "POST /xmlrpc.php HTTP/1.1" 200 247 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1"
...
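To get a rough count of how many of these requests each IP is sending, one way of many:

[root@web01 ~]# grep "POST /xmlrpc.php" /var/log/httpd/example.com-access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head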

The brute force attacks against xmlrpc.php can be blocked by adding the following in the site’s .htaccess file:

[root@web01 ~]# vim /var/www/vhosts/example.com/.htaccess
...
# Block WordPress xmlrpc.php requests
<Files xmlrpc.php>
order allow,deny
deny from all
</Files>
...

Force SSL on wp-admin

To force all logins for wp-admin to go over SSL, update the site’s wp-config.php with the options below. Just be sure to put this before the line “/* That’s all, stop editing! Happy blogging. */”:

[root@web01 ~]# vim /var/www/vhosts/example.com/wp-config.php
...
define('FORCE_SSL_LOGIN', true);
define('FORCE_SSL_ADMIN', true);
/* That's all, stop editing! Happy blogging. */
...

Make WordPress aware of SSL termination on the load balancer

When SSL is terminated on the load balancer, or through something like CloudFlare, you can sometimes end up with a redirect loop on the site. Since the load balancer, and not the server, is handling SSL, WordPress needs to be told that the request really did come in over SSL. This can be corrected by adding the following near the top of the site's .htaccess file:

[root@web01 ~]# vim /var/www/vhosts/example.com/.htaccess
...
SetEnvIf X-Forwarded-Proto https HTTPS=on
...

Search for outdated versions of WordPress

A primary reason why WordPress sites get compromised is due to outdated versions of the software. If a server has dozens of WordPress sites, it can be time consuming to determine what sites are running what versions. Shown below is a quick method of obtaining the versions of WordPress on the server:

[root@web01 ~]# yum install mlocate
[root@web01 ~]# updatedb
[root@web01 ~]# locate wp-includes/version.php | while read x; do echo -n "$x : WordPress Version " && egrep '^\s*\$wp_version\s*=' "$x" | cut -d\' -f2; done | column -t -s :
/var/www/vhosts/example.com/wp-includes/version.php         WordPress Version 4.3.1
/var/www/vhosts/example2.com/wp-includes/version.php        WordPress Version 4.3.3
/var/www/vhosts/example3.com/wp-includes/version.php        WordPress Version 3.9.4
/var/www/vhosts/example4.com/wp-includes/version.php        WordPress Version 3.7.1

Compare the versions returned against the following site to see how old the version is:
https://codex.wordpress.org/WordPress_Versions

Error establishing a database connection

This error usually means one of three things:

- The database credentials within wp-config.php may be wrong
- The database server is busy and cannot accept additional connections
- The database itself may be corrupted

To ensure the database credentials are correct, test them by doing the following:

[root@web01 ~]# cat /var/www/vhosts/example.com/wp-config.php | grep -iE 'DB_USER|DB_PASSWORD|DB_HOST|DB_NAME'
define('DB_NAME', 'example');
define('DB_USER', 'wordpress');
define('DB_PASSWORD', 'mysecurepassword');
define('DB_HOST', 'localhost');
[root@web01 ~]# mysql -h localhost -uwordpress -pmysecurepassword
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| example            |
+--------------------+
2 rows in set (0.00 sec)

Confirm that Apache's MaxClients does not exceed the max_connections variable within MySQL. While this example is specific to CentOS 6, it can easily be adapted for any distro. To check these variables, run the following:

[root@web01 ~]# cat /etc/httpd/conf/httpd.conf |grep MaxClients |grep -v \# | head -1
    MaxClients            63
[root@web01 ~]# mysql -e 'show variables where Variable_name like "max_connections";'
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 65    |
+-----------------+-------+

Check for database corruption by adding the following before the line ‘/* That’s all, stop editing! Happy blogging. */’ in the wp-config.php:

[root@web01 ~]# vim /var/www/vhosts/example.com/wp-config.php
...
define( 'WP_ALLOW_REPAIR', true );
...

From there, do the following to repair the corruption:

- Point your browser to the following URL, replacing the domain accordingly:  http://www.example.com/wp-admin/maint/repair.php
- Select 'Repair database'
- Once done, remove the WP_ALLOW_REPAIR line from the wp-config.php

Reset the WordPress Admin password

If you find yourself locked out of wp-admin, you can restore access to the portal by updating the active theme's functions.php file right after the opening comments as shown below. Just be sure to remove this code immediately after the password is updated:

[root@web01 ~]# vim /var/www/vhosts/example.com/wp-content/themes/twentyfifteen/functions.php
<?php
wp_set_password( 'your_secure_password_here', 1 );
...

Another way of resetting the admin password is to update MySQL directly by:

[root@web01 ~]# mysql
mysql> use your_wordpress_db_name;
mysql> UPDATE wp_users SET user_pass=MD5('your_new_password_here') WHERE user_login='admin';

Find number of SQL queries executed on each page load

To quickly determine how many queries a page is making to the database, add the following to the active theme’s footer.php near the top:

[root@web01 ~]# vim /var/www/vhosts/example.com/wp-content/themes/twentyfifteen/footer.php
...
<?php if ( current_user_can( 'manage_options' ) ) {
echo $wpdb->num_queries . " SQL queries performed.";
} else {
  // Comment out the below line to hide the query count from the public
  echo $wpdb->num_queries . " SQL queries performed.";
}?>

This will display the query count at the bottom of every page. The public will be able to see this, so do not leave it in your footer.php longer than needed. In the example above, you can comment out the echo on the second to last line so that only a user logged into WordPress with the manage_options capability will see the results.

Enable the WordPress debug log

WordPress has the ability to log all errors, notices and warnings to a file called debug.log. This file is placed by default in wp-content/debug.log. This will hide the errors from showing up on the production site, and simply allow the developers to review them at their leisure.

To enable this, first create the log file and allow it to be writable by the web server user, then insert the following before the line ‘/* That’s all, stop editing! Happy blogging. */’ in the wp-config.php file as shown below:

[root@web01 ~]# touch /var/www/vhosts/example.com/wp-content/debug.log
[root@web01 ~]# chown apache:apache /var/www/vhosts/example.com/wp-content/debug.log
[root@web01 ~]# vim /var/www/vhosts/example.com/wp-config.php
// Enable WP_DEBUG mode
define( 'WP_DEBUG', true );

// Enable Debug logging to the /wp-content/debug.log file
define( 'WP_DEBUG_LOG', true );

// Disable display of errors and warnings 
define( 'WP_DEBUG_DISPLAY', false );
@ini_set( 'display_errors', 0 );
...
/* That's all, stop editing! Happy blogging. */

Deactivating WordPress plugins

This is useful when trying to determine which plugins are causing memory leaks or overall performance issues. This should only be done after creating a backup of the database and also manually backing up the wp-content/plugins directory so a rollback option exists just in case.

Keep in mind this may break the site, since you could be disabling plugins that the site requires to work.

If you prefer to disable the plugins one by one until the problem plugin is identified:

[root@web01 ~]# cd /var/www/vhosts/example.com/wp-content/plugins
[root@web01 ~]# mv akismet akismet.disabled

To disable all the plugins at once, then re-enable them one by one, testing the site each time to see if the issue manifests, do the following:

[root@web01 ~]# mkdir /var/www/vhosts/example.com/wp-content/plugins.disabled
[root@web01 ~]# mv /var/www/vhosts/example.com/wp-content/plugins/* /var/www/vhosts/example.com/wp-content/plugins.disabled
[root@web01 ~]# cd /var/www/vhosts/example.com/wp-content/plugins
[root@web01 ~]# mv ../plugins.disabled/akismet .
[root@web01 ~]# mv ../plugins.disabled/buddypress .
etc

WordPress setup on CentOS 6

Setting up WordPress is a pretty common task. However all too often I see people installing WordPress, and setting the ownership to ‘apache:apache’ recursively. While this makes life easier for the administrator, it opens up a host of security issues.

Taken directly from WordPress’s best practice guide on permissions:

Typically, all files should be owned by your user (ftp) account on your web server, and should be writable by that account. On shared hosts, files should never be owned by the web server process itself (sometimes this is www, or apache, or nobody user).

Most people know that using FTP is bad. However, if you plan on using the wp-admin portal for media uploads, plugin updates, and core updates, you MUST have an FTP server installed and running. Using the PECL SSH2 library looks like it would work in theory, but in reality it doesn't; or at least, I haven't found a way to make it work for the wp-admin portal without permission errors for this, that, and everything in between, since it needs weaker permissions. So while your users can use SSH/SCP to upload content via the command line, if they choose to do most of the WordPress tasks through wp-admin like most people would, use the FTP option from within /wp-admin.

This guide is going to show how you can set up WordPress properly according to the note above from WordPress's best practice guide on permissions. It assumes that you already have a working LAMP stack installed.

FTP Server Setup

First, install an FTP server called vsftpd:

[root@web01 ~]# yum install vsftpd
[root@web01 ~]# chkconfig vsftpd on

Now disable anonymous logins, since vsftpd enables them by default for some reason:

[root@web01 ~]# vim /etc/vsftpd/vsftpd.conf
...
anonymous_enable=NO
...
[root@web01 ~]# service vsftpd restart

Then confirm you have a firewall in place with a default-to-deny policy. In the example below, I am only allowing in ports 80 and 443 from the world, SSH is restricted to my IP address, and everything else is blocked, including that FTP server.

[root@web01 ~]# vim /etc/sysconfig/iptables
# Generated by iptables-save v1.4.7 on Fri Nov 13 19:24:15 2015
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2:328]
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT 
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT 
-A INPUT -i eth0 -s xx.xx.xx.xx/32 -p tcp -m tcp --dport 22 -m comment --comment "Allow inbound SSH from remote ip" -j ACCEPT
-A INPUT -p icmp -j ACCEPT 
-A INPUT -i lo -j ACCEPT 
-A INPUT -j REJECT --reject-with icmp-host-prohibited 
-A FORWARD -j REJECT --reject-with icmp-host-prohibited 
COMMIT
# Completed on Fri Nov 13 19:24:15 2015

Database Setup

Create a database for your new WordPress site by:

[root@web01 ~]# mysql
mysql> create database your_database;

Now grant access for that database to a user:

[root@web01 ~]# mysql
mysql> grant all on your_database.* to 'your_db_user'@'localhost' identified by 'your_secure_db_password';
mysql> flush privileges;
mysql> quit

Apache Setup

First, create a FTP/SCP user:

[root@web01 ~]# mkdir -p /var/www/vhosts/example.com
[root@web01 ~]# chmod 755 /var/www/vhosts/example.com
[root@web01 ~]# useradd -d /var/www/vhosts/example.com example_site_user
[root@web01 ~]# passwd example_site_user

Now setup the Apache vhost:

[root@web01 ~]# vim /etc/httpd/vhost.d/example.com.conf
<VirtualHost *:80>
        ServerName example.com
        ServerAlias www.example.com
        #### This is where you put your files for that domain
        DocumentRoot /var/www/vhosts/example.com

        ### Enable this if you are using a SSL terminated Load Balancer
        SetEnvIf X-Forwarded-Proto https HTTPS=on

	#RewriteEngine On
	#RewriteCond %{HTTP_HOST} ^example.com
	#RewriteRule ^(.*)$ http://www.example.com [R=301,L]

        <Directory /var/www/vhosts/example.com>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride All
		Order deny,allow
		Allow from all
        </Directory>
        CustomLog /var/log/httpd/example.com-access.log combined
        ErrorLog /var/log/httpd/example.com-error.log
        # New Relic PHP override
        <IfModule php5_module>
               php_value newrelic.appname example.com
        </IfModule>
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
</VirtualHost>


##
# To install the SSL certificate, please place the certificates in the following files:
# >> SSLCertificateFile    /etc/pki/tls/certs/example.com.crt
# >> SSLCertificateKeyFile    /etc/pki/tls/private/example.com.key
# >> SSLCACertificateFile    /etc/pki/tls/certs/example.com.ca.crt
#
# After these files have been created, and ONLY AFTER, then run this and restart Apache:
#
# To remove these comments and use the virtual host, use the following:
# VI   -  :39,$ s/^#//g
# RedHat Bash -  sed -i '39,$ s/^#//g' /etc/httpd/vhost.d/example.com.conf && service httpd reload
# Debian Bash -  sed -i '39,$ s/^#//g' /etc/apache2/sites-available/example.com && service apache2 reload
##

# <VirtualHost _default_:443>
#        ServerName example.com
#        ServerAlias www.example.com
#        DocumentRoot /var/www/vhosts/example.com
#        <Directory /var/www/vhosts/example.com>
#                Options -Indexes +FollowSymLinks -MultiViews
#                AllowOverride All
#        </Directory>
#
#        CustomLog /var/log/httpd/example.com-ssl-access.log combined
#        ErrorLog /var/log/httpd/example.com-ssl-error.log
#
#        # Possible values include: debug, info, notice, warn, error, crit,
#        # alert, emerg.
#        LogLevel warn
#
#        SSLEngine on
#        SSLCertificateFile    /etc/pki/tls/certs/2016-example.com.crt
#        SSLCertificateKeyFile /etc/pki/tls/private/2016-example.com.key
#        SSLCACertificateFile /etc/pki/tls/certs/2016-example.com.ca.crt
#
#        <IfModule php5_module>
#                php_value newrelic.appname example.com
#        </IfModule>
#        <FilesMatch "\.(cgi|shtml|phtml|php)$">
#                SSLOptions +StdEnvVars
#        </FilesMatch>
#
#        BrowserMatch "MSIE [2-6]" \
#                nokeepalive ssl-unclean-shutdown \
#                downgrade-1.0 force-response-1.0
#        BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
#</VirtualHost>

Then restart Apache to apply the changes:

[root@web01 ~]# service httpd restart

WordPress Setup

Download a copy of WordPress, uncompress, and move the files into place by:

[root@web01 ~]# cd /var/www/vhosts/example.com
[root@web01 ~]# wget http://wordpress.org/latest.tar.gz && tar -xzf latest.tar.gz
[root@web01 ~]# mv wordpress/* ./ && rmdir ./wordpress && rm -f latest.tar.gz

Update the files and directories ownership to lock it down accordingly:

[root@web01 ~]# chown -R example_site_user:example_site_user /var/www/vhosts/example.com

Then open up a few files so wp-admin can manage the .htaccess, and so it can install plugins, upload media, and use the cache if you choose to configure it:

[root@web01 ~]# mkdir /var/www/vhosts/example.com/wp-content/uploads
[root@web01 ~]# mkdir /var/www/vhosts/example.com/wp-content/cache
[root@web01 ~]# touch /var/www/vhosts/example.com/.htaccess
[root@web01 ~]# chown apache:apache /var/www/vhosts/example.com/wp-content/uploads
[root@web01 ~]# chown apache:apache /var/www/vhosts/example.com/wp-content/cache
[root@web01 ~]# chown apache:apache /var/www/vhosts/example.com/.htaccess

And that's it! Once you have the domain set up in DNS, you should be able to navigate to the domain, and follow the WordPress installation wizard to complete the setup. Afterwards, log into wp-admin, and try to update a plugin, or install a new one. When it prompts you for the FTP information, be sure to use:

Hostname:  localhost
FTP Username:  example_site_user
FTP Password:  example_site_user_pw
Connection Type:  FTP
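If you want to sanity check those FTP credentials from the server itself before entering them in wp-admin, one quick test is below (curl will prompt for the password):

[root@web01 ~]# curl --list-only -u example_site_user ftp://localhost/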

Magento CE 1.9.x setup on CentOS 6

Setting up a load balanced Magento solution can be a bit daunting. Anyone that has worked with Magento in the past knows that getting the architecture right the first time around is key. The right architecture will vary from solution to solution depending on the needs of the site.

The scalable solution outlined in this document will be built on the Rackspace Cloud, and will have the following server components:

- Domain:
www.example.com

- Load Balancer:  
lb01-http.example.com (With SSL termination)

- Servers
db01.example.com 64.123.123.1 / 192.168.1.1 (Master DB Server)
web01.example.com 64.123.123.3 / 192.168.1.3 (Master Web Server)
web02.example.com 64.123.123.4 / 192.168.1.4 (Slave Web Server)
web03.example.com 64.123.123.5 / 192.168.1.5 (Slave Web Server)

And our setup will utilize the following software to create a scalable and high performing solution:

- Apache 2.2.x with PHP 5.5
- NFS installed on web01 to export the directory /media to the slave web servers
- Lsyncd installed on web01 to sync the documentroot to the slaves, excluding /media, /var, and /.git
- The 'Admin Base URL' in Magento set to http://admin.example.com, pointing to the master web server, web01
- MySQL 5.6 installed on db01
- Redis 3.x installed on db01 to handle both sessions and provide a centralized cache for all web servers

A special note about web servers: Don’t drive yourself nuts trying to determine which is faster, nginx or Apache. The real performance bottleneck is PHP, and it can be mitigated with a properly configured solution and a Full Page Cache (FPC) like Turpentine. I prefer Apache as it is the least complicated one to support, in my opinion.

Requirements

Magento CE can be very CPU intensive, even for small to mid size sites. Therefore, you need fast servers with many CPUs available. When using the Rackspace Cloud, the minimum server size for lower traffic sites would be 4G General Purpose servers. However, as Magento is very CPU intensive, I strongly recommend using 8G General Purpose servers.

The hard requirements for Magento CE 1.9, as posted in Magento’s documentation, are below:

Apache 2.x
MySQL 5.6 (Oracle or Percona)
PHP 5.4 or PHP 5.5
Redis or Memcached (For session or cache storage)

The MySQL versions should be noted as Magento does not appear to explicitly state support for MariaDB at this time. They also do not explicitly state support for PHP 5.6. So deviate from these requirements at your own risk!

As per the Magento documentation, if you use MySQL database replication, Magento does not support MySQL statement-based replication. Make sure you use only row-based replication.

[root@db01 ~]# vim /etc/my.cnf
...
binlog-format = ROW
...
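Once MySQL has been restarted with that setting, you can confirm the running value:

[root@db01 ~]# mysql -e "show variables like 'binlog_format';"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | ROW   |
+---------------+-------+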

Web server prep

Servers involved: All web servers

The prerequisites outlined in here can be found in Magento’s documentation. This guide will assume that you already have Apache and PHP installed on your web servers.

First, apply any needed updates to CentOS 6:

yum update

Now install the required PHP modules for Magento.

# php 5.6 (Unsupported PHP version by Magento!)
yum -y install php56u-xml php56u-mcrypt php56u-gd php56u-soap php56u-devel php56u-mysql php56u-mbstring

# php 5.5
yum -y install php55u-xml php55u-soap php55u-mcrypt php55u-gd php55u-devel php55u-mysql php55u-mbstring

# php 5.4
yum -y install php-mcrypt gd gd-devel php-gd php-mysql php-mbstring

Increase the PHP memory limit:

vim /etc/php.ini
...
memory_limit = 512M
...
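To spot check that the required extensions actually loaded for your PHP version (a quick, non-exhaustive check):

php -m | egrep -i 'mcrypt|gd|soap|mbstring|mysql'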

Apache Setup

Servers involved: All web servers

Setting up the Apache vhost for the Magento site is pretty straightforward. Below is a verbose version of the Apache vhost file needed.

First setup the documentroot:

mkdir -p /var/www/vhosts/example.com

Now setup the Apache vhost:

[root@web01 ~]# vim /etc/httpd/vhost.d/example.com.conf
<VirtualHost *:80>
        ServerName example.com
        ServerAlias www.example.com
        #### This is where you put your files for that domain
        DocumentRoot /var/www/vhosts/example.com

        ### Enable this if you are using a SSL terminated Load Balancer
        SetEnvIf X-Forwarded-Proto https HTTPS=on

	#RewriteEngine On
	#RewriteCond %{HTTP_HOST} ^example.com
	#RewriteRule ^(.*)$ http://www.example.com [R=301,L]

        <Directory /var/www/vhosts/example.com>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride All
		Order deny,allow
		Allow from all
        </Directory>
        CustomLog /var/log/httpd/example.com-access.log combined
        ErrorLog /var/log/httpd/example.com-error.log
        # New Relic PHP override
        <IfModule php5_module>
               php_value newrelic.appname example.com
        </IfModule>
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
</VirtualHost>


##
# To install the SSL certificate, please place the certificates in the following files:
# >> SSLCertificateFile    /etc/pki/tls/certs/example.com.crt
# >> SSLCertificateKeyFile    /etc/pki/tls/private/example.com.key
# >> SSLCACertificateFile    /etc/pki/tls/certs/example.com.ca.crt
#
# After these files have been created, and ONLY AFTER, then run this and restart Apache:
#
# To remove these comments and use the virtual host, use the following:
# VI   -  :39,$ s/^#//g
# RedHat Bash -  sed -i '39,$ s/^#//g' /etc/httpd/vhost.d/example.com.conf && service httpd reload
# Debian Bash -  sed -i '39,$ s/^#//g' /etc/apache2/sites-available/example.com && service apache2 reload
##

# <VirtualHost _default_:443>
#        ServerName example.com
#        ServerAlias www.example.com
#        DocumentRoot /var/www/vhosts/example.com
#        <Directory /var/www/vhosts/example.com>
#                Options -Indexes +FollowSymLinks -MultiViews
#                AllowOverride All
#        </Directory>
#
#        CustomLog /var/log/httpd/example.com-ssl-access.log combined
#        ErrorLog /var/log/httpd/example.com-ssl-error.log
#
#        # Possible values include: debug, info, notice, warn, error, crit,
#        # alert, emerg.
#        LogLevel warn
#
#        SSLEngine on
#        SSLCertificateFile    /etc/pki/tls/certs/2016-example.com.crt
#        SSLCertificateKeyFile /etc/pki/tls/private/2016-example.com.key
#        SSLCACertificateFile /etc/pki/tls/certs/2016-example.com.ca.crt
#
#        <IfModule php5_module>
#                php_value newrelic.appname example.com
#        </IfModule>
#        <FilesMatch "\.(cgi|shtml|phtml|php)$">
#                SSLOptions +StdEnvVars
#        </FilesMatch>
#
#        BrowserMatch "MSIE [2-6]" \
#                nokeepalive ssl-unclean-shutdown \
#                downgrade-1.0 force-response-1.0
#        BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
#</VirtualHost>

Then restart Apache to apply the changes:

[root@web01 ~]# service httpd restart

Magento Installation

Servers involved: web01 only

Download a copy of Magento from their site, and upload it to the /root directory. Once done, move it into place by:

[root@web01 ~]# cd /root
[root@web01 ~]# tar -xvf magento-1*.tar
[root@web01 ~]# cd /root/magento
[root@web01 ~]# cp -a ./* /var/www/vhosts/example.com
[root@web01 ~]# cp -a ./.htaccess /var/www/vhosts/example.com
[root@web01 ~]# chown -R apache:apache /var/www/vhosts/example.com
[root@web01 ~]# crontab -e
*/5 * * * * /bin/bash /var/www/vhosts/example.com/cron.sh

A special note: The cron.sh script only needs to run on the master (admin) web server.

Now browse to your site’s URL, and complete the post-installation wizard. When it asks where you want to store sessions, be sure to specify ‘database’.

Magento admin separation

Servers involved: web01 only

Specifying a master web server for all admin operations is critical for an application like Magento. This allows you to create a subdomain such as ‘http://admin.example.com’, from which all your administrative or backend functions can be run. This helps prevent the age-old issue of images and other content uploaded through Magento accidentally landing on a slave web server.

Some prefer to do this through Varnish. However, in my experience, while Varnish is great for caching, it is a complete nightmare for admin redirection. So this guide will not be using Varnish. Instead, we’ll use the functionality already provided to us in Magento.

Setting up an admin base URL in Magento CE 1.9 is very simple. First, you need to create an "A" record in DNS to point 'admin.example.com' to your master web server. If you're using BIND, the entry would look like this:

admin.example.com. IN A 64.123.123.3
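Once the record is in place, you can confirm it resolves to web01's public IP (assuming dig is available, for example from the bind-utils package):

[root@web01 ~]# dig +short admin.example.com
64.123.123.3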

On web01 only, update Apache’s vhost configuration for the site to include a server alias for the new subdomain, admin.example.com:

[root@web01 ~]# vim /etc/httpd/vhost.d/example.com.conf
<VirtualHost *:80>
...
ServerAlias www.example.com admin.example.com
...
</VirtualHost>

<VirtualHost _default_:443>
...
ServerAlias www.example.com admin.example.com
...
</VirtualHost>

Then restart Apache to apply the change:

[root@web01 ~]# service httpd restart

Finally, log into Magento’s backend control panel, and update the admin base url by:

System -> Configuration -> Admin -> Admin Base URL
Use Custom Admin URL:  Yes
Custom Admin URL:  http://admin.example.com/
Use Custom Admin Path:  No

Lsyncd Setup

Servers involved: web01 only

To ensure that any code changes made on web01 get pushed down to the slave web servers, we are going to install Lsyncd on web01.

On web01 only, install Lsyncd by:

[root@web01 ~]# yum -y install lsyncd
[root@web01 ~]# chkconfig lsyncd on

Now setup the lsyncd configuration by:

[root@web01 ~]# vim /etc/lsyncd.conf
 
settings {
   logfile = "/var/log/lsyncd/lsyncd.log",
   statusFile = "/var/log/lsyncd/lsyncd-status.log",
   statusInterval = 20
}
servers = {
 "192.168.1.4",
 "192.168.1.5"
}
 
for _, server in ipairs(servers) do
sync {
    default.rsyncssh,
    source="/var/www/",
    host=server,
    targetdir="/var/www/",
    excludeFrom="/etc/lsyncd-excludes.txt",
    rsync = {
        compress = true,
        archive = true,
        verbose = true,
        rsh = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
    }
}
end

Setup the required excludes for Lsyncd:

[root@web01 ~]# vim /etc/lsyncd-excludes.txt
vhosts/example.com/media
vhosts/example.com/var
vhosts/example.com/.git

Finally, start the service

[root@web01 ~]# chkconfig lsyncd on
[root@web01 ~]# service lsyncd start
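To verify files are actually syncing, one simple test is to touch a throwaway file on web01 and check for it on a slave a few seconds later (lsyncd batches changes, so allow a short delay; lsyncd-test is just a placeholder name):

[root@web01 ~]# touch /var/www/vhosts/example.com/lsyncd-test
[root@web01 ~]# tail /var/log/lsyncd/lsyncd.log
[root@web01 ~]# ssh 192.168.1.4 ls -l /var/www/vhosts/example.com/lsyncd-test
[root@web01 ~]# rm /var/www/vhosts/example.com/lsyncd-test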

NFS Setup

Servers involved: NFS server (web01) / NFS Client (all slave web servers)

For load balanced Magento installations, Magento recommends that the /media directory is NFS mounted. On web01, setup NFS by:

[root@web01 ~]# yum install rpcbind nfs-utils -y

Now perform some basic tuning for NFS since the defaults are a bit outdated. Uncomment or add the following variables in /etc/sysconfig/nfs

[root@web01 ~]# vim /etc/sysconfig/nfs
...
RPCNFSDCOUNT=64
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
STATD_OUTGOING_PORT=2020
...

Open the firewall to allow your private network access to the NFS services. You may have to adjust your rules as my private network resides on eth2. Do not allow this on the public interface without adjusting the source IP’s accordingly!

[root@web01 ~]# vim /etc/sysconfig/iptables
...
-A INPUT -i eth2 -s 192.168.1.0/24 -j ACCEPT
...

[root@web01 ~]# service iptables restart

Export the directory to be shared, along with its permissions, in /etc/exports:

[root@web01 ~]# vim /etc/exports
...
/var/www/vhosts/example.com/media 192.168.1.0/24(rw,no_root_squash)
...

Now start the services, and enable them to start at boot time:

[root@web01 ~]# service rpcbind start; chkconfig rpcbind on
[root@web01 ~]# service nfslock start; chkconfig nfslock on
[root@web01 ~]# service nfs start; chkconfig nfs on
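Before moving on to the clients, you can optionally confirm the export is active and visible:

[root@web01 ~]# exportfs -v
[root@web01 ~]# showmount -e 192.168.1.3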

Now that the NFS server is ready, the NFS clients need to be set up to connect. This MUST be performed on each slave server. Install the required packages on the NFS clients by:

[root@web02 ~]# yum install rpcbind nfs-utils -y

Now start the services, and enable them to start at boot time.

[root@web02 ~]# service rpcbind start; chkconfig rpcbind on
[root@web02 ~]# service nfslock start; chkconfig nfslock on
[root@web02 ~]# chkconfig netfs on

Configure the mount point in /etc/fstab:

[root@web02 ~]# vim /etc/fstab

192.168.1.3:/var/www/vhosts/example.com/media  /var/www/vhosts/example.com/media  nfs  vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,noatime  0  0

Now create the placeholder directory on the client, mount, and verify it works:

[root@web02 ~]# mkdir /var/www/vhosts/example.com/media
[root@web02 ~]# mount -a
[root@web02 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       14G  1.8G   11G  15% /
tmpfs                 939M     0  939M   0% /dev/shm
/dev/sda1             477M   74M  378M  17% /boot
192.168.1.3:/var/www/vhosts/example.com/media   14G  1.9G   11G  15% /var/www/vhosts/example.com/media
[root@web02 ~]#
[root@web02 ~]# grep media /proc/mounts 
192.168.1.3:/var/www/vhosts/example.com/media /var/www/vhosts/example.com/media nfs rw,noatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.3,mountvers=3,mountport=892,mountproto=tcp,local_lock=none,addr=192.168.1.3 0 0
[root@web02 ~]#
[root@web02 ~]# touch /var/www/vhosts/example.com/media/test-file
[root@web02 ~]# ls -al /var/www/vhosts/example.com/media/test-file 
-rw-r--r-- 1 root root 0 May 5 20:23 /var/www/vhosts/example.com/media/test-file

Be sure to set up the NFS clients on each slave web server.

Redis Setup

Managing Magento’s cache in a load balanced setup can be a bit of a pain, since you would have to log into each server and flush the contents of var/cache whenever you want to empty the cache. This is where a centralized Redis server comes into play. Magento’s documentation recommends using Redis for both session management and caching. As Magento CE 1.9 supports Redis out of the box, it’s pretty simple to set up:

On db01, install Redis:

[root@db01 ~]# yum install redis30u
[root@db01 ~]# chkconfig redis on

Now configure Redis to listen on our local network, set the memory limits, and disable saving to disk since we want everything served out of memory:

[root@db01 ~]# vim /etc/redis.conf
...
bind 192.168.1.1
maxmemory 1500mb
maxmemory-policy allkeys-lru
# save 900 1
# save 300 10
# save 60 10000
...

Then start the service:

[root@db01 ~]# service redis restart
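
To confirm Redis is up and listening on the private network, a quick ping against the bind address from above should return PONG:

[root@db01 ~]# redis-cli -h 192.168.1.1 ping
PONG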

Now on each web server, install the Redis PHP module:

# PHP 5.4
[root@web01 ~]# yum install php54-pecl-redis

# PHP 5.5
[root@web01 ~]# yum install php55u-pecl-redis

# PHP 5.6
[root@web01 ~]# yum install php56u-pecl-redis

Then restart Apache:

[root@web01 ~]# service httpd restart
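
You can quickly verify the module was picked up before moving on:

[root@web01 ~]# php -m | grep redis
redis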

Finally, update Magento’s configuration to make use of Redis. The Redis specific settings are the <redis_session> and <cache> blocks shown below. This only needs to be performed on web01:

[root@web01 ~]# cd /var/www/vhosts/example.com/app/etc
[root@web01 ~]# cp local.xml local.xml.orig
[root@web01 ~]# vim local.xml
...
<config>
    <global>
        <install>
            <date>
        </install>
        <crypt>
            <key>
        </crypt>
        <disable_local_modules>false</disable_local_modules>
        <resources>
            <db>
                <table_prefix><![CDATA[]]></table_prefix>
            </db>
            <default_setup>
                <connection>
                    <host><![CDATA[192.168.1.1]]></host>
                    <username><![CDATA[example]]></username>
                    <password><![CDATA[YOUR_PASSWORD]]></password>
                    <dbname><![CDATA[example]]></dbname>
                    <initStatements><![CDATA[SET NAMES utf8]]></initStatements>
                    <model><![CDATA[mysql4]]></model>
                    <type><![CDATA[pdo_mysql]]></type>
                    <pdoType><![CDATA[]]></pdoType>
                    <active>1</active>
                </connection>
            </default_setup>
        </resources>
        <session_save><![CDATA[files]]></session_save>
        <redis_session>
            <host>192.168.1.1</host>
            <port>6379</port>
            <password></password>
            <timeout>2.5</timeout>
            <persistent></persistent>
            <db>2</db>
            <compression_threshold>2048</compression_threshold>
            <compression_lib>gzip</compression_lib>
            <log_level>1</log_level>
            <max_concurrency>6</max_concurrency>
            <break_after_frontend>5</break_after_frontend>
            <break_after_adminhtml>30</break_after_adminhtml>
            <bot_lifetime>7200</bot_lifetime>
        </redis_session>
        <cache>
            <backend>Mage_Cache_Backend_Redis</backend>
            <backend_options>
                <server>192.168.1.1</server>
                <port>6379</port>
                <persistent></persistent>
                <database>1</database>
                <password></password>
                <force_standalone>0</force_standalone>
                <connect_retries>3</connect_retries>
                <read_timeout>10</read_timeout>
                <automatic_cleaning_factor>0</automatic_cleaning_factor>
                <compress_data>1</compress_data>
                <compress_tags>1</compress_tags>
                <compress_threshold>20480</compress_threshold>
                <compression_lib>gzip</compression_lib>
                <use_lua>0</use_lua>
            </backend_options>
        </cache>
    </global>
    <admin>
        <routers>
            <adminhtml>
                <args>
                    <frontName><![CDATA[admin]]></frontName>
                </args>
            </adminhtml>
        </routers>
    </admin>
</config>
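
With this in place, flushing Magento’s cache no longer requires touching var/cache on every web server; it can be done centrally against Redis. For example, using the database numbers from the configuration above (1 for cache, 2 for sessions):

# Flush the Magento cache
[root@db01 ~]# redis-cli -h 192.168.1.1 -n 1 flushdb

# Flush all sessions (this will log users out)
[root@db01 ~]# redis-cli -h 192.168.1.1 -n 2 flushdb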

Apache ProxyPass

Many solutions today are built using highly available configurations that can easily scale. Setting up a solution to scale is easy, but getting your web application to work correctly with a multi-server configuration can be difficult as not everyone has access to a quality shared storage solution that is fast and reliable.

In many web applications such as WordPress, you typically want all of your wp-admin traffic to go to the master server. There are probably a dozen ways to go about this, many of which get overly complicated, with Varnish or Nginx configurations handling the redirection.

This is where ProxyPass can offer a cleaner alternative. ProxyPass allows you to take a request for a specific URL and forward it to another server, known as your backend server, or your master web server.

This guide will assume that you are performing this on all web servers in the solution, unless otherwise specified. The specific examples are for a WordPress based solution, but they can easily be adapted for other CMSs.

To get started, first ensure that mod_proxy is installed:

# CentOS 6
[root@web01 ~]# yum install mod_proxy_html
[root@web01 ~]# service httpd restart
[root@web01 ~]# httpd -M |grep proxy
 proxy_module (shared)
 proxy_balancer_module (shared)
 proxy_ftp_module (shared)
 proxy_http_module (shared)
 proxy_connect_module (shared)
 proxy_ajp_module (shared)

# Ubuntu 12.04 and 14.04
[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install libapache2-mod-proxy-html
[root@web01 ~]# a2enmod proxy proxy_http

There are several ways you can proceed from here. I’ll lay them out as ‘options’ below. Each one basically accomplishes the same thing, but one may work better for your environment than another.

No matter which of the three options you go with, always be sure to rigorously test it before implementing it in production!

Option 1: Easy – Define master server based off the URI in each Apache Vhost

This example is simple. In each Apache Vhost, add the following lines on each slave web server to point wp-admin and wp-login.php to your master server, which in this case is 192.168.2.1:

# CentOS 6
[root@web02 ~]# vim /etc/httpd/vhost.d/example.com.conf

# Ubuntu 12.04 and 14.04
[root@web02 ~]# vim /etc/apache2/sites-enabled/example.com.conf
...
ProxyPreserveHost On
ProxyRequests Off
ProxyPassMatch "^(/.*wp-admin.*)$" "http://192.168.2.1$1"
ProxyPassMatch "^(/.*wp-login\.php)$" "http://192.168.2.1$1"
...
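
After adding the directives, check the configuration syntax and restart Apache so the change takes effect:

# CentOS 6
[root@web02 ~]# httpd -t
[root@web02 ~]# service httpd restart

# Ubuntu 12.04 and 14.04
[root@web02 ~]# apache2ctl -t
[root@web02 ~]# service apache2 restart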

Option 2: Advanced – Define master server based off URI using location blocks in each Apache Vhost

This example is slightly more advanced. In each Apache Vhost, add the following location blocks to point wp-admin and wp-login.php to your master server, which in this case is 192.168.2.1. We’re also manually defining the host header within these location blocks, which gives you the option to start excluding specific items if needed:

# CentOS 6
[root@web02 ~]# vim /etc/httpd/vhost.d/example.com.conf

# Ubuntu 12.04 and 14.04
[root@web02 ~]# vim /etc/apache2/sites-enabled/example.com.conf
...
ProxyRequests Off
ProxyPreserveHost Off
ProxyVia Off
<Location "/wp-login.php">
    RequestHeader set Host "www.example.com"
    ProxyPass http://192.168.2.1/wp-login.php
    ProxyPassReverse http://192.168.2.1/wp-login.php
</Location>
<Location "/wp-admin">
    RequestHeader set Host "www.example.com"
    ProxyPass http://192.168.2.1/wp-admin
    ProxyPassReverse http://192.168.2.1/wp-admin
</Location>
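
One thing to note with this option: setting the Host header inside the location blocks relies on mod_headers, which normally ships enabled but is quick to confirm, and the vhost change still needs an Apache restart to take effect:

# CentOS 6
[root@web02 ~]# httpd -M | grep headers
[root@web02 ~]# service httpd restart

# Ubuntu 12.04 and 14.04
[root@web02 ~]# a2enmod headers
[root@web02 ~]# service apache2 restart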

Option 3: Complex – Define master server in global Apache configuration, and only send over POST requests for wp-admin

This example is more complex. You are defining the master server (192.168.2.1) in your global Apache configuration, then configuring each Apache Vhost to only send over POST requests for wp-admin to the master server.

On each slave web server, set up ProxyPass so it knows which server is the master web server, and flag the server as a slave so the vhost configuration in the next step only takes effect on the slaves. Be sure to update the IP so it’s the IP address of your master web server:

# CentOS 6
[root@web02 ~]# vim /etc/sysconfig/httpd
...
OPTIONS="-DSLAVE"
export MASTER_SERVER="192.168.2.1"
...

# Ubuntu 12.04 and 14.04
[root@web02 ~]# vim /etc/apache2/envvars
...
export APACHE_ARGUMENTS="-DSLAVE"
export MASTER_SERVER="192.168.2.1"
...

Now on your slave web servers, we need to update the site’s vhost configuration to proxy the requests for /wp-admin so they will route to the master web server:

# CentOS 6
[root@web02 ~]# vim /etc/httpd/vhost.d/example.com.conf

# Ubuntu 12.04 and 14.04
[root@web02 ~]# vim /etc/apache2/sites-enabled/example.com.conf
...
<IfDefine SLAVE>
    RewriteEngine On
    ProxyPreserveHost On
    ProxyPass /wp-admin/ http://${MASTER_SERVER}/wp-admin/
    ProxyPassReverse /wp-admin/ http://${MASTER_SERVER}/wp-admin/
    RewriteCond %{REQUEST_METHOD} =POST
    RewriteRule . http://${MASTER_SERVER}%{REQUEST_URI} [P]
</IfDefine>
...

# CentOS 6
[root@web02 ~]# service httpd restart

# Ubuntu 12.04 and 14.04
[root@web02 ~]# service apache2 restart

The slave server(s) should now proxy /wp-admin requests over to the master web server. Please be sure to test this out and check your logs to ensure /wp-admin POST requests are now routing to the master web server.
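
A simple way to do that is to submit something from wp-admin on a slave and watch the access log on the master for the proxied POST requests. The log path below is just an example and will vary depending on your distribution and vhost configuration:

[root@web01 ~]# tail -f /var/log/httpd/access_log | grep "POST /wp-admin"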

Upgrade MySQL on CentOS

Sometimes you may run across a scenario where you have to update MySQL. This is easy enough to do; however, you should always test this on a dev server before applying it to production, just in case you run into problems.

As a critical note, before performing the update, make sure you have a working mysqldump backup of all your databases. This cannot be stressed enough! There are many ways of performing the dump, and be sure you can actually restore from those backups as well! One possible method of backing up all of the databases into a single large file, which locks the tables and can create downtime, would be:

[root@db01 ~]# mysqldump --all-databases --master-data | gzip -1 > /root/all.sql.gz
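
As a quick sanity check on the dump, you can at least confirm the gzip file is intact and contains table definitions. This is not a substitute for an actual test restore on a dev server:

[root@db01 ~]# gunzip -t /root/all.sql.gz
[root@db01 ~]# zgrep -c "CREATE TABLE" /root/all.sql.gz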

On CentOS, I prefer to use the IUS repos, as they are actively maintained and they do not overwrite stock packages, which is important.

So to get started, first set up the IUS repo if it isn’t already installed on your server:

# CentOS 6
[root@db01 ~]# rpm -ivh http://dl.iuscommunity.org/pub/ius/stable/CentOS/6/x86_64/ius-release-1.0-14.ius.centos6.noarch.rpm

# CentOS 7
[root@db01 ~]# rpm -ivh http://dl.iuscommunity.org/pub/ius/stable/CentOS/7/x86_64/ius-release-1.0-14.ius.centos7.noarch.rpm

To upgrade MySQL, yum has a plugin called ‘yum-plugin-replace’, which will automatically replace one package with another of your choosing. This simplifies the process of upgrading MySQL.

First, confirm that you are not already running another custom version of MySQL:

[root@db01 ~]# rpm -qa |grep -i mysql
mysql55-server-5.5.45-1.ius.el6.6.z.x86_64
mysql55-5.5.45-1.ius.el6.6.z.x86_64
...

Using the output from above, it looks like we just have MySQL 5.5 installed. I want to upgrade from MySQL 5.5 to MySQL 5.6. Here is how you would run it:

[root@db01 ~]# yum install yum-plugin-replace
[root@db01 ~]# yum replace mysql55 --replace-with mysql56u
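
Once the replace finishes, restart MySQL and confirm the new packages and version are in place:

[root@db01 ~]# service mysqld restart
[root@db01 ~]# rpm -qa | grep -i mysql56u
[root@db01 ~]# mysql -V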

During the upgrade process, I noticed that I could no longer log in with the root MySQL user. To reset the root MySQL password, stop MySQL, start it temporarily with the grant tables disabled, reset the password, and then restart MySQL normally:

[root@db01 ~]# service mysqld stop
[root@db01 ~]# mysqld_safe --skip-grant-tables &
[root@db01 ~]# mysql -uroot
mysql> use mysql;
mysql> update user set password=PASSWORD("enternewpasswordhere") where User='root';
mysql> flush privileges;
mysql> quit
[root@db01 ~]# service mysqld restart

Once the version has been updated, be sure to run mysql_upgrade. mysql_upgrade examines all tables in all databases for incompatibilities with the current version of MySQL Server. mysql_upgrade also upgrades the system tables so that you can take advantage of new privileges or capabilities that might have been added.

[root@db01 ~]# mysql_upgrade

If you find that the upgrade is not going to work for your environment, you can roll back to the original version:

[root@db01 ~]# yum replace mysql56u --replace-with mysql55

The yum-replace plugin makes upgrading and downgrading MySQL very fast and simple. But just to reiterate an earlier statement, make sure you test this out on a development server before applying it to your production server! It is always possible that something may not be compatible with the new version of MySQL! So always test first so you know what to expect!

Upgrade PHP on CentOS

The version of PHP that ships with CentOS 6 and CentOS 7 is getting a bit outdated. Oftentimes, people will want to use a newer version of PHP, such as PHP 5.6. This is easy enough to do; however, you should always test this on a dev server before applying it to production, just in case you run into problems.

On CentOS, I prefer to use the IUS repos, as they are actively maintained and they do not overwrite stock packages, which is important.

So to get started, first set up the IUS repo if it isn’t already installed on your server:

# CentOS 6
[root@web01 ~]# rpm -ivh http://dl.iuscommunity.org/pub/ius/stable/CentOS/6/x86_64/ius-release-1.0-14.ius.centos6.noarch.rpm

# CentOS 7
[root@web01 ~]# rpm -ivh http://dl.iuscommunity.org/pub/ius/stable/CentOS/7/x86_64/ius-release-1.0-14.ius.centos7.noarch.rpm

To upgrade PHP, yum has a plugin called ‘yum-plugin-replace’, which will automatically replace one package with another of your choosing. This simplifies the process of upgrading PHP greatly.

First, confirm that you are not already running another custom version of PHP:

[root@web01 ~]# rpm -qa |grep -i php
php-tcpdf-dejavu-sans-fonts-6.2.11-1.el6.noarch
php-cli-5.3.3-46.el6_7.1.x86_64
php-pdo-5.3.3-46.el6_7.1.x86_64
...

Using the output from above, it looks like we just have the stock PHP packages installed. I want to upgrade from PHP 5.3, which is the default package on CentOS 6, to PHP 5.6. Here is how you would run it:

[root@web01 ~]# yum install yum-plugin-replace
[root@web01 ~]# yum replace php --replace-with php56u
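
Once the replace completes, restart Apache so the new mod_php is loaded, then confirm the version and that the modules you rely on are still present:

[root@web01 ~]# service httpd restart
[root@web01 ~]# php -v
[root@web01 ~]# php -m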

Perhaps you find that your application doesn’t work with PHP 5.6, so you want to try PHP 5.5 instead:

[root@web01 ~]# yum install yum-plugin-replace
[root@web01 ~]# yum replace php56u --replace-with php55u

Or maybe you find that the upgrade is not going to work for your environment, so you want to roll back to the original version:

[root@web01 ~]# yum replace php55u --replace-with php

The yum-replace plugin makes upgrading and downgrading PHP very fast and simple. But just to reiterate an earlier statement, make sure you test this out on a development server before applying it to your production server! It’s always possible that a module that worked in PHP 5.3 is deprecated in a newer version of PHP, or perhaps your site code is using deprecated functions that no longer exist! So always test first so you know what to expect!

Upgrade PHP on Ubuntu

This guide will not cover Ubuntu 12.04 at this time, as the PPA from ondrej appears to upgrade Apache 2.2 to 2.4, and I have not been able to install this cleanly. If PHP 5.5 or 5.6 is needed, I suggest migrating to Ubuntu 14.04 for the time being.

The version of PHP that ships with Ubuntu 14.04 is PHP 5.5, which is starting to get a bit outdated. Oftentimes, people will want to use a newer version of PHP, such as PHP 5.6 or PHP 7.0. This is easy enough to do; however, you should always test this on a dev server before applying it to production, just in case you run into problems.

On Ubuntu, it looks like the preferred method is to use the PPA from ondrej. So to get started, first update your existing repos, then add the new PPA:

# Update PHP 5.5 on Ubuntu 14.04 to PHP 5.6
[root@web01 ~]# apt-get -y update
[root@web01 ~]# apt-get install software-properties-common
[root@web01 ~]# add-apt-repository ppa:ondrej/php
[root@web01 ~]# apt-get -y update
[root@web01 ~]# apt-get -y install php5.6 php5.6-cli php5.6-mysql php5.6-mcrypt php5.6-mbstring php5.6-curl php5.6-gd php5.6-intl php5.6-xsl php5.6-zip
[root@web01 ~]# a2dismod php5
[root@web01 ~]# a2enmod php5.6
[root@web01 ~]# service apache2 reload

# Update PHP 5.5 on Ubuntu 14.04 to PHP 7.0
[root@web01 ~]# apt-get -y update
[root@web01 ~]# apt-get install software-properties-common
[root@web01 ~]# add-apt-repository ppa:ondrej/php
[root@web01 ~]# apt-get -y update
[root@web01 ~]# apt-get -y install php7.0 php7.0-cli php7.0-mysql php7.0-mcrypt php7.0-mbstring php7.0-curl php7.0-gd php7.0-intl php7.0-xsl php7.0-zip
[root@web01 ~]# a2dismod php5
[root@web01 ~]# a2enmod php7.0
[root@web01 ~]# service apache2 restart
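
Either way, once Apache has been reloaded, confirm that both the CLI and the Apache module are running the version you expect:

[root@web01 ~]# php -v
[root@web01 ~]# apache2ctl -M | grep php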

Perhaps you find that your application doesn’t work with PHP 5.6 or PHP 7.0, so you want to roll back to stock PHP 5.5 instead:

# Downgrade PHP 5.6 on Ubuntu 14.04 to PHP 5.5
[root@web01 ~]# apt-get install ppa-purge
[root@web01 ~]# add-apt-repository -r ppa:ondrej/php
[root@web01 ~]# ppa-purge ppa:ondrej/php
[root@web01 ~]# apt-get remove php5.6-common php5.6-cli
[root@web01 ~]# apt-get autoremove
[root@web01 ~]# apt-get install php5 php5-cli php5-common
[root@web01 ~]# a2enmod php5
[root@web01 ~]# service apache2 restart

# Downgrade PHP 7.0 on Ubuntu 14.04 to PHP 5.5
[root@web01 ~]# apt-get install ppa-purge 
[root@web01 ~]# add-apt-repository -r ppa:ondrej/php
[root@web01 ~]# ppa-purge ppa:ondrej/php
[root@web01 ~]# apt-get remove php7.0-common php7.0-cli
[root@web01 ~]# apt-get autoremove
[root@web01 ~]# apt-get install php5 php5-cli php5-common
[root@web01 ~]# a2enmod php5
[root@web01 ~]# service apache2 restart
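
After rolling back, verify that stock PHP 5.5 is what the CLI and Apache are actually using:

[root@web01 ~]# php -v
[root@web01 ~]# apache2ctl -M | grep php5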