How to install and configure Redis on Ubuntu 14.04

Redis is an in-memory data structure store that is commonly used as a database, cache or message broker. What makes Redis powerful is its optional data persistence, which means your dataset isn't lost when the service restarts.

The article below will discuss how to install, configure and apply basic security to Redis. From there, it will cover the two use cases I rely on daily: session storage and general caching.

Installation

[root@redis01 ~]# apt-get update
[root@redis01 ~]# apt-get install redis-server redis-tools

Configuration

Redis listens on port 6379 by default and needs some additional configuration to ensure it is secured. If you do not protect Redis with a firewall and authentication, and have it listen only on a private network, there is an extremely high risk of leaking sensitive data.

First, set Redis to listen only on your private network. Redis does not have any encryption built in, so it is important that data is transferred only over private networks or secured tunnels. Set Redis to listen on the private interface by:

[root@redis01 ~]# vim /etc/redis/redis.conf
...
bind redis_servers_private_IP
...

If Redis is being installed on a standalone web server and does not need to accept connections from other clients, you can have Redis listen on a local Unix socket instead by commenting out the bind value and setting up the socket:

[root@redis01 ~]# mkdir /var/run/redis
[root@redis01 ~]# chown redis:redis /var/run/redis
[root@redis01 ~]# vim /etc/redis/redis.conf
...
# bind 127.0.0.1
unixsocket /var/run/redis/redis.sock
unixsocketperm 777
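
After restarting Redis, you can quickly confirm the socket is working with redis-cli's -s flag (the path below matches the example config above):

[root@redis01 ~]# service redis-server restart
[root@redis01 ~]# redis-cli -s /var/run/redis/redis.sock ping
PONG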

If you do not have a dedicated firewall, use your OS's built-in firewall to only allow connections from trusted web servers using their internal IPs. Some quick examples are below:

# iptables
[root@redis01 ~]# vim /etc/sysconfig/iptables
...
-A INPUT -p tcp -m tcp --dport 6379 -s client_server_private_IP -m comment --comment "redis" -j ACCEPT
[root@redis01 ~]# service iptables restart

# ufw
[root@redis01 ~]# ufw allow from client_server_private_IP/32 to any port 6379

To protect Redis further, set up authentication, which is a built-in security feature. This forces clients to authenticate before being granted access. Use a tool such as apg or pwgen to create a secure password, then set the password within Redis by:

[root@redis01 ~]# vim /etc/redis/redis.conf
...
requirepass your_secure_password_here
...

[root@redis01 ~]# service redis-server restart

Then test to ensure the password works by:

# This should fail
[root@redis01 ~]# redis-cli
127.0.0.1:6379> set key1 10
(error) NOAUTH Authentication required.

# This should work
[root@redis01 ~]# redis-cli
127.0.0.1:6379> auth your_secure_password_here
127.0.0.1:6379> set key1 10
OK
127.0.0.1:6379> get key1
"10"

Next, we need to secure the file permissions for Redis. The redis.conf file contains the Redis password, so it should not be readable by everyone. We also want to lock down the Redis data directory. Lock down the permissions by:

[root@redis01 ~]# chmod 700 /var/lib/redis
[root@redis01 ~]# chown redis:redis /etc/redis/redis.conf
[root@redis01 ~]# chmod 600 /etc/redis/redis.conf
[root@redis01 ~]# service redis-server restart

The Official Redis Administration Guide recommends disabling Transparent Huge Pages (THP). This can be done live by:

[root@redis01 ~]# echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
[root@redis01 ~]# echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag

And disable Transparent Huge Pages (THP) at boot time by entering the following before the ‘exit 0’ line in /etc/rc.local:

[root@redis01 ~]# vim /etc/rc.local
...
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
   echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
   echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
...

The Official Redis Administration Guide also recommends setting the following sysctl:

[root@redis01 ~]# sysctl vm.overcommit_memory=1
[root@redis01 ~]# vim /etc/sysctl.conf
...
vm.overcommit_memory = 1
...

Redis Configurations

Now that Redis is installed and secured, it's time to tune it for your application's needs. There are typically two types of Redis configurations I see:
Session store
Database cache or full page cache

The session store gives your application a single location for sessions that would normally be stored by PHP on the local file system. It is important to ensure the data is saved to disk so you don't lose all the sessions between restarts of Redis. This is one of the primary advantages of using Redis over Memcached.

The database or full page cache is great for caching SQL query results, or for serving as a full page cache for applications such as Magento or WordPress, caching images, videos, HTML, CSS or JS. Generally this type of cache does not need to persist across restarts of Redis.

Redis Configurations – Session store

When using Redis as a session store, you want to ensure the Redis data persists between restarts of Redis. Otherwise your users could be left wondering why their shopping carts suddenly vanished. The example below enables disk persistence and sets a memory limit of 1.5G (1536M).

These settings may or may not work for you! Adjust them to meet your environment's requirements!

[root@redis01 ~]# vim /etc/redis/redis.conf
...
## Ensure disk persistence is enabled
save 900 1
save 300 10
save 60 10000
...
## Set the max memory
maxmemory 1536mb
...

[root@redis01 ~]# service redis-server restart
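
As a quick sanity check (using the requirepass value set earlier; output is illustrative), read the active settings back and trigger a background save to confirm persistence is working:

[root@redis01 ~]# redis-cli -a your_secure_password_here config get save
1) "save"
2) "900 1 300 10 60 10000"
[root@redis01 ~]# redis-cli -a your_secure_password_here config get maxmemory
1) "maxmemory"
2) "1610612736"
[root@redis01 ~]# redis-cli -a your_secure_password_here bgsave
Background saving started
[root@redis01 ~]# ls -lh /var/lib/redis/dump.rdb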

Redis Configurations – Database cache or full page cache

When using Redis for database caching or as a full page cache (FPC) for applications like WordPress or Magento, I disable disk persistence. This means the cache is only stored in memory and is lost whenever Redis restarts. I also set the memory limit to 1.5G (1536M) for starters and adjust from there. Since I am only storing cached data, I can avoid out-of-memory issues by allowing Redis to automatically evict the least recently used keys with maxmemory-policy allkeys-lru. Read up on the Redis supported eviction policies here.

These settings may or may not work for you! Adjust them to meet your environment's requirements! Remember, this example assumes everything in Redis can be lost when the service restarts, and the eviction policy will evict the least recently used keys across all the data. I typically find this works for Magento and WordPress Redis setups:

[root@redis01 ~]# vim /etc/redis/redis.conf
...
## Disable disk persistence
#save 900 1
#save 300 10
#save 60 10000
...
## Set the max memory
maxmemory 1536mb
...
## Update the eviction policy
maxmemory-policy allkeys-lru
...

[root@redis01 ~]# service redis-server restart
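
Once the cache instance is under load, the INFO output shows whether keys are being evicted and confirms the policy in effect (illustrative output; your values will differ):

[root@redis01 ~]# redis-cli -a your_secure_password_here config get maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"
[root@redis01 ~]# redis-cli -a your_secure_password_here info stats | grep evicted_keys
evicted_keys:0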

Multiple Redis Configurations

Redis can run multiple instances, each with its own configuration. The only requirements are that each Redis instance listens on a unique port, has a unique pid file, and of course has its own config and startup script. In the example below, we are going to run two Redis instances called redis-session and redis-cache. To avoid confusion, we will disable the original Redis instance.

# Create 2 new configs and modify the values accordingly
[root@redis01 ~]# cp /etc/redis/redis.conf /etc/redis/redis-session.conf
[root@redis01 ~]# vim /etc/redis/redis-session.conf
...
pidfile /var/run/redis/redis-session.pid
port 6379
logfile /var/log/redis/redis-session.log
dir /var/lib/redis-session
...
# If unixsocket is uncommented, then update to:
unixsocket /var/run/redis/redis-session.sock
unixsocketperm 777
...

[root@redis01 ~]# cp /etc/redis/redis.conf /etc/redis/redis-cache.conf
[root@redis01 ~]# vim /etc/redis/redis-cache.conf
...
pidfile /var/run/redis/redis-cache.pid
port 6380
logfile /var/log/redis/redis-cache.log
dir /var/lib/redis-cache
...
# If unixsocket is uncommented, then update to:
unixsocket /var/run/redis/redis-cache.sock
unixsocketperm 777
...

# Create directories and secure the permissions
[root@redis01 ~]# mkdir /var/lib/redis-session /var/lib/redis-cache
[root@redis01 ~]# chown redis:redis /var/lib/redis-session /var/lib/redis-cache /etc/redis/redis-session.conf /etc/redis/redis-cache.conf
[root@redis01 ~]# chmod 700 /var/lib/redis-session /var/lib/redis-cache
[root@redis01 ~]# chmod 600 /etc/redis/redis-session.conf /etc/redis/redis-cache.conf

# Create startup files
[root@redis01 ~]# cp /etc/init.d/redis-server /etc/init.d/redis-session
[root@redis01 ~]# vim /etc/init.d/redis-session
...
DAEMON_ARGS=/etc/redis/redis-session.conf
PIDFILE=$RUNDIR/redis-session.pid
...
[root@redis01 ~]# cp /etc/init.d/redis-server /etc/init.d/redis-cache
[root@redis01 ~]# vim /etc/init.d/redis-cache
...
DAEMON_ARGS=/etc/redis/redis-cache.conf
PIDFILE=$RUNDIR/redis-cache.pid
...

# Stop and disable old instance, start new instances
[root@redis01 ~]# service redis-server stop && update-rc.d redis-server disable
[root@redis01 ~]# service redis-session start && update-rc.d redis-session defaults
[root@redis01 ~]# service redis-cache start && update-rc.d redis-cache defaults

# Finally, edit the /etc/redis/redis-session.conf and /etc/redis/redis-cache.conf using the instructions earlier in this article for configuring sessions and db cache.
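
With both instances running, a quick way to confirm each one is answering on its own port (using the ports and requirepass from the example configs above):

[root@redis01 ~]# redis-cli -a your_secure_password_here -p 6379 ping
PONG
[root@redis01 ~]# redis-cli -a your_secure_password_here -p 6380 ping
PONG
[root@redis01 ~]# netstat -plnt | grep redis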

Client setup

The typical use cases I run into on a day-to-day basis involve clients using Redis with their PHP applications. Redis can be used to store cached content, or it can be used to centrally store sessions, so these examples will be PHP focused.

Client setup – General data caching

For storing data, nothing needs to be configured on the client side; the application code itself controls what is stored in Redis.
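
As an illustration, the same set/get pattern your application performs through its Redis library can be reproduced from the command line, which is also a handy way to confirm the web server can reach Redis at all (this assumes the redis-tools package from the next section is installed on the web server):

[root@web01 ~]# redis-cli -h redis_servers_private_IP -a your_secure_password_here set cache:test "hello"
OK
[root@web01 ~]# redis-cli -h redis_servers_private_IP -a your_secure_password_here get cache:test
"hello"
[root@web01 ~]# redis-cli -h redis_servers_private_IP -a your_secure_password_here expire cache:test 300
(integer) 1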

Client setup – Storing sessions in Redis

To have Redis act as a central session store, some additional configuration is needed on each client web server. Install the PHP Redis extension for your version of PHP. Assuming the default PHP version is installed from the package manager, you can install it along with the Redis command line tools by:

[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install php-redis redis-tools
[root@web01 ~]# service apache2 reload
[root@web01 ~]# php -m |grep redis

Then update the php.ini as follows:

session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379?auth=your_secure_password_here"
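
If the web server and Redis are on the same box and you configured the Unix socket earlier, phpredis can point the session handler at the socket instead of TCP. This is an alternative, not a requirement; adjust the socket path to match your redis.conf:

session.save_handler = redis
session.save_path = "unix:///var/run/redis/redis.sock?auth=your_secure_password_here"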

Test to ensure sessions are now being stored in Redis:

[root@web01 ~]# vim /var/www/html/test-sessions.php
<?php
session_start();
?>
Created a session

Then run the following on the command line a couple of times and confirm the returned key count increments as shown below:

[root@web01 ~]# curl localhost/test-sessions.php && redis-cli -a your_secure_password_here keys '*' |grep SESSION | wc -l
10
Created a session
11

Troubleshooting

Confirm Redis is online:

[root@redis01 ~]# redis-cli ping

How to connect using redis-cli when redis is running on a different server or using a different port:

[root@redis01 ~]# redis-cli -h ip_of_redis_server -p port_number_here

Sometimes you may need to flush the Redis cache. Before doing this, make sure you are connecting to the right instance of Redis since there could be multiple instances running. An example is below:

[root@redis01 ~]# redis-cli -h ip_of_redis_server -p port_number_here
FLUSHALL
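
If you only want to clear one logical database rather than every key on the instance, FLUSHDB against a specific database number is less destructive (the -n flag selects the database; 0 is the default):

[root@redis01 ~]# redis-cli -h ip_of_redis_server -p port_number_here -n 0 flushdb
OK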

To get some useful stats on Redis, run:

[root@redis01 ~]# redis-cli
info

To get memory specific stats, run:

[root@redis01 ~]# redis-cli
info memory
127.0.0.1:6379> info memory
# Memory
used_memory:488315720
used_memory_human:465.69M
used_memory_rss:499490816
used_memory_peak:505227288
used_memory_peak_human:481.82M
used_memory_lua:36864
mem_fragmentation_ratio:1.02
mem_allocator:jemalloc-3.6.0

To increase the memory limit assigned to Redis without restarting the service, use config set. The example below dynamically increases the allocated memory from 1G to 2G:

[root@redis01 ~]# redis-cli
127.0.0.1:6379> config get maxmemory
1) "maxmemory"
2) "1000000000"
127.0.0.1:6379> config set maxmemory 2g
OK
127.0.0.1:6379> config get maxmemory
1) "maxmemory"
2) "2000000000"

Regarding performance issues with Redis, there are too many factors to account for in this article. The Redis project published an excellent article that goes into the various things that can cause latency with Redis at:
https://redis.io/topics/latency

How to install and configure Redis on CentOS 7

Redis is an in-memory data structure store that is commonly used as a database, cache or message broker. What makes Redis powerful is its optional data persistence, which means your dataset isn't lost when the service restarts.

The article below will discuss how to install, configure and apply basic security to Redis. From there, it will cover the two use cases I rely on daily: session storage and general caching.

Installation

[root@redis01 ~]# yum install epel-release
[root@redis01 ~]# yum install redis
[root@redis01 ~]# systemctl enable redis
[root@redis01 ~]# systemctl start redis.service
[root@redis01 ~]# redis-cli ping

Configuration

Redis listens on port 6379 by default and needs some additional configuration to ensure it is secured. If you do not protect Redis with a firewall and authentication, and have it listen only on a private network, there is an extremely high risk of leaking sensitive data.

First, set Redis to listen only on your private network. Redis does not have any encryption built in, so it is important that data is transferred only over private networks or secured tunnels. Set Redis to listen on the private interface by:

[root@redis01 ~]# vim /etc/redis.conf
...
bind redis_servers_private_IP
...

If Redis is being installed on a standalone web server and does not need to accept connections from other clients, you can have Redis listen on a local Unix socket instead by commenting out the bind value and setting up the socket:

[root@redis01 ~]# mkdir /var/run/redis
[root@redis01 ~]# chown redis:redis /var/run/redis
[root@redis01 ~]# vim /etc/redis.conf
...
# bind 127.0.0.1
unixsocket /var/run/redis/redis.sock
unixsocketperm 777

If you do not have a dedicated firewall, use your OS's built-in firewall to only allow connections from trusted web servers using their internal IPs. Some quick examples are below:

# iptables
[root@redis01 ~]# vim /etc/sysconfig/iptables
...
-A INPUT -p tcp -m tcp --dport 6379 -s client_server_private_IP -m comment --comment "redis" -j ACCEPT
[root@redis01 ~]# service iptables restart

# firewalld
[root@redis01 ~]# firewall-cmd --permanent --new-zone=redis
[root@redis01 ~]# firewall-cmd --permanent --zone=redis --add-port=6379/tcp
[root@redis01 ~]# firewall-cmd --permanent --zone=redis --add-source=client_server_private_IP
[root@redis01 ~]# firewall-cmd --reload
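
After the reload, you can verify the new zone picked up both the source and the port; the output should list client_server_private_IP under sources and 6379/tcp under ports:

[root@redis01 ~]# firewall-cmd --zone=redis --list-all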

To protect Redis further, set up authentication, which is a built-in security feature. This forces clients to authenticate before being granted access. Use a tool such as apg or pwgen to create a secure password, then set the password within Redis by:

[root@redis01 ~]# vim /etc/redis.conf
...
requirepass your_secure_password_here
...

[root@redis01 ~]# systemctl restart redis

Then test to ensure the password works by:

# This should fail
[root@redis01 ~]# redis-cli
127.0.0.1:6379> set key1 10
(error) NOAUTH Authentication required.

# This should work
[root@redis01 ~]# redis-cli
127.0.0.1:6379> auth your_secure_password_here
127.0.0.1:6379> set key1 10
OK
127.0.0.1:6379> get key1
"10"

Next, we need to secure the file permissions for Redis. The redis.conf file contains the Redis password, so it should not be readable by everyone. We also want to lock down the Redis data directory. Lock down the permissions by:

[root@redis01 ~]# chown redis:redis /var/lib/redis
[root@redis01 ~]# chmod 700 /var/lib/redis
[root@redis01 ~]# chown redis:redis /etc/redis.conf
[root@redis01 ~]# chmod 600 /etc/redis.conf
[root@redis01 ~]# systemctl restart redis

The Official Redis Administration Guide recommends disabling Transparent Huge Pages (THP). This can be done live by:

[root@redis01 ~]# echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
[root@redis01 ~]# echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag

And disable Transparent Huge Pages (THP) at boot time by making a systemd unit file:

[root@redis01 ~]# vim /etc/systemd/system/disable-thp.service
[Unit]
Description=Disable Transparent Huge Pages (THP)

[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"

[Install]
WantedBy=multi-user.target

[root@redis01 ~]# systemctl daemon-reload
[root@redis01 ~]# systemctl start disable-thp
[root@redis01 ~]# systemctl enable disable-thp
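
After starting the unit (or after a reboot), confirm THP is actually disabled; the brackets should surround never:

[root@redis01 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
[root@redis01 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]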

The Official Redis Administration Guide also recommends setting the following sysctl:

[root@redis01 ~]# sysctl vm.overcommit_memory=1
[root@redis01 ~]# vim /etc/sysctl.conf
...
vm.overcommit_memory = 1
...
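
To confirm the running value and make sure the persistent setting loads cleanly, read it back and reload sysctl.conf:

[root@redis01 ~]# sysctl vm.overcommit_memory
vm.overcommit_memory = 1
[root@redis01 ~]# sysctl -p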

Redis Configurations

Now that Redis is installed and secured, it's time to tune it for your application's needs. There are typically two types of Redis configurations I see:
Session store
Database cache or full page cache

The session store gives your application a single location for sessions that would normally be stored by PHP on the local file system. It is important to ensure the data is saved to disk so you don't lose all the sessions between restarts of Redis. This is one of the primary advantages of using Redis over Memcached.

The database or full page cache is great for caching SQL query results, or for serving as a full page cache for applications such as Magento or WordPress, caching images, videos, HTML, CSS or JS. Generally this type of cache does not need to persist across restarts of Redis.

Redis Configurations – Session store

When using Redis as a session store, you want to ensure the Redis data persists between restarts of Redis. Otherwise your users could be left wondering why their shopping carts suddenly vanished. The example below enables disk persistence and sets a memory limit of 1.5G (1536M).

These settings may or may not work for you! Adjust them to meet your environment's requirements!

[root@redis01 ~]# vim /etc/redis.conf
...
## Ensure disk persistence is enabled
save 900 1
save 300 10
save 60 10000
...
## Set the max memory
maxmemory 1536mb
...

[root@redis01 ~]# systemctl restart redis

Redis Configurations – Database cache or full page cache

When using Redis for database caching or as a full page cache (FPC) for applications like WordPress or Magento, I disable disk persistence. This means the cache is only stored in memory and is lost whenever Redis restarts. I also set the memory limit to 1.5G (1536M) for starters and adjust from there. Since I am only storing cached data, I can avoid out-of-memory issues by allowing Redis to automatically evict the least recently used keys with maxmemory-policy allkeys-lru. Read up on the Redis supported eviction policies here.

These settings may or may not work for you! Adjust them to meet your environment's requirements! Remember, this example assumes everything in Redis can be lost when the service restarts, and the eviction policy will evict the least recently used keys across all the data. I typically find this works for Magento and WordPress Redis setups:

[root@redis01 ~]# vim /etc/redis.conf
...
## Disable disk persistence
#save 900 1
#save 300 10
#save 60 10000
...
## Set the max memory
maxmemory 1536mb
...
## Update the eviction policy
maxmemory-policy allkeys-lru
...

[root@redis01 ~]# systemctl restart redis

Multiple Redis Configurations

Redis can run multiple instances, each with its own configuration. The only requirements are that each Redis instance listens on a unique port, has a unique pid file, and of course has its own config and startup script. In the example below, we are going to run two Redis instances called redis-session and redis-cache. To avoid confusion, we will disable the original Redis instance.

# Create 2 new configs and modify the values accordingly
[root@redis01 ~]# cp /etc/redis.conf /etc/redis-session.conf
[root@redis01 ~]# vim /etc/redis-session.conf
...
pidfile /var/run/redis_session.pid
port 6379
logfile /var/log/redis/redis-session.log
dir /var/lib/redis-session
...
# If unixsocket is uncommented, then update to:
unixsocket /var/run/redis/redis-session.sock
unixsocketperm 777
...

[root@redis01 ~]# cp /etc/redis.conf /etc/redis-cache.conf
[root@redis01 ~]# vim /etc/redis-cache.conf
...
pidfile /var/run/redis_cache.pid
port 6380
logfile /var/log/redis/redis-cache.log
dir /var/lib/redis-cache
...
# If unixsocket is uncommented, then update to:
unixsocket /var/run/redis/redis-cache.sock
unixsocketperm 777
...

# Create directories and secure the permissions
[root@redis01 ~]# mkdir /var/lib/redis-session /var/lib/redis-cache
[root@redis01 ~]# chown redis:redis /var/lib/redis-session /var/lib/redis-cache /etc/redis-session.conf /etc/redis-cache.conf
[root@redis01 ~]# chmod 700 /var/lib/redis-session /var/lib/redis-cache
[root@redis01 ~]# chmod 600 /etc/redis-session.conf /etc/redis-cache.conf

# Create startup files
[root@redis01 ~]# cp /usr/lib/systemd/system/redis.service /usr/lib/systemd/system/redis-session.service
[root@redis01 ~]# vim /usr/lib/systemd/system/redis-session.service
...
ExecStart=/usr/bin/redis-server /etc/redis-session.conf --daemonize no
...
[root@redis01 ~]# cp /usr/lib/systemd/system/redis.service /usr/lib/systemd/system/redis-cache.service
[root@redis01 ~]# vim /usr/lib/systemd/system/redis-cache.service
...
ExecStart=/usr/bin/redis-server /etc/redis-cache.conf --daemonize no
...

# Stop and disable old instance, start new instances
[root@redis01 ~]# systemctl daemon-reload
[root@redis01 ~]# systemctl stop redis.service && systemctl disable redis.service
[root@redis01 ~]# systemctl enable redis-session.service && systemctl start redis-session.service
[root@redis01 ~]# systemctl enable redis-cache.service && systemctl start redis-cache.service

# Finally, edit the /etc/redis-session.conf and /etc/redis-cache.conf using the instructions earlier in this article for configuring sessions and db cache.
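
Once both units are enabled and started, confirm they are active and listening on their expected ports (6379 and 6380 in this example):

[root@redis01 ~]# systemctl is-active redis-session redis-cache
active
active
[root@redis01 ~]# ss -tlnp | grep redis-server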

Client setup

The typical use cases I run into on a day-to-day basis involve clients using Redis with their PHP applications. Redis can be used to store cached content, or it can be used to centrally store sessions, so these examples will be PHP focused.

Client setup – General data caching

For storing data, nothing needs to be configured on the client side; the application code itself controls what is stored in Redis.

Client setup – Storing sessions in Redis

To have Redis act as a central session store, some additional configuration is needed on each client web server. Install the PHP Redis extension (php-pecl-redis) for your version of PHP. Assuming the default PHP version is installed from the package manager, you can install it by:

[root@web01 ~]# yum install php-pecl-redis
[root@web01 ~]# systemctl reload httpd
[root@web01 ~]# php -m |grep redis

Then update the php.ini as follows:

session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379?auth=your_secure_password_here"

On CentOS and Red Hat servers, depending on what version of PHP was installed and how, you may have to update another file, as it overrides php.ini. Only change this if the values already exist there and are configured to use files:

[root@web01 ~]# vim /etc/httpd/conf.d/php.conf
php_value session.save_handler "redis"
php_value session.save_path    "tcp://127.0.0.1:6379?auth=your_secure_password_here"

Test to ensure sessions are now being stored in Redis:

[root@web01 ~]# vim /var/www/html/test-sessions.php
<?php
session_start();
?>
Created a session

Then run the following on the command line a couple of times and confirm the returned key count increments as shown below:

[root@web01 ~]# curl localhost/test-sessions.php && redis-cli -a your_secure_password_here keys '*' |grep SESSION | wc -l
10
Created a session
11

Troubleshooting

Confirm Redis is online:

[root@redis01 ~]# redis-cli ping

How to connect using redis-cli when redis is running on a different server or using a different port:

[root@redis01 ~]# redis-cli -h ip_of_redis_server -p port_number_here

Sometimes you may need to flush the Redis cache. Before doing this, make sure you are connecting to the right instance of Redis since there could be multiple instances running. An example is below:

[root@redis01 ~]# redis-cli -h ip_of_redis_server -p port_number_here
FLUSHALL

To get some useful stats on Redis, run:

[root@redis01 ~]# redis-cli
info

To get memory specific stats, run:

[root@redis01 ~]# redis-cli
info memory
127.0.0.1:6379> info memory
# Memory
used_memory:488315720
used_memory_human:465.69M
used_memory_rss:499490816
used_memory_peak:505227288
used_memory_peak_human:481.82M
used_memory_lua:36864
mem_fragmentation_ratio:1.02
mem_allocator:jemalloc-3.6.0

To increase the memory limit assigned to Redis without restarting the service, use config set. The example below dynamically increases the allocated memory from 1G to 2G:

[root@redis01 ~]# redis-cli
127.0.0.1:6379> config get maxmemory
1) "maxmemory"
2) "1000000000"
127.0.0.1:6379> config set maxmemory 2g
OK
127.0.0.1:6379> config get maxmemory
1) "maxmemory"
2) "2000000000"

Regarding performance issues with Redis, there are too many factors to account for in this article. The Redis project published an excellent article that goes into the various things that can cause latency with Redis at:
https://redis.io/topics/latency

How to install and configure Redis on CentOS 6

Redis is an in-memory data structure store that is commonly used as a database, cache or message broker. What makes Redis powerful is its optional data persistence, which means your dataset isn't lost when the service restarts.

The article below will discuss how to install, configure and apply basic security to Redis. From there, it will cover the two use cases I rely on daily: session storage and general caching.

Installation

[root@redis01 ~]# yum install epel-release
[root@redis01 ~]# yum install redis
[root@redis01 ~]# chkconfig redis on
[root@redis01 ~]# service redis start
[root@redis01 ~]# redis-cli ping

Configuration

Redis listens on port 6379 by default and needs some additional configuration to ensure it is secured. If you do not protect Redis with a firewall and authentication, and have it listen only on a private network, there is an extremely high risk of leaking sensitive data.

First, set Redis to listen only on your private network. Redis does not have any encryption built in, so it is important that data is transferred only over private networks or secured tunnels. Set Redis to listen on the private interface by:

[root@redis01 ~]# vim /etc/redis.conf
...
bind redis_servers_private_IP
...

If Redis is being installed on a standalone web server and does not need to accept connections from other clients, you can have Redis listen on a local Unix socket instead by commenting out the bind value and setting up the socket:

[root@redis01 ~]# mkdir /var/run/redis
[root@redis01 ~]# chown redis:redis /var/run/redis
[root@redis01 ~]# vim /etc/redis.conf
...
# bind 127.0.0.1
unixsocket /var/run/redis/redis.sock
unixsocketperm 777

If you do not have a dedicated firewall, use your OS's built-in firewall to only allow connections from trusted web servers using their internal IPs. Some quick examples are below:

# iptables
[root@redis01 ~]# vim /etc/sysconfig/iptables
...
-A INPUT -p tcp -m tcp --dport 6379 -s client_server_private_IP -m comment --comment "redis" -j ACCEPT
[root@redis01 ~]# service iptables restart
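
After restarting iptables, you can confirm the rule is in place and watch its packet counters increment as clients connect:

[root@redis01 ~]# iptables -nvL INPUT | grep 6379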

To protect Redis further, set up authentication, which is a built-in security feature. This forces clients to authenticate before being granted access. Use a tool such as apg or pwgen to create a secure password, then set the password within Redis by:

[root@redis01 ~]# vim /etc/redis.conf
...
requirepass your_secure_password_here
...

[root@redis01 ~]# service redis restart

Then test to ensure the password works by:

# This should fail
[root@redis01 ~]# redis-cli
127.0.0.1:6379> set key1 10
(error) NOAUTH Authentication required.

# This should work
[root@redis01 ~]# redis-cli
127.0.0.1:6379> auth your_secure_password_here
127.0.0.1:6379> set key1 10
OK
127.0.0.1:6379> get key1
"10"

Next, we need to secure the file permissions for Redis. The redis.conf file contains the Redis password, so it should not be readable by everyone. We also want to lock down the Redis data directory. Lock down the permissions by:

[root@redis01 ~]# chown redis:redis /var/lib/redis
[root@redis01 ~]# chmod 700 /var/lib/redis
[root@redis01 ~]# chown redis:redis /etc/redis.conf
[root@redis01 ~]# chmod 600 /etc/redis.conf
[root@redis01 ~]# service redis restart

The Official Redis Administration Guide recommends disabling Transparent Huge Pages (THP). This can be performed live by:

[root@redis01 ~]# echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
[root@redis01 ~]# echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag

And disable Transparent Huge Pages (THP) at boot time by adding the following to /etc/rc.local:

[root@redis01 ~]# vim /etc/rc.local
...
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
   echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
   echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
...

The Official Redis Administration Guide also recommends setting the following sysctl:

[root@redis01 ~]# sysctl vm.overcommit_memory=1
[root@redis01 ~]# vim /etc/sysctl.conf
...
vm.overcommit_memory = 1
...

Redis Configurations

Now that Redis is installed and secured, it's time to tune it for your application's needs. There are typically two types of Redis configurations I see:
Session store
Database cache or full page cache

The session store gives your application a single location for sessions that would normally be stored by PHP on the local file system. It is important to ensure the data is saved to disk so you don't lose all the sessions between restarts of Redis. This is one of the primary advantages of using Redis over Memcached.

The database or full page cache is great for caching SQL query results, or for serving as a full page cache for applications such as Magento or WordPress, caching images, videos, HTML, CSS or JS. Generally this type of cache does not need to persist across restarts of Redis.

Redis Configurations – Session store

When using Redis as a session store, you want to ensure the Redis data persists between restarts of Redis. Otherwise your users could be left wondering why their shopping carts suddenly vanished. The example below enables disk persistence and sets a memory limit of 1.5G (1536M).

These settings may or may not work for you! Adjust them to meet your environment's requirements!

[root@redis01 ~]# vim /etc/redis.conf
...
## Ensure disk persistence is enabled
save 900 1
save 300 10
save 60 10000
...
## Set the max memory
maxmemory 1536mb
...

[root@redis01 ~]# service redis restart

Redis Configurations – Database cache or full page cache

When using Redis for database caching or as a full page cache (FPC) for applications like WordPress or Magento, I disable disk persistence. This means the cache is only stored in memory and is lost whenever Redis restarts. I also set the memory limit to 1.5G (1536M) for starters and adjust from there. Since I am only storing cached data, I can avoid out-of-memory issues by allowing Redis to automatically evict the least recently used keys with maxmemory-policy allkeys-lru. Read up on the Redis supported eviction policies here.

These settings may or may not work for you! Adjust them to meet your environment's requirements! Remember, this example assumes everything in Redis can be lost when the service restarts, and the eviction policy will evict the least recently used keys across all the data. I typically find this works for Magento and WordPress Redis setups:

[root@redis01 ~]# vim /etc/redis.conf
...
## Disable disk persistence
#save 900 1
#save 300 10
#save 60 10000
...
## Set the max memory
maxmemory 1536mb
...
## Update the eviction policy
maxmemory-policy allkeys-lru
...

[root@redis01 ~]# service redis restart

Multiple Redis Configurations

Redis can run multiple instances, each with its own configuration. The only requirements are that each Redis instance listens on a unique port, has a unique pid file, and of course has its own config and startup script. In the example below, we are going to run two Redis instances called redis-session and redis-cache. To avoid confusion, we will disable the original Redis instance.

# Create 2 new configs and modify the values accordingly
[root@redis01 ~]# cp /etc/redis.conf /etc/redis-session.conf
[root@redis01 ~]# vim /etc/redis-session.conf
...
pidfile /var/run/redis/redis_session.pid
port 6379
logfile /var/log/redis/redis-session.log
dir /var/lib/redis-session
...
# If unixsocket is uncommented, then update to:
unixsocket /var/run/redis/redis-session.sock
unixsocketperm 777
...

[root@redis01 ~]# cp /etc/redis.conf /etc/redis-cache.conf
[root@redis01 ~]# vim /etc/redis-cache.conf
...
pidfile /var/run/redis/redis_cache.pid
port 6380
logfile /var/log/redis/redis-cache.log
dir /var/lib/redis-cache
...
# If unixsocket is uncommented, then update to:
unixsocket /var/run/redis/redis-cache.sock
unixsocketperm 777
...

# Create directories and secure the permissions
[root@redis01 ~]# mkdir /var/lib/redis-session /var/lib/redis-cache
[root@redis01 ~]# chown redis:redis /var/lib/redis-session /var/lib/redis-cache /etc/redis-session.conf /etc/redis-cache.conf
[root@redis01 ~]# chmod 700 /var/lib/redis-session /var/lib/redis-cache
[root@redis01 ~]# chmod 600 /etc/redis-session.conf /etc/redis-cache.conf

# Create startup files
[root@redis01 ~]# cp /etc/init.d/redis /etc/init.d/redis-session
[root@redis01 ~]# vim /etc/init.d/redis-session
...
pidfile="/var/run/redis/redis_session.pid"
REDIS_CONFIG="/etc/redis-session.conf"
...
[root@redis01 ~]# cp /etc/init.d/redis /etc/init.d/redis-cache
[root@redis01 ~]# vim /etc/init.d/redis-cache
...
pidfile="/var/run/redis/redis_cache.pid"
REDIS_CONFIG="/etc/redis-cache.conf"
...

# Stop and disable old instance, start new instances
[root@redis01 ~]# service redis stop && chkconfig redis off
[root@redis01 ~]# service redis-session start && chkconfig redis-session on
[root@redis01 ~]# service redis-cache start && chkconfig redis-cache on

# Finally, edit the /etc/redis-session.conf and /etc/redis-cache.conf using the instructions earlier in this article for configuring sessions and db cache.

Client setup

The typical use cases I run into on a day-to-day basis involve clients using Redis with their PHP applications. Redis can be used to store cached content, or it can be used to centrally store sessions, so these examples will be PHP focused.

Client setup – General data caching

For storing data, nothing needs to be configured on the client side; the application code itself controls what is stored in Redis.

Client setup – Storing sessions in Redis

To have Redis act as a central session store, some additional configuration is needed on each client web server. Install the PHP Redis extension (php-pecl-redis) for your version of PHP. Assuming the default PHP version is installed from the package manager, you can install it by:

[root@web01 ~]# yum install php-pecl-redis
[root@web01 ~]# service httpd graceful
[root@web01 ~]# php -m |grep redis

Then update the php.ini as follows:

session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379?auth=your_secure_password_here"

On CentOS and Red Hat servers, depending on what version of PHP was installed and how, you may have to update another file, as it overrides php.ini. Only change this if the values already exist there and are configured to use files:

[root@web01 ~]# vim /etc/httpd/conf.d/php.conf
php_value session.save_handler "redis"
php_value session.save_path    "tcp://127.0.0.1:6379?auth=your_secure_password_here"

Test to ensure sessions are now being stored in Redis:

[root@web01 ~]# vim /var/www/html/test-sessions.php
<?php
session_start();
?>
Created a session

Then run the following on the command line a couple of times and confirm the returned key count increments as shown below:

[root@web01 ~]# curl localhost/test-sessions.php && redis-cli -a your_secure_password_here keys '*' |grep SESSION | wc -l
10
Created a session
11

Troubleshooting

Confirm Redis is online:

[root@redis01 ~]# redis-cli ping

How to connect using redis-cli when redis is running on a different server or using a different port:

[root@redis01 ~]# redis-cli -h ip_of_redis_server -p port_number_here

Sometimes you may need to flush the Redis cache. Before doing this, make sure you are connecting to the right instance of Redis since there could be multiple instances running. An example is below:

[root@redis01 ~]# redis-cli -h ip_of_redis_server -p port_number_here
FLUSHALL

To get some useful stats on Redis, run:

[root@redis01 ~]# redis-cli
info
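
Two other commands I find useful when troubleshooting: MONITOR streams every command hitting the instance in real time (use it sparingly on busy servers, as it adds overhead), and SLOWLOG lists recent slow commands. Add -a your_secure_password_here if requirepass is set:

[root@redis01 ~]# redis-cli monitor
[root@redis01 ~]# redis-cli slowlog get 10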

To get memory specific stats, run:

[root@redis01 ~]# redis-cli
info memory
127.0.0.1:6379> info memory
# Memory
used_memory:488315720
used_memory_human:465.69M
used_memory_rss:499490816
used_memory_peak:505227288
used_memory_peak_human:481.82M
used_memory_lua:36864
mem_fragmentation_ratio:1.02
mem_allocator:jemalloc-3.6.0

To increase the memory limit assigned to Redis without restarting the service, use config set. The example below dynamically increases the allocated memory from 1G to 2G:

[root@redis01 ~]# redis-cli
127.0.0.1:6379> config get maxmemory
1) "maxmemory"
2) "1000000000"
127.0.0.1:6379> config set maxmemory 2g
OK
127.0.0.1:6379> config get maxmemory
1) "maxmemory"
2) "2000000000"

Regarding performance issues with Redis, there are too many factors to account for in this article. The Redis project published an excellent article that goes into the various things that can cause latency with Redis at:
https://redis.io/topics/latency

How to install and configure Memcached

Memcached is commonly used to alleviate backend database contention by temporarily storing recently requested database records in memory. As a result, database traffic is reduced because Memcached can serve those records from its cache.

Installing Memcached is quick. However, there are a number of steps that need to be taken to secure it. Memcached has the potential to store a lot of sensitive information, so it's critical that the service is locked down properly to prevent outside access or data leakage.

Installation

# CentOS 6
[root@memcached01 ~]# yum install memcached
[root@memcached01 ~]# chkconfig memcached on
[root@memcached01 ~]# service memcached start

# CentOS 7
[root@memcached01 ~]# yum install memcached
[root@memcached01 ~]# systemctl enable memcached
[root@memcached01 ~]# systemctl start memcached

# Ubuntu 14.04 and 16.04
[root@memcached01 ~]# apt-get update
[root@memcached01 ~]# apt-get install memcached

Configuration

Memcached listens on port 11211 by default and does not have any restrictions built in to prevent it from being queried from the public internet. If you do not protect Memcached with a firewall, there is an extremely high risk of leaking sensitive data. In recent years, unprotected Memcached servers have also been exploited to launch DDoS amplification attacks.

If you do not have a dedicated firewall, use your OS's built-in firewall to only allow connections from trusted web servers using their internal IPs. Some quick examples are below:

# iptables
[root@memcached01 ~]# vim /etc/sysconfig/iptables
...
-A INPUT -p tcp -m tcp --dport 11211 -s client_server_private_IP -m comment --comment "memcached" -j ACCEPT
[root@memcached01 ~]# service iptables restart

# firewalld
[root@memcached01 ~]# firewall-cmd --permanent --new-zone=memcached
[root@memcached01 ~]# firewall-cmd --permanent --zone=memcached --add-port=11211/tcp
[root@memcached01 ~]# firewall-cmd --permanent --zone=memcached --add-source=client_server_private_IP
[root@memcached01 ~]# firewall-cmd --reload

# ufw
[root@memcached01 ~]# ufw allow from client_server_private_IP/32 to any port 11211

When configuring Memcached itself, there are a few options to set. In my case, to help limit exposure to the service, I want Memcached to listen only on my private network interface and to disable UDP. I'll also set max connections to 16384 and the max cache size to 1.5G (1536M). Adjust the connections and cache size as needed for your situation. Edit the appropriate file for your distribution:

# CentOS 6 and 7
[root@memcached01 ~]# vim /etc/sysconfig/memcached

# Ubuntu 14.04 and 16.04
[root@memcached01 ~]# vim /etc/memcached.conf

The configuration options to use to reflect the requirements above:

PORT="11211"
USER="memcached"
MAXCONN="16384"
CACHESIZE="1536"
OPTIONS="-l memcached_servers_private_IP -U 0"

Then restart Memcached:

[root@memcached01 ~]# service memcached restart
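
To confirm the daemon picked up the new limits, query the ‘stats settings’ output from one of the allowed web servers (the values shown are illustrative; udpport 0 indicates UDP is disabled):

[root@web01 ~]# echo stats settings | nc memcached_servers_private_IP 11211 | grep -E 'maxbytes|maxconns|udpport'
STAT maxbytes 1610612736
STAT maxconns 16384
STAT udpport 0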

Client setup

The typical use cases I run into on a day-to-day basis involve clients using Memcached with their PHP applications. Memcached can be used to store cached content, or it can be used to centrally store sessions, so these examples will be PHP focused.

Client setup – General data caching

For storing data, nothing needs to be configured on the client side; the application code itself controls what is stored in Memcached. To ensure Memcached can be reached to store content, create the following test script:

[root@web01 ~]# vim /var/www/html/test-memcached.php
<?php
if (class_exists('Memcache')) {
    $meminstance = new Memcache();
} else {
    $meminstance = new Memcached();
}

$meminstance->addServer("memcached_servers_private_IP", 11211);

$result = $meminstance->get("test");

if ($result) {
    echo $result;
} else {
    echo "No matching key found.  Refresh the browser to add it!";
    $meminstance->set("test", "Successfully retrieved the data!") or die("Couldn't save anything to memcached...");
}

Then open it in your browser. You can further confirm it works by running it from the command line and confirming ‘cmd_set’ increments by one, indicating it was able to store the object:

[root@web01 ~]# echo stats | nc localhost 11211 | grep cmd_set; curl localhost/test-memcached.php; echo stats | nc localhost 11211 |grep cmd_set
STAT cmd_set 2
No matching key found.  Refresh the browser to add it!STAT cmd_set 3

[root@web01 ~]# echo stats | nc localhost 11211 | grep cmd_set; curl localhost/test-memcached.php; echo stats | nc localhost 11211 |grep cmd_set
STAT cmd_set 3
Successfully retrieved the data!STAT cmd_set 3

Digital Ocean has a good article that goes into far more detail on various ways to test/use memcached:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-memcache-on-ubuntu-14-04

Client setup – Storing sessions in memcached

To have Memcached act as a central session store, some additional configuration is needed on each client web server. Install the PHP Memcached extension for your version of PHP. Assuming the default PHP version is installed from the package manager, you can install it by:

# Red Hat / CentOS:
[root@web01 ~]# yum install php-pecl-memcached
[root@web01 ~]# service httpd graceful
[root@web01 ~]# php -m |grep memcached

# Ubuntu
[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install php-memcached
[root@web01 ~]# service apache2 graceful
[root@web01 ~]# php -m |grep memcached

Then update php.ini on each web server as follows:

session.save_handler = memcached
session.save_path="memcached_servers_private_IP:11211?persistent=1&weight=1&timeout=1&retry_interval=15"

On CentOS and Red Hat servers, depending on what version of PHP was installed and how, you may have to update another file as it will override the php.ini. Only change this if the values exist and are configured for files:

[root@web01 ~]# vim /etc/httpd/conf.d/php.conf
php_value session.save_handler "memcached"
php_value session.save_path    "memcached_servers_private_IP:11211?persistent=1&weight=1&timeout=1&retry_interval=15"

Test to ensure sessions are now being stored in memcached:

[root@web01 ~]# vim /var/www/html/test-sessions.php
<?php
session_start();
?>
Created a session

Perform the test by running the following on the command line and confirming ‘cmd_set’ increases, indicating it was able to store the session:

[root@web01 ~]# echo stats | nc localhost 11211 | grep cmd_set; curl localhost/test-sessions.php; echo stats | nc localhost 11211 |grep cmd_set
STAT cmd_set 17
Created a session
STAT cmd_set 19

Client setup – Distributed sessions across multiple Memcached instances

Building solutions that can withstand failure is always recommended. Having a Memcached server that stores sessions go offline will most likely result in unhappy customers. Memcached does not have a built-in mechanism to replicate data between multiple Memcached servers; this functionality is instead handled on the client side.

To allow another Memcached server to take over connections without replicating the session data, update the php.ini on the web servers with the snippet below and restart Apache:

memcache.hash_strategy = consistent
session.save_handler = memcache
memcache.allow_failover = 1
session.save_path="memcached_servers_private_IP:11211?persistent=1&weight=1&timeout=1&retry_interval=15,memcached_servers_private_IP:11211?persistent=1&weight=1&timeout=1&retry_interval=15"

If you want automatic failover and also to ensure that the sessions are replicated to each Memcached server, update the php.ini on the web servers with the snippet below and restart Apache:

memcache.hash_strategy = consistent
memcache.session_redundancy=3
memcache.allow_failover = 1
session.save_handler = memcache
session.save_path="memcached_servers_private_IP:11211?persistent=1&weight=1&timeout=1&retry_interval=15,memcached_servers_private_IP:11211?persistent=1&weight=1&timeout=1&retry_interval=15"

Important note: To determine what memcache.session_redundancy should be set to, simply total up all the Memcached servers and add 1 to that total. So in the example above, I have 2 Memcached servers. Therefore, memcache.session_redundancy should be set to 3.

Troubleshooting

Confirm the web server can reach the memcached server:

[root@web01 ~]# telnet memcached_servers_private_IP 11211

Verify traffic is being sent from the web servers to the memcached server:

[root@web01 ~]# tcpdump -i any port 11211

Checking memcached stats:

[root@memcached01 ~]# echo stats | nc localhost 11211

To see some of the more commonly used stats, use:

[root@memcached01 ~]# echo stats | nc localhost 11211 |grep -E 'total_connections|curr_connections|limit_maxbytes|bytes'

Check whether you may need to increase the size of the cache. If ‘bytes’ is approaching the total memory allocation defined by ‘limit_maxbytes’, you may need to increase the CACHESIZE setting:

[root@memcached01 ~]# echo stats | nc localhost 11211 |grep -E 'bytes|limit_maxbytes' |grep -v bytes_ |grep -v _bytes

To flush all the data within memcache:

[root@memcached01 ~]# echo flush_all | nc localhost 11211

To retrieve the version of memcached:

[root@memcached01 ~]# echo version | nc localhost 11211

Ubuntu 16.04 Apache 2.4 with PHP-FPM

PHP-FPM does have some advantages depending on the solution, and the common path is to pair Nginx with PHP-FPM. However, what happens when you want to keep the familiar features of Apache, such as .htaccess files, but still have the tuning options that come with PHP-FPM? Well, there is a module for that!

This guide assumes a fresh Ubuntu 16.04 server to illustrate everything from start to finish, and that all sites on this server will use the same PHP-FPM pool.

First, install the required packages for your web server:

[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install php7.0-fpm apache2

Now confirm or update the Apache configuration to use the mpm_event_module instead of the mpm_prefork_module:

[root@web01 ~]# a2enmod actions
[root@web01 ~]# apache2ctl -M | grep mpm
[root@web01 ~]# a2dismod mpm_prefork
[root@web01 ~]# a2dismod mpm_worker
[root@web01 ~]# a2enmod mpm_event

Then tell Apache to send all PHP requests over to PHP-FPM by creating a new configuration file:

[root@web01 ~]# vim /etc/apache2/conf-available/php.conf
<FilesMatch \.php$>
	SetHandler "proxy:unix:/run/php/php7.0-fpm.sock|fcgi://localhost/"
</FilesMatch>

Enable the new Apache PHP configuration:

[root@web01 ~]# a2enconf php.conf

Confirm PHP-FPM is set to use sockets instead of TCP connections for performance purposes, and also confirm the following additional settings:

[root@web01 ~]# vim /etc/php/7.0/fpm/pool.d/www.conf
; listen = 127.0.0.1:9000
listen = /run/php/php7.0-fpm.sock
...
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
user = www-data
group = www-data

Enable FCGI proxy, then restart Apache and PHP-FPM to apply the changes above:

[root@web01 ~]# a2enmod proxy_fcgi
[root@web01 ~]# systemctl restart apache2
[root@web01 ~]# systemctl restart php7.0-fpm

If you are using a software firewall on the server, open ports 80/443 as needed. This example opens them to the world; adjust yours accordingly:

[root@web01 ~]# ufw allow 80
[root@web01 ~]# ufw allow 443

Finally, test a site to ensure PHP is working and is using PHP-FPM by creating the file below, then visiting the page at x.x.x.x/info.php:

[root@web01 ~]# vim /var/www/html/info.php
<?php phpinfo(); ?>
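
If PHP-FPM is handling the request, the phpinfo() output will report FPM/FastCGI as the Server API. A quick command-line check (assuming curl is installed):

[root@web01 ~]# curl -s localhost/info.php | grep -o 'FPM/FastCGI' | head -1
FPM/FastCGI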

And you're done!

Using multiple PHP-FPM pools

What happens if you want to isolate each site to its own PHP-FPM pool instead of using a shared pool? That is easy enough to do. Assuming that you followed everything in this guide to get to this point, do the following.

First, disable the global Apache configuration for PHP:

[root@web01 ~]# a2disconf php.conf

Create a new PHP-FPM pool for this specific site and update it accordingly:

[root@web01 ~]# cp /etc/php/7.0/fpm/pool.d/www.conf /etc/php/7.0/fpm/pool.d/example.com.conf
[root@web01 ~]# vim /etc/php/7.0/fpm/pool.d/example.com.conf
; Start a new pool named 'www'.
; the variable $pool can be used in any directive and will be replaced by the
; pool name ('www' here)
[example.com]
...
; listen = 127.0.0.1:9000
listen = /run/php/www.example.com-php7.0-fpm.sock
...
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
user = www-data
group = www-data

Then update the site’s Apache vhost to point to a new PHP-FPM pool in both the 80 and 443 stanzas. Be sure to update the socket accordingly for your site in the 2 sections below! (ie: unix:/run/php/www.example.com-php7.0-fpm.sock)

[root@web01 ~]# vim /etc/apache2/sites-enabled/example.com.conf
<VirtualHost *:80>
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot /var/www/vhosts/example.com

	# Send PHP requests to php-fpm
        <FilesMatch \.php$>
                SetHandler "proxy:unix:/run/php/www.example.com-php7.0-fpm.sock|fcgi://localhost/"
        </FilesMatch>

...
<VirtualHost *:443>
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot /var/www/vhosts/example.com

	# Send PHP requests to php-fpm
        <FilesMatch \.php$>
                SetHandler "proxy:unix:/run/php/www.example.com-php7.0-fpm.sock|fcgi://localhost/"
        </FilesMatch>
...

Enable FCGI proxy, then restart Apache and PHP-FPM to apply the changes above:

[root@web01 ~]# a2enmod proxy_fcgi
[root@web01 ~]# systemctl restart php7.0-fpm
[root@web01 ~]# systemctl restart apache2

Finally, test a site to ensure PHP is working and is using PHP-FPM by creating the file below, then visiting the page at example.com/info.php:

[root@web01 ~]# vim /var/www/vhosts/example.com/info.php
<?php phpinfo(); ?>

And you're done!

Ubuntu 14.04 Apache 2.4 with PHP-FPM

PHP-FPM does have some advantages depending on the solution, and the common path is to pair Nginx with PHP-FPM. However, what happens when you want to keep the familiar features of Apache, such as .htaccess files, but still have the tuning options that come with PHP-FPM? Well, there is a module for that!

This guide assumes a fresh Ubuntu 14.04 server to illustrate everything from start to finish, and that all sites on this server will use the same PHP-FPM pool.

First, install the required packages for your web server:

[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install php5-fpm apache2 libapache2-mod-fastcgi

Now update the Apache configuration to use the mpm_event_module instead of the mpm_prefork_module:

[root@web01 ~]# a2enmod actions
[root@web01 ~]# apache2ctl -M | grep mpm
[root@web01 ~]# a2dismod mpm_prefork
[root@web01 ~]# a2dismod mpm_worker
[root@web01 ~]# a2enmod mpm_event

Then tell Apache to send all PHP requests over to PHP-FPM by creating a new configuration file:

[root@web01 ~]# vim /etc/apache2/conf-available/php.conf

<IfModule mod_fastcgi.c>
        AddHandler php5.fcgi .php
        Action php5.fcgi /php5.fcgi
        Alias /php5.fcgi /usr/lib/cgi-bin/php5.fcgi
        FastCgiExternalServer /usr/lib/cgi-bin/php5.fcgi -socket /var/run/php-fpm.sock -pass-header Authorization -idle-timeout 3600
        <Directory /usr/lib/cgi-bin>
                Require all granted
        </Directory>
</IfModule>

Enable the new Apache PHP configuration:

[root@web01 ~]# a2enconf php.conf

Confirm PHP-FPM is set to use sockets instead of TCP connections for performance purposes, and also confirm the following additional settings:

[root@web01 ~]# vim /etc/php5/fpm/pool.d/www.conf
; listen = 127.0.0.1:9000
listen = /var/run/php-fpm.sock
...
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
user = www-data
group = www-data

Restart Apache and PHP-FPM to apply the changes:

[root@web01 ~]# service apache2 restart
[root@web01 ~]# service php5-fpm restart

If you are using a software firewall on the server, open ports 80/443 as needed. This example opens them to the world; adjust yours accordingly:

[root@web01 ~]# ufw allow 80
[root@web01 ~]# ufw allow 443

Finally, test a site to ensure PHP is working and is using PHP-FPM by creating the file below, then visiting the page at x.x.x.x/info.php:

[root@web01 ~]# vim /var/www/html/info.php
<?php phpinfo(); ?>

And you're done!

Using multiple PHP-FPM pools

What happens if you want to isolate each site to its own PHP-FPM pool instead of using a shared pool? That is easy enough to do. Assuming that you followed everything in this guide to get to this point, do the following.

First, disable the global Apache configuration for PHP:

[root@web01 ~]# a2disconf php.conf

Create a new PHP-FPM pool for this specific site and update it accordingly:

[root@web01 ~]# cp /etc/php5/fpm/pool.d/www.conf /etc/php5/fpm/pool.d/example.com.conf
[root@web01 ~]# vim /etc/php5/fpm/pool.d/example.com.conf
; listen = 127.0.0.1:9000
listen = /var/run/www.example.com-php5-fpm.sock
...
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
user = www-data
group = www-data

Then update the site’s Apache vhost to point to a new PHP-FPM pool in both the 80 and 443 stanzas. Be sure to update the socket accordingly for your site in the 2 sections below! (ie: -socket /var/run/www.example.com-php5-fpm.sock)

[root@web01 ~]# vim /etc/apache2/sites-enabled/example.com.conf
<VirtualHost *:80>
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot /var/www/vhosts/example.com

	# Send PHP requests to php-fpm
	<IfModule mod_fastcgi.c>
		AddHandler php5.fcgi .php
		Action php5.fcgi /php5.fcgi
		Alias /php5.fcgi /usr/lib/cgi-bin/php5.fcgi
		FastCgiExternalServer /usr/lib/cgi-bin/php5.fcgi -socket /var/run/www.example.com-php5-fpm.sock -pass-header Authorization -idle-timeout 3600
		<Directory /usr/lib/cgi-bin>
			Require all granted
		</Directory>
	</IfModule>
...

<VirtualHost *:443>
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot /var/www/vhosts/example.com

	# Send PHP requests to php-fpm
	<IfModule mod_fastcgi.c>
		AddHandler php5.fcgi .php
		Action php5.fcgi /php5.fcgi
		Alias /php5.fcgi /usr/lib/cgi-bin/php5.fcgi
		FastCgiExternalServer /usr/lib/cgi-bin/php5.fcgi -socket /var/run/www.example.com-php5-fpm.sock -pass-header Authorization -idle-timeout 3600
		<Directory /usr/lib/cgi-bin>
			Require all granted
		</Directory>
	</IfModule>
...

Then restart the services:

[root@web01 ~]# service php5-fpm restart
[root@web01 ~]# service apache2 restart
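
As a quick sanity check that the new pool created its own socket (the glob below assumes the socket naming used in this example), you can list the per-site sockets:

[root@web01 ~]# ls -l /var/run/*php5-fpm.sock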

Finally, test a site to ensure PHP is working and is using PHP-FPM by creating the file below, then visiting the page at example.com/info.php:

[root@web01 ~]# vim /var/www/vhosts/example.com/info.php
<?php phpinfo(); ?>

And you're done!

CentOS 7 Apache 2.4 with PHP-FPM

PHP-FPM does have some advantages depending on the solution, and the common path is to pair it with Nginx. But what happens when you want to keep the normal features of Apache, such as .htaccess files, while still having the tuning options that come with PHP-FPM? Well, there is a module for that!

This guide is going to assume a fresh CentOS 7 server to illustrate everything from start to finish, and will assume that all sites on this server will use the same php-fpm pool.

First, install the required packages for your web server:

[root@web01 ~]# yum install httpd httpd-tools mod_ssl php-fpm

Now update the Apache configuration to use the mpm_event_module instead of the mpm_prefork_module:

[root@web01 ~]# vim /etc/httpd/conf.modules.d/00-mpm.conf 
# LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
LoadModule mpm_event_module modules/mod_mpm_event.so
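
If you want to verify which MPM Apache will actually run with after this edit, httpd can report it directly; it should list the event MPM once the change above is in place:

[root@web01 ~]# httpd -V | grep -i 'Server MPM'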

Then tell Apache to send all PHP requests over to PHP-FPM by creating a new configuration file:

[root@web01 ~]# vim /etc/httpd/conf.d/php.conf

# Tell the PHP interpreter to handle files with a .php extension.

# Proxy declaration
<Proxy "unix:/var/run/php-fpm/default.sock|fcgi://php-fpm">
	# we must declare a parameter in here (doesn't matter which) or it'll not register the proxy ahead of time
    	ProxySet disablereuse=off
</Proxy>

# Redirect to the proxy
<FilesMatch \.php$>
	SetHandler proxy:fcgi://php-fpm
</FilesMatch>

#
# Allow php to handle Multiviews
#
AddType text/html .php

#
# Add index.php to the list of files that will be served as directory
# indexes.
#
DirectoryIndex index.php

#
# Uncomment the following lines to allow PHP to pretty-print .phps
# files as PHP source code:
#
#<FilesMatch \.phps$>
#	SetHandler application/x-httpd-php-source
#</FilesMatch>
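
The proxy directives above rely on mod_proxy and mod_proxy_fcgi, which the stock CentOS 7 httpd package normally loads by default. If you want to double-check they are present before restarting, list the loaded modules:

[root@web01 ~]# httpd -M | egrep 'proxy_module|proxy_fcgi'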

Tweak PHP-FPM to use sockets instead of TCP connections for performance purposes as follows:

[root@web01 ~]# vim /etc/php-fpm.d/www.conf
; listen = 127.0.0.1:9000
listen = /var/run/php-fpm/default.sock
...
listen.allowed_clients = 127.0.0.1
listen.owner = apache
listen.group = apache
listen.mode = 0660
user = apache
group = apache
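
Before starting the services, it can be worth asking PHP-FPM to validate its configuration; this parses php-fpm.conf along with every pool file under /etc/php-fpm.d/ and reports any syntax errors:

[root@web01 ~]# php-fpm -t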

And lastly, enable the services to start on boot and start them up:

[root@web01 ~]# systemctl enable php-fpm
[root@web01 ~]# systemctl enable httpd
[root@web01 ~]# systemctl start php-fpm
[root@web01 ~]# systemctl start httpd

If you are using a software firewall on the server, open ports 80/443. This example opens them up to the world; adjust the rules to suit your environment:

[root@web01 ~]# firewall-cmd --zone=public --permanent --add-service=http
[root@web01 ~]# firewall-cmd --zone=public --permanent --add-service=https
[root@web01 ~]# firewall-cmd --reload
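
To confirm the rules took effect after the reload, you can list the services now allowed in the public zone; http and https should both appear:

[root@web01 ~]# firewall-cmd --zone=public --list-services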

Finally, test a site to ensure PHP is working and is using PHP-FPM by creating the file below, then visiting the page at x.x.x.x/info.php:

[root@web01 ~]# vim /var/www/html/info.php
<?php phpinfo(); ?>
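
If you prefer checking from the command line instead of a browser, the phpinfo() output reports the Server API in use; seeing FPM/FastCGI confirms Apache is handing PHP requests off to PHP-FPM (replace x.x.x.x with your server's address):

[root@web01 ~]# curl -s http://x.x.x.x/info.php | grep -o 'FPM/FastCGI' | head -1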

And you're done!

Using multiple PHP-FPM pools

What happens if you want to isolate each site in its own PHP-FPM pool instead of using a shared pool? That is easy enough to do. Assuming you followed everything in this guide to get to this point, do the following.

First, disable the global Apache configuration for PHP:

[root@web01 ~]# mv /etc/httpd/conf.d/php.conf /etc/httpd/conf.d/php.conf.bak

Create a new PHP-FPM pool for this specific site and update it accordingly:

[root@web01 ~]# cp /etc/php-fpm.d/www.conf /etc/php-fpm.d/example.com.conf
[root@web01 ~]# vim /etc/php-fpm.d/example.com.conf
; listen = 127.0.0.1:9000
listen = /var/run/php-fpm/example.com.sock
...
listen.allowed_clients = 127.0.0.1
listen.owner = apache
listen.group = apache
listen.mode = 0660
user = apache
group = apache

Then update the site's Apache vhost to point to the new PHP-FPM pool in both the 80 and 443 stanzas. Be sure to update the socket accordingly for your site in the two sections below (i.e. unix:/var/run/php-fpm/example.com.sock).

[root@web01 ~]# vim /etc/httpd/vhost.d/example.com.conf
<VirtualHost *:80>
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot /var/www/vhosts/example.com

        # Proxy declaration
        <Proxy "unix:/var/run/php-fpm/example.com.sock|fcgi://php-fpm">
                # we must declare a parameter in here (doesn't matter which) or it'll not register the proxy ahead of time
                ProxySet disablereuse=off
                # Note: If you configure php-fpm to use the "ondemand" process manager, then use "ProxySet disablereuse=on"
        </Proxy>

        # Redirect to the proxy
        <FilesMatch \.php$>
                SetHandler proxy:fcgi://php-fpm
        </FilesMatch>
...
<VirtualHost *:443>
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot /var/www/vhosts/example.com

        # Proxy declaration
        <Proxy "unix:/var/run/php-fpm/example.com.sock|fcgi://php-fpm">
                # we must declare a parameter in here (doesn't matter which) or it'll not register the proxy ahead of time
                ProxySet disablereuse=off
                # Note: If you configure php-fpm to use the "ondemand" process manager, then use "ProxySet disablereuse=on"
        </Proxy>

        # Redirect to the proxy
        <FilesMatch \.php$>
                SetHandler proxy:fcgi://php-fpm
        </FilesMatch>
...

Then restart the services:

[root@web01 ~]# systemctl restart php-fpm
[root@web01 ~]# systemctl restart httpd
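
Once the services are back up, you can confirm the new pool spawned its own workers; PHP-FPM tags each worker process with the name of the pool it belongs to:

[root@web01 ~]# ps aux | grep 'php-fpm: pool'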

Finally, test a site to ensure PHP is working and is using PHP-FPM by creating the file below, then visiting the page at example.com/info.php:

[root@web01 ~]# vim /var/www/vhosts/example.com/info.php
<?php phpinfo(); ?>

And you're done!

CentOS 6 Apache 2.4 with PHP-FPM

PHP-FPM does have some advantages depending on the solution, and the common path is to pair it with Nginx. But what happens when you want to keep the normal features of Apache, such as .htaccess files, while still having the tuning options that come with PHP-FPM? Well, there is a module for that!

This guide is going to assume a fresh CentOS 6 server to illustrate everything from start to finish, and will assume that all sites on this server will use the same php-fpm pool.

Apache 2.2 has no native module for working with FastCGI, so the options would be to install mod_fastcgi from source or use an older SRPM from repos that may not be well known or maintained. As both of those options are less than ideal, we will install Apache 2.4 from the IUS repository to avoid the patch management issues associated with source installations.

First, install the repos needed for the updated packages:

[root@web01 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@web01 ~]# rpm -ivh https://dl.iuscommunity.org/pub/ius/stable/CentOS/6/x86_64/ius-release-1.0-15.ius.centos6.noarch.rpm

Then install the required packages for your web server:

[root@web01 ~]# yum install httpd24u php56u-fpm

Now update the Apache configuration to use the mpm_event_module instead of the mpm_prefork_module:

[root@web01 ~]# vim /etc/httpd/conf.modules.d/00-mpm.conf 
# LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
LoadModule mpm_event_module modules/mod_mpm_event.so

Then tell Apache to send all PHP requests over to PHP-FPM by creating a new configuration file:

[root@web01 ~]# vim /etc/httpd/conf.d/php.conf

# Tell the PHP interpreter to handle files with a .php extension.

<Proxy "unix:/var/run/php-fpm/default.sock|fcgi://php-fpm">
	# we must declare a parameter in here (doesn't matter which) or it'll not register the proxy ahead of time
	# Note: If you configure php-fpm to use the "ondemand" process manager, then use "ProxySet disablereuse=on"
	ProxySet disablereuse=off
</Proxy>

# Redirect to the proxy
<FilesMatch \.php$>
	SetHandler proxy:fcgi://php-fpm
</FilesMatch>

Tweak PHP-FPM to use sockets instead of TCP connections for performance purposes as follows:

[root@web01 ~]# vim /etc/php-fpm.d/www.conf
; listen = 127.0.0.1:9000
listen = /var/run/php-fpm/default.sock
...
listen.owner = apache
listen.group = apache
listen.mode = 0660
user = apache
group = apache

Enable the services to start on boot and start them up:

[root@web01 ~]# chkconfig php-fpm on
[root@web01 ~]# chkconfig httpd on
[root@web01 ~]# service php-fpm start
[root@web01 ~]# service httpd start

If you are using a software firewall on the server, open ports 80/443. This example opens them up to the world; adjust the rules to suit your environment:

[root@web01 ~]# vim /etc/sysconfig/iptables
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT 
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT 
[root@web01 ~]# service iptables restart

Finally, test a site to ensure PHP is working and is using PHP-FPM by creating the file below, then visiting the page at x.x.x.x/info.php:

[root@web01 ~]# vim /var/www/html/info.php
<?php phpinfo(); ?>

And you're done!

Using multiple PHP-FPM pools

What happens if you want to isolate each site in its own PHP-FPM pool instead of using a shared pool? That is easy enough to do. Assuming you followed everything in this guide to get to this point, do the following.

First, disable the global Apache configuration for PHP:

[root@web01 ~]# mv /etc/httpd/conf.d/php.conf /etc/httpd/conf.d/php.conf.bak

Create a new PHP-FPM pool for this specific site and update it accordingly:

[root@web01 ~]# cp /etc/php-fpm.d/www.conf /etc/php-fpm.d/example.com.conf
[root@web01 ~]# vim /etc/php-fpm.d/example.com.conf
; Start a new pool named 'example.com'.
; the variable $pool can be used in any directive and will be replaced by the
; pool name ('example.com' here)
[example.com]
...
; listen = 127.0.0.1:9000
listen = /var/run/php-fpm/example.com.sock
...
listen.owner = apache
listen.group = apache
listen.mode = 0660
user = apache
group = apache

Then update the site's Apache vhost to point to the new PHP-FPM pool in both the 80 and 443 stanzas. Be sure to update the socket accordingly for your site in the two sections below (i.e. unix:/var/run/php-fpm/example.com.sock).

[root@web01 ~]# vim /etc/httpd/vhost.d/example.com.conf
<VirtualHost *:80>
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot /var/www/vhosts/example.com

	# Send PHP requests to php-fpm
	<Proxy "unix:/var/run/php-fpm/example.com.sock|fcgi://php-fpm">
		# we must declare a parameter in here (doesn't matter which) or it'll not register the proxy ahead of time
		# Note: If you configure php-fpm to use the "ondemand" process manager, then use "ProxySet disablereuse=on"
		ProxySet disablereuse=off
	</Proxy>

	# Redirect to the proxy
	<FilesMatch \.php$>
		SetHandler proxy:fcgi://php-fpm
	</FilesMatch>
...
<VirtualHost *:443>
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot /var/www/vhosts/example.com

	# Send PHP requests to php-fpm
	<Proxy "unix:/var/run/php-fpm/example.com.sock|fcgi://php-fpm">
		# we must declare a parameter in here (doesn't matter which) or it'll not register the proxy ahead of time
		# Note: If you configure php-fpm to use the "ondemand" process manager, then use "ProxySet disablereuse=on"
		ProxySet disablereuse=off
	</Proxy>

	# Redirect to the proxy
	<FilesMatch \.php$>
		SetHandler proxy:fcgi://php-fpm
	</FilesMatch>
...

Then restart the services:

[root@web01 ~]# service php-fpm restart
[root@web01 ~]# service httpd restart

Finally, test a site to ensure PHP is working and is using PHP-FPM by creating the file below, then visiting the page at example.com/info.php:

[root@web01 ~]# vim /var/www/vhosts/example.com/info.php
<?php phpinfo(); ?>

And you're done!

IO Scheduler tuning

What is an I/O scheduler? The I/O scheduler is a kernel-level tunable whose purpose is to optimize disk access requests. Traditionally this is critical for spinning disks, as I/O requests can be grouped together to avoid “seeking”.

Different I/O schedulers have their pros and cons, so choosing which one to use depends on the type of environment and workload. There is no single right I/O scheduler; it simply depends, and benchmarking your application before and after the change is usually your best indicator. The good news is that the I/O scheduler can be changed at runtime and can be configured to persist across reboots.

The three common I/O schedulers are:
– noop
– deadline
– cfq
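
Before changing anything, it can be useful to see which scheduler each block device is currently using; the scheduler shown in square brackets is the active one. A quick loop over sysfs (device names will vary per system) is shown below:

[root@db01 ~]# for f in /sys/block/*/queue/scheduler; do echo "$f: $(cat $f)"; done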

noop

The noop I/O scheduler is optimized for systems that don’t need an I/O scheduler such as VMware, AWS EC2, Google Cloud, Rackspace public cloud, etc. Since the hypervisor already controls the I/O scheduling, it doesn’t make sense for the VM to waste CPU cycles on it. The noop I/O scheduler simply works as a FIFO (First In First Out) queue.

You can update the I/O scheduler to noop by:

## CentOS 6

# Change at runtime
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq] 
[root@db01 ~]# echo 'noop' > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
[noop] anticipatory deadline cfq

# Change at boot time by appending 'elevator=noop' to the end of the kernel line:
[root@db01 ~]# vim /boot/grub/grub.conf
kernel /vmlinuz-2.6.9-67.EL ro root=/dev/vg0/lv0 elevator=noop


## CentOS 7

# Change at run time
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@db01 ~]# echo 'noop' > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
[noop] deadline cfq

# Change at boot time by appending 'elevator=noop' to the end of the following line, then rebuild the grub config:
[root@db01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel00/root rd.lvm.lv=rhel00/swap elevator=noop"
...
[root@db01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg


## Ubuntu 14.04

# Change at runtime
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@db01 ~]# echo noop > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
[noop] deadline cfq

# Change at boot time by appending 'elevator=noop' to the end of the following line, then rebuild the grub config:
[root@db01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=noop"
...
[root@db01 ~]# grub-mkconfig -o /boot/grub/grub.cfg


## Ubuntu 16.04

# Change at runtime
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@db01 ~]# echo noop > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
[noop] deadline cfq

# Change at boot time by appending 'elevator=noop' to the end of the following line, then rebuild the grub config:
[root@db01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=noop"
...
[root@db01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

deadline

The deadline I/O scheduler is optimized by default for read-heavy workloads like MySQL. It places each I/O request in a read queue or a write queue and assigns the request a deadline. Requests in the read queue have 500ms (by default) to execute before they are given the highest priority to run; requests in the write queue have 5000ms.

This deadline assigned to each I/O request is what makes the deadline I/O scheduler well suited for read-heavy workloads like MySQL.
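
While the deadline scheduler is active, the 500ms and 5000ms expiry values mentioned above are exposed as per-device sysfs tunables (values are in milliseconds; the paths below assume /dev/sda), so you can inspect or adjust them:

[root@db01 ~]# cat /sys/block/sda/queue/iosched/read_expire
500
[root@db01 ~]# cat /sys/block/sda/queue/iosched/write_expire
5000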

You can update the I/O scheduler to deadline by:

## CentOS 6

# Change at runtime
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq] 
[root@db01 ~]# echo 'deadline' > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq

# Change at boot time by appending 'elevator=deadline' to the end of the kernel line:
[root@db01 ~]# vim /boot/grub/grub.conf
kernel /vmlinuz-2.6.9-67.EL ro root=/dev/vg0/lv0 elevator=deadline


## CentOS 7

# Change at run time
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@db01 ~]# echo 'deadline' > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq

# Change at boot time by appending 'elevator=deadline' to the end of the following line, then rebuild the grub config:
[root@db01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel00/root rd.lvm.lv=rhel00/swap elevator=deadline"
...
[root@db01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg


## Ubuntu 14.04

# Change at runtime
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@db01 ~]# echo deadline > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq

# Change at boot time by appending 'elevator=deadline' to the end of the following line, then rebuild the grub config:
[root@db01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=deadline"
...
[root@db01 ~]# grub-mkconfig -o /boot/grub/grub.cfg


## Ubuntu 16.04

# Change at runtime
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@db01 ~]# echo deadline > /sys/block/sda/queue/scheduler
[root@db01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq

# Change at boot time by appending 'elevator=deadline' to the end of the following line, then rebuild the grub config:
[root@db01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=deadline"
...
[root@db01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

cfq

The cfq I/O scheduler is probably best geared towards systems running GUIs (like a desktop) where each process needs a fast response. The goal of the cfq (Complete Fairness Queueing) I/O scheduler is to give a fair allocation of disk I/O bandwidth to all processes that request I/O operations.
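
A practical side effect of cfq is that it honors per-process I/O priorities, so ionice can be used to favor or throttle individual processes; this has no effect under noop or deadline (the PID below is just a placeholder):

[root@server01 ~]# ionice -c2 -n0 -p 1234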

You can update the I/O scheduler to cfq by:

## CentOS 6

# Change at runtime
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq 
[root@server01 ~]# echo 'cfq' > /sys/block/sda/queue/scheduler
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]

# Change at boot time by appending 'elevator=cfq' to the end of the kernel line:
[root@server01 ~]# vim /boot/grub/grub.conf
kernel /vmlinuz-2.6.9-67.EL ro root=/dev/vg0/lv0 elevator=cfq


## CentOS 7

# Change at run time
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@server01 ~]# echo 'cfq' > /sys/block/sda/queue/scheduler
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

# Change at boot time by appending 'elevator=cfq' to the end of the following line, then rebuild the grub config:
[root@server01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel00/root rd.lvm.lv=rhel00/swap elevator=cfq"
...
[root@server01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg


## Ubuntu 14.04

# Change at runtime
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@server01 ~]# echo cfq > /sys/block/sda/queue/scheduler
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

# Change at boot time by appending 'elevator=cfq' to the end of the following line, then rebuild the grub config:
[root@server01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=cfq"
...
[root@server01 ~]# grub-mkconfig -o /boot/grub/grub.cfg


## Ubuntu 16.04

# Change at runtime
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@server01 ~]# echo cfq > /sys/block/sda/queue/scheduler
[root@server01 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

# Change at boot time by appending 'elevator=cfq' to the end of the following line, then rebuild the grub config:
[root@server01 ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=cfq"
...
[root@server01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

As with any performance tuning recommendation, there is never a one-size-fits-all solution! Always benchmark your application to establish a baseline before making the change. Once the change has been made, run the same benchmark and compare the results to confirm it had the desired outcome.

Disabling Transparent Huge Pages in Linux

Transparent Huge Pages (THP) is a Linux memory management system that reduces the overhead of Translation Lookaside Buffer (TLB) lookups on machines with large amounts of memory by using larger memory pages.

However, database workloads often perform poorly with THP because they tend to have sparse rather than contiguous memory access patterns. The general recommendation for MySQL, MongoDB, Oracle and others is to disable THP on Linux machines to ensure the best performance.
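
A quick way to see whether THP is actually in use on a running system is to check how much anonymous memory is currently backed by huge pages; a non-zero value means THP pages are being used:

[root@db01 ~]# grep AnonHugePages /proc/meminfo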

You can check to see if THP is enabled or not by running:

[root@db01 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
[root@db01 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never

If the result shows [never], then THP is disabled. However, if the result shows [always], then THP is enabled.

You can disable THP at runtime on CentOS 6/7 and Ubuntu 14.04/16.04 by running:

[root@db01 ~]# echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
[root@db01 ~]# echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag

However, once the system reboots, the setting will revert to its default value. To make it persistent on CentOS 7 and Ubuntu 16.04, you can disable THP at system startup with a systemd unit file:

# CentOS 7 / Ubuntu 16.04:
[root@db01 ~]# vim /etc/systemd/system/disable-thp.service
[Unit]
Description=Disable Transparent Huge Pages (THP)

[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"

[Install]
WantedBy=multi-user.target

[root@db01 ~]# systemctl daemon-reload
[root@db01 ~]# systemctl start disable-thp
[root@db01 ~]# systemctl enable disable-thp

On CentOS 6 and Ubuntu 14.04, you can disable THP on system startup by adding the following to /etc/rc.local. If this is on Ubuntu 14.04, make sure it's added before the ‘exit 0’ line:

# CentOS 6 / Ubuntu 14.04
[root@db01 ~]# vim /etc/rc.local
...
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
   echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
   echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
...