Restricting access to directories on Apache websites

There are many ways to go about restricting access to specific content within Apache. This article is simply going to show a couple of examples.

Some of these examples show how to restrict access to a directory with a username and password. For this guide, the htpasswd file will be placed in /etc/httpd/example-htpasswd. You can create a username and password for it with the external third-party site http://www.htaccesstools.com/htpasswd-generator, or you can use the built-in htpasswd tool.
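
If you go the htpasswd route, a quick sketch is below; the username is just an example, and the -c flag creates the file, so drop it when adding users to an existing file:

[root@web01 ~]# htpasswd -c /etc/httpd/example-htpasswd exampleuser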

A basic example for password protecting an entire website is below:

[root@web01 ~]# vim /etc/httpd/vhost.d/example.com.conf
<VirtualHost *:80>
ServerName example.com
ServerAlias www.example.com
DocumentRoot /var/www/vhosts/example.com
<Directory /var/www/vhosts/example.com>
	Options -Indexes +FollowSymLinks -MultiViews
	AllowOverride All

        # Password protect site
	AuthType Basic
	AuthName "Restricted"
	AuthUserFile /etc/httpd/example-htpasswd
	Require valid-user
</Directory>
...

If you want to allow only specific IPs or networks in without a password and require everyone else on the internet to supply a username/password:

[root@web01 ~]# vim /etc/httpd/vhost.d/example.com.conf
<VirtualHost *:80>
ServerName example.com
ServerAlias www.example.com
DocumentRoot /var/www/vhosts/example.com
<Directory /var/www/vhosts/example.com>
	Options -Indexes +FollowSymLinks -MultiViews
	AllowOverride All

	# Allow these IPs in without a password
	Allow from 127.0.0.1
	Allow from 1.2.3.4
	Allow from 192.168.1.0/24

	AuthType Basic
	AuthName "Restricted"
	AuthUserFile /etc/httpd/example-htpasswd
	Require valid-user

	# Allow password-less access for allowed IPs
	Satisfy any
</Directory>
...
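
One way to sanity check the behavior from a host that is not in the allow list is with curl. Without credentials you should see a 401, and passing a valid htpasswd user with -u should get you past the authentication prompt (exact responses will vary with your site):

[root@workstation ~]# curl -I http://example.com/
HTTP/1.1 401 Unauthorized
[root@workstation ~]# curl -I -u exampleuser http://example.com/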

Below is an example for password protecting WordPress’s wp-admin page via an .htaccess file:

[root@web01 ~]# vim /var/www/vhosts/example.com/wp-admin/.htaccess
# Password protect wp-admin
<Files admin-ajax.php>
    Order allow,deny
    Allow from all
    Satisfy any
</Files>
AuthType Basic
AuthName "Restricted"
AuthUserFile /etc/httpd/example-htpasswd
Require valid-user

Here is one that restricts access to example.com/admin by only allowing in specific IPs:

[root@web01 ~]# vim /var/www/vhosts/example.com/admin/.htaccess
order deny,allow
deny from all
allow from 1.2.3.4
allow from 192.168.1.0/24

On Apache 2.4, here is how you can password protect an entire website while excluding one URI. This is useful if you use something like CakePHP or Laravel where the physical directory doesn’t exist and everything is routed through the index.php file. In this example, any requests to example.com/test will not require a password, but anything else on example.com will require a username and password:

[root@web01 ~]# vim /etc/httpd/vhost.d/example.com.conf
<VirtualHost *:80>
ServerName example.com
ServerAlias www.example.com
DocumentRoot /var/www/vhosts/example.com/current/public
<Directory /var/www/vhosts/example.com/current/public>
	Options -Indexes +FollowSymLinks -MultiViews
	AllowOverride All
</Directory>

<Location "/">
	# Password protect site
	AuthType Basic
	AuthName "Restricted"
	AuthUserFile /etc/httpd/example-htpasswd
	Require valid-user

	# If the request goes to /test: bypass basic auth
	SetEnvIf Request_URI ^/test$ noauth=1
	Allow from env=REDIRECT_noauth
	Allow from env=noauth

	Order Deny,Allow
	Satisfy any
	Deny from all
</Location>
...

What happens when you want to password protect an aliased site? For instance, I have 2 domains, www.example1.com and www.example2.com. www.example2.com is simply a ServerAlias defined within /etc/httpd/vhost.d/www.example1.com.conf. How can you go about password protecting www.example2.com without affecting www.example1.com? Simply add the following to the bottom of the site's .htaccess:

[root@web01 ~]# vim /var/www/vhosts/www.example1.com/.htaccess
...
SetEnvIfNoCase Host example2\.com$ require_auth=true

AuthUserFile /etc/httpd/example2.com-htpasswd
AuthName "Password Protected"
AuthType Basic
Require valid-user
Order Deny,Allow
Satisfy any
Deny from all

Allow from env=!require_auth
...

Summarizing the MySQL Slow Query log

The MySQL slow query log is great for showing you which queries are heavy on the CPUs, which is most likely impacting your website's performance. When looking at the slow query log, it can appear to be an unorganized mess of random queries. So how do you summarize this list so you know which queries to start fixing first? Welcome mysqldumpslow!

By default, MySQL comes with a tool called mysqldumpslow. Its primary purpose is to parse the slow query log and group similar queries together. This allows you to quickly see which queries need your attention in the slow query log.

If the slow query log is not already enabled, follow the older article I have here:
https://www.stephenrlang.com/2016/02/mysql-slow-query-log/

Assuming the slow query log is already enabled, locate the slow query log by running:

[root@db01 ~]# grep slow-query-log-file /etc/my.cnf
slow-query-log-file = /var/lib/mysql/slow-log
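
You can also confirm the same information from within MySQL itself; the variables below show whether the slow query log is enabled and where it is being written:

[root@db01 ~]# mysql
mysql> show variables like 'slow_query_log%';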

Queries that return a lot of rows and run often are typically the cause of CPU contention within MySQL. These CPU heavy queries may be able to benefit from a LIMIT clause, a table index, or simply being rewritten so they return fewer rows.
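
As a quick illustration of that kind of review, running EXPLAIN against a suspect query shows whether an index is being used; the table and column names below are made up for the example:

[root@db01 ~]# mysql
mysql> EXPLAIN SELECT * FROM orders WHERE customer_id = 42\G

If the key column comes back NULL, the query is doing a full table scan and is a good candidate for an index or a rewrite.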

To display the top 10 queries that returned the highest number of rows, run:

[root@db01 ~]# mysqldumpslow -a -s r -t 10 /var/lib/mysql/slow-log

Sometimes you just want to see which queries show up in the slow query log most often. Many times these are something unexpected and easily fixable by the developers.

To display the top 10 queries sorted by how many times they appear in the log, run:

[root@db01 ~]# mysqldumpslow -a -s c -t 10 /var/lib/mysql/slow-log

The data returned will show:

Count - The number of times the query has been logged
Time - The average query time, followed by the total time in parentheses
Lock - The average table lock time, followed by the total in parentheses
Rows - The average number of rows returned, followed by the total in parentheses

Below is one entry returned by the mysqldumpslow command that should give you a rough idea what information will be presented to you:

[root@db01 ~]# mysqldumpslow -a -s c -t 10 /var/lib/mysql/slow-log
...
Count: 4201  Time=68.34s (278868s)  Lock=0.00s (0s)  Rows=998512 (4194748912), root[root]@localhost
  SELECT PL.pl_title, P.page_title
  FROM page P
  INNER JOIN pagelinks PL
  ON PL.pl_namespace = P.page_namespace
  WHERE P.page_namespace = N
...

Enabling HTTP/2 on Nginx

The HTTP/2 protocol is the latest craze with web servers at the moment. The updated protocol has many performance enhancements over the older HTTP/1.1 protocol, mainly because requests are multiplexed over a single connection and downloaded in parallel.

While HTTP/2 will work with non-SSL websites, popular browsers such as Firefox and Chrome only support HTTP/2 for SSL-enabled websites. So how do we get started with HTTP/2 on Nginx?

Nginx makes HTTP/2 simple! The HTTP/2 protocol has been included in Nginx since version 1.9.5, so it works in CentOS 6 and 7 as well as Ubuntu 14.04, 16.04 and 18.04.

A special note about Ubuntu 14.04, which is set to go EOL in April 2019: the default Nginx package that gets installed via apt is very old (v1.4.6) and therefore does not support HTTP/2. However, if you install Nginx from the ppa:nginx/stable PPA, you can make use of HTTP/2. You can install Nginx from the PPA by:

# This applies to Ubuntu 14.04 ONLY.
[root@web01-ubuntu1404 ~]# apt-get install python-software-properties
[root@web01-ubuntu1404 ~]# add-apt-repository ppa:nginx/stable
[root@web01-ubuntu1404 ~]# apt-get update
[root@web01-ubuntu1404 ~]# apt-get install nginx

For CentOS 6 and 7, you can install Nginx from EPEL. On Ubuntu 16.04 and 18.04, HTTP/2 will work fine with the default Nginx packages provided by Ubuntu.

From here, enabling HTTP/2 is as easy as updating the listen directive within the Nginx vhost config to include ‘http2’ as shown below:

[root@web01 ~]# vim /etc/nginx/conf.d/example.com.conf
...
listen 443 ssl http2;
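
For context, a minimal sketch of how that directive might sit inside an existing SSL server block is below; the certificate paths are placeholders for whatever your site already uses:

server {
    listen 443 ssl http2;
    server_name www.example.com;

    # Existing SSL configuration (placeholder paths)
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    ...
}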

Now check the Nginx configuration to ensure there are no problems:

[root@web01 ~]# nginx -t

Finally, restart Nginx to apply the change:

[root@web01 ~]# service nginx restart

You can test to ensure it's working by checking for the protocol in the response headers as shown below:

[root@workstation ~]# curl -IL https://www.example.com
...
HTTP/2 200
...

Centralized mail relay server on CentOS 7

Let's say you have dozens or hundreds of servers that all need to send mail directly out to the internet. This becomes a headache, as you need to open up your firewall to allow all of these servers outbound access over port 25, and the mail logs are scattered across all those servers.

Having a centralized mail relay server solves this by serving as a central location for mail logs and by requiring outbound port 25 access for only one server. All the other servers simply send their mail to this central relay server, which alleviates the need for unnecessary outbound access on those other nodes.

This guide will discuss how to set up a centralized mail relay server for the sole purpose of sending outbound email only. The servers used as examples in this guide will be:

smtp-relay001.example.com - 192.168.1.100
web01.example.com - 192.168.1.101
web02.example.com - 192.168.1.102

There are some basic prerequisites that must be met before beginning, to help ensure successful email delivery:

  • The hostname of the relay server must be an FQDN, e.g. smtp-relay001.example.com
  • There must be a corresponding A record set up in DNS that matches the hostname
  • There must be a corresponding PTR record (reverse DNS) set up that matches the hostname
  • Set up an SPF record in DNS for your central mail relay server
  • Ensure your relay server is configured to ONLY accept mail from your private network to prevent it from becoming an open relay!

To reiterate the last point, ensure that your central mail relay server ONLY accepts mail from your private network. Opening it up to the world makes you an open relay which will get you blacklisted quickly. Use a dedicated firewall to block inbound 25 and 587 access to the relay server for added protection against a configuration error.
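
A quick way to sanity check the A and PTR record prerequisites is with dig, using the example hostname from this guide (substitute your relay's public IP where appropriate); the two lookups should match each other:

[root@smtp-relay001 ~]# dig +short smtp-relay001.example.com
[root@smtp-relay001 ~]# dig +short -x 192.168.1.100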

Setup central mail relay server (smtp-relay001.example.com)

First, confirm your hostname is set up properly:

[root@smtp-relay001 ~]# vim /etc/hosts
...
192.168.1.100 smtp-relay001.example.com smtp-relay001
...

[root@smtp-relay001 ~]# hostnamectl set-hostname smtp-relay001.example.com
[root@smtp-relay001 ~]# hostname smtp-relay001.example.com
[root@smtp-relay001 ~]# systemctl restart rsyslog

Now install postfix if it is not already installed:

[root@smtp-relay001 ~]# yum install postfix
[root@smtp-relay001 ~]# systemctl enable postfix

Set postfix to listen on your private IP address and only answer to servers within your network, which in my case is the 192.168.1.0/24 network:

[root@smtp-relay001 ~]# vim /etc/postfix/main.cf
...
inet_interfaces = 192.168.1.100
mydestination = $myhostname, localhost.$mydomain, $mydomain
mynetworks = 192.168.1.0/24, 127.0.0.0/8
...

Then set up an SSL certificate for use with TLS:

[root@smtp-relay001 ~]# openssl genrsa -out /etc/postfix/server.key 2048
[root@smtp-relay001 ~]# openssl req -new -x509 -key /etc/postfix/server.key -out /etc/postfix/server.crt -days 3650
[root@smtp-relay001 ~]# chmod 600 /etc/postfix/server.key

Add the following TLS configuration to the bottom of the postfix configuration:

[root@smtp-relay001 ~]# vim /etc/postfix/main.cf
...
# Enable TLS
smtpd_use_tls = yes
smtpd_tls_key_file = /etc/postfix/server.key
smtpd_tls_cert_file = /etc/postfix/server.crt
smtpd_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_tls_session_cache_timeout = 3600s
tls_random_source = dev:/dev/urandom
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1

Next, set the server to accept TLS connections on the submission port (587) by:

[root@smtp-relay001 ~]# vim /etc/postfix/master.cf
...
submission inet n       -       n       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
...

Confirm postfix syntax looks good:

[root@smtp-relay001 ~]# postfix check

Now restart Postfix to apply the changes:

[root@smtp-relay001 ~]# systemctl restart postfix

Finally, open up the software firewall (or the dedicated firewall) to allow inbound 25 and 587 requests from other servers within your private network by:

# Firewalld
[root@smtp-relay001 ~]# firewall-cmd --permanent --new-zone=postfix
[root@smtp-relay001 ~]# firewall-cmd --permanent --zone=postfix --add-port=25/tcp
[root@smtp-relay001 ~]# firewall-cmd --permanent --zone=postfix --add-port=587/tcp
[root@smtp-relay001 ~]# firewall-cmd --permanent --zone=postfix --add-source=192.168.1.0/24
[root@smtp-relay001 ~]# firewall-cmd --reload

# iptables
[root@smtp-relay001 ~]# vim /etc/sysconfig/iptables
...
-A INPUT -p tcp -m tcp --dport 25 -s 192.168.1.0/24 -m comment --comment "postfix" -j ACCEPT
-A INPUT -p tcp -m tcp --dport 587 -s 192.168.1.0/24 -m comment --comment "postfix" -j ACCEPT
...
[root@smtp-relay001 ~]# service iptables restart
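
Once the firewall changes are in place, you can confirm from one of the client servers that the submission port is reachable and offering TLS; a quick check with openssl is below. If everything is working, the output should include the self-signed certificate generated earlier:

[root@web01 ~]# openssl s_client -starttls smtp -connect 192.168.1.100:587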

Setup client servers running postfix to relay through smtp-relay001

First, install postfix if it is not already installed, and enable it:

[root@web01 ~]# yum install postfix
[root@web01 ~]# systemctl enable postfix

Configure postfix to listen only on the loopback interface, accept mail only from localhost, and relay everything through smtp-relay001:

[root@web01 ~]# vim /etc/postfix/main.cf
...
inet_interfaces = loopback-only
mydestination= # leave blank
myhostname = ENTER_SERVER_HOSTNAME_HERE
mynetworks=127.0.0.0/8 [::1]/128
myorigin = $myhostname
relayhost = 192.168.1.100
local_transport=error: local delivery disabled
...

Confirm postfix syntax looks good:

[root@web01 ~]# postfix check

Restart postfix to apply the changes:

[root@web01 ~]# systemctl restart postfix

Confirm email can be sent outbound by sending a test message, then check the mail logs to ensure you see it relay through the relay server:

[root@web01 ~]# yum install mailx
[root@web01 ~]# echo "Testing" | mail -s "Test from web01" user@example.com
[root@web01 ~]# tail -f /var/log/maillog

Setup client servers running sendmail to relay through smtp-relay001

While I rarely run across sendmail nowadays, there are still some servers using it. If one of your servers is running sendmail, you can set the relay host by replacing the empty DS line with DS192.168.1.100 in your sendmail configuration, as shown below:

[root@web01 ~]# vim /etc/mail/sendmail.cf
...
DS192.168.1.100
...
[root@web01 ~]# service sendmail restart

Confirm email can be sent outbound by sending a test message, then check the mail logs to ensure you see it relay through the relay server:

[root@web01 ~]# yum install mailx
[root@web01 ~]# echo "Testing" | mail -s "Test from web01" user@example.com
[root@web01 ~]# tail -f /var/log/maillog  # or /var/log/mail.log

DNS setup with bind on CentOS 7

Maybe you need a private DNS server on an internal network or maybe you just want to learn more about DNS. Setting up a pair of DNS servers is not too complicated and can be useful in certain situations.

This guide will outline how to set up 2 DNS servers on CentOS 7, one being the primary and the other being the secondary DNS server. It will make use of the following servers:

dns01.example.com (192.168.1.101) - Primary DNS server
dns02.example.com (192.168.1.102) - Secondary DNS server

Before beginning, make sure you have the latest updates on both servers by running:

[root@dns01 ~]# yum update
[root@dns01 ~]# reboot

Primary DNS server setup

First, install bind by running:

[root@dns01 ~]# yum install bind bind-utils

There are a couple of key settings that need to be customized to fit your needs:

- trusted-recursion : Which IPs or subnets you want to allow to perform recursive lookups.
- forwarders : Specify a pair of DNS servers to act as forwarders.
- zonefile : Set up your zone files

The instructions below will create a new named.conf with the following setup:

- trusted-recursion : Allow recursion from 192.168.1.0/24
- forwarders : Will use Google's DNS servers, 8.8.8.8 and 8.8.4.4
- zonefile : We will be setting up example.com and allowing zone transfers to dns02.example.com (192.168.1.102).  We'll also be setting up reverse DNS.

Adjust the settings to meet your environment's needs accordingly. The primary fields to change are the ACL, the forwarders, and the zone definitions at the bottom:

[root@dns01 ~]# mv /etc/named.conf /etc/named.conf.orig 
[root@dns01 ~]# vim /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
// See the BIND Administrator's Reference Manual (ARM) for details about the
// configuration located in /usr/share/doc/bind-{version}/Bv9ARM.html

acl "trusted-recursion" {
        localhost;
        localnets;
        192.168.1.0/24;
};


options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query     { any; };
        allow-recursion { trusted-recursion; };
        allow-query-cache { trusted-recursion; };
        /*
         - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
         - If you are building a RECURSIVE (caching) DNS server, you need to enable
           recursion.
         - If your recursive DNS server has a public IP address, you MUST enable access
           control to limit queries to your legitimate users. Failing to do so will
           cause your server to become part of large scale DNS amplification
           attacks. Implementing BCP38 within your network would greatly
           reduce such attack surface
        */
        /* recursion yes; */

        dnssec-enable yes;
        dnssec-validation no;

        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";

        managed-keys-directory "/var/named/dynamic";

        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";

        # Setup Google's dns as forwarders
        forwarders {
                8.8.8.8;
                8.8.4.4;
        };
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

zone "example.com" {
    type master;
    file "dynamic/db.example.com"; # zone file path
    allow-transfer { 192.168.1.102; };
    notify yes;
};

zone "1.168.192.in-addr.arpa" in {
        type master;
        file "dynamic/1.168.192.in-addr.arpa.zone";
        allow-transfer { 192.168.1.102; };
        notify yes;
};

Create the actual zonefile for example.com:

[root@dns01 ~]# vim /var/named/dynamic/db.example.com
; Remember to update the serial by 1 each time you edit this file!
$TTL 300                ; 5 minutes 
@       IN      SOA     dns01.example.com. admin.example.com. (
                  1     ; Serial
               3600     ; Refresh
                300     ; Retry
            1814400     ; Expire
                300 )   ; Negative Cache TTL

; name servers - NS records
    IN      NS      dns01.example.com.
    IN      NS      dns02.example.com.

; name servers - A records
dns01.example.com.     IN     A     192.168.1.101
dns02.example.com.     IN     A     192.168.1.102

; All other A records
example.com.           IN     A     192.168.1.200
www.example.com.       IN     A     192.168.1.200
web01.example.com.     IN     A     192.168.1.201
web02.example.com.     IN     A     192.168.1.202
web03.example.com.     IN     A     192.168.1.203

Then set up reverse DNS for your IP space:

[root@dns01 ~]# vim /var/named/dynamic/1.168.192.in-addr.arpa.zone
$ORIGIN 1.168.192.in-addr.arpa.
$TTL 86400              ; 1 day
@       IN      SOA     dns01.example.com. admin.example.com. (
                  1     ; Serial
               7200     ; refresh (2 hours)
               7200     ; retry (2 hours)
            2419200     ; expire (5 weeks 6 days 16 hours)
              86400 )   ; minimum (1 day)

1.168.192.in-addr.arpa. IN NS dns01.example.com.
1.168.192.in-addr.arpa. IN NS dns02.example.com.

101     IN     PTR     dns01.example.com.
102     IN     PTR     dns02.example.com.

200     IN     PTR     www.example.com.
201     IN     PTR     web01.example.com.
202     IN     PTR     web02.example.com.
203     IN     PTR     web03.example.com.

Now confirm your syntax is valid:

[root@dns01 ~]# named-checkconf
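
named-checkconf only validates named.conf itself. To validate the zone files as well, you can use named-checkzone, which also ships with the bind package:

[root@dns01 ~]# named-checkzone example.com /var/named/dynamic/db.example.com
[root@dns01 ~]# named-checkzone 1.168.192.in-addr.arpa /var/named/dynamic/1.168.192.in-addr.arpa.zone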

If no errors are returned, set bind to start on boot and fire it up:

[root@dns01 ~]# systemctl enable named
[root@dns01 ~]# systemctl restart named

Finally, update the server's /etc/resolv.conf:

[root@dns01 ~]# mv /etc/resolv.conf /etc/resolv.conf.orig
[root@dns01 ~]# cat << EOF > /etc/resolv.conf
domain example.com
search example.com
nameserver 192.168.1.101
nameserver 192.168.1.102
EOF

Secondary DNS server setup

Setting up dns02.example.com will be easier since it will automatically retrieve the zone files from dns01. Adjust the settings to meet your environment's needs accordingly. The primary fields to change are the ACL, the forwarders, and the slave zone definitions at the bottom:

[root@dns02 ~]# mv /etc/named.conf /etc/named.conf.orig 
[root@dns02 ~]# vim /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
// See the BIND Administrator's Reference Manual (ARM) for details about the
// configuration located in /usr/share/doc/bind-{version}/Bv9ARM.html

acl "trusted-recursion" {
        localhost;
        localnets;
        192.168.1.0/24;
};


options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query     { any; };
        allow-recursion { trusted-recursion; };
        allow-query-cache { trusted-recursion; };
        /*
         - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
         - If you are building a RECURSIVE (caching) DNS server, you need to enable
           recursion.
         - If your recursive DNS server has a public IP address, you MUST enable access
           control to limit queries to your legitimate users. Failing to do so will
           cause your server to become part of large scale DNS amplification
           attacks. Implementing BCP38 within your network would greatly
           reduce such attack surface
        */
        /* recursion yes; */

        dnssec-enable yes;
        dnssec-validation no;

        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";

        managed-keys-directory "/var/named/dynamic";

        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";

        # Setup Google's dns as forwarders
        forwarders {
                8.8.8.8;
                8.8.4.4;
        };
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

zone "example.com" {
        type slave;
        file "slaves/db.example.com";
        masters { 192.168.1.101; };
};

zone "1.168.192.in-addr.arpa" in {
        type slave;
        file "slaves/1.168.192.in-addr.arpa.zone";
        masters { 192.168.1.101; };
};

Now confirm your syntax is valid:

[root@dns02 ~]# named-checkconf

If no errors are returned, set bind to start on boot and fire it up:

[root@dns02 ~]# systemctl enable named
[root@dns02 ~]# systemctl restart named

Verify the zonefiles replicated over to dns02.example.com by checking in:

[root@dns02 ~]# ls -al /var/named/slaves/

Finally, update the server's /etc/resolv.conf:

[root@dns02 ~]# cp /etc/resolv.conf /etc/resolv.conf.orig
[root@dns02 ~]# cat << EOF > /etc/resolv.conf
domain example.com
search example.com
nameserver 192.168.1.102
nameserver 192.168.1.101
EOF

Testing the DNS servers

You can verify the DNS servers are working by:

[root@web01 ~]# dig @192.168.1.101 www.example.com +short
192.168.1.200
[root@web01 ~]# dig @192.168.1.101 -x 192.168.1.200 +short
www.example.com.

[root@web01 ~]# dig @192.168.1.102 www.example.com +short
192.168.1.200
[root@web01 ~]# dig @192.168.1.102 -x 192.168.1.200 +short
www.example.com.

Testing ports without telnet or nc

Ever hop onto a server where the network admin may have been a bit over-caffeinated when they were locking down the firewall? What if they also locked down egress along with ingress? They want you to prove you cannot connect outbound, but you cannot even install ‘telnet’ or ‘nc’ since yum/apt can’t get outbound. While that is proof in and of itself, what if you needed something more for some reason?

Assuming you have root access and ‘telnet’ or ‘nc’ is not installed, you can use bash's built-in networking features (see the REDIRECTION section of the bash man page). The example below shows connections that succeed, since they return instantly:

[root@web01 ~]# echo > /dev/tcp/1.1.1.1/80
[root@web01 ~]# echo > /dev/tcp/1.1.1.1/443
[root@web01 ~]# echo > /dev/tcp/google.com/443
[root@web01 ~]#

You can tell a connection failed when the command hangs or returns a ‘connection refused’ error.
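
If you would rather not wait for a hanging connection attempt, you can wrap the same trick in timeout from coreutils; the host and port below are arbitrary examples:

[root@web01 ~]# timeout 3 bash -c 'echo > /dev/tcp/1.1.1.1/8080' && echo open || echo closed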

Another way around this is to use curl, if it is available. Below is an example of checking whether you can connect to port 25 on a remote server:

[root@web01 ~]# curl -v telnet://1.1.1.1:25
* About to connect() to 1.1.1.1 port 25 (#0)
*   Trying 1.1.1.1...
* Connected to 1.1.1.1 (1.1.1.1) port 25 (#0)

strace Cheat Sheet

strace is a tool for debugging and troubleshooting programs. It basically captures and records all system calls made by a process and the signals received by the process.
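
In its simplest form, you can attach strace to a process that is already running; the PID below is just a placeholder:

[root@web01 ~]# strace -fp 12345 -s 2048 -tT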

Some basic examples of how I use it are below:

Troubleshooting slow loading website

You can enable timestamps within the strace output. This will show the timestamp at the beginning of each line and the execution time at the end of the line. This is useful for quickly identifying which element of the site is slow to load.

The following example simply shows the lag introduced by a 5 second sleep statement within index.php:

[root@web01 ~]# strace -fs 10000 -tT -o /tmp/strace.txt sudo -u apache php /var/www/vhosts/www.example.com/index.php
[root@web01 ~]# less /tmp/strace.txt

1745  13:31:44 rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 <0.000010>
1745  13:31:44 rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0 <0.000011>
1745  13:31:44 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 <0.000009>
1745  13:31:44 nanosleep({5, 0}, 0x7ffd5926ea80) = 0 <5.000218>
1745  13:31:49 uname({sysname="Linux", nodename="web01", ...}) = 0 <0.000037>

Okay, so that was just a contrived coding error. A real world example is below. This shows roughly 60 seconds of latency I was seeing on each page load as the site was trying to load something from a third party site.

[root@web01 ~]# strace -fs 10000 -tT -o /tmp/strace.txt sudo -u apache php /var/www/vhosts/www.example22.com/index.php
[root@web01 ~]# less /tmp/strace.txt
...
35999 16:44:29 recvfrom(5, "\347$\201\200\0\1\0\1\0\1\0\0\3www\example22\3com\0\0\34\0\1\300\f\0\5\0\1\0\0!\3\0\2\300\20\300\20\0\6\0\1\0\0$(\0=\3ns1\3net\0\300GxH\262z\0\0\16\20\0\0\34 \0\22u\0\0\1Q\200", 65536, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("123.123.123.123")}, [16]) = 128 <0.000025>
35999 16:44:29 close(5)                 = 0 <0.000029>
35999 16:44:29 socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 5 <0.000029>
35999 16:44:29 fcntl(5, F_GETFL)        = 0x2 (flags O_RDWR) <0.000018>
35999 16:44:29 fcntl(5, F_SETFL, O_RDWR|O_NONBLOCK) = 0 <0.000019>
35999 16:44:29 connect(5, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("123.123.123.4")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000054>
35999 16:44:29 poll([{fd=5, events=POLLOUT|POLLWRNORM}], 1, 299993) = 1 ([{fd=5, revents=POLLERR|POLLHUP}]) <63.000274>
35999 16:45:32 getsockopt(5, SOL_SOCKET, SO_ERROR, [110], [4]) = 0 <0.000013>
35999 16:45:32 close(5)                 = 0 <0.000024>

Notice the jump in the timestamps from 16:44:29 to 16:45:32, along with the 63 second execution time on the poll() call. You can clearly see the delay while the page was still loading. Looking one line up from there, I can see that the site was trying to call something from 123.123.123.4 and it appeared to be timing out.

Here is another example, similar to the ones above, that filters the strace output to only show ‘sendto’, ‘connect’, ‘open’ and ‘write’. This cuts down the noise so you can more easily see the file/page being accessed as well as the resulting database lookups:

[root@web01 ~]# strace -tt -T -e trace=sendto,connect,open,write php /var/www/vhosts/www.example.com/index.php
...
12:22:56.362994 open("/var/www/vhosts/example.com/application/colors/red.php", O_RDONLY) = 4 <0.000027>
12:22:56.363933 write(3, "M\0\0\0\3SELECT *\nFROM (`tbl_color_red"..., 81) = 81 <0.000026>
12:22:56.364143 open("/usr/share/zoneinfo/America/New_York", O_RDONLY) = 4 <0.000026>
12:22:56.364974 write(3, "Y\0\0\0\3SELECT *\nFROM (`tbl_colors_orange`)"..., 93) = 93 <0.000021>
12:22:56.365747 write(3, "<\t\0\0\3Select `id`.`color` as "..., 2368) = 2368 <0.000021>
12:27:02.354995 write(3, "G\0\0\0\3SELECT *\nFROM (`tbl_paper_"..., 75) = 75 <0.000023>

In the example above, the jump in timestamps before the last query may indicate that I need to look at the slow query log or run an EXPLAIN against that query to identify why it is taking so long to execute.

Log all calls in and out of Apache

Sometimes you just cannot seem to narrow down the issue. Therefore you have to log everything and try to find that needle in the haystack. The command below will record all Apache web processes and their forks and log them to a file. Do not keep this running for long as the log can quickly fill up your disk!

[root@web01 ~]# pgrep "apache2|httpd" | awk '{print "-fp "$1}' | xargs strace -vvv -ts 2048 2>&1 | grep -vE "gettime|ENOENT" > /tmp/strace.txt
[root@web01 ~]# less /tmp/strace.txt

When going through the /tmp/strace.txt, you are basically looking for gaps in the timestamps that may or may not explain why a single pid hung while serving a request. Some common ways to begin looking for clues:

[root@web01 ~]# grep -Ev "munmap|gettime" /tmp/strace.txt  | cut -b -115 | less

[root@web01 ~]# grep -E 'connect\(|stat\(' /tmp/strace.txt  | cut -b -115 |less

# WordPress specific ones are below:
[root@web01 ~]# grep -Ev "munmap|gettime" /tmp/strace.txt  | cut -b -115 | grep wp-content | grep open | less

[root@web01 ~]# grep -Ev "munmap|gettime" /tmp/strace.txt  | cut -b -115 | grep -iE "open.*wp-content|connect" | less

MySQL 1071 (42000): Specified key was too long

Just recording a basic MySQL error I ran across recently. On MySQL 5.5, the following error was being reported:

ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

To see the error in action, run the following on MySQL 5.5:

[root@db01 ~]# mysql
mysql> create database example;
mysql> use example;
mysql> create table if not exists utf8_test (
day date not null,
product_id int not null,
dimension1 varchar(500) character set utf8 collate utf8_bin not null,
dimension2 varchar(500) character set utf8 collate utf8_bin not null,
unique index unique_index (day, product_id, dimension1, dimension2)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

This was curious because the exact same query ran without a problem on MySQL 5.7. When researching this, I found that MySQL 5.7 has innodb_large_prefix enabled by default. As described in the official MySQL documentation, MySQL 5.5 and 5.6 do not, in order to maintain backwards compatibility with MySQL 5.1.

So to get this to work, you have to enable innodb_large_prefix and also change innodb_file_format to Barracuda. You can see the temporary fix in action by running the following. Just be sure to also add ROW_FORMAT=DYNAMIC to the end of the CREATE TABLE statement:

[root@db01 ~]# mysql
mysql> set global innodb_file_format = BARRACUDA;
mysql> set global innodb_large_prefix = ON;
mysql> create table if not exists utf8_test (
day date not null,
product_id int not null,
dimension1 varchar(500) character set utf8 collate utf8_bin not null,
dimension2 varchar(500) character set utf8 collate utf8_bin not null,
unique index unique_index (day, product_id, dimension1, dimension2)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC;

Query OK, 0 rows affected (0.00 sec)

You can make the two changes persistent across MySQL restarts by adding the following to the my.cnf:

[root@db01 ~]# vim /etc/my.cnf
[mysqld]
...
innodb-file-format = BARRACUDA
innodb-large-prefix = ON
...
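
After restarting MySQL, you can confirm both settings are active by checking the variables:

[root@db01 ~]# mysql
mysql> show variables like 'innodb_file_format';
mysql> show variables like 'innodb_large_prefix';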

tcpdump Cheat Sheet

Packets coming inbound and outbound from a network interface contain a treasure trove of information that can be useful for troubleshooting purposes. Using the command tcpdump allows you to view the contents of the packets in real time, or it can be saved to a file for inspection later on.

This article will show some of the common tasks I use tcpdump for.
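
A quick example of saving a capture to a pcap file so it can be inspected later (or opened in Wireshark) is below; the interface and filter are just examples:

[root@web01 ~]# tcpdump -i eth0 -nn -s 0 -w /tmp/capture.pcap 'tcp port 80'
[root@web01 ~]# tcpdump -nn -r /tmp/capture.pcap | less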

How to view Cisco Discovery Protocol

This is not always available. Cisco Discovery Protocol is a management protocol that Cisco uses to communicate a great deal of information about a network connection. It can tell you what switch and port the server is connected to, whether there are connectivity issues due to the wrong duplex being set, and it can also help identify if the server is on the wrong VLAN. It can also show the management interface and operating system of the switch, amongst other things.

An example of how to run it, grepping for the fields I generally need, is below:

[root@web01 ~]# tcpdump -nn -v -i eth0 -s 1500 -c 1 'ether[20:2] == 0x2000' | egrep "Device-ID|Address|Port-ID"
        Device-ID (0x01), length: 24 bytes: 'switch27.nyc4.example.com'
        Address (0x02), length: 13 bytes: IPv4 (1) 10.1.0.11
        Port-ID (0x03), length: 18 bytes: 'GigabitEthernet0/9'
        Management Addresses (0x16), length: 13 bytes: IPv4 (1) 10.1.0.11

Confirm traffic is flowing

Let's assume you have VLAN tagging in place on the server, but for some reason that VLAN cannot ping the gateway. You can check whether your network interface is at least configured correctly by watching for ARP traffic:
1. In one terminal, ping the target gateway.
2. In another terminal, run:

[root@web01 ~]# tcpdump -i eth0 -nn -e vlan
tcpdump: WARNING: eth0: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
18:08:58.039740 63:b3:d2:5c:dd:dc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 14, p 0, ethertype ARP, Request who-has 192.168.22.1 tell 192.168.22.100, length 28
18:08:59.039934 63:b3:d2:5c:dd:dc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 14, p 0, ethertype ARP, Request who-has 192.168.22.1 tell 192.168.22.100, length 28
18:09:00.041922 63:b3:d2:5c:dd:dc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 14, p 0, ethertype ARP, Request who-has 192.168.22.1 tell 192.168.22.100, length 28

This tells me that the server is sending out ARP requests successfully over VLAN 14, but no responses are coming back.

Check the payload of any traffic coming in over port 80

This example will provide you output similar to what you would see on an IDS. It is highly useful to be able to determine exactly what was accessed and what the web server responded with.

[root@web01 ~]# tcpdump -nnvvXS 'tcp port 80'
...
	GET /bogus HTTP/1.1
	Host: www.example22.com
	User-Agent: curl/7.54.0
	Accept: */*
...
	0x0030:  89c5 4347 4745 5420 2f62 6f67 7573 2048  ..CGGET./bogus.H
	0x0040:  5454 502f 312e 310d 0a48 6f73 743a 2077  TTP/1.1..Host:.w
	0x0050:  7777 2e65 7861 6d70 6c65 3232 2e63 6f6d  ww.example22.com
	0x0060:  0d0a 5573 6572 2d41 6765 6e74 3a20 6375  ..User-Agent:.cu
	0x0070:  726c 2f37 2e35 342e 300d 0a41 6363 6570  rl/7.54.0..Accep
	0x0080:  743a 202a 2f2a 0d0a 0d0a                 t:.*/*....
...
	HTTP/1.1 404 Not Found
	Date: Wed, 16 May 2018 02:51:16 GMT
	Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips
	Content-Length: 203
	Content-Type: text/html; charset=iso-8859-1
	
	<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
	<html><head>
	<title>404 Not Found</title>
	</head><body>
	<h1>Not Found</h1>
	<p>The requested URL /bogus was not found on this server.</p>
	</body></html>
	0x0000:  4500 01b3 fe88 4000 4006 2fda a5e3 44b3  E.....@.@./...D.
	0x0010:  182d 081f 0050 b35f c379 bddd 7fc3 db3c  .-...P._.y.....<
	0x0020:  8018 00e3 0c88 0000 0101 080a 89c5 435a  ..............CZ
	0x0030:  2681 c10b 4854 5450 2f31 2e31 2034 3034  &...HTTP/1.1.404
	0x0040:  204e 6f74 2046 6f75 6e64 0d0a 4461 7465  .Not.Found..Date
	0x0050:  3a20 5765 642c 2031 3620 4d61 7920 3230  :.Wed,.16.May.20
	0x0060:  3138 2030 323a 3531 3a31 3620 474d 540d  18.02:51:16.GMT.
	0x0070:  0a53 6572 7665 723a 2041 7061 6368 652f  .Server:.Apache/
	0x0080:  322e 342e 3620 2843 656e 744f 5329 204f  2.4.6.(CentOS).O
	0x0090:  7065 6e53 534c 2f31 2e30 2e32 6b2d 6669  penSSL/1.0.2k-fi

How to install and configure Redis on Ubuntu 16.04

Redis is an in-memory data structure store that is commonly used as a database, cache or message broker. What makes Redis powerful is its optional data persistence, meaning your dataset isn't lost during restarts of the service.

The article below will discuss how to install, configure and provide basic security for Redis. From there, it will go into the basic use cases I use it for on a daily basis: session storage and general caching.

Installation

[root@redis01 ~]# apt-get update
[root@redis01 ~]# apt-get install redis-server redis-tools

Configuration

Redis listens on port 6379 by default and needs some additional configuration to ensure it is secured. If you do not protect Redis with a firewall and authentication, and do not have it listen only on a private network, there is an extremely high risk of leaking sensitive data.

First, set Redis to only listen on your private network. Redis does not have any type of encryption built in, so it is important that the data is transferred only over private networks or secured tunnels. Set Redis to listen on the private interface by:

[root@redis01 ~]# vim /etc/redis/redis.conf
...
bind redis_servers_private_IP
...

If Redis is being installed on a standalone web server and will not need to accept connections from other clients, then you can set Redis to listen on a local socket instead by commenting out the bind value and setting up a socket by:

[root@redis01 ~]# mkdir /var/run/redis
[root@redis01 ~]# chown redis:redis /var/run/redis
[root@redis01 ~]# vim /etc/redis/redis.conf
...
# bind 127.0.0.1
unixsocket /var/run/redis/redis.sock
unixsocketperm 777

If you do not have a dedicated firewall, use your OS's built-in firewall to only allow connections from trusted web servers using their internal IPs. Some quick examples are below:

# iptables
[root@redis01 ~]# vim /etc/sysconfig/iptables
...
-A INPUT -p tcp -m tcp --dport 6379 -s client_server_private_IP -m comment --comment "redis" -j ACCEPT
[root@redis01 ~]# service iptables restart

# ufw
[root@redis01 ~]# ufw allow from client_server_private_IP/32 to any port 6379

To protect Redis further, set up authentication, which is a built-in security feature. This forces clients to authenticate before being granted access. Use a tool such as apg or pwgen to create a secure password. Set the password within Redis by:

[root@redis01 ~]# vim /etc/redis/redis.conf
...
requirepass your_secure_password_here
...

[root@redis01 ~]# systemctl restart redis

Then test to ensure the password works by:

# This should fail
[root@redis01 ~]# redis-cli
127.0.0.1:6379> set key1 10
(error) NOAUTH Authentication required.

# This should work
[root@redis01 ~]# redis-cli
127.0.0.1:6379> auth your_secure_password_here
127.0.0.1:6379> set key1 10
OK
127.0.0.1:6379> get key1
"10"

Next, we need to secure the file permissions for Redis. The redis.conf contains the password for redis, so that file shouldn’t be readable by everyone. We also want to lock down the Redis data directory. Lock down the permissions on Redis by:

[root@redis01 ~]# chmod 700 /var/lib/redis
[root@redis01 ~]# chown redis:redis /etc/redis/redis.conf
[root@redis01 ~]# chmod 600 /etc/redis/redis.conf
[root@redis01 ~]# systemctl restart redis

The Official Redis Administration Guide recommends disabling Transparent Huge Pages (THP). This can be performed live by:

[root@redis01 ~]# echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
[root@redis01 ~]# echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag

And disable Transparent Huge Pages (THP) at boot time by making a systemd unit file:

[root@db01 ~]# vim /etc/systemd/system/disable-thp.service
[Unit]
Description=Disable Transparent Huge Pages (THP)

[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"

[Install]
WantedBy=multi-user.target

[root@db01 ~]# systemctl daemon-reload
[root@db01 ~]# systemctl start disable-thp
[root@db01 ~]# systemctl enable disable-thp

The Official Redis Administration Guide also recommends setting the following sysctl:

[root@redis01 ~]# sysctl vm.overcommit_memory=1
[root@redis01 ~]# vim /etc/sysctl.conf
...
vm.overcommit_memory = 1
...

Redis Configurations

Now that Redis is installed and secured, it's time to tune it for your application's needs. There are typically 2 types of Redis cache configurations I see:

- Session store
- Database cache or full page cache

The session store gives your application a single location to store sessions that would normally be stored by PHP on the local file system. It is important to ensure that the data is saved to disk so you don't lose all the sessions between restarts of Redis. This is one of the primary advantages of using Redis over Memcached.

The full page cache is great for caching SQL queries or for serving as a full page cache for applications such as Magento or WordPress, caching images, videos, html, css or js. Generally this type of cache doesn’t need to be persistent across restarts of Redis.

Redis Configurations – Session store

When using Redis as a session store, you want to ensure that the Redis data persists between restarts of Redis. Otherwise your users could be left wondering why their shopping carts all of a sudden vanished. The example below will have disk persistence enabled with a memory limit of 1.5G (1536M).

These settings may or may not work for you! Adjust your settings to meet your environment's requirements!

[root@redis01 ~]# vim /etc/redis/redis.conf
...
## Ensure disk persistence is enabled
save 900 1
save 300 10
save 60 10000
...
## Set the max memory
maxmemory 1536mb
...

[root@redis01 ~]# service redis restart

Redis Configurations – Database cache or full page cache

When using Redis for database caching or as a FPC for applications like WordPress or Magento, I disable disk persistence. This means the cache is only stored in memory and is lost whenever Redis restarts. I also set the memory limit to 1.5G (1536M) for starters and adjust accordingly from there. Since I am only storing cached data, I can avoid out of memory issues by allowing Redis to automatically remove the oldest cache entries using the maxmemory-policy allkeys-lru. Read up on the supported eviction policies in the Redis documentation.

These settings may or may not work for you! Adjust your settings to meet your environment's requirements! Remember, this example assumes everything in Redis can be lost when the service restarts, and the eviction policy will remove the least used keys out of all the data. I typically find this works well for Magento and WordPress Redis setups:

[root@redis01 ~]# vim /etc/redis/redis.conf
...
## Disable disk persistence
#save 900 1
#save 300 10
#save 60 10000
...
## Set the max memory
maxmemory 1536mb
...
## Update the eviction policy
maxmemory-policy allkeys-lru
...

[root@redis01 ~]# systemctl restart redis

Multiple Redis Configurations

Redis has the ability to run multiple instances, each with its own caching configuration. The only requirement is that each instance of Redis listens on a unique port, has a unique pid file, and of course has its own config and startup script. In the example below, we are going to have 2 Redis instances running, called redis-session and redis-cache. To avoid confusion, we will disable the original Redis instance.

# Create 2 new configs and modify the values accordingly
[root@redis01 ~]# cp /etc/redis/redis.conf /etc/redis/redis-session.conf
[root@redis01 ~]# vim /etc/redis/redis-session.conf
...
pidfile /var/run/redis-session.pid
port 6379
logfile /var/log/redis/redis-session.log
dir /var/lib/redis-session
...
# If unixsocket is uncommented, then update to:
unixsocket /var/run/redis/redis-session.sock
unixsocketperm 777
...

[root@redis01 ~]# cp /etc/redis/redis.conf /etc/redis/redis-cache.conf
[root@redis01 ~]# vim /etc/redis/redis-cache.conf
...
pidfile /var/run/redis-cache.pid
port 6380
logfile /var/log/redis/redis-cache.log
dir /var/lib/redis-cache
...
# If unixsocket is uncommented, then update to:
unixsocket /var/run/redis/redis-cache.sock
unixsocketperm 777
...

# Create directories and secure the permissions
[root@redis01 ~]# mkdir /var/lib/redis-session /var/lib/redis-cache
[root@redis01 ~]# chown redis:redis /var/lib/redis-session /var/lib/redis-cache /etc/redis/redis-session.conf /etc/redis/redis-cache.conf
[root@redis01 ~]# chmod 700 /var/lib/redis-session /var/lib/redis-cache
[root@redis01 ~]# chmod 600 /etc/redis/redis-session.conf /etc/redis/redis-cache.conf
[root@redis01 ~]# cd /etc/redis
[root@redis01 ~]# cp -Rp redis-server.post-down.d redis-session.post-down.d && cp -Rp redis-server.post-up.d redis-session.post-up.d && cp -Rp redis-server.pre-down.d redis-session.pre-down.d && cp -Rp redis-server.pre-up.d  redis-session.pre-up.d 
[root@redis01 ~]# cp -Rp redis-server.post-down.d redis-cache.post-down.d && cp -Rp redis-server.post-up.d redis-cache.post-up.d && cp -Rp redis-server.pre-down.d redis-cache.pre-down.d && cp -Rp redis-server.pre-up.d  redis-cache.pre-up.d 

# Create startup files
[root@redis01 ~]# cp /etc/systemd/system/redis.service /etc/systemd/system/redis-session.service
[root@redis01 ~]# vim /etc/systemd/system/redis-session.service
...
ExecStart=/usr/bin/redis-server /etc/redis/redis-session.conf
PIDFile=/var/run/redis/redis-session.pid
ExecStartPre=-/bin/run-parts --verbose /etc/redis/redis-session.pre-up.d
ExecStartPost=-/bin/run-parts --verbose /etc/redis/redis-session.post-up.d
ExecStop=-/bin/run-parts --verbose /etc/redis/redis-session.pre-down.d
ExecStopPost=-/bin/run-parts --verbose /etc/redis/redis-session.post-down.d
ReadWriteDirectories=-/var/lib/redis-session
Alias=redis-session.service
...
[root@redis01 ~]# cp /etc/systemd/system/redis.service /etc/systemd/system/redis-cache.service
[root@redis01 ~]# vim /etc/systemd/system/redis-cache.service
...
ExecStart=/usr/bin/redis-server /etc/redis/redis-cache.conf
PIDFile=/var/run/redis/redis-cache.pid
ExecStartPre=-/bin/run-parts --verbose /etc/redis/redis-cache.pre-up.d
ExecStartPost=-/bin/run-parts --verbose /etc/redis/redis-cache.post-up.d
ExecStop=-/bin/run-parts --verbose /etc/redis/redis-cache.pre-down.d
ExecStopPost=-/bin/run-parts --verbose /etc/redis/redis-cache.post-down.d
ReadWriteDirectories=-/var/lib/redis-cache
Alias=redis-cache.service
...

# Stop and disable old instance, start new instances.  Error during 'systemctl enable' appears safe to ignore as service still starts on boot.  But confirm for yourself!
[root@redis01 ~]# systemctl daemon-reload
[root@redis01 ~]# systemctl stop redis.service && systemctl disable redis.service
[root@redis01 ~]# systemctl enable redis-session.service && systemctl start redis-session.service
[root@redis01 ~]# systemctl enable redis-cache.service && systemctl start redis-cache.service

# Finally, edit the /etc/redis/redis-session.conf and /etc/redis/redis-cache.conf using the instructions earlier in this article for configuring sessions and db cache.

Client setup

The typical use cases I run into on a day to day basis are clients using Redis for their PHP application. Redis can be used to store cached content, or it can be used to centrally store sessions. Therefore these examples will be PHP focused.

Client setup – General data caching

For storing data, there is nothing that needs to be configured on the client side. The application code itself is what controls storing content within Redis.
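
For example, a PHP application using the php-redis extension might cache a value along the lines of the sketch below; the host, password, key name and $recentPostsHtml variable are all placeholders for your own environment:

<?php
// Connect to the Redis server over the private network (placeholder values)
$redis = new Redis();
$redis->connect('192.168.1.100', 6379);
$redis->auth('your_secure_password_here');

// Cache a value for 5 minutes, then read it back
$redis->setex('homepage:recent_posts', 300, $recentPostsHtml);
$cached = $redis->get('homepage:recent_posts');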

Client setup – Storing sessions in Redis

To have Redis act as a central server for sessions, some additional configuration is needed on each client web server. Install php-redis for your version of PHP. Assuming the default PHP version is installed from the package manager, you can install it by:

[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install php-redis redis-tools
[root@web01 ~]# service apache2 reload
[root@web01 ~]# php -m |grep redis

Then update the php.ini as follows:

session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379?auth=your_secure_password_here"

Test to ensure sessions are now being stored in Redis:

[root@web01 ~]# vim /var/www/html/test-sessions.php
<?php
session_start();
?>
Created a session

Then run the following on the command line and confirm the returned numbers increment as shown below:

[root@web01 ~]# curl localhost/test-sessions.php && redis-cli -a your_secure_password_here keys '*' |grep SESSION | wc -l
10
Created a session
11

Troubleshooting

Confirm Redis is online:

[root@redis01 ~]# redis-cli ping

How to connect using redis-cli when redis is running on a different server or using a different port:

[root@redis01 ~]# redis-cli -h ip_of_redis_server -p port_number_here

Sometimes you may need to flush the Redis cache. Before doing this, make sure you are connecting to the right instance of Redis since there could be multiple instances running. An example is below:

[root@redis01 ~]# redis-cli -h ip_of_redis_server -p port_number_here
FLUSHALL

To get some useful stats on Redis, run:

[root@redis01 ~]# redis-cli
info

To get memory specific stats, run:

[root@redis01 ~]# redis-cli
info memory
127.0.0.1:6379> info memory
# Memory
used_memory:488315720
used_memory_human:465.69M
used_memory_rss:499490816
used_memory_peak:505227288
used_memory_peak_human:481.82M
used_memory_lua:36864
mem_fragmentation_ratio:1.02
mem_allocator:jemalloc-3.6.0

To increase the memory limit assigned to Redis without restarting the service, you can change it dynamically. The example below increases the memory allocated from 1G to 2G with no restart of Redis:

[root@redis01 ~]# redis-cli
127.0.0.1:6379> config get maxmemory
1) "maxmemory"
2) "1000000000"
127.0.0.1:6379> config set maxmemory 2g
OK
127.0.0.1:6379> config get maxmemory
1) "maxmemory"
2) "2000000000"

Regarding performance issues with Redis, there are a number of factors that need to be accounted for, too many for this article. Redis published an excellent article that goes into the various things that can cause latency with Redis at:
https://redis.io/topics/latency