How to install Elastic Stack

Your logs are trying to talk to you! The problem is that reading through logs is like trying to pick out one conversation in a crowded, noisy room. Some people talk loudly while others speak softly. With all this noise, how can you pick out the critical information? This is where Elastic Stack can help!

Elastic Stack is a group of open source products from Elastic designed to help users take data from any type of source and in any format and search, analyze, and visualize that data in real time. This is commonly referred to as an ELK stack (Elasticsearch, Logstash, and Kibana).

Setting up Elastic Stack can be quite confusing as there are several moving parts. As a very basic primer: Logstash is the workhorse that applies various filters to parse incoming logs. Logstash then forwards the parsed logs to Elasticsearch for indexing. Kibana allows you to visualize the data stored in Elasticsearch.

Server Installation

This guide is based on CentOS/RHEL 7. Elasticsearch needs at least 2G of memory, so for the entire stack (Elasticsearch, Logstash, and Kibana) to work, the absolute minimum required memory is around 4G. Anything less than this may cause the services to become unstable or not start up at all.

Elastic Stack relies on Java, so install Java 1.8.0 by:

[root@server ~]# yum install java-1.8.0-openjdk
[root@server ~]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)

Elastic Stack packages all the needed software within their own repos, so set up their repo by:

[root@server ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@server ~]# echo '[elasticstack-6.x]
name=Elastic Stack repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md' > /etc/yum.repos.d/elasticstack.repo

Now install the needed packages for Elastic Stack and set them to start on boot:

[root@server ~]# yum install elasticsearch kibana logstash filebeat
[root@server ~]# systemctl daemon-reload
[root@server ~]# systemctl enable elasticsearch kibana logstash filebeat

Server Configuration

Set up Elasticsearch to listen for connections on the public IP of the server. Mine is also configured to listen on localhost since I am monitoring logs locally:

[root@server ~]# vim /etc/elasticsearch/elasticsearch.yml
...
network.host: 123.123.123.123, localhost
...

Set up Elasticsearch to be able to use the geoip and user-agent ingest processors by:

[root@server ~]# /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip
[root@server ~]# /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-user-agent
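
To confirm both plugins were installed, list them by (if Elasticsearch was already running, restart it so the plugins load):

[root@server ~]# /usr/share/elasticsearch/bin/elasticsearch-plugin list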

Configure Logstash with a basic configuration that accepts logs from Filebeat and forwards them to Elasticsearch by:

[root@server ~]# echo 'input {
  beats {
    port => 5044
  }
}

filter {
  if [type] == "apache-access" {
    # This will parse the apache access event
    grok {
      match => [ "message", "%{COMBINEDAPACHELOG}" ]
    }
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}" 
    document_type => "%{[@metadata][type]}" 
  }
}' > /etc/logstash/conf.d/logstash.conf
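
Before starting anything, it is worth validating the Logstash config for syntax errors (the path below assumes the default RPM install location):

[root@server ~]# /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf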

Start and test services by:

[root@server ~]# systemctl start kibana elasticsearch logstash filebeat

Elasticsearch will take about 15 seconds or more to start. To ensure Elasticsearch is running, check that the output is similar to the following:

[root@server ~]# curl -XGET 'localhost:9200/?pretty'
{
  "name" : "Cp8oag6",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",
  "version" : {
    "number" : "6.0.1",
    "build_hash" : "f27399d",
    "build_date" : "2016-03-30T09:51:41.449Z",
    "build_snapshot" : false,
    "lucene_version" : "7.0.1",
    "minimum_wire_compatibility_version" : "1.2.3",
    "minimum_index_compatibility_version" : "1.2.3"
  },
  "tagline" : "You Know, for Search"
}
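
You can also check overall cluster health. A status of green or yellow (yellow is normal on a single-node setup) means Elasticsearch is serving requests:

[root@server ~]# curl -XGET 'localhost:9200/_cluster/health?pretty'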

Then log into Kibana by navigating your browser to:

http://localhost:5601

If this is installed on a remote server, you can easily install Nginx to act as a front end for Kibana by:

# Install Nginx
[root@server ~]# yum install nginx httpd-tools

# Setup username/password
[root@server ~]# htpasswd -c /etc/nginx/htpasswd.users kibanaadmin

# Create Nginx vhost
[root@server ~]# echo 'server {
    listen 80;

    server_name kibana.yourdomain.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;        
    }
}' > /etc/nginx/conf.d/kibana.conf
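
Before starting Nginx, test the new config for syntax errors by:

[root@server ~]# nginx -t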

# Set services to start on boot and start nginx
[root@server ~]# systemctl daemon-reload
[root@server ~]# systemctl enable nginx
[root@server ~]# systemctl start nginx

# Open up the firewall to allow inbound port 80 traffic from anywhere
[root@server ~]# firewall-cmd --zone=public --add-port=80/tcp --permanent
[root@server ~]# firewall-cmd --reload

# Allow nginx to connect to Kibana port 5601 if you’re using SELinux:
[root@server ~]# semanage port -a -t http_port_t -p tcp 5601

# Navigate your browser to your new domain you setup, assuming you already setup DNS for it:
http://kibana.yourdomain.com

Client installation – Filebeat

The question now becomes: how can I get the log messages from other servers into our Elastic Stack server? As my needs are basic since I am not doing any manipulation of log data, I can make use of Filebeat and its associated modules to get the Apache, Nginx, MySQL, syslog, etc. data I need over to the Elasticsearch server.

Assuming Filebeat is not installed, ensure that you have the repo set up for it:

[root@client ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@client ~]# echo '[elasticstack-6.x]
name=Elastic Stack repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md' > /etc/yum.repos.d/elasticstack.repo

Then install Filebeat by:

[root@client ~]# yum install filebeat
[root@client ~]# systemctl daemon-reload
[root@client ~]# systemctl enable filebeat

Set up Filebeat to send the logs over to your Elastic Stack server:

[root@client ~]# vim /etc/filebeat/filebeat.yml
...
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["123.123.123.123:9200"]
...
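
Filebeat 6.x includes a test subcommand, which is a quick way to validate the config and confirm connectivity to the output defined above before going any further:

[root@client ~]# filebeat test config
[root@client ~]# filebeat test output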

Now set up the modules for Filebeat. Only enable the ones you need, and be sure to restart Filebeat after you have your desired modules enabled.

To send over Apache logs:

[root@client ~]# filebeat modules enable apache2
[root@client ~]# filebeat setup -e
[root@client ~]# systemctl restart filebeat

Note, you may need to modify the Filebeat apache2 module to pick up your logs. In my case, I had to set ‘var.paths’ for both the access and error logs by:

[root@client ~]# vim /etc/filebeat/modules.d/apache2.yml
- module: apache2
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/httpd/*access.log*"]

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/httpd/*error.log*"]

[root@client ~]# systemctl restart filebeat

To send over syslog data:

[root@client ~]# filebeat modules enable system
[root@client ~]# filebeat setup -e
[root@client ~]# systemctl restart filebeat

To handle MySQL data:

[root@client ~]# filebeat modules enable mysql
[root@client ~]# filebeat setup -e
[root@client ~]# systemctl restart filebeat

To send over auditd logs:

[root@client ~]# filebeat modules enable auditd
[root@client ~]# filebeat setup -e
[root@client ~]# systemctl restart filebeat

To send over Nginx logs:

[root@client ~]# filebeat modules enable nginx
[root@client ~]# filebeat setup -e
[root@client ~]# systemctl restart filebeat

Enable Docker log shipping to Elasticsearch. There is no module for this, but it's easy enough to configure:
Reference: https://www.elastic.co/blog/enrich-docker-logs-with-filebeat

[root@client ~]# vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
...
- type: log
  paths:
   - '/var/lib/docker/containers/*/*.log'
  json.message_key: log
  json.keys_under_root: true
  processors:
  - add_docker_metadata: ~
...

[root@client ~]# systemctl restart filebeat

Then browse to the Kibana dashboard to view the available dashboards for Filebeat, or create your own!

Client installation – Metricbeat

What about shipping metrics and statistics over to the Elastic Stack server? This is where Metricbeat comes into play. Metricbeat is a lightweight shipper that you can install on your client nodes that collects metrics and ships them to Elasticsearch. There are modules for Apache, HAProxy, MySQL, Nginx, PostgreSQL, Redis, System, and more. This can be installed on your client servers or on the ELK server itself if you like.

Assuming Metricbeat is not installed, ensure that you have the repo set up for it:

[root@client ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@client ~]# echo '[elasticstack-6.x]
name=Elastic Stack repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md' > /etc/yum.repos.d/elasticstack.repo

Then install Metricbeat by:

[root@client ~]# yum install metricbeat
[root@client ~]# systemctl daemon-reload
[root@client ~]# systemctl enable metricbeat

Set up Metricbeat to send the metrics over to your Elastic Stack server:

[root@client ~]# vim /etc/metricbeat/metricbeat.yml
...
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["123.123.123.123:9200"]
...
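
As with Filebeat, you can validate the config and confirm connectivity to Elasticsearch before going any further:

[root@client ~]# metricbeat test config
[root@client ~]# metricbeat test output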

Now set up the modules for Metricbeat. Only enable the ones you need, and be sure to restart Metricbeat after you have your desired modules enabled.

To see the full listing of available modules and what is currently enabled:

[root@client ~]# metricbeat modules list

To send over Apache, MySQL, Nginx and System metrics:

[root@client ~]# metricbeat modules enable apache mysql nginx system
[root@client ~]# metricbeat setup -e

After enabling each one, be sure to check out the module's associated config file, as you may need to make changes to it so it will work with your environment. The module config files can be found in:

[root@client ~]# cd /etc/metricbeat/modules.d

Once the configurations are updated accordingly, restart Metricbeat by:

[root@client ~]# systemctl restart metricbeat

Then browse to the Kibana dashboard to view the available dashboards for Metricbeat, or create your own!

Quick Kibana Primer

Now that you have data coming into Elasticsearch, you can use Kibana to generate some quick searches and visualizations. This is not meant to be a full-fledged tutorial on how to use Kibana, just a way to jump-start the learning process, as Kibana can be somewhat complicated if you have never seen it.

To search Kibana for failed logins, type the following in the Discover search box:

system.auth.ssh.event:Failed OR system.auth.ssh.event:Invalid

To see what packages have been recently installed, type the following in the Discover search box:

source: "/var/log/messages" AND system.syslog.message: *Install*

What about visualizations? To see the top 5 countries accessing Apache:

- Click 'Visualizations' over on the left
- Select 'Vertical bar chart'
- Select 'filebeat-*' from the existing index
- Click 'X-Axis'
Aggregation:  Terms
Field:  apache2.access.geoip.country_iso_code
Order by:  metric: Count
Order:  Descending
Size:  5

To break it down further by city:

- Click 'Add sub-buckets'
- Select 'Split series'
Sub Aggregation:  Terms
Field:  apache2.access.geoip.city_name
Order by:  metric: Count
Order:  Descending
Size:  5
Click run

View the top 5 remote IPs hitting Apache:

- Click 'Visualizations' over on the left
- Select 'Vertical bar chart'
- Select 'filebeat-*' from the existing index
- Click 'X-Axis'
Aggregation:  Terms
Field:  apache2.access.remote_ip
Size:  5

- Click 'Add sub-buckets'
- Select 'Split series'
Sub Aggregation:  Terms
Field:  apache2.access.remote_ip
Order by:  metric: Count
Order: Descending
Size:  5

View the top 10 requested URLs in Apache:

- Click 'Visualizations' over on the left
- Select 'Data Table'
- Select 'filebeat-*' from the existing index
- Under Buckets, click 'Split Rows'
Aggregation:  Terms
Field:  apache2.access.url
Order By:  metric: Count
Order:  Descending
Size:  10
Custom Label:  URL

- Then click 'Split Rows'
Sub Aggregation:  Terms
Field:  apache2.access.body_sent_bytes
Order By:  metric: Count
Order:  Descending
Size:  10
Custom Label:  Size
Click run

Create a line chart for Apache response codes:

- Click 'Visualizations' over on the left
- Select 'Line chart'
- Select 'filebeat-*' from the existing index
- Click X-Axis
Aggregation:  Date Histogram
Field:  @timestamp
Interval:  Minute

- Click 'Split Series'
Sub Aggregation:  Terms
Field:  apache2.access.response_code
Order by:  metric: Count
Order:  Descending
Size: 5
Click run

See which logs are receiving a lot of activity:

- Click 'Visualizations' over on the left
- Select 'Pie Chart'
- Select 'filebeat-*' from the existing index
- Click 'Split Slices'
Aggregation:  Terms
Field:  source
Order by:  metric: Count
Order: Descending
Size: 5

Purging MySQL binary logs

Binary logs contain a record of all changes to the database, both in data and structure. They do not record simple SELECT statements. Binary logs are required for MySQL replication and can also be useful for performing point-in-time recovery after a nightly database dump has been restored.

If binary logs are enabled within MySQL but you are not expiring them after X number of days, they will eventually cause your server to run out of disk space. Typically I see this value set to 5 days.

As a quick note before we begin, if the server is completely out of disk space, you may need to free up some space beforehand so you can work with the system. You can temporarily free up space by lowering the filesystem's reserved block percentage from 5% to 2% by:

[root@server ~]# df -h /
/dev/xvda1       79G   76G     0 100% /
[root@server ~]# tune2fs -m 2 /dev/xvda1
[root@server ~]# df -h /
/dev/xvda1       79G   76G  1.8G  98% /

To determine how many days' worth of logs you can purge, first identify whether this server is a master MySQL server replicating to a slave MySQL server. If it is, log onto the slave MySQL server and see which binary log it is currently reading from:

[root@slave ~]# mysql
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.xx.xx.xx
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000244
...

So based off of that, we can purge the binary logs up to mysql-bin.000244. Back on the master server, check the binary log status by:

[root@master ~]# mysql
mysql> show binary logs;
+------------------+------------+
| Log_name         | File_size  |
+------------------+------------+
| mysql-bin.000230 |  454230880 |
| mysql-bin.000231 |  511519497 |
| mysql-bin.000232 |  483552032 |
| mysql-bin.000233 |  472847181 |
| mysql-bin.000234 |  443236582 |
| mysql-bin.000235 |  408021824 |
| mysql-bin.000236 |  531519875 |
| mysql-bin.000237 |  468583798 |
| mysql-bin.000238 |  495423661 |
| mysql-bin.000239 |  474475274 |
| mysql-bin.000240 |  689162898 |
| mysql-bin.000241 | 1073749465 |
| mysql-bin.000242 |  939200512 |
| mysql-bin.000243 |  702552334 |
| mysql-bin.000244 |   40656827 |
+------------------+------------+

In my example, as I only want to keep 5 days' worth of binary logs, we can purge all the prior binary logs by:

[root@master ~]# mysql
mysql> purge binary logs to 'mysql-bin.000240';
mysql> show binary logs;
+------------------+------------+
| Log_name         | File_size  |
+------------------+------------+
| mysql-bin.000240 |  689162898 |
| mysql-bin.000241 | 1073749465 |
| mysql-bin.000242 |  939200512 |
| mysql-bin.000243 |  702552334 |
| mysql-bin.000244 |   40656827 |
+------------------+------------+
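
Alternatively, if you would rather purge by age than by file name, MySQL also accepts a time-based purge. A sketch of the same 5 day cleanup:

[root@master ~]# mysql
mysql> purge binary logs before now() - interval 5 day;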

As I want to keep a 5 day retention in general, you can set this live without restarting MySQL by:

[root@master ~]# mysql
mysql> show variables like '%expire_logs_days%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| expire_logs_days | 0     |
+------------------+-------+

mysql> SET GLOBAL expire_logs_days = 5;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like '%expire_logs_days%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| expire_logs_days | 5     |
+------------------+-------+
1 row in set (0.00 sec)

Then make the setting persistent across MySQL restarts by adding it to /etc/my.cnf:

[root@master ~]# vim /etc/my.cnf
[mysqld]
...
expire_logs_days = 5
...

If you had to modify the filesystem's reserved block percentage, change it back to how it was beforehand, which is typically 5%. You can do this by:

[root@server ~]# tune2fs -m 5 /dev/xvda1

cURL Cheat Sheet

Curl is the Swiss Army knife for gathering information while troubleshooting websites. There are many ways to use curl; this guide simply documents the ones I often use but can never seem to remember the specific flags for.

General Usage

Test a page behind a username/password prompt:

[root@server ~]# curl --user name:password http://www.example.com

Download files from GitHub:

[root@server ~]# curl -O https://raw.github.com/username/reponame/master/filename

Download content via curl to a specific filename:

[root@server ~]# curl -o archive.tgz https://www.example.com/file.tgz

Run a script from a remote source on the server. Understand the security implications of doing this, as it can be dangerous!

[root@server ~]# source <( curl -sL https://www.example.com/your_script.sh)

Website Troubleshooting

Receive output detailing the HTTP response and request headers, following all redirects, and discarding the page body:

[root@server ~]# curl -Lsvo /dev/null https://www.example.com

Test a domain hosted on the local server, bypassing DNS:

[root@server ~]# curl -sLH "host: www.example.com" localhost

Test a domain against specific IP, bypassing the need for modifying /etc/hosts:

[root@server ~]# curl -Lsvo /dev/null --resolve 'www.example.com:443:123.123.123.123' https://www.example.com/

or

[root@server ~]# curl -Lsvo /dev/null --header "Host: example.com" https://123.123.123.123/

Send request using a specific user-agent. Sometimes a server will have rules in place to block the default curl user-agent string. Or perhaps you need to test using a specific user-agent. You can pass specific user-agent strings by running:

[root@server ~]# curl --user-agent "USER_AGENT_STRING_HERE" www.example.com

A comprehensive listing of available user-agent strings resides at:
http://www.useragentstring.com/pages/useragentstring.php

For example, let's say you need to test a mobile device user-agent to see if a custom redirect works:

[[email protected] ~]# curl -H "User-Agent: Mozilla/5.0 (Linux; Android 4.0.4; Galaxy Nexus Build/IMM76B) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.133 Mobile Safari/535.19" -IL http://www.example.com/about/contact-us

HTTP/1.1 301 Moved Permanently
Date: Tue, 17 Nov 2015 18:10:09 GMT
Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips
Location: http://www.example.com/mobi/contact.php
Content-Type: text/html; charset=iso-8859-1

SSL/TLS Testing

Test to see if the site supports specific SSL/TLS protocols:

[root@server ~]# curl --sslv2 https://www.example.com
[root@server ~]# curl --sslv3 https://www.example.com
[root@server ~]# curl --tlsv1 https://www.example.com
[root@server ~]# curl --tlsv1.0 https://www.example.com
[root@server ~]# curl --tlsv1.1 https://www.example.com
[root@server ~]# curl --tlsv1.2 https://www.example.com
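
To see which protocol and cipher were actually negotiated, run curl verbosely and look for the 'SSL connection using' line. For example, to confirm TLSv1.2:

[root@server ~]# curl -Ivs --tlsv1.2 https://www.example.com 2>&1 | grep 'SSL connection'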

Performance Troubleshooting

Load times can be impacted by a number of things, such as the TLS handshake, DNS lookup time, redirects, transfers, uploads/downloads, etc. The curl command shown below will break down the times for each accordingly:

[root@server ~]# curl -Lsvo /dev/null https://www.example.com/ -w "\nContent Type: %{content_type} \
\nHTTP Code: %{http_code} \
\nHTTP Connect:%{http_connect} \
\nNumber Connects: %{num_connects} \
\nNumber Redirects: %{num_redirects} \
\nRedirect URL: %{redirect_url} \
\nSize Download: %{size_download} \
\nSize Upload: %{size_upload} \
\nSSL Verify: %{ssl_verify_result} \
\nTime Handshake: %{time_appconnect} \
\nTime Connect: %{time_connect} \
\nName Lookup Time: %{time_namelookup} \
\nTime Pretransfer: %{time_pretransfer} \
\nTime Redirect: %{time_redirect} \
\nTime Start Transfer (TTFB): %{time_starttransfer} \
\nTime Total: %{time_total} \
\nEffective URL: %{url_effective}\n" 2>&1

The example output is below:

...
HTTP Code: 200 
HTTP Connect:000 
Number Connects: 2 
Number Redirects: 1 
Redirect URL:  
Size Download: 136112 
Size Upload: 0 
SSL Verify: 0 
Time Handshake: 0.689 
Time Connect: 0.556 
Name Lookup Time: 0.528 
Time Pretransfer: 0.689 
Time Redirect: 0.121 
Time Start Transfer (TTFB): 0.738 
Time Total: 0.962 
Effective URL: https://www.example.com/

If a web server is running a bunch of sites and has a high load, how can you narrow down which site is likely causing it? One way would be to see which site takes the longest to load, as that may indicate a resource-intensive site. See the example below:

[root@server ~]# for i in www.example1.com www.example2.com www.example3.com; do echo -n "$i "; (time curl -IL $i -XGET) 2>&1 | grep -E "real|HTTP"; echo; done

www.example1.com HTTP/1.1 200 OK
real	0m0.642s

www.example2.com HTTP/1.1 200 OK
real	0m2.234s

www.example3.com HTTP/1.1 200 OK
real	0m0.421s

So www.example2.com takes 2 seconds to load. What happens to the load times on that domain during increased traffic? The example below will send 25 requests to the domain:

[root@server ~]# for i in {1..25}; do (time curl -sIL http://www.example2.com -X GET &) 2>&1 | grep -E "real|HTTP" & done

HTTP/1.1 200 OK
real	0m11.297s
HTTP/1.1 200 OK
real	0m11.395s
HTTP/1.1 200 OK
real	0m11.906s
HTTP/1.1 200 OK
real	0m12.079s
...
HTTP/1.1 200 OK
real	0m11.297s

Determining why this is happening will involve investigation outside the scope of this article, mainly around ways to cache the site or otherwise optimize it. However, at least now we know which site doesn't perform well under increased requests and may be causing the high server load.

How to setup Proxmox VE 5 with LXC containers on Rackspace Cloud

Testing out changes in a production environment is never a good idea. However, prepping test servers can be tedious, as you have to find the hardware and set up the operating system before you can begin. So I wanted a faster and more cost effective medium: turning a single Cloud Server into a virtualized host server for my test servers. Welcome LXC.

Taken from the provider's site: LXC is an operating system-level virtualization technology for Linux. It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs). LXC is similar to Solaris Containers, FreeBSD jails, and OpenVZ.

To manage my LXC containers, I prefer to use Proxmox VE 5, which provides a clean control panel for managing my containers.

This guide will document how to install Proxmox on a 4G Rackspace Cloud Server running Debian 9. A 50G SSD Cloud Block Storage volume will be attached to the server, utilizing ZFS to store the containers, which is outlined more below. The Proxmox installation will install everything needed to run LXC. The IPs for the containers will be provided via NAT served from the server, therefore creating a self-contained test environment.

Configure system for LXC according to best practices

Increase the open files limit by appending the following to the bottom of /etc/security/limits.conf:

[root@proxmox01 ~]# vim /etc/security/limits.conf
...
*       soft    nofile  1048576
*       hard    nofile  1048576
root    soft    nofile  1048576
root    hard    nofile  1048576
*       soft    memlock 1048576
*       hard    memlock 1048576

Now set up some basic kernel tweaks at the bottom of /etc/sysctl.conf:

[root@proxmox01 ~]# vim /etc/sysctl.conf
...
# LXD best practices:  https://github.com/lxc/lxd/blob/master/doc/production-setup.md
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
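
Then apply the new kernel settings without a reboot by:

[root@proxmox01 ~]# sysctl -p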

Install Proxmox VE 5

For this to work, we start with a vanilla Debian 9 Cloud Server and install Proxmox on top of it, which will install the required kernel.

To get things started, update /etc/hosts to set up your FQDN, and remove any resolvable IPv6 domains:

[root@proxmox01 ~]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
123.123.123.123 proxmox01.yourdomain.com proxmox01-iad

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Test to confirm /etc/hosts is set up properly. This should return your server's IP address:

[root@proxmox01 ~]# hostname --ip-address

Add the Proxmox VE repo and add the repo key:

[[email protected] ~]# echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
[[email protected] ~]# wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg

Update the package index and then update the system for Proxmox:

[root@proxmox01 ~]# apt update && apt dist-upgrade
* Select option for 'Install the package maintainer's version' when asked about grub

Install Proxmox VE and reboot:

[root@proxmox01 ~]# apt install proxmox-ve postfix open-iscsi
[root@proxmox01 ~]# reboot

Once the cloud server comes back online, confirm you are running the pve kernel:

[root@proxmox01 ~]# uname -a
Linux proxmox 4.13.4-1-pve #1 SMP PVE 4.13.4-25 (Fri, 13 Oct 2017 08:59:53 +0200) x86_64 GNU/Linux

Setup NAT for the containers

As the Rackspace Cloud Server comes with one IP address, I will be making use of NATed IP addresses to assign to my individual containers. The steps are documented below.

Update /etc/sysctl.conf to allow IP forwarding:

[root@proxmox01 ~]# vim /etc/sysctl.conf
...
net.ipv4.ip_forward = 1
...

Then apply the new settings without a reboot:

[root@proxmox01 ~]# sysctl -p

To set up the NAT rules, we need a script that runs at boot. Two things need to be taken into consideration here:

1. Change IP address below (123.123.123.123) in the NAT rule to your Cloud server’s public IP address.
2. This assumes you want to use a 192.168.1.0/24 network for your VE’s.

The quick and dirty script is below:

[root@proxmox01 ~]# vim /etc/init.d/lxc-routing
#!/bin/sh
case "$1" in
 start) echo "lxc-routing started"
# It's important that you change the SNAT IP to that of your server (not the local IP but the internet IP).
# The following line adds a route to the IP range that we will later assign to the VPS. That's how you get
# internet access on your VPS.
/sbin/iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j SNAT --to 123.123.123.123

# Allow servers to have access to internet:
/sbin/iptables -A FORWARD -s 192.168.1.0/24 -j ACCEPT
/sbin/iptables -A FORWARD -d 192.168.1.0/24 -j ACCEPT
# Be sure to add net.ipv4.ip_forward=1 to /etc/sysctl.conf, then run sysctl -p

# These are the rules for any port forwarding you want to do
# In this example, all traffic to and from the ports 11001-11019 gets routed to/from the VPS with the IP 192.168.1.1.
# Also the port 11000 is routed to the SSH port of the vps, later on you can ssh into your VPS through yourip:11000

#/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 11000 -j DNAT --to 192.168.1.1:22
#/sbin/iptables -t nat -A PREROUTING -i eth0 -p udp --dport 11001:11019 -j DNAT --to 192.168.1.1
#/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 11001:11019 -j DNAT --to 192.168.1.1

# In my case I also dropped outgoing SMTP traffic, as it's one of the most abused things on servers

#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 25
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 2525
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 587
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 465
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 2526
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 110
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 143
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 993

;;

*) echo "Usage: /etc/init.d/lxc-routing {start}"
exit 2
;;

esac
exit 0

Set the permissions, set it to run on boot, and start it:

[root@proxmox01 ~]# chmod 755 /etc/init.d/lxc-routing
[root@proxmox01 ~]# update-rc.d lxc-routing defaults
[root@proxmox01 ~]# /etc/init.d/lxc-routing start
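
To confirm the NAT and forwarding rules actually loaded, list them by:

[root@proxmox01 ~]# iptables -t nat -L POSTROUTING -n
[root@proxmox01 ~]# iptables -L FORWARD -n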

When you go to start a new container, it may fail to start, with Proxmox complaining about an error similar to the one below:

-- Unit pve-container@100.service has begun starting up.
Nov 06 06:07:07 proxmox01.*********** systemd-udevd[11150]: Could not generate persistent MAC address for vethMVIWQY: No such file or directory
Nov 06 06:07:07 proxmox01.*********** kernel: IPv6: ADDRCONF(NETDEV_UP): veth100i0: link is not ready

This can be corrected by:

[root@proxmox01 ~]# vim /etc/systemd/network/99-default.link
[Link]
NamePolicy=kernel database onboard slot path
MACAddressPolicy=none

Then reboot:

[root@proxmox01 ~]# reboot

Navigate your browser to the control panel, log in with your root SSH credentials, and set up a Linux Bridge:

- Navigate your browser to: https://x.x.x.x:8006
- Click on System --> Network
- On top, click 'Create' --> 'Linux Bridge'
	- Name:  vmbr0
	- IP address:  192.168.1.1
	- Subnet mask: 255.255.255.0
	- Autostart:  checked
	- Leave everything else blank

Set up the 50G SSD Cloud Block Storage volume with ZFS and add it to Proxmox. Assuming the device is already attached, check what it got mapped to by:

[root@proxmox01 ~]# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  80G  0 disk 
└─xvda1 202:1    0  80G  0 part /
xvdb    202:16   0  50G  0 disk   <--- This is my new volume

First, install the ZFS utils for Linux, and enable the kernel module:

[root@proxmox01 ~]# apt-get install zfsutils-linux
[root@proxmox01 ~]# /sbin/modprobe zfs

Then create the zpool using the new drive:

[root@proxmox01 ~]# zpool create zfs /dev/xvdb
[root@proxmox01 ~]# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs   49.8G  97.5K  49.7G         -     0%     0%  1.00x  ONLINE  -
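
You can also confirm the health of the new pool by:

[root@proxmox01 ~]# zpool status zfs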

Now add the new disk to Proxmox:

- Navigate your browser to: https://x.x.x.x:8006
- Click on Datacenter --> Storage
- On top, click 'Add' --> 'ZFS'
	- Name:  zfs
	- ZFS Pool:  zfs
	- Enable:  Checked
	- Thin provision:  Checked

Add Docker support to the containers

Docker can successfully run within an LXC container with some additional configuration. First, create the containers as desired for Docker via Proxmox, then add the following to the bottom of the container's LXC config file:

[root@proxmox01 ~]# vim /etc/pve/lxc/100.conf
...
#insert docker part below
lxc.aa_profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:

After restarting that container, you will be able to install and configure Docker as normal on that container.

Discrepancy on disk usage between df and du

When df reports your filesystem is almost full, but du of your directories doesn't line up with that, what do you do? An example scenario is below:

[root@server ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       40G   36G  1.5G  97% /

[root@server ~]# du -sh /* |grep G
1.8G    /home
1.8G    /root
1.4G    /usr
12G     /var

So there is almost 19G of space unaccounted for! This can happen when a large file is deleted, but the file itself is still held open by a running process. You can see a listing of open files on your server with lsof. To find files that are held open by a process but are marked as deleted, run:

[root@server ~]# lsof |grep del
vsftpd     1479    root  txt       REG              202,1      167536      90144 /usr/sbin/vsftpd (deleted)
mysqld     8432   mysql    4u      REG              202,1           0    1155075 /tmp/ib4ecsWH (deleted)
mysqld     8432   mysql    5u      REG              202,1           0    1155076 /tmp/ib8umUE8 (deleted)
mysqld     8432   mysql    6u      REG              202,1           0    1155077 /tmp/ib0CGmnz (deleted)
mysqld     8432   mysql    7u      REG              202,1           0    1155078 /tmp/ibGK9i6Z (deleted)
mysqld     8432   mysql   10w      REG              202,1 19470520347    1114270 /var/lib/mysql/mysql-general.log (deleted)
mysqld     8432   mysql   11u      REG              202,1           0    1155079 /tmp/ib4M9ZPq (deleted)

Notice the file /var/lib/mysql/mysql-general.log (deleted). Doing the math on the file size, it works out to about 18G. That accounts almost perfectly for the ‘missing’ space that du is not seeing.

So in this case, the culprit is the MySQL general log, which is typically used for debugging purposes only, as it logs every query MySQL runs. The problem with leaving this log enabled is that it can quickly fill up your disks. MySQL simply needs to be restarted so the process will release the file, allowing the space to be reclaimed by the filesystem. Of course, if the general log is no longer being used, it should probably be disabled in the my.cnf.
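
Assuming the MySQL service on this box is named mysqld, reclaiming the space is as simple as:

[root@server ~]# service mysqld restart
[root@server ~]# df -h /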

Nginx Load Balancer

Load balancing applications across multiple servers is a common technique for creating highly available, fault-tolerant solutions. Nginx can serve as a very lightweight and efficient HTTP load balancer to distribute traffic across multiple servers.

This guide will outline one basic way Nginx can be deployed as a load balancer.

Install Nginx

First, install Nginx on your server. In many cases, the version of Nginx supplied by the OS repos is outdated, so this guide will be using Nginx’s official repos instead.

# CentOS 6
[root@server ~]# rpm -i http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm
[root@server ~]# rpm --import http://nginx.org/keys/nginx_signing.key
[root@server ~]# yum install nginx
[root@server ~]# chkconfig nginx on

# CentOS 7
[root@server ~]# rpm -i http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
[root@server ~]# rpm --import http://nginx.org/keys/nginx_signing.key
[root@server ~]# yum install nginx
[root@server ~]# systemctl enable nginx

# Ubuntu 14.04 and 16.04
[root@server ~]# nginx=stable # use nginx=development for latest development version
[root@server ~]# apt-get install python-software-properties
[root@server ~]# add-apt-repository ppa:nginx/$nginx
[root@server ~]# apt-get update
[root@server ~]# apt-get install nginx

Create Load Balancer config

As I have many servers running different domains, I will be setting up Nginx to load balance specific domains to specific servers. So in the example below, www.example.com will be load balanced across my two servers, 192.168.1.161 and 192.168.1.162.

The notable things defined in this example include:

- Uses least connections for routing traffic to the load balanced web servers
- Servers are automatically removed from rotation after repeated failures (max_fails=3 within fail_timeout=15s)
- The load balancer passes the X-Forwarded-For header (the real client IP) to the load balanced web servers

Create the config by:

[root@server ~]# vim /etc/nginx/conf.d/example.com.conf
upstream lb-example.com {

	# Load balancing method, only enable one.
	# blank        # Round robin, leave commented out as it is the default
        least_conn;    # Least connections
	# ip_hash;     # Sticky sessions 

	# Backend load balanced servers
        server 192.168.1.161 max_fails=3 fail_timeout=15s;
        server 192.168.1.162 max_fails=3 fail_timeout=15s;

	# Temporarily disable server in load balancer
        # server 192.168.1.163 down;
}

server {
        listen 80;
        server_name www.example.com example.com;
        access_log  /var/log/nginx/example.com.access.log  main;
        # Enforce https
        # return 301 https://$server_name$request_uri;

        # Pass to load balanced servers
        location / {
        proxy_pass http://lb-example.com;

        # Pass x-forwarded-for headers to backend servers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        }
}

server {
        listen 443 ssl;
        server_name www.example.com example.com;
        access_log  /var/log/nginx/example.com.access.log  main;

        location / {
        proxy_pass https://lb-example.com;

        # Pass x-forwarded-for headers to backend servers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        }
}

Now test the new configuration to ensure there are no errors and restart nginx:

[root@server ~]# nginx -t
[root@server ~]# service nginx restart
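
As a quick smoke test from the load balancer itself, confirm requests are being proxied through to the backends (this assumes at least one backend is up and serving www.example.com):

[root@server ~]# curl -IL -H "Host: www.example.com" http://localhost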

Ensure that you open the software firewall for 80 and 443 accordingly. Below is an example for CentOS 6:

[root@server ~]# vim /etc/sysconfig/iptables
...
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
...
[root@server ~]# service iptables restart

Securing a site with Let’s Encrypt SSL certificates

Let’s Encrypt is a free, automated, and open certificate authority for anyone who wants to secure a website with SSL. I recently had to set up Let’s Encrypt for a client, and found that it was absurdly simple to use with their Certbot ACME client.

WARNING: This guide may become quickly outdated and is really just for my own reference. If you are looking to use Let’s Encrypt, please review the following articles from Let’s Encrypt for the latest installation and setup instructions:
https://letsencrypt.org/getting-started/
https://certbot.eff.org

For this guide, I am assuming the server is running Apache. So to get started, I simply followed the instructions provided on https://certbot.eff.org to get Certbot installed:

# CentOS 6
# There is currently no packaged version of Certbot for CentOS 6.  So you have to download the script manually by:
[root@server ~]# cd /root
[root@server ~]# wget https://dl.eff.org/certbot-auto
[root@server ~]# chmod a+x certbot-auto

# CentOS 7
[root@server ~]# yum install yum-utils
[root@server ~]# yum-config-manager --enable rhui-REGION-rhel-server-extras rhui-REGION-rhel-server-optional
[root@server ~]# yum install certbot-apache

# Ubuntu 14.04
[root@server ~]# apt-get update
[root@server ~]# apt-get install software-properties-common
[root@server ~]# add-apt-repository ppa:certbot/certbot
[root@server ~]# apt-get update
[root@server ~]# apt-get install python-certbot-apache

# Ubuntu 16.04
[root@server ~]# apt-get update
[root@server ~]# apt-get install software-properties-common
[root@server ~]# add-apt-repository ppa:certbot/certbot
[root@server ~]# apt-get update
[root@server ~]# apt-get install python-certbot-apache

The command below will install or update the Certbot script and modify your Apache configs accordingly as it automatically configures the SSL certificate. When you run the tool, it will ask for your email address, have you review the terms of service, and ask you to select which URLs you want the SSL certificate generated for. Always be sure to include both the www and non-www domains unless you don't need one of them for some reason.

[root@server ~]# certbot --apache

One of the great things about Let’s Encrypt certificates, aside from the fact that they're free, is that you can add a cron job to automatically renew the SSL certificate so it doesn't expire. Let’s Encrypt recommends running it twice daily. It won't do anything until your certificates are due for renewal or revoked. Set up the cron job by running:

# CentOS 6
[root@server ~]# crontab -e
0 0,12 * * * /root/certbot-auto renew

# All other OS's:
[root@server ~]# crontab -e
0 0,12 * * * certbot renew
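
Certbot also provides a dry-run mode, which is a safe way to confirm the renewal process works end to end without actually replacing your certificates:

[root@server ~]# certbot renew --dry-run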

Backing up permissions on directory

Before doing anything in Linux, it is always smart to have a rollback plan. Making blanket, recursive permission changes on a directory certainly falls into this category!

Let's say you found a directory on your system where the file permissions were all 777, so you want to secure them a bit by changing the permissions over to 644. Something like:

[root@server ~]# find /var/www/vhosts/domain.com -type f -perm 0777 -print -exec chmod 644 {} \;

The paranoid among us will want to ensure we can revert things back to the way they were before. Thankfully there are two commands that can be used to back up and restore permissions on a directory recursively: getfacl and setfacl.

To back up all the permissions and ownerships within a given directory such as /var/www/vhosts/domain.com, do the following:

[root@server ~]# cd /var/www/vhosts/domain.com
[root@server ~]# getfacl -R . > permissions_backup

Now let's say you ran the find command, changed everything over to 644, then realized you broke your application because it needed some files to be 664 or something. You just want to roll back so you can investigate what happened.

You can roll back the permissions by running:

[root@server ~]# cd /var/www/vhosts/domain.com
[root@server ~]# setfacl --restore=permissions_backup

Backup entire servers permissions

If you want to back up the entire server's permissions, you can do that by:

[root@server ~]# getfacl -R --absolute-names / > server_permissions_backup

And the restoration process remains the same:

[root@server ~]# setfacl --restore=server_permissions_backup

Find command examples

This is just a quick reference page for using find to do basic things.

Find is a pretty powerful tool that accepts a bunch of options for narrowing down your search. Some basic examples of stuff you can do are below.

Find a specific file type and extension older than 300 days and remove them

This will find files that:

- Are older than 300 days
- Are regular files
- Match *.jpg
- Are in this directory only (it will not go into subdirectories)

This also works for those pesky directories that have millions of files.

First, always confirm the command will work before blindly removing files:

[root@server ~]# cd /path/to/directory
[root@server ~]# find . -maxdepth 1 -type f -name '*.jpg' -mtime +300 | xargs ls -al

Once you verified that the files displayed are the ones you want removed, remove them by running:

[root@server ~]# cd /path/to/directory
[root@server ~]# find . -maxdepth 1 -type f -name '*.jpg' -mtime +300 | xargs rm -f
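
As a side note, piping to xargs like this will misbehave on file names containing spaces or newlines. GNU find can delete the matches itself, which sidesteps the problem entirely:

[root@server ~]# find . -maxdepth 1 -type f -name '*.jpg' -mtime +300 -delete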

Find files with 777 permissions

This will find all files that have 777 permissions:

[root@server ~]# cd /path/to/directory
[root@server ~]# find . -type f -perm 0777 -print

This will find all files that do NOT have 777 permissions:

[root@server ~]# cd /path/to/directory
[root@server ~]# find . -type f ! -perm 777

Find Files with 777 Permissions and change to 644

Use caution with this; it is generally not smart to run blindly, as it will go into subdirectories unless you set maxdepth.

[root@server ~]# cd /path/to/directory
[root@server ~]# find . -type f -perm 0777 -print -exec chmod 644 {} \;

Find Directories with 777 Permissions and change to 755

Use caution with this; it is generally not smart to run blindly, as it will go into subdirectories unless you set maxdepth.

[root@server ~]# cd /path/to/directory
[root@server ~]# find . -type d -perm 777 -print -exec chmod 755 {} \;

Find empty directories

[root@server ~]# cd /path/to/directory
[root@server ~]# find . -type d -empty

Find all hidden files within a directory

[root@server ~]# find /path/to/directory -type f -name ".*"

Find files owned by user or group

[root@server ~]# find /var/www -user apache
[root@server ~]# find /var/www -group apache

Find files that were modified in the last 30 days

[root@server ~]# find / -mtime -30

Find files that were modified in the last hour

[root@server ~]# find / -mmin -60

Find files that were changed within the last hour
Note, this one is specified in minutes only!

[root@server ~]# find / -cmin -60

Find files that were accessed in the last 5 days

[root@server ~]# find / -atime -5

Find files that were accessed within the last hour
Note, this one is specified in minutes only!

[root@server ~]# find / -amin -60

Count files per directory with find
This one is useful when you need to find the top 10 directories that contain the largest number of files.

[root@server ~]# vim count-files-per-directory.sh
#!/bin/bash

if [ $# -ne 1 ];then
  echo "Usage: `basename $0` DIRECTORY"
  exit 1
fi

echo "Please wait..."

find "[email protected]" -type d -print0 2>/dev/null | while IFS= read -r -d '' file; do 
    echo -e `ls -A "$file" 2>/dev/null | wc -l` "files in:\t $file"
done | sort -nr | head | awk '{print NR".", "\t", $0}'

exit 0

Now run it against the / directory:

[root@server ~]# bash count-files-per-directory.sh /
Please wait...
1. 	 768 files in:	 /usr/share/man/man1
2. 	 631 files in:	 /usr/lib64/python2.6
3. 	 575 files in:	 /usr/share/locale
4. 	 566 files in:	 /usr/share/vim/vim74/syntax
5. 	 496 files in:	 /usr/bin
6. 	 487 files in:	 /usr/share/man/man8
7. 	 393 files in:	 /usr/share/perl5/unicore/lib/gc_sc
8. 	 380 files in:	 /usr/include/linux
9. 	 354 files in:	 /usr/lib64/python2.6/encodings
10. 	 334 files in:	 /usr/share/man/man3

Or if you only need to run the search in a specific directory:

[root@server ~]# bash count-files-per-directory.sh /usr/share/man
Please wait...
1. 	 768 files in:	 /usr/share/man/man1
2. 	 487 files in:	 /usr/share/man/man8
3. 	 334 files in:	 /usr/share/man/man3
4. 	 124 files in:	 /usr/share/man/man5
5. 	 49 files in:	 /usr/share/man
6. 	 35 files in:	 /usr/share/man/ru/man8
7. 	 31 files in:	 /usr/share/man/man7
8. 	 27 files in:	 /usr/share/man/fr/man8
9. 	 25 files in:	 /usr/share/man/de/man8
10. 	 22 files in:	 /usr/share/man/ja/man8

Rolling back yum transactions

Ever had the system update a package, which winds up breaking the most random things? How can you roll back? And how can you prevent that same buggy package from updating itself again the next time the system checks for updates, yet still get newer versions of that package when they're released?

I ran across something like this recently. The symptom was that PHPMyAdmin was no longer working on this LAMP server. In short, it was found that an Apache update was to blame, which was found in this bug report: https://bz.apache.org/bugzilla/show_bug.cgi?id=61202

So how can the update to Apache be rolled back? First, confirm that Apache was indeed updated recently:

[root@server ~]# tail /var/log/yum.log
Jul 08 04:23:49 Updated: httpd24u-filesystem-2.4.26-1.ius.centos6.noarch
Jul 08 04:23:49 Updated: httpd24u-tools-2.4.26-1.ius.centos6.x86_64
Jul 08 04:23:50 Updated: httpd24u-2.4.26-1.ius.centos6.x86_64
Jul 08 04:23:50 Updated: 1:httpd24u-mod_ssl-2.4.26-1.ius.centos6.x86_64

Now find the transaction ID within yum by running:

[root@server ~]# yum history
ID     | Login user               | Date and time    | Action(s)      | Altered
-------------------------------------------------------------------------------
   220 | root               | 2017-07-08 04:23 | Update         |    4

View the details of this transaction by running:

[root@server ~]# yum history info 220
...
Transaction performed with:
    Installed     rpm-4.8.0-55.el6.x86_64                       @centos6-x86_64
    Installed     yum-3.2.29-81.el6.centos.noarch               @centos6-x86_64
    Installed     yum-metadata-parser-1.1.2-16.el6.x86_64       @anaconda-CentOS-201410241409.x86_64/6.6
    Installed     yum-plugin-fastestmirror-1.1.30-40.el6.noarch @centos6-x86_64
    Installed     yum-rhn-plugin-2.4.6-1.el6.noarch             @spacewalk
Packages Altered:
    Updated httpd24u-2.4.25-4.ius.centos6.x86_64            @rackspace-centos6-x86_64-ius
    Update           2.4.26-1.ius.centos6.x86_64            @rackspace-centos6-x86_64-ius
    Updated httpd24u-filesystem-2.4.25-4.ius.centos6.noarch @rackspace-centos6-x86_64-ius
    Update                      2.4.26-1.ius.centos6.noarch @rackspace-centos6-x86_64-ius
    Updated httpd24u-mod_ssl-1:2.4.25-4.ius.centos6.x86_64  @rackspace-centos6-x86_64-ius
    Update                   1:2.4.26-1.ius.centos6.x86_64  @rackspace-centos6-x86_64-ius
    Updated httpd24u-tools-2.4.25-4.ius.centos6.x86_64      @rackspace-centos6-x86_64-ius
    Update                 2.4.26-1.ius.centos6.x86_64      @rackspace-centos6-x86_64-ius
history info
...

To roll back the updates, getting us back to Apache 2.4.25 in this case, simply undo the transaction by running:

[root@server ~]# yum history undo 220

Then confirm Apache is back to the previous version 2.4.25:

[root@server ~]# rpm -qa |grep -i httpd24u
httpd24u-filesystem-2.4.25-4.ius.centos6.noarch
httpd24u-2.4.25-4.ius.centos6.x86_64
httpd24u-mod_ssl-2.4.25-4.ius.centos6.x86_64
httpd24u-tools-2.4.25-4.ius.centos6.x86_64

Next, restart Apache so the changes take effect:

[root@server ~]# service httpd restart

Finally, exclude the buggy packages from being installed again. In this example, Apache 2.4.26 will never be installed; however, any newer versions released after that will install/update normally.

[root@server ~]# yum install yum-plugin-versionlock
[root@server ~]# yum versionlock add! httpd24u-mod_ssl-2.4.26-1.ius.centos6.x86_64 httpd24u-2.4.26-1.ius.centos6.x86_64 httpd24u-tools-2.4.26-1.ius.centos6.x86_64 httpd24u-filesystem-2.4.26-1.ius.centos6.noarch
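
To review the locks later, or to remove one once a fixed version ships, the versionlock plugin provides list and delete subcommands (pass delete an entry exactly as shown in the list output):

[root@server ~]# yum versionlock list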