CVE-2016-2183 SWEET32 Birthday attacks

Lately, vulnerability scanners have been flagging servers that are susceptible to CVE-2016-2183. In a nutshell, you need to disable any TLS ciphers using 3DES. More detailed information about this vulnerability and why it exists can be found at the links below:
https://access.redhat.com/articles/2548661
https://sweet32.info

Mitigating this vulnerability within Apache is pretty straightforward. Below are the steps to confirm whether you are actually affected by this vulnerability, and how to remediate it.

First, confirm your Apache web server is actually vulnerable to this by seeing if the 3DES ciphers are returned in this nmap test:

[user@workstation ~]# nmap --script ssl-enum-ciphers -p 443 SERVER_IP_ADDRESS_HERE
Starting Nmap 5.51 ( http://nmap.org ) at 2016-11-30 17:57 EST
Nmap scan report for xxxxxxxx (xxx.xxx.xxx.xxx)
Host is up (0.018s latency).
PORT    STATE SERVICE
443/tcp open  https
| ssl-enum-ciphers: 
|   TLSv1.2
|     Ciphers (14)
|       TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA
|       TLS_RSA_WITH_AES_128_CBC_SHA
|       TLS_RSA_WITH_AES_128_CBC_SHA256
|       TLS_RSA_WITH_AES_128_GCM_SHA256
|       TLS_RSA_WITH_AES_256_CBC_SHA
|       TLS_RSA_WITH_AES_256_CBC_SHA256
|       TLS_RSA_WITH_AES_256_GCM_SHA384
|     Compressors (1)
|_      uncompressed

As you can see in the output above, this server is affected by this vulnerability, as it's allowing the following 3DES ciphers:

TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
TLS_RSA_WITH_3DES_EDE_CBC_SHA
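
If you want a second opinion without nmap, you can also spot-check a 3DES cipher directly with openssl s_client (DES-CBC3-SHA is the OpenSSL name for TLS_RSA_WITH_3DES_EDE_CBC_SHA; a successful handshake means the cipher is still enabled, and the exact cipher names available can vary with your local OpenSSL build):

[user@workstation ~]# openssl s_client -connect SERVER_IP_ADDRESS_HERE:443 -cipher DES-CBC3-SHA < /dev/null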

Disabling this in Apache is pretty easy. Simply navigate to wherever you have your SSLCipherSuite configuration defined and disable 3DES. Typically this should be in /etc/httpd/conf.d/ssl.conf; however, some may also have this defined in each individual Apache vhost. If you are unsure where it's configured, you should be able to locate it on your server by running:

# CentOS / Red Hat
[root@web01 ~]# egrep -R SSLCipherSuite /etc/httpd/*

# Ubuntu / Debian
[root@web01 ~]# egrep -R SSLCipherSuite /etc/apache2/*

Once you locate the config(s) that contain this directive, you simply add !3DES to the end of the SSLCipherSuite line as shown below:

[root@web01 ~]# vim /etc/httpd/conf.d/ssl.conf
...
SSLCipherSuite EECDH+AESGCM:EECDH+AES256:EECDH+AES128:EDH+AES:RSA+AESGCM:RSA+AES:!ECDSA:!NULL:!MD5:!DSS:!3DES
...
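
Before restarting, you can sanity check the change. openssl will show exactly which ciphers a given SSLCipherSuite string expands to, and apachectl can verify the configuration syntax:

[root@web01 ~]# openssl ciphers -v 'EECDH+AESGCM:EECDH+AES256:EECDH+AES128:EDH+AES:RSA+AESGCM:RSA+AES:!ECDSA:!NULL:!MD5:!DSS:!3DES'
[root@web01 ~]# apachectl -t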

Once that is done, restart Apache by:

# CentOS / Red Hat
[root@web01 ~]# service httpd restart

# Ubuntu / Debian
[root@web01 ~]# service apache2 restart
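
On newer systemd-based releases (CentOS 7 / RHEL 7, Ubuntu 15.04 and later), the equivalent would be:

# CentOS / Red Hat
[root@web01 ~]# systemctl restart httpd

# Ubuntu / Debian
[root@web01 ~]# systemctl restart apache2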

Finally, retest using nmap to confirm no ciphers using 3DES show up:

[user@workstation ~]# nmap --script ssl-enum-ciphers -p 443 SERVER_IP_ADDRESS_HERE
Starting Nmap 5.51 ( http://nmap.org ) at 2016-11-30 18:03 EST
Nmap scan report for xxxxxxxx (xxx.xxx.xxx.xxx)
Host is up (0.017s latency).
PORT    STATE SERVICE
443/tcp open  https
| ssl-enum-ciphers: 
|   TLSv1.2
|     Ciphers (12)
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
|       TLS_RSA_WITH_AES_128_CBC_SHA
|       TLS_RSA_WITH_AES_128_CBC_SHA256
|       TLS_RSA_WITH_AES_128_GCM_SHA256
|       TLS_RSA_WITH_AES_256_CBC_SHA
|       TLS_RSA_WITH_AES_256_CBC_SHA256
|       TLS_RSA_WITH_AES_256_GCM_SHA384
|     Compressors (1)
|_      uncompressed

If no 3DES ciphers are returned like in the listing above, you should be good to rerun your vulnerability scan!

Basic Apache Hardening

Below are a couple of the more common best practices that should be used when hardening Apache. These are simply some basics for mitigating a few of the more common CVEs that have been cropping up in Apache.

At the very minimum, disable the TRACE method and prevent information disclosure by updating the ServerTokens and ServerSignature directives. This can be done within Apache by modifying the following file:

# CentOS 5 and 6
vim /etc/httpd/conf.d/security.conf

# Ubuntu 12.04
vim /etc/apache2/conf.d/security

Then set it accordingly as shown below:

# Disable access to the entire file system except for the directories that
# are explicitly allowed later.
#
# This currently breaks the configurations that come with some web application
# Debian packages. It will be made the default for the release after lenny.
#
#<Directory />
#       AllowOverride None
#       Order Deny,Allow
#       Deny from all
#</Directory>


# Changing the following options will not really affect the security of the
# server, but might make attacks slightly more difficult in some cases.

#
# ServerTokens
# This directive configures what you return as the Server HTTP response
# Header. The default is 'Full' which sends information about the OS-Type
# and compiled in modules.
# Set to one of:  Full | OS | Minimal | Minor | Major | Prod
# where Full conveys the most information, and Prod the least.
#
ServerTokens Prod 

#
# Optionally add a line containing the server version and virtual host
# name to server-generated pages (internal error documents, FTP directory
# listings, mod_status and mod_info output etc., but not CGI generated
# documents or custom error documents).
# Set to "EMail" to also include a mailto: link to the ServerAdmin.
# Set to one of:  On | Off | EMail
#
ServerSignature Off 

#
# Allow TRACE method
#
# Set to "extended" to also reflect the request body (only for testing and
# diagnostic purposes).
#
# Set to one of:  On | Off | extended
#
TraceEnable Off

Another common area to lock down further from the vendor defaults is the SSL configuration, which is located in:

# CentOS 5 and 6
vim /etc/httpd/conf.d/ssl.conf

# Ubuntu 12.04
vim /etc/apache2/mods-enabled/ssl.conf

The most common ones I see on security reports are:
Set SSLHonorCipherOrder to ‘on’
Restrict the allowed ciphers in SSLCipherSuite
Enable only secure protocols

The ciphers can be a bit tricky, especially if you have a WAF or IDS in front of your solution. There is no one-size-fits-all here, so please be sure to test your site after making these changes, as they can cause problems if set incorrectly for your solution. I'll post some scenarios below.

To secure your ssl.conf against many of the vulnerabilities published at the time of this writing, disable TLSv1.0 (dropping it will be a PCI requirement come June 2018), and enable forward secrecy, you can use:

SSLCipherSuite EECDH+AESGCM:EECDH+AES256:EECDH+AES128:EDH+AES:RSA+AESGCM:RSA+AES:!ECDSA:!NULL:!MD5:!DSS:!3DES
SSLProtocol -ALL +TLSv1.1 +TLSv1.2
SSLHonorCipherOrder On

If you prefer to leave TLSv1.0 enabled for the time being because you still have clients connecting to your site with unsupported browsers on Windows XP that don't support anything above TLSv1.0, then you can try the following:

SSLCipherSuite ALL:!EXP:!NULL:!ADH:!LOW:!SSLv3:!SSLv2:!MD5:!RC4:!DSS:!3DES
SSLProtocol all -SSLv2 -SSLv3
SSLHonorCipherOrder On

If you have an Imperva WAF or Alert Logic IDS in front of your solution that needs to decrypt the SSL traffic for analysis, you can't use forward secrecy, since these devices need to perform a man-in-the-middle on the traffic. If you still want to disable insecure ciphers, modify the variables in the ssl.conf as follows:

SSLCipherSuite HIGH:!MEDIUM:!AESGCM:!ECDH:!aNULL:!ADH:!DH:!EDH:!CAMELLIA:!GCM:!KRB5:!IDEA:!EXP:!eNULL:!LOW:!RC4:!3DES
SSLProtocol all -SSLv2 -SSLv3
SSLHonorCipherOrder On

As a final note, Mozilla also put out a config generator for this at https://mozilla.github.io/server-side-tls/ssl-config-generator/. It can provide some additional viewpoints on how you can go about choosing your ciphers.

Setup ClamAV for nightly scans

PCI-DSS 3.1 Requirement 5 states the following:

Protect all systems against malware and regularly update anti-virus software or programs.

There are commercial solutions out there for Linux-based systems, but cost can become an issue, especially for small companies with a small footprint within their cardholder data environment (CDE). So can one satisfy this requirement without breaking the bank? I personally prefer ClamAV.

Taken from the project's website, ClamAV is an open source antivirus engine for detecting trojans, viruses, malware and other malicious threats.

My requirements:
1. I want to scan my entire system nightly.
2. All virus reports are emailed to me so I can archive them for a year offsite.
3. Have the antivirus definitions updated nightly before the scan.

Installing, running and maintaining ClamAV is very straightforward on Linux-based systems. To get started, install ClamAV by:

# CentOS 6 / RedHat 6
[root@web01 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@web01 ~]# yum install clamav mailx

# CentOS 7 / RedHat 7
[root@web01 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
[root@web01 ~]# yum install clamav clamav-update mailx
[root@web01 ~]# sed -i '/^Example/d' /etc/freshclam.conf

# Ubuntu 12.04 / Ubuntu 14.04
[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install clamav mailutils

Now update the virus definitions by running:

[root@web01 ~]# freshclam

Finally, configure the virus definitions to update nightly, and also scan the entire system and email a report:

[root@web01 ~]# crontab -e
00 2 * * *  /usr/bin/freshclam
00 3 * * * /usr/bin/clamscan -r -i / | mail -s "ClamAV Report : INSERT_SERVER_HOSTNAME_HERE" [email protected]

Posted below is an example report ClamAV would send me via email nightly:

----------- SCAN SUMMARY -----------
Known viruses: 4289299
Engine version: 0.99
Scanned directories: 51929
Scanned files: 808848
Infected files: 0
Total errors: 10982
Data scanned: 76910.89 MB
Data read: 83578.27 MB (ratio 0.92:1)
Time: 6641.424 sec (110 m 41 s)
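
As a side note, the large "Total errors" count in a full filesystem scan usually comes from files clamscan cannot read, including pseudo-filesystems like /proc and /sys. A variation of the scan cron that skips them (a sketch; adjust the excluded paths for your system):

00 3 * * * /usr/bin/clamscan -r -i --exclude-dir="^/proc" --exclude-dir="^/sys" --exclude-dir="^/dev" / | mail -s "ClamAV Report : INSERT_SERVER_HOSTNAME_HERE" [email protected]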

How does one go about testing ClamAV to ensure it's working? There is a known antivirus test file that was designed specifically for this purpose by www.eicar.org. To create this file, simply set up the following test file, then rerun your ClamAV scan:

[root@web01 ~]# vim /tmp/EICAR-AV-Test
...
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
...
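
Alternatively, you can create the test file and scan it in one shot from the shell. The single quotes matter here so bash does not mangle the $ and ! characters:

[root@web01 ~]# echo 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > /tmp/EICAR-AV-Test
[root@web01 ~]# clamscan /tmp/EICAR-AV-Test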

Using AIDE for file integrity monitoring (FIM) on Ubuntu or Debian

PCI-DSS 3.1 section 10.5.5 has the following requirement:

Use file-integrity monitoring or change-detection software on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert).

For large solutions, I would suggest using a well known tool such as Tripwire Enterprise. However, many small to mid-size companies that have a small footprint within their cardholder data environment (CDE) may not be able to afford this. So what can companies use to meet this requirement? Implement AIDE (Advanced Intrusion Detection Environment).

Taken from the project's website, AIDE creates a database from the regular expression rules that it finds in the config file(s). Once this database is initialized, it can be used to verify the integrity of the files.

AIDE is a very simple (yet powerful) program that runs from cron, checking your files (typically once a night) and scanning your system for any changes in the directories it's monitoring. There are a number of different ways to use this program, but I'll outline one that I like to use.

My requirements:
1. I want the reports to run nightly.
2. All change reports are emailed to me so I can archive them for a year offsite.
3. Have the database automatically commit the additions, deletions, and changes to baseline each time it's run.

In the event my system was compromised, I want to ensure that the malicious user is not able to modify or delete my previous reports. Therefore, I choose not to store them on the machine. It's true that once the malicious user gained access to my system, they could change my AIDE config on me, but at least my previous reports will be intact, which should help me when determining what malicious changes this user made to my server. Please note that I am making an assumption here that you are already backing up your system nightly, which would include your AIDE database! If you do not currently have a backup strategy in place, get one. Tools such as AIDE help identify what files a malicious user may have changed, but if they completely crippled the system, you will need to restore from backups.

Setting up AIDE is fairly straightforward. It exists in most package repositories out there, including most variants of Linux and BSD.

On Ubuntu or Debian based systems, you can install it by:

[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install aide

Now, to set up some basic configurations such as the email notifications and update type, modify the AIDE system configuration file accordingly:

[root@web01 ~]# vim /etc/default/aide
...
FQDN=web01.domain.com
MAILSUBJ="Daily AIDE report for $FQDN"
[email protected]
QUIETREPORTS=no
COMMAND=update
COPYNEWDB=yes
...

Now that AIDE is installed and the basic preferences are in place, it's time to check out the main configuration files. The configuration shipped by the upstream provider should give you a reasonable starting point. But what if you wanted to add your website's DocumentRoot to this so you can keep track of what files are changing on your website? The Debian/Ubuntu way of configuring AIDE is a bit different from the CentOS/RHEL method.

All the configuration files reside in /etc/aide/aide.conf.d/. The number prefix on each file is used by the AIDE wrapper to decide the order in which to process these files. The AIDE documentation seems to indicate that the most general rules should be processed last, so I'll default to creating my server's profile as 50_aide_CUSTOM-RULES.

So let's say I want to monitor my DocumentRoot; here is how this would be set up:

[root@web01 ~]# vim /etc/aide/aide.conf.d/50_aide_CUSTOM-RULES
...
/var/www/vhosts/domain.com Full
...

Now AIDE will be keeping track of our website. But adding your site may lead to very noisy reports, because most websites implement caching. So this becomes a balancing act: exclude directories that change often, yet retain enough of your site's critical content. We could just leave the entire directory in AIDE, but I know I personally don't want to read a change report that contains 1,000 changes every day. So in the case of this WordPress site, I exclude the cache directory by appending the following to my custom configuration:

[root@web01 ~]# vim /etc/aide/aide.conf.d/50_aide_CUSTOM-RULES
...
/var/www/vhosts/domain.com Full
!/var/www/vhosts/domain.com/web/wp-content/cache
...

The “!” means NOT to monitor that specific directory. You will need to run AIDE a few times and fine tune the configuration before you get a report that is useful for your specific needs.
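
If Full is still too noisy for a content directory, AIDE also lets you define your own rule from individual attribute checks. A minimal sketch (the rule name WebContent is my own; p, u, g, n, and sha256 are standard AIDE selectors for permissions, user, group, link count, and checksum):

[root@web01 ~]# vim /etc/aide/aide.conf.d/50_aide_CUSTOM-RULES
...
WebContent = p+u+g+n+sha256
/var/www/vhosts/domain.com WebContent
!/var/www/vhosts/domain.com/web/wp-content/cache
...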

Anytime a change is made to your AIDE configuration, you need to rebuild the AIDE runtime configuration and re-initialize the database. You do that by:

[root@web01 ~]# update-aide.conf
[root@web01 ~]# aideinit -y -f

Now, try making a basic change to /etc/hosts, then run a check on AIDE to see if it detects the change and emails out the report:

[root@web01 ~]# /etc/cron.daily/aide

If you want to quickly test AIDE to ensure it picks up your changes without committing them to baseline, you can perform a one-time scan by:

[root@web01 ~]# aide.wrapper

To receive nightly AIDE reports, no further configuration is needed, since Ubuntu/Debian already sets up a cron job that runs AIDE automatically in /etc/cron.daily/aide. This will run whenever your system normally runs the cron.daily jobs, which is defined in /etc/crontab.

Posted below is an example report that AIDE would send me via email daily:

This is an automated report generated by the Advanced Intrusion Detection 
Environment on web01.domain.com started at 2016-03-07 13:16:35.

AIDE returned with exit code 7. Added, removed and changed files detected!
AIDE post run information
output database /var/lib/aide/aide.db.new was copied to /var/lib/aide/aide.db as requested by cron job configuration
End of AIDE post run information

AIDE produced no errors.

Output of the daily AIDE run (83 lines):
AIDE 0.15.1 found differences between database and filesystem!!
Start timestamp: 2016-03-07 13:16:35

Summary:
  Total number of files:	77937
  Added files:			2
  Removed files:		3
  Changed files:		7


---------------------------------------------------
Added files:
---------------------------------------------------

f++++++++++++++++: /var/log/aide/aide.log.0
d++++++++++++++++: /var/www/vhosts/domain.com/new

---------------------------------------------------
Removed files:
---------------------------------------------------

f----------------: /var/www/vhosts/domain.com/blah
f----------------: /var/www/vhosts/domain.com/test
d----------------: /var/www/vhosts/domain.com/test1

---------------------------------------------------
Changed files:
---------------------------------------------------

f   p.g    . A. .: /var/log/aide/aide.log
d =.... mc.. .. .: /var/spool/postfix/active
d =.... mc.. .. .: /var/spool/postfix/incoming
d =.... mc.. .. .: /var/spool/postfix/maildrop
F =.... mc.. ..  : /var/spool/postfix/public/pickup
F =.... mc.. ..  : /var/spool/postfix/public/qmgr
d =.... mc.. .. .: /var/www/vhosts/domain.com

---------------------------------------------------
Detailed information about changes:
---------------------------------------------------


File: /var/log/aide/aide.log
 Perm     : -rw-------                       , -rw-r-----
 Gid      : 0                                , 4
 ACL      : old = A:
----
user::rw-
group::---
other::---
----
                  D: 
            new = A:
----
user::rw-
group::r--
other::---
----
                  D: 

Directory: /var/spool/postfix/active
 Mtime    : 2016-03-07 13:10:36              , 2016-03-07 13:13:23
 Ctime    : 2016-03-07 13:10:36              , 2016-03-07 13:13:23

Directory: /var/spool/postfix/incoming
 Mtime    : 2016-03-07 13:10:36              , 2016-03-07 13:13:23
 Ctime    : 2016-03-07 13:10:36              , 2016-03-07 13:13:23

Directory: /var/spool/postfix/maildrop
 Mtime    : 2016-03-07 13:10:36              , 2016-03-07 13:13:23
 Ctime    : 2016-03-07 13:10:36              , 2016-03-07 13:13:23

FIFO: /var/spool/postfix/public/pickup
 Mtime    : 2016-03-07 13:12:37              , 2016-03-07 13:17:37
 Ctime    : 2016-03-07 13:12:37              , 2016-03-07 13:17:37

FIFO: /var/spool/postfix/public/qmgr
 Mtime    : 2016-03-07 13:10:36              , 2016-03-07 13:13:36
 Ctime    : 2016-03-07 13:10:36              , 2016-03-07 13:13:36

Directory: /var/www/vhosts/domain.com
 Mtime    : 2016-03-07 13:03:25              , 2016-03-07 13:16:17
 Ctime    : 2016-03-07 13:03:25              , 2016-03-07 13:16:17

End of AIDE output.

The check was done against /var/lib/aide/aide.db with the following characteristics:
 Size     : 13041865
 Bcount   : 25480
 Mtime    : 2016-03-07 13:13:23
 Ctime    : 2016-03-07 13:13:23
 Inode    : 273628
 RMD160   : bIthG3Q5FiJmj4CIYdASjJx5Ygc=
 TIGER    : omto0nb3/oIqIiKHEjnbhjvXeGdfycbV
 SHA256   : VJPGKy61GxGfcSrjJFbrP879y/skJaiQ
 SHA512   : 7pz3FdYh8TvoNOqjxWBToZQNG6oxmrrp
 CRC32    : 1dYwqA==
 HAVAL    : LBFzyApqoYn7ogzoROG5FpneBO1s7R3p
 GOST     : iJ1tWPLtYaxxoFDHZEW8gxCS3/pVlS1G

The AIDE run created a new database /var/lib/aide/aide.db.new with the following characteristics:
 Size     : 13041834
 Bcount   : 25480
 Inode    : 273627
 RMD160   : 4TKRFSc0nt/VGDVvPEY8U6YNzaw=
 TIGER    : o4RzDHHWBlH+Zt3P7vI8GHHgGV1OecrC
 SHA256   : Gher/aINaU8r73/lQEWLQQSsKqP7sGjO
 SHA512   : D0/w3S6NOLZHw7D7dt1QxYBXe6miP5hF
 CRC32    : 5SRdpg==
 HAVAL    : pe7+ai57TPpW34NjJgTQxs+cQsFJ9zq0
 GOST     : RrIiyspbpKEb5wEGSG2HTYM7N6NUtKSv

End of AIDE daily cron job at 2016-03-07 13:18, run time 102 seconds

So this report tells me that a log file for AIDE was rotated out, a new folder was created in my DocumentRoot called new, and the files/folders blah, test, and test1 were removed from my DocumentRoot.

Please remember that utilizing a tool to provide file integrity monitoring is only one part of a defense in depth strategy. There is no silver bullet for system security, but every layer you add increases your security footprint and helps you take a proactive approach to security.

SSH brute force prevention with fail2ban

Ever take a look at your server's auth logs and do a quick count of how many failed SSH login attempts you had on your server last week? It's very common to see hundreds, if not thousands, of attempts in a very short period of time. Assuming you cannot use a firewall to restrict SSH access to only authorized IP addresses, how do you mitigate these brute force attacks?

There are many tools out there to help with this. One I like is fail2ban. This program scans through log files and takes action against events such as repeated failed login attempts, and blocks the offending IP address for a set period of time.

Procedure

On CentOS systems, fail2ban can be installed from the EPEL repositories. If you do not have EPEL installed, you can get it set up by:

CentOS 5

rpm -ivh http://archives.fedoraproject.org/pub/archive/epel/5/x86_64/epel-release-5-4.noarch.rpm

CentOS 6

rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

CentOS 7

rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm

Now install fail2ban:

yum install fail2ban
cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Now customize /etc/fail2ban/jail.local accordingly for your server. Posted below are some of the more commonly configured options for CentOS 6 servers. The defaults (at the time of this writing) should protect SSH by banning, via iptables, any host with 3 or more failed login attempts for 10 minutes. So the defaults should be okay, but you may want to consider adding your workstation's IP address to the ignoreip list below so you don't lock yourself out by accident!

vi /etc/fail2ban/jail.local

# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not
# ban a host which matches an address in this list. Several addresses can be
# defined using space separator.
ignoreip = 127.0.0.1/8

# "bantime" is the number of seconds that a host is banned.
bantime  = 600

# A host is banned if it has generated "maxretry" during the last "findtime"
# seconds.
findtime  = 600

# "maxretry" is the number of failures before a host get banned.
maxretry = 3

[sshd]

# To use more aggressive sshd filter (inclusive sshd-ddos failregex):
#filter = sshd-aggressive
enabled = true
port    = ssh
logpath = %(sshd_log)s
backend = %(sshd_backend)s

Finally, set fail2ban to start at boot, and start the service:

chkconfig fail2ban on
service fail2ban start
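
Once it is running, you can confirm the jail is active, review current bans, and unban an address you locked out by accident. The jail name sshd matches the configuration above (older setups may use ssh-iptables instead), and the IP below is just an example:

fail2ban-client status
fail2ban-client status sshd
fail2ban-client set sshd unbanip 192.0.2.10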

As a quick side note, sometimes hosting providers will automatically install fail2ban for you. And depending on the host, they may configure it in such a way that it sends an email each time an IP address gets banned from SSH. This can quickly create a flood of email or email failures, especially if it's not configured with a real email address.

If you are having issues like this and your fail2ban configuration was set to email you, you can prevent fail2ban from sending emails for SSH bans by removing the following line from the ssh-iptables block:

sendmail-whois[name=SSH, dest=root, [email protected]]

As a live example, assuming it was previously configured, here is what it would look like before you make the change:

vim /etc/fail2ban/jail.conf
...
[ssh-iptables]

enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
           sendmail-whois[name=SSH, dest=root, [email protected]]
logpath  = /var/log/secure
maxretry = 5

And here is what it would look like after you make the change:

vim /etc/fail2ban/jail.conf
...
[ssh-iptables]

enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
logpath  = /var/log/secure
maxretry = 5

And be sure to restart fail2ban after the configuration update has been completed:

service fail2ban restart

Malware Detection – rkhunter

Following on from my previous articles, there are several good malware detection tools out there. These scanners help notify you of malware, hopefully before your clients notify you. Some of the common ones include:

chkrootkit
Linux Malware Detect (maldet)
rkhunter

Each has its own strong points, and they certainly complement each other nicely when used together, depending on the solution's security strategy.

Rkhunter is similar in nature to chkrootkit, and I feel that both complement each other nicely. Taken from Wikipedia's page:

rkhunter (Rootkit Hunter) is a Unix-based tool that scans for rootkits, backdoors and possible local exploits. It does this by comparing SHA-1 hashes of important files with known good ones in online databases, searching for default directories (of rootkits), wrong permissions, hidden files, suspicious strings in kernel modules, and special tests for Linux and FreeBSD.

Procedure

Installing rkhunter is pretty straightforward, as shown below. On CentOS systems, rkhunter comes from the EPEL repositories, so the commands below set up EPEL first if you do not already have it installed:

# CentOS 5 / RedHat 5
[root@web01 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
[root@web01 ~]# yum install rkhunter mailx

# CentOS 6 / RedHat 6
[root@web01 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@web01 ~]# yum install rkhunter mailx

# CentOS 7 / RedHat 7
[root@web01 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
[root@web01 ~]# yum install rkhunter mailx

# Ubuntu / Debian
[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install rkhunter mailutils

Now that the installation is out of the way, let's configure rkhunter to send an email if a warning is found during a scan:

[root@web01 ~]# vim /etc/rkhunter.conf
# Change
MAIL-ON-WARNING=""
# To
MAIL-ON-WARNING="[email protected]"

Now fetch the latest updates, create a baseline, and run an on-demand scan:

[root@web01 ~]# rkhunter --update 
[root@web01 ~]# rkhunter --propupd
[root@web01 ~]# rkhunter --sk -c

On CentOS and RHEL, configure cron so this runs automatically:

First, confirm the cronjob exists:

[root@web01 ~]# cat /etc/cron.daily/rkhunter

Now, update the rkhunter configuration with your email address so you can receive the nightly reports:

[root@web01 ~]# vi /etc/sysconfig/rkhunter
# Change
MAILTO=root@localhost
# To
[email protected]

On Ubuntu based systems, confirm the cronjob exists:

[root@web01 ~]# cat /etc/cron.daily/rkhunter

Now, update the rkhunter configuration with your email address so you can receive the nightly reports:

[root@web01 ~]# vi /etc/default/rkhunter
# Change
APT_AUTOGEN="false"
REPORT_EMAIL="root"

# To
APT_AUTOGEN="true"
REPORT_EMAIL="[email protected]"

NOTE: See https://help.ubuntu.com/community/RKhunter for more information about APT_AUTOGEN.

Malware Detection – Linux Malware Detect (maldet)

Following on from my previous articles, there are several good malware detection tools out there. These scanners help notify you of malware, hopefully before your clients notify you. Some of the common ones include:

chkrootkit
Linux Malware Detect (maldet)
rkhunter

Each has its own strong points, and they certainly complement each other nicely when used together, depending on the solution's security strategy.

Maldet happens to be one of the more useful tools I often use for checking suspected compromised servers. This scanner is a very in-depth tool that is great at locating backdoors within binaries and website content, as well as other malicious tools. Below is a quick introduction to installing and configuring maldet, as well as setting up a nightly cron job.

Procedure

As a prerequisite, install ClamAV so that maldet can utilize ClamAV's scanner engine. This will significantly speed up the scan time, and it should be considered a hard requirement:

# CentOS / RedHat
[root@web01 ~]# yum install clamav

# Ubuntu / Debian
[root@web01 ~]# apt-get install clamav

Then install maldet from source by:

[root@web01 ~]# wget http://www.rfxn.com/downloads/maldetect-current.tar.gz
[root@web01 ~]# tar -xf maldetect-current.tar.gz
[root@web01 ~]# cd maldetect-*
[root@web01 ~]# ./install.sh

Now you can run the scan. You could run the scan against the / directory, but depending on how much data you have, this could take hours or days. Normally I use this tool to check my website DocumentRoots, as I have other tools installed to monitor other parts of my system. So assuming our website content is in /var/www, here is how you scan that directory:

[root@web01 ~]# maldet -u
[root@web01 ~]# maldet --scan-all /var/www

If you would like to scan the entire server, which is good to do if you suspect you have been compromised, then you can run:

[root@web01 ~]# maldet --scan-all /

To set up a nightly cronjob that scans your content and emails you a report if something is found, first modify the conf.maldet file as follows:

[root@web01 ~]# vi /usr/local/maldetect/conf.maldet
...
email_alert=1
email_addr="[email protected]"
...

Maldet automatically installs a cronjob for this, and you can feel free to modify /etc/cron.daily/maldet to your liking. However, I prefer to remove it and set up my own. Please choose whatever works best for your needs. Here is one method:

[root@web01 ~]# rm /etc/cron.daily/maldet
[root@web01 ~]# crontab -e
...
32 3 * * * /usr/local/maldetect/maldet -d -u ; /usr/local/maldetect/maldet --scan-all /var/www
...

Now, if maldet detects anything it flags as malware, it will send you a report via email for you to investigate.

If you are like me and want maldet to send you daily reports via email regardless of whether it found something or not, then simply set up the following cron job instead:

[root@web01 ~]# rm /etc/cron.daily/maldet
[root@web01 ~]# crontab -e
...
32 3 * * * /usr/local/maldetect/maldet -d -u ; /usr/local/maldetect/maldet --scan-all /var/www | mail -s "Maldet Report : INSERT_HOSTNAME_HERE" [email protected]
...

The reports will look similar to the following:

Linux Malware Detect v1.5
           (C) 2002-2015, R-fx Networks 
           (C) 2015, Ryan MacDonald 
This program may be freely redistributed under the terms of the GNU GPL v2
maldet(18252): {scan} signatures loaded: 10822 (8908 MD5 / 1914 HEX / 0 USER)
maldet(18252): {scan} building file list for /var/www, this might take awhile...
maldet(18252): {scan} setting nice scheduler priorities for all operations: cpunice 19 , ionice 6
maldet(18252): {scan} file list completed in 35s, found 278654 files...
maldet(18252): {scan} found clamav binary at /usr/bin/clamscan, using clamav scanner engine...
maldet(18252): {scan} scan of /var/www (278654 files) in progress...
maldet(18252): {scan} scan completed on /var/www: files 278654, malware hits 0, cleaned hits 0, time 2355s
maldet(18252): {scan} scan report saved, to view run: maldet --report 151111-0420.12250
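
If a scan does flag malware, maldet can quarantine the hits after the fact using the scan ID from the report (the scan ID below is the example one shown above):

[root@web01 ~]# maldet --report 151111-0420.12250
[root@web01 ~]# maldet -q 151111-0420.12250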

Malware Detection – Chkrootkit

Imagine one day you get a phone call from one of your customers. Great, a sale! Nope, just kidding! They are calling to tell you that your site has been hacked. What do you say? How did they find out before you did?

This is a scenario that every sysadmin dreads. How can you prevent something like this? A good defense in depth security strategy goes a long way toward preventing this. This includes WAFs, firewalls, file integrity monitoring, IDS, two factor authentication, ASV scanning, event management, etc.

The more layers you have, the better chance you have at either mitigating the attack, or being notified of the compromise shortly after it happens so you can react quickly and not have your clients inform you of the problem.

Below is another layer you can add: chkrootkit.
Taken from the following Wikipedia article (http://en.wikipedia.org/wiki/Chkrootkit):

chkrootkit (Check Rootkit) is a common Unix-based program intended to help system administrators check their system for known rootkits. It is a shell script using common UNIX/Linux tools like the strings and grep commands to search core system programs for signatures and for comparing a traversal of the /proc filesystem with the output of the ps (process status) command to look for discrepancies.

The document below will outline how to install and configure it for CentOS and Ubuntu, including how to run this nightly and email you the report for review.

Procedure

Installing chkrootkit is pretty straightforward, as shown below:

# CentOS 5 / RedHat 5
[root@web01 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
[root@web01 ~]# yum install chkrootkit mailx

# CentOS 6 / RedHat 6
[root@web01 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@web01 ~]# yum install chkrootkit mailx

# CentOS 7 / RedHat 7
[root@web01 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
[root@web01 ~]# yum install chkrootkit mailx

# Ubuntu / Debian
[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install chkrootkit mailutils

Perform a one-time scan by running:

[root@web01 ~]# chkrootkit

Finally, let's set up a nightly cronjob and receive the report via email for review:

[root@web01 ~]# crontab -e
12 3 * * * /usr/sbin/chkrootkit | mail -s "chkrootkit Report : hostname.yourservername.com" [email protected]
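
If the full report proves too chatty, chkrootkit's quiet mode only prints output it considers suspicious, which makes the nightly email much easier to review:

12 3 * * * /usr/sbin/chkrootkit -q | mail -s "chkrootkit Report : hostname.yourservername.com" [email protected]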

Keep in mind that just setting up these tools doesn't take care of the problem itself. The sysadmin needs to actively review any and all logs, software, etc. So with this tool, and any security tool, identify false positives early on and exclude them from your reports so they are easier to read. This will help keep malware from slipping through the cracks of your logs and daily reports.

Several more articles will follow this one in a series of sorts on malware detection. Each tool performs essentially the same function, but they go about it in different ways, each having its own benefits. Find the one(s) that work best for your solution:
chkrootkit
Linux Malware Detect (maldet)
rkhunter

Investigating Compromised Servers

This guide is not a step-by-step tutorial on how to clean a compromised server; rather, it is a reference to illustrate what tools are available for performing an analysis of the compromise. The goal of this guide is to show you what information is available to help you determine:

– Point of entry
– The origin of the attack
– What files were compromised
– What level of access did the attacker obtain
– Audit trail of the attackers footprints

There are many different ways to exploit a UNIX server. Under many circumstances, a server is exploited using common techniques, such as a brute force attack to guess a weak password, or known vulnerabilities in software in hopes the server is not on a regular patch schedule. Whatever the method, it is important to understand how the machine got compromised so you can determine the extent of damage to your server and other hosts accessible to this machine.

During many root level compromises, the most straightforward approach to recovery is to perform a clean install of the server and restore any critical data from known good backups. However, until the entry point of the compromise is known, this may not be enough; the compromise needs to be understood so that security hole can be properly closed.

Documentation

When you are notified that a system under your control may be compromised, you want to obtain as much information as possible from the complainant. This includes:

– How was the initial problem found?
– Can the time of the compromise be estimated?
– Has the server been modified since the compromise was detected?
– Anything else the complainant says that is important.

IMPORTANT NOTE: If you are planning on getting law enforcement involved, it is imperative that no additional actions are taken on the server. The server must remain in its current state for forensic and evidence collection purposes.

If you choose to proceed with the investigation, document anything you find on the server. It can be as simple as a copy/paste of the command and its results.

Tools used for investigation

In the attacker's ideal scenario, all important log files would have been wiped so their tracks are clean. Oftentimes, however, this doesn't happen. This leaves some valuable clues for finding what was done. It may also help determine if this was a basic web hack or a root level compromise. Below are some of the basic commands that I'll look through when trying to find that one thread that lets me unravel the rest of the compromise.

last

This will list the sessions of users that recently logged into the system, including the timestamps, hostnames, and whether or not the user is still logged in. An odd IP address may be cross-referenced against a brute force SSH attack in /var/log/messages or /var/log/secure, which may indicate how the attacker gained entry, what user they got in as, and whether they were able to escalate their privileges to root.
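
For example, to cross-reference a suspicious address from last against the auth logs (the IP below is hypothetical):

last -a | head
grep 'Failed password' /var/log/secure | grep 203.0.113.50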

ls -lart /

This will provide a time ordered list of files and directories that can be correlated against the time of the compromise. This listing will help determine what has been added or removed from the system.

netstat -na

This will list the current listening sockets on the machine. Running this may reveal any back doors that are listening, or any errant services that are running.

ps -wauxef

This will be helpful in tracking down any errant processes that are listening, as well as help show other odd processes, such as the user www running a bash process, for example. lsof -p [pid] can also be used to further find what open files a suspicious process is using. Concurrently, cat /proc/[pid]/cmdline may also let you know where the file that controls the process exists.

bash_history

The history file often becomes the Rosetta Stone for tracking down what took place during a compromise. Looking through a user's .bash_history file will often show exactly what commands were executed, what malicious programs were downloaded, and possibly what directories they were focusing on.

top

Oftentimes, a malicious process will be causing CPU contention issues within the environment, and will usually show up near the top of the list. Any processes that are causing the CPU contention issues should be considered suspicious when tracking down a compromise.

strace

Running strace -p [pid] on a suspicious process may yield important insight into what the process is doing.

In some cases, the commands above may not provide many clues to what happened during the attack. This is where more fine-grained tools must be used.

Before moving forward, it should be confirmed that the binaries you are using to investigate are not trojanned versions. Trojanned versions can perform whatever tasks the attacker wishes, which may include hiding information that could reveal what the compromise was trying to accomplish.

So to verify we have a good working set of tools:

rpm -Va

Verifying a package compares information about the installed files in the package with information about the files taken from the package metadata stored in the rpm database. Among other things, verifying compares the size, MD5 sum, permissions, type, owner and group of each file. Any discrepancies are displayed.

When running this command, it is important to note that any files flagged in the following directories may mean you are using a trojanned version of the binary, and therefore you cannot trust its output:

– /bin
– /sbin
– /usr/bin
– /usr/sbin

An example of what a trojanned file looks like:

S.5....T /bin/login

rpm -qa

This can be used to show what RPMs have been recently installed, in chronological order. However, in the case of a root compromise, the rpm database could have been altered and therefore cannot be trusted.

lsattr

In cases where the attacker was able to get root access and trojan certain binaries, sometimes they will set that binary to be immutable so you cannot reinstall a clean version of it. Common directories to look in are:

– /bin
– /sbin
– /usr/bin
– /usr/sbin

An example of a file that had its attributes set to immutable:

-------i----- /bin/ps

Under normal circumstances, the attributes on files in these directories should all look similar to:

------------- /bin/ps

find

Find is a UNIX tool that can be critical in finding files that have been recently modified. For instance, to find files that have been modified within the last five days, run:

find / -mtime -5
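
To bracket a specific time window instead, a common trick is to create two reference files with touch and search between them (the timestamps below are hypothetical):

touch -t 201603070300 /tmp/window_start
touch -t 201603071330 /tmp/window_end
find / -newer /tmp/window_start ! -newer /tmp/window_end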

Common Directories Where Web Exploits Are Found

Check the world-writable directories that Apache would commonly write its temp files to, in locations such as:

– ls -al /tmp
– ls -al /var/tmp
– ls -al /dev/shm

If you have directories on your website that are chmod’ed 777, those are suspect as well.

When checking these directories, you are looking for any files that you don’t recognize, or look suspicious. Be on the lookout for hidden files or files that have execute permissions.

mailq
postcat -q [message_id] | grep X-PHP-Originating-Script
...
X-PHP-Originating-Script: 48:dirs60.php(243) : eval()'d code
...

As shown above, sometimes the compromise reveals itself through Postfix. If you see your server sending out a lot of spam, or if you are having resource contention issues with CPU or load, checking the mail queue could help locate the source of the malicious script on the server.

As the dirs60.php header doesn't show you exactly where the file resides, you can usually track it down by:

updatedb
locate dirs60.php
# or
find /path/to/website/docroot -name dirs60.php

maldet --scan-all /var/www

Maldet can be very useful for scanning your site for potential malware. Instructions on how to install and run it can be found in the maldet section above.

clamscan -r -i /

ClamAV can also be extremely useful for scanning your entire server for malware. Instructions on how to install and run it can be found in the ClamAV section above.

Finding point of entry

If you found anything using the information above, you most likely have a timestamp of when the malicious file(s) were installed on the server.

You can now use that timestamp to start combing through your website's access logs, looking for any suspicious entries during that time period. If you find something and can cross-reference it to where you found the malicious files, then you have likely just narrowed down the point of entry.

While the large majority of compromises do come from exploitable code within your website, you cannot rule out other entry points. So be sure to dig through /var/log/* to see if there is anything suspicious during the reported time frame.
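
For example, to pull everything the web access log saw around a suspicious file's timestamp (the date string and log path here are hypothetical):

grep '07/Mar/2016:13:16' /var/log/httpd/domain.com-access_log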

Example of investigation

Below is a real example of one of my investigations, documenting my thought process.

When I am investigating a suspected root level compromise, the first thing that needs to be verified is whether this was just a basic web hack, or if root privileges were really gained. 80% of the time, it's just a simple web hack that can be safely cleaned.

Step 1: Quick and dirty check to see if root privileges were gained:

lsattr /usr/sbin | less
lsattr /usr/bin | less
lsattr /bin | less
lsattr /sbin | less

What to look for:

You're checking for modified attributes, such as binaries being set immutable, etc.

Results:

s---ia------- /sbin/shs

^ When you run strings against that file, you see it's a backdoor shell.

Step 2: See if the attacker cleaned his tracks.

Many times, these are script kiddies or dummies who just forgot to clean up after themselves.

What to look for:

Check all user accounts in /etc/passwd that have a valid shell:

cat /home/$USER/.bash_history

And root's history:

history
cat /root/.bash_history

Results:

The /root/.bash_history revealed what the attacker did on the server, which includes:

– They downloaded some malicious tools to serve up via apache in /var/www/html/*.
– They also installed some IRC stuff in /var/tmp/.ICE-unix (as well as other tools).
– Modified root’s crontab to re-download the malicious tools if someone removes them from the server:
* * * * * /var/tmp/.ICE-unix/update >/dev/null 2>&1

Step 3: Check for basic web hacks

Normally, if steps 1 and 2 do not show anything, it's most likely just a simple web hack that CAN be cleaned easily without formatting the server or otherwise causing panic.

In this specific investigation, that logic is null and void since we know that root privileges were gained. However, it is relevant here anyway, because I believe the attacker exploited phpMyAdmin: once they had their backdoor PHP shell loaded, they were able to perform a local root exploit to escalate their privileges.

What to look for:

Hidden files and directories in world-writable directories that Apache would normally write tmp files to:

ls -al /var/tmp |less
ls -al /tmp
ls -al /dev/shm

Results:

drwx------ 3 70 70 4096 Nov 19 02:00 /var/tmp/.ICE-unix

^ There is a whole bunch of fun stuff in there.

If items are found in here, you must attempt to track down the entry point so you can have the client take down the site, upgrade their site code, or otherwise fix the exploitable code. One quick and dirty way is by looking at step 5. However, if you see IRC bots and such running in the output of ps -waux, then you can try to catch where the process is running from by using lsof -p [pid], or ps -wauxxef | grep [process].

Step 4: Look for PIDs listening for incoming connections

What to look for:

netstat -natp : Look for any suspicious connections running on odd ports.
ps -wauxxef : Look for suspicious processes, like bash running under the www user.
lsof -p [pid] : Helps to determine where the PID above is running from.

Results:

tcp 0 0 0.0.0.0:1144 0.0.0.0:* LISTEN 1008/bash
tcp 0 1 172.16.23.13:60968 22.22.22.22:7000 SYN_SENT 6860/sshd

There are also a fair number of other ssh ESTABLISHED connections running from high level ports. This means the attackers are still connected to this machine. I can't see them because they probably modified the binaries to hide themselves.

[root@www tmp]# netstat -natp |grep sshd |awk '{print $4,$5,$6,$7}'
0.0.0.0:22 0.0.0.0:* LISTEN 1046/sshd
172.16.23.13:60986 22.22.22.22:6667 SYN_SENT 6860/sshd
123.123.123.123:22 22.22.22.22:59361 ESTABLISHED 22795/sshd
123.123.123.123:22 22.22.22.22:57434 ESTABLISHED 22796/sshd
123.123.123.123:57139 143.143.143.143:6667 ESTABLISHED 6860/sshd
123.123.123.123:57402 22.22.22.22:6667 ESTABLISHED 6860/sshd
123.123.123.123:22 143.143.143.143:49238 ESTABLISHED 8860/sshd
123.123.123.123:57134 22.22.22.22:6667 ESTABLISHED 6860/sshd
123.123.123.123:56845 22.22.22.22:6667 ESTABLISHED 6860/sshd
123.123.123.123:57127 143.143.143.143:6667 ESTABLISHED 6860/sshd

Step 5: Determine point of entry for original compromise

What to look for:

/var/log/[messages|secure] : Check for brute forced SSH attempts.
Apache access and error logs : May help determine which site is exploitable. Most attacks are linked against this.

When checking this, also cross-reference IPs against the logs if you think there is a chance the attack may have originated from there. It's a quick and easy way to trace down the origin point.

Simple ways of checking servers with a ton of web logs like the one used in this example:

cd /var/log/httpd
for i in `ls * |grep access`; do echo $i && grep wget $i; done
for i in `ls * |grep access`; do echo $i && grep curl $i; done

NOTE: wget was searched for because it appeared in root's history file as part of what I believe was the entry point.

Results:
Evidence was found that the phpMyAdmin installation in /var/www/html was exploited. The version of phpMyAdmin was severely outdated; keeping it patched on a regular schedule would have prevented this from happening.

Final thoughts

Investigating web or root level exploits is more of an art than a science. After you have investigated a few dozen, you will just 'feel' something is not right on the server. I cannot stress enough: when you are investigating a compromised system, you must do everything you can to determine exactly how the server got compromised in the first place. Once you have that information, you will then be able to successfully remediate the exploit.

SSH – Two Factor Authentication

Many people are using Google Authenticator to secure their Google apps such as Gmail. However, what if you wanted to utilize two factor authentication (something you have, something you know) for your SSH logins? What if you want to protect yourself against accidentally using weak passwords, which can lead to a successful brute force attack?

On both RedHat and Debian based systems, Google Authenticator's one-time passwords are pretty simple to implement. For the purposes of this guide, I'll be using CentOS 6 and Ubuntu 12.04.

It should be noted that by using this guide, ALL your users (including root) will be required to use Google Authenticator to SSH in, unless you have SSH keys already in place. Please check with your administration teams before setting this up to ensure you don't accidentally disable their access, or lock yourself out of SSH!

Procedure

1. Install the module

# RedHat 6 based systems
rpm -ivh http://linux.mirrors.es.net/fedora-epel/6/x86_64/epel-release-6-7.noarch.rpm
yum install google-authenticator

# Debian based systems
aptitude install libpam-google-authenticator

2. Now update the /etc/pam.d/sshd file and add the following at the end of the ‘auth’ section:

auth required pam_google_authenticator.so

3. Then update your /etc/ssh/sshd_config

# Change
ChallengeResponseAuthentication no

# To
ChallengeResponseAuthentication yes
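
As an optional hardening step on newer OpenSSH releases (6.2 and later), you can require both an SSH key and a one-time code, instead of letting keys bypass the prompt. This is a sketch; test from a second session before logging out:

# Add to /etc/ssh/sshd_config
AuthenticationMethods publickey,keyboard-interactive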

4. Restart sshd

# Redhat:  
service sshd restart

# Ubuntu:  
service ssh restart

5. Now, setup keys for your user

google-authenticator

It will ask you to update your ~/.google_authenticator file; answer yes to this question, and answer the next three questions however you would like. Once complete, the following will be presented to you:

    New Secret Key
    Verification Code
    Emergency Scratch Codes

You will use the new secret key for adding the account to your phone's Google Authenticator app. The emergency scratch codes should be copied down and stored somewhere secure. They can be used if you ever lose your phone, or otherwise need to get into your account without the app.

Now when you log into your server using your user account, it will prompt you for your Google Auth token, followed by your normal password for the server. Any accounts that don't have this set up will not be allowed to log in.

Final thoughts

Remember, two factor authentication is only one part of a defense in depth strategy. No security management system is perfect, but each layer you add will help increase your solution's security footprint.