cURL Cheat Sheet

cURL is the Swiss Army knife for gathering information while troubleshooting websites. There are many ways to use curl; this guide simply documents the ones I use often but can never seem to remember the specific flags for.

General Usage

Test a page behind a username/password prompt:

[user@workstation01 ~]# curl --user name:password http://www.example.com

Download files from GitHub:

[user@workstation01 ~]# curl -O https://raw.githubusercontent.com/username/reponame/master/filename

Download content via curl to a specific filename:

[user@workstation01 ~]# curl -o archive.tgz https://www.example.com/file.tgz

Run a script from a remote source on the server. Understand the security implications of doing this, as it can be dangerous!

[user@workstation01 ~]# source <( curl -sL https://www.example.com/your_script.sh)
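A safer pattern, if you can spare the extra step, is to download the script to a local file, inspect it, and only then run it (the URL and filename here are just placeholders):

```shell
# Download first, review, then execute:
curl -sL -o your_script.sh https://www.example.com/your_script.sh
less your_script.sh    # inspect what you are about to run
bash your_script.sh    # or 'source your_script.sh' if it must modify the current shell
```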

Have curl make the connection through a specific interface on the server:

[user@workstation01 ~]# curl --interface bond0 icanhazip.com

Website Troubleshooting

Show the HTTP request and response headers, following all redirects and discarding the page body:

[user@workstation01 ~]# curl -Lsvo /dev/null https://www.example.com

Test a domain hosted on the local server, bypassing DNS:

[user@workstation01 ~]# curl -sLH "host: www.example.com" localhost

Test a domain against a specific IP, bypassing the need to modify /etc/hosts:

[user@workstation01 ~]# curl -IL https://www.example.com --resolve www.example.com:443:123.123.123.123

or 

[user@workstation01 ~]# curl -Lsvo /dev/null --resolve 'example.com:443:123.123.123.123' https://www.example.com/

or

[user@workstation01 ~]# curl -Lsvo /dev/null --header "Host: example.com" https://123.123.123.123/

Send a request using a specific user-agent. Sometimes a server will have rules in place to block the default curl user-agent string, or perhaps you need to test with a specific user-agent. You can pass a specific user-agent string by running:

[user@workstation01 ~]# curl --user-agent "USER_AGENT_STRING_HERE" www.example.com

A comprehensive listing of user-agent strings resides at:
http://www.useragentstring.com/pages/useragentstring.php

For example, let's say you need to test a mobile device user-agent to see if a custom redirect works:

[user@workstation01 ~]# curl -H "User-Agent: Mozilla/5.0 (Linux; Android 4.0.4; Galaxy Nexus Build/IMM76B) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.133 Mobile Safari/535.19" -IL http://www.example.com/about/contact-us

HTTP/1.1 301 Moved Permanently
Date: Tue, 17 Nov 2015 18:10:09 GMT
Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips
Location: http://www.example.com/mobi/contact.php
Content-Type: text/html; charset=iso-8859-1

SSL/TLS Testing

Test to see if the site supports specific SSL/TLS protocols:

[user@workstation01 ~]# curl --sslv2 https://www.example.com
[user@workstation01 ~]# curl --sslv3 https://www.example.com
[user@workstation01 ~]# curl --tlsv1 https://www.example.com
[user@workstation01 ~]# curl --tlsv1.0 https://www.example.com
[user@workstation01 ~]# curl --tlsv1.1 https://www.example.com
[user@workstation01 ~]# curl --tlsv1.2 https://www.example.com

Performance Troubleshooting

Load times can be impacted by a number of things, such as the TLS handshake, DNS lookup time, redirects, transfers, uploads/downloads, etc. The curl command shown below will break down the times for each accordingly:

[user@workstation01 ~]# curl -Lsvo /dev/null https://www.example.com/ -w "\nContent Type: %{content_type} \
\nHTTP Code: %{http_code} \
\nHTTP Connect:%{http_connect} \
\nNumber Connects: %{num_connects} \
\nNumber Redirects: %{num_redirects} \
\nRedirect URL: %{redirect_url} \
\nSize Download: %{size_download} \
\nSize Upload: %{size_upload} \
\nSSL Verify: %{ssl_verify_result} \
\nTime Handshake: %{time_appconnect} \
\nTime Connect: %{time_connect} \
\nName Lookup Time: %{time_namelookup} \
\nTime Pretransfer: %{time_pretransfer} \
\nTime Redirect: %{time_redirect} \
\nTime Start Transfer (TTFB): %{time_starttransfer} \
\nTime Total: %{time_total} \
\nEffective URL: %{url_effective}\n" 2>&1

The example output is below:

...
HTTP Code: 200 
HTTP Connect:000 
Number Connects: 2 
Number Redirects: 1 
Redirect URL:  
Size Download: 136112 
Size Upload: 0 
SSL Verify: 0 
Time Handshake: 0.689 
Time Connect: 0.556 
Name Lookup Time: 0.528 
Time Pretransfer: 0.689 
Time Redirect: 0.121 
Time Start Transfer (TTFB): 0.738 
Time Total: 0.962 
Effective URL: https://www.example.com/

Another example for quickly seeing performance is below. For simplicity, first create a file with the curl options already in it:

[user@workstation01 ~]# vim site-performance.cfg
\n
      DNS lookup                          :  %{time_namelookup}\n
      Connect to server (TCP)             :  %{time_connect}\n
      Connect to server (HTTP/S)          :  %{time_appconnect}\n
      Time from start until transfer began:  %{time_pretransfer}\n
      Time for redirection (if any)       :  %{time_redirect}\n
      Total time before transfer started  :  %{time_starttransfer}\n
\n
             Total time                   :  %{time_total}\n
             Size of download (bytes)     :  %{size_download}\n
             Average d/l speed (bytes/s)  :  %{speed_download}\n
\n

Then run it:

[user@workstation01 ~]# curl -w "@site-performance.cfg" -o /dev/null -s https://www.example.com
      DNS lookup                          :  0.138664
      Connect to server (TCP)             :  0.171131
      Connect to server (HTTP/S)          :  0.268969
      Time from start until transfer began:  0.269021
      Time for redirection (if any)       :  0.000000
      Total time before transfer started  :  0.532772

             Total time                   :  0.628730
             Size of download (bytes)     :  162510
             Average d/l speed (bytes/s)  :  258473.000

If a web server is running a bunch of sites and has a high load, how can you narrow down which site is likely causing the high load condition? One way would be to see which site takes the longest to load, as that may indicate a resource-intensive site. See the example below:

[user@workstation01 ~]# for i in www.example1.com www.example2.com www.example3.com; do echo -n "$i "; (time curl -IL $i -XGET) 2>&1 | grep -E "real|HTTP"; echo; done

www.example1.com HTTP/1.1 200 OK
real	0m0.642s

www.example2.com HTTP/1.1 200 OK
real	0m2.234s

www.example3.com HTTP/1.1 200 OK
real	0m0.421s

So www.example2.com takes 2 seconds to load. What happens to the load times on that domain during increased traffic? The example below will send 25 requests to the domain:

[user@workstation01 ~]# for i in {1..25}; do (time curl -sIL http://www.example2.com -X GET &) 2>&1 | grep -E "real|HTTP" & done

HTTP/1.1 200 OK
real	0m11.297s
HTTP/1.1 200 OK
real	0m11.395s
HTTP/1.1 200 OK
real	0m11.906s
HTTP/1.1 200 OK
real	0m12.079s
...
HTTP/1.1 200 OK
real	0m11.297s

Determining why this is happening will involve investigation outside the scope of this article, mainly around caching the site or otherwise optimizing it. However, at least now we know which site doesn't perform well under increased requests and may be causing the high server load.

Backing up permissions on directory

Before doing anything in Linux, it is also smart to have a rollback plan. Making blanket, recursive permission changes on a directory would certainly fall into this category!

Let's say you found a directory on your system where the file permissions were all 777, so you want to secure them a bit by changing the permissions over to 644. Something like:

[root@web01 ~]# find /var/www/vhosts/domain.com -type f -perm 0777 -print -exec chmod 644 {} \;

The paranoid among us will want to ensure we can revert things back to the way they were before. Thankfully there are two commands that can be used to back up and restore permissions on a directory recursively: getfacl and setfacl.

To backup all the permissions and ownerships within a given directory such as /var/www/vhosts/domain.com, do the following:

[root@web01 ~]# cd /var/www/vhosts/domain.com
[root@web01 ~]# getfacl -R . > permissions_backup

Now let's say you ran the find command, changed everything over to 644, then realized you broke your application because it needed some files to be 664 or similar, so you want to roll back while you investigate what happened.

You can roll back the permissions by running:

[root@web01 ~]# cd /var/www/vhosts/domain.com
[root@web01 ~]# setfacl --restore=permissions_backup

Backing up an entire server's permissions

If you wanted to back up the entire server's permissions, you can do so by:

[root@web01 ~]# getfacl -R --absolute-names / > server_permissions_backup

And the restoration process remains the same:

[root@web01 ~]# setfacl --restore=server_permissions_backup

Find command examples

This is just a quick reference page for using find to do basic things.

Find is a pretty powerful tool that accepts a bunch of options for narrowing down your search. Some basic examples of stuff you can do are below.

Find a specific file type and extension older than 300 days and remove them

This will find files:
– Older than 300 days
– Is a file
– Match *.jpg
– Will not go into sub directories

This also works for those pesky directories that have millions of files.

First, always confirm the command will work before blindly removing files:

[root@web01 ~]# cd /path/to/directory
[root@web01 ~]# find . -maxdepth 1 -type f -name '*.jpg' -mtime +300 | xargs ls -al

Once you verified that the files displayed are the ones you want removed, remove them by running:

[root@web01 ~]# cd /path/to/directory
[root@web01 ~]# find . -maxdepth 1 -type f -name '*.jpg' -mtime +300 | xargs rm -f
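The xargs pipelines above can break on filenames containing spaces or newlines. As a sketch, GNU find's built-in -delete (or -print0 piped to xargs -0) performs the same cleanup more robustly:

```shell
# Same criteria as above, but letting find remove the files itself;
# -delete implies -depth and avoids spawning rm for every batch.
find /path/to/directory -maxdepth 1 -type f -name '*.jpg' -mtime +300 -delete

# Or keep the two-step verify-then-delete workflow safe with odd filenames:
find /path/to/directory -maxdepth 1 -type f -name '*.jpg' -mtime +300 -print0 | xargs -0 rm -f
```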

Find files with 777 permissions

This will find all files that have 777 permissions:

[root@web01 ~]# cd /path/to/directory
[root@web01 ~]# find . -type f -perm 0777 -print

This will find all files that do NOT have 777 permissions:

[root@web01 ~]# cd /path/to/directory
[root@web01 ~]# find . -type f ! -perm 777

Find Files with 777 Permissions and change to 644

Use caution with this, this is generally not smart to run blindly as it will go into subdirectories unless you set maxdepth.

[root@web01 ~]# cd /path/to/directory
[root@web01 ~]# find . -type f -perm 0777 -print -exec chmod 644 {} \;

Find Directories with 777 Permissions and change to 755

Use caution with this, this is generally not smart to run blindly as it will go into subdirectories unless you set maxdepth.

[root@web01 ~]# cd /path/to/directory
[root@web01 ~]# find . -type d -perm 777 -print -exec chmod 755 {} \;

Find empty directories

[root@web01 ~]# cd /path/to/directory
[root@web01 ~]# find /tmp -type d -empty

Find all hidden files within a directory

[root@web01 ~]# find /path/to/directory -type f -name ".*"

Find files owned by user or group

[root@web01 ~]# cd /path/to/directory
[root@web01 ~]# find /var/www -user apache
[root@web01 ~]# find /var/www -group apache

Find files that were modified in the last 30 days

[root@web01 ~]# find / -mtime -30

Find files that were modified in the last hour

[root@web01 ~]# find / -mmin -60

Find files that were changed within the last hour
Note, this one is specified in minutes only!

[root@web01 ~]# find / -cmin -60

Find files that were accessed in the last 5 days

[root@web01 ~]# find / -atime -5

Find files that were accessed within the last hour
Note, this one is specified in minutes only!

[root@web01 ~]# find / -amin -60

Count files per directory with find
This one is useful when you need to find the top 10 directories that contain the most files.

[root@web01 ~]# vim count-files-per-directory.sh
#!/bin/bash

if [ $# -ne 1 ];then
  echo "Usage: `basename $0` DIRECTORY"
  exit 1
fi

echo "Please wait..."

find "$@" -type d -print0 2>/dev/null | while IFS= read -r -d '' file; do 
    echo -e `ls -A "$file" 2>/dev/null | wc -l` "files in:\t $file"
done | sort -nr | head | awk '{print NR".", "\t", $0}'

exit 0

Now run it against the / directory:

[root@web01 ~]# bash count-files-per-directory.sh /
Please wait...
1. 	 768 files in:	 /usr/share/man/man1
2. 	 631 files in:	 /usr/lib64/python2.6
3. 	 575 files in:	 /usr/share/locale
4. 	 566 files in:	 /usr/share/vim/vim74/syntax
5. 	 496 files in:	 /usr/bin
6. 	 487 files in:	 /usr/share/man/man8
7. 	 393 files in:	 /usr/share/perl5/unicore/lib/gc_sc
8. 	 380 files in:	 /usr/include/linux
9. 	 354 files in:	 /usr/lib64/python2.6/encodings
10. 	 334 files in:	 /usr/share/man/man3

Or if you only need to run the search in a specific directory:

[root@web01 ~]# bash count-files-per-directory.sh /usr/share/man
Please wait...
1. 	 768 files in:	 /usr/share/man/man1
2. 	 487 files in:	 /usr/share/man/man8
3. 	 334 files in:	 /usr/share/man/man3
4. 	 124 files in:	 /usr/share/man/man5
5. 	 49 files in:	 /usr/share/man
6. 	 35 files in:	 /usr/share/man/ru/man8
7. 	 31 files in:	 /usr/share/man/man7
8. 	 27 files in:	 /usr/share/man/fr/man8
9. 	 25 files in:	 /usr/share/man/de/man8
10. 	 22 files in:	 /usr/share/man/ja/man8

Using hpasmcli for HP servers

HP provides server utility scripts such as hpasmcli, hpssacli, and hpacucli. These tools allow you to view and modify the hardware configuration on the server. hpacucli is the older implementation of the storage toolkit, but the syntax is pretty similar.

HP tools information

To show the firmware version, run:

[root@web01 ~]# hpasmcli -s "show server"

If you want to see extended information, run:

[root@web01 ~]# hpssacli controller all show config detail

General information

To view information regarding the server model, cpu, type, memory, etc, run:

[root@web01 ~]# hpasmcli -s "show server"

Hardware Health

If you want to view the health of the system and chassis components, run:

[root@web01 ~]# hpasmcli -s "show server"

You can also query specific chassis components, such as:

[root@web01 ~]# hpasmcli -s "show powersupply"
[root@web01 ~]# hpasmcli -s "show dimm"
[root@web01 ~]# hpasmcli -s "show fans"
[root@web01 ~]# hpasmcli -s "show temp"

Storage health

To view the physical and virtual disks on the server:

[root@web01 ~]# hpssacli ctrl all show config
[root@web01 ~]# hpssacli controller slot=3 physicaldrive all show
[root@web01 ~]# hpssacli controller slot=3 physicaldrive 2I:1:5 show detail
[root@web01 ~]# hpssacli controller slot=3 logicaldrive all show

On older HP servers, you can view the physical and virtual disks on the server by:

[root@web01 ~]# hpacucli controller slot=1 physicaldrive all show
[root@web01 ~]# hpacucli controller slot=1 physicaldrive 2I:1:5 show detail
[root@web01 ~]# hpacucli controller slot=1 logicaldrive all show

To see the storage battery status:

[root@web01 ~]# hpssacli controller all show detail | egrep -i 'battery\/|controller\ status|cache\ status'
   Controller Status: OK
   Cache Status: OK
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK

Hardware logs

To display the hardware logs:

[root@web01 ~]# hpasmcli -s "show iml"

If you need to clear the hardware logs:

[root@web01 ~]# hpasmcli -s "clear iml"

CPU actions

To see if hyperthreading is enabled on the CPUs:

[root@web01 ~]# hpasmcli -s "show ht"

If you wanted to change the hyperthreading settings:

# Enable
[root@web01 ~]# hpasmcli -s "enable ht"

# Disable
[root@web01 ~]# hpasmcli -s "disable ht"

Using omreport and omconfig for Dell servers

Dell comes with their server utility scripts called omreport and omconfig. These tools allow you to view and modify your hardware configuration on the server.

Dell tools information

To see what version of the tools you're running:

[root@web01 ~]# omreport about details=true

To see if there are updates available for the firmware:

[root@web01 ~]# omreport system version

To see what commands are available using omreport:

[root@web01 ~]# omreport system -?

General information

To view information regarding the server model, cpu type, memory, service tags, etc, run:

[root@web01 ~]# omreport system summary

Hardware Health

If you want to view the health of the system and chassis components, run:

[root@web01 ~]# omreport system

To only get the health information for the chassis:

[root@web01 ~]# omreport chassis

You can also query specific chassis components, such as:

[root@web01 ~]# omreport chassis fans
[root@web01 ~]# omreport chassis memory
[root@web01 ~]# omreport chassis nics
[root@web01 ~]# omreport chassis processors
[root@web01 ~]# omreport chassis temps
[root@web01 ~]# omreport chassis batteries
[root@web01 ~]# omreport chassis pwrsupplies

Storage health

As a quick note, if the commands below report there are no controllers listed, check to be sure that the software is actually running by:

[root@web01 ~]# /opt/dell/srvadmin/sbin/srvadmin-services.sh status
dell_rbu (module) is stopped
ipmi driver is running
dsm_sa_datamgrd is stopped
dsm_sa_eventmgrd is stopped
dsm_sa_snmpd is stopped
dsm_om_shrsvcd is stopped
dsm_om_connsvcd is stopped
[root@web01 ~]# /opt/dell/srvadmin/sbin/srvadmin-services.sh restart

To view the physical and virtual disks on the server:

[root@web01 ~]# omreport storage pdisk controller=0
[root@web01 ~]# omreport storage vdisk controller=0
[root@web01 ~]# omreport storage pdisk controller=0 vdisk=0

If you just wanted a quick listing of the relevant disk information to see the state of the drives, run:

[root@web01 ~]# omreport storage pdisk controller=0 | grep -iE "^id|^status|name|state|Failure Predicted"
ID                              : 0:0:0
Status                          : Ok
Name                            : Physical Disk 0:0:0
State                           : Online
Failure Predicted               : No
ID                              : 0:0:1
Status                          : Ok
Name                            : Physical Disk 0:0:1
State                           : Online
Failure Predicted               : No

To see if there are any empty drive bays on the server:

[root@web01 ~]# omreport storage controller controller=0 info=pdslotreport | grep 'Empty Slots'

To see the storage battery status:

[root@web01 ~]# omreport storage battery controller=0

Hardware Logs

To display the hardware logs, run:

[root@web01 ~]# omreport system esmlog

If you need to view the alert logs:

[root@web01 ~]# omreport system alertlog

And if you needed to view the messages from the POST:

[root@web01 ~]# omreport system postlog

If you find you need to clear the logs, that can be performed by:

[root@web01 ~]# omconfig system esmlog action=clear
[root@web01 ~]# omconfig system alertlog action=clear
[root@web01 ~]# omconfig system postlog action=clear

CPU actions

To see if hyperthreading is enabled on the CPUs:

[root@web01 ~]# omreport chassis biossetup | grep -A 2 'HyperThreading'

If you wanted to enable hyperthreading:

# Dell R710
[root@web01 ~]# omconfig chassis biossetup attribute=cpuht setting=enabled

# Dell R720
[root@web01 ~]# omconfig chassis biossetup attribute=ProcCores setting=All

If you needed to enable or disable NUMA:

# Disable NUMA:
[root@web01 ~]# omconfig chassis biossetup attribute=numa setting=disabled

# Enable NUMA:
[root@web01 ~]# omconfig chassis biossetup attribute=numa setting=enabled

Changing your server's timezone

The system's timezone is usually set during installation. If the timezone needs to be changed, it can be done without rebooting the system.

Just be sure to watch for applications like MySQL and PHP that require additional steps for changing the timezone, which are noted near the bottom of this article.

CentOS 5 and CentOS 6

Modify the zone in /etc/sysconfig/clock. You can find the valid timezones in /usr/share/zoneinfo. The commonly used timezones include America/New_York, America/Chicago, America/Los_Angeles, and UTC.

[root@web01 ~]# vim /etc/sysconfig/clock
...
ZONE="America/New_York"
...

Then update /etc/localtime:

[root@web01 ~]# tzdata-update

Now sync the hardware clock against the system time:

[root@web01 ~]# hwclock --systohc

Go ahead and restart syslogd/rsyslogd and crond:

[root@web01 ~]# service crond restart
[root@web01 ~]# service rsyslog restart
[root@web01 ~]# service syslog restart

CentOS 7

Changing the timezone on CentOS 7 can be done with a few commands. You can find the valid timezones in /usr/share/zoneinfo. The commonly used timezones include America/New_York, America/Chicago, America/Los_Angeles, and UTC.

[root@web01 ~]# timedatectl set-timezone America/New_York
[root@web01 ~]# systemctl restart crond
[root@web01 ~]# systemctl restart rsyslog

Ubuntu 12.04 and Ubuntu 14.04

Modify the zone in /etc/timezone. You can find the valid timezones in /usr/share/zoneinfo. The commonly used timezones include America/New_York, America/Chicago, America/Los_Angeles, and UTC.

[root@web01 ~]# vim /etc/timezone
...
America/New_York
...

Now update active timezone:

[root@web01 ~]# dpkg-reconfigure --frontend noninteractive tzdata
Current default time zone: 'America/New_York'
Local time is now:      Tue Jan 17 01:18:04 EST 2017.
Universal Time is now:  Tue Jan 17 06:18:04 UTC 2017.

Restart rsyslog and cron:

[root@web01 ~]# service cron restart
[root@web01 ~]# service rsyslog restart

Ubuntu 16.04

Changing the timezone on Ubuntu 16.04 can be done with a few commands. You can find the valid timezones in /usr/share/zoneinfo. The commonly used timezones include America/New_York, America/Chicago, America/Los_Angeles, and UTC.

[root@web01 ~]# timedatectl set-timezone America/New_York
[root@web01 ~]# systemctl restart cron
[root@web01 ~]# systemctl restart rsyslog

MySQL, Percona and MariaDB

In order for MySQL, Percona, and MariaDB to register the new timezone settings, they need to be restarted. There really isn't a way around this. As a temporary solution, and one that will not pick up future DST changes, you can manually set the database's time zone by:

# List the current time zone setting in MySQL:
[root@web01 ~]# mysql
mysql> SELECT @@global.time_zone;

# List the current date and time according to MySQL:
mysql> SELECT NOW();

# Update the timezone using the UTC offset:
mysql> SET @@global.time_zone = '+05:00';

Even with this temporary fix in place, unless you are using the UTC timezone, MySQL should be restarted very soon.

PHP

PHP should also have its timezone updated when you change it on the system. Determine where your php.ini resides, then update it accordingly. I am assuming CentOS 6 for this example:

[root@web01 ~]# vim /etc/php.ini
...
date.timezone = "America/New_York"
...

Then restart Apache or PHP-FPM for your system so the changes are applied. Afterwards, test the timezone change to PHP by:

[root@web01 ~]# php -i |grep date.timezone
date/time support => enabled
date.timezone => America/New_York => America/New_York

A list of supported timezone configurations for PHP can be found at:
http://www.php.net/manual/en/timezones.php

Changing the server's hostname

The hostname of a system is usually set during installation. However, as time goes on, it may be determined that the hostname should be changed to something more suitable.

Outlined below is the procedure for changing the system's hostname without having to reboot.

CentOS 5 and 6

First, check to see what the existing hostname is on the server:

[root@web01 ~]# hostname

Then change the current hostname. In this example, we’re going to change the hostname to web04.mydomain.com:

[root@web01 ~]# hostname web04.mydomain.com

Now update /etc/hosts with the new hostname:

[root@web01 ~]# vim /etc/hosts
127.0.0.1      localhost.localdomain localhost
192.168.1.5    web04.mydomain.com web04

If you have ‘domain’ specified in /etc/resolv.conf, update that accordingly. There is not a need to add ‘domain’ if it is not already defined:

[root@web01 ~]# vim /etc/resolv.conf
domain mydomain.com
nameserver 8.8.8.8
nameserver 8.8.4.4

Next update your network configuration:

[root@web01 ~]# vim /etc/sysconfig/network
...
HOSTNAME=web04.mydomain.com
...

Restart syslog so the new changes go into effect:

# CentOS 5
[root@web01 ~]# service syslog restart

# CentOS 6
[root@web01 ~]# service rsyslog restart

Finally, log out and back into the server via SSH and you should see the new hostname in effect. Also check /var/log/secure and /var/log/messages to ensure the new hostname is being used.

CentOS 7

In this example, we’re going to change the hostname to web04.mydomain.com. First run:

[root@web01 ~]# hostname web04.mydomain.com

Now update /etc/hosts with the new hostname:

[root@web01 ~]# vim /etc/hosts
127.0.0.1      localhost.localdomain localhost
192.168.1.5    web04.mydomain.com web04

Now update the hostname using the systemd command:

[root@web01 ~]# hostnamectl set-hostname web04.mydomain.com

If you have ‘domain’ specified in /etc/resolv.conf, update that accordingly. There is not a need to add ‘domain’ if it is not already defined:

[root@web01 ~]# vim /etc/resolv.conf
domain mydomain.com
nameserver 8.8.8.8
nameserver 8.8.4.4

Now restart syslog so the new changes go into effect:

[root@web01 ~]# systemctl restart rsyslog

Finally, log out and back into the server via SSH and you should see the new hostname in effect. Also check /var/log/auth.log and /var/log/syslog to ensure the new hostname is being used.

Ubuntu 12.04 and Ubuntu 14.04

First, check to see what the existing hostname is on the server:

[root@web01 ~]# hostname

In this example, we’re going to change the hostname to web04.mydomain.com. So run:

[root@web01 ~]# hostname web04.mydomain.com

Now update /etc/hosts with the new hostname:

[root@web01 ~]# vim /etc/hosts
127.0.0.1      localhost.localdomain localhost
192.168.1.5    web04.mydomain.com web04

If you have ‘domain’ specified in /etc/resolv.conf, update that accordingly. There is not a need to add ‘domain’ if it is not already defined:

[root@web01 ~]# vim /etc/resolv.conf
domain mydomain.com
nameserver 8.8.8.8
nameserver 8.8.4.4

Then update /etc/hostname accordingly:

[root@web01 ~]# vim /etc/hostname
web04.mydomain.com

Now restart syslog so the new changes go into effect:

[root@web01 ~]# service rsyslog restart

Finally, log out and back into the server via SSH and you should see the new hostname in effect. Also check /var/log/auth.log and /var/log/syslog to ensure the new hostname is being used.

Ubuntu 16.04 and Ubuntu 18.04

First, check to see what the existing hostname is on the server:

[root@web01 ~]# hostname

In this example, we’re going to change the hostname to web04.mydomain.com. So run:

[root@web01 ~]# hostname web04.mydomain.com

Now update /etc/hosts with the new hostname:

[root@web01 ~]# vim /etc/hosts
127.0.0.1      localhost.localdomain localhost
192.168.1.5    web04.mydomain.com web04

Now update the hostname using the systemd command:

[root@web01 ~]# hostnamectl set-hostname web04.mydomain.com

If you have ‘domain’ specified in /etc/resolv.conf, update that accordingly. There is not a need to add ‘domain’ if it is not already defined:

[root@web01 ~]# vim /etc/resolv.conf
domain mydomain.com
nameserver 8.8.8.8
nameserver 8.8.4.4

Now restart syslog so the new changes go into effect:

[root@web01 ~]# systemctl restart rsyslog

On Ubuntu 18.04 only, check to see if /etc/cloud/cloud.cfg exists. If it does, confirm that preserve_hostname is set to true as shown below:

[root@web01 ~]# vim /etc/cloud/cloud.cfg
...
preserve_hostname: true
...

Finally, log out and back into the server via SSH and you should see the new hostname in effect. Also check /var/log/auth.log and /var/log/syslog to ensure the new hostname is being used.

Using access control lists (FACLs)

An access control list (ACL) provides a more granular approach to permissions, allowing the system administrator to move past the limitations of standard Linux permissions when the situation warrants it.

For instance, perhaps you have a complex use case that requires granting permissions to multiple individual users, or to more than a single group. Some people get around this by simply using 777 permissions, but that in and of itself is a very poor security practice.

ACLs can help solve this, as they allow you to create a sophisticated permissions scheme that grants access to the users and groups that require it, without the need to open up permissions broadly for everyone.

With all this being said, ACLs should only be used when standard permissions cannot accomplish what you're looking to do. ACLs are often used as a 'quick fix', and while they do work, they can overly complicate an environment and cause major headaches while troubleshooting. There is also more room for error, as you are adding an additional layer of complexity when employing ACLs.

Enabling ACLs on the filesystem

Setting up a system for ACLs is pretty simple. If you are running a system that uses systemd, then ACLs should already be enabled, as they are a dependency for systemd. For older systems, the process for installing and enabling ACLs is shown below.

First, check to see if ACLs are already enabled on the filesystem:

[root@web01 ~]# tune2fs -l /dev/sda1 |grep acl
Default mount options:    user_xattr acl

If you do not see ‘acl’ in the output, then you can install ACLs by:

# CentOS / RHEL:
[root@web01 ~]# yum install acl

# Ubuntu / Debian:
[root@web01 ~]# apt-get update
[root@web01 ~]# apt-get install acl

Now, for all distros, enable ACLs in your /etc/fstab by putting 'acl' in the mount options as shown below:

[root@web01 ~]# cat /etc/fstab 
/dev/mapper/VolGroup-lv_root / ext4 defaults,acl 1 1

Then remount the filesystem so the new mount option takes effect by:

[root@web01 ~]# mount -o remount /

Then verify that ACLs are now enabled on the filesystem:

[root@web01 ~]# tune2fs -l /dev/sda1 |grep acl
Default mount options:    user_xattr acl

Using ACLs

You can determine if a file or directory has an ACL in place as the permissions will be followed by a ‘+’ as shown below:

[root@web01 ~]#  ls -al
...
-rw-rwxr--+  1 root root   93 Jan 12 11:22 test
...

You can see what ACLs have been assigned to the file by:

[root@web01 ~]# getfacl test 
# file: test
# owner: root
# group: root
user::rw-
user:jdoe:rwx
group::r--
mask::rwx
other::r--

So in the example above, we see that a user called jdoe also has rwx permissions on the file called test.

To add or modify a user ACL:

[root@web01 ~]# setfacl -m u:jdoe:permissions /var/www/vhosts/domain.com

To add or modify a group ACL:

[root@web01 ~]# setfacl -m g:devs:permissions /var/www/vhosts/domain.com

To modify the ‘other’ ACL:

[root@web01 ~]# setfacl -m o:permissions /var/www/vhosts/domain.com

To add or modify an ACL recursively, use the -R option. When recursing, it is considered good practice to use the -X (capital X) permission: directories (and files that already have the execute bit) remain traversable, while regular files are not accidentally made executable. An example of this is below:

[root@web01 ~]# setfacl -R -m u:jdoe:rwX /var/www/vhosts/domain.com

To set a default ACL, so new files and directories inherit the ACLs set on the parent directory:

[root@web01 ~]# setfacl -R -m default:u:jdoe:rwX /var/www/vhosts/domain.com

To remove a user from an ACL:

[root@web01 ~]# setfacl -x u:jdoe /var/www/vhosts/domain.com

To remove a group from an ACL:

[root@web01 ~]# setfacl -x g:devs /var/www/vhosts/domain.com

Remove ALL ACLs on a file/directory:

[root@web01 ~]# setfacl -b /var/www/vhosts/domain.com

Remove ALL ACLs on a file/directory recursively:

[root@web01 ~]# setfacl -R -b /var/www/vhosts/domain.com

Chroot SFTP-only users

In an environment where you have multiple developers working on different sites, or perhaps you have multiple clients hosting their websites on your solution, restricting access for those users can become important.

Security becomes a concern in a normal FTP environment, as it doesn’t take much for a user to simply ‘cd ..’ and see what other users are on your server. We want a way to lock those users into their home directories so they cannot ‘break out’.

It is important to consider how you are going to give those chrooted SFTP users access to their directories. In a chroot, simply using a symlink will not work as the filesystem will have no knowledge of the data outside that chroot. Therefore you would have to consider either:

1. Chrooting the user to their website’s home directory
2. Chrooting the user to their home directory, then creating a bind mount to their website.

Both have their pros and cons; however, it could be argued that chrooting users to their home directory and using bind mounts is more secure, as it adds a layer of protection: you are not relying solely on permissions, but also on the chroot itself. For the purposes of this article, we are going to default to chrooting users to their home directory, then creating a bind mount to their website.

To get started, first create the restricted SFTP-only group:

[root@web01 ~]# groupadd sftponly

Next, edit the sshd config to set up the internal-sftp subsystem. You will need to comment out the existing Subsystem entry as shown below:

[root@web01 ~]# vim /etc/ssh/sshd_config
...
# Subsystem       sftp    /usr/libexec/openssh/sftp-server
Subsystem     sftp   internal-sftp
...

Now at the very bottom of /etc/ssh/sshd_config, add the following block. It is important that this is placed at the very end of the file:

[root@web01 ~]# vim /etc/ssh/sshd_config
...
Match Group sftponly
     ChrootDirectory %h
     X11Forwarding no
     AllowTCPForwarding no
     ForceCommand internal-sftp
...
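One optional tweak, not required for the setup above: internal-sftp accepts the same command-line options as sftp-server(8), so you can, for example, force a umask on uploaded files (OpenSSH 5.4 or later). A variant of the block above:

```
Match Group sftponly
     ChrootDirectory %h
     X11Forwarding no
     AllowTCPForwarding no
     ForceCommand internal-sftp -u 0002
```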

Then restart SSHD by:

[root@web01 ~]# service sshd restart

Now that the foundation is complete, we can add chrooted SFTP-only users. You will notice I set the home directory to /home/chroot/bob. This is optional, but I prefer it so you can quickly tell the difference between regular users and SFTP-only users. To create a new user called bob with the proper group assignments and permissions:

[root@web01 ~]# mkdir -p /home/chroot
[root@web01 ~]# useradd -d /home/chroot/bob -s /bin/false -G sftponly bob
[root@web01 ~]# passwd bob
[root@web01 ~]# chmod 755 /home/chroot/bob
[root@web01 ~]# chown root:root /home/chroot/bob

Users will not be able to write any data directly in their home directory, since sshd requires the chroot directory to be owned by root and not writable by group or others. If they want to be able to write files, create them a writable directory by:

[root@web01 ~]# mkdir /home/chroot/bob/files
[root@web01 ~]# chown bob:bob /home/chroot/bob/files
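The account-creation steps above can be wrapped in a small helper. This is a sketch, not part of the original procedure; it assumes you run it as root and that the sftponly group already exists:

```shell
# Sketch: create a chrooted SFTP-only user in one shot.
# Assumes root privileges and that the sftponly group already exists.
add_sftp_user() {
    local user="$1"
    mkdir -p "/home/chroot/$user"
    useradd -d "/home/chroot/$user" -s /bin/false -G sftponly "$user"
    # sshd requires the chroot directory itself to be root-owned and
    # not group/world-writable, so the user cannot own their own home.
    chown root:root "/home/chroot/$user"
    chmod 755 "/home/chroot/$user"
    # Give the user a writable directory inside the chroot.
    mkdir -p "/home/chroot/$user/files"
    chown "$user:$user" "/home/chroot/$user/files"
}
# Usage: add_sftp_user bob   (then set a password with: passwd bob)
```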

Now to allow them to access their content in /var/www/vhosts/domain.com, you need to create a bind mount:

[root@web01 ~]# vim /etc/fstab
...
/var/www/vhosts/domain.com   /home/chroot/bob/domain.com        none    bind    0 0
...

Finally, create the placeholder folder, and mount the bind mount:

[root@web01 ~]# mkdir /home/chroot/bob/domain.com
[root@web01 ~]# mount -a

Confirm user bob is set up in the right group, and that the mounted directory /home/chroot/bob/domain.com has group-writable permissions. In my specific example, as the directory has the ownership apache:apache, I had to do the following:

[root@web01 ~]# usermod -a -G apache bob
[root@web01 ~]# chmod 775 /var/www/vhosts/domain.com

Benchmark MySQL with Sysbench

Tuning your MySQL configuration day in and day out without having an idea of what the hardware of the server can actually do in a perfect world can be a bit frustrating. This is where a tool like sysbench comes into play. Sysbench can allow you to get an idea of how MySQL will perform on your chosen server under load, using a basic set of tests.

It is important to note that this guide will not show you how to benchmark your existing MySQL dataset, but instead, it shows how your overall server will react to a generic MySQL dataset under heavy load.

Situations where this becomes useful are when you want to swap those SAS drives with SSDs, or when comparing running MySQL on a server versus something like Amazon RDS or Rackspace Cloud Databases. It allows you to get a feel for where the bottlenecks may come into play, whether from IO, network saturation, CPU, etc.

Getting started with sysbench is pretty straightforward. I’ll outline how to create the test dataset, then perform a few benchmarks off that dataset. For the purposes of this article, I am most concerned about how many transactions per second MySQL can handle on my server in a perfect world.

First, log into your database server, and create a new test database. Do not attempt to use an existing database with content, as sysbench will populate it with its own tables. Two grant statements are posted on purpose. Set the access, username, and password as needed for your environment:

[root@db01 ~]# mysql
mysql> create database sbtest;
mysql> grant all on sbtest.* to 'sysbench'@'%' identified by 'your_uber_secure_password';
mysql> grant all on sbtest.* to 'sysbench'@'localhost' identified by 'your_uber_secure_password';
mysql> flush privileges;
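Note that the combined GRANT ... IDENTIFIED BY syntax is deprecated as of MySQL 5.7.6 and removed in MySQL 8.0. On those versions, create the user first (same placeholder password as above):

```
mysql> CREATE USER 'sysbench'@'%' IDENTIFIED BY 'your_uber_secure_password';
mysql> CREATE USER 'sysbench'@'localhost' IDENTIFIED BY 'your_uber_secure_password';
mysql> GRANT ALL ON sbtest.* TO 'sysbench'@'%';
mysql> GRANT ALL ON sbtest.* TO 'sysbench'@'localhost';
```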

Next, log into your server running sysbench, and install it:

# CentOS 6
[root@sysbench01 ~]#  rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@sysbench01 ~]#  yum install sysbench

# CentOS 7
[root@sysbench01 ~]#  rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
[root@sysbench01 ~]#  yum install sysbench

# Ubuntu 12.04 / Ubuntu 14.04
[root@sysbench01 ~]#  apt-get update
[root@sysbench01 ~]#  apt-get install sysbench

On the sysbench server, run sysbench with the prepare statement so it can generate a table with data to be used during the benchmark. This command will populate a table in the sbtest database with 1,000,000 rows of data, and force innodb:

[root@sysbench01 ~]# sysbench --test=oltp --oltp-table-size=1000000 --mysql-host=192.168.1.1 --mysql-db=sbtest --mysql-user=sysbench --mysql-password=your_uber_secure_password --db-driver=mysql --mysql-table-engine=innodb prepare

You can verify the table was written properly on your database server by:

[root@db01 ~]# mysql
mysql> use sbtest;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+------------------+
| Tables_in_sbtest |
+------------------+
| sbtest           |
+------------------+
1 row in set (0.00 sec)

mysql> select count(*) from sbtest;
+----------+
| count(*) |
+----------+
|  1000000 |
+----------+
1 row in set (0.13 sec)

Back on the server you are running sysbench on, we are going to run a benchmark using a read/write test (--oltp-read-only=off) for a max time of 60 seconds using 64 threads, with the test mode set to complex (range queries, range SUM, range ORDER BY, inserts, updates on both indexed and non-indexed columns, and row deletes).

[root@sysbench01 ~]# sysbench --test=oltp --oltp-table-size=1000000 --mysql-host=192.168.1.1 --mysql-db=sbtest --mysql-user=sysbench --mysql-password=your_uber_secure_password --max-time=60 --oltp-test-mode=complex --oltp-read-only=off --max-requests=0 --num-threads=64 --db-driver=mysql run

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 64

Doing OLTP test.
Running mixed OLTP test
Using Special distribution (12 iterations,  1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Threads started!
Time limit exceeded, exiting...
(last message repeated 63 times)
Done.

OLTP test statistics:
    queries performed:
        read:                            1932084
        write:                           690030
        other:                           276012
        total:                           2898126
    transactions:                        138006 (2299.32 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 2622114 (43687.09 per sec.)
    other operations:                    276012 (4598.64 per sec.)

Test execution summary:
    total time:                          60.0203s
    total number of events:              138006
    total time taken by event execution: 3839.0815
    per-request statistics:
         min:                                  8.76ms
         avg:                                 27.82ms
         max:                                313.65ms
         approx.  95 percentile:              50.64ms

Threads fairness:
    events (avg/stddev):           2156.3438/34.49
    execution time (avg/stddev):   59.9856/0.01
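As a sanity check, the per-second figures above are just the totals divided by the elapsed time; for example, the transaction rate:

```shell
# 138006 transactions over 60.0203s matches the reported 2299.32 per sec.
awk 'BEGIN { printf "%.2f per sec.\n", 138006 / 60.0203 }'
```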

Let’s say you want to run the same test, but perform it using read-only queries:

[root@sysbench01 ~]# sysbench --test=oltp --oltp-table-size=1000000 --mysql-host=192.168.1.1 --mysql-db=sbtest --mysql-user=sysbench --mysql-password=your_uber_secure_password --max-time=60 --oltp-test-mode=complex --oltp-read-only=on --max-requests=0 --num-threads=64 --db-driver=mysql run

Here is an example of running the test in read/write mode, and disconnecting and reconnecting after each query:

[root@sysbench01 ~]# sysbench --test=oltp --oltp-table-size=1000000 --mysql-host=192.168.1.1 --mysql-db=sbtest --mysql-user=sysbench --mysql-password=your_uber_secure_password --max-time=60 --oltp-test-mode=complex --oltp-read-only=off --max-requests=0 --num-threads=64 --db-driver=mysql --oltp-reconnect-mode=query run
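When comparing hardware or configurations, it can also help to sweep the thread count and record transactions per second for each run. This is a sketch, not part of the original workflow; the connection flags are the same placeholders used above, and the small awk helper simply scrapes the transactions line out of sysbench's report:

```shell
# Pull the per-second transaction rate out of sysbench's report.
tps_of() { awk '/transactions:/ { gsub(/[()]/, "", $3); print $3 }'; }

if command -v sysbench >/dev/null; then
    for threads in 1 8 16 32 64; do
        tps=$(sysbench --test=oltp --oltp-table-size=1000000 \
                --mysql-host=192.168.1.1 --mysql-db=sbtest \
                --mysql-user=sysbench --mysql-password=your_uber_secure_password \
                --max-time=60 --oltp-test-mode=complex --oltp-read-only=off \
                --max-requests=0 --num-threads="$threads" --db-driver=mysql run \
              | tps_of)
        echo "$threads threads: $tps transactions/sec"
    done
fi
```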

Once you are done with your testing, you can clean up the database by:

[root@db01 ~]# mysql
mysql> drop database sbtest;
mysql> DROP USER 'sysbench'@'localhost';
mysql> DROP USER 'sysbench'@'%';
mysql> flush privileges;
mysql> quit