Rackspace Cloud API – Enable SSL Termination on existing cloud load balancer

The purpose of this post is to show how you can enable SSL termination on an existing cloud load balancer through the API. Using the API will allow you to script deployments so you can avoid having to use the control panel. This also provides you the building blocks for understanding deployment automation.

This guide will show you how to enable SSL termination on an existing cloud load balancer, as described in my previous post: Rackspace Cloud API – Create cloud load balancers.

Feel free to review http://docs.rackspace.com to learn about all the possible operations that can be done through the API.

In this example, we are going to enable SSL termination on an existing cloud load balancer called lb.example.com. The domain we are load balancing is www.example.com. Please note that enabling SSL termination on a Rackspace cloud load balancer costs more than a regular cloud load balancer!

SPECIAL NOTE: I advise against using SSL termination if you are passing any PII (personally identifiable information) or other sensitive data through the cloud load balancer to the cloud server. The transmission will only be encrypted from the client's browser to the load balancer. From there, the cloud load balancer will send the request in clear text across the Rackspace network to your cloud server.

When working with the API, I like to use a tool called httpie to simplify things a bit. You can install it with:

yum install httpie

Now that we have httpie installed, let's get an auth token from the API:

echo '{"auth": {"RAX-KSKEY:apiKeyCredentials": {"username": "YOUR_USERNAME","apiKey":"YOUR_API_KEY"}}}' | http post https://identity.api.rackspacecloud.com/v2.0/tokens

The token you need will be listed next to the “id” field, as shown below:

        "token": {
            "expires": "2013-07-09T23:17:08.634-05:00", 
            "id": "2334aasdf5555j3hfhd22245dhsr", 
            "tenant": {
                "id": "123456", 
                "name": "123456"

To simplify things moving forward, we will set some environment variables that we'll use when communicating with the API:

export token="YOUR_API_TOKEN_RECEIVED_ABOVE"
export account="YOUR_RACKSPACE_CLOUD_ACCOUNT_NUMBER"
export lb_endpoint="https://ord.loadbalancers.api.rackspacecloud.com/v1.0"

NOTE: Change the endpoint's region accordingly (ord or dfw).

For the purposes of this guide, we will be creating a self-signed SSL certificate. In production environments, you will want to purchase an SSL certificate through a trusted CA.

We will create our self-signed SSL certificate using the openssl command:

openssl req -x509 -nodes -newkey rsa:2048 -keyout example.com.tmp.key -out example.com.cert -days 1825
Generating a 2048 bit RSA private key
........+++
.....................................+++
writing new private key to 'example.com.tmp.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:California
Locality Name (eg, city) []: Los Angeles
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company LLC
Organizational Unit Name (eg, section) []:IT
Common Name (e.g. server FQDN or YOUR name) []:*.example.com
Email Address []:

Now we must convert the key so the API will be able to use it:

openssl rsa -in example.com.tmp.key -out example.com.key
rm -f example.com.tmp.key
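
Before moving on, it doesn't hurt to sanity-check the key and certificate (the key should validate, and the certificate's subject and dates should look right):

openssl rsa -in example.com.key -check -noout
openssl x509 -in example.com.cert -noout -subject -dates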

Format the key and cert so they are friendlier for the API (each line is emitted with a literal \n appended). Be sure to save the output, as we will need it for the next step:

while read line; do echo -n "$line\n"; done < example.com.key
while read line; do echo -n "$line\n"; done < example.com.cert
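
If you prefer a one-liner, awk can produce the same output as the while loops above:

awk '{printf "%s\\n", $0}' example.com.key
awk '{printf "%s\\n", $0}' example.com.cert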

Now, let's set up the JSON file that contains our certificate information:

cat << EOF > example.com.cert.json
{
   "certificate":"-----BEGIN CERTIFICATE-----\nMIIblahblahblahblah\n-----END CERTIFICATE-----",
   "enabled":true,
   "secureTrafficOnly":false,
   "privatekey":"-----BEGIN RSA PRIVATE KEY-----\nMIICWblahblahblahblah\n-----END RSA PRIVATE KEY-----",
   "intermediateCertificate":"",
   "securePort":443}
EOF

Finally, let's submit the JSON file to enable SSL on your pre-existing cloud load balancer:

http put $lb_endpoint/$account/loadbalancers/YOUR_CLOUD_LOAD_BALANCER_ID/ssltermination X-Auth-Token:$token @example.com.cert.json

SSL termination on your existing load balancer has now been enabled.
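
If you want to confirm the configuration took effect, you can query the ssltermination endpoint on the load balancer:

http get $lb_endpoint/$account/loadbalancers/YOUR_CLOUD_LOAD_BALANCER_ID/ssltermination X-Auth-Token:$token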

Rackspace Cloud API – Create cloud load balancers

The purpose of this post is to show how you can build Rackspace cloud load balancers using the API. Building via the API will allow you to script deployments so you can avoid having to use the control panel. This also provides you the building blocks for understanding deployment automation.

This guide will only show you how to create a cloud load balancer. Feel free to review http://docs.rackspace.com to learn about all the possible operations that can be done through the API.

In this example, we are going to deploy a single cloud load balancer called lb.example.com that will be directing HTTP traffic to two web servers:

test01.example.com
test02.example.com

When working with the API, I like to use a tool called httpie to simplify things a bit. You can install it with:

yum install httpie

Now that we have httpie installed, let's get an auth token from the API:

echo '{"auth": {"RAX-KSKEY:apiKeyCredentials": {"username": "YOUR_USERNAME","apiKey":"YOUR_API_KEY"}}}' | http post https://identity.api.rackspacecloud.com/v2.0/tokens

The token you need will be listed next to the “id” field, as shown below:

        "token": {
            "expires": "2013-07-09T23:17:08.634-05:00", 
            "id": "2334aasdf5555j3hfhd22245dhsr", 
            "tenant": {
                "id": "123456", 
                "name": "123456"

To simplify things moving forward, we will set some environment variables that we'll use when communicating with the API:

export token="YOUR_API_TOKEN_RECEIVED_ABOVE"
export account="YOUR_RACKSPACE_CLOUD_ACCOUNT_NUMBER"
export lb_endpoint="https://ord.loadbalancers.api.rackspacecloud.com/v1.0"

NOTE: Change the endpoint's region accordingly (ord or dfw).

Let's prep a JSON file that we'll be using to build the cloud load balancer:

cat << EOF > lb.example.com.json
{
    "loadBalancer": {
        "name": "lb.example.com",
        "port": 80,
        "protocol": "HTTP",
        "virtualIps": [
            {
                "type": "PUBLIC"
            }
         ],
        "nodes": [
            {
                "address": "10.123.123.121",
                "port": 80,
                "condition": "ENABLED"
            }
        ]
    }
}
EOF

Now we execute the build by:

http post $lb_endpoint/$account/loadbalancers X-Auth-Token:$token @lb.example.com.json

This will return the VIP and id of the new load balancer as shown below:

                "address": "123.123.123.123", 
        "id": 123456, 
        "name": "lb.example.com", 

For the purposes of this guide, I left out adding the second server initially. Here is how we can add it to the load balancer so traffic is routed between test01.example.com and test02.example.com. First, create a JSON file that contains test02.example.com:

cat << EOF > nodes.json
{
    "nodes": [
        {
            "address": "10.123.123.123",
            "port": 80,
            "condition": "ENABLED",
            "type": "PRIMARY"
        }
    ]
}
EOF

Now add this node to the load balancer:

http post $lb_endpoint/$account/loadbalancers/YOUR_LOAD_BALANCER_ID/nodes X-Auth-Token:$token @nodes.json

And you're done! You can verify everything looks correct by running:

http get $lb_endpoint/$account/loadbalancers/YOUR_LOAD_BALANCER_ID X-Auth-Token:$token
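
You can also list just the nodes attached to the load balancer:

http get $lb_endpoint/$account/loadbalancers/YOUR_LOAD_BALANCER_ID/nodes X-Auth-Token:$token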

Rackspace Cloud API – Create NextGen cloud servers

The purpose of this post is to show how you can build Rackspace Next Generation cloud servers using the API. Building via the API will allow you to script server builds so you can avoid having to use the control panel. This also provides you the building blocks for understanding deployment automation.

This guide will only show you how to create a cloud server. Feel free to review http://docs.rackspace.com to learn about all the possible operations that can be done through the API.

In this example, we are going to build two 512 MB CentOS 6.4 Cloud Servers via the Rackspace Cloud API. The servers will be named:

test01.example.com
test02.example.com

When working with the API, I like to use a tool called httpie to simplify things a bit. You can install it with:

yum install httpie

Now that we have httpie installed, let's get an auth token from the API:

echo '{"auth": {"RAX-KSKEY:apiKeyCredentials": {"username": "YOUR_USERNAME","apiKey":"YOUR_API_KEY"}}}' | http post https://identity.api.rackspacecloud.com/v2.0/tokens

The token you need will be listed next to the “id” field, as shown below:

        "token": {
            "expires": "2013-07-09T23:17:08.634-05:00", 
            "id": "2334aasdf5555j3hfhd22245dhsr", 
            "tenant": {
                "id": "123456", 
                "name": "123456"

To simplify things moving forward, we will set some environment variables that we'll use when communicating with the API:

export token="YOUR_API_TOKEN_RECEIVED_ABOVE"
export account="YOUR_RACKSPACE_CLOUD_ACCOUNT_NUMBER"
export endpoint="https://ord.servers.api.rackspacecloud.com/v2"

NOTE: Change the endpoint's region accordingly (ord or dfw).

Now, let's see what images are available. I'm looking for a CentOS 6.4 image:

http get $endpoint/$account/images/detail X-Auth-Token:$token

The id of the CentOS 6.4 image in this case is:

            "id": "e0ed4adb-3a00-433e-a0ac-a51f1bc1ea3d", 

We wanted a 512 MB server, so we must find the flavor's ID:

http get $endpoint/$account/flavors X-Auth-Token:$token

This shows that the 512 MB flavor has the following ID:

            "id": "2", 

All the information has been collected. Time to prep the two JSON files that we'll be using to build the two servers:

cat << EOF > test01.example.com.json
{
    "server" : {
        "name" : "test01.example.com",
        "imageRef" : "e0ed4adb-3a00-433e-a0ac-a51f1bc1ea3d",
        "flavorRef" : "2"
    }
}
EOF

cat << EOF > test02.example.com.json
{
    "server" : {
        "name" : "test02.example.com",
        "imageRef" : "e0ed4adb-3a00-433e-a0ac-a51f1bc1ea3d",
        "flavorRef" : "2"
    }
}
EOF

Finally, we have everything we need to begin the builds. Execute the builds by running:

http post $endpoint/$account/servers @test01.example.com.json X-Auth-Token:$token
http post $endpoint/$account/servers @test02.example.com.json X-Auth-Token:$token

When you run each POST statement above, two fields will be returned by the API. Be sure to record these somewhere:
– adminPass : This is your server's root password.
– id : This is your server's ID number, which will be referenced next.
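
The builds run in the background. If you want to watch them, you can poll each server and check the status and progress fields until the status reaches ACTIVE (a crude grep-based check):

http get $endpoint/$account/servers/YOUR_SERVER_ID X-Auth-Token:$token | grep -E '"(status|progress)"'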

A new server is not useful without knowing its IP address. After a few minutes pass, you can retrieve the IP addresses by running:

http get $endpoint/$account/servers/YOUR_SERVER_ID X-Auth-Token:$token
http get $endpoint/$account/servers/YOUR_OTHER_SERVER_ID X-Auth-Token:$token

The two relevant fields you will need for this example are shown below:

        "accessIPv4": "123.123.123.123", 
                    "addr": "10.123.123.123", 

Now you can SSH into your server using the IP address and admin password that were returned by the API.

Duplicity manager

Coming up with a secure and cost-effective backup solution can be a daunting task, as there are many considerations that must be taken into account. Some of the more basic items to think about are:

- Where to store your backups?
- Is the storage medium redundant?
- How will data retention be handled?
- How will the data at rest be encrypted?

A tool that I prefer for performing encrypted, bandwidth-efficient backups to a variety of remote backends such as Rackspace Cloud Files, Amazon S3, and many others is Duplicity.

Taken from Duplicity's site (http://duplicity.nongnu.org): Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server.

Duplicity-manager was created to act as a wrapper script for the tasks I commonly perform with Duplicity.
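
Under the hood, the wrapper boils down to a handful of plain duplicity invocations. Here is a minimal sketch of the equivalent raw commands, assuming the Cloud Files backend and an illustrative container named backups:

# Credentials for the Cloud Files backend (illustrative values)
export CLOUDFILES_USERNAME=YOUR_USERNAME
export CLOUDFILES_APIKEY=YOUR_API_KEY
export PASSPHRASE=YOUR_PASSPHRASE

# Incremental backup of /etc, forcing a full backup every 7 days
duplicity --full-if-older-than 7D /etc cf+http://backups

# Prune backups older than 14 days
duplicity remove-older-than 14D --force cf+http://backups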

Features

- Simple invocation from cron for nightly backups.
- All-in-one script for performing backups, restores, and searches for content from a specific time period.
- Provides an optional menu driven interface to make backups as painless as possible.

Configuration

The currently configurable options are listed below:

# Configuring either Rackspace Cloud Files or Amazon S3 backends

# List of directories to backup
INCLUDE_LIST=( /etc /var/www /var/lib/mysqlbackup )

# GPG Passphrase for encrypting data at rest
# You can use the following to generate a decent GPG passphrase, just be sure
# to store it somewhere secure off this server.
# < /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c64
export PASSPHRASE=YOUR_PASSPHRASE

# Backup Retention 
retention_type=remove-older-than
retention_max=14D
 
# Number of full backups to keep (alternative to above)
# retention_type=remove-all-but-n-full
# retention_max=3

# Force Full Backup Every XX Days
full_backup_days=7D

# Restore Directory
restore=/tmp

Usage

./duplicity-manager.sh 

Options:

--backup:                      runs a normal backup based off retention settings
--backup-force-full:           forces a full backup
--list-files [age]:            lists the files currently stored in backups
--restore-all [age]:           restores everything to restore directory
--restore-single [age] [path]: restores a specific file/dir to restore directory
--show-backups:                lists full and incremental backups in the archive
--menu:                        user friendly menu driven interface

Examples:

duplicity-manager.sh --list-files 0D              Lists the most recent files in archive
duplicity-manager.sh --restore-all 2D             Restores everything from 2 days ago
duplicity-manager.sh --restore-single 0D var/www/ Restores /var/www from latest backup

Implementation

Download script to desired directory and set it to be executable:

# Linux based systems
cd /root
git clone https://github.com/stephenlang/duplicity-manager

After configuring the tunables in the script (see above), create a cron job to execute the script once a day:

# Linux based systems
crontab -e
10 3 * * * /root/duplicity-manager/duplicity-manager.sh

As with any backup solution, it is critical that you test your backups often to ensure your data is recoverable in the event a restore is needed.

Encrypting Block Storage In The Cloud

For sensitive information being stored in the cloud outside your direct control, it is critical to encrypt your data at rest. Full disk encryption helps to protect you against unauthorized individuals mounting your volume without a key.

It should be noted that LUKS encryption will not protect your data while the volume is mounted and viewable by your server. A malicious user could, in theory, break into your server and traverse to that mount point.

My requirements

When writing this guide, I am using the following thought process when implementing LUKS on my Cloud Block Storage volume:

– No keys are to be stored on the server. This is for security purposes, since your keys shouldn't live alongside the data they protect. Would you tape the keys to the hood of your Porsche? No. The same logic applies here.
– The volume will not be mounted at boot. This is to prevent the server from stopping the boot process while waiting for you to enter the key. Please note that if the server reboots, you will have to manually log in, mount the volume, and type the passphrase for the volume before the system can use it again!

Procedure

This example is going to be specific for CentOS 6.

1. Create your Cloud Block Storage Volumes
In this instance, I am going to be using two 100 GB volumes that are already attached to my server, and will be setting them up in a RAID 1 configuration.

yum install mdadm cryptsetup-luks cryptsetup-luks-devel
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvdb /dev/xvdd

Now confirm the RAID is rebuilding itself by typing:

mdadm --detail /dev/md0
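
You can also watch the sync progress directly:

cat /proc/mdstat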

2. Now it's time to set up LUKS. This will format your volume, so use caution.

cryptsetup luksFormat /dev/md0

Confirm the contents of the message, then type ‘YES’ in uppercase letters. Then enter a very secure passphrase, and store it somewhere safe. Never store the key on your server!

3. You can verify the results of the encryption process by typing the following:

cryptsetup luksDump /dev/md0

4. Time to mount the encrypted volume and give it a name:

cryptsetup luksOpen /dev/md0 mysecurevolume

5. Finally, let's put a filesystem on it and mount it:

mkfs.ext4 /dev/mapper/mysecurevolume
mkdir /opt/mysecurevolume
mount /dev/mapper/mysecurevolume /opt/mysecurevolume

6. You can check the status to ensure it's encrypted by typing:

cryptsetup status mysecurevolume

7. Now disable automount so your server won’t hang on boot waiting for the passphrase:

vi /etc/grub.conf
# Append the following at the end of all the kernel lines
rd_NO_LUKS

And you're done!

To manually mount the volume after a reboot:

cryptsetup luksOpen /dev/md0 mysecurevolume
mount /dev/mapper/mysecurevolume /opt/mysecurevolume
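
If you expect to do this regularly, a small helper script saves some typing. This is just a sketch; adjust the device name and mount point to your setup:

#!/bin/bash
# mount-secure.sh : unlock and mount the encrypted volume after a reboot
cryptsetup luksOpen /dev/md0 mysecurevolume && \
mount /dev/mapper/mysecurevolume /opt/mysecurevolume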

To manually unmount the volume, type:

umount /dev/mapper/mysecurevolume
cryptsetup luksClose mysecurevolume

Final thoughts

Remember, full disk encryption utilizing LUKS is only one part of a defense in depth strategy. No security management system is perfect, but each layer you add will help increase your solutions security footprint.

Keeping multiple web servers in sync with rsync

People looking to create a load balanced web server solution often ask, how can they keep their web servers in sync with each other? There are many ways to go about this: NFS, lsync, rsync, etc. This guide will discuss a technique using rsync that runs from a cron job every 10 minutes.

There will be two different options presented: pulling the updates from the master web server, and pushing the updates from the master web server down to the slave web servers.

Our example will consist of the following servers:

web01.example.com (192.168.1.1) # Master Web Server
web02.example.com (192.168.1.2) # Slave Web Server
web03.example.com (192.168.1.3) # Slave Web Server

Our master web server is going to be the single source of truth for the web content of our domain. Therefore, the web developers will only be modifying content on the master web server, and rsync will handle keeping all the slave nodes in sync.

There are a few prerequisites that must be in place:
1. Confirm that rsync is installed.
2. If pulling updates from the master web server, all slave servers must be able to SSH to the master server using an SSH key with no passphrase.
3. If pushing updates from the master down to the slave servers, the master server must be able to SSH to the slave web servers using an SSH key with no passphrase.

To be proactive about monitoring the status of the rsync job, both scripts posted below allow you to perform an HTTP content check against a status file to see if the string “SUCCESS” exists. If something other than SUCCESS is found, the rsync script may have failed and should be investigated. An example of a URL to monitor would be: 192.168.1.1/datasync.status

Please note that the assumption is being made that your web server will serve files that are placed in /var/www/html/. If not, please update the $status variable accordingly.
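
For example, a monitoring system (or a simple cron job) could poll the status file like this (a sketch using curl; substitute your monitoring tool of choice):

curl -s http://192.168.1.1/datasync.status | grep -q SUCCESS || echo "rsync job needs attention"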

Using rsync to pull changes from the master web server:

This is especially useful if you are in a cloud environment and scale your environment by snapshotting an existing slave web server to provision a new one. When the new slave web server comes online, assuming it already has the SSH key in place, it will automatically grab the latest content from the master server with no interaction needed from you, except to test and then enable it in your load balancer.

The disadvantage of using the pull method for your rsync updates comes into play when you have multiple slave web servers all running the rsync job at the same time. This can put a strain on the master web server's CPU, which can cause performance degradation. However, if you have fewer than 10 servers, or if your site does not have a lot of content, the pull method should work fine.

Below will show the procedure for setting this up:

1. Create SSH keys on each slave web server:

ssh-keygen -t dsa

2. Now copy the public key generated on the slave web server (/root/.ssh/id_dsa.pub) and append it to the master web server's /root/.ssh/authorized_keys2 file.
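
On most systems, ssh-copy-id can automate this step. Note that it appends to authorized_keys rather than authorized_keys2, so pick the file that matches your sshd configuration:

# On each slave web server
ssh-copy-id -i /root/.ssh/id_dsa.pub root@192.168.1.1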

3. Test SSH'ing in as root from the slave web server to the master web server:
# On web02

ssh root@192.168.1.1

4. Assuming you were able to log in to the master web server cleanly, it's time to create the rsync script on each slave web server. Please note that I am assuming your sites' document roots are stored in /var/www/vhosts. If not, please change the script accordingly and test!

mkdir -p /opt/scripts/
vi /opt/scripts/pull-datasync.sh

#!/bin/bash
# pull-datasync.sh : Pull site updates down from master to front end web servers via rsync

status="/var/www/html/datasync.status"

# Bail out if a previous run is still in progress
if [ -d /tmp/.rsync.lock ]; then
    echo "FAILURE : rsync lock exists : Perhaps there is a lot of new data to pull from the master server. Will retry shortly" > $status
    exit 1
fi

if ! /bin/mkdir /tmp/.rsync.lock; then
    echo "FAILURE : can not create lock" > $status
    exit 1
else
    echo "SUCCESS : created lock" > $status
fi

echo "===== Beginning rsync ====="

nice -n 20 /usr/bin/rsync -axvz --delete -e ssh root@192.168.1.1:/var/www/vhosts/ /var/www/vhosts/

# Treat any nonzero rsync exit code as a failure
if [ $? -ne 0 ]; then
    echo "FAILURE : rsync failed. Please refer to solution documentation" > $status
    exit 1
fi

echo "===== Completed rsync ====="

/bin/rm -rf /tmp/.rsync.lock
echo "SUCCESS : rsync completed successfully" > $status

Be sure to set executable permissions on this script so cron can run it:

chmod 755 /opt/scripts/pull-datasync.sh

Using rsync to push changes from the master web server down to slave web servers:

Using rsync to push changes from the master down to the slaves also has some important advantages. First off, the slave web servers will not have SSH access to the master server. This could become critical if one of the slave servers is ever compromised and tries to gain access to the master web server. The next advantage is that the push method does not cause a serious CPU strain, because the master runs rsync against the slave servers one at a time.

The disadvantage here would be if you have a lot of web servers syncing content that changes often. It's possible that your updates will not be pushed down to the web servers as quickly as expected, since the master server syncs the servers one at a time. So be sure to test this out to see if the results work for your solution. Also, if you are cloning your servers to create additional web servers, you will need to update the rsync configuration accordingly to include the new nodes.

Below will show the procedure for setting this up:

1. To make administration easier, it's recommended to set up the /etc/hosts file on the master web server to include a list of all the servers' hostnames and internal IPs.

vi /etc/hosts
192.168.1.1 web01 web01.example.com
192.168.1.2 web02 web02.example.com
192.168.1.3 web03 web03.example.com

2. Create SSH keys on the master web server:

ssh-keygen -t dsa

3. Now copy the public key generated on the master web server (/root/.ssh/id_dsa.pub) and append it to each slave web server's /root/.ssh/authorized_keys2 file.

4. Test SSH'ing in as root from the master web server to each slave web server:
# On web01

ssh root@web02

5. Assuming you were able to log in to the slave web servers cleanly, it's time to create the rsync script on the master web server. Please note that I am assuming your sites' document roots are stored in /var/www/vhosts. If not, please change the script accordingly and test!

mkdir -p /opt/scripts/
vi /opt/scripts/push-datasync.sh

#!/bin/bash
# push-datasync.sh - Push site updates from master server to front end web servers via rsync

# Slave web servers to receive content (extend this list as you add nodes)
webservers=(web02 web03)
status="/var/www/html/datasync.status"

# Bail out if a previous run is still in progress
if [ -d /tmp/.rsync.lock ]; then
    echo "FAILURE : rsync lock exists : Perhaps there is a lot of new data to push to front end web servers. Will retry soon." > $status
    exit 1
fi

if ! /bin/mkdir /tmp/.rsync.lock; then
    echo "FAILURE : can not create lock" > $status
    exit 1
else
    echo "SUCCESS : created lock" > $status
fi

for i in "${webservers[@]}"; do

    echo "===== Beginning rsync of $i ====="

    nice -n 20 /usr/bin/rsync -avzx --delete -e ssh /var/www/vhosts/ root@$i:/var/www/vhosts/

    # Treat any nonzero rsync exit code as a failure
    if [ $? -ne 0 ]; then
        echo "FAILURE : rsync failed. Please refer to the solution documentation" > $status
        exit 1
    fi

    echo "===== Completed rsync of $i ====="
done

/bin/rm -rf /tmp/.rsync.lock
echo "SUCCESS : rsync completed successfully" > $status

Be sure to set executable permissions on this script so cron can run it:

chmod 755 /opt/scripts/push-datasync.sh

Now that you have the script in place and tested, it's time to set it up to run automatically via cron. For the example here, I am setting up cron to run the script every 10 minutes.

If using the push method, put the following into the master web server's crontab:

crontab -e
# Datasync script
*/10 * * * * /opt/scripts/push-datasync.sh

If using the pull method, put the following into each slave web server's crontab:

crontab -e
# Datasync script
*/10 * * * * /opt/scripts/pull-datasync.sh