How to set up Proxmox VE 5 with LXC containers on the Rackspace Cloud

Testing out changes in a production environment is never a good idea. However, prepping test servers can be tedious, as you have to find the hardware and set up the operating system before you can begin. So I wanted a faster and more cost-effective medium: turning a single Cloud Server into a virtualized host server for my test servers. Welcome LXC.

Taken from the provider's site, LXC is an operating system-level virtualization technology for Linux. It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs). LXC is similar to Solaris Containers, FreeBSD jails, and OpenVZ.

To manage my LXC containers, I prefer to use Proxmox VE 5, which provides a clean control panel for managing them.

This guide will document how to install Proxmox on a 4G Rackspace Cloud Server running Debian 9. A 50G SSD Cloud Block Storage volume will be attached to the server and formatted with ZFS to store the containers, as outlined below. The Proxmox installation will install everything needed to run LXC. The IPs for the containers will be provided via NAT served from the server, creating a self-contained test environment.

Configure system for LXC according to best practices

Increase the open files limit by appending the following to the bottom of /etc/security/limits.conf:

[root@proxmox01 ~]# vim /etc/security/limits.conf
...
*       soft    nofile  1048576
*       hard    nofile  1048576
root    soft    nofile  1048576
root    hard    nofile  1048576
*       soft    memlock 1048576
*       hard    memlock 1048576
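
These limits only apply to new sessions. As a quick sanity check, log in again (or start a new shell) and confirm the new open files limit took effect:

[root@proxmox01 ~]# ulimit -n
1048576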

Now set up some basic kernel tuning at the bottom of /etc/sysctl.conf:

[root@proxmox01 ~]# vim /etc/sysctl.conf
...
# LXD best practices:  https://github.com/lxc/lxd/blob/master/doc/production-setup.md
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
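
These settings are normally only read at boot, but you can apply them immediately without a reboot:

[root@proxmox01 ~]# sysctl -p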

Install Proxmox VE 5

For this to work, we start with a vanilla Debian 9 Cloud Server and install Proxmox on top of it, which will pull in the required kernel.

To get things started, update /etc/hosts to set up your FQDN, and remove any resolvable IPv6 domains:

[root@proxmox01 ~]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
123.123.123.123 proxmox01.yourdomain.com proxmox01-iad

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Test to confirm /etc/hosts is set up properly. This should return your server's IP address:

[root@proxmox01 ~]# hostname --ip-address
123.123.123.123

Add the Proxmox VE repo and add the repo key:

[root@proxmox01 ~]# echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
[root@proxmox01 ~]# wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg

Update the package index and then update the system for Proxmox:

[root@proxmox01 ~]# apt update && apt dist-upgrade
* Select the option 'Install the package maintainer's version' when asked about grub

Install Proxmox VE and reboot:

[root@proxmox01 ~]# apt install proxmox-ve postfix open-iscsi
[root@proxmox01 ~]# reboot

Once the cloud server comes back online, confirm you are running the pve kernel:

[root@proxmox01 ~]# uname -a
Linux proxmox 4.13.4-1-pve #1 SMP PVE 4.13.4-25 (Fri, 13 Oct 2017 08:59:53 +0200) x86_64 GNU/Linux
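
You can also confirm the Proxmox packages themselves; pveversion ships with Proxmox VE and reports the installed release:

[root@proxmox01 ~]# pveversion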

Set up NAT for the containers

As the Rackspace Cloud server comes with 1 IP address, I will be making use of NAT’ed IP addresses to assign to my individual containers. The steps are documented below:

Update /etc/sysctl.conf to enable IP forwarding:

[root@proxmox01 ~]# vim /etc/sysctl.conf
...
net.ipv4.ip_forward = 1
...

Then apply the new settings without a reboot:

[root@proxmox01 ~]# sysctl -p

To set up the NAT rules, we need a script that runs at boot. Two things need to be taken into consideration here:

1. Change the IP address below (123.123.123.123) in the NAT rule to your Cloud server's public IP address.
2. This assumes you want to use a 192.168.1.0/24 network for your VEs.

The quick and dirty script is below:

[root@proxmox01 ~]# vim /etc/init.d/lxc-routing
#!/bin/sh
case "$1" in
 start) echo "lxc-routing started"
# It's important that you change the SNAT IP to the one of your server (not the local but the internet IP)
# The following line adds a route to the IP-range that we will later assign to the VPS. That's how you get internet access on your VPS.
/sbin/iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j SNAT --to 123.123.123.123

# Allow servers to have access to internet:
/sbin/iptables -A FORWARD -s 192.168.1.0/24 -j ACCEPT
/sbin/iptables -A FORWARD -d 192.168.1.0/24 -j ACCEPT
# Be sure to add net.ipv4.ip_forward=1 to /etc/sysctl.conf, then run sysctl -p

# These are the rules for any port forwarding you want to do
# In this example, all traffic to and from the ports 11001-11019 gets routed to/from the VPS with the IP 192.168.1.1.
# Also the port 11000 is routed to the SSH port of the vps, later on you can ssh into your VPS through yourip:11000

#/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 11000 -j DNAT --to 192.168.1.1:22
#/sbin/iptables -t nat -A PREROUTING -i eth0 -p udp --dport 11001:11019 -j DNAT --to 192.168.1.1
#/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 11001:11019 -j DNAT --to 192.168.1.1

# In my case I also dropped outgoing SMTP traffic, as it's one of the most abused things on servers

#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 25
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 2525
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 587
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 465
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 2526
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 110
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 143
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 993

;;

*) echo "Usage: /etc/init.d/lxc-routing {start}"
exit 2
;;

esac
exit 0

Setup permissions, set to run on boot, and run it:

[root@proxmox01 ~]# chmod 755 /etc/init.d/lxc-routing
[root@proxmox01 ~]# update-rc.d lxc-routing defaults
[root@proxmox01 ~]# /etc/init.d/lxc-routing start
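
To confirm the rules loaded, list the NAT table:

[root@proxmox01 ~]# iptables -t nat -L POSTROUTING -n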

When you go to start a new container, it will fail to start, with Proxmox logging an error similar to the one below:

-- Unit pve-container@100.service has begun starting up.
Nov 06 06:07:07 proxmox01.*********** systemd-udevd[11150]: Could not generate persistent MAC address for vethMVIWQY: No such file or directory
Nov 06 06:07:07 proxmox01.*********** kernel: IPv6: ADDRCONF(NETDEV_UP): veth100i0: link is not ready

This can be corrected by:

[root@proxmox01 ~]# vim /etc/systemd/network/99-default.link
[Link]
NamePolicy=kernel database onboard slot path
MACAddressPolicy=none

Then reboot:

[root@proxmox01 ~]# reboot

Navigate your browser to the control panel, log in with your root SSH credentials, and set up a Linux Bridge:

- Navigate your browser to: https://x.x.x.x:8006
- Click on System --> Network
- On top, click 'Create' --> 'Linux Bridge'
	- Name:  vmbr0
	- IP address:  192.168.1.1
	- Subnet mask: 255.255.255.0
	- Autostart:  checked
	- Leave everything else blank
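
Behind the scenes, Proxmox writes the bridge configuration to /etc/network/interfaces. Assuming the values above, the resulting stanza should look something like this:

auto vmbr0
iface vmbr0 inet static
        address  192.168.1.1
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0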

Next, set up the 50G SSD Cloud Block Storage volume with ZFS and add it to Proxmox. Assuming the device is already attached, check what it got mapped to:

[root@proxmox01 ~]# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  80G  0 disk 
└─xvda1 202:1    0  80G  0 part /
xvdb    202:16   0  50G  0 disk   <--- This is my new volume

First, install the ZFS utils for Linux, and enable the kernel module:

[root@proxmox01 ~]# apt-get install zfsutils-linux
[root@proxmox01 ~]# /sbin/modprobe zfs

Then create a zpool on the new volume:

[root@proxmox01 ~]# zpool create zfs /dev/xvdb
[root@proxmox01 ~]# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs   49.8G  97.5K  49.7G         -     0%     0%  1.00x  ONLINE  -
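
You can also check pool health and the datasets it contains:

[root@proxmox01 ~]# zpool status zfs
[root@proxmox01 ~]# zfs list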

Now add the new pool to Proxmox:

- Navigate your browser to: https://x.x.x.x:8006
- Click on Datacenter --> Storage
- On top, click 'Add' --> 'ZFS'
	- Name:  zfs
	- ZFS Pool:  zfs
	- Enable:  Checked
	- Thin provision:  Checked
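
This writes a matching entry to /etc/pve/storage.cfg. Assuming the names above, it should look something like:

zfspool: zfs
        pool zfs
        content images,rootdir
        sparse 1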

Add Docker support to the containers

Docker can run successfully within an LXC container with some additional configuration. However, as the Proxmox kernel may be older, the latest versions of Docker may fail to work properly. The versions of Docker shipped in the OS repos seem to work, though.

First, create the containers as desired for Docker via Proxmox, then add the following to the bottom of the container's LXC config file:

[root@proxmox01 ~]# vim /etc/pve/lxc/100.conf
...
#insert docker part below
lxc.aa_profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
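
Note that these are the pre-2.1 LXC key names. On newer LXC releases (which the NFS section below assumes), the AppArmor key was renamed, so the equivalent first line would be:

lxc.apparmor.profile: unconfined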

After restarting that container, you will be able to install and configure Docker inside it as normal.
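
A quick way to confirm Docker works inside the container (assuming Docker was installed from the OS repos, as noted above):

docker run --rm hello-world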

Add NFS support to the containers

NFS can run successfully within an LXC container with some additional configuration.

First, create an apparmor profile for NFS:

vim /etc/apparmor.d/lxc/lxc-default-with-nfs
# Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

# allow NFS (nfs/nfs4) mounts.
  mount fstype=nfs*,
}

Then reload the LXC profiles:

apparmor_parser -r /etc/apparmor.d/lxc-containers

This profile explicitly allows NFS mounts inside any container that uses it. Finally, add the following to the bottom of the container's LXC config file:

[root@proxmox01 ~]# vim /etc/pve/lxc/100.conf
...
#insert near bottom
lxc.apparmor.profile: lxc-container-default-with-nfs

After restarting that container, you will be able to install and configure NFS inside it as normal.
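
For example, to verify from inside a Debian-based container (the server IP and export path here are placeholders):

apt-get install nfs-common
mount -t nfs 192.168.1.1:/export /mnt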

Rackspace Cloud Monitoring syntax examples

This guide is just a quick reference article displaying the alarm criteria needed for certain checks. Most of these are available in the Rackspace control panel, and some are customized versions taken from the contrib GitHub repo.

CPU

This check monitors the CPU for high usage. The example below will return a warning when CPU usage is over 90%, and a critical alert when it is over 95%.

Label:

:set consecutiveCount=5

if (metric['usage_average'] > 95) {
  return new AlarmStatus(CRITICAL, 'CPU usage is #{usage_average}%');
}

if (metric['usage_average'] > 90) {
  return new AlarmStatus(WARNING, 'CPU usage is #{usage_average}%');
}

return new AlarmStatus(OK, 'CPU usage is #{usage_average}%');

Memory

This check monitors your memory and swap usage. If your system has less than 5% memory available, and less than 10% of swap available, it will throw an alarm.

Label:

if (metric['swap_total'] > 0 && percentage(metric['swap_used'], metric['swap_total']) > 90
    && percentage(metric['actual_used'], metric['total']) > 95) {
  return new AlarmStatus(CRITICAL, 'Less than 5% of memory and 10% of swap available');
}

if (metric['swap_total'] == 0 && percentage(metric['actual_used'], metric['total']) > 95) {
  return new AlarmStatus(CRITICAL, 'Less than 5% of memory available');
}

return new AlarmStatus(OK, 'More than 5% of memory available');

Load Average

This checks the load average on the server. If the 15 minute load average is greater than 20, it will create an alarm.

Label: High Load Average

if (metric['15m'] > 20) {
  return new AlarmStatus(CRITICAL, '15 Minute Load Average is #{15m}');
}

return new AlarmStatus(OK, '15 Minute Load Average is #{15m}');

Filesystem

This one is broken down into two alarms: one checking for available space, and the other checking to see if the filesystem is in read-only mode.

Please keep in mind that each alarm below should be a separate alarm!

Label: Low Filesystem Space

if (percentage(metric['used'], metric['total']) > 90) {
  return new AlarmStatus(CRITICAL, 'Less than 10% free space available.');
}

if (percentage(metric['used'], metric['total']) > 80) {
  return new AlarmStatus(WARNING, 'Less than 20% free space available.');
}

return new AlarmStatus(OK, 'Greater than 80% free space available.');

Label: Check for read only filesystem

if (metric['options'] regex ".*ro.*") {
  return new AlarmStatus(CRITICAL, "Read-Only Filesystem");
}

return new AlarmStatus(OK, 'Filesystem in Read-Write mode.');

MySQL Replication Check

This is a custom agent plugin, so you need to download the plugin on your server first:

mkdir -p /usr/lib/rackspace-monitoring-agent/plugins
cd /usr/lib/rackspace-monitoring-agent/plugins/
wget https://raw.github.com/racker/rackspace-monitoring-agent-plugins-contrib/master/mysql_replication.py
chmod 755 mysql_replication.py

Now register the plugin with Cloud Monitoring:

curl -i -X POST -H 'Host: monitoring.api.rackspacecloud.com' -H 'Accept-Encoding: gzip,deflate' -H 'X-Auth-Token: YOUR_API_TOKEN' -H 'Content-Type: application/json; charset=UTF-8' -H 'Accept: application/json' --data-binary '{"label": "MySQL Replication Check", "type": "agent.plugin", "details": {"args": ["arg1"],"file": "mysql_replication.py"}}'  --compress 'https://monitoring.api.rackspacecloud.com:443/v1.0/YOUR_ACCOUNT_NUMBER/entities/ENTITY_ID/checks'

Finally, apply the alert criteria.

Label: MySQL Replication Check

if (metric['SLAVE_STATUS'] != 'ONLINE') {
  return new AlarmStatus(CRITICAL, 'MySQL Replication is OFFLINE.');
}

if (metric['SLAVE_STATUS'] == 'ONLINE' && metric['SECONDS_BEHIND_MASTER'] >= 120 && metric['SECONDS_BEHIND_MASTER'] < 300) {
  return new AlarmStatus(WARNING, 'MySQL Replication ONLINE but Slave is more than 2 minutes behind Master.');
}

if (metric['SLAVE_STATUS'] == 'ONLINE' && metric['SECONDS_BEHIND_MASTER'] >= 300) {
  return new AlarmStatus(CRITICAL, 'MySQL Replication ONLINE but Slave is more than 5 minutes behind Master.');
}

return new AlarmStatus(OK, 'MySQL Replication is ONLINE');

Holland Check

If you have Holland installed, you can monitor your nightly MySQL dumps to ensure no errors have been returned. The plugin also checks that MySQL is running and that a valid /root/.my.cnf exists.

This is a custom agent plugin, so you need to download the plugin on your server first:

mkdir -p /usr/lib/rackspace-monitoring-agent/plugins
cd /usr/lib/rackspace-monitoring-agent/plugins/
wget https://raw.github.com/racker/rackspace-monitoring-agent-plugins-contrib/master/holland_mysqldump.py
chmod 755 holland_mysqldump.py

Now register the plugin with Cloud Monitoring:

raxmon-checks-create --entity-id=YOUR_ENTITY  --label=Holland --type=agent.plugin --username=YOUR_USERNAME --api-key=YOUR_API_KEY --details=file=holland_mysqldump.py

Finally, apply the alert criteria. Please keep in mind that each alarm below should be a separate alarm!

Label: Holland Log

if (metric['sql_ping_succeeds'] == 'true' && 
    metric['sql_creds_exist'] == 'true' && 
    metric['sql_status_succeeds'] == 'true' && 
    metric['dump_age'] < 172800 && 
    metric['error_count'] > 0) { 
  return new AlarmStatus(CRITICAL, 'holland-plugin: #{last_error}.'); 
} 
return new AlarmStatus(OK, 'holland-plugin: No errors found in most recent log entries.');

Label: MySQL Authenticates

if (metric['sql_ping_succeeds'] == 'true' && 
    metric['sql_creds_exist'] == 'true' && 
    metric['sql_status_succeeds'] == 'false') { 
  return new AlarmStatus(CRITICAL, 'holland-plugin: MySQL credentials do not authenticate.'); 
} 
return new AlarmStatus(OK, 'holland-plugin: MySQL credentials authenticate.');

Label: MySQL Credentials Exist

if (metric['sql_ping_succeeds'] == 'true' && 
    metric['sql_creds_exist'] == 'false') { 
  return new AlarmStatus(CRITICAL, 'holland-plugin: MySQL credentials file does not exist.'); 
} 
return new AlarmStatus(OK, 'holland-plugin: MySQL credentials file exists.');

Label: MySQL Running

if (metric['sql_ping_succeeds'] == 'false') { 
  return new AlarmStatus(CRITICAL, 'holland-plugin: MySQL is not running.'); 
} 
return new AlarmStatus(OK, 'holland-plugin: MySQL is running');

Label: Recent Backup Exists

if (metric['sql_ping_succeeds'] == 'true' && 
    metric['sql_creds_exist'] == 'true' && 
    metric['sql_status_succeeds'] == 'true' && 
    metric['dump_age'] > 172800) { 
  return new AlarmStatus(CRITICAL, 'holland-plugin: mysqldump file is older than 2d.'); 
} 
return new AlarmStatus(OK, 'holland-plugin: mysqldump file age is less than 2d.');

Process Check

This is a quick and dirty plugin I wrote for when you need to be notified if something is not running on the server, such as Lsyncd or Memcached, since these either can't (Lsyncd) or shouldn't (Memcached) be listening on the public interface. Any process in the process list can be monitored with this plugin.

This is a custom agent plugin, so you need to download the plugin on your server first:

mkdir -p /usr/lib/rackspace-monitoring-agent/plugins
cd /usr/lib/rackspace-monitoring-agent/plugins
wget https://raw.github.com/racker/rackspace-monitoring-agent-plugins-contrib/master/process_mon.sh
chmod 755 process_mon.sh

Now register the plugin with Cloud Monitoring:

curl -i -X POST -H 'Host: monitoring.api.rackspacecloud.com' -H 'Accept-Encoding: gzip,deflate' -H 'X-Auth-Token: YOUR_API_TOKEN' -H 'Content-Type: application/json; charset=UTF-8' -H 'Accept: application/json' --data-binary '{"label": "Process Check", "type": "agent.plugin", "details": {"args": ["PROCESS_NAME"],"file": "process_mon.sh"}}'  --compress 'https://monitoring.api.rackspacecloud.com:443/v1.0/YOUR_ACCOUNT/entities/YOUR_ENTITY/checks'
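
Before wiring up the alarm, you can run the plugin by hand to confirm it sees your process; PROCESS_NAME is the same placeholder passed as the check argument above:

/usr/lib/rackspace-monitoring-agent/plugins/process_mon.sh PROCESS_NAME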

Finally, apply the alert criteria in the Rackspace control panel:

if (metric['process_mon'] == 0) {
  return new AlarmStatus(CRITICAL, 'Process not running.');
}

return new AlarmStatus(OK, 'Process running normally.');

How to set up OpenVZ on the Rackspace Cloud

Testing out changes in a production environment is never a good idea. However, prepping test servers can be tedious, as you have to find the hardware and set up the operating system before you can begin. So I wanted a faster and more cost-effective medium: turning a single Cloud Server into a virtualized host server for my test servers. Welcome OpenVZ.

Taken from the provider's site, OpenVZ (Open Virtuozzo) is an operating system-level virtualization technology for Linux. It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs). OpenVZ is similar to Solaris Containers and LXC.

To manage my OpenVZ containers, I prefer to use Proxmox, which provides a clean control panel for managing them.

This guide will document how to install Proxmox on a 2G Rackspace Cloud Server running Debian 7. The Proxmox installation will install everything needed to run OpenVZ.

Install Proxmox

For this to work, we start with a vanilla Debian 7 Cloud Server and install Proxmox on top of it, which will install the required kernel.

To get things started, update /etc/hosts to set up your FQDN, and remove any resolvable IPv6 domains:

[root@proxmox ~]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.6.177 proxmox.yourdomain.com proxmox pvelocalhost

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Now back up /etc/apt/sources.list, and create a fresh one that uses Proxmox's repos:

mv /etc/apt/sources.list /etc/apt/sources.list.bak
vim /etc/apt/sources.list
[ ADD ]
deb http://ftp.at.debian.org/debian wheezy main contrib

# PVE repository provided by proxmox.com, only for installation (this repo will stay on 3.1)
deb http://download.proxmox.com/debian wheezy pve

# security updates
deb http://security.debian.org/ wheezy/updates main contrib

Now add the Proxmox VE repository key:

wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -

Update the package index and then update the system in preparation for Proxmox:

apt-get update && apt-get dist-upgrade

Install the Proxmox kernel and headers:

apt-get install pve-firmware pve-kernel-2.6.32-26-pve
apt-get install pve-headers-2.6.32-26-pve

Update grub and reboot into the Proxmox kernel:

vim /etc/default/grub
# From
GRUB_DEFAULT=0
# To
GRUB_DEFAULT=3
...
update-grub2
reboot

Once the cloud server comes back online, confirm you are running the pve kernel:

uname -a
Linux proxmox 2.6.32-26-pve #1 SMP Mon Oct 14 08:22:20 CEST 2013 x86_64 GNU/Linux

** If the kernel is still a 3.2 kernel, something is wrong and grub booted the default Debian kernel instead of the pve kernel. Go back and confirm all the steps worked properly.

Remove the old Debian kernel as it is no longer needed:

apt-get remove linux-image-amd64 linux-image-3.2.0-4-amd64 linux-base
update-grub

Install the Proxmox VE packages:

apt-get install proxmox-ve-2.6.32 ntp ssh lvm2 postfix ksm-control-daemon vzprocps open-iscsi bootlogd

Open up the firewall to allow inbound 8006 (the Proxmox web UI) from your workstation's IP address:

ufw allow from x.x.x.x to any port 8006 proto tcp

Set up NAT for the VEs

As the Rackspace Cloud server comes with 1 IP address, I will be making use of NAT’ed IP addresses to assign to my individual containers. The steps are documented below:

Update /etc/sysctl.conf to enable IP forwarding:

vim /etc/sysctl.conf
[ ADD ]
net.ipv4.ip_forward=1

Then apply the new setting:

sysctl -p

To setup the NAT rules, we need to setup a script that will start on boot. Below is a script that I found on https://vpsaddicted.com/install-and-configure-proxmox-ve-for-nat-ipv4-vps-on-debian-wheezy/.

Two things need to be taken into consideration here:
1. Change the IP address below (123.123.123.123) in the NAT rule to your Cloud server's public IP address.
2. This assumes you want to use a 10.0.0.0/24 network for your VEs.

vim /etc/init.d/vz-routing
#!/bin/sh
case "$1" in
 start) echo "vz-routing started"
# It's important that you change the SNAT IP to the one of your server (not the local but the internet IP)
# The following line adds a route to the IP-range that we will later assign to the VPS. That's how you get internet access on your VPS.
/sbin/iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j SNAT --to 123.123.123.123

# Allow servers to have access to internet:
/sbin/iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT
/sbin/iptables -A FORWARD -d 10.0.0.0/24 -j ACCEPT
# Be sure to add net.ipv4.ip_forward=1 to /etc/sysctl.conf, then run sysctl -p

# These are the rules for any port forwarding you want to do
# In this example, all traffic to and from the ports 11001-11019 gets routed to/from the VPS with the IP 10.0.0.1.
# Also the port 11000 is routed to the SSH port of the vps, later on you can ssh into your VPS through yourip:11000

#/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 11000 -j DNAT --to 10.0.0.1:22
#/sbin/iptables -t nat -A PREROUTING -i eth0 -p udp --dport 11001:11019 -j DNAT --to 10.0.0.1
#/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 11001:11019 -j DNAT --to 10.0.0.1

# In my case I also dropped outgoing SMTP traffic, as it's one of the most abused things on servers

#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 25
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 2525
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 587
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 465
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 2526
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 110
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 143
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 993

;;

*) echo "Usage: /etc/init.d/vz-routing {start}"
exit 2
;;

esac
exit 0

Setup permissions, set to run on boot, and run it:

chmod 755 /etc/init.d/vz-routing
update-rc.d vz-routing defaults
/etc/init.d/vz-routing start

That should be it! Navigate your browser to the control panel, log in with your root SSH credentials, and you're ready to go:

https://x.x.x.x:8006

Rackspace Cloud API – Enable SSL Termination on existing cloud load balancer

The purpose of this post is to show how you can enable SSL termination on an existing cloud load balancer through the API. Using the API will allow you to script deployments so you can avoid having to use the control panel. This also provides you the building blocks for understanding deployment automation.

This guide will show you how to enable SSL termination on an existing cloud load balancer as described in my previous post: Rackspace Cloud API – Create cloud load balancers

Feel free to review http://docs.rackspace.com for learning about all the possible operations that can be done through the API.

In this example, we are going to enable SSL termination on an existing cloud load balancer called lb.example.com. The domain we are load balancing is www.example.com. Please note that enabling SSL termination on a Rackspace cloud load balancer costs more than a regular cloud load balancer!

SPECIAL NOTE: I advise against using SSL termination if you are passing any PII (personally identifiable information) or other sensitive data through the cloud load balancer to the Cloud Server. The transmission will only be encrypted from the client's browser to the load balancer. From there, the cloud load balancer will send the request in clear text through the Rackspace network to your cloud server.

When working with the API, I like to use a tool called httpie to simplify things a bit. You can install this by:

yum install httpie

Now that we have httpie installed, let's get an auth token from the API:

echo '{"auth": {"RAX-KSKEY:apiKeyCredentials": {"username": "YOUR_USERNAME","apiKey":"YOUR_API_KEY"}}}' | http post https://identity.api.rackspacecloud.com/v2.0/tokens

The token you need will be listed in the “id” field, as shown below:

        "token": {
            "expires": "2013-07-09T23:17:08.634-05:00", 
            "id": "2334aasdf5555j3hfhd22245dhsr", 
            "tenant": {
                "id": "123456", 
                "name": "123456"

To simplify things moving forward, we will set some local variables that we’ll use when communicating with the API:

export token="YOUR_API_TOKEN_RECEIVED_ABOVE"
export account="YOUR_RACKSPACE_CLOUD_ACCOUNT_NUMBER"
export lb_endpoint="https://ord.loadbalancers.api.rackspacecloud.com/v1.0"

NOTE: Change the endpoint's region accordingly (ord or dfw).

For the purposes of this guide, we will be creating a self-signed SSL certificate. In production environments, you will want to purchase an SSL certificate through a CA.

We will create our self-signed SSL certificate using the openssl command:

openssl req -x509 -nodes -newkey rsa:2048 -keyout example.com.tmp.key -out example.com.cert -days 1825
Generating a 2048 bit RSA private key
........+++
.....................................+++
writing new private key to 'example.com.tmp.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:California
Locality Name (eg, city) []: Los Angeles
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company LLC
Organizational Unit Name (eg, section) []:IT
Common Name (e.g. server FQDN or YOUR name) []:*.example.com
Email Address []:

Now we must convert the key so the API will be able to use it:

openssl rsa -in example.com.tmp.key -out example.com.key
rm -f example.com.tmp.key

Format the key and cert so they are more friendly for the API. Be sure to save the output, as we will need it for the next step:

while read line; do echo -n "$line\n"; done < example.com.key
while read line; do echo -n "$line\n"; done < example.com.cert
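
Each command collapses the file onto a single line with literal \n separators, in the form:

-----BEGIN RSA PRIVATE KEY-----\nMIICW...\n-----END RSA PRIVATE KEY-----\n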

Now, let's set up the JSON file that contains our certificate information:

cat << EOF > example.com.cert.json
{
   "certificate":"-----BEGIN CERTIFICATE-----\nMIIblahblahblahblah\n-----END CERTIFICATE-----",
   "enabled":true,
   "secureTrafficOnly":false,
   "privatekey":"-----BEGIN RSA PRIVATE KEY-----\nMIICWblahblahblahblah\n-----END RSA PRIVATE KEY-----",
   "intermediateCertificate":"",
   "securePort":443}
EOF

Finally, let's submit the JSON to enable SSL on your pre-existing cloud load balancer:

http put $lb_endpoint/$account/loadbalancers/YOUR_CLOUD_LOAD_BALANCER_ID/ssltermination X-Auth-Token:$token @example.com.cert.json

SSL termination on your existing load balancer has now been enabled.
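
To confirm the new configuration, you can query the same resource:

http get $lb_endpoint/$account/loadbalancers/YOUR_CLOUD_LOAD_BALANCER_ID/ssltermination X-Auth-Token:$token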

Rackspace Cloud API – Create cloud load balancers

The purpose of this post is to show how you can build Rackspace cloud load balancers using the API. Building via the API will allow you to script deployments so you can avoid having to use the control panel. This also provides you the building blocks for understanding deployment automation.

This guide will only show you how to create a cloud load balancer. Feel free to review http://docs.rackspace.com for learning about all the possible operations that can be done through the API.

In this example, we are going to deploy a single cloud load balancer called lb.example.com that will be directing HTTP traffic to two web servers:

test01.example.com
test02.example.com

When working with the API, I like to use a tool called httpie to simplify things a bit. You can install this by:

yum install httpie

Now that we have httpie installed, let's get an auth token from the API:

echo '{"auth": {"RAX-KSKEY:apiKeyCredentials": {"username": "YOUR_USERNAME","apiKey":"YOUR_API_KEY"}}}' | http post https://identity.api.rackspacecloud.com/v2.0/tokens

The token you need will be listed in the “id” field, as shown below:

        "token": {
            "expires": "2013-07-09T23:17:08.634-05:00", 
            "id": "2334aasdf5555j3hfhd22245dhsr", 
            "tenant": {
                "id": "123456", 
                "name": "123456"

To simplify things moving forward, we will set some local variables that we’ll use when communicating with the API:

export token="YOUR_API_TOKEN_RECEIVED_ABOVE"
export account="YOUR_RACKSPACE_CLOUD_ACCOUNT_NUMBER"
export lb_endpoint="https://ord.loadbalancers.api.rackspacecloud.com/v1.0"

NOTE: Change the endpoint's region accordingly (ord or dfw).

Let's prep a JSON file that we'll use to build the cloud load balancer:

cat << EOF > lb.example.com.json
{
    "loadBalancer": {
        "name": "lb.example.com",
        "port": 80,
        "protocol": "HTTP",
        "virtualIps": [
            {
                "type": "PUBLIC"
            }
         ],
        "nodes": [
            {
                "address": "10.123.123.121",
                "port": 80,
                "condition": "ENABLED"
            }
        ]
    }
}
EOF

Now we execute the build:

http post $lb_endpoint/$account/loadbalancers X-Auth-Token:$token @lb.example.com.json

This will return the VIP and id of the new load balancer as shown below:

                "address": "123.123.123.123", 
        "id": 123456, 
        "name": "lb.example.com", 

For the purposes of this guide, I initially left out adding the second server. So here is how we can add it to the load balancer so it will route traffic between test01.example.com and test02.example.com. First, create a JSON file that contains test02.example.com:

cat << EOF > nodes.json
{
    "nodes": [
        {
            "address": "10.123.123.123",
            "port": 80,
            "condition": "ENABLED",
            "type": "PRIMARY"
        }
    ]
}
EOF

Now add this node to the load balancer:

http post $lb_endpoint/$account/loadbalancers/YOUR_LOAD_BALANCER_ID/nodes X-Auth-Token:$token @nodes.json

And you're done! You can verify everything looks correct by running:

http get $lb_endpoint/$account/loadbalancers/YOUR_LOAD_BALANCER_ID X-Auth-Token:$token

Rackspace Cloud API – Create NextGen cloud servers

The purpose of this post is to show how you can build Rackspace Next Generation cloud servers using the API. Building via the API will allow you to script server builds so you can avoid having to use the control panel. This also provides you the building blocks for understanding deployment automation.

This guide will only show you how to create a cloud server. Feel free to review http://docs.rackspace.com for learning about all the possible operations that can be done through the API.

In this example, we are going to build two 512M CentOS 6.4 Cloud Servers via the Rackspace Cloud API. The servers will be named:

test01.example.com
test02.example.com

When working with the API, I like to use a tool called httpie to simplify things a bit. You can install this by:

yum install httpie

Now that we have httpie installed, let's get an auth token from the API:

echo '{"auth": {"RAX-KSKEY:apiKeyCredentials": {"username": "YOUR_USERNAME","apiKey":"YOUR_API_KEY"}}}' | http post https://identity.api.rackspacecloud.com/v2.0/tokens

The token you need will be listed in the “id” field, as shown below:

        "token": {
            "expires": "2013-07-09T23:17:08.634-05:00", 
            "id": "2334aasdf5555j3hfhd22245dhsr", 
            "tenant": {
                "id": "123456", 
                "name": "123456"

To simplify things moving forward, we will set some local variables that we’ll use when communicating with the API:

export token="YOUR_API_TOKEN_RECEIVED_ABOVE"
export account="YOUR_RACKSPACE_CLOUD_ACCOUNT_NUMBER"
export endpoint="https://ord.servers.api.rackspacecloud.com/v2"

NOTE: Change the endpoint's region accordingly (ord or dfw).

Now, let's see what images are available. I'm looking for a CentOS 6.4 image:

http get $endpoint/$account/images/detail X-Auth-Token:$token

The id of the CentOS 6.4 image in this case is:

            "id": "e0ed4adb-3a00-433e-a0ac-a51f1bc1ea3d", 

We wanted a 512M server, so we must find that flavor's id:

http get $endpoint/$account/flavors X-Auth-Token:$token

This shows that the 512M flavor has the id of:

            "id": "2", 

All the information has been collected. Time to prep two JSON files that we'll use to build the two servers:

cat << EOF > test01.example.com.json
{
    "server" : {
        "name" : "test01.example.com",
        "imageRef" : "e0ed4adb-3a00-433e-a0ac-a51f1bc1ea3d",
        "flavorRef" : "2"
    }
}
EOF

cat << EOF > test02.example.com.json
{
    "server" : {
        "name" : "test02.example.com",
        "imageRef" : "e0ed4adb-3a00-433e-a0ac-a51f1bc1ea3d",
        "flavorRef" : "2"
    }
}
EOF

Finally, we have everything we need to begin the builds. Execute them by running:

http post $endpoint/$account/servers @test01.example.com.json X-Auth-Token:$token
http post $endpoint/$account/servers @test02.example.com.json X-Auth-Token:$token

When you run each POST statement above, two fields will be returned by the API. Be sure to record these somewhere:
– adminPass: This is your server's root password.
– id: This is your server's id number, which will be referenced next.

A new server is not useful without knowing its IP address. After a few minutes pass, you can retrieve the IP address by running:

http get $endpoint/$account/servers/YOUR_SERVER_ID X-Auth-Token:$token
http get $endpoint/$account/servers/YOUR_OTHER_SERVER_ID X-Auth-Token:$token

The two relevant fields you will need for this example are shown below:

        "accessIPv4": "123.123.123.123", 
                    "addr": "10.123.123.123", 

Now you can SSH into your server using the IP and admin password that have been returned by the API.
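
For example, using the accessIPv4 address returned above:

ssh root@123.123.123.123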