How to set up Proxmox VE 5 with LXC containers on the Rackspace Cloud

Testing out changes in a production environment is never a good idea. However, prepping test servers can be tedious, as you have to find the hardware and set up the operating system before you can begin. So I wanted a faster and more cost-effective medium: turning a single Cloud Server into a virtualized host server for my test servers. Welcome LXC.

Taken from the provider's site: LXC is an operating system-level virtualization technology for Linux. It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs). LXC is similar to Solaris Containers, FreeBSD jails, and OpenVZ.

To manage my LXC containers, I prefer to use Proxmox VE 5, which provides a clean control panel for them.

This guide will document how to install Proxmox on a 4G Rackspace Cloud Server running Debian 9. A 50G SSD Cloud Block Storage volume attached to the server, formatted with ZFS, will store the containers, as outlined further below. The Proxmox installation will install everything needed to run LXC. The IPs for the containers will be provided via NAT served from the server, creating a self-contained test environment.

Configure the system for LXC according to best practices

Increase the open files limit by appending the following to the bottom of /etc/security/limits.conf:

[root@proxmox01 ~]# vim /etc/security/limits.conf
...
*       soft    nofile  1048576
*       hard    nofile  1048576
root    soft    nofile  1048576
root    hard    nofile  1048576
*       soft    memlock 1048576
*       hard    memlock 1048576

Now set up some basic kernel tweaks at the bottom of /etc/sysctl.conf:

[root@proxmox01 ~]# vim /etc/sysctl.conf
...
# LXD best practices:  https://github.com/lxc/lxd/blob/master/doc/production-setup.md
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
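
To load these values without a reboot, re-read the file and spot-check one of the keys; sysctl will print the active value:

[root@proxmox01 ~]# sysctl -p
[root@proxmox01 ~]# sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 1048576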

Install Proxmox VE 5

For this to work, we start with a vanilla Debian 9 Cloud Server and install Proxmox on top of it, which will pull in the required kernel.

To get things started, update /etc/hosts to set up your FQDN, and remove any resolvable IPv6 domains:

[root@proxmox01 ~]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
123.123.123.123 proxmox01.yourdomain.com proxmox01-iad

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Test to confirm /etc/hosts is set up properly. This should return your server's IP address:

[root@proxmox01 ~]# hostname --ip-address
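
With the example /etc/hosts entry above, the command would simply print the placeholder public address:

123.123.123.123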

Add the Proxmox VE repo and add the repo key:

[root@proxmox01 ~]# echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
[root@proxmox01 ~]# wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg

Update the package index and then update the system for Proxmox:

[root@proxmox01 ~]# apt update && apt dist-upgrade
* Select the option 'Install the package maintainer's version' when asked about grub

Install Proxmox VE and reboot:

[root@proxmox01 ~]# apt install proxmox-ve postfix open-iscsi
[root@proxmox01 ~]# reboot

Once the cloud server comes back online, confirm you are running the pve kernel:

[root@proxmox01 ~]# uname -a
Linux proxmox 4.13.4-1-pve #1 SMP PVE 4.13.4-25 (Fri, 13 Oct 2017 08:59:53 +0200) x86_64 GNU/Linux
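
If you want a second check, pveversion reports the installed Proxmox release along with the running kernel. The exact version string will vary by release, but it should reference a pve kernel similar to the one above:

[root@proxmox01 ~]# pveversion
pve-manager/5.x-xx/xxxxxxxx (running kernel: 4.13.4-1-pve)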

Set up NAT for the containers

As the Rackspace Cloud server comes with 1 IP address, I will be making use of NAT’ed IP addresses to assign to my individual containers. The steps are documented below:

Update /etc/sysctl.conf to allow ip_forwarding:

[root@proxmox01 ~]# vim /etc/sysctl.conf
...
net.ipv4.ip_forward = 1
...

Then apply the new settings without a reboot:

[root@proxmox01 ~]# sysctl -p
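
You can confirm forwarding is now active by querying the key directly:

[root@proxmox01 ~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1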

To set up the NAT rules, we need a script that will start on boot. Two things need to be taken into consideration here:

1. Change IP address below (123.123.123.123) in the NAT rule to your Cloud server’s public IP address.
2. This assumes you want to use a 192.168.1.0/24 network for your VE’s.

The quick and dirty script is below:

[root@proxmox01 ~]# vim /etc/init.d/lxc-routing
#!/bin/sh
case "$1" in
 start) echo "lxc-routing started"
# It's important that you change the SNAT IP to the one of your server (not the local but the internet IP)
# The following line adds a route to the IP-range that we will later assign to the VPS. That's how you get internet access on your VPS.
/sbin/iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j SNAT --to 123.123.123.123

# Allow servers to have access to internet:
/sbin/iptables -A FORWARD -s 192.168.1.0/24 -j ACCEPT
/sbin/iptables -A FORWARD -d 192.168.1.0/24 -j ACCEPT
# Be sure to add net.ipv4.ip_forward=1 to /etc/sysctl.conf, then run sysctl -p

# These are the rules for any port forwarding you want to do
# In this example, all traffic to and from the ports 11001-11019 gets routed to/from the VPS with the IP 192.168.1.1.
# Also the port 11000 is routed to the SSH port of the vps, later on you can ssh into your VPS through yourip:11000

#/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 11000 -j DNAT --to 192.168.1.1:22
#/sbin/iptables -t nat -A PREROUTING -i eth0 -p udp --dport 11001:11019 -j DNAT --to 192.168.1.1
#/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 11001:11019 -j DNAT --to 192.168.1.1

# In my case I also dropped outgoing SMTP traffic, as it's one of the most abused things on servers

#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 25
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 2525
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 587
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 465
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 2526
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 110
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 143
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 993

;;

*) echo "Usage: /etc/init.d/lxc-routing {start}"
exit 2
;;

esac
exit 0

Setup permissions, set to run on boot, and run it:

[root@proxmox01 ~]# chmod 755 /etc/init.d/lxc-routing
[root@proxmox01 ~]# update-rc.d lxc-routing defaults
[root@proxmox01 ~]# /etc/init.d/lxc-routing start
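
To confirm the rules actually loaded, list the NAT and filter chains; you should see the SNAT rule for 192.168.1.0/24 and the two FORWARD rules from the script:

[root@proxmox01 ~]# iptables -t nat -L POSTROUTING -n -v
[root@proxmox01 ~]# iptables -L FORWARD -n -v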

When you go to start a new container, it may fail to start, with Proxmox logging an error similar to the one below:

-- Unit pve-container@100.service has begun starting up.
Nov 06 06:07:07 proxmox01.*********** systemd-udevd[11150]: Could not generate persistent MAC address for vethMVIWQY: No such file or directory
Nov 06 06:07:07 proxmox01.*********** kernel: IPv6: ADDRCONF(NETDEV_UP): veth100i0: link is not ready

This can be corrected by creating the following systemd link file, which disables persistent MAC address generation for the veth interfaces:

[root@proxmox01 ~]# vim /etc/systemd/network/99-default.link
[Link]
NamePolicy=kernel database onboard slot path
MACAddressPolicy=none

Then reboot:

[root@proxmox01 ~]# reboot

Navigate your browser to the control panel, log in with your root SSH credentials, and set up a Linux Bridge (the resulting interface config is shown after the list):

- Navigate your browser to: https://x.x.x.x:8006
- Click on System --> Network
- On top, click 'Create' --> 'Linux Bridge'
	- Name:  vmbr0
	- IP address:  192.168.1.1
	- Subnet mask: 255.255.255.0
	- Autostart:  checked
	- Leave everything else blank
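
For reference, once the bridge is created, the stanza Proxmox writes to /etc/network/interfaces should look roughly like the following (a sketch based on the values entered above; exact formatting may differ slightly):

auto vmbr0
iface vmbr0 inet static
        address  192.168.1.1
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0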

Set up the 50G SSD Cloud Block Storage volume with ZFS and add it to Proxmox. Assuming the volume is already attached, check which device it was mapped to:

[root@proxmox01 ~]# lsblk 
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  80G  0 disk 
└─xvda1 202:1    0  80G  0 part /
xvdb    202:16   0  50G  0 disk   <--- This is my new volume

First, install the ZFS utils for Linux, and enable the kernel module:

[root@proxmox01 ~]# apt-get install zfsutils-linux
[root@proxmox01 ~]# /sbin/modprobe zfs

Then create a zpool on the new volume:

[root@proxmox01 ~]# zpool create zfs /dev/xvdb 
[root@proxmox01 ~]# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs   49.8G  97.5K  49.7G         -     0%     0%  1.00x  ONLINE  -
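
You can also verify the pool before handing it over to Proxmox: zpool status should report it as ONLINE, and zfs list should show a dataset named zfs mounted at /zfs:

[root@proxmox01 ~]# zpool status zfs
[root@proxmox01 ~]# zfs list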

Now add the new ZFS pool as storage in Proxmox (the resulting storage.cfg entry is shown after the list):

- Navigate your browser to: https://x.x.x.x:8006
- Click on Datacenter --> Storage
- On top, click 'Add' --> 'ZFS'
	- Name:  zfs
	- ZFS Pool:  zfs
	- Enable:  Checked
	- Thin provision:  Checked
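
For reference, this adds an entry to /etc/pve/storage.cfg on the host that should look roughly like the following (a sketch; the content line depends on what you select):

[root@proxmox01 ~]# cat /etc/pve/storage.cfg
...
zfspool: zfs
        pool zfs
        content rootdir,images
        sparse 1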

Add Docker support to the containers

Docker can successfully run within an LXC container with some additional configuration. However, as the Proxmox kernel may be older, the latest versions of Docker may fail to work properly. The versions of Docker shipped in the OS repos seem to work, though.

First, create the containers as desired for Docker via Proxmox, then add the following to the bottom of the container's LXC config file:

[root@proxmox01 ~]# vim /etc/pve/lxc/100.conf
...
#insert docker part below
lxc.aa_profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:

After restarting that container, you will be able to install and configure Docker as normal inside it.
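
As a quick sanity check (assuming 100 is the container you just edited and Docker has already been installed inside it), restart the container from the host, attach to it, and run a throwaway image:

[root@proxmox01 ~]# pct stop 100 && pct start 100
[root@proxmox01 ~]# pct enter 100
root@CT100:~# docker run --rm hello-world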

Add NFS support to the containers

NFS can successfully be used within an LXC container with some additional configuration.

First, create an apparmor profile for NFS:

vim /etc/apparmor.d/lxc/lxc-default-with-nfs
# Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # allow NFS (nfs/nfs4) mounts.
  mount fstype=nfs*,
}

Then reload the LXC apparmor profiles:

apparmor_parser -r /etc/apparmor.d/lxc-containers

Finally, tell the container to explicitly use the new apparmor profile by adding the following to the bottom of the container's LXC config file:

[root@proxmox01 ~]# vim /etc/pve/lxc/100.conf
...
#insert near bottom
lxc.apparmor.profile: lxc-container-default-with-nfs

After restarting that container, you will be able to install and configure NFS as normal inside it.
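
As a quick check (assuming 100 is the container using the new profile and 192.168.1.50:/exports/data is a hypothetical NFS export reachable from it), restart the container and try a mount from inside:

[root@proxmox01 ~]# pct stop 100 && pct start 100
[root@proxmox01 ~]# pct enter 100
root@CT100:~# apt-get install nfs-common    # or 'yum install nfs-utils' on a CentOS container
root@CT100:~# mount -t nfs 192.168.1.50:/exports/data /mnt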

Docker quick start guide

I avoided the whole Docker thing for some time, mainly because I didn't understand why the world needed another kernel-level virtualization product. I have been using OpenVZ and FreeBSD jails for years, and just never saw the need to add another product to the mix.

But after spending some time with it, I came to realize that Docker is essentially little more than a wrapper for using LXC-style containers on Linux. So if you're already familiar with OpenVZ, LXC, FreeBSD jails, or Solaris Zones, then Docker is actually really simple to pick up.

I am not going to go into the details of what container virtualization is and the pros and cons since there are literally hundreds of good sites online that dive into that.

Instead, since many people learn what questions to ask by actually doing it, I'll post a basic example here so you can see it in action and get started quickly with Docker on CentOS 7.

Our end goal for this quick start guide is simply to illustrate how you can use Docker on a single CentOS 7 server to spin up multiple containers for a test environment. The Docker host can be a VM, cloud server, vagrant image, or a dedicated server.

So to get started, first configure the basics on your CentOS 7 server, such as setting the hostname, updating the system, installing NTP and sysstat, and rebooting so you're running off the latest kernel:

[root@localhost ~]# hostnamectl set-hostname docker01.example.com
[root@docker01 ~]# yum -y update
[root@docker01 ~]# yum install ntp sysstat
[root@docker01 ~]# chkconfig ntpd on
[root@docker01 ~]# service ntpd start
[root@docker01 ~]# reboot

Now setup the official Docker repo on your server:

[root@docker01 ~]# vim /etc/yum.repos.d/docker.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

Then install Docker:

[root@docker01 ~]# yum install docker-engine
[root@docker01 ~]# systemctl enable docker
[root@docker01 ~]# systemctl start docker
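
Before moving on, confirm the daemon came up cleanly; both commands should return without errors:

[root@docker01 ~]# docker version
[root@docker01 ~]# systemctl status docker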

Since there is really no reason to run Docker commands as root, we are going to set up an unprivileged user to manage Docker:

[root@docker01 ~]# useradd dockeradmin
[root@docker01 ~]# usermod -aG docker dockeradmin

Now log in as user dockeradmin, and confirm Docker is working:

[root@docker01 ~]# su - dockeradmin
[dockeradmin@docker01 ~]# docker ps
[dockeradmin@docker01 ~]# docker images

If no errors are returned, then Docker is working! So let's pull down some OS images I use often. This is not a requirement, but it just makes deploying new containers a bit faster:

[dockeradmin@docker01 ~]# docker pull centos:centos6
[dockeradmin@docker01 ~]# docker pull centos:centos7
[dockeradmin@docker01 ~]# docker pull ubuntu:precise
[dockeradmin@docker01 ~]# docker pull ubuntu:trusty
[dockeradmin@docker01 ~]# docker pull ubuntu:xenial

Let's spin up our first container:

[dockeradmin@docker01 ~]# docker run --name centos6-test01 -id --restart unless-stopped centos:centos6 /bin/bash

Once it's running, you can get a console by running:

[dockeradmin@docker01 ~]# docker exec -it centos6-test01 /bin/bash

From there, you will be able to configure and use the container as you see fit!

Some other basic commands you will need to know for how to interact with Docker are below.

To see your running containers:

[dockeradmin@docker01 ~]# docker ps

To see both your running and stopped containers:

[dockeradmin@docker01 ~]# docker ps -a

To stop a running container:

[dockeradmin@docker01 ~]# docker stop your_container_name

To start up a stopped container:

[dockeradmin@docker01 ~]# docker start your_container_name

To create an image of a running container that you can use to deploy new containers from:

[dockeradmin@docker01 ~]# docker commit -m "your_commit_message" -a dockeradmin your_container_name dockeradmin/your_new_image_name:v1
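
For example, once that image is committed, you can deploy a fresh container from it the same way as before (centos6-test02 is just a hypothetical name):

[dockeradmin@docker01 ~]# docker run --name centos6-test02 -id --restart unless-stopped dockeradmin/your_new_image_name:v1 /bin/bash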

To see what images you have setup:

[dockeradmin@docker01 ~]# docker images

To remove an image:

[dockeradmin@docker01 ~]# docker rmi your_image_id

If you would like to view the stats of your containers:

[dockeradmin@docker01 ~]# docker stats

To delete a container:

[dockeradmin@docker01 ~]# docker stop your_container_name
[dockeradmin@docker01 ~]# docker rm your_container_name

If you want to stop all running containers and delete them from the system so you can start fresh:

[dockeradmin@docker01 ~]# docker stop `docker ps -a -q`
[dockeradmin@docker01 ~]# docker rm `docker ps -a -q`

This should get you started in the world of Docker. Once you get a feel for it, treating the containers as their own "VMs", you will start to recognize some of the benefits of Docker.

From there, you can start diving deeper; instead of using containers as "VMs", you can start thinking about how to break up your application into individual containers to gain a finer degree of control and portability over your application.

Vagrant Introduction

What is Vagrant? Taken directly from the vendor's website:

Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team.

For more details about how Vagrant can be beneficial, I strongly encourage you to read the vendor's website at:
https://www.vagrantup.com/docs/why-vagrant

Vagrant can be somewhat difficult to understand without seeing it in action. But in summary: building test servers can take a long time. With Vagrant, you can bring up your test servers with a few simple commands and begin performing the tests you wanted to get done without the wait. The environments are portable, so it's very easy to share them with colleagues.

All Vagrant does is communicate with a provider of your choice, such as VirtualBox, AWS, or Rackspace Cloud, and spin up the boxes using that provider's API.

Installation

Installing Vagrant with VirtualBox is simple and easy on most desktop OSes, including Windows, Mac OS X, and Linux. Simply use the links below to download and install both VirtualBox and Vagrant on your desktop:

Known working versions
https://www.virtualbox.org/wiki/Download_Old_Builds_5_0
https://releases.hashicorp.com/vagrant/1.8.4/ **

** Vagrant v1.8.5 currently has issues with CentOS. Therefore, use v1.8.4 for now. Once v1.8.6 comes out, the bug should be addressed.

Once you have Vagrant and VirtualBox installed on your desktop, you are ready to begin deploying your test servers.

Getting familiar with Vagrant

Some quick terms you need to know:
Vagrantfile: This is the main configuration file for your test server.
Provider: Which API are we using? AWS, Rackspace Cloud, VirtualBox, etc.
Vagrant Boxes: This is the Vagrant base image. Think of it as your golden template that is used to deploy your test servers.

To manage your individual environments or test servers, you simply create the directory on your desktop, and tell Vagrant to deploy the OS to that individual directory.

Some examples of the test environments on my workstation are:

/home/user/vagrant/centos6
/home/user/vagrant/ubuntu1404
/home/user/vagrant/centos6-lamp-environment

Most of the common distros used today have official boxes available. I included puppetlabs as they maintain up-to-date base images as well as boxes with Puppet Enterprise already installed (a few box management commands follow the links):
https://atlas.hashicorp.com/centos
https://atlas.hashicorp.com/ubuntu
https://atlas.hashicorp.com/debian
https://atlas.hashicorp.com/freebsd
https://atlas.hashicorp.com/puppetlabs
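
Boxes can also be pulled down and managed ahead of time with the box subcommand, for example:

[user@workstation ~]# vagrant box add centos/6
[user@workstation ~]# vagrant box list
[user@workstation ~]# vagrant box update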

Quick start to see Vagrant in action

This is just a quick way to see Vagrant in action. It is important to remember that all the vagrant commands apply to whatever directory you are in on your desktop!

First, create the directory on your desktop for your test server:

[user@workstation ~]# mkdir -p vagrant/centos6-test
[user@workstation ~]# cd vagrant/centos6-test

Now initialize a Vagrantfile, which tells Vagrant what image to use, and how to configure it:

[user@workstation ~]# vagrant init centos/6

Then startup the test server:

[user@workstation ~]# vagrant up

And that's it! You can now log into your test server and get to work by running:

[user@workstation ~]# vagrant ssh

Once you're done with your testing, you can remove the test server by running:

[user@workstation ~]# vagrant destroy

Common commands

These commands are to be run from inside the directory of your project.

To check the status of a single test server:

[user@workstation ~]# vagrant status

If you made changes to your Vagrantfile and need to reload your settings:

[user@workstation ~]# vagrant reload

If you want to shut down a test server, but you don't want to destroy it:

[user@workstation ~]# vagrant halt

To check the status of all your test servers:

[user@workstation ~]# vagrant global-status

How to customize your test server

What makes Vagrant powerful is the Vagrantfile that resides in each directory for your test servers. This file allows you to specify IPs, set users and passwords, run bootstrap scripts, or even spin up multiple servers from one file. This greatly speeds up provisioning time so you can focus on what you truly needed to test.

The examples below are full Vagrantfiles you can copy into your project's directory to spin up test servers.

Here is an example for spinning up an Ubuntu 12.04 server with Apache already installed:

[user@workstation ~]# mkdir vagrant/ubuntu1204-test01
[user@workstation ~]# cd vagrant/ubuntu1204-test01
[user@workstation ~]# vim Vagrantfile
  config.vm.box = "hashicorp/precise32"
  config.vm.provision "shell", inline: <<-SHELL
    sudo apt-get update
    sudo apt-get install -y apache2
    SHELL
end

[user@workstation ~]# vagrant up
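
To confirm Apache actually came up, you can run a one-off command on the guest without opening a full shell (vagrant ssh -c runs a single command and returns):

[user@workstation ~]# vagrant ssh -c "service apache2 status"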

Below is an example for spinning up a CentOS 6 server with the private IP address of 172.16.0.2, accessible only to your workstation:

[user@workstation ~]# mkdir vagrant/centos6-test02
[user@workstation ~]# cd vagrant/centos6-test02
[user@workstation ~]# vim Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "centos/6"
  config.vm.network "private_network", ip: "172.16.0.2"
end

[user@workstation ~]# vagrant up

Below is an example for spinning up a CentOS 6 server with the public IP address of 123.123.123.123, accessible by anyone:

[user@workstation ~]# mkdir vagrant/centos6-test03
[user@workstation ~]# cd vagrant/centos6-test03
[user@workstation ~]# vim Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "centos/6"
  config.vm.network "public_network", ip: "123.123.123.123"
end

[user@workstation ~]# vagrant up

Below is an example for spinning up a CentOS 6 server with Puppet preinstalled, with the private IP address of 10.1.0.3, and adding a bootstrap.sh file that we'll use to automate our build process to install and configure Nginx:

[user@workstation ~]# mkdir vagrant/centos6-puppet-server01
[user@workstation ~]# cd vagrant/centos6-puppet-server01
[user@workstation ~]# vim Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "puppetlabs/centos-6.6-64-puppet"
  config.vm.network "private_network", ip: "10.1.0.3"
  config.vm.network "forwarded_port", guest: 80, host: 80
  config.vm.provision :shell, path: "bootstrap.sh"
end

[user@workstation ~]# vim bootstrap.sh
#!/usr/bin/env bash

# Sanity checks
if [ "$(whoami)" != "root" ]; then
        echo "This script must be run as the user:  root"
        exit 1
fi

# CentOS specific tasks:
if [ -f /etc/redhat-release ]; then
	selinux=`getenforce`
	if [ "$selinux" = "Enforcing" ]; then
		setenforce 0
	fi

	if grep -q "Linux release 7" /etc/redhat-release; then
		yum -y install iptables-services
	fi
fi

# Install required Puppet modules from the forge
puppet module install puppetlabs-stdlib
puppet module install jfryman-nginx
puppet module install puppetlabs-firewall

# General default.pp for Puppet
cat << EOF > /root/default.pp

class { 'nginx': }

nginx::resource::vhost { 'example.com':
  www_root    => '/var/www/example.com',
}

package { 'git':
  ensure => present,
}

firewall { '000 accept all icmp':
  proto  => 'icmp',
  action => 'accept',
}

firewall { '100 allow SSH':
  port   => [22],
  proto  => tcp,
  action => accept,
}

firewall { '101 allow http over port 80':
  port   => [80],
  proto  => tcp,
  action => accept,
}

EOF

# Execute first run of default.pp
puppet apply /root/default.pp

[user@workstation ~]# vagrant up

Below is an example for spinning up two CentOS 6 servers with private IP addresses, accessible only to your workstation, both with 1G of memory, installing MySQL on the db server and Apache/PHP on the web server:

Vagrant.configure(2) do |config|
  config.vm.define "db01" do |db01|
    db01.vm.box = "centos/6"
    db01.vm.hostname = "db01.example.com"
    db01.vm.network "private_network", ip: "192.168.33.100"
    db01.vm.provision "shell", inline: <<-SHELL
        sudo yum -y update
        sudo yum install -y mysql
    SHELL
    db01.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.memory = "1024"
    end
  end
  config.vm.define "web01" do |web01|
    web01.vm.box = "centos/6"
    web01.vm.hostname = "web01.example.com"
    web01.vm.network "private_network", ip: "192.168.33.101"
    web01.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.memory = "1024"
    end
    web01.vm.provision "shell", inline: <<-SHELL
      sudo yum -y update
      sudo yum install -y httpd php
    SHELL
  end
end

[user@workstation ~]# vagrant up
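
With multiple machines defined in one Vagrantfile, most Vagrant commands accept the machine name, so each server can be managed individually:

[user@workstation ~]# vagrant up db01
[user@workstation ~]# vagrant ssh web01
[user@workstation ~]# vagrant status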

How to set up OpenVZ on the Rackspace Cloud

Testing out changes in a production environment is never a good idea. However, prepping test servers can be tedious, as you have to find the hardware and set up the operating system before you can begin. So I wanted a faster and more cost-effective medium: turning a single Cloud Server into a virtualized host server for my test servers. Welcome OpenVZ.

Taken from the provider's site: OpenVZ (Open Virtuozzo) is an operating system-level virtualization technology for Linux. It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs). OpenVZ is similar to Solaris Containers and LXC.

To manage my OpenVZ containers, I prefer to use Proxmox, which provides a clean control panel for them.

This guide will document how to install Proxmox on a 2G Rackspace Cloud Server running Debian 7. The Proxmox installation will install everything needed to run OpenVZ.

Install Proxmox

For this to work, we start with a vanilla Debian 7 Cloud Server and install Proxmox on top of it, which will pull in the required kernel.

To get things started, update /etc/hosts to set up your FQDN, and remove any resolvable IPv6 domains:

[root@proxmox ~]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.6.177 proxmox.yourdomain.com proxmox pvelocalhost

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Now back up /etc/apt/sources.list, and create a fresh one to use Proxmox's repos:

mv /etc/apt/sources.list /etc/apt/sources.list.bak
vim /etc/apt/sources.list
[ ADD ]
deb http://ftp.at.debian.org/debian wheezy main contrib

# PVE repository provided by proxmox.com, only for installation (this repo will stay on 3.1)
deb http://download.proxmox.com/debian wheezy pve

# security updates
deb http://security.debian.org/ wheezy/updates main contrib

Now add the Proxmox VE repository key:

wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -

Update the package index and then update the system to install Proxmox:

apt-get update && apt-get dist-upgrade

Install proxmox kernel and headers:

apt-get install pve-firmware pve-kernel-2.6.32-26-pve
apt-get install pve-headers-2.6.32-26-pve

Update grub and reboot into the Proxmox kernel:

vim /etc/default/grub
# From
GRUB_DEFAULT=0
# To
GRUB_DEFAULT=3
...
update-grub2
reboot

Once the cloud server comes back online, confirm you are running the pve kernel

uname -a
Linux proxmox 2.6.32-26-pve #1 SMP Mon Oct 14 08:22:20 CEST 2013 x86_64 GNU/Linux

** If the kernel is a 3.2 kernel, something is wrong and grub booted off the default Debian kernel, not the pve kernel. Go back and confirm all the steps worked properly.

Remove the old Debian Kernel as it is no longer needed:

apt-get remove linux-image-amd64 linux-image-3.2.0-4-amd64 linux-base
update-grub

Install the Proxmox VE packages:

apt-get install proxmox-ve-2.6.32 ntp ssh lvm2 postfix ksm-control-daemon vzprocps open-iscsi bootlogd

Open up the firewall to allow inbound port 8006 from your workstation's IP address:

ufw allow from x.x.x.x to any port 8006 proto tcp

Set up NAT for VEs

As the Rackspace Cloud server comes with 1 IP address, I will be making use of NAT’ed IP addresses to assign to my individual containers. The steps are documented below:

Update /etc/sysctl.conf to allow ip_forwarding:

vim /etc/sysctl.conf
[ ADD ]
net.ipv4.ip_forward=1

Then apply the new setting:

sysctl -p

To setup the NAT rules, we need to setup a script that will start on boot. Below is a script that I found on https://vpsaddicted.com/install-and-configure-proxmox-ve-for-nat-ipv4-vps-on-debian-wheezy/.

Two things need to be taken into consideration here:
1. Change IP address below (123.123.123.123) in the NAT rule to your Cloud server’s public IP address.
2. This assumes you want to use a 10.0.0.0/24 network for your VE’s.

vim /etc/init.d/vz-routing
#!/bin/sh
case "$1" in
 start) echo "vz-routing started"
# It's important that you change the SNAT IP to the one of your server (not the local but the internet IP)
# The following line adds a route to the IP-range that we will later assign to the VPS. That's how you get internet access on your VPS.
/sbin/iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j SNAT --to 123.123.123.123

# Allow servers to have access to internet:
/sbin/iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT
/sbin/iptables -A FORWARD -d 10.0.0.0/24 -j ACCEPT
# Be sure to add net.ipv4.ip_forward=1 to /etc/sysctl.conf, then run sysctl -p

# These are the rules for any port forwarding you want to do
# In this example, all traffic to and from the ports 11001-11019 gets routed to/from the VPS with the IP 10.0.0.1.
# Also the port 11000 is routed to the SSH port of the vps, later on you can ssh into your VPS through yourip:11000

#/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 11000 -j DNAT --to 10.0.0.1:22
#/sbin/iptables -t nat -A PREROUTING -i eth0 -p udp --dport 11001:11019 -j DNAT --to 10.0.0.1
#/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 11001:11019 -j DNAT --to 10.0.0.1

# In my case I also dropped outgoing SMTP traffic, as it's one of the most abused things on servers

#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 25
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 2525
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 587
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 465
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 2526
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 110
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 143
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 993

;;

*) echo "Usage: /etc/init.d/vz-routing {start}"
exit 2
;;

esac
exit 0

Setup permissions, set to run on boot, and run it:

chmod 755 /etc/init.d/vz-routing
update-rc.d vz-routing defaults
/etc/init.d/vz-routing start

That should be it! Navigate your browser to the control panel, log in with your root SSH credentials, and you're ready to go:

https://x.x.x.x:8006