RHCSA Study Guide – Objective 5 : Users

############################
Everything below is the raw notes I took while attending an unofficial RHCSA training session.  I am posting them here in hopes they will assist others who may be preparing to take this exam.

My notes are my own interpretation of the lectures, and are certainly not a replacement for classroom training, either through your company or through the official RHCSA classes offered by Red Hat.  If you are new to the Red Hat world, I strongly suggest looking into their training courses.
############################

Users and Groups

EXAM NOTE: On the test, I will likely have to link this machine to an LDAP or NIS server.

Users and Groups define access to the OS through the file permission scheme. Root is the super user (uid 0). All users are associated with at least one group. Secondary group memberships can exist too.
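A quick way to see these UID/GID mappings for any account is the id command. A tiny sketch (root is always UID 0 on a stock system; the exact group list may vary):

```shell
# Show a user's UID, primary group, and secondary groups.
id root        # e.g. uid=0(root) gid=0(root) groups=0(root)

# Just the numeric UID:
id -u root     # 0
```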

User info is stored in:

/etc/passwd
/etc/shadow 
/etc/group

/etc/passwd has 7 fields

1.  Username
2.  Password placeholder (the actual password hash lives in /etc/shadow, so this field is just an 'x')
3.  Numerical identifier for the account (UID)
4.  Numerical identifier for the primary group (GID)
5.  Comments field (aka the GECOS field)
6.  Home directory
7.  Your shell, or the program that executes when you log in
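Since the fields are colon-separated, they are easy to pull apart with awk. A small sketch against a made-up sample entry (the user 'bart' and its IDs are hypothetical):

```shell
# One sample /etc/passwd entry -- 7 colon-separated fields.
line='bart:x:1001:1002:Bart Simpson:/home/bart:/bin/bash'

# Count the fields:
echo "$line" | awk -F: '{print NF}'                # 7

# Pull out username, UID, GID, and shell:
echo "$line" | awk -F: '{print $1, $3, $4, $7}'    # bart 1001 1002 /bin/bash
```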

/etc/shadow has 2 important fields

- login:encrypted_password: (the rest are password aging fields)
- the aging fields track dates for password resets, locks, etc.

/etc/group

- group name, password, GID, member list
- Group passwords allow temporary membership in a group; they are rarely used and not set up by default.

Management tools:

1. useradd – add a user. The most common options are -g to specify the primary group, and -G to add secondary groups. Example:

[root@web01 ~]# useradd -g clowns -G trouble,simpson bart

2. usermod – Modify a user's settings. It takes pretty much all the same options as useradd. When adding a user to a secondary group, use -a (append) so existing memberships are kept:

[root@web01 ~]# usermod -a -G detention bart

3. userdel – Remove a user from the system. If you give it -r, it'll also remove the user's home directory and mail spool. Example:

userdel -r moe

4. groupadd – Add new group
5. groupmod – Mainly used to rename a group ex. groupmod -n mktg mkg
6. groupdel – Remove a group. Ex. groupdel microsoft
7. passwd – change passwords
a. root can change any user's password
b. can disable accounts, ex. passwd -l mary
c. Set up password aging
d. Time password resets
e. Account disabling (or use chage)

Password aging

You can set max / min lifetimes for a user’s password.
example:

[root@web01 ~]# passwd -x days user

When a user's password has expired, you can set the number of days it can remain expired before the account is disabled completely:

[root@web01 ~]# passwd -i days user
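Side note: the aging fields in /etc/shadow are stored as days since the epoch (1970-01-01). Assuming GNU date, you can convert a raw value (15000 here, picked arbitrarily) back into a calendar date:

```shell
# Field 3 of /etc/shadow is the date of the last password change,
# expressed in days since 1970-01-01.  Convert a sample value back:
date -u -d "1970-01-01 UTC + 15000 days" +%F    # 2011-01-26
```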

User environment files

Files used as defaults when creating accounts:

1.  /etc/skel : default template for a newly added user's home directory
2.  /etc/profile : sets environment variables used by all users
3.  /etc/profile.d : contains scripts specific to certain RPMs
4.  /etc/bashrc : contains global aliases and system settings
5.  ~/.bashrc : contains the user's aliases and functions
6.  ~/.bash_profile : contains user environment settings, and can be set to automatically start programs at login.

Lab

EXAM NOTE: ALL this stuff is on the test.

1.  Create a new group 'dev'.  Create a new user 'alice' as a member of the 'dev' group, with a description of 'Alice from Dev' and a default shell of '/bin/csh'.  Use the password command to set a password for alice, then log in as alice and verify her access.

[root@web01 ~]# groupadd dev
[root@web01 ~]# useradd -G dev -c "Alice from Dev" -s /bin/csh alice
[root@web01 ~]# passwd alice

2.  Set a maximum password lifetime of 4 weeks for the alice account.  Look at the password, shadow, and group files

[root@web01 ~]# passwd -x 28 alice  # 4 weeks = 28 days

3.  Configure the users simon, linus, richard.  Set all their passwords to 'linux'
[root@web01 ~]# groupadd ru
[root@web01 ~]# useradd -G ru simon
[root@web01 ~]# useradd -G ru linus
[root@web01 ~]# useradd -G ru richard
[root@web01 ~]# passwd simon
[root@web01 ~]# passwd linus
[root@web01 ~]# passwd richard

4.  Make these users part of the ru group
See #3

5.  Configure the directory /home/linux so that each user from the ru group can read, create, and modify files:
[root@web01 ~]# mkdir /home/linux
[root@web01 ~]# chown -R root:ru /home/linux
[root@web01 ~]# chmod 775 /home/linux
[root@web01 ~]# chmod g+s /home/linux # setgid: any files created in here will inherit group ru, so the group permissions above keep them accessible to the group.

6.  Configure the directory /home/linux/work so that each user can create and read files, but only the file's owner can delete.
[root@web01 ~]# mkdir /home/linux/work
[root@web01 ~]# chown root:ru /home/linux/work
[root@web01 ~]# chmod 775 /home/linux/work
[root@web01 ~]# chmod +t /home/linux/work # sticky bit: only a file's owner (or root) can delete files in here

7.  Use ACLs to allow alice, who is not in 'ru', access to the work folder.
[root@web01 ~]# setfacl -R -m u:alice:rwx /home/linux/work
[root@web01 ~]# setfacl -m default:u:alice:rwx /home/linux/work # As new objects are created in here, they will inherit the ACLs.

NIS and LDAP

NIS and LDAP Servers can be configured to centrally manage system and account info.

NIS – This is supposed to be a very basic management system.

[root@web01 ~]# yum install rpcbind ypbind
[root@web01 ~]# system-config-authentication  # <-- GUI tool for setting this up.  Does everything for you.
[root@web01 ~]# setup -> authentication configuration

It’ll modify:

/etc/sysconfig/network
/etc/yp.conf
/etc/nsswitch.conf
/etc/pam.d/system-auth

LDAP – Widely used, flexible directory database for storing Mac, UNIX, and Windows account info, ACLs, and a whole lot more.

[root@web01 ~]# yum install nss-pam-ldapd
[root@web01 ~]# system-config-authentication

It’ll modify:

/etc/ldap.conf
/etc/openldap/ldap.conf
/etc/nsswitch.conf
/etc/pam.d/system-auth

EXAM NOTE: You just need to know how to configure the clients. Setting up the servers isn't required for the RHCSA or RHCE.

[root@web01 ~]# vim /etc/auto.nis
* server1:/nis/&   # wildcard map: any directory accessed here is mounted from server1:/nis/<that name>

[root@web01 ~]# man 5 autofs

Side note:

All the kernel documentation that exists is available via:

[root@web01 ~]# yum install kernel-doc
[root@web01 ~]# cd /usr/share/doc/kernel-doc-*/

RHCSA Study Guide – Objective 4 : File Systems


Filesystem Administration

With partitioning, obviously use fdisk. Apparently partprobe is no longer used in RHEL6; that's great, because I never used it before. So you will have to reboot to get the kernel to re-read the new partition table. There is also a GUI-based tool called Disk Utility.

So once the partitioning is done, now create the filesystem:

[root@web01 ~]# mkfs.ext4 /dev/sda2

This can help show you if the disk is dirty:

[root@web01 ~]# tune2fs -l /dev/sda2

File system tools:

e2label : view/set filesystem label
tune2fs : view/set filesystem attributes
mount/umount : Mount and un-mount filesystems

EXAM NOTE: Be sure that anything you do on the filesystem, you also add to your /etc/fstab, because the system will be rebooted before it is graded. You need to ensure everything mounts properly upon reboot.

Lab

1.  Using fdisk, create a new 100MB partition
[root@web01 ~]# fdisk /dev/sda
n
e
default
default
n
default
+100M
w

2.  Create a new fs on this partition using ext4, a blocksize of 1k, and a reserve space of 2%.  Confirm settings with tune2fs.  Mount the new fs as /u01, and set it to mount at boot.
[root@web01 ~]# mkfs.ext4 -b 1024 -m 2 /dev/sda5
[root@web01 ~]# mount -t blah and update fstab accordingly

3.  Unmount the /u01 fs and force an integrity check.  Remount the /u01 filesystem.  Use e2label to set the fs label on /u01 to /u01.
[root@web01 ~]# umount /u01
[root@web01 ~]# fsck -f /dev/sda5  # NOTE:  You have to specify the -f to FORCE the fsck.  It will NOT run just because you asked for it.  
[root@web01 ~]# e2label /dev/sda5 /u01
[root@web01 ~]# mount -a
[root@web01 ~]# blkid # just another way to verify your superblock settings

EXAM NOTE: This may be on test, but it’ll probably be lvm stuff.

Automount (Autofs)

Autofs monitors a certain directory and can automatically mount a file system when a call is made to files in that directory. In RHEL6 it will also unmount the filesystem after it hasn't been touched in 5 minutes.

Its configuration is in:

/etc/auto.master

EXAM NOTE: Will need to know how to tell system which directories to monitor.

/etc/auto.master
path config file
ex.  /misc /etc/auto.misc

This tells automountd to ‘watch’ the /misc pathname for activity, and if activity is observed, consult /etc/auto.misc for instructions.

So for the basic syntax of the map file (one entry per line):

key    options          mount device
nfs    -fstype=nfs,ro   nfsserver:/share/nfs

* This tells automountd to dynamically mount nfsserver:/share/nfs at /misc/nfs when that path is accessed. Autofs will mount stuff as needed.

Lab

1.  Configure your server to automatically mount /share as an NFS share from server1 to /server1/share when a process changes directories there.

[root@web01 ~]# vi /etc/auto.master
...
/server1        /etc/auto.server1
...

[root@web01 ~]# vi /etc/auto.server1
...
share 192.168.1.100:/share
...

[root@web01 ~]# service autofs restart

EXAM NOTE: I would imagine this will be on the test.

Extended Attributes

lsattr - list attributes
chattr - change attributes

EXAM NOTE: Red Hat will likely test on the -i (immutable) flag. So watch out for it.

ACLs

getfacl
setfacl

You must have the acl mount option set. It'll work on / since Red Hat enables this by default, but you will have to specify it on any new partitions.

[root@web01 ~]# setfacl -m u:bob:w memo.txt           # set an ACL
[root@web01 ~]# setfacl -x g:ru memo.txt              # remove an ACL
[root@web01 ~]# setfacl -m default:u:bob:w somedir    # set a default ACL (defaults apply to directories and are inherited by new files)

EXAM NOTE: These WILL be on the test.

Quotas

Quotas allow you to limit filesystem resources for users. Basically disk quotas. To enable them, add the following to the mount options:

[root@web01 ~]# vi /etc/fstab
usrquota,grpquota

[root@web01 ~]# quotacheck -mavug
[root@web01 ~]# quotaon -a # Turn on quotas
[root@web01 ~]# edquota -u test # Set limits
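For reference, here is what a full fstab line with quotas enabled might look like (device and mount point are placeholders, not from the lecture):

```
/dev/sda5   /home   ext4   defaults,usrquota,grpquota   1 2
```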

EXAM NOTE: These will be on the test.

Lab

1.  Create a quota for the user student with:
- block soft limit of 100M and a hard limit of 150M
- soft inode limit of 30 and a hard inode limit of 100

2.  Create a quota for the group gdm so that its members collectively have:
- a block soft limit of 200M and a hard limit of 300M
- a soft inode limit of 50 and a hard inode limit of 200

Answers:

[root@web01 ~]# vi /etc/fstab # Add the following mount options
usrquota,grpquota

[root@web01 ~]# mount -o remount /
[root@web01 ~]# quotacheck -mavug
[root@web01 ~]# quotaon /home # Turn on quotas
[root@web01 ~]# edquota student # Set limits

# Interesting note:  To do the math quickly on the cli, do:
[root@web01 ~]# echo $((1024*1*100))
[root@web01 ~]# edquota -g gdm

# Set quotas accordingly.
[root@web01 ~]# repquota -g /home

EXAM NOTE: This may be on the exam.

Disk Encryption – LUKS

Quick start for those interested:
http://www.cyberciti.biz/hardware/howto-linux-hard-disk-encryption-with-luks-cryptsetup-command/

[root@web01 ~]# cryptsetup luksFormat /dev/sda5   # NOTE:  This will delete all your stuff on disk!!!
[root@web01 ~]# cryptsetup luksOpen /dev/sda5 crypt01

The decrypted device will exist as /dev/mapper/mapname, ie. /dev/mapper/crypt01.
# NOTE: luksFormat will prompt for a passphrase.  A LUKS device supports up to 8 key slots if you like.

Now you will be able to format it:

[root@web01 ~]# mkfs -t ext4 /dev/mapper/crypt01
[root@web01 ~]# mkdir /crypt01
[root@web01 ~]# mount /dev/mapper/crypt01 /crypt01

Now add entry into fstab:

[root@web01 ~]# vi /etc/fstab
...
/dev/mapper/crypt01 /crypt01 ext4 defaults 1 2
...

Once done, unmount it and close it (tear down the decrypted mapping) by:

[root@web01 ~]# cryptsetup luksClose /dev/mapper/crypt01

To make this stuff persistent at boot, edit /etc/crypttab as shown below.

1. To make a LUKS encrypted device available at boot time:

[root@web01 ~]# vim /etc/crypttab
mapname device keyfile options

2. To create a keyfile:

[root@web01 ~]# dd if=/dev/urandom of=/etc/keyfile bs=1k count=4
[root@web01 ~]# cryptsetup luksAddKey /dev/sda5 /etc/keyfile

3. Add to crypttab

[root@web01 ~]# vi /etc/crypttab
...
crypt01 /dev/sda5 [/path/to/keyfile] [options]
...

EXAM NOTE: Use keyfiles for test. But in practice, use a passphrase, but understand risks involved.

LAB

1.  Create a new 100M partition, then set up a LUKS-encrypted ext4 filesystem on it, which will be persistent across reboots.

2.  Reboot your machine to verify LUKS filesystems prompt for the passphrase and become accessible automatically after bootup

3.  Browse through the man pages on cryptsetup and crypttab

Answers:

1.  Create your 100M logical partition through fdisk
2.  Setup luks stuff
[root@web01 ~]# cryptsetup luksFormat /dev/sda5  # Answer YES, and type your passphrase
[root@web01 ~]# blkid # confirm it set the type:  crypto_LUKS
[root@web01 ~]# cryptsetup luksOpen /dev/sda5 crypto  # now enter your password
3.  Now put fs on the device
[root@web01 ~]# mkfs -t ext4 /dev/mapper/crypto
[root@web01 ~]# blkid # You can now see both the raw device, and the crypted device
4.  Setup /etc/fstab
[root@web01 ~]# vi /etc/fstab
...
/dev/mapper/crypto /u02 ext4 defaults 1 2  # If the test is wonky, set the last two fields to 0 0 to prevent fsck.
...
5.  Mount it and you're done.
6.  Now create crypttab stuff

# Quick and dirty
[root@web01 ~]# echo -n test > /etc/keyfile  # You need the -n to prevent the newline character
[root@web01 ~]# cryptsetup luksClose /dev/mapper/crypto
[root@web01 ~]# cryptsetup luksOpen /dev/sda5 crypto -d /etc/keyfile # The -d flag forces the key to be used.

# Better way of setting up the key - If you don't want to use a passphrase at all, then do the luksFormat with the -d flag to specify the keyfile.
[root@web01 ~]# dd if=/dev/urandom of=/etc/keyfile bs=1k count=4
[root@web01 ~]# cryptsetup luksAddKey /dev/sda5 /etc/keyfile
# add your original key password
[root@web01 ~]# chmod 400 /etc/keyfile
Now your key works, and so does your passphrase.

[root@web01 ~]# vi /etc/crypttab
crypto /dev/sda5 # If you leave it like this, it'll prompt you for pw at boot
crypto /dev/sda5 /etc/keyfile   # <-- This is how you should do it.

# The method above gives you a secure key, plus a backup passphrase, so if you lose your key you aren't in trouble.

# How to verify all this:
# Confirm your device is unmounted
# This is basically just a way to verify your system will boot most likely.  
[root@web01 ~]# bash
[root@web01 ~]# source /etc/init.d/functions
[root@web01 ~]# init_crypto 1 # This is the function that processes crypttab.  It accepts 0 or 1.  Think of it like mount -a sorta.
[root@web01 ~]# ls -al /dev/mapper
[root@web01 ~]# mount -a
[root@web01 ~]# ls -al /u02

SELinux

EXAM NOTE: You can likely leave SELinux disabled or permissive. They will likely not test it at all. It'll be on the RHCE though.

SELinux sits on top of the kernel, telling the kernel what is permitted and what is not. There are 3 modes:

- Disabled : extensions and hooks are not in the kernel
- Permissive : extensions and hooks are there, but if there is a policy violation, the kernel will still allow it (and log it)
- Enforcing : everything is there, and blocking accordingly.

Red Hat made a policy set called 'targeted'. The policies are grouped by service, such as web, mail, ftp, db, etc. It's RHEL's way of making our lives a bit easier. Therefore, by using these targeted policies, we may just have to fix file/directory contexts or booleans.

So every process or object has a SELinux context:
- identity:role:domain/type

a.  What identities can use which roles
b.  What roles can enter which domains
c.  What domains can access which types.

Again, RHEL makes this easier and basically just uses the types, nothing else. We can take it further, but that is our choice to make that work.

So in short, SELinux tells the kernel whether or not to allow access to whatever thing.

If you want to view a context for a process, run:

[root@web01 ~]# ps -Z   # List process contexts
[root@web01 ~]# ls -Z   # List file contexts

To change the context of a file, use:

[root@web01 ~]# chcon -R --reference=/var/www/html file

So what does that mean? Go look at this other location (/var/www/html), and apply its context to my target (file). So if I put my docroots in /srv, to get SELinux to like this directory, we had to change the context of /srv by:

[root@web01 ~]# chcon -R --reference=/var/www/html /srv

So as long as you know the default location where the contexts reside, you can cheat and just copy the context over to the new location.

All policy violations will be logged to /var/log/audit/audit.log as AVC (access vector cache) denials.
** setroubleshoot is a good tool for reading the output of that log.

Lets say you borked your entire setup, you can reapply the default contexts on all common pathnames. So to restore things, you just do:

[root@web01 ~]# restorecon -R path path...

* NOTE: This will not affect your new stuff like /srv, because that is not in the default labeling. You can use semanage (you may have to install it) to add your new paths to the default labeling.

restorecon knows about the policies and defaults. chcon only changes things.. that is all chcon knows.

EXAM NOTE: restorecon will not really be needed on the RHCE.. unless you break something hardcore.

There is a graphical tool for SELinux: system-config-selinux.
NOTE: You MUST reboot the system when enabling or disabling SELinux, since it mucks with the kernel hooks and stuff.

The config for selinux: /etc/sysconfig/selinux
* This is where you set enforcing/permissive/disabled, plus the policy type (targeted). Just the startup mode stuff.

Commands:

getenforce - shows the current SELinux mode
setenforce - will allow you to change the mode, ie:
setenforce 0 # Set SELinux permissive temporarily
setenforce 1 # Set SELinux enforcing

NOTE: If the server is completely broken and you cannot even boot, you can disable SELinux in grub by passing the enforcing=0 to the kernel line in grub when booting.

Other troubleshooting tools:

policycoreutils
setroubleshoot

Booleans:
These are basically simple on/off flags for enabling/disabling individual policy features:

[root@web01 ~]# getsebool -a |grep httpd  # or whatever.
[root@web01 ~]# setsebool -P blah
# IMPORTANT:  DO NOT FORGET TO SPECIFY THE -P TO MAKE THE CHANGE PERSISTENT ACROSS REBOOTS!

What are some practical uses for SELinux:

- Allowing you to change the default paths for where you store db, web, etc. content
- Changing booleans to allow things like public_html directories, ie: getsebool -a |grep httpd

Lab

1.  With SELinux enforcing, configure a website to be served from /srv
2.  Don't focus on advanced apache settings; accomplish this in the simplest way possible - change the global DocumentRoot
3.  Populate a simple index.html file.
4.  The setroubleshoot tool is useful here.  Don't be confused by any typos in the output.

Answer:
Easy enough, just get apache setup, then:

[root@web01 ~]# yum -y install setroubleshoot
# You will see the stuff needed in /var/log/audit/audit.log or /var/log/messages.
[root@web01 ~]# service auditd restart
[root@web01 ~]# chcon -R --reference=/var/www/html /srv

RHCSA Study Guide – Objective 3 : System Administration


Kickstart : The kickstart file is nothing more than a flat answer file.

Anaconda will look for this, and use it to install/configure your server. Stuff you can set are:

- Partitioning and filesystems
- Software packages
- Users, groups, passwords
- Features, networking and more

You can build them:

a.  From scratch
b.  From an existing kickstart file (Probably most common way)
c.  Using system-config-kickstart (Tool is very basic in nature)

How does this work exactly? When you start up anaconda, it will look for a ks line within the kernel section, then fetch the path to your kickstart file.

EXAM NOTE: While this is an objective, this is not on the test as there is no way to test for this.

Network Administration

2 ways to set network ip’s:
– Static
– Dynamic

There are a few different methods of doing this.

1.  Type:  setup
2.  Edit the files directly:  vi /etc/sysconfig/network-scripts/ifcfg-ethX
3.  Using the GUI

Interesting note:
ifconfig is deprecated, replaced with ip addr list.
The ip command also has ip route, ip link show, etc.

EXAM NOTE: This likely won't make a difference on the test. Just make sure your settings are persistent, because Red Hat will reboot your server before they grade it!

To view routes:

[root@web01 ~]# ip route show

Consider differences between:

/etc/hosts
/etc/resolv.conf
/etc/nsswitch.conf

EXAM NOTE: Shouldn’t have to worry about those 3 really. They shouldn’t be broken.

When changing your IP, hostname, etc., watch out with RHEL6: it's slightly different from RHEL5 in that RHEL6 uses NetworkManager. So what do most people do? Remove NetworkManager and go back to the old way using:

/etc/sysconfig/network-scripts/ifcfg-ethX
/etc/sysconfig/network

For DHCP configurations, setup the configuration as follows:

[root@web01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ethX
...
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
...

For STATIC configurations, setup the configuration as follows:

[root@web01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ethX
...
DEVICE=eth0
BOOTPROTO=none
IPADDR=x.x.x.x
NETMASK=xxx.xxx.xxx.xxx
ONBOOT=yes
GATEWAY=x.x.x.x  <--- This is required in RHEL6... no longer in /etc/sysconfig/network
DNS1=8.8.8.8 <--- These 2 are also new to RHEL6
DNS2=8.8.4.4 <--- These 2 are also new to RHEL6
DOMAIN=www.blah.com
...

When configuring the network settings, it's recommended to use nmcli or nm-connection-editor.

In practice, it's a pain for server administration. If you want, you can remove NetworkManager and just go back to using the normal tools.

[root@web01 ~]# service NetworkManager stop
[root@web01 ~]# chkconfig NetworkManager off

Network Manager is great for desktops, but when you are doing server administration, it just gets in the way.

EXAM NOTES: Your first tasks may be to reset root password and fix networking. So be comfortable with this stuff!

Cron

/etc/anacrontab defines the system cron jobs. This is a more flexible way to run cron: when anacron wakes up and realizes it missed a job, it'll go back and run it (when it feels like it). Basically this is useful on a desktop: if your system is asleep in the middle of the night, then the next morning when you wake the laptop, it'll run its missed jobs within reason to get things caught back up intelligently. This is mostly bogus for servers since they run 24/7, but useful for desktops.

EXAM NOTES: If a user can't run a cron job, check for a cron.deny file. The test is known to throw this out there. So this one sounds critical.

EXAM NOTE: Read the man 5 crontab before taking test as it'll be on there.

Lab:

1.  Create a cronjob for the user root that checks the amount of available space on the system every Friday at 12:34PM

2.  Create a cronjob as a regular user that lists the contents of /tmp at 3:54AM on Sunday, January 2.

Answer (Plus interesting note)

man 5 crontab : note: The day of a command's execution can be specified by two fields -- day of month, and day of week.  If both fields are restricted (ie, are not *), the command will be run when either field matches the current time.  For example, '30 4 1,15 * 5' would cause a command to be run at 4:30 am on the 1st and 15th of each month, plus every Friday.
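As a sketch, here is what the two crontab entries might look like (fields are minute, hour, day-of-month, month, day-of-week; the log path is made up):

```
# Lab 1 (root's crontab): every Friday at 12:34 PM
34 12 * * 5    df -h >> /var/log/diskfree.log 2>&1

# Lab 2 (regular user's crontab): 3:54 AM on January 2.
# Leave day-of-week as * -- per the man page note above, restricting
# BOTH day-of-month and day-of-week would run the job on either match.
54 3 2 1 *     ls -l /tmp
```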

Syslog

RHEL6 is now using rsyslog as the system logger. They did this because rsyslog can send logs over TCP, and it also supports a caching function: if a message doesn't get out to the remote log host, rsyslog will cache it until the message actually makes it to the remote device.

Log messages consist of 3 parts:

a.  Facility : describes where it came from in the OS, ie. kernel, auth, etc
b.  Level : the priority of the message
c.  Message : the actual log text itself

NOTE: local0-local7 are reserved for your own use/definition.

So to use one, you basically just pipe your stuff through logger out to one of these facilities and create a definition within /etc/rsyslog.conf to redirect the output to a file. Another note: Red Hat uses local7 for boot.log, as they have a built-in library which displays all the fun startup messages, ie: HTTPD OK

Config file: /etc/rsyslog.conf
Defines where all the messages should go. Not much different from /etc/syslog.conf

Interesting Note:
*.err /dev/root # If you point a selector at a console device like this, or at any username, that console/user will get the message displayed on their screen. Def didn't know that.

When you use .none (ie. *.info;mail.none;authpriv.none /var/log/messages), this means everything at that level or above will be caught except mail and authpriv, since those are set to .none.

EXAM NOTE: Most of this will not be applicable to the test. The test will have everything setup to its default locations.

Logrotate

Config file: /etc/logrotate.conf
Basically RHEL6 rotates once a week, with a retention of 1 month total. Interesting note: they no longer appear to do 1.gz, 2.gz, etc. They now append the timestamp to the end of the filename.

Extended configurations are stored in the /etc/logrotate.d directory. Each file in there is its own logrotate configuration. Just use this for your applications, etc.

You can force log rotation to go through by:

[root@web01 ~]# logrotate -vf /etc/logrotate.conf (or whatever file)

In the logrotate.d/blah file:

sharedscripts - run the prerotate/postrotate scripts once for all matching logs, rather than once per log
postrotate - This is where you can run your custom thing, such as:  service httpd restart
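Tying that together, a sketch of what a drop-in file in /etc/logrotate.d might look like (the service name, paths, and options are illustrative, not from the lecture):

```
# /etc/logrotate.d/myapp -- example only
/var/log/myapp/*.log {
    weekly
    rotate 4
    compress
    missingok
    sharedscripts
    postrotate
        service myapp reload > /dev/null 2>&1 || true
    endscript
}
```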

Troubleshooting

A lot of the troubleshooting objectives on the exam are permissions based, and possibly some minor SELinux, as well as locating error messages in logs.

Some useful tools that will be used on test:

- top
- df -h
- ldd : list library dependencies
- ldconfig : update the library location database.  Think mlocate.  It basically just reads /etc/ld.so.conf and /etc/ld.so.conf.d/* to create its indexes.

EXAM NOTE: Thankfully, the ldd/ldconfig stuff won't be on test really.

Nice level

The kernel uses these priorities when figuring out what needs to run when. The range is -20 (highest priority) to 19 (lowest). It's not the actual priority, just a minor tunable. It's not guaranteed to do anything, since it's up to the process scheduler at the end of the day, but it will at least tell the scheduler to try to give your process (ie. a database) more priority.

So if you want to give a process a higher priority, give it -20. Regular users cannot raise the priority of their own processes; only root can do that. Regular users can, however, give themselves a lower priority, up to 19, so their application will not impact others as hard.
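coreutils nice with no command prints the current niceness, which makes for a quick demo of how the value is inherited (assuming you start from the usual default of 0):

```shell
# Print the current niceness, then re-run `nice` itself at +10.
nice                # typically 0
nice -n 10 nice     # the child prints its inherited niceness, 10 higher
```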

EXAM NOTE: Probably won't be on test.

Lab

1.  Take a few minutes to browse through the various logs in /var/log.  Familiarize yourself with the kinds of info available.

2.  Browse the man page for rsyslog.conf

3.  Find where the audit service keeps its log and add a corresponding new entry to your logrotate configuration.  Force a rotation to see everything work.

Answer: it logs to /var/log/audit/*.  So create a new config in /etc/logrotate.d by copying an existing one, ex. copy the cups config to audit.
Modify the log path entry.  Rerun: logrotate -vf /etc/logrotate.conf

4.  Remove the audit logrotate configuration and restart the auditd service

5.  Locate the PIDs of the processes with the highest memory and highest CPU utilization.  Play with their nice levels.

RHCSA Study Guide – Objective 2 : Packages


EXAM NOTE: Will need to know how to manually enable/create a repo

rpm -i : install
rpm -q : query the database
rpm -e : erase rpm.

EXAM NOTE: Probably won’t need to know much about rpm other then the above.

rpm -qa : Queries and lets you know everything that is installed.
rpm -qi : Queries the rpm database for pkg info.
rpm -qf : Determines which rpm a file is associated with.
rpm -ql : Queries the rpm database to determine which files are associated with an rpm.
rpm -Va : Verifies all installed packages.
rpm -Vi  : Verifies given package.
rpm -Va |grep ^..5  : This will show you every file whose contents (digest) differ from the packaged version.  Can be useful!

EXAM NOTE: Aside from the last one, nothing here is likely going to be applicable for the test.
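For reference, the leading 9-character mask in rpm -V output flags what changed per file; position 3 is '5' when the digest differs, which is why that grep works. A sketch against a made-up output line:

```shell
# Hypothetical `rpm -Va` output line: S=size, 5=digest, T=mtime changed.
sample='S.5....T.  c /etc/ssh/sshd_config'

# Matches because the 3rd column of the mask is '5' (contents modified):
echo "$sample" | grep '^..5'
```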

How to extract RPM Contents:

cd /temp/dir
rpm2cpio /path/to/package | cpio -i -d -m

EXAM NOTE: This will not be on test. If Apache is messed up, just reinstall it.

The wrapper for RPM is yum (Yellowdog Updater, Modified).

install : Install stuff
search : Find stuff  : ex.  yum search bash
provides : Find which package provides a file, when yum search doesn't help : ex. yum provides sed
clean all : Useful if you broke your conf file and yum is broken.  ex. yum clean all

EXAM NOTE: The above stuff will be used on test.

How to setup repository when Redhat says: “All your packages can be found at:
http://www.example.com/directory/of/packages.” To do this, first setup the repo:

vi /etc/yum.repos.d/myrepo.repo
[myrepo]
name = my repo thingy
gpgcheck = 0
baseurl=http://www.example.com/directory/of/packages

Now list the available repos:

yum repolist

To import key if you like:

rpm --import /url/to/gpg/key

EXAM NOTE: **IMPORTANT** The above will be on test! This is CRITICAL. Without this, you cannot do anything!

To use a local repo, you set the baseurl as follows:

baseurl=file:///path/to/your/file
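For example, a complete repo file for a local directory might look like this (the path is a placeholder; the directory needs repodata, e.g. generated with createrepo):

```
# /etc/yum.repos.d/local.repo -- example only
[local]
name = local package directory
baseurl = file:///opt/packages
gpgcheck = 0
enabled = 1
```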

RHCSA Study Guide – Objective 1 : Booting


Basic things to know on test:
– Reset root passwd via single user mode

Upstart

RHEL 6 is now using Upstart (upstart.ubuntu.com). The SystemV stuff is finally getting deprecated, but inittab is still there.

Quick notes about Upstart:

- Upstart allows for faster boot times, running jobs concurrently
- Upstart is just used for startup/shutdown/service management.
- The only place I'll notice this on the test is /etc/inittab, specifically for setting the default runlevel within /etc/inittab.
- Configuration files are in /etc/init : These tell the services how to handle events such as ctrl-alt-delete, runlevels, etc.

Runlevels:

0 : OS stopped (halt)
1 : single user
2 : multi-user, no NFS shares
3 : full multi-user, TUI
4 : unused
5 : full multi-user, GUI
6 : reboot

Switch runlevels with:

telinit <runlevel>    ex.  telinit 1

(init is technically NOT the command, but still works as a legacy wrapper for now)

1. At bootup, the kernel starts /sbin/init
2. The startup event causes /etc/init/rcS.conf to fire.
3. rcS.conf then greps /etc/inittab to get the default runlevel, and runs it.
4. This fires off /etc/init/rc.conf, which in turn runs the /etc/rc.d/rc script
5. All the startup scripts in /etc/rcX.d get fired off accordingly.

EXAM NOTE: Nothing major is really needed on the exam for Upstart. Obviously, know the runlevels.

Init Scripts:
– To view all startup files for a run level, just look at:

ex.  runlevel 5 : /etc/rc5.d
ex.  runlevel 3 : /etc/rc3.d

Notes about identifiers:

- S means start the service
- K means kill the service
- After the S or K there is a two-digit number used for ordering the execution of the scripts, i.e. priority
- When entering a runlevel, kill scripts run first, then start scripts
- All the scripts reside in /etc/init.d; they are just symlinked into /etc/rcX.d
- All these symlinks are managed by chkconfig. When you run: chkconfig <service> on, it defaults the runlevels based off the defaults in the init script.

EXAM NOTE: Know how to use chkconfig
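To see the K-before-S ordering in action without touching a real runlevel directory, here is a toy simulation in bash (the script names and priorities are made up; the real scripts live in /etc/rcX.d):

```shell
#!/bin/bash
# Toy simulation of how /etc/rc.d/rc processes an rc directory:
# kill (K*) scripts run first, then start (S*) scripts, in numeric order.
rcdir=$(mktemp -d)

# Fake rc scripts that just print their own name
for f in K10network S20syslog S85httpd K90killall; do
    printf '#!/bin/bash\necho %s\n' "$f" > "$rcdir/$f"
    chmod 755 "$rcdir/$f"
done

# Run kill scripts, then start scripts. Shell globs sort lexically,
# so the two-digit priority controls execution order.
order=""
for script in "$rcdir"/K*; do order="$order $("$script" stop)"; done
for script in "$rcdir"/S*; do order="$order $("$script" start)"; done
order="${order# }"

echo "execution order: $order"
rm -rf "$rcdir"
```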

Grub

Grub is responsible for the initial kernel load at boot time.

EXAM NOTE: Probably won’t need to do much with /boot/grub/grub.conf

To interact with the GRUB menu while the system is booting, type:

c : command mode
e : edit mode
a : append mode
esc : brings you back to previous menu
enter : boots the machine based off your selection

To get into single user mode, add an ‘s’ or the word ‘single’ to the end of the kernel command line.

EXAM NOTE: This WILL be on test. Make sure you know how to reset lost root passwd
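For reference, the edited kernel line would look something like the sketch below (the kernel version and root device are placeholders of my own; yours will differ — the only important part is the trailing single):

```
kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet single
```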

Lab:

1.  Reboot machine into single user mode and reset root passwd
2.  Review a few of the init.d scripts to get familiar
3.  Review the configuration files in /etc/init.

Server monitoring script

Without an agent-based monitoring system, keeping an eye on your server's internals such as CPU, memory, storage, and processes means checking manually. There are many reputable monitoring services on the web, such as Pingdom (www.pingdom.com), and most hosting providers offer a monitoring system, but they do not provide an agent. Therefore, you can only do basic external checks such as ping, port, and HTTP content checks. There is no way to know if your MySQL replication has failed, some critical process has stopped running, or you're about to max out your / partition.

This simple bash script, located on github, is meant to complement these types of monitoring services. Just drop the script into a web-accessible directory, configure a few options and thresholds, set up a URL content check that searches the status page for the string ‘OK’, and you can rest easy at night knowing your monitoring service will alert you if any of the script's conditions are triggered.

Security note: To avoid revealing information about your system, it is strongly recommended that you place this and all web-based monitoring scripts behind an htaccess file with authentication, whitelisting your monitoring servers' IP addresses if they are known.

Features

– Memory Check
– Swap Check
– Load Check
– Storage Check
– Process Check
– Replication Check

Configuration

The currently configurable options and thresholds are listed below:

# Status page
status_page=/var/www/system-health-check.html

# Enable / Disable Checks
memory_check=off
swap_check=on
load_check=on
storage_check=on
process_check=on
replication_check=off

# Configure partitions for storage check
partitions=( / )

# Configure process(es) to check
process_names=( httpd mysqld postfix )

# Configure Thresholds
memory_threshold=99
swap_threshold=80
load_threshold=10
storage_threshold=80
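As a rough illustration of how a threshold check like load_check works internally (this is my own sketch, not the actual code from the github script), the floating-point comparison can be delegated to awk since bash only compares integers:

```shell
#!/bin/bash
# Illustrative load-average threshold check -- a sketch, not the
# actual system-health-check.sh code.
load_threshold=10

# check_load <load> <threshold> : prints OK or ALARM
check_load() {
    awk -v l="$1" -v t="$2" 'BEGIN { if (l > t) print "ALARM"; else print "OK" }'
}

# First field of /proc/loadavg is the 1-minute load average
load=$(awk '{print $1}' /proc/loadavg)
echo "load=$load threshold=$load_threshold : $(check_load "$load" "$load_threshold")"
```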

Implementation

Download script to desired directory and set it to be executable:

cd /root
git clone https://github.com/stephenlang/system-health-check
chmod 755 system-health-check/system-health-check.sh

After configuring the tunables in the script (see above), create a cron job to execute the script every 5 minutes:

crontab -e
*/5 * * * * /root/system-health-check/system-health-check.sh

Now configure a URL content check with your monitoring provider's tools to check the status page, searching for the string “OK”. Below are two examples:

http://1.1.1.1/system-health-check.html
http://www.example.com/system-health-check.html

Testing

It is critical that you test this monitoring script before you rely on it. Bugs always exist somewhere, so test this before you implement it on your production systems! Here are some basic ways to test:

1. Configure all the thresholds really low so they will create an alarm. Manually run the script or wait for the cronjob to fire it off, then check the status page to see if it reports your checks are now in alarm.

2. To test out the process monitoring (assuming the system is not in production), configure the processes you want the script to check, then stop the process you are testing, and check the status page after the script runs to see if it reports your process is not running.

3. To test out the replication monitoring (assuming the system is not in production), log onto your MySQL slave server and run ‘stop slave;’. Then check the status page after the script runs to see if it reports an error on replication.

Resetting a MySQL root password

Resetting a forgotten MySQL root password is a pretty straightforward task to complete, assuming you have sudo or root access to the server. It is important to note that MySQL will be down until you complete everything, so be sure to do this during a time when it will not impact your business.

First, stop the MySQL service:

service mysqld stop # RHEL clones
service mysql stop # Ubuntu / Debian distro

Now it's time to bring MySQL back up in safe mode. This means we will start MySQL, but skip the user privileges table:

sudo mysqld_safe --skip-grant-tables &

Time to log into MySQL and switch to the MySQL database:

mysql -uroot
use mysql;

Now reset the root password and flush privileges:

update user set password=PASSWORD("enternewpasswordhere") where User='root';
flush privileges;
quit

Once you have completed that, it's time to take MySQL out of safe mode and start it back up normally. NOTE: Be sure MySQL is fully stopped before trying to start the service again. You may need to kill -9 the process if it's being stubborn.

Stop MySQL:

service mysqld stop # RHEL clones
service mysql stop # Ubuntu / Debian distro

Verify that no MySQL processes are still running. If they are, you may have to kill -9 them:

ps -waux |grep mysql
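A quick way to script that verification (a sketch using pgrep, which ships in the procps package on most distros):

```shell
#!/bin/bash
# Check whether any mysqld process is still alive before restarting
# the service. Sketch only -- adjust the process name for your distro.

# is_running <name> : true if a process with that exact name exists
is_running() {
    pgrep -x "$1" > /dev/null
}

if is_running mysqld; then
    echo "mysqld is still running, wait (or kill -9) before starting"
else
    echo "mysqld is stopped, safe to start the service"
fi
```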

Start MySQL back up:

service mysqld start # RHEL clones
service mysql start # Ubuntu / Debian distro

Finally, test logging in to ensure it's now working properly:

mysql -uroot -p

Keeping multiple web servers in sync with rsync

People looking to create a load balanced web server solution often ask how they can keep their web servers in sync with each other. There are many ways to go about this: NFS, lsyncd, rsync, etc. This guide will discuss a technique using rsync that runs from a cron job every 10 minutes.

There will be two different options presented, pulling the updates from the master web server, and pushing the updates from the master web server down to the slave web servers.

Our example will consist of the following servers:

web01.example.com (192.168.1.1) # Master Web Server
web02.example.com (192.168.1.2) # Slave Web Server
web03.example.com (192.168.1.3) # Slave Web Server

Our master web server is going to be the single point of truth for the web content of our domain. Therefore, the web developers will only modify content on the master web server, and rsync will handle keeping all the slave nodes in sync with each other.

There are a few prerequisites that must be in place:
1. Confirm that rsync is installed.
2. If pulling updates from the master web server, all slave servers must be able to SSH to the master server using an SSH key with no passphrase.
3. If pushing updates from the master down to the slave servers, the master server must be able to SSH to the slave web servers using an SSH key with no passphrase.

To be proactive about monitoring the status of the rsync job, both scripts posted below allow you to perform an HTTP content check against a status file to see if the string “SUCCESS” exists. If something other than SUCCESS is found, the rsync script may have failed and should be investigated. An example of the URL to monitor would be: 192.168.1.1/datasync.status

Please note that the assumption is being made that your web server will serve files that are placed in /var/www/html/. If not, please update the $status variable accordingly.
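The content check itself boils down to searching the status page for that string. A self-contained sketch of what the monitoring provider effectively does (using a temp file in place of the real datasync.status):

```shell
#!/bin/bash
# Simulate the monitoring provider's content check against the status
# file written by the datasync scripts. Sketch only.
status="$(mktemp)"   # stand-in for /var/www/html/datasync.status

# The datasync script writes a line like this on success
echo "SUCCESS : rsync completed successfully" > "$status"

if grep -q "SUCCESS" "$status"; then
    check_result="check passed"
else
    check_result="check FAILED : investigate the rsync job"
fi

echo "$check_result"
rm -f "$status"
```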

Using rsync to pull changes from the master web server:

This is especially useful if you are in a cloud environment and scale by snapshotting an existing slave web server to provision a new one. When the new slave web server comes online, assuming it already has the SSH key in place, it will automatically grab the latest content from the master server with no interaction needed on your part, other than testing it and enabling it in your load balancer.

The disadvantage of the pull method comes into play when you have multiple slave web servers all running the rsync job at the same time. This can put a strain on the master web server's CPU, which can cause performance degradation. However, if you have under 10 servers, or your site does not have a lot of content, the pull method should work fine.

Below will show the procedure for setting this up:

1. Create SSH keys on each slave web server:

ssh-keygen -t dsa

2. Now copy the public key generated on the slave web server (/root/.ssh/id_dsa.pub) and append it to the master web server's /root/.ssh/authorized_keys file.

3. Test SSH'ing in as root from the slave web server to the master web server:
# On web02

ssh root@web01

4. Assuming you were able to log in to the master web server cleanly, it's time to create the rsync script on each slave web server. Please note that I am assuming your sites' document roots are stored in /var/www/vhosts. If not, change the script accordingly and test!

mkdir -p /opt/scripts/
vi /opt/scripts/pull-datasync.sh

#!/bin/bash
# pull-datasync.sh : Pull site updates down from master to front end web servers via rsync

status="/var/www/html/datasync.status"

# Bail out if a previous run is still holding the lock
if [ -d /tmp/.rsync.lock ]; then
    echo "FAILURE : rsync lock exists : Perhaps there is a lot of new data to pull from the master server. Will retry shortly" > $status
    exit 1
fi

if ! /bin/mkdir /tmp/.rsync.lock; then
    echo "FAILURE : can not create lock" > $status
    exit 1
else
    echo "SUCCESS : created lock" > $status
fi

echo "===== Beginning rsync ====="

nice -n 20 /usr/bin/rsync -axvz --delete -e ssh root@web01:/var/www/vhosts/ /var/www/vhosts/

# Check for any non-zero exit code, as rsync uses several
if [ $? -ne 0 ]; then
    echo "FAILURE : rsync failed. Please refer to solution documentation" > $status
    exit 1
fi

echo "===== Completed rsync ====="

/bin/rm -rf /tmp/.rsync.lock
echo "SUCCESS : rsync completed successfully" > $status

Be sure to set executable permissions on this script so cron can run it:

chmod 755 /opt/scripts/pull-datasync.sh
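The /tmp/.rsync.lock logic in the script works because mkdir is atomic: it either creates the directory or fails, so a second cron run started while the first is still syncing bails out cleanly. A self-contained sketch of the pattern (a temp path stands in for /tmp/.rsync.lock):

```shell
#!/bin/bash
# mkdir is atomic: it either creates the directory or fails, which
# makes it a simple mutex for cron jobs. Sketch only.
lock="$(mktemp -u)"   # unused temp path standing in for /tmp/.rsync.lock

if mkdir "$lock" 2>/dev/null; then
    first_run="got lock"
else
    first_run="lock busy"
fi

# A second attempt while the lock is held must fail
if mkdir "$lock" 2>/dev/null; then
    second_run="got lock"
else
    second_run="lock busy"
fi

rmdir "$lock"
echo "first: $first_run, second: $second_run"
```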

Using rsync to push changes from the master web server down to slave web servers:

Using rsync to push changes from the master down to the slaves also has some important advantages. First, the slave web servers will not have SSH access to the master server. This could become critical if one of the slave servers is ever compromised and tries to gain access to the master web server. The next advantage is that the push method does not cause a serious CPU strain, because the master runs rsync against the slave servers one at a time.

The disadvantage here is when you have a lot of web servers syncing content that changes often. It's possible that your updates will not be pushed down to the web servers as quickly as expected, since the master server syncs the servers one at a time. So be sure to test this out to see if the results work for your solution. Also, if you are cloning your servers to create additional web servers, you will need to update the rsync configuration accordingly to include the new node.

Below will show the procedure for setting this up:

1. To make administration easier, it's recommended to set up the /etc/hosts file on the master web server with a list of all the servers' hostnames and internal IPs.

vi /etc/hosts
192.168.1.1 web01 web01.example.com
192.168.1.2 web02 web02.example.com
192.168.1.3 web03 web03.example.com

2. Create SSH keys on the master web server:

ssh-keygen -t dsa

3. Now copy the public key generated on the master web server (/root/.ssh/id_dsa.pub) and append it to each slave web server's /root/.ssh/authorized_keys file.

4. Test SSH'ing in as root from the master web server to each slave web server:
# On web01

ssh root@web02

5. Assuming you were able to log in to the slave web servers cleanly, it's time to create the rsync script on the master web server. Please note that I am assuming your sites' document roots are stored in /var/www/vhosts. If not, change the script accordingly and test!

mkdir -p /opt/scripts/
vi /opt/scripts/push-datasync.sh

#!/bin/bash
# push-datasync.sh - Push site updates from master server to front end web servers via rsync

webservers=(web02 web03)
status="/var/www/html/datasync.status"

# Bail out if a previous run is still holding the lock
if [ -d /tmp/.rsync.lock ]; then
    echo "FAILURE : rsync lock exists : Perhaps there is a lot of new data to push to front end web servers. Will retry soon." > $status
    exit 1
fi

if ! /bin/mkdir /tmp/.rsync.lock; then
    echo "FAILURE : can not create lock" > $status
    exit 1
else
    echo "SUCCESS : created lock" > $status
fi

for i in "${webservers[@]}"; do

    echo "===== Beginning rsync of $i ====="

    nice -n 20 /usr/bin/rsync -avzx --delete -e ssh /var/www/vhosts/ root@$i:/var/www/vhosts/

    # Check for any non-zero exit code, as rsync uses several
    if [ $? -ne 0 ]; then
        echo "FAILURE : rsync failed. Please refer to the solution documentation" > $status
        exit 1
    fi

    echo "===== Completed rsync of $i ====="
done

/bin/rm -rf /tmp/.rsync.lock
echo "SUCCESS : rsync completed successfully" > $status

Be sure to set executable permissions on this script so cron can run it:

chmod 755 /opt/scripts/push-datasync.sh

Now that you have the script in place and tested, it's time to set it up to run automatically via cron. For the example here, I am setting up cron to run this script every 10 minutes.

If using the push method, put the following into the master web server's crontab:

crontab -e
# Datasync script
*/10 * * * * /opt/scripts/push-datasync.sh

If using the pull method, put the following into each slave web server's crontab:

crontab -e
# Datasync script
*/10 * * * * /opt/scripts/pull-datasync.sh

Using logrotate for custom log directories with wildcards

Logrotate is a useful application for automatically rotating your log files. If you choose to store certain logs in directories that logrotate doesn't know about, you need to create a definition for them.

In the example listed near the bottom of this post, we rotate logs stored in /home/sites/logs. These are Apache access and error logs, plus a third set of log files ending in .log.

Running through this quickly, one might simply create a definition with a bare wildcard, such as:
/home/sites/logs/*

Unfortunately, logrotate reads this literally and will rotate compressed log files that are already in rotation, leaving you with a mess in your log directories like this:

/home/sites/logs/http-access.log.1.gz
/home/sites/logs/http-access.log.1.gz.1
/home/sites/logs/http-access.log.1.gz.1.1
/home/sites/logs/http-access.log.1.gz.1.1.1
/home/sites/logs/http-access.log.1.gz.1.1.1.1

And it just goes downhill from there. This exact thing happened to me because I forgot to read the man page, which clearly states:

Please use wildcards with caution. If you specify *, logrotate will
rotate all files, including previously rotated ones. A way around this
is to use the olddir directive or a more exact wildcard (such as
*.log).

So wildcards are still acceptable, but use them with caution. As I had three types of files to rotate in this example, I found that I could string them together as follows:

/home/sites/logs/*.error /home/sites/logs/*.access /home/sites/logs/*.log
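You can see why the bare * is dangerous with a quick glob experiment (temp directory and file names are made up for illustration):

```shell
#!/bin/bash
# Show which files a bare * matches versus *.log in a log directory
logdir=$(mktemp -d)
touch "$logdir/http-access.log" "$logdir/http-error.log" \
      "$logdir/http-access.log.1.gz" "$logdir/http-access.log.2.gz"

# Bare * sweeps up the already-rotated .gz files too
all=$(cd "$logdir" && echo *)

# *.log only matches the live logs
live=$(cd "$logdir" && echo *.log)

echo "matched by *     : $all"
echo "matched by *.log : $live"
rm -rf "$logdir"
```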

I'll post the example configuration below, which was implemented on a FreeBSD server for this custom log directory:

cat /usr/local/etc/logrotate.conf
# see "man logrotate" for details
# rotate log files daily
daily

# keep 30 days worth of backlogs
rotate 30

# create new (empty) log files after rotating old ones
create

# uncomment this if you want your log files compressed
compress

# RPM packages drop log rotation information into this directory
# include /usr/local/etc/logrotate.d

# system-specific logs may be configured here
/home/sites/logs/*.error
/home/sites/logs/*.access
/home/sites/logs/*.log {
daily
rotate 30
sharedscripts
postrotate
/usr/local/etc/rc.d/apache22 reload > /dev/null 2>/dev/null
endscript
}

Now let's test this out to confirm the logs rotate properly, by running the following a few times:

logrotate -f /usr/local/etc/logrotate.conf
logrotate -f /usr/local/etc/logrotate.conf
logrotate -f /usr/local/etc/logrotate.conf

When you check your logs directory, you should now see the files rotating out properly:

/home/sites/logs/http-access.log
/home/sites/logs/http-access.log.1.gz
/home/sites/logs/http-access.log.2.gz
/home/sites/logs/http-access.log.3.gz

Full Server Rsync Migrations

There are times when you need to do a one-to-one server migration. This can seem like a daunting task; however, with the magic of rsync, performing full server migrations can be fairly painless and require little downtime.

Prerequisites

On both the old and new servers, you want to ensure the following requirements are met:

1. Confirm both the old and new servers use the same hardware architecture. You cannot perform an rsync migration if one server is 32-bit and the other is 64-bit. Verify this by running the following command on both servers and checking whether it reports “i386” (32-bit) or “x86_64” (64-bit):

uname -a
Linux demo 2.6.18-308.el5xen #1 SMP Tue Feb 21 20:47:10 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

So in our example, I have verified that both the old and new servers are 64-bit.
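To script the comparison, something like the sketch below works (in practice the second uname -m would run over SSH on the new server; here both run locally just to show the logic):

```shell
#!/bin/bash
# Compare hardware architectures of the old and new servers.
# Sketch: in a real migration the second command would be
#   ssh root@newserver uname -m   (hostname is a placeholder)
old_arch=$(uname -m)
new_arch=$(uname -m)

if [ "$old_arch" = "$new_arch" ]; then
    arch_check="match : $old_arch"
else
    arch_check="MISMATCH : $old_arch vs $new_arch"
fi
echo "$arch_check"
```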

2. Confirm they are both running the same exact version of the operating system. Normally this means simply confirming both servers are at the latest patch level, which you can do by running:

yum update
cat /etc/redhat-release
CentOS release 5.8 (Final)

For this article, I will be using two servers that are running CentOS 5.8 (Final).

3. Confirm rsync is installed on both servers

yum install rsync

4. Clean up server before migration. Depending on the amount of files on your server, the initial rsync migration can take quite a while. So you will want to browse through your old server and remove any extraneous temporary, cache, or other large files that you no longer need. You should also check your logs and ensure that their sizes are reasonable, and archive or delete the older logs you no longer need.

5. If you are not going to swap IPs, and will simply update DNS to point to the new server, confirm that all the domains on the old server have a very low TTL set in the zone file. A TTL of 300 is usually considered the lowest acceptable TTL to set.

Begin server migration

The procedure written out below is a two-step process. It is meant to minimize the downtime involved when you swap the IPs or update DNS, assuming you have a low TTL set. The steps are:
1. Perform initial rsync
2. Perform final rsync and IP swap (or DNS update)

The initial rsync is just used to get the majority of the static files over to the new server. The final rsync is meant to update anything that is dynamic, such as logs, updated web content, databases, etc.

So before we begin, create an excludes file on the old server. This file tells rsync NOT to copy over system-specific information that is not needed on the new system.

vi /root/rsync-exclude.txt
/boot
/proc
/sys
/tmp
/dev
/var/lock
/etc/fstab
/etc/mdadm.conf
/etc/mtab
/etc/resolv.conf
/etc/conf.d/net
/etc/network/interfaces
/etc/networks
/etc/sysconfig/network*
/etc/sysconfig/hwconf
/etc/sysconfig/ip6tables-config
/etc/sysconfig/kernel
/etc/hostname
/etc/HOSTNAME
/etc/hosts
/etc/modprobe*
/etc/modules
/etc/udev
/net
/lib/modules
/etc/rc.conf
/lock
/etc/ssh/ssh_host_rsa_key.pub
/etc/ssh/ssh_host_rsa_key
/etc/ssh/ssh_host_dsa_key.pub
/etc/ssh/ssh_host_dsa_key
/etc/network.d
/etc/network/*
/etc/machine-id
/usr/share/nova-agent*
/usr/sbin/nova-agent*
/etc/rc.d/init.d/nova-agent*
/etc/init.d/nova-agent*
/etc/rackspace-monitoring-agent*
/etc/rackspace
/etc/driveclient/bootstrap.json

The example above should cover a couple of different distros, but always review someone else's excludes list before applying it to your own systems!

Now that we have the proper excludes in place, let's do a dry run of rsync to see what would happen before we actually do it. Please note that this is run on the old server. Replace xxx.xxx.xxx.xxx with the IP of the new server:

rsync -azPx -e ssh --dry-run --delete-after --exclude-from="/root/rsync-exclude.txt" / root@xxx.xxx.xxx.xxx:/

If all looks well, let's perform the actual initial rsync:

rsync -azPx -e ssh --delete-after --exclude-from="/root/rsync-exclude.txt" / root@xxx.xxx.xxx.xxx:/

Depending on how much data you have, this could take a few minutes or many hours. Once this is complete, schedule a maintenance window to perform the final rsync and IP swap (or DNS update). You want to do this during a maintenance window, as you will need to stop any database services, or anything else with very dynamic data, to avoid corruption. In this example I have a basic LAMP server, so I will need to shut down MySQL before performing the final rsync. Here are the steps I'll be using:
1. Stop MySQL on the old server
2. Perform the final rsync
3. On the new server, reboot and test everything
4. Swap the IP from the old server to the new, or update your DNS accordingly.

On the old server:

service mysqld stop
rsync -azPx -e ssh --delete-after --exclude-from="/root/rsync-exclude.txt" / root@xxx.xxx.xxx.xxx:/

Now we are ready to start testing our new server.

Testing And Go-Live

Let's wave that dead chicken over the altar: it's time to see if your new server survives a reboot, and if all the services come back online properly. There are no guides for this. Just reboot your server, then test out your sites and databases, and keep a wary eye on your system logs.

Once you have confirmed that everything looks good, it is then safe to swap the IPs, or update DNS accordingly. In the event that a problem surfaces shortly after the migration, you always have the option of rolling back to the old server, assuming you won't lose any critical transactions.