RHCSA Study Guide – Objective 7 : File Sharing

############################
Everything below are my raw notes that I took while attending an unofficial RHCSA training session.  I am posting them here in hopes they will assist others who may be preparing to take this exam.  

My notes are my own interpretation of the lectures, and are certainly not a replacement to classroom training either through your company, or by taking the official RHCSA classes offered through Red Hat.  If you are new to the Red Hat world, I strongly suggest looking into their training courses over at Red Hat.
############################

NFS

The Network File System (NFS) is used to share directories with other servers over the network.

To see if the NFS server has the ports listening:

[[email protected] ~]# rpcinfo -p server1

To see what shares are setup on the NFS server:

[[email protected] ~]# showmount -e server1

To mount the NFS share:

[[email protected] ~]# mount x.x.x.x:/share1 /mnt

To make it persistent across reboots:

[[email protected] ~]# vi /etc/fstab
...
x.x.x.x:/share /mnt nfs defaults 0 0
...

EXAM NOTE: You just need to know how to mount a share for the RHCSA. No real NFS server configuration is needed.

Lab

Mount the /share NFS share from server1, and add it to your fstab for persistence across reboots
[[email protected] ~]# mount -t nfs server1:/share /mnt
[[email protected] ~]# vim /etc/fstab
...
server1:/share  /mnt nfs defaults 0 0
...
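For reference, each fstab line has six fields: device, mount point, filesystem type, mount options, dump flag, and fsck pass number. A quick shell sketch pulling the lab's entry apart:

```shell
# Split an fstab entry into its six whitespace-separated fields
entry='server1:/share /mnt nfs defaults 0 0'
set -- $entry
dev=$1; mnt=$2; fstype=$3; opts=$4; dump=$5; pass=$6
echo "$fstype on $mnt (options: $opts)"
```

The trailing 0 0 disables dump backups and boot-time fsck, which is what you want for an NFS mount.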

VSFTPD

The default FTP server is vsftpd. The primary configuration file is:

/etc/vsftpd/vsftpd.conf

Two types of access are allowed:

1.  Anonymous : By default, these users are chrooted to /var/ftp for security.  (SELinux note: if you move this directory, you can copy the correct context from the default location with the --reference flag of chcon.)
2.  User : By default, users are not chrooted.

Individual users can be denied by placing their names in:

[[email protected] ~]# vim /etc/vsftpd/ftpusers

Lab

1.  Configure VSFTPd to only allow the user 'richard' to ftp to your server
[[email protected] ~]# yum install vsftpd
[[email protected] ~]# chkconfig vsftpd on

# Now, need to set selinux to allow users to write to their homedir
[[email protected] ~]# getsebool -a |grep ftp
[[email protected] ~]# setsebool -P ftp_home_dir on
[[email protected] ~]# setsebool -P sftpd_enable_homedirs on

# EXAM NOTE: DO NOT FORGET TO SPECIFY THE -P SO THE CHANGE IS PERSISTENT ACROSS REBOOTS!

# Now, set vsftpd to only allow richard in.  With userlist_deny=NO, only the
# users listed in the userlist_file are allowed to log in:
[[email protected] ~]# vi /etc/vsftpd/vsftpd.conf
...
userlist_enable=YES
userlist_deny=NO
...

[[email protected] ~]# vi /etc/vsftpd/user_list
# Remove everything and add
richard

# Test by:
[[email protected] ~]# ftp localhost

2.  Browse through the man page on vsftpd.conf
[[email protected] ~]# man vsftpd.conf

3.  Make sure vsftpd is started at boot time
[[email protected] ~]# chkconfig vsftpd on

RHCSA Study Guide – Objective 6 : Kernel Features


LVM

Quick note: There is a gui tool for all this if that is what you like:

[[email protected] ~]# yum install system-config-lvm
[[email protected] ~]# system-config-lvm

LVM abstracts the physical hardware into logical volumes that can be dynamically grown or shrunk and can span disparate physical devices. It simplifies hard drive management by hiding the details of the underlying storage devices.

There is a small amount of overhead at this layer, so you are going to take a slight performance hit.

LVM Terminology:

- Physical Volume (pv) - The partition or RAID device that provides the LVM space.
- Physical Extent (pe) - A chunk of disk space; defaults to 4M.  EXAM NOTE: You may need to change this on the test.
- Volume Group (vg) - A collection of physical volumes.
- Logical Volume (lv) - A grouping of physical extents from your volume group.  This is where you format your filesystem.

Gist of how this works:
pvcreate : Create a physical volume

[[email protected] ~]# pvcreate /dev/sda4

vgcreate : Create a volume group on PV

vgcreate VolGroup00 /dev/sda4  # This is where the extents are created.

lvcreate : Create a logical volume on VG

[[email protected] ~]# lvcreate -n myvol -L 10G VolGroup00 # This is where the extents are allocated.

Other commands:

vgextend
lvextend
lvresize
resize2fs
lvresize # Use this one when doing your resize:  
ex:  lvresize -r {-l (+extents) | -L (+size)} (lv)

When mounting your new fs, use: /dev/vg/lv.

Lab

EXAM NOTE: Most of this stuff will be on the test

1.  Add logical volume management on top of a new partition.  Use a physical extent size of 16MB.  Use fdisk to create the new partition.
[[email protected] ~]# pvcreate /dev/sda7
[[email protected] ~]# vgcreate -s 16 vg0 /dev/sda7
[[email protected] ~]# vgdisplay vg0
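A quick check on the extent math: with a 16MB physical extent size, a 5G logical volume consumes 320 extents, so you could specify the size by extent count instead.

```shell
# Extents needed for a 5 GiB LV when the PE size is 16 MiB
pe_mib=16
lv_gib=5
extents=$(( lv_gib * 1024 / pe_mib ))
echo "$extents extents"    # -> 320 extents
```

So `lvcreate -l 320 vg0` would be equivalent to `-L 5G` with this volume group.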

2.  Use half the available space for a logical volume formatted with ext4 and mounted persistently across reboots.
[[email protected] ~]# lvcreate -n myvol -L 5G vg0
[[email protected] ~]# ls -al /dev/vg0/myvol
[[email protected] ~]# ls -al /dev/mapper/vg0-myvol
[[email protected] ~]# ls -al /dev/dm-3
[[email protected] ~]# mkfs -t ext4 /dev/vg0/myvol
[[email protected] ~]# vi /etc/fstab
...
/dev/vg0/myvol /u03 ext4 defaults 1 2
...
[[email protected] ~]# mkdir /u03
[[email protected] ~]# mount /u03

3.  Take a snapshot of this logical volume and check the file system for errors
[[email protected] ~]# ls -al /u03
[[email protected] ~]# cp /var/log/* /u03/
[[email protected] ~]# lvcreate -s /dev/vg0/myvol -n snap-of-myvol -L 500M
[[email protected] ~]# ls -al /dev/vg0
[[email protected] ~]# lvdisplay vg0 # You will see your 2 logical volumes (and snapshot in 'source of')
[[email protected] ~]# mount /dev/vg0/snap-of-myvol /mnt  # This is how you mount that snapshot.

4.  Assuming none are found, reset the counter for days and mounts until a check is forced on the original file system.
[[email protected] ~]# umount /mnt
[[email protected] ~]# fsck /dev/vg0/snap-of-myvol  # If you see clean, then you should be okay
[[email protected] ~]# tune2fs /dev/vg0/snap-of-myvol # Shows more verifications
[[email protected] ~]# tune2fs -C 25 /dev/vg0/snap-of-myvol # Fake out the system to make it believe it has been mounted 25 times so it will actually fsck
[[email protected] ~]# fsck /dev/vg0/snap-of-myvol

5.  Copy some data onto the LV, then expand it and the filesystem by 50MB.  fsck, then re-mount the filesystem and verify its contents.  Also try reducing by 50MB
[[email protected] ~]# umount /u03
[[email protected] ~]# lvresize -r -L +100M /dev/vg0/myvol
[[email protected] ~]# lvchange -an /dev/vg0/myvol
[[email protected] ~]# lvchange -ay /dev/vg0/myvol
[[email protected] ~]# lvresize -r -L +100M /dev/vg0/myvol
[[email protected] ~]# lvremove /dev/vg0/snap-of-myvol # If this fails, just try a few more times.  It's a known issue.  You can resize until the snapshot is gone.
[[email protected] ~]# lvresize -r -L +100M /dev/vg0/myvol # Grow it
[[email protected] ~]# lvresize -r -L -300M /dev/vg0/myvol # Shrink it
# Or you can set it to 2G by
[[email protected] ~]# lvresize -r -L 2G /dev/vg0/myvol
# NOTE:  The -r flag runs resize2fs for you... so you won't have to resize the filesystem manually.

Swap space

Swap space allows the kernel to better manage limited system memory by copying segments of memory onto disk.

To create 2G of additional swap space using a file:

[[email protected] ~]# dd if=/dev/zero of=/swap01 bs=1024 count=2097152
[[email protected] ~]# mkswap /swap01
[[email protected] ~]# swapon /swap01
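The dd count above is simple arithmetic: with bs=1024 (1 KiB blocks), 2G of swap needs 2*1024*1024 blocks.

```shell
# Blocks of 1 KiB needed for a 2 GiB swap file
gib=2
count=$(( gib * 1024 * 1024 ))
echo "$count"    # -> 2097152, matching the dd count above
```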

If you no longer need the /swap01, just:

[[email protected] ~]# swapoff /swap01

Now list your active swap areas by:

[[email protected] ~]# cat /proc/swaps

Performance note:
Creating a swap device on LVM, or better yet on its own partition, gives much better performance. Setting it up in a file within an existing filesystem is really horrendous for performance.

Lab

1.  Add 500MB of swap space to your system using a device
[[email protected] ~]# lvcreate -n swap02 -L 500M vg0
[[email protected] ~]# mkswap /dev/vg0/swap02
[[email protected] ~]# swapon /dev/vg0/swap02
[[email protected] ~]# vi /etc/fstab
...
/dev/vg0/swap02 swap swap defaults 0 0
...

2.  Add 500MB of swap space to your system using a swap file
# Calculate how much 500M is:
[[email protected] ~]# echo $((1024*500))
512000
[[email protected] ~]# dd if=/dev/zero of=/swap01 bs=1024 count=512000
[[email protected] ~]# mkswap /swap01
[[email protected] ~]# swapon /swap01
[[email protected] ~]# vi /etc/fstab
...
/swap01 swap swap defaults 0 0
...

RHCSA Study Guide – Objective 5 : Users


Users and Groups

EXAM NOTE: On the test, I will likely have to link this machine to an LDAP or NIS server.

Users and groups define access to the OS through the file permission scheme. root is the superuser (UID 0). Every user is associated with at least one group, and secondary group memberships can exist too.

User info is stored in:

/etc/passwd
/etc/shadow 
/etc/group

/etc/passwd has 7 fields:

1.  Username
2.  Password placeholder (the actual hash lives in /etc/shadow)
3.  Numerical identifier for the account (UID)
4.  Numerical identifier for the primary group (GID)
5.  Comment field (aka GECOS field)
6.  Home directory
7.  Shell or program that executes when you log in
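To see the fields in action, here is a hypothetical passwd entry (the account name and paths are made up) split on colons:

```shell
# Split a sample /etc/passwd line into its 7 colon-separated fields
line='bart:x:1001:1001:Bart Simpson:/home/bart:/bin/bash'
oldIFS=$IFS; IFS=:
set -- $line
IFS=$oldIFS
user=$1; uid=$3; gid=$4; gecos=$5; home=$6; shell=$7
echo "user=$user uid=$uid gecos=$gecos shell=$shell"
```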

/etc/shadow has 2 important fields:

- login:encrypted_password: (the rest are password aging fields)
- The aging fields track dates for password resets, locks, etc.

/etc/group

- Fields: group name, password, GID, member list.
- Group passwords allow temporary membership in a group; they are rarely used and not set up by default.

Management tools:

1. useradd – add user. Most common option is -g to specify primary group, and -G to add secondary groups. Example:

[[email protected] ~]# useradd -g clowns -G trouble,simpson bart

2. usermod – Modify a user's settings. It takes pretty much all the same options as useradd. When adding a user to a secondary group, use -a so existing memberships are preserved:

[[email protected] ~]# usermod -a -G detention bart

3. userdel – Remove a user from the system. With -r, it’ll also remove the home directory and mail spool. Example:

userdel -r moe

4. groupadd – Add new group
5. groupmod – Mainly used to rename a group ex. groupmod -n mktg mkg
6. groupdel – Remove a group. Ex. groupdel microsoft
7. passwd – change passwords
a. root can change any user's password
b. can disable accounts, ex. passwd -l mary
c. set up password aging
d. time password resets
e. account disabling (or use chage)

Password aging

You can set max / min lifetimes for a user’s password.
example:

[[email protected] ~]# passwd -x days user

When a user's password has expired, you can set the number of days it can remain expired before the account is disabled completely:

[[email protected] ~]# passwd -i days user

User environment files

Files used for defaults when creating accounts:

1.  /etc/skel : default template for a newly added user's homedir
2.  /etc/profile : sets environment variables used by all users
3.  /etc/profile.d : contains scripts specific to certain RPMs
4.  /etc/bashrc : contains global aliases and system settings
5.  ~/.bashrc : contains the user's aliases and functions
6.  ~/.bash_profile : contains user environment settings, and can be set to automatically start programs at login

Lab

EXAM NOTE: ALL this stuff is on the test.

1.  Create a new group 'dev'.  Create a new user 'alice' as a member of the 'dev' group, with a description of 'Alice from Dev' and a default shell of '/bin/csh'.  Use the password command to set a password for alice, then log in as alice and verify her access.

[[email protected] ~]# groupadd dev
[[email protected] ~]# useradd -G dev -c "Alice from Dev" -s /bin/csh alice
[[email protected] ~]# passwd alice

2.  Set a maximum pw lifetime of 4 weeks for the alice account.  Look at the password, shadow, and group files

[[email protected] ~]# passwd -x 28 alice  # 4 weeks = 28 days

3.  Configure the users simon, linus, richard.  Set all their passwords to 'linux'
[[email protected] ~]# groupadd ru
[[email protected] ~]# useradd -G ru simon
[[email protected] ~]# useradd -G ru linus
[[email protected] ~]# useradd -G ru richard
[[email protected] ~]# passwd simon
[[email protected] ~]# passwd linus
[[email protected] ~]# passwd richard

4.  Make these users part of the ru group
See #3

5.  Configure the directory /home/linux so that each user from the ru group can read, create, and modify files:
[[email protected] ~]# mkdir /home/linux
[[email protected] ~]# chown -R root:ru /home/linux
[[email protected] ~]# chmod 775 /home/linux
[[email protected] ~]# chmod g+s /home/linux # Setgid: any files created in here inherit the group ru, so all group members can work on them.
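The same permissions can be set numerically: the setgid bit is the leading 2 in mode 2775. A sketch using a temp directory so it runs without root (your own primary group stands in for ru):

```shell
# chmod 775 followed by g+s is the same as numeric mode 2775
d=$(mktemp -d)
mkdir "$d/linux"
chmod 775 "$d/linux"
chmod g+s "$d/linux"           # setgid: new files inherit the directory's group
mode=$(stat -c %a "$d/linux")
echo "$mode"                   # -> 2775
rm -rf "$d"
```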

6.  Configure the directory /home/linux/work so that each user can create and read files, but only a file's owner can delete it.
[[email protected] ~]# mkdir /home/linux/work
[[email protected] ~]# chown root:ru /home/linux/work
[[email protected] ~]# chmod 775 /home/linux/work
[[email protected] ~]# chmod +t /home/linux/work # Sticky bit: only a file's owner (or root) can delete files here.

7.  Use ACL's to allow alice, not in 'ru', access to the work folder.
[[email protected] ~]# setfacl -R -m u:alice:rwx /home/linux/work
[[email protected] ~]# setfacl -m default:u:alice:rwx /home/linux/work # As new objects are created in here, they will inherit the acl's.

NIS and LDAP

NIS and LDAP Servers can be configured to centrally manage system and account info.

NIS – This is supposed to be a very basic management system.

[[email protected] ~]# yum install rpcbind ypbind
[[email protected] ~]# system-config-authentication  # <-- GUI tool for setting this up.  Does everything for you.
[[email protected] ~]# setup -> authentication configuration

It’ll modify:

/etc/sysconfig/network
/etc/yp.conf
/etc/nsswitch.conf
/etc/pam.d/system-auth

LDAP – Widely used, flexible directory database for storing Mac, UNIX, and Windows accounts, ACLs, and a whole lot more.

[[email protected] ~]# yum install nss-pam-ldapd
[[email protected] ~]# system-config-authentication

It’ll modify:

/etc/ldap.conf
/etc/openldap/ldap.conf
/etc/nsswitch.conf
/etc/pam.d/system-auth

EXAM NOTE: You just need to know how to configure the clients. Setting up the servers isn’t required for the RHCSA or RHCE.

A wildcard autofs map can automount any requested subdirectory from the NIS server (the & substitutes the matched key):

[[email protected] ~]# vim /etc/auto.nis
* server1:/nis/&

[[email protected] ~]# man 5 autofs

Side note:

All the kernel documentation that exists is available via:

[[email protected] ~]# yum install kernel-doc
[[email protected] ~]# cd /usr/share/doc/kernel-doc-*/Documentation

RHCSA Study Guide – Objective 4 : File Systems


Filesystem Administration

For partitioning, obviously use fdisk. Granted, apparently partprobe can no longer be used in RHEL6 to make the kernel re-read the partition table. That's great, cause I never used it before. So you may have to reboot for new partitions to show up. There is also a GUI-based tool called Disk Utility.

So once the partitioning is done, now create the filesystem:

[[email protected] ~]# mkfs.ext4 /dev/sda2

This can help show you if the filesystem is dirty (look at 'Filesystem state'):

[[email protected] ~]# tune2fs -l /dev/sda2

File system tools:

e2label : view/set filesystem label
tune2fs : view/set filesystem attributes
mount/umount : Mount and un-mount filesystems

EXAM NOTE: Be sure that anything you do on the filesystem, you add it to your /etc/fstab cause the system will be rebooted before it will be graded, so you need to ensure that it works properly upon reboot.

Lab

1.  Using fdisk, create a new 100MB partition
[[email protected] ~]# fdisk /dev/sda
n
e
default
default
n
default
+100M
w

2.  Create a new fs on this partition using ext4, a blocksize of 1k, and a reserve space of 2%.  Confirm settings with tune2fs.  Mount the new fs as /u01, and set it to mount at boot.
[[email protected] ~]# mkfs.ext4 -b 1024 -m 2 /dev/sda5
[[email protected] ~]# mount -t blah and update fstab accordingly

3.  Unmount the /u01 fs and force an integrity check.  Remount the /u01 filesystem.  Use e2label to set the fs label on /u01 to /u01.
[[email protected] ~]# umount /u01
[[email protected] ~]# fsck -f /dev/sda5  # NOTE:  You have to specify the -f to FORCE the fsck.  It will NOT run just because you asked for it.  
[[email protected] ~]# e2label /dev/sda5 /u01
[[email protected] ~]# mount -a
[[email protected] ~]# blkid # Just another way to verify your superblock settings.

EXAM NOTE: This may be on test, but it’ll probably be lvm stuff.

Automount (Autofs)

Autofs monitors configured directories and automatically mounts a filesystem when something accesses files there. In RHEL6 it will also unmount the filesystem after it hasn’t been touched for 5 minutes.

Its configuration is in:

/etc/auto.master

EXAM NOTE: Will need to know how to tell system which directories to monitor.

/etc/auto.master format:
path    config-file
ex.  /misc /etc/auto.misc

This tells automountd to ‘watch’ the /misc pathname for activity, and if activity is observed, consult /etc/auto.misc for instructions.

So for the basic syntax of the map file:

path    options           mount-device
nfs     -fstype=nfs,ro    nfsserver:/share/nfs

* This tells automountd to dynamically mount nfsserver:/share/nfs on /misc/nfs when that path is accessed. Autofs will mount stuff as needed.

Lab

1.  Configure your server to automatically mount /share as an NFS share from server1 to /server1/share when a process changes directories there.

[[email protected] ~]# vi /etc/auto.master
...
/server1        /etc/auto.server1
...

[[email protected] ~]# vi /etc/auto.server1
...
share 192.168.1.100:/share
...

[[email protected] ~]# service autofs restart

EXAM NOTE: I would imagine this will be on the test.

Extended Attributes

lsattr - list attributes
chattr - change attributes

EXAM NOTE: Red Hat will likely test on the -i (immutable) flag. So watch out for it.

ACL’s

getfacl
setfacl

You must have the acl mount option set. It works on / since Red Hat enables it by default, but you will have to specify it on any new partitions.

[[email protected] ~]# setfacl -m u:bob:w memo.txt  # set an ACL
[[email protected] ~]# setfacl -x g:ru memo.txt  # remove an ACL
[[email protected] ~]# setfacl -m default:u:bob:w somedir  # set a default ACL (only meaningful on a directory; new files inside inherit it)

EXAM NOTE: These WILL be on the test.

Quotas

Quotas allow you to limit filesystem resources for users. Basically disk quotas. To enable them, add the following to the mount options:

[[email protected] ~]# vi /etc/fstab
usrquota,grpquota

[[email protected] ~]# quotacheck -mavug
[[email protected] ~]# quotaon -a # Turn on quotas
[[email protected] ~]# edquota -u test # Set limits

EXAM NOTE: These will be on the test.

Lab

1.  Create a quota for the user student with:
- block soft limit of 100M and a hard limit of 150M
- soft inode limit of 30 and a hard inode limit of 100

2.  Create a quota for the group gdm so that its members collectively have:
- a block soft limit of 200M and a hard limit of 300M
- a soft inode limit of 50 and a hard inode limit of 200

Answers:

[[email protected] ~]# vi /etc/fstab # Add the following mount options
usrquota,grpquota

[[email protected] ~]# mount -o remount /home  # remount the filesystem that got the quota options
[[email protected] ~]# quotacheck -mavug
[[email protected] ~]# quotaon /home # Turn on quotas
[[email protected] ~]# edquota student # Set limits

# Interesting note:  To do the math quickly on the cli, do:
[[email protected] ~]# echo $((1024*1*100))
[[email protected] ~]# edquota -g gdm

# Set quotas accordingly.
[[email protected] ~]# repquota -g /home
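edquota drops you into an editor; for scripting, the same limits can be set non-interactively with setquota (argument order: block-soft block-hard inode-soft inode-hard). Block limits are in 1 KiB blocks, so the lab's MB figures convert like this (the command is echoed rather than run, since it needs root and a quota-enabled filesystem):

```shell
bsoft=$(( 100 * 1024 ))   # 100M soft block limit, in 1 KiB blocks
bhard=$(( 150 * 1024 ))   # 150M hard block limit
echo "setquota -u student $bsoft $bhard 30 100 /home"
```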

EXAM NOTE: This may be on the exam.

Disk Encryption – LUKS

Quick start for those interested:
http://www.cyberciti.biz/hardware/howto-linux-hard-disk-encryption-with-luks-cryptsetup-command/

[[email protected] ~]# cryptsetup luksFormat   # NOTE:  This will delete all your stuff on disk!!!
[[email protected] ~]# cryptsetup luksOpen  ...  
[[email protected] ~]# cryptsetup luksOpen /dev/sda5 crypt01

This will exist in /dev/mapper/mapname, ie. /dev/mapper/crypt01.
# NOTE: luksFormat will prompt for a passphrase.  A LUKS device has 8 key slots, so you can add up to 8 keys if you like.

Now you will be able to format it:

[[email protected] ~]# mkfs -t ext4 /dev/mapper/crypt01
[[email protected] ~]# mkdir /crypt01
[[email protected] ~]# mount /dev/mapper/crypt01 /crypt01

Now add entry into fstab:

[[email protected] ~]# vi /etc/fstab
...
/dev/mapper/crypt01 /crypt01 ext4 defaults 1 2
...

Once done, close it (remove the decrypted mapping) by:

[[email protected] ~]# cryptsetup luksClose /dev/mapper/crypt01

To make this stuff persistent at boot, edit /etc/crypttab as shown below.

1. To make a LUKS encrypted device available at boot time:

[[email protected] ~]# vim /etc/crypttab
mapname device keyfile options

2. To create a keyfile:

[[email protected] ~]# dd if=/dev/urandom of=/etc/keyfile bs=1k count=4
[[email protected] ~]# cryptsetup luksAddKey <device> /etc/keyfile

3. Add to crypttab

[[email protected] ~]# vi /etc/crypttab
...
crypt01 /dev/sda5 [/path/to/keyfile] [options]
...
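The keyfile creation in step 2 can be sanity-checked without touching /etc by writing the 4 KiB of random data to a temp file first (the temp path is a stand-in for /etc/keyfile):

```shell
kf=$(mktemp)
dd if=/dev/urandom of="$kf" bs=1k count=4 2>/dev/null
chmod 400 "$kf"                # keyfiles should be readable by root only
size=$(stat -c %s "$kf")
perms=$(stat -c %a "$kf")
echo "$size bytes, mode $perms"
rm -f "$kf"
```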

EXAM NOTE: Use keyfiles for test. But in practice, use a passphrase, but understand risks involved.

LAB

1.  Create a new 100M partition, then set up a LUKS-encrypted ext4 filesystem on it, which will be persistent across reboots.

2.  Reboot your machine to verify LUKS filesystems prompt for the passphrase and become accessible automatically after bootup

3.  Browse through the man pages on cryptsetup and crypttab

Answers:

1.  Create your 100M logical partition through fdisk
2.  Setup luks stuff
[[email protected] ~]# cryptsetup luksFormat /dev/sda5  # Answer YES, and type your passphrase
[[email protected] ~]# blkid # confirm the TYPE shows crypto_LUKS
[[email protected] ~]# cryptsetup luksOpen /dev/sda5 crypto  # now enter your password
3.  Now put fs on the device
[[email protected] ~]# mkfs -t ext4 /dev/mapper/crypto
[[email protected] ~]# blkid # You can now see both the raw device, and the crypted device
4.  Setup /etc/fstab
[[email protected] ~]# vi /etc/fstab
...
/dev/mapper/crypto /u02 ext4 defaults 1 2  # If the test is wonky, set it to 0 0 to prevent fsck.
...
5.  Mount it and you're done.
6.  Now create crypttab stuff

# Quick and dirty
[[email protected] ~]# echo -n test > /etc/keyfile  # You need the -n to prevent the newline character
[[email protected] ~]# cryptsetup luksClose /dev/mapper/crypto
[[email protected] ~]# cryptsetup luksOpen /dev/sda5 crypto -d /etc/keyfile # The -d flag forces the key to be used.

# Better way of setting up the key - If you don't want to use a passphrase at all, then do the luksFormat with -d to specify the keyfile.
[[email protected] ~]# dd if=/dev/urandom of=/etc/keyfile bs=1k count=4
[[email protected] ~]# cryptsetup luksAddKey /dev/sda5 /etc/keyfile
# add your original key password
[[email protected] ~]# chmod 400 /etc/keyfile
Now your key works, and so does your passphrase.

[[email protected] ~]# vi /etc/crypttab
crypto /dev/sda5 # If you leave it like this, it'll prompt you for pw at boot
crypto /dev/sda5 /etc/keyfile   # <-- This is how you should do it.

# The method above gives you a secure key plus a backup passphrase, so if you lose your key you aren't in trouble.

# How to verify all this:
# Confirm your device is unmounted
# This is basically just a way to verify your system will boot most likely.  
[[email protected] ~]# bash
[[email protected] ~]# source /etc/init.d/functions
[[email protected] ~]# init_crypto 1 # This is the function that processes crypttab.  It accepts 0 or 1.  Think of it like mount -a sorta.
[[email protected] ~]# ls -al /dev/mapper
[[email protected] ~]# mount -a
[[email protected] ~]# ls -al /u02

SELinux

Exam note: You can likely leave SELinux disabled or permissive. They will likely not test it at all. It'll be on the RHCE though.

SELinux sits on top of the kernel, telling the kernel what is permitted and what is not. There are 3 modes:

- Disabled : Extensions and hooks are not in the kernel
- Permissive : Extensions and hooks are there, but policy violations are logged and still allowed
- Enforcing : Everything is there, and blocking accordingly

Red Hat provides a policy set called targeted. The policies are grouped by service such as web, mail, ftp, db, etc. It's RHEL's way of making our lives a bit easier. By using these targeted policies, we usually just have to fix file/directory contexts or booleans.

So every process or object has a SELinux context:
- identity:role:domain/type

a.  What identities can use which roles
b.  What roles can enter which domains
c.  What domains can access which types.

Again, RHEL makes this easier and basically just uses the types, nothing else. We can take it further, but that is our choice to make that work.

So in short, SELinux tells the kernel whether or not to allow access to a given object.

If you want to view a context for a process, run:

[[email protected] ~]# ps -Z  # List process contexts
[[email protected] ~]# ls -Z  # List file contexts

To change the context of a file, use:

[[email protected] ~]# chcon -R --reference=/var/www/html file

So what does that mean: take the context from the reference location (/var/www/html) and apply it to my target (file). So if I put my docroot in /srv, to get SELinux to accept this directory, we have to change the context of /srv by:

[[email protected] ~]# chcon -R --reference=/var/www/html /srv

So as long as you know the default location where the contexts reside, you can cheat and just copy the context over to the new location.

All policy violations will be logged to /var/log/audit/audit.log as AVC (access vector cache) denials.
** setroubleshoot is a good tool for reading the output of that log.

Let's say you borked your entire setup; you can reapply the default contexts on all common pathnames. To restore things, you just do:

[[email protected] ~]# restorecon -R path path...

* NOTE: This will not affect new locations like /srv, because they are not in the default labeling. You can use semanage (you may have to install it) to add default contexts for new paths.

restorecon knows about the policies and defaults. chcon only changes things.. that is all chcon knows.
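To make the /srv context survive a relabel, record it as a default with semanage and then let restorecon apply it. A sketch of the commands (echoed rather than executed, since they need root and the semanage tooling installed; httpd_sys_content_t is the type used on the /var/www/html default):

```shell
# Record a default context for /srv, then apply it
path_re='/srv(/.*)?'          # semanage takes a regex covering the whole tree
echo "semanage fcontext -a -t httpd_sys_content_t '$path_re'"
echo "restorecon -Rv /srv"
```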

EXAM NOTE: restorecon will not really be needed on the RHCE.. unless you break something hardcore.

There is a graphical tool for SELinux: system-config-selinux.
NOTE: You MUST reboot the system when enabling or disabling SELinux, since that changes the kernel hooks and such.

The config for selinux: /etc/sysconfig/selinux
* This is where you set enforcing/permissive/disabled and the policy type (targeted). Just the startup mode stuff.

Commands:

getenforce - shows the current SELinux mode
setenforce - changes the mode at runtime:
setenforce 0 # Switch to permissive temporarily
setenforce 1 # Switch back to enforcing

NOTE: If the server is completely broken and you cannot even boot, you can disable enforcement by passing enforcing=0 on the kernel line in grub when booting.

Other troubleshooting tools:

policycoreutils
setroubleshoot

Booleans:
These are basically simple on/off flags for enabling/disabling optional policy behaviors:

[[email protected] ~]# getsebool -a |grep httpd  # or whatever.
[[email protected] ~]# setsebool -P blah
# IMPORTANT:  DO NOT FORGET TO SPECIFY THE -P TO MAKE THE CHANGE PERSISTENT ACROSS REBOOTS!

What are some practical uses for SELinux:

- It lets you relocate the default paths for things like where you store db, web content, etc.
- Booleans can allow things like public_html directories, ie: getsebool -a |grep httpd

Lab

1.  With SELinux enforcing, configure a website to be served from /srv
2.  Don't focus on advanced apache settings; accomplish this in the simplest way possible: change the global DocumentRoot
3.  Populate a simple index.html file.
4.  The setroubleshoot tool is useful here.  Don't be confused by any typos in the output.

Answer:
Easy enough, just get apache setup, then:

[[email protected] ~]# yum -y install setroubleshoot
# You will see the stuff needed in /var/log/audit/audit.log or /var/log/messages.
[[email protected] ~]# service auditd restart
[[email protected] ~]# chcon -R --reference=/var/www/html /srv

RHCSA Study Guide – Objective 3 : System Administration


Kickstart : The kickstart file is nothing more than a flat answer file.

Anaconda will look for this, and use it to install/configure your server. Stuff you can set are:

- Partitioning and filesystems
- Software packages
- Users, groups, passwords
- Features, networking and more

You can build them:

a.  From scratch
b.  From an existing kickstart file (Probably most common way)
c.  Using system-config-kickstart (Tool is very basic in nature)

How does this work exactly? When anaconda starts up, it looks for a ks option on the kernel command line, then fetches the kickstart file from the path given there.

EXAM NOTE: While this is an objective, it is not on the test, as there is no practical way to test for it.

Network Administration

2 ways to set network IPs:
– Static
– Dynamic

There are a few different methods of doing this.

1.  Type:  setup
2.  Edit the files directly:  vi /etc/sysconfig/network-scripts/ifcfg-ethX
3.  Using the GUI

Interesting note:
ifconfig – deprecated. Replaced with ip addr list.
The ip command also has ip route, ip link show, etc.

EXAM NOTE: This likely won’t make a difference on the test. Just make sure your settings are persistent, because Red Hat will reboot your server before they grade it!

To view routes:

[[email protected] ~]# ip route show

Consider differences between:

/etc/hosts
/etc/resolv.conf
/etc/nsswitch.conf

EXAM NOTE: Shouldn’t have to worry about those 3 really. They shouldn’t be broken.

When changing your IP, hostname, etc., watch out with RHEL6, as it is slightly different from RHEL5: RHEL6 uses NetworkManager. So what do most people do? Remove NetworkManager and go back to the old way using:

/etc/sysconfig/network-scripts/ifcfg-ethX
/etc/sysconfig/network

For DHCP configurations, setup the configuration as follows:

[root@server1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ethX
...
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
...

For STATIC configurations, setup the configuration as follows:

[root@server1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ethX
...
DEVICE=eth0
BOOTPROTO=none
IPADDR=x.x.x.x
NETMASK=xxx.xxx.xxx.xxx
ONBOOT=yes
GATEWAY=x.x.x.x  <--- Required here in RHEL6; no longer set in /etc/sysconfig/network
DNS1=8.8.8.8     <--- DNS1/DNS2 are also new to RHEL6
DNS2=8.8.4.4
DOMAIN=example.com
...

When configuring network settings, the recommended tools are nmcli or nm-connection-editor.

In practice, NetworkManager is a pain for server administration. If you want, you can remove it and just go back to using your normal tools:

[root@server1 ~]# service NetworkManager stop
[root@server1 ~]# chkconfig NetworkManager off

Network Manager is great for desktops, but when you are doing server administration, it just gets in the way.

EXAM NOTES: Your first tasks may be to reset root password and fix networking. So be comfortable with this stuff!

Cron

/etc/anacrontab defines the system's scheduled jobs : This is a more forgiving way to run cron. If anacron wakes up and realizes it missed a job, it'll go back and run it when it can. Basically this is useful if you are using a desktop and your system is asleep in the middle of the night; the next morning when you wake the laptop, it'll run its jobs within reason to get things caught back up intelligently. In theory this is pointless for servers, since they run 24/7, but it is useful for desktops.

EXAM NOTES: If a user can't run a cron job, check for a cron.deny file. The test is known to throw this at you, so this one sounds critical.

EXAM NOTE: Read man 5 crontab before taking the test, as it'll be on there.

Lab:

1.  Create a cronjob for the user root that checks the amount of available space on the system every Friday at 12:34PM

2.  Create a cron job as a regular user that lists the contents of /tmp at 3:54AM on Sunday, January 2.

Answer (Plus interesting note)

man 5 crontab : note: The day of a command's execution can be specified by two fields -- day of month, and day of week.  If both fields are restricted (ie, are not *), the command will be run when either field matches the current time.  For example, `30 4 1,15 * 5` would cause a command to be run at 4:30 am on the 1st and 15th of each month, plus every Friday.
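Putting that together with the lab above, hedged answers might look like this in the respective crontabs (df and ls are my guesses at "check space"/"list contents"; output is mailed to the crontab owner by default):

```shell
# Lab 1, in root's crontab: every Friday at 12:34 PM
34 12 * * 5 /bin/df -h

# Lab 2, in the regular user's crontab: 3:54 AM on January 2.
# Per the man page note, with both day fields restricted this
# ALSO fires every Sunday in January.
54 3 2 1 0 ls /tmp
```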

Syslog

RHEL6 now uses rsyslog as the system logger. They did this because rsyslog can send over TCP, and it also supports a cache function: if a message can't get out to the remote log host, rsyslog caches it until the message actually makes it to the remote device.

Log messages consist of 3 parts:

a.  Facility : where it came from in the OS, e.g. kernel, auth, etc.
b.  Level : the priority of the message
c.  Message : the actual log text itself

NOTE: local0-local7 are reserved for your own use/definition.

To use one, you basically just pipe your output through logger to one of these facilities and create a rule within /etc/rsyslog.conf to redirect the output to a file. Another note: Red Hat uses local7 for boot.log, as they have a built-in library which displays all the fun startup messages, e.g. HTTPD OK.

Config file: /etc/rsyslog.conf
Defines where all the messages should go. Not much different from /etc/syslog.conf

Interesting Note:
*.err root # If you set a username like this as the action (instead of a file), that user's console gets the message displayed on their screen. Def didn't know that.

When you use a .none (e.g. *.info;mail.none;authpriv.none /var/log/messages), this means everything at info and above will be caught except mail and authpriv, since those are set to .none.
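A quick sketch of using one of the local facilities end to end (the file path and tag are made up for illustration):

```shell
# 1. Add a rule to /etc/rsyslog.conf, then restart rsyslog:
#      local4.*    /var/log/myapp.log
# 2. Send a test message through logger:
logger -p local4.info -t myapp "application started"
# 3. The message should now land in /var/log/myapp.log
```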

EXAM NOTE: Most of this will not be applicable to the test. The test will have everything setup to its default locations.

Logrotate

Config file: /etc/logrotate.conf
Basically RHEL6 rotates once a week, with a total retention of 1 month. Interesting note: they no longer appear to do 1.gz, 2.gz, etc. They apparently append a date stamp to the end of the rotated file instead (the dateext option).

Extended configurations are stored in the /etc/logrotate.d directory. Each file in there is its own logrotate configuration. Just use this for your applications, etc.

You can force log rotation to go through by:

[root@server1 ~]# logrotate -vf /etc/logrotate.conf (or whatever file)

In the logrotate.d/blah file:

sharedscripts - Run the postrotate/prerotate scripts once for all matching logs, not once per log
postrotate - This is where you run your custom command, such as:  service httpd restart
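A sketch of a custom drop-in, e.g. for an application's log (the service name, retention values, and reload command are illustrative):

```shell
# /etc/logrotate.d/myapp (myapp is hypothetical)
/var/log/myapp/*.log {
    weekly
    rotate 4
    compress
    missingok
    sharedscripts
    postrotate
        service myapp reload > /dev/null 2>&1 || true
    endscript
}
```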

Troubleshooting

A lot of the troubleshooting objectives on the exam are permissions based, plus possibly some minor SELinux, as well as locating error messages in logs.

Some useful tools that will be used on test:

- top
- df -h
- ldd : list library dependencies
- ldconfig : Update the library location database.  Think mlocate.  It basically just reads /etc/ld.so.conf and /etc/ld.so.conf.d/* to create its indexes.

EXAM NOTE: Thankfully, the ldd/ldconfig stuff won't be on test really.

Nice level

The kernel uses these priorities when figuring out what needs to run when. Ranges are from -20 (highest priority) to 19 (lowest). It's not the actual priority, just a minor tunable, and it's not guaranteed to do anything since it's up to the process scheduler at the end of the day, but it at least tells the scheduler to try to give your process (e.g. a database) more CPU time.

So if you want to give a process higher priority, give it -20. Regular users cannot raise priority; only root can do that. Regular users can however lower their own priority down to 19, so their application will not impact others as hard.
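For example, to start a job at the lowest priority and confirm where it landed:

```shell
# Run a throwaway shell at nice 19 and print its own nice value
nice -n 19 sh -c 'ps -o ni= -p $$'

# Raising priority (negative values) requires root:
# renice -5 -p <pid>
```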

EXAM NOTE: Probably won't be on test.

Lab

1.  Take a few minutes to browse through the various logs in /var/log.  Familiarize yourself with the kinds of info available.

2.  Browse the man page for rsyslog.conf

3.  Find where the audit service keeps its log and add a corresponding new entry to your logrotate configuration.  Force a rotation to see everything work.

Answer:  it logs to /var/log/audit/*.  So create a new config in /etc/logrotate.d, e.g. copy the cups config to audit.
Modify the log entry.  Rerun: logrotate -vf /etc/logrotate.conf

4.  Remove the audit logrotate configuration and restart the auditd service

5.  Locate the PIDS of the highest memory and highest CPU utilization processes.  Play with their nice levels.

RHCSA Study Guide – Objective 2 : Packages


EXAM NOTE: Will need to know how to manually enable/create a repo

rpm -i : install
rpm -q : query the database
rpm -e : erase rpm.

EXAM NOTE: Probably won’t need to know much about rpm other than the above.

rpm -qa : Queries and lets you know everything that is installed.
rpm -qi : Queries the rpm database for pkg info.
rpm -qf : Determines which rpm a file is associated with.
rpm -ql : Queries the rpm database to determine which files are associated with an rpm.
rpm -Va : Verifies all installed packages.
rpm -Vi  : Verifies given package.
rpm -Va |grep ^..5  : This shows every file whose MD5 checksum differs from the packaged version, i.e. files someone has changed.  Can be useful!

EXAM NOTE: Asides from the last one, nothing here is likely going to be applicable for the test.

How to extract RPM Contents:

cd /temp/dir
rpm2cpio /path/to/package | cpio -i -d -m

EXAM NOTE: This will not be on test. If Apache is messed up, just reinstall it.

The wrapper for RPM is yum (Yellowdog Updater, Modified).

install : Install stuff
search : Find stuff  : ex.  yum search bash
provides : Find files within packages when yum search doesn't help : ex. yum provides sed
clean all : Useful if you broke your conf file and yum is broken.  ex. yum clean all

EXAM NOTE: The above stuff will be used on test.

How to set up a repository when Red Hat says: “All your packages can be found at:
http://www.example.com/directory/of/packages.” To do this, first set up the repo:

vi /etc/yum.repos.d/myrepo.repo
[myrepo]
name = my repo thingy
gpgcheck = 0
baseurl=http://www.example.com/directory/of/packages

Now list the available repos:

yum repolist

To import the GPG key if you like (note this is rpm, not yum):

rpm --import /url/to/gpg/key

EXAM NOTE: **IMPORTANT** The above will be on test! This is CRITICAL. Without this, you cannot do anything!

To use a local repo, you set the baseurl as follows:

baseurl=file:///path/to/your/file
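Related sketch: if you are handed a directory of RPMs instead of a URL, you can build the repo yourself (paths are illustrative; the createrepo package provides the command):

```shell
mkdir -p /opt/localrepo
cp /path/to/rpms/*.rpm /opt/localrepo/
createrepo /opt/localrepo

cat > /etc/yum.repos.d/local.repo <<'EOF'
[local]
name = my local repo
baseurl = file:///opt/localrepo
gpgcheck = 0
EOF

yum repolist
```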

RHCSA Study Guide – Objective 1 : Booting


Basic things to know on test:
– Reset root passwd via single user mode

Upstart

RHEL 6 is now using upstart (upstart.ubuntu.com). SystemV stuff is finally getting deprecated, but inittab is still there.

Quick notes about Upstart:

- Upstart allows for faster boot times, running jobs concurrently
- Upstart is just used for startup/shutdown/service management.
- The only place I'll notice this on test is in /etc/inittab.  Specifically for modifying the runlevels within /etc/inittab.
- Configuration files are in /etc/init : These tell the services how to handle events such as ctrl-alt-delete, runlevels, etc.

Runlevels:

0 : os stopped
1 : single
2 : multi-user, no NFS shares
3 : full multiuser, TUI
4 : unused
5 : full multiuser, GUI
6 : Reboot

Switch runlevels by:

telinit

(init is technically NOT the command, but works as legacy for now)

1. At bootup, kernel starts /sbin/init
2. The startup event causes /etc/init/rcS.conf to fire.
3. rcS.conf then greps out the /etc/inittab to get init level, and runs it.
4. This fires off /etc/init/rc.conf, which fires off the /etc/rc.d/rc script
5. All the startup scripts in /etc/rcX.d will get fired off accordingly.

EXAM NOTE: Nothing major is really needed for the exam on upstart. Obviously know run levels.

Init Scripts:
– To view all startup files for a run level, just look at:

ex.  runlevel 5 : /etc/rc5.d
ex.  runlevel 3 : /etc/rc3.d

Notes about identifiers:

- S means to start the service
- K means to kill the service
- After the S or K, there is a two digit number used for ordering the execution of the scripts, ie.  priority...
- When entering run levels, kill scripts run first, then start scripts
- All the scripts reside in /etc/init.d; they are just symlinked into the /etc/rcX.d directories
- All these symlinks are managed by chkconfig : When you just do:
chkconfig <service> on, it'll default the runlevels based off the defaults in the init script's chkconfig header.

EXAM NOTE: Know how to use chkconfig
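The usual invocations, with httpd as a stand-in service:

```shell
chkconfig --list httpd         # show on/off state per runlevel
chkconfig httpd on             # enable at the defaults from the init script header
chkconfig --level 35 httpd on  # enable only at runlevels 3 and 5
chkconfig httpd off            # disable at all runlevels
```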

Grub

Grub is responsible for the initial kernel load at boot time.

EXAM NOTE: Probably won’t need to do much with /boot/grub/grub.conf

To get into command mode when system is booting, type:

c : command mode
e : edit mode
a : append mode
esc : brings you back to previous menu
enter : boots the machine based off your selection

To get into single user mode, add an ‘s’ or the word ‘single’ to the end of the kernel command line.

EXAM NOTE: This WILL be on test. Make sure you know how to reset lost root passwd
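The edit itself is tiny. Press ‘a’ (or ‘e’ on the kernel line) at the grub menu and append to the end; the kernel version and root device shown here are just examples:

```shell
# Original kernel line (version/paths illustrative):
kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet
# After appending for single user mode:
kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet single
```

Single user mode drops you to a root shell without a password prompt, so resetting the password is then just passwd.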

Lab:

1.  Reboot machine into single user mode and reset root passwd
2.  Review a few of the init.d scripts to get familiar
3.  Review the configuration files in /etc/init.

Server monitoring script

Without an agent based monitoring system, monitoring your server's internals for items such as CPU, memory, storage, and processes becomes very difficult without manual checking. There are many reputable monitoring services on the web such as Pingdom (www.pingdom.com), and most hosting providers offer a monitoring system, but they do not provide an agent. Therefore, you can only do basic external checks such as ping, port, and http content checks. There is no way to report if your MySQL replication has failed, some critical process has stopped running, or if you're about to max out your / partition.

This simple bash script, located on GitHub, is meant to complement these types of monitoring services. Just drop the script into a web accessible directory, configure a few options and thresholds, set up a URL content check that looks at the status page searching for the string ‘OK’, and then you can rest easy at night knowing your monitoring service will alert you if any of the script's conditions are triggered.

Security note: To avoid revealing information about your system, it is strongly recommended that you place this and all web based monitoring scripts behind an htaccess file with authentication, whitelisting your monitoring servers' IP addresses if they are known.

Features

– Memory Check
– Swap Check
– Load Check
– Storage Check
– Process Check
– Replication Check

Configuration

The currently configurable options and thresholds are listed below:

# Status page
status_page=/var/www/system-health-check.html

# Enable / Disable Checks
memory_check=off
swap_check=on
load_check=on
storage_check=on
process_check=on
replication_check=off

# Configure partitions for storage check
partitions=( / )

# Configure process(es) to check
process_names=( httpd mysqld postfix )

# Configure Thresholds
memory_threshold=99
swap_threshold=80
load_threshold=10
storage_threshold=80
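As an illustration of the kind of logic these checks boil down to (a simplified sketch, not the actual script's code):

```shell
#!/bin/bash
# Simplified storage check: alarm when any listed partition exceeds the threshold
storage_threshold=80
partitions=( / )

for part in "${partitions[@]}"; do
    # df -P keeps output on one line; field 5 is the use percentage, e.g. "42%"
    usage=$(df -P "$part" | awk 'NR==2 {gsub("%","",$5); print $5}')
    if [ "$usage" -ge "$storage_threshold" ]; then
        echo "ALARM : $part is at ${usage}% (threshold ${storage_threshold}%)"
    else
        echo "OK : $part is at ${usage}%"
    fi
done
```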

Implementation

Download script to desired directory and set it to be executable:

cd /root
git clone https://github.com/stephenlang/system-health-check
chmod 755 system-health-check/system-health-check.sh

After configuring the tunables in the script (see above), create a cron job to execute the script every 5 minutes:

crontab -e
*/5 * * * * /root/system-health-check/system-health-check.sh

Now configure a URL content check with your monitoring provider's tools to check the status page, searching for the string “OK”. Below are two examples:

http://1.1.1.1/system-health-check.html
http://www.example.com/system-health-check.html

Testing

It is critical that you test this monitoring script before you rely on it. Bugs always exist somewhere, so test this before you implement it on your production systems! Here are some basic ways to test:

1. Configure all the thresholds really low so they will create an alarm. Manually run the script or wait for the cronjob to fire it off, then check the status page to see if it reports your checks are now in alarm.

2. To test out the process monitoring (assuming the system is not in production), configure the processes you want the script to check, then stop the process you are testing, and check the status page after the script runs to see if it reports your process is not running.

3. To test out the replication monitoring (assuming the system is not in production), log onto your MySQL slave server and run ‘stop slave;’. Then check the status page after the script runs to see if it reports an error on replication.

Resetting a MySQL root password

Resetting a forgotten MySQL root password is a pretty straightforward task, assuming you have sudo or root access to the server. It is important to note that by performing this procedure, MySQL will be down until you complete everything. So be sure to do this during a time when it will not impact your business.

First, stop the MySQL service

service mysqld stop # RHEL clones
service mysql stop # Ubuntu / Debian distro

Now it's time to bring MySQL back up in safe mode. This means we will start MySQL, but simply skip the user privileges (grant) tables:

sudo mysqld_safe --skip-grant-tables &

Time to log into MySQL and switch to the MySQL database:

mysql -uroot
use mysql;

Now reset the root password and flush privileges:

update user set password=PASSWORD("enternewpasswordhere") where User='root';
flush privileges;
quit

Once you have completed that, it's time to take MySQL out of safe mode and start it back up normally. NOTE: Be sure MySQL is fully stopped before trying to start the service again. You may need to kill -9 the process if it's being stubborn.

Stop MySQL:

service mysqld stop # RHEL clones
service mysql stop # Ubuntu / Debian distro

Verify that no MySQL processes are still running. If any are, you may have to kill -9 them:

ps aux |grep mysql

Start MySQL back up:

service mysqld start # RHEL clones
service mysql start # Ubuntu / Debian distro

Finally, test out logging in to ensure it's now working properly:

mysql -uroot -p

Keeping multiple web servers in sync with rsync

People looking to create a load balanced web server solution often ask, how can they keep their web servers in sync with each other? There are many ways to go about this: NFS, lsync, rsync, etc. This guide will discuss a technique using rsync that runs from a cron job every 10 minutes.

There will be two different options presented, pulling the updates from the master web server, and pushing the updates from the master web server down to the slave web servers.

Our example will consist of the following servers:

web01.example.com (192.168.1.1) # Master Web Server
web02.example.com (192.168.1.2) # Slave Web Server
web03.example.com (192.168.1.3) # Slave Web Server

Our master web server is going to be the single point of truth for the web content of our domain. Therefore, the web developers will only modify content on the master web server, and will let rsync handle keeping all the slave nodes in sync.

There are a few prerequisites that must be in place:
1. Confirm that rsync is installed.
2. If pulling updates from the master web server, all slave servers must be able to SSH to the master server using an SSH key with no passphrase.
3. If pushing updates from the master down to the slave servers, the master server must be able to SSH to the slave web servers using an SSH key with no passphrase.

To be proactive about monitoring the status of the rsync job, both scripts posted below allow you to perform an http content check against a status file to see if the string “SUCCESS” exists. If something other than SUCCESS is found, that means the rsync script may have failed and should be investigated. An example of a URL to monitor would be: 192.168.1.1/datasync.status

Please note that the assumption is being made that your web server will serve files that are placed in /var/www/html/. If not, please update the $status variable accordingly.
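The check itself is just a string match. Locally it boils down to the following (the monitoring provider does the same thing over HTTP; the path here is illustrative):

```shell
status=/tmp/datasync.status
echo "SUCCESS : rsync completed successfully" > "$status"

if grep -q SUCCESS "$status"; then
    echo "rsync job OK"
else
    echo "rsync job FAILED - investigate"
fi
```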

Using rsync to pull changes from the master web server:

This is especially useful if you are in a cloud environment and scale your environment by snapshotting an existing slave web server to provision a new one. When the new slave web server comes online, and assuming it already has the SSH key in place, it will automatically grab the latest content from the master server with no interaction needed by yourself except to test, then enable in your load balancer.

The disadvantage of using the pull method for your rsync updates comes into play when you have multiple slave web servers all running the rsync job at the same time. This can put a strain on the master web server's CPU, which can cause performance degradation. However, if you have under 10 servers, or if your site does not have a lot of content, then the pull method should work fine.

Below will show the procedure for setting this up:

1. Create SSH keys on each slave web server:

ssh-keygen -t dsa

2. Now copy the public key generated on the slave web server (/root/.ssh/id_dsa.pub) and append it to the master web server's /root/.ssh/authorized_keys2 file.
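Steps 1-2 condensed (ssh-copy-id will do the append for you if it is available; web01 is the master from the example; note ssh-copy-id writes to authorized_keys rather than authorized_keys2):

```shell
# On each slave (web02, web03):
ssh-keygen -t dsa -N "" -f /root/.ssh/id_dsa
ssh-copy-id root@web01   # or append id_dsa.pub to the master's authorized_keys2 manually
```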

3. Test ssh’ing in as root from the slave web server to the master web server
# On web02

ssh root@web01

4.  Assuming you were able to log in to the master web server cleanly, it's time to create the rsync script on each slave web server. Please note that I am assuming your sites' document roots are stored in /var/www/vhosts. If not, please change the script accordingly and test!

mkdir -p /opt/scripts/
vi /opt/scripts/pull-datasync.sh

#!/bin/bash
# pull-datasync.sh : Pull site updates down from master to front end web servers via rsync

status="/var/www/html/datasync.status"

if [ -d /tmp/.rsync.lock ]; then
    echo "FAILURE : rsync lock exists : Perhaps there is a lot of new data to pull from the master server. Will retry shortly" > $status
    exit 1
fi

if ! /bin/mkdir /tmp/.rsync.lock; then
    echo "FAILURE : can not create lock" > $status
    exit 1
else
    echo "SUCCESS : created lock" > $status
fi

echo "===== Beginning rsync ====="

# Test the exit status directly; rsync reports failures with several
# different non-zero codes, not just 1
if ! nice -n 20 /usr/bin/rsync -axvz --delete -e ssh root@web01:/var/www/vhosts/ /var/www/vhosts/; then
    echo "FAILURE : rsync failed. Please refer to solution documentation" > $status
    exit 1
fi

echo "===== Completed rsync ====="

/bin/rm -rf /tmp/.rsync.lock
echo "SUCCESS : rsync completed successfully" > $status

Be sure to set executable permissions on this script so cron can run it:

chmod 755 /opt/scripts/pull-datasync.sh

Using rsync to push changes from the master web server down to slave web servers:

Using rsync to push changes from the master down to the slaves also has some important advantages. First off, the slave web servers will not have SSH access to the master server. This could become critical if one of the slave servers is ever compromised and tries to gain access to the master web server. The next advantage is that the push method does not cause a serious CPU strain, because the master runs rsync against the slave servers one at a time.

The disadvantage here would be if you have a lot of web servers syncing content that changes often. It's possible that your updates will not be pushed down to the web servers as quickly as expected, since the master server syncs the servers one at a time. So be sure to test this out to see if the results work for your solution. Also, if you are cloning your servers to create additional web servers, you will need to update the rsync configuration accordingly to include the new node.

Below will show the procedure for setting this up:

1. To make administration easier, it's recommended to set up the /etc/hosts file on the master web server to include a list of all the servers' hostnames and internal IPs.

vi /etc/hosts
192.168.1.1 web01 web01.example.com
192.168.1.2 web02 web02.example.com
192.168.1.3 web03 web03.example.com

2. Create SSH keys on the master web server:

ssh-keygen -t dsa

3. Now copy the public key generated on the master web server (/root/.ssh/id_dsa.pub) and append it to each slave web server's /root/.ssh/authorized_keys2 file.

4. Test ssh’ing in as root from the master web server to each slave web server
# On web01

ssh root@web02

5. Assuming you were able to log in to the slave web servers cleanly, it's time to create the rsync script on the master web server. Please note that I am assuming your sites' document roots are stored in /var/www/vhosts. If not, please change the script accordingly and test!

mkdir -p /opt/scripts/
vi /opt/scripts/push-datasync.sh

#!/bin/bash
# push-datasync.sh - Push site updates from master server to front end web servers via rsync

# Slave web servers to sync (per the example above, web01 is the master)
webservers=(web02 web03)
status="/var/www/html/datasync.status"

if [ -d /tmp/.rsync.lock ]; then
    echo "FAILURE : rsync lock exists : Perhaps there is a lot of new data to push to front end web servers. Will retry soon." > $status
    exit 1
fi

if ! /bin/mkdir /tmp/.rsync.lock; then
    echo "FAILURE : can not create lock" > $status
    exit 1
else
    echo "SUCCESS : created lock" > $status
fi

for i in "${webservers[@]}"; do

    echo "===== Beginning rsync of $i ====="

    # Test the exit status directly; rsync reports failures with several
    # different non-zero codes, not just 1
    if ! nice -n 20 /usr/bin/rsync -avzx --delete -e ssh /var/www/vhosts/ root@$i:/var/www/vhosts/; then
        echo "FAILURE : rsync failed. Please refer to the solution documentation" > $status
        exit 1
    fi

    echo "===== Completed rsync of $i ====="
done

/bin/rm -rf /tmp/.rsync.lock
echo "SUCCESS : rsync completed successfully" > $status

Be sure to set executable permissions on this script so cron can run it:

chmod 755 /opt/scripts/push-datasync.sh

Now that you have the script in place and tested, it's time to set it up to run automatically via cron. For the example here, I am setting up cron to run the script every 10 minutes.

If using the push method, put the following into the master web server's crontab:

crontab -e
# Datasync script
*/10 * * * * /opt/scripts/push-datasync.sh

If using the pull method, put the following into each slave web server's crontab:

crontab -e
# Datasync script
*/10 * * * * /opt/scripts/pull-datasync.sh