Setting up MySQL Master Slave Replication with LVM snapshots

This article is part of a series of setting up MySQL replication. As with most things, there is always more than one way to do something. In the case of setting up MySQL replication, or rebuilding it, some options are better than others depending on your use case.

The articles in the series are below:
Setting up MySQL Replication using mysqldump
Setting up MySQL Replication using Percona XtraBackup
Setting up MySQL Replication using Rsync
Setting up MySQL Replication using LVM snapshots

This guide will document how to set up MySQL Master / Slave Replication using LVM snapshots. So why use LVM snapshots for setting up or rebuilding MySQL Replication? If your databases and tables are large, you can greatly limit the downtime felt by the application by using LVM snapshots. This should still be performed during a scheduled maintenance window as you will be flushing the tables with READ LOCK.

Some prerequisites before proceeding are below:

1. Confirming that your datadir is indeed configured on a partition running LVM:

[root@db01 ~]# lvs

2. Confirming that you have enough free space in your Volume Group for the LVM snapshot:

[root@db01 ~]# vgs

So in the sections below, we’ll configure the Master and Slave MySQL servers for replication, then we’ll use an LVM snapshot for syncing the databases over to db02.

Setup the Master MySQL server

Configure the my.cnf on the Master by adding the following under the [mysqld] section:

log-bin=/var/lib/mysql/db01-binary-log
expire-logs-days=5
server-id=1

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db01 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db01 ~]# service mysql restart

Finally, grant access to the Slave so it has access to communicate with the Master:

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.x.x.x' IDENTIFIED BY 'your_password';

Setup the Slave MySQL server

Configure the my.cnf on the Slave by adding the following under the [mysqld] section:

relay-log=/var/lib/mysql/db02-relay-log
relay-log-space-limit = 4G
read-only=1
server-id=2

Then restart MySQL to apply the settings:

# CentOS / RHEL:
[root@db02 ~]# service mysqld restart

# Ubuntu / Debian:
[root@db02 ~]# service mysql restart

Use LVM snapshots for syncing over the databases

For reference, the rest of this guide will refer to the servers as follows:

db01 - Master MySQL Server
db02 - Slave MySQL Server

On db02 only, rename the existing MySQL datadir, and create a fresh folder:

[root@db02 ~]# service mysqld stop
[root@db02 ~]# mv /var/lib/mysql /var/lib/mysql.old
[root@db02 ~]# mkdir /var/lib/mysql
[root@db02 ~]# chown mysql:mysql /var/lib/mysql

On db01 only, create a snapshot script for MySQL to ensure things move quickly and downtime stays limited:

[root@db01 ~]# vim /root/lvmscript.sql
FLUSH LOCAL TABLES;
FLUSH LOCAL TABLES WITH READ LOCK;
SHOW MASTER STATUS;
SYSTEM lvcreate -L 10G -s vglocal00/mysql00 -n mysqlsnapshot00 3>&-
SHOW MASTER STATUS;
UNLOCK TABLES;

On db01 only, during a scheduled maintenance window, run the script to create the LVM snapshot, and be sure to take note of master status information as that will be needed later:

[root@db01 ~]# mysql -t < /root/lvmscript.sql 
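For reference, the SHOW MASTER STATUS portion of the output will look something like the illustrative example below; the binary log file name and position will differ on your system, so record whatever yours reports:

+------------------------+----------+--------------+------------------+
| File                   | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------------+----------+--------------+------------------+
| db01-binary-log.000001 |  1456783 |              |                  |
+------------------------+----------+--------------+------------------+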

On db01 only, mount the snapshot, sync over the contents to db02, and then remove the snapshot since it will no longer be needed:

[root@db01 ~]# mount /dev/mapper/vglocal00-mysqlsnapshot00 /mnt
[root@db01 ~]# rsync -axvz --delete -e ssh /mnt/ root@db02:/var/lib/mysql/
[root@db01 ~]# umount /mnt
[root@db01 ~]# lvremove vglocal00/mysqlsnapshot00

On db02 only, remove the stale mysql.sock file, start up MySQL, configure db02 to connect to db01 using the information from the SHOW MASTER STATUS output you noted on db01 previously, and start replication:

[root@db02 ~]# rm /var/lib/mysql/mysql.sock
[root@db02 ~]# service mysqld start
[root@db02 ~]# mysql
mysql> CHANGE MASTER TO MASTER_HOST='10.x.x.x', MASTER_USER='repl', MASTER_PASSWORD='your_password', MASTER_LOG_FILE='db01-binary-log.000001', MASTER_LOG_POS=1456783;
mysql> start slave;
mysql> show slave status\G
...
        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
...

If those values are the same as what is shown above, then replication is working properly! Perform a final test by creating a test database on the Master MySQL server, then check to ensure it shows up on the Slave MySQL server. Afterwards, feel free to drop that test database on the Master MySQL server.
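A quick sketch of that test is below; the database name repl_test is just an example:

# On db01 (Master):
mysql> CREATE DATABASE repl_test;

# On db02 (Slave), confirm the database replicated over:
mysql> SHOW DATABASES LIKE 'repl_test';

# Back on db01 (Master), clean up:
mysql> DROP DATABASE repl_test;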

From here, you should be good to go! Just be sure to set up a monitoring check to ensure that replication is always running and doesn’t encounter any errors. A very basic MySQL Replication check can be found here:
https://github.com/stephenlang/system-health-check
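
If you just want something quick to run by hand or from cron in the meantime, a minimal sketch is below. It assumes the local root user can connect to MySQL without a password (for example via /root/.my.cnf):

[root@db02 ~]# mysql -e "SHOW SLAVE STATUS\G" | egrep "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"

Anything other than Yes for the two running threads warrants a closer look.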

VMware disk expansion

There are a number of ways to add more disk space to a VM on VMware. This guide will discuss 5 different ways to handle it, which are:

- Expand existing disk to LVM (Not previously expanded)
- Expand existing disk to LVM (Previously expanded)
- Expand existing disk with LVM not in use (Dangerous)
- Add a new disk into an existing LVM Volume Group
- Add a new disk as a separate mount point

Many VMware solutions set their disk labels to MBR, so for this guide, we’ll be making extensive use of fdisk. If your disk label is set to GPT, please use caution when following this guide!
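
If you are not sure which disk label is in use, you can confirm it before going any further (assuming the disk in question is /dev/sda):

[root@web01 ~]# parted -s /dev/sda print | grep -i "partition table"

A value of msdos means MBR, while gpt means GPT.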

As with any disk expansion operations, always be sure you have solid backups in place in case something goes wrong!

Expand existing disk to LVM (Not previously expanded)

Assuming the VM’s disk has already been expanded within VMware, you have to rescan the specific sdX device so the new size is detected. You can do this by:

[root@web01 ~]# echo 1 > /sys/block/sdX/device/rescan
or
[root@web01 ~]# echo 1 > /sys/bus/scsi/drivers/sd/SCSI-ID/block/device/rescan
or
[root@web01 ~]# echo 1 > /sys/bus/scsi/drivers/sd/SCSI-ID/block\:sdX/device/rescan

Whether you added a new disk or expanded an existing one, you can usually detect the change by:

[root@web01 ~]# dmesg|tail
...
sd 2:0:0:0: [sda] 67108864 512-byte logical blocks: (34.3 GB/32.0 GiB)
sd 2:0:0:0: [sda] Cache data unavailable
sd 2:0:0:0: [sda] Assuming drive cache: write through
sda: detected capacity change from 17179869184 to 34359738368

Now you need to determine whether the filesystem can actually be expanded. Check for the 'resize_inode' flag by:

[root@web01 ~]# tune2fs -l /dev/vglocal00/lvroot | grep -i "^filesystem features"

Next, check whether the storage has already increased in size:

[root@web01 ~]# fdisk -cul /dev/sda
[root@web01 ~]# pvs
[root@web01 ~]# vgs
[root@web01 ~]# lvs
[root@web01 ~]# df -h

Once the root disk has been expanded in VMware, rescan the disk which should now show additional sectors have been added:

[root@web01 ~]# echo 1 > /sys/block/sda/device/rescan
[root@web01 ~]# fdisk -cul /dev/sda

Now we need to add a partition for the new space. As fdisk only allows 4 primary partitions, we are going to use extended partitions so we can create logical partitions to hold the new space:

[root@web01 ~]# fdisk -cu /dev/sda
p
n
e (extended)
3
enter
enter
n
l (logical)
enter
enter
p
w

Now rescan the partitions so the system can detect the new one without rebooting:

[root@web01 ~]# ls /dev/sda*
[root@web01 ~]# partx -l /dev/sda
[root@web01 ~]# partx -v -a /dev/sda # There may be some errors here, ignore.
[root@web01 ~]# partx -l /dev/sda
[root@web01 ~]# ls /dev/sda*

Now setup LVM on that new partition, and add it to the existing volume group and expand the logical volume:

[root@web01 ~]# pvcreate /dev/sda5
[root@web01 ~]# vgs
[root@web01 ~]# vgextend YOUR_VG_NAME /dev/sda5
[root@web01 ~]# pvdisplay /dev/sda5 | grep Free
  Free PE               4095
[root@web01 ~]# lvextend --extents +4095 -n /dev/YOUR_VG_NAME/lv_root

Finally, expand the filesystem doing an online resize:

[root@web01 ~]# resize2fs /dev/YOUR_VG_NAME/lv_root
[root@web01 ~]# df -h |grep root

Expand existing disk to LVM (Previously expanded)

If you have a VM where a previous expansion already took place, or one that is already using an extended partition with the first (and only) logical partition taking up all the room, then this is the section you want.

Once the root disk has been expanded in VMware, rescan the disk which should now show additional sectors have been added:

# Print out disk information
[root@web01 ~]# fdisk -cul /dev/sda

# Then rescan the device
[root@web01 ~]# echo 1 > /sys/block/sdX/device/rescan
or
[root@web01 ~]# echo 1 > /sys/bus/scsi/drivers/sd/SCSI-ID/block/device/rescan
or
[root@web01 ~]# echo 1 > /sys/bus/scsi/drivers/sd/SCSI-ID/block\:sdX/device/rescan

# Print out disk information to confirm it detected the additional space
[root@web01 ~]# fdisk -cul /dev/sda

Expand the existing extended partition:

[root@web01 ~]# parted /dev/sda
unit s
pri
  Number  Start      End        Size       Type      File system  Flags
   1      2048s      41431039s  41428992s  primary                lvm
   2      41431040s  41943039s  512000s    primary   ext3         boot
   3      41943040s  52428799s  10485760s  extended
   5      41945088s  52428799s  10483712s  logical
resize 3 41943040s -1  (Use whatever the extended partition's start value is, and -1 to grow it to the end of the disk)
pri
quit

Now partition the new space, setup LVM, expand and resize the filesystem:

[root@web01 ~]# fdisk -cu /dev/sda
p
n
l (logical)
enter
enter
p
w

[root@web01 ~]# ls -hal /dev/sda*
[root@web01 ~]# partx -l /dev/sda
[root@web01 ~]# partx -v -a /dev/sda # There may be some errors here, ignore.
[root@web01 ~]# partx -l /dev/sda
[root@web01 ~]# ls -hal /dev/sda*

[root@web01 ~]# pvcreate /dev/sda6 # Or whatever the new partition was
[root@web01 ~]# vgs
[root@web01 ~]# vgextend YOUR_VG_NAME /dev/sda6
[root@web01 ~]# pvdisplay /dev/sda6 | grep Free
  Free PE               4607
[root@web01 ~]# lvextend --extents +4607 -n /dev/YOUR_VG_NAME/lv_root
[root@web01 ~]# df -h
[root@web01 ~]# resize2fs /dev/YOUR_VG_NAME/lv_root
[root@web01 ~]# df -h

Expand existing disk with LVM not in use (Dangerous)

This section assumes that LVM was never setup for the disk. Therefore you would need to recreate the partitions to use the new space.

Re-creating partitions is a high risk operation as there is the potential for data loss, so make sure you have known good backups you can restore from. And at the very least, snapshot your VM! This procedure also requires a reboot of the VM. Ideally, you should first check to see if an additional disk can simply be mounted to a different mount point instead.

First, list the current partitions:

[root@web01 ~]# fdisk -l

Now within VMware or on the SAN presenting the disk, expand the disk. Once that is done, we need to rescan the volume and confirm the new space:

[root@web01 ~]# echo 1 > /sys/block/sda/device/rescan
[root@web01 ~]# fdisk -l
     Device Boot      Start         End      Blocks   Id  System
  /dev/sda1   *           1          13      104391   83  Linux
  /dev/sda2              14         274     2096482+  83  Linux
  /dev/sda3             275         796     4192965   82  Linux swap / Solaris
  /dev/sda4             797        2610    14570955    5  Extended
  /dev/sda5             797        2610    14570923+  83  Linux

Using the example above, you will notice that the extended and logical partitions (4 and 5) end on the same cylinder (2610), so both need to be recreated to use the new space. To help you recover in the event everything goes wrong, list out the following information and store it somewhere safe so you can refer to it later:

[root@web01 ~]# fdisk -l /dev/sda
[root@web01 ~]# df -h
[root@web01 ~]# cat /etc/fstab
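
Since the disk you are about to modify also holds that information, one approach worth considering is capturing the output to a file and copying it off the server first; the file name and destination host below are just examples:

[root@web01 ~]# (fdisk -l /dev/sda; df -h; cat /etc/fstab) > /root/pre-expand-info.txt
[root@web01 ~]# scp /root/pre-expand-info.txt user@some-other-server: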

Now hold on to your butts and expand the disk by deleting the partitions (which *shouldn’t* affect the underlying data), then recreating the partitions with the new sizes:

[root@web01 ~]# fdisk /dev/sda
d
5
d
4
n
e
(Pick the original extended partition starting position; it should be the default, so just hit enter)
(Pick the new, much larger ending cylinder; the default uses all space until the end, so just hit enter)
n
5 (fdisk should default to logical partition 5 here)
(Pick the original logical partition starting point; it should default to the next cylinder, so just hit enter)
(Pick the new, much larger ending cylinder; the default uses all space until the end, so just hit enter)
p (Double check everything, and make sure extended partition 4 and logical partition 5 have the SAME starting cylinders as before)
w

Now reboot the system so it can use the new space:

[root@web01 ~]# shutdown -r now

Then expand the filesystem:

[root@web01 ~]# df -h | grep sda5
  /dev/sda5              14G  2.6G   11G  21% /
[root@web01 ~]# resize2fs /dev/sda5
[root@web01 ~]# df -h | grep sda5
  /dev/sda5              19G  2.6G   15G  15% /

Add a new disk into an existing LVM Volume Group

This section assumes the new disk is the second disk on the VM, and is enumerated as /dev/sdb. The disk will be added to an existing Volume Group, and we’ll use all the new space on the disk for the volume group and logical volume.

[root@web01 ~]# parted -s -- /dev/sdb mklabel gpt
[root@web01 ~]# parted -s -a optimal -- /dev/sdb mkpart primary 2048s -1
[root@web01 ~]# parted -s -- /dev/sdb align-check optimal 1
[root@web01 ~]# parted /dev/sdb set 1 lvm on
[root@web01 ~]# parted /dev/sdb unit s print
[root@web01 ~]# pvcreate --metadatasize 250k /dev/sdb1
[root@web01 ~]# vgs
[root@web01 ~]# vgextend YOUR_VG_NAME /dev/sdb1
[root@web01 ~]# pvdisplay /dev/sdb1 | grep Free
  Free PE               4095
[root@web01 ~]# lvextend --extents +4095 -n /dev/YOUR_VG_NAME/lv_root
[root@web01 ~]# df -h
[root@web01 ~]# resize2fs /dev/YOUR_VG_NAME/lv_root
[root@web01 ~]# df -h

Add a new disk as a separate mount point

This section assumes the new disk is the second disk on the VM, and is enumerated as /dev/sdb. We are going to use GPT and LVM as a best practice (even if the root disk/partition has its disk label set to MBR or is non-LVM). This example also uses the whole disk in one partition.

# RHEL/CentOS 5:  Scan for new disk, check for existing partitions
# setup gpt, align, and partition:
[root@web01 ~]# for x in /sys/class/scsi_host/host*/scan; do echo "- - -" > ${x}; done
[root@web01 ~]# parted /dev/sdb unit s print
[root@web01 ~]# fdisk -l /dev/sdb
[root@web01 ~]# parted /dev/sdb
mktable gpt
quit
[root@web01 ~]# parted -s -- /dev/sdb mkpart primary 2048s -1
[root@web01 ~]# parted /dev/sdb set 1 lvm on
[root@web01 ~]# parted /dev/sdb unit s print

# RHEL/CentOS 6:  Scan for new disk, check for existing partitions
# setup gpt, align, and partition:
[root@web01 ~]# for x in /sys/class/scsi_host/host*/scan; do echo "- - -" > ${x}; done
[root@web01 ~]# parted /dev/sdb unit s print
[root@web01 ~]# fdisk -l /dev/sdb
[root@web01 ~]# parted -s -- /dev/sdb mklabel gpt
[root@web01 ~]# parted -s -a optimal -- /dev/sdb mkpart primary 2048s -1
[root@web01 ~]# parted -s -- /dev/sdb align-check optimal 1
[root@web01 ~]# parted /dev/sdb set 1 lvm on
[root@web01 ~]# parted /dev/sdb unit s print

Now on both OSes, set up LVM, format, and mount the volume to /mnt/data:

[root@web01 ~]# VGNAME=vglocal$(date +%Y%m%d)
[root@web01 ~]# LVNAME=lvdata01
[root@web01 ~]# MOUNTPOINT="/mnt/data"
[root@web01 ~]# FILESYSTEM=`mount | egrep "\ \/\ " | awk '{print $5}'`
[root@web01 ~]# pvcreate --metadatasize 250k /dev/sdb1
[root@web01 ~]# vgcreate ${VGNAME} /dev/sdb1
[root@web01 ~]# lvcreate --extents 100%VG -n ${LVNAME} ${VGNAME}
[root@web01 ~]# mkfs.${FILESYSTEM} /dev/mapper/${VGNAME}-${LVNAME}
[root@web01 ~]# mkdir ${MOUNTPOINT}
[root@web01 ~]# echo -e "/dev/mapper/${VGNAME}-${LVNAME}\t${MOUNTPOINT}\t${FILESYSTEM}\tdefaults\t0 0" >> /etc/fstab
[root@web01 ~]# mount -a
[root@web01 ~]# df -hP | grep "${VGNAME}-${LVNAME}"
/dev/mapper/vglocal20160830-lvdata01   16G   44M   15G   1% /mnt/data

LVM basics

Logical Volume Management, or LVM for short, takes entire disks or individual partitions and combines them together so the group can act as a single manageable entity.

A few best practices to keep in mind when using LVM:

1. The Volume Group name should represent what kind of storage it exists on, such as vglocal00, vgsan00, vgdas00, vgiscsi00, vgraid00, etc.

2. The Logical Volume name should represent what the LV is being used for where possible, such as nfs00, data00, mysql00, var00, root00, etc. So the end result of a LV for MySQL running on SAN would be: /dev/vgsan00/mysql00

3. Never combine disks coming from different RAID arrays. In other words, don’t combine disks from a RAID 1 and a RAID 5 in the same Volume Group.

4. Never combine disks from different storage mediums, such as local storage and remote (SAN, DAS, iSCSI, etc).

5. Never combine non-partitioned and partitioned devices due to performance issues and general end user confusion.

6. To avoid end user confusion, a partition should be created on each new physical device rather than using the bare disk, as tools like fdisk, parted, and gdisk may not show that data already resides on an unpartitioned physical volume.

Setup new disk

We are going to assume that your new disk is presented as /dev/sdb. First, determine if there is a disk label already set, and check for any existing information. You just want to avoid accidental data loss:

[root@web01 ~]# parted /dev/sdb unit s print | grep Table
[root@web01 ~]# parted /dev/sdb unit s print

Set the disk label on the new disk to GPT:

[root@web01 ~]# parted -s -- /dev/sdb mklabel gpt

On the first partition only, start the partition on sector 2048 to follow generally accepted best practices to ensure partition alignment:

[root@web01 ~]# parted -s -a optimal -- /dev/sdb mkpart primary 2048s -1

Now confirm the starting sector of the partition is aligned for the disk:

[root@web01 ~]# parted -s -- /dev/sdb align-check optimal 1

Set the partition to use LVM:

[root@web01 ~]# parted /dev/sdb set 1 lvm on

Now review the disk's newly created partition layout:

[root@web01 ~]# parted /dev/sdb unit s print

Setup the new disk with LVM:

[root@web01 ~]# pvcreate --metadatasize 250k /dev/sdb1

Create the volume group:

[root@web01 ~]# vgcreate vglocal00 /dev/sdb1

And now setup the logical volume to use all available disk space:

[root@web01 ~]# lvcreate -n data00 -l 100%FREE vglocal00

Format the logical volume with your filesystem:

[root@web01 ~]# mkfs.ext4 -v -m2 /dev/vglocal00/data00

And finally, mount the new volume:

[root@web01 ~]# mkdir /mnt/data
[root@web01 ~]# echo "/dev/vglocal00/data00   /mnt/data       ext4    defaults 0 0" >> /etc/fstab
[root@web01 ~]# mount -a
[root@web01 ~]# df -h

Shrink an existing Logical Volume

If you have to shrink an existing volume, there are a few steps that need to be taken. While it’s generally safe, you should always ensure that you have known good backups in place before proceeding.

Also note that you cannot shrink an existing volume while it is mounted. So this should be done during a scheduled maintenance window as you will need to stop any services that are using data from that volume.
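
If you are not sure what is still holding the volume open, a quick check before unmounting (assuming the volume is mounted at /mnt/data as in the examples below) is:

[root@web01 ~]# fuser -vm /mnt/data
[root@web01 ~]# lsof /mnt/data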

First, unmount the logical volume:

[root@web01 ~]# umount /mnt/data

Run a file system check on the logical volume:

[root@web01 ~]# e2fsck -f /dev/vglocal00/data00

Now shrink the filesystem. In this example, we’re going to shrink it down to be 15G in size:

[root@web01 ~]# resize2fs /dev/vglocal00/data00 15G

Now reduce the logical volume to be 15G in size:

[root@web01 ~]# lvreduce -L 15G /dev/vglocal00/data00

Finally, mount the filesystem for normal use again:

[root@web01 ~]# mount -a

Shrink the root Logical Volume

As the / logical volume cannot be unmounted while the system is running, you need to boot the server off the distro’s CD, or boot it into a rescue environment if you’re running a Cloud server that supports this. While it’s generally safe to resize a volume, you should always ensure that you have known good backups in place before proceeding.

In this example, I’m running my server in VMware, so I can simply boot using a CentOS 6 installation cdrom. When the installation screen comes up, select:

Rescue installed system

When the screen asks if you would like the rescue environment to attempt to find your Linux installation and mount it under the directory /mnt/sysimage, select:

Skip

Now that you’re booted into the rescue environment, run the following commands so the system is aware of your LVM setup:

pvscan
vgscan
vgchange -a y
lvscan

In my case, my root logical volume is /dev/vglocal00/lv_root. I want to shrink it from 60G down to 6G. I already confirmed that the data in the / partition does not exceed 6G.
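
If you have not already confirmed that, a quick df before booting into the rescue environment will show it:

[root@web01 ~]# df -h /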

First, run a file system check on the logical volume:

[root@web01 ~]# e2fsck -f /dev/vglocal00/lv_root

Now shrink the root filesystem. In this example, we’re going to shrink it down to be 6G in size:

[root@web01 ~]# resize2fs /dev/vglocal00/lv_root 6G

Then reduce the logical volume to be 6G in size:

[root@web01 ~]# lvreduce -L 6G /dev/vglocal00/lv_root

Finally, eject the CD, reboot the system, and check to ensure your / file system is now at 6G:

[root@web01 ~]# df -h /

Expand an existing Logical Volume

This operation can be done live with LVM2.

First confirm you have enough free space in the volume group by running:

[root@web01 ~]# vgs
[root@web01 ~]# vgdisplay vglocal00

Now let's expand the logical volume 'data00' from 15G to 25G total:

[root@web01 ~]# df -h
[root@web01 ~]# lvextend -L 25G /dev/vglocal00/data00
[root@web01 ~]# resize2fs /dev/vglocal00/data00
[root@web01 ~]# df -h

Add a new Logical Volume to an existing Volume Group

First confirm you have enough free space in the volume group by running:

[root@web01 ~]# vgs
[root@web01 ~]# vgdisplay vglocal00

Now create a new 5G logical volume called mysql00:

[root@web01 ~]# lvcreate -n mysql00 -L 5G vglocal00
[root@web01 ~]# mkfs.ext4 -v -m2 /dev/vglocal00/mysql00

Finally, mount the new logical volume:

[root@web01 ~]# mkdir /mnt/mysql-fs
[root@web01 ~]# echo "/dev/vglocal00/mysql00   /mnt/mysql-fs       ext4    defaults 0 0" >> /etc/fstab
[root@web01 ~]# mount -a
[root@web01 ~]# df -h

Remove a Logical Volume

First, unmount the volume:

[root@web01 ~]# umount /mnt/mysql-fs

Then remove the volume:

[root@web01 ~]# lvremove /dev/vglocal00/mysql00 
Do you really want to remove active logical volume mysql00? [y/n]: y
  Logical volume "mysql00" successfully removed

RHCSA Study Guide – Objective 6: Kernel Features

############################
Everything below consists of my raw notes taken while attending an unofficial RHCSA training session.  I am posting them here in hopes they will assist others who may be preparing to take this exam.  

My notes are my own interpretation of the lectures, and are certainly not a replacement to classroom training either through your company, or by taking the official RHCSA classes offered through Red Hat.  If you are new to the Red Hat world, I strongly suggest looking into their training courses over at Red Hat.
############################

LVM

Quick note: There is a GUI tool for all of this if that is what you prefer:

[root@web01 ~]# yum install system-config-lvm
[root@web01 ~]# system-config-lvm

LVM abstracts the physical hardware into logical drive spaces which can be dynamically grown/shrunk and span disparate physical devices. It's good for hard drive management as it abstracts away the details of the underlying storage devices.

There is a small amount of overhead added at the VFS layer, so you are going to take a slight performance hit.

LVM Terminology:

- Physical Volume (pv) - A physical volume is the partition/RAID device used for the LVM space.
- Physical Extent (pe) - EXAM NOTE:  You may need to change this on the test.  It's a chunk of disk space.  Defaults to 4MB
- Volume Group (vg) - Collection of physical volumes
- Logical Volume (lv) - A logical volume is a grouping of physical extents from your physical volume.  This is where you format your fs.

Gist of how this works:
pvcreate : Create a physical volume

[root@web01 ~]# pvcreate /dev/sda4

vgcreate : Create a volume group on PV

[root@web01 ~]# vgcreate VolGroup00 /dev/sda4  # This is where the extents are created.

lvcreate : Create a logical volume on VG

[root@web01 ~]# lvcreate -n myvol -L 10G VolGroup00 # This is where the extents are allocated.

Other commands:

vgextend
lvextend
lvresize
resize2fs
lvresize # Use this one when doing your resize:  
ex:  lvresize -r {-l (+extents) | -L (+size)} (lv)
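
For example, to grow a logical volume and its filesystem by 5G in one step (assuming a VG named VolGroup00 and an LV named myvol):

[root@web01 ~]# lvresize -r -L +5G /dev/VolGroup00/myvol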

When mounting your new fs, use: /dev/vg/lv.

Lab

EXAM NOTE: Most of this stuff will be on the test

1.  Add logical volume management on top of a new partition.  Use a physical extent size of 16MB.  Use fdisk or whatever to create the new partition.  
[root@web01 ~]# pvcreate /dev/sda7
[root@web01 ~]# vgcreate -s 16 vg0 /dev/sda7
[root@web01 ~]# vgdisplay vg0

2.  Use half the available space for a logical volume formatted with ext4 and mounted persistently across reboots.
[root@web01 ~]# lvcreate -n myvol -L 5G vg0
[root@web01 ~]# ls -al /dev/vg0/myvol
[root@web01 ~]# ls -al /dev/mapper/vg0-myvol
[root@web01 ~]# ls -al /dev/dm-3
[root@web01 ~]# mkfs -t ext4 /dev/vg0/myvol
[root@web01 ~]# vi /etc/fstab
...
/dev/vg0/myvol /u03 ext4 defaults 1 2
...
[root@web01 ~]# mkdir /u03
[root@web01 ~]# mount /u03

3.  Take a snapshot of this logical volume and check the file system for errors
[root@web01 ~]# ls -al /u03
[root@web01 ~]# cp /var/log/* /u03/
[root@web01 ~]# lvcreate -s /dev/vg0/myvol -n snap-of-myvol -L 500M
[root@web01 ~]# ls -al /dev/vg0
[root@web01 ~]# lvdisplay vg0 # You will see your 2 logical volumes (and snapshot in 'source of')
[root@web01 ~]# mount /dev/vg0/snap-of-myvol /mnt  # This is how you mount that snapshot.

4.  Assuming none are found, reset the counter for days and mounts until a check is forced on the original file system.
[root@web01 ~]# umount /mnt
[root@web01 ~]# fsck /dev/vg0/snap-of-myvol  # If you see clean, then you should be okay
[root@web01 ~]# tune2fs /dev/vg0/snap-of-myvol # Shows more verifications
[root@web01 ~]# tune2fs -C 25 /dev/vg0/snap-of-myvol # Fake out the system to make it believe it has been mounted 25 times so it will actually fsck
[root@web01 ~]# fsck /dev/vg0/snap-of-myvol

5.  Copy some data onto the LV, then expand it and the filesystem by 50MB.  fsck, then re-mount the filesystem and verify its contents.  Also try reducing by 50MB 
[root@web01 ~]# umount /u03
[root@web01 ~]# lvresize -r -L +100M /dev/vg0/myvol
[root@web01 ~]# lvchange -an /dev/vg0/myvol
[root@web01 ~]# lvchange -ay /dev/vg0/myvol
[root@web01 ~]# lvresize -r -L +100M /dev/vg0/myvol
[root@web01 ~]# lvremove /dev/vg0/snap-of-myvol # If this fails, just try a few more times.  It's a known issue.  You can resize till the snap is gone.
[root@web01 ~]# lvresize -r -L +100M /dev/vg0/myvol # Grow it
[root@web01 ~]# lvresize -r -L -300M /dev/vg0/myvol # Shrink it
# Or you can set it to 2G by
[root@web01 ~]# lvresize -r -L 2G /dev/vg0/myvol
# NOTE:  The -r flag handles the resize2fs step for you... so you won't have to resize the filesystem manually.

Swap space

Swap space allows the kernel to better manage limited system memory by copying segments of memory onto disk.

To create 2G of additional swap space using a file:

[root@web01 ~]# dd if=/dev/zero of=/swap01 bs=1024 count=2097152
[root@web01 ~]# mkswap /swap01
[root@web01 ~]# swapon /swap01

If you no longer need the /swap01, just:

[root@web01 ~]# swapoff /swap01

Now list your active swap areas by:

[root@web01 ~]# cat /proc/swaps

Performance note:
Creating a swap device via LVM, or even better on its own partition, is better for performance. Setting it up as a file within an existing filesystem is going to be horrendous for performance.

Lab

1.  Add 500MB of swap space to your system using a device
[root@web01 ~]# lvcreate -n swap02 -L 500M vg0
[root@web01 ~]# mkswap /dev/vg0/swap02
[root@web01 ~]# swapon /dev/vg0/swap02
[root@web01 ~]# vi /etc/fstab
...
/dev/vg0/swap02 swap swap defaults 0 0
...

2.  Add 500MB of swap space to your system using a swap file
# Calculate how much 500M is:
[root@web01 ~]# echo $((1024*500))
512000
[root@web01 ~]# dd if=/dev/zero of=/swap01 bs=1024 count=512000
[root@web01 ~]# mkswap /swap01
[root@web01 ~]# swapon /swap01
[root@web01 ~]# vi /etc/fstab
...
/swap01 swap swap defaults 0 0
...