Using PAM to enforce access based on time

Sometimes there is a need to restrict user access based on time. This could be access to one particular service, or to all PAM-enabled services. A common example is to only allow access for the user ‘bob’ Monday through Friday between 9:00AM – 6:00PM. This can be enforced by utilizing the pam_time module.

The pam_time module is an account module type. No arguments are passed directly to the module, but instead all configuration takes place within /etc/security/time.conf.

The time.conf file operates based on rules, and each rule uses the following syntax:

services;ttys;users;times

Example Rules
Restrict SSHD access for bob to weekdays between 9:00AM – 7:00PM

sshd;*;bob;Wk0900-1900

Restrict ALL access for bob to weekdays between 9:00AM – 5:00PM

*;*;bob;Wk0900-1700

Restrict ALL access for ALL users except root to weekdays between 9:00AM – 5:00PM

*;*;!root;Wk0900-1700

Restrict SSH access for ALL users except bob and jane to weekdays between 9:00AM – 5:00PM

sshd;*;!bob & !jane;Wk0900-1700

To only allow bob to access SSH on Tuesdays between 3:23PM and 4:24PM:

sshd;*;bob;Tu1523-1624
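
For reference, the ttys field can be used to match or exclude specific terminals as well. As a sketch modeled on the example in the time.conf man page, the following would restrict console logins (tty* devices, excluding pseudo ttys) for everyone except root to weekdays between 8:00AM – 6:00PM:

login;tty* & !ttyp*;!root;Wk0800-1800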

Below are all the available abbreviations for the days of the week:

Mo : Monday      Fr : Friday      Wd : Sa/Su
Tu : Tuesday     Sa : Saturday    Wk : Mo/Tu/We/Th/Fr
We : Wednesday   Su : Sunday
Th : Thursday    Al : All Days

Installation And Configuration
In our example, I am going to be setting this up on a CentOS 5.x server. For the restricted user, the following variables will be used:

username: bob
allowed access times: 9:00AM - 6:00PM
restricted services: SSHD

First, add the user and time restriction to /etc/security/time.conf:

sshd;*;bob;Wk0900-1800

Now, update the PAM configuration for sshd and system-auth. The key line being added is ‘account required pam_time.so‘, but I’ll post the entire files for reference:

cat /etc/pam.d/sshd
#%PAM-1.0
auth required pam_sepermit.so
auth include password-auth
account required pam_time.so
account required pam_nologin.so
account include password-auth
password include password-auth
# pam_selinux.so close should be the first session rule
session required pam_selinux.so close
session required pam_loginuid.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session required pam_selinux.so open env_params
session optional pam_keyinit.so force revoke
session include password-auth
cat /etc/pam.d/system-auth
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth required pam_env.so
auth sufficient pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid >= 500 quiet
auth required pam_deny.so
account required pam_time.so
account required pam_unix.so
account sufficient pam_localuser.so
account sufficient pam_succeed_if.so uid < 500 quiet
account required pam_permit.so
password requisite pam_cracklib.so try_first_pass retry=3 type=
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok
password required pam_deny.so
session optional pam_keyinit.so revoke
session required pam_limits.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so

And finally, restart SSH

service sshd restart
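
To verify the restriction is being enforced, a quick check (just a sketch; ‘server01’ is a placeholder hostname, and this assumes sshd logs its authentication messages to /var/log/secure as on CentOS) is to attempt an SSH login as bob outside of the allowed window and then review the log for the denial:

ssh bob@server01
grep sshd /var/log/secure | tail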

Creating table indexes in MySQL

You may ask, what is a table index and how will it help performance? Table indexes provide MySQL a more efficient way to retrieve records. I often like to use the following example to explain it:

Imagine you have a phone book in front of you, and there are no letters in the top right corner that you can reference if you are looking up a last name. Therefore, you have to search page by page through hundreds of pages that have tens of thousands of results. Very inefficient and intensive. Think of this as a full table scan.

Now picture the phone book having the letter references in the top right corner. You can flip right to section “La – Lf” and only have to search through a smaller result set. Finding the results is much faster and easier.

A common symptom where this logic can be applied is when you log onto a server and see MySQL frequently chewing up a lot of CPU time, either constantly or in spikes. The slow query log is also a great indicator, because if a query takes a long time to execute, chances are it was making MySQL work too hard performing full table scans.

The information below will provide you with the tools to help identify these inefficient queries and how to help speed them up.

There are 2 common ways to identify queries that are very inefficient and may be creating CPU contention issues:

View MySQL’s process list:

Once in the MySQL CLI, you will want to look for any queries that are running often and evaluate them. You can see the running queries by:

mysql
show processlist;
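
If you would rather check from the shell without opening an interactive session, the same information is available with either of the following (a sketch; it assumes credentials are supplied via ~/.my.cnf or the usual -u/-p options):

mysql -e 'SHOW FULL PROCESSLIST;'
mysqladmin processlist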

View the slow query log:

To view this, first check to ensure the slow-query-log variables are enabled in the my.cnf:

log-slow-queries=/var/lib/mysqllogs/slow-log
long_query_time=5
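
Once those are set (and MySQL has been restarted so they take effect), slow queries will accumulate in that file and can be watched as they come in:

tail -f /var/lib/mysqllogs/slow-log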

Now, let's look at an example of a slow query that got logged. Please note, these queries were logged here because they took longer to run than the threshold defined by long_query_time:

# Time: 110404 22:45:25
# User@Host: wpadmin[wordpressdb] @ localhost []
# Query_time: 14.609104  Lock_time: 0.000054 Rows_sent: 4  Rows_examined: 83532
SET timestamp=1301957125;
SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';

Here is a query that we know runs often and takes over 5 seconds to execute:

SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';

Within the MySQL CLI, run the following to view more details about this query:

explain SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';
+----+-------------+------------+------+---------------+------+---------+------+-------+-------------+
| id | select_type | table      | type | possible_keys | key  | key_len | ref  | rows  | Extra       |
+----+-------------+------------+------+---------------+------+---------+------+-------+-------------+
|  1 | SIMPLE      | wp_table   | ALL  | NULL          | NULL |    NULL | NULL | 83532 | Using where |
+----+-------------+------------+------+---------------+------+---------+------+-------+-------------+

The 2 important fields here are:

- type: When you see "ALL", MySQL is performing a full table scan, which is a very CPU-intensive operation.
- rows: This is the number of rows MySQL expects to examine to answer the query, so roughly 83,000 rows is a lot to scan through.

In general, when you are creating an index, you want to choose a field with the highest number of unique values (that is, the highest cardinality). In our case, we are going to use the field ‘key’ as shown below:

create index key_idx on wp_table(`key`);

Now, let's rerun our EXPLAIN to see if the query now examines fewer rows:

explain SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table    | type | possible_keys | key     | key_len | ref   | rows | Extra       |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
|  1 | SIMPLE      | wp_table | ref  | key_idx       | key_idx |     767 | const |   13 | Using where |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+

This is much better. Now each time that common query runs, MySQL will only have to go through 13 rows, instead of having to check through 83,000.
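
Since this query filters on both `key` and `carrier`, a composite index covering both columns could narrow things down even further. This is just a sketch of the idea; whether it actually helps depends on the data and on what other queries hit the table:

create index key_carrier_idx on wp_table(`key`,`carrier`);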

Important note: Each time a table is updated, MySQL has to update the indexes, which could create some performance overhead. Therefore, it's recommended to keep the number of indexes per table low, perhaps in the 4-6 range.

How to see what indexes already exist on a table and their cardinality:

show indexes from wp_table;

How to remove a table index:

drop index key_idx on wp_table;

RCS – Introduction

When there are 40+ admins logging into a client’s server, it can become difficult to keep track of who modified what. More importantly, in the event that a change creates an undesired result, you need to be able to find out exactly what was changed so it can be quickly rolled back. This also becomes a critical component of change control if the client has specific security requirements such as PCI-DSS 2.0.

This system of revision control is a much cleaner way to track changes than creating a bunch of copies like apache2.bak, apache2.20120212, apache2.conf.031212, etc. Instead, you can view all the available versions of the file simply by running:
rlog /etc/apache2/apache2.conf

RCS offers the following features in a very easy to use CLI:

- Store and retrieve multiple revisions of text
- Maintain a complete history of changes
- Maintain a tree of revisions
- Automatically identify each revision with name, revision number, creation time, author, etc
- And much more

For our specific use case, critical files to check into RCS would be configuration files such as /etc/sysctl.conf, /etc/ssh/sshd_config, /etc/vsftpd/vsftpd.conf, /etc/httpd/conf/httpd.conf, and other files of that nature.

If RCS is not already installed, then simply run the following depending on your operating system:

yum install rcs
apt-get install rcs
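
As a quick sketch of the day-to-day workflow (using /etc/ssh/sshd_config from the list above; the -t- and -m flags simply supply the description and log message without prompting, and the ticket number is just an example):

ci -l -t-"Original sshd configuration" /etc/ssh/sshd_config
vi /etc/ssh/sshd_config
ci -l -m"Hardening changes per ticket #1234" /etc/ssh/sshd_config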

Basic Use Case
The easiest way to learn RCS is to see it in action. So in the use case below, we are going to perform a series of changes to the apache2.conf file. Before making changes to the file, check it into RCS first so we have a starting point:

root@web01:/etc/apache2# ci -l -wjdoe /etc/apache2/apache2.conf
/etc/apache2/apache2.conf,v  <--  /etc/apache2/apache2.conf
enter description, terminated with single '.' or end of file:
NOTE: This is NOT the log message!
>> Original Apache Configuration File
>> .
initial revision: 1.1
done

Now we can make our change to the config. As an example, we are going to be making some tuning changes to Apache.

vi /etc/apache2/apache2.conf

Once our changes are made, we check the changes in:

root@web01:/etc/apache2# ci -l -wjdoe /etc/apache2/apache2.conf
/etc/apache2/apache2.conf,v  <--  /etc/apache2/apache2.conf
new revision: 1.2; previous revision: 1.1
enter log message, terminated with single '.' or end of file:
>> Tuning changes per ticket #123456
>> .
done

Pretend a few days go by and you receive a call from the client reporting issues with Apache. You log into the server and check to see if anyone recently made changes to Apache:

root@web01:/etc/apache2# rlog /etc/apache2/apache2.conf

RCS file: /etc/apache2/apache2.conf,v
Working file: /etc/apache2/apache2.conf
head: 1.2
branch:
locks: strict
        root: 1.2
access list:
symbolic names:
keyword substitution: kv
total revisions: 2;     selected revisions: 2
description:
Original Apache Configuration File
----------------------------
revision 1.2    locked by: root;
date: 2012/03/19 15:44:06;  author: jdoe;  state: Exp;  lines: +3 -3
Tuning changes per ticket #123456
----------------------------
revision 1.1
date: 2012/03/19 15:28:38;  author: jdoe;  state: Exp;
Initial revision
=============================================================================

So this tells us that user jdoe made changes to apache2.conf on 3/19/2012 per ticket #123456. Let’s see what changes he made by comparing version 1.1 to version 1.2:

root@web01:/etc/apache2# rcsdiff -r1.1 -r1.2 /etc/apache2/apache2.conf
===================================================================
RCS file: /etc/apache2/apache2.conf,v
retrieving revision 1.1
retrieving revision 1.2
diff -r1.1 -r1.2
77c77
< KeepAlive On
---
> KeepAlive Off
105,106c105,106
<     MaxSpareServers      10
<     MaxClients          150
---
>     MaxSpareServers      1
>     MaxClients          15
root@web01:/etc/apache2#

From the looks of this, it appears he may have typoed the MaxClients and MaxSpareServers variables when working that ticket. So let’s roll back the configuration file to version 1.1 since that was the last known working version:

root@web01:/etc/apache2# co -r1.1 /etc/apache2/apache2.conf
/etc/apache2/apache2.conf,v  -->  /etc/apache2/apache2.conf
revision 1.1
writable /etc/apache2/apache2.conf exists; remove it? [ny](n): y
done
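
Before touching the service, you can confirm the rollback took by diffing the working file against revision 1.1 (rcsdiff should report no differences) and running a config test. The apache2ctl command is the Debian/Ubuntu wrapper assumed throughout this example:

rcsdiff -r1.1 /etc/apache2/apache2.conf
apache2ctl configtest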

Then restart Apache and confirm everything is working again. Be sure to commit your change, as a rollback is still a change:

root@web01:/etc/apache2# ci -l -wmsmith /etc/apache2/apache2.conf
/etc/apache2/apache2.conf,v  <--  /etc/apache2/apache2.conf
new revision: 1.3; previous revision: 1.2
enter log message, terminated with single '.' or end of file:
>> Rolling back changes made in ticket #123456 due to problems
>> .
done

When the next person logs in to see what changes have been made to apache2.conf, they will see the following:

root@web01:/etc/apache2# rlog /etc/apache2/apache2.conf

RCS file: /etc/apache2/apache2.conf,v
Working file: /etc/apache2/apache2.conf
head: 1.3
branch:
locks: strict
        root: 1.3
access list:
symbolic names:
keyword substitution: kv
total revisions: 3;     selected revisions: 3
description:
Original Apache Configuration File
----------------------------
revision 1.3    locked by: root;
date: 2012/03/19 16:00:38;  author: msmith;  state: Exp;  lines: +3 -3
Rolling back changes made in ticket #123456 due to problems
----------------------------
revision 1.2
date: 2012/03/19 15:44:06;  author: jdoe;  state: Exp;  lines: +3 -3
Tuning changes per ticket #123456
----------------------------
revision 1.1
date: 2012/03/19 15:28:38;  author: jdoe;  state: Exp;
Initial revision
=============================================================================

Rsync Migration Guidelines

There are numerous gotchas and things that you must be aware of before you can confidently perform a migration. Migrations go bad all the time. There is no guaranteed way of knowing when one will fail, but there are ways to minimize the potential for a problem creeping up a month after the migration. Outlined below are steps that should be taken before proceeding with the migration.

1. Evaluate the server to be migrated. This involves:
– How large are the drives?
– Are there any directories that have hundreds of thousands of files?
– Are there any directories that contain thousands of other directories?
– Is the server extremely busy?

2. Check your backups to ensure everything is in place
You must first determine how you are going to do the migration. Are you going to build a new server and then use the backups to do a server restore to it? Or are you just going to perform a straight rsync migration? If you're going to attempt to utilize your system's backups:
– Check to see when the last known good backup was. You need confirmation that it's good, and also attempt to get a rough ETA on how long a server restore will take.
– Set up a new server with the same EXACT specs as the original server.

3. Ask questions
Below are some basic questions that should be asked before performing a migration. It may help shed insight on the server’s day to day tasks to ensure a smooth migration. Things to ask:
– Are there any known quirks experienced from time to time on the server?
– What are the key critical services that need special attention when migrating?
– How can you test the server to ensure the migration was successful? i.e. websites that can be checked that utilize both Apache and MySQL
– When is a good time to shut down the services on the production box so the final rsync can be performed?

4. Perform phase 1 of 2 of the rsync migration
The goal here is to create a base system on the new server and get the majority of the data copied over, which minimizes the downtime the public will see during the final rsync phase. There is no need to schedule this; it is safe to do whenever.
– If you are utilizing your backups, get the new server jumped, throw on a temp IP, and do a full server restore.
– If you are just going to use rsync for everything, be aware that the server may seem sluggish as rsync may eat up the system resources.

To perform the rsync, log onto the old server (the one currently in production), and start a screen session:

screen -S migrations

Create a shared SSH key between the old server and the new server so rsync can run over ssh without prompting for a password.
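
A minimal sketch of the key setup, run from the old server (the new server’s address is shown as a placeholder, xxx.xxx.xxx.xxx, here and in the rsync commands below):

ssh-keygen -t rsa
ssh-copy-id root@xxx.xxx.xxx.xxx

With the key in place, kick off the first rsync pass of the filesystem: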

cd / && exec bash
for i in `ls / | grep -v -E 'proc|etc'`; do rsync -axvz --delete-after -e ssh /$i root@xxx.xxx.xxx.xxx:/; done

To disconnect from the screen session, just hit:

ctrl-a then hit d

Depending on how much data there is and what the data structure looks like, at best you’re looking at 3-5GB per hour. To speed things up, use a cross connect on the gigabit ethernet ports.

5. Final rsync

At the specified time, stop all services on the production machine except sshd (it's better to just drop to single user mode with networking). Then, using the same screen session, type:

for i in `ls / | grep -v proc`; do rsync -axvz --delete-after -e ssh /$i root@xxx.xxx.xxx.xxx:/; done

Once complete, reboot the new server and wave a dead chicken over the altar. You will want to swap the IPs after you have verified the new server at least boots.

6. Testing

This involves:
– Confirm websites work properly
– Confirm you can send and receive email
– Confirm mysql is functioning
– Go through error logs and correct any problems.

7. Troubleshooting

If the machine doesn’t boot, you may have to fix GRUB (Red Hat). Also make sure /etc/fstab and /boot/grub/grub.conf have the labels set up right, or just specify the device, e.g. /dev/hda1.
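
A rough sketch of reinstalling legacy GRUB (0.97) from the grub shell, assuming the first disk is hda and /boot lives on its first partition:

grub
grub> root (hd0,0)
grub> setup (hd0)
grub> quit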

How Qmail Works

Qmail is a very compartmentalized program. It's broken down into multiple tiny programs, each governing a very specific piece of the MTA process. This guide documents, in a nutshell, how Qmail handles email.

Below is a rough diagram of what Qmail looks like:
[Qmail Diagram]

Messages can enter the mail server in one of two ways: either the message comes from a remote mail server like hotmail.com, or the message is generated on the local server (i.e. IMAP, webmail, mail functions, etc.). The two programs responsible for this are:

1. qmail-smtpd –> handles mail coming from an outside mail server. ie. hotmail.com

2. qmail-inject –> handles any messages generated locally by the server. ie. imap, webmail, or php mail functions, etc. This service injects the messages directly into the mail queue.

The primary objective of qmail-smtpd and qmail-inject is to pass the message along to qmail-queue.

3. qmail-queue –> This is a complicated program. It writes all the messages to the central queue directory: /var/qmail/queue/. The qmail-queue program can be invoked by qmail-inject for locally generated messages, qmail-smtpd for messages received through SMTP, qmail-local for forwarded messages, or qmail-send for bounced messages. If this is confusing, just remember that this is the program that actually writes the messages to the mail queue. Now, if you are curious like me and want to know the nitty gritty, here it is: /var/qmail/queue is made up of several directories, including pid/, mess/, intd/, todo/, info/, local/, and remote/.

Below is a diagram that shows how the message gets handled by qmail-queue during the various message “stages”. Next to each folder I also noted which program controls the message at that particular point in time.

pid/111 --  (S1)  # qmail-queue
          \_ mess/111 (S2)  # qmail-queue
                          |
                          |
                      _ intd/111 (S3)  # qmail-queue
                     /
          todo/111 -- (S4)  # qmail-queue
              |
              |
          info/111 -- local/111  (S4 - S5)  # qmail-send
              |
              |
          remote/111 (S4 - S5)  # qmail-send

Key:
# qmail-send --> responsible for this part of queue
# qmail-queue --> responsible for this part of queue
S1 -->  -mess -intd -todo -info -local -remote -bounce
S2 --> +mess -intd -todo -info -local -remote -bounce
S3 --> +mess +intd -todo -info -local -remote -bounce
S4 --> +mess ?intd +todo ?info ?local ?remote -bounce (queued)
S5 --> +mess -intd -todo +info ?local ?remote ?bounce (preprocessed)

Here are all possible states for a message. + means a file exists; - means it does not exist; ? means it may or may not exist in that folder

It is also well documented in the qmail src file called INTERNALS which explains it better than I can!

Short and sweet overview of qmail-queue: It is responsible for writing the message to the queue.

4. qmail-send –> This takes the message from qmail-queue and passes the message either to qmail-rspawn (for remote delivery) or it sends the message to qmail-lspawn (for local delivery)

5a. qmail-rspawn (remote delivery) –> This sends the message to the remote mail server (ie. yahoo.com)
– qmail-remote –> This transmits the message to the remote mail server.

5b. qmail-lspawn (local delivery) –> This sends the message for local delivery.
– qmail-local –> This passes the message off to a local delivery agent. It reads the user's .qmail-default first, which basically just tells qmail-local that vdelivermail is going to handle delivery.

– vdelivermail -> This delivers the mail to the local users. It locates the user's Maildir and passes the message off to preline (preline passes the mail to other filters or commands). In the user's Maildir, it looks for a .qmail file that relates to the user. If a .qmail-USERNAME file doesn't exist, then it defaults to .qmail-default, which tells it to send the message to procmail.

– procmail -> Procmail performs the mail filtering and local delivery to the mailbox. In procmail, this is where you can send the message to spamassassin or another filtering agent for processing before delivery.

– spamassassin -> Spam filtering. Take special note of the spamassassin versions and permissions, and the spamd vs spamc methods.

Below are qmail’s configuration files, found within /var/qmail/control:

Control file                    Purpose

badmailfrom             blacklisted From addresses
bouncefrom              username of bounce sender
bouncehost              hostname of bounce sender
concurrencyincoming     max simultaneous incoming SMTP connections
concurrencylocal        max simultaneous local deliveries
concurrencyremote       max simultaneous remote deliveries
defaultdomain           domain name
defaulthost             host name
databytes               max number of bytes in message (0=no limit)
doublebouncehost        host name of double bounce sender
doublebounceto          user to receive double bounces
locals                  domains that we deliver locally
morercpthosts           secondary rcpthosts database
queuelifetime           seconds a message can remain in queue
rcpthosts               domains that we accept mail for
smtproutes              artificial SMTP routes
timeoutconnect          how long, in seconds, to wait for SMTP connection
timeoutremote           how long, in seconds, to wait for remote server
timeoutsmtpd            how long, in seconds, to wait for SMTP client
virtualdomains          virtual domains and users
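
These are all plain text files, so checking or changing a setting is just a matter of reading or writing the file. A small sketch (qmail needs to be restarted afterwards, for example with the qmailctl wrapper if your install has it, before most changes take effect):

cat /var/qmail/control/rcpthosts                # domains this server accepts mail for
echo 259200 > /var/qmail/control/queuelifetime  # keep undeliverable mail in the queue for 3 days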

How to read the Qmail logs

This is a quick guide on how to read the Qmail logs.

All message activity is written to /var/log/qmail/current. There can be a lot of information here, so let's break it down line by line. Below is a snippet from /var/log/qmail/current. I added the numbers on the left-hand side for the sake of learning.

1.  @40000000461d81581ec60f34 new msg 5915497
2.  @40000000461d81581eda8194 info msg 5915497: bytes 22122 from <[email protected]> qp 99024 uid 89
3.  @40000000461d8158214e737c starting delivery 4088258: msg 5915497 to local myuser@localhost
4.  @40000000461d81582155ca64 status: local 2/10 remote 0/60
5.  @40000000461d815824de127c delivery 4088258: success: did_1+0+0/
6.  @40000000461d815824f572dc end msg 5915497

Holy @#$%!, what does all this mean? Here it is, line by line:

1. This indicates that a new message has entered the queue. It is denoted by number: 5915497
2. This tells us where the message was from. In this case: [email protected]
3. Here, we see that the message is trying to deliver to a local user: myuser@localhost. Note the delivery sub id number: 4088258
4. This tells us what the queue volume is like. Not important at this moment.
5. This lets us know the message was delivered successfully to myuser@localhost. Note again the delivery sub id number: 4088258
6. Now qmail says, okay, the message denoted by the number: 5915497 is complete.

So when looking at the logs, first locate a from address or destination address. Once that is found, find the message id number (it should be one or two lines up), and from there you can trace what the message did while it was in the queue.
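
The leading @40000000... strings are TAI64N timestamps. If daemontools is installed (it provides tai64nlocal), a handy trick is to grep for the message id and pipe the result through tai64nlocal to get human-readable times:

grep 5915497 /var/log/qmail/current | tai64nlocal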