I/O scheduler tuning

What is an I/O scheduler? The I/O scheduler is a kernel-level tunable whose purpose is to optimize disk access requests. Traditionally this is critical for spinning disks, as I/O requests can be grouped together to avoid “seeking”.

Different I/O schedulers have their pros and cons, so choosing which one to use depends on the type of environment and workload. There is no single right I/O scheduler to use; it simply ‘depends’. Benchmarking your application before and after the I/O scheduler change is usually your best indicator. The good news is that the I/O scheduler can be changed at runtime and can be configured to persist after reboots.
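
For the before/after benchmark, a disposable tool like fio works well. Below is a minimal sketch; the test file path, size, and runtime are placeholder values to adjust for your environment, and fio is assumed to be available in your distro's repos:

# Run the same job before and after changing the scheduler, then compare IOPS and latency
[root@server ~]# yum install fio
[root@server ~]# fio --name=randread --filename=/tmp/fio-test --size=1G \
    --rw=randread --bs=4k --ioengine=libaio --direct=1 --numjobs=4 \
    --time_based --runtime=60 --group_reporting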

The three common I/O schedulers are:
– noop
– deadline
– cfq

noop

The noop I/O scheduler is meant for systems that don’t benefit from I/O scheduling, such as VMware, AWS EC2, Google Cloud, Rackspace public cloud, etc. Since the hypervisor already controls the I/O scheduling, it doesn’t make sense for the VM to waste CPU cycles reordering requests. The noop I/O scheduler simply works as a FIFO (First In, First Out) queue.

You can update the I/O scheduler to noop by:

## CentOS 6

# Change at runtime
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]
[root@server ~]# echo 'noop' > /sys/block/sda/queue/scheduler
[root@server ~]# cat /sys/block/sda/queue/scheduler
[noop] anticipatory deadline cfq

# Change at boot time by appending 'elevator=noop' to the end of the kernel line:
[root@server ~]# vim /boot/grub/grub.conf
kernel /vmlinuz-2.6.9-67.EL ro root=/dev/vg0/lv0 elevator=noop


## CentOS 7

# Change at runtime
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@server ~]# echo 'noop' > /sys/block/sda/queue/scheduler
[root@server ~]# cat /sys/block/sda/queue/scheduler
[noop] deadline cfq

# Change at boot time by appending 'elevator=noop' to the end of the following line, then rebuild the grub config:
[root@server ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel00/root rd.lvm.lv=rhel00/swap elevator=noop"
...
[root@server ~]# grub2-mkconfig -o /boot/grub2/grub.cfg


## Ubuntu 14.04

# Change at runtime
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@server ~]# echo noop > /sys/block/sda/queue/scheduler
[root@server ~]# cat /sys/block/sda/queue/scheduler
[noop] deadline cfq

# Change at boot time by appending 'elevator=noop' to the end of the following line, then rebuild the grub config:
[root@server ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=noop"
...
[root@server ~]# grub-mkconfig -o /boot/grub/grub.cfg


## Ubuntu 16.04

# Change at runtime
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@server ~]# echo noop > /sys/block/sda/queue/scheduler
[root@server ~]# cat /sys/block/sda/queue/scheduler
[noop] deadline cfq

# Change at boot time by appending 'elevator=noop' to the end of the following line, then rebuild the grub config:
[root@server ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=noop"
...
[root@server ~]# grub-mkconfig -o /boot/grub/grub.cfg

deadline

The deadline I/O scheduler is well suited for read-heavy workloads like MySQL. It sorts incoming I/O requests into a read queue and a write queue, and assigns each request an expiration time. Requests in the read queue have 500ms (by default) to execute before they are given the highest priority to run, while requests entering the write queue have 5000ms.

This per-request deadline, with reads expiring ten times sooner than writes, is what makes the deadline I/O scheduler a good fit for read-heavy workloads like MySQL.
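
The expiry values themselves are tunable through sysfs while deadline is the active scheduler. A quick sketch; sda is an example device, and the iosched directory is only present while deadline is in use:

# Values are in milliseconds
[root@server ~]# cat /sys/block/sda/queue/iosched/read_expire
500
[root@server ~]# cat /sys/block/sda/queue/iosched/write_expire
5000

# Tighten the read deadline further for latency-sensitive reads
[root@server ~]# echo 250 > /sys/block/sda/queue/iosched/read_expire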

You can update the I/O scheduler to deadline by:

## CentOS 6

# Change at runtime
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]
[root@server ~]# echo 'deadline' > /sys/block/sda/queue/scheduler
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq

# Change at boot time by appending 'elevator=deadline' to the end of the kernel line:
[root@server ~]# vim /boot/grub/grub.conf
kernel /vmlinuz-2.6.9-67.EL ro root=/dev/vg0/lv0 elevator=deadline


## CentOS 7

# Change at runtime
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@server ~]# echo 'deadline' > /sys/block/sda/queue/scheduler
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq

# Change at boot time by appending 'elevator=deadline' to the end of the following line, then rebuild the grub config:
[root@server ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel00/root rd.lvm.lv=rhel00/swap elevator=deadline"
...
[root@server ~]# grub2-mkconfig -o /boot/grub2/grub.cfg


## Ubuntu 14.04

# Change at runtime
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@server ~]# echo deadline > /sys/block/sda/queue/scheduler
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq

# Change at boot time by appending 'elevator=deadline' to the end of the following line, then rebuild the grub config:
[root@server ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=deadline"
...
[root@server ~]# grub-mkconfig -o /boot/grub/grub.cfg


## Ubuntu 16.04

# Change at runtime
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@server ~]# echo deadline > /sys/block/sda/queue/scheduler
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq

# Change at boot time by appending 'elevator=deadline' to the end of the following line, then rebuild the grub config:
[root@server ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=deadline"
...
[root@server ~]# grub-mkconfig -o /boot/grub/grub.cfg

cfq

The cfq I/O scheduler is probably best geared towards systems running GUIs (like a desktop) where each process needs a fast response. The goal of the cfq (Completely Fair Queueing) I/O scheduler is to give a fair allocation of disk I/O bandwidth to all the processes that request I/O operations.
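
Since cfq is the only one of these three schedulers that honors per-process I/O priorities, this is also where ionice becomes useful. A small sketch; the backup job and PID here are placeholders:

# Run a backup in the idle class so it only gets disk time when nothing else wants it
[root@server ~]# ionice -c3 tar czf /backup/www.tar.gz /var/www

# Or drop an already-running process (PID 1234 here) to the lowest best-effort priority
[root@server ~]# ionice -c2 -n7 -p 1234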

You can update the I/O scheduler to cfq by:

## CentOS 6

# Change at runtime
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq
[root@server ~]# echo 'cfq' > /sys/block/sda/queue/scheduler
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]

# Change at boot time by appending 'elevator=cfq' to the end of the kernel line:
[root@server ~]# vim /boot/grub/grub.conf
kernel /vmlinuz-2.6.9-67.EL ro root=/dev/vg0/lv0 elevator=cfq


## CentOS 7

# Change at runtime
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@server ~]# echo 'cfq' > /sys/block/sda/queue/scheduler
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

# Change at boot time by appending 'elevator=cfq' to the end of the following line, then rebuild the grub config:
[root@server ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel00/root rd.lvm.lv=rhel00/swap elevator=cfq"
...
[root@server ~]# grub2-mkconfig -o /boot/grub2/grub.cfg


## Ubuntu 14.04

# Change at runtime
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@server ~]# echo cfq > /sys/block/sda/queue/scheduler
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

# Change at boot time by appending 'elevator=cfq' to the end of the following line, then rebuild the grub config:
[root@server ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=cfq"
...
[root@server ~]# grub-mkconfig -o /boot/grub/grub.cfg


## Ubuntu 16.04

# Change at runtime
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@server ~]# echo cfq > /sys/block/sda/queue/scheduler
[root@server ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

# Change at boot time by appending 'elevator=cfq' to the end of the following line, then rebuild the grub config:
[root@server ~]# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="elevator=cfq"
...
[root@server ~]# grub-mkconfig -o /boot/grub/grub.cfg

As with any performance tuning recommendation, there is never a one-size-fits-all solution! Always benchmark your application to establish a baseline before you make the change. After the performance changes have been made, run the same benchmark and compare the results to ensure they had the desired outcome.
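
Since the scheduler is set per device, it also helps to confirm what is actually active on every disk before and after benchmarking. A quick loop for that (the output shown is an example and will vary by system):

[root@server ~]# for f in /sys/block/*/queue/scheduler; do echo "$f: $(cat $f)"; done
/sys/block/sda/queue/scheduler: noop [deadline] cfq
/sys/block/sdb/queue/scheduler: noop [deadline] cfq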

Disabling Transparent Huge Pages in Linux

Transparent Huge Pages (THP) is a Linux memory management system that reduces the overhead of Translation Lookaside Buffer (TLB) lookups on machines with large amounts of memory by using larger memory pages.

However, database workloads often perform poorly with THP, because they tend to have sparse rather than contiguous memory access patterns. The overall recommendation for MySQL, MongoDB, Oracle, etc., is to disable THP on Linux machines to ensure the best performance.

You can check to see if THP is enabled or not by running:

[root@server ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
[root@server ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never

If the result shows [never], then THP is disabled. However if the result shows [always], then THP is enabled.

You can disable THP at runtime on CentOS 6/7 and Ubuntu 14.04/16.04 by running:

[root@server ~]# echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
[root@server ~]# echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag

However, once the system reboots, the setting will revert to its default value. To make the setting persistent on CentOS 7 and Ubuntu 16.04, you can disable THP on system startup by creating a systemd unit file:

# CentOS 7 / Ubuntu 16.04:
[root@server ~]# vim /etc/systemd/system/disable-thp.service
[Unit]
Description=Disable Transparent Huge Pages (THP)

[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"

[Install]
WantedBy=multi-user.target

[root@server ~]# systemctl daemon-reload
[root@server ~]# systemctl start disable-thp
[root@server ~]# systemctl enable disable-thp

On CentOS 6 and Ubuntu 14.04, you can disable THP on system startup by adding the following to /etc/rc.local. On Ubuntu 14.04, make sure it's added before the ‘exit 0’:

# CentOS 6 / Ubuntu 14.04
[root@server ~]# vim /etc/rc.local
...
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
   echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
   echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
...
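
Whichever method you use, verify the setting after the next reboot; the brackets should now surround 'never' (the exact list of values varies slightly by kernel):

[root@server ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
[root@server ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]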

How to install Elastic Stack

Your logs are trying to talk to you! The problem, though, is that reading through logs is like trying to pick out one conversation in a crowded and noisy room. Some people talk loudly and others speak softly. With all this noise, how can you pick out the critical information? This is where Elastic Stack can help!

Elastic Stack is a group of open source products from Elastic designed to help users take data from any type of source and in any format and search, analyze, and visualize that data in real time. This is commonly referred to as an ELK stack (Elasticsearch, Logstash, and Kibana).

Setting up Elastic Stack can be quite confusing, as there are several moving parts. As a very basic primer: Logstash is the workhorse that applies various filters to parse the incoming logs. Logstash then forwards the parsed logs to Elasticsearch for indexing, and Kibana allows you to visualize the data stored in Elasticsearch.

Server Installation

This guide is based on CentOS/RHEL 7. Elasticsearch needs at least 2G of memory, so for the entire stack (Elasticsearch, Logstash, and Kibana) to work, the absolute minimum required memory is around 4G. Anything less than this may cause the services to become unstable or not start up at all.
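
Related to the memory requirement, Elasticsearch sizes its JVM heap from /etc/elasticsearch/jvm.options, and roughly half the system memory is the usual guideline. A sketch assuming a 4G server; the values shown are examples to adjust for your box:

[root@server ~]# vim /etc/elasticsearch/jvm.options
...
-Xms2g
-Xmx2g
...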

Elastic Stack relies on Java, so install Java 1.8.0 by:

[root@server ~]# yum install java-1.8.0-openjdk
[root@server ~]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)

Elastic packages all the needed software within their own repos, so set up the repo by:

[root@server ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@server ~]# echo '[elasticstack-6.x]
name=Elastic Stack repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md' > /etc/yum.repos.d/elasticstack.repo

Now install the needed packages for Elastic Stack and set them to start on boot:

[root@server ~]# yum install elasticsearch kibana logstash filebeat
[root@server ~]# systemctl daemon-reload
[root@server ~]# systemctl enable elasticsearch kibana logstash filebeat

Server Configuration

Configure Elasticsearch to listen for connections on the public IP of the server. Mine is also configured to listen on localhost, since I am monitoring logs on the Elastic Stack server itself:

[root@server ~]# vim /etc/elasticsearch/elasticsearch.yml
...
network.host: 123.123.123.123, localhost
...

Set up Elasticsearch to be able to use geoip and user-agent by:

[root@server ~]# /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip
[root@server ~]# /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-user-agent

Configure Logstash with a basic configuration to accept logs from Filebeat and forward them to Elasticsearch by:

[root@server ~]# echo 'input {
  beats {
    port => 5044
  }
}

filter {
  if [type] == "apache-access" {
    # This will parse the apache access event
    grok {
      match => [ "message", "%{COMBINEDAPACHELOG}" ]
    }
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}" 
    document_type => "%{[@metadata][type]}" 
  }
}' > /etc/logstash/conf.d/logstash.conf
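
Before starting the service, it is worth syntax-checking the config. A sketch assuming the RPM install layout:

[root@server ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf
...
Configuration OK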

Start and test services by:

[root@server ~]# systemctl start kibana elasticsearch logstash filebeat

Elasticsearch will take about 15 seconds or more to start. To ensure elasticsearch is running, check that the output is similar to the following:

[root@server ~]# curl -XGET 'localhost:9200/?pretty'
{
  "name" : "Cp8oag6",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",
  "version" : {
    "number" : "6.0.1",
    "build_hash" : "f27399d",
    "build_date" : "2016-03-30T09:51:41.449Z",
    "build_snapshot" : false,
    "lucene_version" : "7.0.1",
    "minimum_wire_compatibility_version" : "1.2.3",
    "minimum_index_compatibility_version" : "1.2.3"
  },
  "tagline" : "You Know, for Search"
}

Then log into Kibana by navigating your browser to:

http://localhost:5601

If this is installed on a remote server, then you can easily install Nginx to act as a front end for Kibana by:

# Install Nginx
[root@server ~]# yum install nginx httpd-tools

# Setup username/password
[root@server ~]# htpasswd -c /etc/nginx/htpasswd.users kibanaadmin

# Create Nginx vhost
[root@server ~]# echo 'server {
    listen 80;

    server_name kibana.yourdomain.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;        
    }
}' > /etc/nginx/conf.d/kibana.conf

# Set services to start on boot and start nginx
[root@server ~]# systemctl daemon-reload
[root@server ~]# systemctl enable nginx
[root@server ~]# systemctl start nginx

# Open up the firewall to allow inbound port 80 traffic from anywhere
[root@server ~]# firewall-cmd --zone=public --add-port=80/tcp --permanent
[root@server ~]# firewall-cmd --reload

# Allow nginx to connect to Kibana port 5601 if you’re using SELinux:
[root@server ~]# semanage port -a -t http_port_t -p tcp 5601

# Navigate your browser to the new domain you set up, assuming DNS for it is already in place:
http://kibana.yourdomain.com

Client installation – Filebeat

The question now becomes: how do I get the log messages from other servers into our Elastic Stack server? As my needs are basic and I am not doing any manipulation of log data, I can make use of Filebeat and its associated modules to get the Apache, Nginx, MySQL, Syslog, etc. data I need over to the Elasticsearch server.

Assuming filebeat is not installed, ensure that you have the repos setup for it:

[root@server ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@server ~]# echo '[elasticstack-6.x]
name=Elastic Stack repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md' > /etc/yum.repos.d/elasticstack.repo

Then install filebeat by:

[root@server ~]# yum install filebeat
[root@server ~]# systemctl daemon-reload
[root@server ~]# systemctl enable filebeat

Set up Filebeat to send the logs over to your Elastic Stack server:

[root@server ~]# vim /etc/filebeat/filebeat.yml
...
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["123.123.123.123:9200"]
...
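
Filebeat can verify both the config syntax and the connection to Elasticsearch before you ship anything (output trimmed here):

[root@server ~]# filebeat test config
Config OK
[root@server ~]# filebeat test output
elasticsearch: http://123.123.123.123:9200...
  ...
  talk to server... OK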

Now set up the modules for Filebeat. Only enable the ones you need, and be sure to restart Filebeat after you have your desired modules enabled.

To send over Apache logs:

[root@server ~]# filebeat modules enable apache2
[root@server ~]# filebeat setup -e
[root@server ~]# systemctl restart filebeat

Note, you may need to modify the Filebeat apache2 module to pick up your logs. In my case, I had to set ‘var.paths’ for both the access and error logs by:

[root@server ~]# vim /etc/filebeat/modules.d/apache2.yml
- module: apache2
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/httpd/*access.log*"]

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/httpd/*error.log*"]

[root@server ~]# systemctl restart filebeat

To send over syslog data:

[root@server ~]# filebeat modules enable system
[root@server ~]# filebeat setup -e
[root@server ~]# systemctl restart filebeat

To handle MySQL data:

[root@server ~]# filebeat modules enable mysql
[root@server ~]# filebeat setup -e
[root@server ~]# systemctl restart filebeat

To send over auditd logs:

[root@server ~]# filebeat modules enable auditd
[root@server ~]# filebeat setup -e
[root@server ~]# systemctl restart filebeat

To send over Nginx logs:

[root@server ~]# filebeat modules enable nginx
[root@server ~]# filebeat setup -e
[root@server ~]# systemctl restart filebeat

Enable Docker log shipping to Elasticsearch. There is no module for this, but it's easy enough to configure:
Reference: https://www.elastic.co/blog/enrich-docker-logs-with-filebeat

[root@server ~]# vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
...
- type: log
  paths:
   - '/var/lib/docker/containers/*/*.log'
  json.message_key: log
  json.keys_under_root: true
  processors:
  - add_docker_metadata: ~
...

[root@server ~]# systemctl restart filebeat

Then browse to the Kibana dashboard to view the available dashboards for Filebeat, or create your own!

Client installation – Metricbeat

What about shipping metrics and statistics over to the Elastic Stack server? This is where Metricbeat comes into play. Metricbeat is a lightweight shipper that you can install on your client nodes; it collects metrics and ships them to Elasticsearch. There are modules for Apache, HAProxy, MySQL, Nginx, PostgreSQL, Redis, System, and more. It can be installed on your client servers or on the ELK server itself if you like.

Assuming Metricbeat is not installed, ensure that you have the repos setup for it:

[root@server ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@server ~]# echo '[elasticstack-6.x]
name=Elastic Stack repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md' > /etc/yum.repos.d/elasticstack.repo

Then install Metricbeat by:

[root@server ~]# yum install metricbeat
[root@server ~]# systemctl daemon-reload
[root@server ~]# systemctl enable metricbeat

Set up Metricbeat to send the metrics over to your Elastic Stack server:

[root@server ~]# vim /etc/metricbeat/metricbeat.yml
...
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["123.123.123.123:9200"]
...

Now set up the modules for Metricbeat. Only enable the ones you need, and be sure to restart Metricbeat after you have your desired modules enabled.

To see the full listing of available modules and what is currently enabled:

[root@server ~]# metricbeat modules list

To send over Apache, MySQL, Nginx and System metrics:

[root@server ~]# metricbeat modules enable apache mysql nginx system
[root@server ~]# metricbeat setup -e

After enabling each one, be sure to check out the module's associated config file, as you may need to make changes so it will work with your environment. The module config files can be found in:

[root@server ~]# cd /etc/metricbeat/modules.d

Once the configurations are updated accordingly, restart Metricbeat by:

[root@server ~]# systemctl restart metricbeat

Then browse to the Kibana dashboard to view the available dashboards for Metricbeat, or create your own!

Quick Kibana Primer

Now that you have data coming into Elasticsearch, you can use Kibana to generate some quick searches and visualizations. This is not meant to be a full-fledged tutorial on how to use Kibana, just a way to jump start the learning process, as Kibana can be somewhat complicated if you have never seen it.

To search for failed logins, type the following in the Discover search box:

system.auth.ssh.event:Failed OR system.auth.ssh.event:Invalid

To see what packages have been recently installed, type the following in the Discover search box:

source: "/var/log/messages" AND system.syslog.message: *Install*

What about visualizations? To see the top 5 countries accessing Apache:

- Click 'Visualizations' over on the left
- Select 'Vertical bar chart'
- Select 'filebeat-*' from the existing index
- Click 'X-Axis'
Aggregation:  Terms
Field:  apache2.access.geoip.country_iso_code
Order by:  Metric: Count
Order:  Descending
Size:  5

To break it down further by city:
- Click 'Add sub-buckets'
- Select 'Split series'
Sub Aggregation:  Terms
Field:  apache2.access.geoip.city_name
Order by:  Metric: Count
Order:  Descending
Size:  5
- Click run

View the top 5 remote IPs hitting Apache:

- Click 'Visualizations' over on the left
- Select 'Vertical bar chart'
- Select 'filebeat-*' from the existing index
- Click 'X-Axis'
Aggregation:  Terms
Field:  apache2.access.remote_ip
Size:  5

- Click 'Add sub-buckets'
- Select 'Split series'
Sub Aggregation:  Terms
Field:  apache2.access.remote_ip
Order by:  Metric: Count
Order:  Descending
Size:  5

View the top 10 requested URLs in Apache:

- Click 'Visualizations' over on the left
- Select 'Data Table'
- Select 'filebeat-*' from the existing index
- Under Buckets, click 'Split Rows'
Aggregation:  Terms
Field:  apache2.access.url
Order by:  Metric: Count
Order:  Descending
Size:  10
Custom Label:  URL

- Then click 'Split Rows'
Sub Aggregation:  Terms
Field:  apache2.access.body_sent_bytes
Order by:  Metric: Count
Order:  Descending
Size:  10
Custom Label:  Size
- Click run

Create a line chart for Apache response codes:

- Click 'Visualizations' over on the left
- Select 'Line chart'
- Select 'filebeat-*' from the existing index
- Click 'X-Axis'
Aggregation:  Date Histogram
Field:  @timestamp
Interval:  Minute

- Click 'Split Series'
Sub Aggregation:  Terms
Field:  apache2.access.response_code
Order by:  Metric: Count
Order:  Descending
Size:  5
- Click run

See which logs are receiving a lot of activity:

- Click 'Visualizations' over on the left
- Select 'Pie Chart'
- Select 'filebeat-*' from the existing index
- Click 'Split Slices'
Aggregation:  Terms
Field:  source
Order by:  Metric: Count
Order:  Descending
Size:  5

Backing up permissions on directory

Before making sweeping changes in Linux, it is smart to have a rollback plan. Making blanket, recursive permission changes on a directory certainly falls into this category!

Let's say you found a directory on your system where the file permissions were all 777, so you want to secure them a bit by changing the permissions over to 644. Something like:

[root@server ~]# find /var/www/vhosts/domain.com -type f -perm 0777 -print -exec chmod 644 {} \;

The paranoid among us will want to ensure we can revert things back to the way they were before. Thankfully there are two commands that can be used to either back up or restore permissions on a directory recursively: getfacl and setfacl.

To back up all the permissions and ownerships within a given directory such as /var/www/vhosts/domain.com, do the following:

[root@server ~]# cd /var/www/vhosts/domain.com
[root@server ~]# getfacl -R . > permissions_backup

Now let's say you ran the find command and changed everything over to 644, then realized you broke your application because it needed some files to be 664. You just want to roll back so you can investigate what happened.

You can roll back the permissions by running:

[root@server ~]# cd /var/www/vhosts/domain.com
[root@server ~]# setfacl --restore=permissions_backup

Backing up an entire server's permissions

If you wanted to back up the entire server's permissions, you can do that by:

[root@server ~]# getfacl -R --absolute-names / > server_permissions_backup

And the restoration process remains the same:

[root@server ~]# setfacl --restore=server_permissions_backup

Find command examples

This is just a quick reference page for using find to do basic things.

Find is a pretty powerful tool that accepts a bunch of options for narrowing down your search. Some basic examples of stuff you can do are below.

Find a specific file type and extension older than 300 days and remove them

This will find files that are:
– Older than 300 days
– Regular files
– Named to match *.jpg
It will not descend into subdirectories.

This also works for those pesky directories that have millions of files.

First, always confirm the command will work before blindly removing files:

[root@server ~]# cd /path/to/directory
[root@server ~]# find . -maxdepth 1 -type f -name '*.jpg' -mtime +300 | xargs ls -al

Once you have verified that the files displayed are the ones you want removed, remove them by running:

[root@server ~]# cd /path/to/directory
[root@server ~]# find . -maxdepth 1 -type f -name '*.jpg' -mtime +300 | xargs rm -f
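
As a side note, if the file names may contain spaces or other odd characters, the -print0/xargs -0 pair is the safe variant, and GNU find can also delete matches directly:

[root@server ~]# find . -maxdepth 1 -type f -name '*.jpg' -mtime +300 -print0 | xargs -0 rm -f

# Or let find do the removal itself
[root@server ~]# find . -maxdepth 1 -type f -name '*.jpg' -mtime +300 -delete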

Find files with 777 permissions

This will find all files that have 777 permissions:

[root@server ~]# cd /path/to/directory
[root@server ~]# find . -type f -perm 0777 -print

This will find all files that do NOT have 777 permissions:

[root@server ~]# cd /path/to/directory
[root@server ~]# find . -type f ! -perm 0777

Find Files with 777 Permissions and change to 644

Use caution with this; it is generally not smart to run blindly, as it will descend into subdirectories unless you set maxdepth.

[root@server ~]# cd /path/to/directory
[root@server ~]# find . -type f -perm 0777 -print -exec chmod 644 {} \;

Find Directories with 777 Permissions and change to 755

Use caution with this; it is generally not smart to run blindly, as it will descend into subdirectories unless you set maxdepth.

[root@server ~]# cd /path/to/directory
[root@server ~]# find . -type d -perm 777 -print -exec chmod 755 {} \;

Find empty directories

[root@server ~]# cd /path/to/directory
[root@server ~]# find . -type d -empty

Find all hidden files within a directory

[root@server ~]# find /path/to/directory -type f -name ".*"

Find files owned by user or group

[root@server ~]# find /var/www -user apache
[root@server ~]# find /var/www -group apache

Find files that were modified in the last 30 days

[root@server ~]# find / -mtime -30

Find files that were modified in the last hour

[root@server ~]# find / -mmin -60

Find files that were changed within the last hour
Note, this one is specified in minutes only!

[root@server ~]# find / -cmin -60

Find files that were accessed in the last 5 days

[root@server ~]# find / -atime -5

Find files that were accessed within the last hour
Note, this one is specified in minutes only!

[root@server ~]# find / -amin -60
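
A quick note on how find interprets these numeric time tests, since the sign matters:

[root@server ~]# find / -mtime 7     # modified exactly 7 days (7*24 hours) ago
[root@server ~]# find / -mtime -7    # modified less than 7 days ago
[root@server ~]# find / -mtime +7    # modified more than 7 days ago
# The same +N / -N / N rule applies to -atime, -ctime, -mmin, -amin and -cmin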

Count files per directory with find
This one is useful when you need to find the top 10 directories that contain the largest number of files.

[root@server ~]# vim count-files-per-directory.sh
#!/bin/bash

# Require exactly one argument: the directory to scan
if [ $# -ne 1 ];then
  echo "Usage: `basename $0` DIRECTORY"
  exit 1
fi

echo "Please wait..."

find "$1" -type d -print0 2>/dev/null | while IFS= read -r -d '' file; do
    echo -e `ls -A "$file" 2>/dev/null | wc -l` "files in:\t $file"
done | sort -nr | head | awk '{print NR".", "\t", $0}'

exit 0

Now run it against the / directory:

[root@server ~]# bash count-files-per-directory.sh /
Please wait...
1. 	 768 files in:	 /usr/share/man/man1
2. 	 631 files in:	 /usr/lib64/python2.6
3. 	 575 files in:	 /usr/share/locale
4. 	 566 files in:	 /usr/share/vim/vim74/syntax
5. 	 496 files in:	 /usr/bin
6. 	 487 files in:	 /usr/share/man/man8
7. 	 393 files in:	 /usr/share/perl5/unicore/lib/gc_sc
8. 	 380 files in:	 /usr/include/linux
9. 	 354 files in:	 /usr/lib64/python2.6/encodings
10. 	 334 files in:	 /usr/share/man/man3

Or if you only need to run the search in a specific directory:

[root@server ~]# bash count-files-per-directory.sh /usr/share/man
Please wait...
1. 	 768 files in:	 /usr/share/man/man1
2. 	 487 files in:	 /usr/share/man/man8
3. 	 334 files in:	 /usr/share/man/man3
4. 	 124 files in:	 /usr/share/man/man5
5. 	 49 files in:	 /usr/share/man
6. 	 35 files in:	 /usr/share/man/ru/man8
7. 	 31 files in:	 /usr/share/man/man7
8. 	 27 files in:	 /usr/share/man/fr/man8
9. 	 25 files in:	 /usr/share/man/de/man8
10. 	 22 files in:	 /usr/share/man/ja/man8

Rolling back yum transactions

Ever had the system update a package, which winds up breaking the most random things? How can you roll back? How can you prevent that same buggy package from updating itself again the next time the system checks for updates, yet still get newer versions of that package when they are released?

I ran across something like this recently. The symptom was that phpMyAdmin was no longer working on this LAMP server. In short, an Apache update was to blame, as documented in this bug report: https://bz.apache.org/bugzilla/show_bug.cgi?id=61202

So how can the update to Apache be rolled back? First, try to confirm that Apache was indeed updated recently:

[root@server ~]# tail /var/log/yum.log
Jul 08 04:23:49 Updated: httpd24u-filesystem-2.4.26-1.ius.centos6.noarch
Jul 08 04:23:49 Updated: httpd24u-tools-2.4.26-1.ius.centos6.x86_64
Jul 08 04:23:50 Updated: httpd24u-2.4.26-1.ius.centos6.x86_64
Jul 08 04:23:50 Updated: 1:httpd24u-mod_ssl-2.4.26-1.ius.centos6.x86_64

Now find the transaction ID within yum by running:

[root@server ~]# yum history
ID     | Login user               | Date and time    | Action(s)      | Altered
-------------------------------------------------------------------------------
   220 | root               | 2017-07-08 04:23 | Update         |    4

View the details of this transaction by running:

[root@server ~]# yum history info 220
...
Transaction performed with:
    Installed     rpm-4.8.0-55.el6.x86_64                       @centos6-x86_64
    Installed     yum-3.2.29-81.el6.centos.noarch               @centos6-x86_64
    Installed     yum-metadata-parser-1.1.2-16.el6.x86_64       @anaconda-CentOS-201410241409.x86_64/6.6
    Installed     yum-plugin-fastestmirror-1.1.30-40.el6.noarch @centos6-x86_64
    Installed     yum-rhn-plugin-2.4.6-1.el6.noarch             @spacewalk
Packages Altered:
    Updated httpd24u-2.4.25-4.ius.centos6.x86_64            @rackspace-centos6-x86_64-ius
    Update           2.4.26-1.ius.centos6.x86_64            @rackspace-centos6-x86_64-ius
    Updated httpd24u-filesystem-2.4.25-4.ius.centos6.noarch @rackspace-centos6-x86_64-ius
    Update                      2.4.26-1.ius.centos6.noarch @rackspace-centos6-x86_64-ius
    Updated httpd24u-mod_ssl-1:2.4.25-4.ius.centos6.x86_64  @rackspace-centos6-x86_64-ius
    Update                   1:2.4.26-1.ius.centos6.x86_64  @rackspace-centos6-x86_64-ius
    Updated httpd24u-tools-2.4.25-4.ius.centos6.x86_64      @rackspace-centos6-x86_64-ius
    Update                 2.4.26-1.ius.centos6.x86_64      @rackspace-centos6-x86_64-ius
history info
...

To roll back the updates, getting us back to Apache 2.4.25 in this case, simply undo the transaction by running:

[root@server ~]# yum history undo 220

Then confirm Apache is back to the previous version 2.4.25:

[root@server ~]# rpm -qa | grep -i httpd24u
httpd24u-filesystem-2.4.25-4.ius.centos6.noarch
httpd24u-2.4.25-4.ius.centos6.x86_64
httpd24u-mod_ssl-2.4.25-4.ius.centos6.x86_64
httpd24u-tools-2.4.25-4.ius.centos6.x86_64

Next, restart Apache so the changes take place:

[root@server ~]# service httpd restart

Finally, exclude the buggy packages from ever being installed again. In this example, Apache 2.4.26 will never be installed; however, any newer versions released after that will install/update normally.

[root@server ~]# yum install yum-plugin-versionlock
[root@server ~]# yum versionlock add! httpd24u-mod_ssl-2.4.26-1.ius.centos6.x86_64 httpd24u-2.4.26-1.ius.centos6.x86_64 httpd24u-tools-2.4.26-1.ius.centos6.x86_64 httpd24u-filesystem-2.4.26-1.ius.centos6.noarch
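
The locks can be reviewed or removed later with the same plugin:

# List current locks
[root@server ~]# yum versionlock list

# Remove a specific lock, or clear them all
[root@server ~]# yum versionlock delete 'httpd24u-2.4.26-1.ius.centos6.*'
[root@server ~]# yum versionlock clear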

Setting default kernel in grub2

With newer systems like CentOS 7, Ubuntu 14.04, and Ubuntu 16.04 using Grub2, you can no longer simply update a single file to have your system boot off an older or newer kernel. There are a series of steps that must be followed. The examples below show how to boot off an older kernel on each respective operating system.

Please note, the instructions in this article will lock your kernel to whichever one you selected. Even if your system receives automatic kernel updates, those new kernels will have to be manually enabled within grub if you want to use them.

CentOS 7

First, check to see which kernel is currently running:

[root@server ~]# uname -r
3.10.0-514.16.1.el7.x86_64

That shows us we’re running 3.10.0-514.16.1, however I need to be running 3.10.0-327.36.3. To use this specific kernel, first change GRUB_DEFAULT to ‘saved’ in /etc/default/grub by:

[root@server ~]# cp /etc/default/grub /etc/default/grub.bak
[root@server ~]# vim /etc/default/grub
...
GRUB_DEFAULT=saved
...

Now create a backup of the grub config for recovery purposes if needed, then rebuild grub:

[root@server ~]# cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.bak
[root@server ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

Determine the full name of the kernel you want to use. Get the listing by running:

[root@server ~]# grep "^menuentry" /boot/grub2/grub.cfg | cut -d "'" -f2
CentOS Linux (3.10.0-514.16.1.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-514.2.2.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-327.36.3.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-327.22.2.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-327.3.1.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-c11d017c89ca4e8685ae3d9791d472ca) 7 (Core)

I want to use the 3.10.0-327.36.3 kernel. Setting that to be the default kernel is simple. Set the desired kernel by running:

[root@server ~]# grub2-set-default "CentOS Linux (3.10.0-327.36.3.el7.x86_64) 7 (Core)"

Now verify that the change got applied in the configs by running:

[root@server ~]# grub2-editenv list
saved_entry=CentOS Linux (3.10.0-327.36.3.el7.x86_64) 7 (Core)

Reboot the system so it boots off the older kernel:

[root@server ~]# reboot

Finally, once the system comes back online, verify the desired kernel is running by:

[root@server ~]# uname -r
3.10.0-327.36.3.el7.x86_64

If the system rebooted, and dropped you into a grub shell with an error, you can boot up off of the backup grub.cfg file that was created by:

grub> configfile (hd0,1)/boot/grub2/grub.cfg.bak

Ubuntu 14.04 and Ubuntu 16.04

First, check to see which kernel is currently running:

root@ubuntu:~# uname -r
4.4.0-48-generic

That shows us we’re running 4.4.0-48, however I need to be running 4.4.0-47. To use this specific kernel, first change GRUB_DEFAULT to ‘saved’ in /etc/default/grub by:

root@ubuntu:~# cp /etc/default/grub /etc/default/grub.bak
root@ubuntu:~# vim /etc/default/grub
...
GRUB_DEFAULT=saved
...

Now create a backup of the grub config for recovery purposes if needed, then rebuild grub:

root@ubuntu:~# cp /boot/grub/grub.cfg /boot/grub/grub.cfg.bak
root@ubuntu:~# update-grub

Determine the full name of the kernel you want to use. Get the listing by running:

root@ubuntu:~# egrep "^[[:space:]]?(submenu|menuentry)" /boot/grub/grub.cfg | cut -d "'" -f2
Ubuntu
Advanced options for Ubuntu
Ubuntu, with Linux 4.4.0-78-generic
Ubuntu, with Linux 4.4.0-78-generic (recovery mode)
Ubuntu, with Linux 4.4.0-47-generic
Ubuntu, with Linux 4.4.0-47-generic (recovery mode)

I want to use the 4.4.0-47-generic kernel. Setting that to be the default kernel is simple. However, you MUST prepend ‘Advanced options for Ubuntu>’ to the kernel name as shown below, since Ubuntu makes use of submenus in the kernel listing. So set the desired kernel by running:

root@ubuntu:~# grub-set-default 'Advanced options for Ubuntu>Ubuntu, with Linux 4.4.0-47-generic'

Now verify that the change got applied in the configs by running:

root@ubuntu:~# grub-editenv list
saved_entry=Advanced options for Ubuntu>Ubuntu, with Linux 4.4.0-47-generic

Reboot the system so it boots off the older kernel:

root@ubuntu:~# reboot

Finally, once the system comes back online, verify the desired kernel is running by:

root@ubuntu:~# uname -r
4.4.0-47-generic

If the system rebooted, and dropped you into a grub shell with an error, you can boot up off of the backup grub.cfg file that was created by:

grub> configfile (hd0,1)/boot/grub/grub.cfg.bak

Setting up the old-releases repo for Ubuntu

This is a guide on enabling the old-releases repos for the Ubuntu project. When Ubuntu marks a release EOL, the mainline repos are moved to an alternate location where they are preserved for historical purposes. It should be noted that when a system reaches EOL, the repos for it are no longer maintained.

It goes without saying that enabling the old-releases repos for EOL systems should only be used as a last resort. Continuing to run a system that has reached end-of-life status is dangerous, as it no longer receives security patches and bug fixes. There are also no promises that things will continue to work.

Security and reliability concerns are best resolved by upgrading the system to a supported version of the operating system. If there is a compelling reason to enable the old-releases repos so packages can be installed, then proceed below.

Ubuntu 10.04 EOL Repos

Ubuntu 10.04 LTS went EOL on 4/2015. The procedure for setting up the system to use the old-releases repos is below.

First, create a backup of the /etc/apt/sources.list by:

[root@server ~]# cp /etc/apt/sources.list /etc/apt/sources.list.bak

Now update /etc/apt/sources.list to point to the old-releases repos accordingly. Keep in mind that there may be repos specified in here for Nginx, Varnish, Docker, etc. So be sure to only update the items needed by Ubuntu. The end result should look something like this:

[root@server ~]# vim /etc/apt/sources.list
# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.

deb http://old-releases.ubuntu.com/ubuntu/ lucid main restricted
deb-src http://old-releases.ubuntu.com/ubuntu/ lucid main restricted

## Major bug fix updates produced after the final release of the
## distribution.
deb http://old-releases.ubuntu.com/ubuntu/ lucid-updates main restricted
deb-src http://old-releases.ubuntu.com/ubuntu/ lucid-updates main restricted

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://old-releases.ubuntu.com/ubuntu/ lucid universe
deb-src http://old-releases.ubuntu.com/ubuntu/ lucid universe
deb http://old-releases.ubuntu.com/ubuntu/ lucid-updates universe
deb-src http://old-releases.ubuntu.com/ubuntu/ lucid-updates universe

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu 
## team, and may not be under a free licence. Please satisfy yourself as to 
## your rights to use the software. Also, please note that software in 
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://old-releases.ubuntu.com/ubuntu/ lucid multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ lucid multiverse
deb http://old-releases.ubuntu.com/ubuntu/ lucid-updates multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ lucid-updates multiverse

## Uncomment the following two lines to add software from the 'backports'
## repository.
## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
# deb http://old-releases.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
# deb-src http://old-releases.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse

## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
# deb http://old-releases.ubuntu.com/ubuntu lucid partner
# deb-src http://old-releases.ubuntu.com/ubuntu lucid partner

deb http://old-releases.ubuntu.com/ubuntu lucid-security main restricted
deb-src http://old-releases.ubuntu.com/ubuntu lucid-security main restricted
deb http://old-releases.ubuntu.com/ubuntu lucid-security universe
deb-src http://old-releases.ubuntu.com/ubuntu lucid-security universe
deb http://old-releases.ubuntu.com/ubuntu lucid-security multiverse
deb-src http://old-releases.ubuntu.com/ubuntu lucid-security multiverse

Now refresh the package index from their sources by running:

[root@server ~]# apt-get update

Address any 404s accordingly, as that means the URL may be incorrect or may no longer exist.
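
If the sources.list only references the standard mirrors, a sed one-liner can handle the rewrite in one shot. The mirror hostnames vary per install, so back up the file first and double check the result:

[root@server ~]# cp /etc/apt/sources.list /etc/apt/sources.list.bak
[root@server ~]# sed -i -r 's/(us\.archive|archive|security)\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list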

Ubuntu 12.04 EOL Repos

Ubuntu 12.04 LTS went EOL on 4/2017. The procedure for setting up the system to use the old-releases repos is below.

First, create a backup of the /etc/apt/sources.list by:

[root@server ~]# cp /etc/apt/sources.list /etc/apt/sources.list.bak

Now update /etc/apt/sources.list to point to the old-releases repos accordingly. Keep in mind that there may be repos specified in here for Nginx, Varnish, Docker, etc. So be sure to only update the items needed by Ubuntu. The end result should look something like this:

[root@server ~]# vim /etc/apt/sources.list
# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb http://old-releases.ubuntu.com/ubuntu/ precise main restricted
deb-src http://old-releases.ubuntu.com/ubuntu/ precise main restricted

## Major bug fix updates produced after the final release of the
## distribution.
deb http://old-releases.ubuntu.com/ubuntu/ precise-updates main restricted
deb-src http://old-releases.ubuntu.com/ubuntu/ precise-updates main restricted

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://old-releases.ubuntu.com/ubuntu/ precise universe
deb-src http://old-releases.ubuntu.com/ubuntu/ precise universe
deb http://old-releases.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://old-releases.ubuntu.com/ubuntu/ precise-updates universe

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu 
## team, and may not be under a free licence. Please satisfy yourself as to 
## your rights to use the software. Also, please note that software in 
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://old-releases.ubuntu.com/ubuntu/ precise multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ precise multiverse
deb http://old-releases.ubuntu.com/ubuntu/ precise-updates multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ precise-updates multiverse

## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
deb http://old-releases.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse

deb http://old-releases.ubuntu.com/ubuntu precise-security main restricted
deb-src http://old-releases.ubuntu.com/ubuntu precise-security main restricted
deb http://old-releases.ubuntu.com/ubuntu precise-security universe
deb-src http://old-releases.ubuntu.com/ubuntu precise-security universe
deb http://old-releases.ubuntu.com/ubuntu precise-security multiverse
deb-src http://old-releases.ubuntu.com/ubuntu precise-security multiverse

## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
# deb http://old-releases.ubuntu.com/ubuntu precise partner
# deb-src http://old-releases.ubuntu.com/ubuntu precise partner

## Uncomment the following two lines to add software from Ubuntu's
## 'extras' repository.
## This software is not part of Ubuntu, but is offered by third-party
## developers who want to ship their latest software.
# deb http://old-releases.ubuntu.com/ubuntu precise main
# deb-src http://old-releases.ubuntu.com/ubuntu precise main

Now refresh the package index from their sources by running:

[root@server ~]# apt-get update

Address any 404s accordingly, as that means the URL may be incorrect or may no longer exist.

Using hpasmcli for HP servers

HP servers ship with utility tools called hpasmcli, hpssacli, and hpacucli. These tools allow you to view and modify the hardware configuration on the server. hpacucli is the older implementation of the storage toolkit, but the syntax is pretty similar.

HP tools information

To show the firmware version, run:

[root@server ~]# hpasmcli -s "show server"

If you want to see extended information, run:

[root@server ~]# hpssacli controller all show config detail

General information

To view information regarding the server model, cpu, type, memory, etc, run:

[root@server ~]# hpasmcli -s "show server"

Hardware Health

If you want to view the health of the system and chassis components, run:

[root@server ~]# hpasmcli -s "show server"

You can also query specific chassis components, such as:

[root@server ~]# hpasmcli -s "show powersupply"
[root@server ~]# hpasmcli -s "show dimm"
[root@server ~]# hpasmcli -s "show fans"
[root@server ~]# hpasmcli -s "show temp"

Storage health

To view the physical and virtual disks on the server:

[root@server ~]# hpssacli controller all show
[root@server ~]# hpssacli controller slot=3 physicaldrive all show
[root@server ~]# hpssacli controller slot=3 logicaldrive all show

On older HP servers, you can view the physical and virtual disks on the server by:

[root@server ~]# hpacucli controller slot=1 physicaldrive all show
[root@server ~]# hpacucli controller slot=1 logicaldrive all show

To see the storage battery status:

[root@server ~]# hpssacli controller all show status

Hardware logs

To display the hardware logs:

[root@server ~]# hpasmcli -s "show iml"

If you need to clear the hardware logs:

[root@server ~]# hpasmcli -s "clear iml"

CPU actions

To see if hyperthreading is enabled on the CPUs:

[root@server ~]# hpasmcli -s "show ht"

If you wanted to change the hyperthreading settings:

# Enable
[root@server ~]# hpasmcli -s "enable ht"

# Disable
[root@server ~]# hpasmcli -s "disable ht"

Using omreport and omconfig for Dell servers

Dell servers ship with utility tools called omreport and omconfig. These tools allow you to view and modify the hardware configuration on the server.

Dell tools information

To see what version of the tools you're running:

[root@server ~]# omreport about details=true

To see the versions of the installed firmware and drivers:

[root@server ~]# omreport system version

To see what commands are available using omreport:

[root@server ~]# omreport system -?

General information

To view information regarding the server model, cpu type, memory, service tags, etc, run:

[root@server ~]# omreport system summary

Hardware Health

If you want to view the health of the system and chassis components, run:

[root@server ~]# omreport system

To only get the health information for the chassis:

[root@server ~]# omreport chassis

You can also query specific chassis components, such as:

[root@server ~]# omreport chassis fans
[root@server ~]# omreport chassis memory
[root@server ~]# omreport chassis nics
[root@server ~]# omreport chassis processors
[root@server ~]# omreport chassis temps
[root@server ~]# omreport chassis batteries
[root@server ~]# omreport chassis pwrsupplies

Storage health

As a quick note, if the commands below report there are no controllers listed, check to be sure that the software is actually running by:

[root@server ~]# /opt/dell/srvadmin/sbin/srvadmin-services.sh status
dell_rbu (module) is stopped
ipmi driver is running
dsm_sa_datamgrd is stopped
dsm_sa_eventmgrd is stopped
dsm_sa_snmpd is stopped
dsm_om_shrsvcd is stopped
dsm_om_connsvcd is stopped
[root@server ~]# /opt/dell/srvadmin/sbin/srvadmin-services.sh restart

To view the physical and virtual disks on the server:

[root@server ~]# omreport storage pdisk controller=0
[root@server ~]# omreport storage vdisk controller=0
[root@server ~]# omreport storage pdisk controller=0 vdisk=0

If you just wanted a quick listing of the relevant disk information to see the state of the drives, run:

[root@server ~]# omreport storage pdisk controller=0 | grep -iE "^id|^status|name|state|Failure Predicted"
ID                              : 0:0:0
Status                          : Ok
Name                            : Physical Disk 0:0:0
State                           : Online
Failure Predicted               : No
ID                              : 0:0:1
Status                          : Ok
Name                            : Physical Disk 0:0:1
State                           : Online
Failure Predicted               : No

To see if there are any empty drive bays on the server:

[root@server ~]# omreport storage controller controller=0 info=pdslotreport | grep 'Empty Slots'

To see the storage battery status:

[root@server ~]# omreport storage battery controller=0

Hardware Logs

To display the hardware logs, run:

[root@server ~]# omreport system esmlog

If you need to view the alert logs:

[root@server ~]# omreport system alertlog

And if you needed to view the messages from the POST:

[root@server ~]# omreport system postlog

If you find you need to clear the logs, that can be performed by:

[root@server ~]# omconfig system esmlog action=clear
[root@server ~]# omconfig system alertlog action=clear
[root@server ~]# omconfig system postlog action=clear

CPU actions

To see if hyperthreading is enabled on the CPUs:

[root@server ~]# omreport chassis biossetup | grep -A 2 'HyperThreading'

If you wanted to enable hyperthreading:

# Dell R710
[root@server ~]# omconfig chassis biossetup attribute=cpuht setting=enabled

# Dell R720
[root@server ~]# omconfig chassis biossetup attribute=ProcCores setting=All

If you needed to enable or disable NUMA:

# Disable NUMA:
[root@server ~]# omconfig chassis biossetup attribute=numa setting=disabled

# Enable NUMA:
[root@server ~]# omconfig chassis biossetup attribute=numa setting=enabled