How to increment MySQL replication counters

Sometimes, MySQL replication may break, displaying an error in ‘show slave status’ such as:

mysql> show slave status\G
********************* 1. row *********************
                 Last_Errno: 1062
                 Last_Error: Error 'Duplicate entry '123456' for key 1' on query. Default database: 'XXX'. Query: 'YYY'
               Skip_Counter: 0

When you see this, you can try skipping the offending statement by incrementing the skip counter:

mysql> stop slave;
mysql> set global sql_slave_skip_counter=1; start slave;

Then run ‘show slave status’ again to confirm replication is working. If you find yourself skipping many of these errors, the data has likely drifted out of sync between the Master and Slave MySQL servers, and you may need to consider rebuilding replication.
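
For reference, once replication catches back up, a healthy slave should report both replication threads running and the error fields cleared, something like:

mysql> show slave status\G
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
                 Last_Errno: 0
                 Last_Error: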

Resetting a MySQL root password

Resetting a forgotten MySQL root password is a straightforward task to complete, assuming you have sudo or root access to the server. It is important to note that MySQL will be down until you complete the entire procedure, so be sure to do this during a time when it will not impact your business.

First, stop the MySQL service

service mysqld stop # RHEL clones
service mysql stop # Ubuntu / Debian distro
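
On newer distributions that use systemd, the equivalent commands would be:

sudo systemctl stop mysqld # RHEL clones
sudo systemctl stop mysql # Ubuntu / Debian distro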

Now it's time to bring MySQL back up in safe mode. This means we will start MySQL, but skip loading the grant tables that hold the user privileges:

sudo mysqld_safe --skip-grant-tables &

Time to log into MySQL and switch to the MySQL database:

mysql -uroot
use mysql;

Now reset the root password and flush privileges:

update user set password=PASSWORD("enternewpasswordhere") where User='root';
flush privileges;
quit
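
Note: the update statement above applies to MySQL 5.6 and earlier. On MySQL 5.7 the password column was renamed to authentication_string, and on MySQL 8.0 the PASSWORD() function was removed entirely, so the rough equivalents would be:

-- MySQL 5.7
update user set authentication_string=PASSWORD("enternewpasswordhere") where User='root';

-- MySQL 8.0: flush privileges first so alter user works under --skip-grant-tables
flush privileges;
alter user 'root'@'localhost' identified by 'enternewpasswordhere';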

Once you have completed that, it's time to take MySQL out of safe mode and start it back up normally. NOTE: Be sure MySQL is fully stopped before trying to start the service again. You may need to kill -9 the process if it's being stubborn.

Stop MySQL:

service mysqld stop # RHEL clones
service mysql stop # Ubuntu / Debian distro

Verify that the MySQL process is no longer running. If any remain, you may have to kill -9 them:

ps -waux | grep mysql
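
For example, if a lingering mysqld process reported a PID of 12345 (a hypothetical value, substitute your own), you could remove it with:

sudo kill 12345
sudo kill -9 12345 # only if the process ignores the normal kill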

Start MySQL back up:

service mysqld start # RHEL clones
service mysql start # Ubuntu / Debian distro

Finally, test out logging in to ensure it's now working properly:

mysql -uroot -p

Creating table indexes in MySQL

You may ask, what is a table index and how will it help performance? Table indexes provide MySQL a more efficient way to retrieve records. I often like to use the following example to explain it:

Imagine you have a phone book in front of you, and there are no letters in the top right corner that you can reference if you are looking up a last name. Therefore, you have to search page by page through hundreds of pages that have tens of thousands of results. Very inefficient and intensive. Think of this as a full table scan.

Now picture the phone book having the letter references in the top right corner. You can flip right to section “La – Lf” and only have to search through a smaller result set. Finding the results is much faster and easier.

A common symptom where this logic can be applied is when you log onto a server and see MySQL frequently chewing up a lot of CPU time, either constantly or in spikes. The slow query log is also a great indicator: if a query takes a long time to execute, chances are it was making MySQL work too hard performing full table scans.

The information below will provide you with the tools to help identify these inefficient queries and how to help speed them up.

There are two common ways to identify inefficient queries that may be creating CPU contention issues:

View MySQL’s process list:

From the MySQL CLI, look for queries that appear frequently in the process list, then evaluate them. You can see the running queries by:

mysql
show processlist;
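
If the Info column truncates longer queries, you can display the complete query text with:

show full processlist;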

View the slow query log:

To view this, first check to ensure the slow-query-log variables are enabled in the my.cnf:

log-slow-queries=/var/lib/mysqllogs/slow-log
long_query_time=5
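
Note that log-slow-queries is the legacy option name; on MySQL 5.1 and later the equivalent my.cnf settings would be:

slow_query_log=1
slow_query_log_file=/var/lib/mysqllogs/slow-log
long_query_time=5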

Now, let's look at an example of a slow query that got logged. Please note, these queries were logged here because they took longer to run than the threshold defined by long_query_time:

# Time: 110404 22:45:25
# User@Host: wpadmin[wordpressdb] @ localhost []
# Query_time: 14.609104  Lock_time: 0.000054 Rows_sent: 4  Rows_examined: 83532
SET timestamp=1301957125;
SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';

Here is a query that we know runs often and takes over 5 seconds to execute:

SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';

Within the MySQL CLI, run the following to view more details about how this query executes:

explain SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';
+----+-------------+------------+------+---------------+------+---------+------+-------+-------------+
| id | select_type | table      | type | possible_keys | key  | key_len | ref  | rows  | Extra       |
+----+-------------+------------+------+---------------+------+---------+------+-------+-------------+
|  1 | SIMPLE      | wp_table   | ALL  | NULL          | NULL |    NULL | NULL | 83532 | Using where |
+----+-------------+------------+------+---------------+------+---------+------+-------+-------------+

The two important fields here are:

- type: When you see "ALL", MySQL is performing a full table scan, which is a very CPU-intensive operation.
- rows: This is the estimated number of rows MySQL must examine to execute the query, so 83,000 rows is a lot to sort through.

In general, when you are creating an index, you want to choose a field with high cardinality, meaning the largest number of unique values. In our case, we are going to use the field ‘key’ as shown below:

create index key_idx on wp_table(`key`);
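
Since the query also filters on the carrier column, a composite index covering both columns is another option worth testing (key_carrier_idx is just an illustrative name):

create index key_carrier_idx on wp_table(`key`, `carrier`);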

Now, let's rerun our explain to see if the query is examining fewer rows:

explain SELECT * FROM wp_table WHERE `key`='5544dDSDFjjghhd2544xGFDE' AND `carrier`='13';
+----+-------------+------------+------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table      | type | possible_keys | key     | key_len | ref   | rows | Extra       |
+----+-------------+------------+------+---------------+---------+---------+-------+------+-------------+
|  1 | SIMPLE      | wp_table   | ref  | key_idx       | key_idx | 767     | const |   13 | Using where |
+----+-------------+------------+------+---------------+---------+---------+-------+------+-------------+

This is much better. Now each time that common query runs, MySQL uses the key_idx index and only has to examine 13 rows, instead of checking all 83,000.

Important note: Each time a table is updated, MySQL also has to update the indexes, which can create performance overhead. Therefore, it's recommended to keep the number of indexes per table low, perhaps in the 4-6 range.

How to see what indexes already exist on a table and their cardinality:

show indexes from wp_table;
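
The Cardinality column in the output is an estimate of the number of unique values in the index. As a rule of thumb, the closer the cardinality is to the total number of rows in the table, the more selective, and therefore more useful, the index is.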

How to remove a table index:

drop index key_idx on wp_table;
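
Alternatively, the same thing can be done through an alter statement:

alter table wp_table drop index key_idx;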