Determine table size in MySQL

Understanding which tables within your database contain large amounts of data can be very useful. Tables holding a ton of data could indicate that some pruning is needed.

Perhaps the space is taken up by stats data, lead generation, or some other type of data that dates back 10 years. It will be up to the DBAs to determine whether any of that data can be archived, of course.

Very large tables can carry a performance penalty if the queries against them are not well written and return too many rows. They can also slow down your database dumps, or make the dumps use more disk space. Disk space in general is a very real concern as well!

You can easily determine which tables within your database(s) are using up significant space by running:

mysql> SELECT table_schema as `Database`, table_name AS `Table`,
round(((data_length + index_length) / 1024 / 1024), 2) `Size in MB`
FROM information_schema.TABLES
ORDER BY (data_length + index_length) DESC;

And the returned results look something like the following:

+---------------------+---------------------------------------+------------+
| Database            | Table                                 | Size in MB |
+---------------------+---------------------------------------+------------+
| example_one         | tracker                               |    2854.70 |
| example_one         | corehits                              |    1321.00 |
| example_two         | reg_submissions                       |    1313.14 |
| example_one         | unsubscribes                          |    1312.00 |
| example_two         | admin                                 |     939.55 |
| example_one         | members                               |     930.08 |
| example_one         | submissions                           |     829.91 |
| example_two         | tracker_archive                       |     813.17 |
| example_two         | tracker_archive2                      |     757.56 |
| example_two         | affiliates                            |     749.56 |
| example_two         | post_data                             |     712.92 |
| example_one         | signups                               |     711.52 |
| example_one         | guests                                |     703.31 |
| example_one         | success                               |     586.41 |
| example_two         | tracker                               |     574.70 |
| example_one         | e_data                                |     434.28 |
| example_one         | reg_data                              |     426.88 |
+---------------------+---------------------------------------+------------+
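
If you only care about one specific database, you can filter on table_schema and cap the number of rows returned. This is just a small variation on the query above; the schema name example_one is a placeholder, so substitute your own:

mysql> SELECT table_schema AS `Database`, table_name AS `Table`,
round(((data_length + index_length) / 1024 / 1024), 2) `Size in MB`
FROM information_schema.TABLES
WHERE table_schema = 'example_one'
ORDER BY (data_length + index_length) DESC
LIMIT 10;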

Another possible way to run this, which I recently found on https://www.percona.com/blog/2008/02/04/finding-out-largest-tables-on-mysql-server/, is posted below.

This one shows more detailed information about each table, including how many rows it has, how much data it holds, the index size, the total size, and, in the last column, how large the index is compared to the data. If you see a large index size (idx column), it could be worth checking whether you have any redundant indexes in place, as shown after the sample output below.

The command is:

mysql> SELECT CONCAT(table_schema, '.', table_name),
CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows,
CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA,
CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') idx,
CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size,
ROUND(index_length / data_length, 2) idxfrac
FROM information_schema.TABLES
ORDER BY data_length + index_length DESC
LIMIT 10;

I shortened the column titles a bit so the table fits on this blog, but the output will look something like:

+-----------------------+---------+-------+-------+------------+---------+
| CONCAT(.. table_name) | rows    | DATA  | idx   | total_size | idxfrac |
+-----------------------+---------+-------+-------+------------+---------+
| page_cache            | 38.58M  | 3.09G | 2.97G | 6.06G      |    0.96 |
| exit_pages            | 106.84M | 4.78G | 1.02G | 5.80G      |    0.21 |
| shipping_log004       | 33.94M  | 4.06G | 1.16G | 5.22G      |    0.29 |
| visitor_data          | 9.55M   | 4.61G | 0.09G | 4.70G      |    0.02 |
| orders                | 2.40M   | 1.57G | 0.30G | 1.87G      |    0.19 |
| marketing_info        | 25.16M  | 1.85G | 0.00G | 1.85G      |    0.00 |
| clients               | 1.27M   | 1.29G | 0.13G | 1.42G      |    0.10 |
| directory             | 5.87M   | 0.93G | 0.17G | 1.10G      |    0.18 |
| promos                | 2.31M   | 0.79G | 0.05G | 0.84G      |    0.07 |
| product_information   | 9.33M   | 0.39G | 0.35G | 0.74G      |    0.89 |
+-----------------------+---------+-------+-------+------------+---------+
10 rows in set (0.26 sec)
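
If the idx column looks disproportionately large for a table (a high idxfrac), one way to start digging is to list the indexes on that table and look for overlapping ones. The table name page_cache below is just taken from the sample output above, so substitute your own (and either USE the relevant database first or qualify the table name):

mysql> SHOW INDEX FROM page_cache;

Or, to see the index definitions in context with the rest of the table:

mysql> SHOW CREATE TABLE page_cache\G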

You may also want to see if you can set a retention policy on the data stored in a table if it's there for caching or logging purposes.
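
As a rough sketch of what that might look like, the statement below deletes log rows older than 90 days. The table name tracker_archive and the created_at column are only placeholders here; deleting in batches with LIMIT keeps each transaction (and the locking it causes) small on a busy server:

mysql> DELETE FROM tracker_archive
WHERE created_at < NOW() - INTERVAL 90 DAY
LIMIT 10000;

Run it repeatedly (for example from a cron job) until it affects zero rows.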

Perhaps you may need to run OPTIMIZE TABLE to reclaim the physical storage space. Just keep in mind that OPTIMIZE TABLE will lock your tables, so it may be best to run it during a low traffic time. See the following URL for more information:
http://dev.mysql.com/doc/refman/5.7/en/optimize-table.html
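
As a quick sketch, you can first check the data_free column in information_schema.TABLES to get a rough idea of how much space might be reclaimable, then optimize the table. The schema and table names here are just the placeholders used earlier:

mysql> SELECT table_schema, table_name,
round(data_free / 1024 / 1024, 2) `Free in MB`
FROM information_schema.TABLES
WHERE table_schema = 'example_one' AND table_name = 'tracker';

mysql> OPTIMIZE TABLE example_one.tracker;

On InnoDB tables this typically gets mapped to a table rebuild, and the output will note "Table does not support optimize, doing recreate + analyze instead", which is expected.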