Helping to Reduce the Page Compression Failure Rate
- by Vasil Dimov
When InnoDB compresses a page it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit we call that a compression failure. In this case InnoDB needs to split up the page and try to compress it again. Needless to say, compression failures are bad for performance and should be minimized.

Whether the result of the compression will fit largely depends on the data being compressed, and some tables and/or indexes may contain more compressible data than others. So it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per-table or even on a per-index basis, wouldn't it?

This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields:
+-----------------+--------------+------+
| Field           | Type         | Null |
+-----------------+--------------+------+
| database_name   | varchar(192) | NO   |
| table_name      | varchar(192) | NO   |
| index_name      | varchar(192) | NO   |
| compress_ops    | int(11)      | NO   |
| compress_ops_ok | int(11)      | NO   |
| compress_time   | int(11)      | NO   |
| uncompress_ops  | int(11)      | NO   |
| uncompress_time | int(11)      | NO   |
+-----------------+--------------+------+
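Note that InnoDB gathers these per-index statistics only when told to, because the bookkeeping adds a little overhead. Here is a minimal sketch of getting some rows to show up in the table, assuming the innodb_cmp_per_index_enabled system variable is available in your 5.6 build; the database and table names (mydb.t1) are made up for illustration:

-- Turn on per-index compression statistics gathering (off by default
-- because of the extra bookkeeping overhead).
SET GLOBAL innodb_cmp_per_index_enabled = ON;

-- A hypothetical compressed table to monitor; KEY_BLOCK_SIZE=8 requests
-- an 8 KB compressed page size. ROW_FORMAT=COMPRESSED assumes
-- innodb_file_per_table=ON and innodb_file_format=Barracuda.
CREATE TABLE mydb.t1 (
  id      INT PRIMARY KEY,
  payload VARCHAR(2000),
  KEY idx_payload (payload(100))
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;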
The layout is similar to that of INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name, table_name, index_name" instead of by "page_size". So a query like
SELECT
  database_name,
  table_name,
  index_name,
  compress_ops - compress_ops_ok AS failures
FROM information_schema.innodb_cmp_per_index
ORDER BY failures DESC;
would reveal the most problematic tables and indexes, i.e. the ones with the most compression failures.

From there on, the way to improve performance would be to try to increase the compressed page size, or to change the structure of the table/indexes or the data being stored, and then check whether that has a positive impact on performance.
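For example, a sketch only, assuming a hypothetical table mydb.t1 currently using KEY_BLOCK_SIZE=4: one could rebuild it with a larger compressed page size and then watch whether the failure counters keep growing:

-- Rebuild the table with a larger compressed page size (8 KB instead of
-- 4 KB), giving the compressed result more room to fit.
ALTER TABLE mydb.t1 KEY_BLOCK_SIZE=8;

-- After the workload has run for a while, check the failure percentage
-- again for just this table.
SELECT
  database_name,
  table_name,
  index_name,
  compress_ops,
  compress_ops - compress_ops_ok AS failures,
  100.0 * (compress_ops - compress_ops_ok) / NULLIF(compress_ops, 0)
    AS failure_pct
FROM information_schema.innodb_cmp_per_index
WHERE database_name = 'mydb' AND table_name = 't1';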