Another route is to use the contrib/pgstattuple module, supplied with PostgreSQL. This
provides overkill statistics about what's happening in your tables and indexes, which it derives
by scanning the whole table or index, and literally counting everything. It's very good, and I am
not dismissing it. Just use it carefully: if you have time to scan the table, you may as well
have VACUUMed the whole table anyway.
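If the module isn't already available in your database, install it first. On PostgreSQL 9.1 and
later, the contrib modules are packaged as extensions, so (assuming the contrib packages are
installed on the server) the following is all that's needed; on older releases, you run the
pgstattuple.sql script from contrib instead:
test=> CREATE EXTENSION pgstattuple;  -- PostgreSQL 9.1 or later
CREATE EXTENSION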
Scan tables using pgstattuple() as follows:
test=> SELECT * FROM pgstattuple('pg_catalog.pg_proc');
-[ RECORD 1 ]------+-------
table_len | 458752
tuple_count | 1470
tuple_len | 438896
tuple_percent | 95.67
dead_tuple_count | 11
dead_tuple_len | 3157
dead_tuple_percent | 0.69
free_space | 8932
free_percent | 1.95
and scan indexes using pgstatindex() as follows:
postgres=> SELECT * FROM pgstatindex('pg_cast_oid_index');
-[ RECORD 1 ]------+------
version | 2
tree_level | 0
index_size | 8192
root_block_no | 1
internal_pages | 0
leaf_pages | 1
empty_pages | 0
deleted_pages | 0
avg_leaf_density | 50.27
leaf_fragmentation | 0
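You can also drive these functions from the catalogs to get an overview of a whole schema,
rather than checking one relation at a time. The following is only a sketch: it assumes
PostgreSQL 9.3 or later (for LATERAL), a schema named public, and, as cautioned above, it
reads every table in full:
-- Sketch only: dead-tuple percentage for every ordinary table in the public schema.
-- Assumes PostgreSQL 9.3+ (LATERAL); the ::text cast matches the text-argument form
-- of pgstattuple() shown above.
SELECT c.oid::regclass AS table_name,
       s.table_len,
       s.dead_tuple_percent,
       s.free_percent
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
CROSS JOIN LATERAL pgstattuple(c.oid::regclass::text) AS s
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
ORDER BY s.dead_tuple_percent DESC;

-- Similarly for indexes: average leaf density of the B-tree indexes on one table
-- (pgstatindex() handles B-tree indexes only); low values on a large index suggest
-- it is worth rebuilding.
SELECT i.indexrelid::regclass AS index_name,
       (pgstatindex(i.indexrelid::regclass::text)).avg_leaf_density
FROM pg_index i
WHERE i.indrelid = 'pg_catalog.pg_proc'::regclass;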
There's more...
You may want this as a Nagios plugin. Look at check_postgres_bloat, part of the
check_postgres set of plugins, which provides some flexible options for assessing bloat.
Unfortunately, it's not that well documented, though if you've read this, it should make
sense. You'll need to play with it to get the thresholds right anyway, so that shouldn't be
a problem.
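The details vary between check_postgres versions, but run from the shell (as Nagios would
run it), a bloat check looks roughly like the following; the option names are taken from the
check_postgres documentation, and the thresholds are purely illustrative:
# Illustrative thresholds only: warn at ~100 MB of estimated bloat, critical at ~200 MB.
check_postgres_bloat --port=5432 --warning='100 M' --critical='200 M'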
 