Is the second run of the same query also slow?
This test is related to the previous one, and it checks if the slowdown may be caused by some
of the needed data not fitting in memory, or being pushed out of memory by other queries.
If the second run of the query is fast, then the problem is most likely that there is not
enough memory to keep all of the needed data cached.
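A quick way to confirm this is to time both runs and look at the buffer counters reported by EXPLAIN; the following is a minimal sketch, and the table and filter used here are made up purely for illustration:

\timing on
-- first run: "read" counts in the buffers output mean blocks came from disk
explain (analyze, buffers)
select * from orders where customer_id = 42;
-- second run: if it is much faster and shows mostly "shared hit",
-- the data simply wasn't cached in memory the first time
explain (analyze, buffers)
select * from orders where customer_id = 42;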
Again, see the Performance & Concurrency chapter.
Table and index bloat
Table bloat is something that can develop over time if maintenance processes can't run
properly. Due to the way MVCC works, your table can come to contain a lot of old versions of
rows if these old versions can't be removed in a timely manner.
There are several ways this can develop, but all of them involve lots of updates, deletes, or
inserts happening while autovacuum is prevented from doing its job of getting rid of old tuples.
It is also possible that, even after the old row versions are removed, the table stays at its newly
acquired large size, because visible rows located near the end of the table prevent PostgreSQL
from shrinking the file. There have been cases where a one-row table grew to several
gigabytes in size.
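To see how this happens, here is a small sketch that you could try on a throwaway test database; the table name is made up, and autovacuum is switched off for it only to make the effect easy to reproduce:

-- create a tiny table and stop autovacuum from cleaning it up
create table bloat_demo (id int primary key, payload text)
  with (autovacuum_enabled = off);
insert into bloat_demo values (1, repeat('x', 1000));

-- every update leaves behind a dead row version
update bloat_demo set payload = repeat('y', 1000);
-- ...repeat the update many times, for example from a script...

-- the table is now much larger than its single live row needs
select pg_size_pretty(pg_relation_size('bloat_demo'));

-- a plain vacuum marks the dead space as reusable, but it usually
-- cannot give the space back to the operating system
vacuum bloat_demo;
select pg_size_pretty(pg_relation_size('bloat_demo'));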
If you suspect some table may contain bloat, then run the following:
select pg_relation_size(relid) as tablesize, schemaname, relname, n_live_tup
from pg_stat_user_tables
where relname = <tablename>;
Then check whether the ratio of tablesize to n_live_tup makes sense.
For example, if the table size is tens of megabytes but it holds only a small number of rows,
then you have bloat.
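If the numbers do look wrong, one way to turn them into a rough bytes-per-live-row figure, and then reclaim the space, is sketched below; keep in mind that VACUUM FULL rewrites the whole table under an exclusive lock, so it needs a maintenance window:

-- rough average size per live row; very large values hint at bloat
select relname,
       pg_size_pretty(pg_relation_size(relid)) as tablesize,
       n_live_tup,
       pg_relation_size(relid) / greatest(n_live_tup, 1) as bytes_per_live_row
from pg_stat_user_tables
order by pg_relation_size(relid) desc;

-- vacuum full rewrites the table into a new, compact file
vacuum full <tablename>;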
See also
•  Is anybody using a specific table / Collecting daily usage statistics shows one way
to collect info on table changes
•  The whole chapter, Performance & Concurrency
Investigating and reporting a bug
When you find that PostgreSQL is not doing what it should, it's time to investigate.
 