How it works...
At each run, the scripts make static copies of the tables pg_stat_user_tables, pg_statio_user_tables, pg_stat_user_indexes, and pg_statio_user_indexes, compare the current copies with the ones saved at the previous run, and save the timestamped deltas to the log tables stat_user_tables_delta_log and stat_user_indexes_delta_log, which can then be analyzed later to learn about access and I/O patterns.
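The exact schema depends on the scripts, but as a rough sketch (assuming the previous snapshot is kept in a table named stat_user_tables_last, created as a copy of pg_stat_user_tables, and that the log table has matching columns), each run could compute and store the deltas like this:
-- Hypothetical table and column names; the real scripts may differ.
INSERT INTO stat_user_tables_delta_log
       (snap_time, relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch)
SELECT now(),
       cur.relname,
       cur.seq_scan      - prev.seq_scan,
       cur.seq_tup_read  - prev.seq_tup_read,
       cur.idx_scan      - prev.idx_scan,
       cur.idx_tup_fetch - prev.idx_tup_fetch
  FROM pg_stat_user_tables cur
  JOIN stat_user_tables_last prev USING (relid);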
The latest set of deltas is also kept in the tables stat_user_tables_delta and stat_user_indexes_delta, which can be used by external monitoring systems, such as Cacti, to produce a graphical representation of the activity.
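For instance, a monitoring probe could read the most active tables from the latest delta with a query along these lines (the column names are assumed to mirror pg_stat_user_tables):
-- Assumed columns; adjust to the actual delta table layout.
SELECT relname, seq_tup_read, idx_tup_fetch
  FROM stat_user_tables_delta
 ORDER BY seq_tup_read DESC
 LIMIT 10;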
There's more...
These scripts just collect data indefinitely. This should not cause performance problems, as the large log tables have no indexes, so inserts into them are fast. However, if you are low on disk space and have many tables, you may want to introduce a rotation scheme for these tables that discards older data.
There is a script in the downloaded package that does this. Running the following command each Sunday keeps four weeks of data:
psql mydatabase < ./pg_statlogs_rotate.sql
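If you prefer to roll your own rotation, the core of such a scheme is simply deleting rows older than the retention window. Assuming the log tables carry a timestamp column named snap_time (a hypothetical name; check the actual schema), something like the following would do:
-- Discard delta rows older than four weeks.
DELETE FROM stat_user_tables_delta_log
 WHERE snap_time < now() - interval '4 weeks';
DELETE FROM stat_user_indexes_delta_log
 WHERE snap_time < now() - interval '4 weeks';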
Another statistics collection package
There is also a package available at http://pgstatspack.projects.postgresql.org/ for collecting similar data.
Finding what makes SQL slow
A SQL statement can be slow for many reasons. The following is a short list of these reasons, with at least one way to recognize each.
Too much data is processed
Run the query with explain analyse to see how much data is processed in order to complete the query, as follows:
mydb=# explain analyse select count(*) from t;
QUERY PLAN
----------------------------------------------------------------------
Aggregate (cost=4427.27..4427.28 rows=1 width=0) \
(actual time=32.953..32.954 rows=1 loops=1)
-> Seq Scan on t (cost=0.00..4425.01 rows=901 width=0) \