How it works...
ANALYZE updates statistics about data size and data distribution in all tables. If a table's size has changed significantly without its statistics being updated, then PostgreSQL's statistics-based optimizer can choose a bad plan. Running the ANALYZE command manually updates the statistics for all tables.
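For example, you can refresh the statistics manually from psql, either for the whole database or for a single table, and check when a table was last analyzed; the table name t here is just an example:
db=# ANALYZE;
ANALYZE
db=# ANALYZE t;
ANALYZE
db=# SELECT last_analyze, last_autoanalyze FROM pg_stat_user_tables WHERE relname = 't';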
There's more...
There are a few other common problems.
Do the queries return significantly more data than earlier?
If you initially tested your queries on almost empty tables, it is entirely possible that you are now querying much more data than you need.
As an example, if you select all of a user's items and then show only the first 10, this runs very fast when the user has 10 or even 50 items, but not so well when they have 50,000.
Check that you don't ask for more data than you need; use the LIMIT clause to return less data to your application (and to give the optimizer at least a chance to select a plan that processes less data and may also have a lower startup cost), as shown below.
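For example, rather than fetching every item and showing only the first ten in the application, let the database return just those ten rows; the items table, user_id column, and ORDER BY key are hypothetical:
db=# SELECT * FROM items WHERE user_id = 123 ORDER BY created_at DESC LIMIT 10;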
Do the queries also run slowly when run alone?
If you can, try to run the same slow query when the database has few or no other queries running concurrently.
If it runs well in this situation, it may be that the database host is simply overloaded (CPU, memory, or disk I/O), and a plan that worked well under light load is not so good any more. It may even be that the query plan was not very good to begin with, and you were fooled by modern computers being really fast.
db=# select count(*) from t;
  count
---------
 1000000
(1 row)
Time: 329.743 ms
As you can see, scanning one million rows takes just 0.3 seconds on a few-year-old laptop, if the rows are already cached.
However, if you have a few such queries running in parallel, plus other queries competing for memory, this query can slow down by an order of magnitude or two.
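To get an idea of what else is running at the same time, you can look at the pg_stat_activity system view (the exact columns it exposes vary between PostgreSQL versions):
db=# SELECT * FROM pg_stat_activity;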
See the chapter Performance and Concurrency for general performance-tuning advice.
 