The documentation mentions that you can join pg_locks to pg_prepared_xacts, though it doesn't mention that if you join directly on the transaction ID, all it tells you is that there is a transaction lock, unless there are also some row-level locks. The table locks are listed as being held by a virtual transaction. A simpler query is the following:
postgres=# SELECT DISTINCT x.database, l.relation
             FROM pg_locks l
             JOIN pg_prepared_xacts x
               ON l.virtualtransaction = '-1/' || x.transaction::text
            WHERE l.locktype != 'transactionid';
database | relation
----------+----------
postgres | 16390
postgres | 16401
(2 rows)
This tells you which relations in which databases have been touched by the remaining prepared transactions. We can't tell the names of the tables, because we would need to connect to each of those databases to look them up.
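For example, after connecting to the postgres database shown in the output, one of those OIDs can be resolved with a regclass cast (16390 here is simply the value from the sample output above):
postgres=# SELECT 16390::regclass AS table_name;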
You can then fully scan each of those tables, looking for changes like the following:
SELECT * FROM table WHERE xmin = 121083;
which will show you all the rows in that table inserted by transaction 121083, taken from the transaction column of pg_prepared_xacts.
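The transaction IDs of all outstanding prepared transactions can be listed, together with some useful context, like this:
postgres=# SELECT transaction, gid, prepared, owner, database
             FROM pg_prepared_xacts;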
Actions for heavy users of temporary tables
If you are a heavy user of temporary tables in your applications, then there are some
additional actions you may need to perform.
How to do it...
There are four main things to check, which are as follows (a short example of each follows the list):
1. Make sure you run VACUUM on the system tables, or enable autovacuum to do this for you.
2. Monitor running queries to see how many temporary files are active and how large they are.
3. Tune memory parameters. Think about increasing the temp_buffers parameter, but be careful not to overallocate memory by doing so.
4. Separate temp table I/O. In a query-intensive system, you may find that reads and writes to temporary files exceed reads and writes on permanent data tables and indexes. In this case, you should create one or more new tablespaces on separate disks, and ensure that the temp_tablespaces parameter is configured to use the additional tablespace(s).
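For step 1, the system catalogs that churn the most under heavy temporary table use are typically pg_class, pg_attribute, and pg_type, because every temporary table inserts and later deletes rows in them. A minimal manual pass could look like the following, with SHOW autovacuum confirming whether autovacuum is already taking care of this for you:
postgres=# VACUUM pg_catalog.pg_class;
postgres=# VACUUM pg_catalog.pg_attribute;
postgres=# VACUUM pg_catalog.pg_type;
postgres=# SHOW autovacuum;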
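For step 2, on releases that have the temp_files and temp_bytes counters in pg_stat_database (9.2 or later), a per-database summary is easy to get; note that these counters are cumulative since the last statistics reset, and that setting log_temp_files will additionally log each temporary file when it is deleted:
postgres=# SELECT datname, temp_files, temp_bytes
             FROM pg_stat_database
            WHERE temp_files > 0;
postgres=# SHOW log_temp_files;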
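For step 3, temp_buffers is a per-session setting, and raising it only has an effect if it is changed before the session first touches a temporary table; the 32MB figure below is an arbitrary illustration, not a recommendation:
postgres=# SET temp_buffers = '32MB';
postgres=# SHOW temp_buffers;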
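For step 4, the tablespace name and directory below are placeholders; the directory must already exist on the new disk and be owned by the database server's operating system user. ALTER SYSTEM is available from 9.4 onwards; on older releases, set temp_tablespaces in postgresql.conf instead:
postgres=# CREATE TABLESPACE temp_space LOCATION '/mnt/ssd1/pgtemp';
postgres=# ALTER SYSTEM SET temp_tablespaces = 'temp_space';
postgres=# SELECT pg_reload_conf();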
 