together are quite selective. Consider, for example, the query, “Find people
who have brown hair and glasses, are between 30 and 40, have blue eyes, work in the
computer industry, and live in California.” This would entail intersecting the brown
hair bitmap, the glasses bitmap, the union of the bitmaps for the ages between 30 and
40, and so on.
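A minimal sketch of such a statement is shown below; the table name PERSON, the
literal values, the host variables :DOB1 and :DOB2, and the spelling DATE_OF_BIRTH
are assumptions, while the other column names are the ones used in the index
discussion that follows.

    SELECT   ...
    FROM     PERSON
    WHERE    HAIRCOLOUR = 'BROWN'
             AND
             GLASSES = 'Y'
             AND
             EYECOLOUR = 'BLUE'
             AND
             INDUSTRY = 'COMPUTER'
             AND
             STATE = 'CA'
             AND
             DATE_OF_BIRTH BETWEEN :DOB1 AND :DOB2

With bitmap indexes, the DBMS would AND together the bitmaps for the five equal
predicates and the union of the bitmaps covering the date range.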
With current disks, a semifat B-tree index is the fastest solution even
for CIA-like applications, provided there are not too many range predicates.
For the example above, an index starting with columns HAIRCOLOUR,
GLASSES, EYECOLOUR, INDUSTRY, and STATE in any order, together with
DATE OF BIRTH as the sixth column, would provide excellent performance
because there would be six matching columns: the index slice containing the
suspects would be very thin. If the result table contains a hundred rows, the
QUBE for the local response time would be
101 × 10 ms + 100 × 0.01 ms + 100 × 0.1 ms ≈ 1 s
Reading long and efficiently compressed bitmap vectors would probably take
more CPU time. In addition, with bitmap indexes, single-row inserts, updates, and
deletes are slow and disruptive because of locking. On the other hand, when
the number of potential WHERE clauses is large, it becomes impossible to have
good B-tree indexes for every WHERE clause.
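A corresponding index sketch is shown below; the index and table names are
assumptions and, as noted above, the order of the first five columns does not
matter as long as DATE_OF_BIRTH comes last, so that the range predicate is still
a matching column.

    CREATE INDEX PERSON_IX1
    ON           PERSON
                 (HAIRCOLOUR, GLASSES, EYECOLOUR, INDUSTRY, STATE, DATE_OF_BIRTH)

With six matching columns, the index slice presumably contains only the roughly 100
qualifying index rows; this is where the figures in the QUBE above come from: 101
random touches (one to the index slice plus 100 to the table rows), 100 sequential
touches, and 100 FETCH calls.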
INDEX ORING
Index ORing is a more significant feature than index ANDing.
Before multiple index access paths were introduced as part of the optimizers'
capabilities, which access path would be chosen for the SELECT statement
shown in SQL 10.3? If the table had single-column indexes A and B (refer to
Fig. 10.3), the optimizer had only one reasonable choice, namely a full table scan,
because both predicates are now non-BT: the DBMS cannot reject a row when
a predicate is evaluated false, as we discussed in Chapter 6.
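SQL 10.3 itself is not reproduced here, but from the discussion it presumably has
roughly the following shape; the table name T and the select list are placeholders,
and the essential point is that the two predicates are combined with OR, which is
what makes each of them non-BT on its own.

    SELECT   ...
    FROM     T
    WHERE    A = :A
             OR
             B = :B

With index ORing, the pointer sets qualifying from each index can be merged
instead, as Figure 10.3 illustrates.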
Figure 10.3 Index ORing is good news. [Figure: the pointer sets qualifying from
index A and index B are sorted and merged, duplicates are removed, the table rows
are accessed (TX), and the result rows are sorted.]