[Bar chart: per-query execution costs under the current configuration; y-axis from 0 to 300 units]
> # identified Q17 and Q20 as the ones that could be improved with more space (Q17 is the worst with 90 units)
> # Use constraints to tune again so that no query is worse than 1.2x the cost under refC, but additionally
> # Q17 is expected to execute in fewer than 60 units. For that, try to get as close as possible to 2000MB
> $ct1 = "FOR Q in W ASSERT cost(Q,C) <= cost(Q,refC)*1.2"
> $ct2 = "ASSERT cost(W['Q17'], C) <= 60"
> $ct3 = "SOFT ASSERT size(C) = 2000"
> TuneConstrained-Workload -Workload $w -Timeout 600 -Constraints $ct1, $ct2, $ct3
> ...
FIGURE 12.4
(Continued)
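To make the semantics of the three constraints concrete, the following Python sketch shows how a tuner might evaluate a candidate configuration against them. All names and numbers here (REF_COSTS, satisfies_hard_constraints, soft_penalty) are illustrative assumptions; the actual constraint language is the one shown in the figure.

```python
# Hypothetical sketch of the checks encoded by $ct1-$ct3 in Figure 12.4.
# Not part of any real tuning tool's API.

REF_COSTS = {"Q17": 90, "Q20": 75}  # assumed per-query costs under refC (units)
SIZE_TARGET_MB = 2000               # soft target for size(C)

def satisfies_hard_constraints(costs, ref_costs=REF_COSTS):
    """$ct1: every query stays within 1.2x of its cost under refC;
    $ct2: Q17 executes in fewer than 60 units."""
    within_ratio = all(costs[q] <= ref * 1.2 for q, ref in ref_costs.items())
    return within_ratio and costs["Q17"] <= 60

def soft_penalty(size_mb):
    """$ct3 (SOFT ASSERT): prefer configurations whose size is
    as close as possible to 2000 MB."""
    return abs(size_mb - SIZE_TARGET_MB)
```

Among the candidate configurations that satisfy both hard constraints, the tuner would then pick the one minimizing the soft penalty, which is the usual reading of a SOFT ASSERT as an optimization objective rather than a filter.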
new solution is associated with a custom, ad hoc set of experiments that validate the approach. An important challenge is to devise a principled way to generate databases and workloads to compare competing tools that might be based on different approaches. Some work in the area assumes that the underlying database system does not change across alternative physical design tuners. If this assumption does not hold, it is not even clear how the different tuners could (or should) be compared. This is a rather deep problem that might have profound implications for future research on physical database design. We next comment on three components of a physical design benchmark: the set of databases and workloads to tune, a baseline configuration to compare recommendations against, and the evaluation metrics themselves.
12.4.1 Database/Workloads
A very important component of a benchmark is the actual databases and
workloads over which the physical design would be tuned. Numerous examples
in the literature show how careful we need to be when designing benchmarks: a poorly designed benchmark can give an unfair advantage to certain approaches or can open the door to specific ways of “gaming the benchmark.” Database
and workload generation for the purposes of physical design benchmarking is