Table 18.3 Various types of optimality errors for any location model f(S : V)

No.  Error name              Error definition
1    Total error at S′       e(S′) = f(S′ : V′) − f(S′ : V)
2    Opportunity cost error  f(S* : V) − f(S′ : V′)
3    Optimality error        f(S* : V) − f(S′ : V)

Ideal error measures are zero.
aggregate is because we cannot afford, computationally, to make many function
evaluations of f(S : V). We want to aggregate to make the error small; however,
algorithms to do this typically require numerous function evaluations of f(S : V) and
thus cannot be used for this purpose. Usually it is practical, however, to compute
error measures for at least a few S, and we certainly recommend doing so whenever
possible. For example, given we know V and V′, we can use a sampling approach to
compute a random sample of size K of p-servers, say S1, …, SK, compute f(Sk : V′)
and f(Sk : V) for each sample element Sk, and then compute a sample error estimate
of any error measure of interest.
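The sampling approach just described can be sketched as follows. The rectilinear PMM objective, the demand data, and all function names here are illustrative assumptions of ours, not constructions from the text:

```python
import random

def pmm(S, V):
    # p-median (PMM) objective: total rectilinear distance from each
    # demand point in V to its nearest server in S.
    return sum(min(abs(v[0] - s[0]) + abs(v[1] - s[1]) for s in S) for v in V)

def sample_error_estimates(f, V, V_agg, candidates, p, K, seed=0):
    # Draw a random sample S_1, ..., S_K of p-servers from the candidate
    # locations and return the sampled errors f(S_k : V') - f(S_k : V).
    rng = random.Random(seed)
    errors = []
    for _ in range(K):
        S_k = rng.sample(candidates, p)
        errors.append(f(S_k, V_agg) - f(S_k, V))
    return errors

# Four demand points, aggregated in pairs to two ADPs (each ADP listed
# once per demand point it replaces).
V = [(0, 0), (1, 0), (4, 0), (5, 0)]
V_agg = [(0.5, 0), (0.5, 0), (4.5, 0), (4.5, 0)]
errs = sample_error_estimates(pmm, V, V_agg, candidates=V, p=2, K=5)
```

From the K sampled values one can then form an estimate (mean, maximum, and so on) of whichever error measure is of interest.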
Location error (Casillas 1987; Daskin et al. 1989) involves some comparison of
the p-server locations S* and S′. There are several difficulties with using this
concept. First, if we really knew S* we would not need to do the aggregation. Second,
when |S*| ≥ 2, there appears to be no accepted way to define the difference between
S* and S′. Third (assuming we do know S*), the function f(S : V), particularly if
it is the PMM function, may well be relatively flat in the neighborhood of S*, as
pointed out by Erkut and Bozkaya (1999). This means we could have some S′ with
f(S′ : V) only a little larger than f(S* : V), but S′ is “far” from S*. Fourth, S′ and S*
may not be unique global minima. Why are comparisons made between S′ and S*?
We speculate they are made in part due to unstated subjective evaluation criteria, or
known but unstated supplementary evaluation criteria. As another possible example
of the use of location error, we might solve the approximating model with three
different levels of aggregation (numbers of ADPs), obtaining three corresponding
optimal p-servers, say S′, S″ and S‴. In this case, differences between successive
pairs of these p-servers might be of interest; we might want to know how stable
the optimal server locations are as we change the level of aggregation (Murray and
Gottsegen 1997).
Subjective or unstated aggregation error criteria may well be important, but they are
not well defined. Thus two analysts can study the same DP aggregation and not
agree on whether or not it is good. Further, if a subjective evaluation derives from
some visual representation of DPs and ADPs, such an analysis may single out
some relatively simple visual error feature that is inappropriate for the actual model
structure. For example, a visual analysis could not evaluate the (computationally
intensive) absolute error for the PMM. Some generally accepted way to measure
location error is desirable.
How should we measure the location error diff(S, Y), the “difference”
between any two p-servers S and Y? The answer is not simple, because the
numbering of the elements of S and of Y is arbitrary, and we must find a
way to match up corresponding elements. Further, S and Y are not vectors,
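One way to handle the arbitrary numbering is a minimum-cost matching of the servers in S to those in Y. The following sketch is our illustrative assumption, not a measure proposed in the text; it minimizes summed Euclidean distances over all possible pairings:

```python
import math
from itertools import permutations

def diff(S, Y):
    # Candidate location-error measure: the minimum, over all pairings of
    # the p servers in S with the p servers in Y, of the summed Euclidean
    # distances between matched servers.  Minimizing over all pairings
    # removes any dependence on the arbitrary numbering of S and of Y.
    assert len(S) == len(Y)
    return min(
        sum(math.dist(s, y) for s, y in zip(S, perm))
        for perm in permutations(Y)
    )

# Two 2-servers that coincide up to renumbering, except one server has
# moved one unit; diff matches (0,0)->(0,0) and (10,0)->(10,1).
S = [(0, 0), (10, 0)]
Y = [(10, 1), (0, 0)]
d = diff(S, Y)
```

Brute force over all p! pairings is only viable for small p; for larger p the same matching could be computed with an assignment algorithm such as the Hungarian method.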