optimization problem is solved. But ignoring large portions of the available stochastic
information has been shown to be detrimental to solution quality (e.g., [6]).
Shapiro et al. [1, 9] proposed an alternative approximation technique called sample
average approximation (SAA) to reduce the number of scenarios. They suggest using
only a subset of the scenarios, randomly sampled according to the distribution over
scenarios, to represent the full scenario space. An important theoretical justification for
this method is that as the number of scenarios sampled increases, the solution to the
approximate problem converges to an optimal solution in the expected sense. Indeed,
the convergence rate is exponentially fast.
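As a minimal illustration of the SAA idea (the decision set, scenario distribution, and payoff function below are hypothetical toys, not the TAC formulations studied in this paper), one can estimate the expected objective of each candidate decision from a random sample of scenarios and select the sample-average maximizer:

```python
import random

# Hypothetical toy problem: decisions are integers 0..4; scenarios are
# integers 0..9 drawn uniformly at random.
decisions = range(5)
scenarios = range(10)

def G(x, w):
    # Illustrative payoff: penalize distance, reward matching parity.
    return -(x - w) ** 2 + (5 if x % 2 == w % 2 else 0)

def saa_solve(n_samples, rng):
    """Sample n_samples scenarios i.i.d. and maximize the
    sample-average objective over all decisions."""
    sample = [rng.choice(scenarios) for _ in range(n_samples)]
    def g_hat(x):
        return sum(G(x, w) for w in sample) / len(sample)
    return max(decisions, key=g_hat)

rng = random.Random(0)
x_hat = saa_solve(50, rng)
print(x_hat)
```

As the sample size grows, the sample-average objective concentrates around the true expectation, which is the convergence property cited above.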
In this paper, we attempt to scale up the SAA method to harder problems than
those previously studied (e.g., [1, 9]). We tackle two stochastic optimization problems
that arise naturally in the context of the annual Trading Agent Competition (TAC)
(see http://www.sics.se/tac). The first problem is a bidding problem inspired by the
TAC Travel game; the second problem is a scheduling problem inspired by the TAC
Supply Chain Management game. Nested inside each of these problems is an NP-hard
optimization problem.
We find that runtime increases linearly with the number of scenarios in the bidding
problem, whereas it increases exponentially in the scheduling problem. Indeed, given
reasonable time constraints we cannot reliably solve the scheduling problem with a
sample size of more than 8 out of 2^40 scenarios. We conclude that the theory that
justifies the SAA method is inapplicable to some stochastic optimization problems. Con-
sequently, we experiment with optimizing the tradeoff between the number of scenarios
and the number of policies (i.e., candidate solutions). We generate multiple policies by
sampling from the set of scenarios multiple times, and solving the ensuing approxima-
tion problems. We evaluate these policies with respect to a large, yet fixed, sample of
scenarios. In the bidding problem, as the theory suggests, we find that maximizing the
number of scenarios yields the best solution; in the scheduling problem, however, it is
necessary to evaluate multiple policies to find the best solution, since increasing the
number of scenarios becomes expensive very quickly.
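The scenarios-versus-policies tradeoff described above can be sketched as follows (the toy objective and all sample sizes here are illustrative assumptions, not the paper's actual experimental setup): generate several candidate policies by re-solving the SAA problem on independent small samples, then rank them against one large, fixed evaluation sample:

```python
import random

decisions = range(5)
scenarios = range(10)

def G(x, w):
    return -(x - w) ** 2          # toy objective, illustrative only

def solve_saa(sample):
    """Maximize the sample-average objective over the finite decision set."""
    return max(decisions,
               key=lambda x: sum(G(x, w) for w in sample) / len(sample))

def evaluate(x, eval_sample):
    """Score a policy against the large, fixed evaluation sample."""
    return sum(G(x, w) for w in eval_sample) / len(eval_sample)

rng = random.Random(1)

# One large, fixed sample of scenarios used only for evaluation.
eval_sample = [rng.choice(scenarios) for _ in range(2000)]

# Generate multiple policies from independent small samples; keep the best.
policies = [solve_saa([rng.choice(scenarios) for _ in range(8)])
            for _ in range(10)]
best = max(policies, key=lambda x: evaluate(x, eval_sample))
print(best)
```

Because each small-sample solve is cheap, many policies can be generated and screened in the time a single large-sample solve would take, which is the tradeoff exploited in the scheduling problem.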
2 Sample Average Approximation
Following [9], we are interested in solving optimization problems of the form:

    v = \max_{x \in X} g(x)    (1)

where

    g(x) = \mathbb{E}_Q[G(x, Y)]    (2)

Here, x ∈ X is a vector of decision variables that takes values in the finite set X; Y is a
vector of discrete random variables with joint probability distribution Q over universe
Ω; G(x, ω) is a real-valued function of x ∈ X and ω ∈ Ω; and E_Q[G(x, Y)] is the
expected value of G at x, i.e.,

    \mathbb{E}_Q[G(x, Y)] = \sum_{\omega \in \Omega} Q(\omega)\, G(x, \omega)    (3)
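Concretely, for a small finite Ω the expectation in (3) can be computed by direct probability-weighted summation, while the SAA estimate replaces it with an unweighted average over N scenarios drawn i.i.d. from Q (the universe, distribution, and payoff below are hypothetical examples):

```python
import random

# Hypothetical finite universe Omega and distribution Q over it.
omega = [0, 1, 2, 3]
Q = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}

def G(x, w):
    return x * w                  # illustrative payoff function

def exact_expectation(x):
    # Equation (3): sum over all of Omega, weighted by Q.
    return sum(Q[w] * G(x, w) for w in omega)

def saa_estimate(x, n, rng):
    # SAA: unweighted average over n scenarios sampled i.i.d. from Q.
    draws = rng.choices(omega, weights=[Q[w] for w in omega], k=n)
    return sum(G(x, w) for w in draws) / n

rng = random.Random(0)
print(exact_expectation(2))       # 2 * (0.1*0 + 0.2*1 + 0.3*2 + 0.4*3) = 4.0
print(saa_estimate(2, 10000, rng))  # close to 4.0
```

The SAA method then maximizes the estimator over x in place of the exact objective g(x).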