we ran all combinations of the parameter settings for which we predicted the running
time would be less than 25 seconds (in TAC SCM games, each day lasts 15 seconds).
We made these predictions as follows. For all values of S, we recorded the average time
to solve Equation 4; call this α(S). For all values of E, we recorded the average time to
solve Equation 5; call this β(E). For each combination of E, S, and P, we predicted the
running time would be P(α(S) + β(E)). Note that because of these time constraints,
we never exhausted the memory of our machines.
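For concreteness, this prediction-and-pruning step can be sketched as follows. The function and variable names, the dictionary representation of the timing measurements, and the way combinations are enumerated are our own illustrative choices, not details of the original implementation:

# Sketch of the running-time prediction used to prune parameter settings.
# alpha[S] stands for the measured average time to solve Equation 4 with S
# scenarios, and beta[E] for the average time to solve Equation 5 with E
# evaluations; the dictionaries themselves are hypothetical placeholders.

TIME_BUDGET = 25.0  # seconds; one TAC SCM day lasts 15 seconds


def predicted_runtime(E, S, P, alpha, beta):
    """Predicted running time P * (alpha(S) + beta(E))."""
    return P * (alpha[S] + beta[E])


def feasible_settings(E_values, S_values, P_values, alpha, beta):
    """Yield every (E, S, P) combination predicted to finish within the budget."""
    for E in E_values:
        for S in S_values:
            for P in P_values:
                if predicted_runtime(E, S, P, alpha, beta) < TIME_BUDGET:
                    yield (E, S, P)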
The results of our search for optimal parameter settings are reported in Table 1.
Each table was generated as follows. First, we sorted our data by running time. Then,
we traversed this sorted list; whenever the value of the solution improved, we output
the new parameter settings and the corresponding values. The first column in each table
lists time in seconds. For the bidding problem, the second column lists rewards; for
scheduling, the second and third columns list the statistical upper and lower bounds on
revenue (Equations 6 and 7). The revenue earned in the scheduling problem is precisely
the estimated lower bound. The last three columns list the settings of E, S, and P that
generated these values.
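The table-generation pass described above is, in effect, a trace of the time/quality frontier. A minimal sketch, assuming each run is recorded as a (time, value, E, S, P) tuple (a representation we are supplying purely for illustration):

def pareto_rows(runs):
    """Return table rows from runs given as (time, value, E, S, P) tuples:
    sort by time, then emit a row whenever the best value seen so far improves."""
    rows = []
    best = float("-inf")
    for time, value, E, S, P in sorted(runs, key=lambda r: r[0]):
        if value > best:
            best = value
            rows.append((time, value, E, S, P))
    return rows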
In the bidding problem, the best performing parameter settings are those with 64
scenarios and only 2 policies, increasing the number of evaluations as time permits.
It appears that the increase in solution quality obtained by increasing the number of
scenarios exceeded any increase that could have been obtained by increasing the number
of policies. Given that time increases only linearly with the number of scenarios in the
bidding problem, but that solution quality converges to the optimum exponentially fast,
this outcome is not surprising.
In the scheduling problem, the best performing parameter settings are those with 4-6
scenarios and multiple policies. When the number of scenarios was 7 or 8, the policy
generation process was much slower than it was when there were fewer scenarios. Indeed,
only one “algorithm” with 7 scenarios appears in the table, and no “algorithms”
with 8 scenarios appear at all. We conclude that in the scheduling problem, choosing
the best among multiple policies yields better solutions than increasing the number of
scenarios to a point at which only a few policies can be evaluated.
5 Related Work
The sample average approximation method has been applied to a variety of stochastic
optimization problems of varying degrees of difficulty. It appears to us, however,
that none were quite so hard as the problems attacked here. For example, [9] experiment
with the stochastic knapsack problem, with only 20 first-stage binary decision
variables, and [1] experiment with a stochastic optimization problem with 2 continuous
first-stage variables and 4 integer second-stage variables. In our formulation of the
bidding problem, there are 70,400 first-stage variables and 201,216 second-stage variables;
in our formulation of the scheduling problem, there are 16 first-stage integer variables
and 320 second-stage binary variables. Moreover, nested inside each of our problems
are NP-hard second-stage decision problems.
To our knowledge, Algorithm 1 is the best-known method for solving stochastic
optimization problems. Thus, rather than compare sample average approximation with other