Unfortunately, knowing the answer for 163 is of almost no use at all
when solving for 164. One solution is: 9, 54, 101.
If you tried solving these examples, you probably found yourself doing a lot of
trial-and-error and a lot of backtracking. To come up with an algorithm, we want
an organized way to go through the possible subsets. Is there a way to make the
problem smaller, so that we can apply divide and conquer? We essentially have two
parts to the input: The knapsack size K and the n items. It probably will not do us
much good to try and break the knapsack into pieces and solve the sub-pieces (since
we already saw that knowing the answer for a knapsack of size 163 did nothing to
help us solve the problem for a knapsack of size 164).
So, what can we say about solving the problem with or without the nth item?
This seems to lead to a way to break down the problem. If the nth item is not
needed for a solution (that is, if we can solve the problem with the first n − 1 items)
then we can also solve the problem when the nth item is available (we just ignore
it). On the other hand, if we do include the nth item as a member of the solution
subset, then we now would need to solve the problem with the first n − 1 items
and a knapsack of size K − k_n (since the nth item is taking up k_n space in the
knapsack).
To organize this process, we can define the problem in terms of two parameters:
the knapsack size K and the number of items n. Denote a given instance of the
problem as P(n, K). Now we can say that P(n, K) has a solution if and only if
there exists a solution for either P(n − 1, K) or P(n − 1, K − k_n). That is, we can
solve P(n, K) only if we can solve one of the subproblems where we use or do
not use the nth item. Of course, the ordering of the items is arbitrary. We just need
to give them some order to keep things straight.
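This recurrence translates almost directly into a recursive method. The following is a minimal Java sketch (an illustration, not the book's code); the class name KnapsackSketch, the method canFill, the array k[] of item sizes, and the sample sizes in main are all assumptions made here, with k[i] holding the size of item i:

public class KnapsackSketch {
    static int[] k;   // k[i] is the size of item i, for 1 <= i <= n (index 0 unused)

    // Returns true if some subset of items 1..n sums exactly to K, mirroring
    // the rule: P(n, K) is solvable iff P(n-1, K) or P(n-1, K - k_n) is.
    static boolean canFill(int n, int K) {
        if (K == 0) return true;    // knapsack exactly filled
        if (n == 0) return false;   // no items remain, but space is left over
        return (k[n] <= K && canFill(n - 1, K - k[n]))   // use item n
               || canFill(n - 1, K);                     // or leave item n out
    }

    public static void main(String[] args) {
        k = new int[] {0, 9, 27, 54, 73, 101};   // hypothetical item sizes
        System.out.println(canFill(5, 164));     // true: 9 + 54 + 101 = 164
    }
}

The call canFill(n, K) answers whether P(n, K) has a solution, but it recomputes the same subproblems many times, which is exactly the cost issue discussed next.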
Continuing this idea, to solve any subproblem of size n − 1, we need only to
solve two subproblems of size n − 2. And so on, until we are down to only one
item that either fills the knapsack or not. This naturally leads to a cost expressed
by the recurrence relation T(n) = 2T(n − 1) + c = Θ(2^n). That can be pretty
expensive!
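One way to see the Θ(2^n) bound is to expand the recurrence: T(n) = 2T(n − 1) + c = 4T(n − 2) + 3c = 8T(n − 3) + 7c = ... = 2^(n−1) T(1) + (2^(n−1) − 1)c, which grows as Θ(2^n).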
But... we should quickly realize that there are only n(K + 1) subproblems
to solve! Clearly, there is the possibility that many subproblems are being solved
repeatedly. This is a natural opportunity to apply dynamic programming. We sim-
ply build an array of size n × (K + 1) to contain the solutions for all subproblems
P(i, k), 1 ≤ i ≤ n, 0 ≤ k ≤ K.
There are two approaches to actually solving the problem. One is to start with
our problem of size P(n, K) and make recursive calls to solve the subproblems,
each time checking the array to see if a subproblem has been solved, and filling
in the corresponding cell in the array whenever we get a new subproblem solution.
The other is to start filling the array for row 1 (which records, for each knapsack size, whether the first item alone provides a solution) and then to fill each succeeding row from the one before it, until row n answers the original problem.
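As a hedged sketch of the first (top-down) approach, the method below extends the earlier hypothetical KnapsackSketch class with a cache holding one entry per subproblem P(i, k); the extra row and column simply preserve the 1-based item indexing:

    static byte[][] cache;   // cache[i][j]: 0 = unknown, 1 = P(i, j) solvable, 2 = not solvable

    // Memoized variant of canFill: consult the table before recursing and
    // fill in the corresponding cell whenever a new subproblem is solved.
    static boolean canFillMemo(int n, int K) {
        if (K == 0) return true;
        if (n == 0) return false;
        if (cache[n][K] != 0) return cache[n][K] == 1;   // already solved
        boolean ok = (k[n] <= K && canFillMemo(n - 1, K - k[n]))
                     || canFillMemo(n - 1, K);
        cache[n][K] = (byte) (ok ? 1 : 2);               // record the result
        return ok;
    }

Before the first call the table is allocated with cache = new byte[n + 1][K + 1]; Java's default value of 0 marks every cell as unknown. With the cache in place, each of the n(K + 1) subproblems is solved at most once.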