Algorithm 8.4 Branch-and-bound method for estimating argmax_{i ≤ m} ⟨v_i, w⟩
Input: refinement tree T (obtained from recursive k-means clustering), relaxation parameter h ∈ [0, 1]
Output: list of (approximate) maximizers and the corresponding value
 1: L := {root(T)}                        ▹ initialize the list of candidate nodes with the root of the refinement tree
 2: repeat
 3:   t := select_node(L)                 ▹ select the node to be branched
 4:   t_1, ..., t_k := children(t)
 5:   L := L \ {t}
 6:   for i = 1, ..., k do
 7:     if u_h(t_i) ≥ max_{s ∈ L} l_h(s) then
 8:       L := L ∪ {t_i}
 9:       if l_h(t_i) ≥ max_{s ∈ L} l_h(s) then
10:         for s ∈ L do
11:           if u_h(s) < l_h(t_i) then
12:             L := L \ {s}              ▹ discard node if necessary
13:           end if
14:         end for
15:       end if
16:     end if
17:   end for
18: until max_{s ∈ L} l_h(s) = max_{s ∈ L} u_h(s) ∧ L contains a leaf node of T
19: return L, max_{s ∈ L} l_h(s)
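To make the control flow concrete, the following is a minimal sketch of Algorithm 8.4 in Python. Several ingredients are not specified in this excerpt and are therefore assumptions: the refinement tree is built by a simple median split along the widest coordinate (standing in for recursive k-means), select_node picks the branchable node with the largest upper bound (best-first), the lower bound l_h(t) is the score ⟨rep_t, w⟩ of the cluster member nearest the centroid, and the relaxed upper bound interpolates between l_h(t) and the Cauchy–Schwarz bound ⟨c_t, w⟩ + r_t‖w‖, so that h = 0 collapses to the greedy heuristic and h = 1 is exact.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class Node:
    """A refinement-tree node covering a subset of the candidate vectors."""
    def __init__(self, vectors, indices):
        pts = [vectors[i] for i in indices]
        self.indices = indices
        self.centroid = [sum(c) / len(pts) for c in zip(*pts)]
        self.radius = max(dist(p, self.centroid) for p in pts)
        # representative: the actual member closest to the centroid
        self.rep_index = min(indices, key=lambda i: dist(vectors[i], self.centroid))
        self.rep = vectors[self.rep_index]
        self.children = []

def build_tree(vectors, indices=None):
    """Binary refinement tree; a median split along the widest coordinate
    stands in here for the recursive k-means clustering of the chapter."""
    if indices is None:
        indices = list(range(len(vectors)))
    node = Node(vectors, indices)
    if len(indices) > 1:
        d = max(range(len(vectors[0])),
                key=lambda j: max(vectors[i][j] for i in indices)
                            - min(vectors[i][j] for i in indices))
        order = sorted(indices, key=lambda i: vectors[i][d])
        mid = len(order) // 2
        node.children = [build_tree(vectors, order[:mid]),
                         build_tree(vectors, order[mid:])]
    return node

def bnb_argmax(vectors, w, h):
    """Algorithm 8.4 with assumed bounds: l_h(t) = <rep_t, w> and
    u_h(t) = l_h(t) + h * (<c_t, w> + r_t * ||w|| - l_h(t))."""
    w_norm = math.sqrt(dot(w, w))
    def l(t):
        return dot(t.rep, w)
    def u(t):
        return l(t) + h * (dot(t.centroid, w) + t.radius * w_norm - l(t))
    L = [build_tree(vectors)]                     # line 1: start at the root
    while True:
        max_l = max(l(s) for s in L)
        if max_l == max(u(s) for s in L) and any(not s.children for s in L):
            break                                 # line 18: bounds met, leaf present
        branchable = [s for s in L if s.children]
        if not branchable:
            break
        t = max(branchable, key=u)                # line 3: best-first select_node
        L.remove(t)                               # line 5
        for ti in t.children:                     # lines 6-17
            if not L or u(ti) >= max(l(s) for s in L):   # line 7
                L.append(ti)                             # line 8
                if l(ti) >= max(l(s) for s in L):        # line 9
                    L = [s for s in L if u(s) >= l(ti)]  # lines 10-14: discard
    best = max(l(s) for s in L)
    return sorted({s.rep_index for s in L if l(s) == best}), best

vectors = [[1.0, 0.0], [0.0, 1.0], [2.0, 1.0], [-1.0, 3.0], [0.5, 0.5]]
w = [1.0, 2.0]
idxs, val = bnb_argmax(vectors, w, 1.0)   # h = 1: exact under these bounds
```

With h = 1 the pruning test only discards nodes whose exact upper bound lies below a certified lower bound, so no maximizer is ever lost; smaller h trades this guarantee for fewer branched nodes.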
If h is taken to be zero, this procedure reduces to the heuristic method outlined at the beginning of this section, whereas h = 1 gives rise to an exact, though far less efficient, procedure. To strike a satisfactory compromise between accuracy and efficiency, the parameter h must be tuned empirically to the nature of the inputs under consideration.
Since it would exceed the scope of this chapter, we leave a final assessment of the effectiveness of the procedure devised above to future research. In first applications, with h set to about 0.95, it proved to be very effective.
8.7 Summary
We have seen that matrix factorization is a potentially valuable instrument of approximation for devising recommendation engines. It offers several advantages. It may reduce the complexity of data representation (with respect to models of the environment or the cost function) and thus render practical use of the data feasible in the first place. Moreover, it allows for a