Table 9.3 Comparison of prediction rates (absolute): various SVDs with variable rank

    Approach             k (or t)     p_1
    Mode 2               10           26
    Mode 3/projection    10/10        48
    Mode 3/HOSVD         10/10/10     96
Evidently, the method works. Moreover, the result is somewhat better than that of the two-dimensional case in Table 9.1, although the improvement is so far modest.
We shall now evaluate the complete HOSVD (Algorithm 9.2). This is consistent with the projection procedure just described but additionally includes the frontal mode, for which we employ the somewhat awkward online procedure of Algorithm 9.2.
To do so, after each product view we must carry out steps 2-7 of Algorithm 9.2 for the frontal mode d, leading to an update of the slice B; this in particular includes the incremental update step 5, after which the updated matrix U is deleted again. Then we apply the projection procedure (9.5) to the thus updated slice B^(d), which corresponds to steps 8-10. U is not saved until the session terminates; hence, the actual "learning" does not take place until the end of the session.
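To make the flow of this online procedure more tangible, here is a minimal sketch in Python/NumPy. It is not the book's implementation: the mode-1 and mode-2 factors U1 and U2 are assumed to come from an offline factorization, the per-view update of the slice is reduced to a hypothetical count increment, and the incremental update of the frontal-mode factor (step 5) is omitted; only the projection in the spirit of (9.5) is spelled out.

import numpy as np

def project_slice(B, U1, U2):
    # Projection in the spirit of (9.5): map the raw session slice onto the
    # dominant mode-1/mode-2 subspaces spanned by U1 and U2.
    return U1 @ (U1.T @ B @ U2) @ U2.T

def run_session(views, U1, U2):
    # views: sequence of (i, j) index pairs; a hypothetical stand-in for the
    # per-view update of the current session's frontal slice B.
    B = np.zeros((U1.shape[0], U2.shape[0]))
    B_hat = np.zeros_like(B)
    for i, j in views:
        B[i, j] += 1.0                    # update of the slice after a view
        B_hat = project_slice(B, U1, U2)  # corresponds to steps 8-10
        # B_hat holds the smoothed predictions for the current session;
        # step 5 (incremental update of U) is abstracted away here.
    return B_hat

A session is then replayed as run_session([(0, 3), (2, 1)], U1, U2) with factor matrices of matching row dimensions.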
The prediction rate of the complete HOSVD, along with those of the foregoing procedures, is summarized in Table 9.3. For the HOSVD we need three ranks, one for each of the dimensions of the masters, the variations, and the sessions. Unfortunately, owing to the high computational complexity of Algorithm 9.2, the comparison is feasible for low ranks only; we use 10 for each dimension.
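For reference, a truncated HOSVD of a 3-way tensor can be sketched in a few lines of NumPy. This is the generic textbook construction (leading left singular vectors of each mode unfolding, core tensor via mode products), not the incremental Algorithm 9.2 itself; the names unfold and hosvd are ours.

import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: the chosen axis becomes the rows of a matrix.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    # Per mode, keep the k leading left singular vectors of the unfolding;
    # the core tensor is T multiplied by U^T in every mode.
    Us = []
    for mode, k in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Us.append(U[:, :k])
    core = T
    for mode, U in enumerate(Us):
        core = np.moveaxis(np.tensordot(core, U, axes=([mode], [0])), -1, mode)
    return core, Us

With ranks = (10, 10, 10), as in the comparison above, hosvd returns the core tensor and the three factor matrices.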
The result suggests that the complete HOSVD works well in principle and, moreover, yields better results than the projection method. This statement comes with a caveat, however: strictly speaking, we would have to carry out the entire comparison of prediction rates over varying ranks.
Example 9.5 We now consider the transition probabilities as a function of the sessions. The first dimension is thus the product under consideration (s), the second is the destination of the transition (s'), and the third is the session (u) itself. Hence, the first two dimensions span the transition probabilities for each session. A new slice therefore represents the matrix P_u = (p_{u,ss'})_{s,s'∈S} of the transition probabilities that have occurred in the session so far. By applying the factorization, we obtain the matrix P_u of all estimated transition probabilities for the current session.
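As an illustration of Example 9.5 (reusing the hosvd sketch from above; the Poisson counts and the toy sizes are invented test data, not the book's), one can assemble the tensor of per-session transition probabilities and read the estimated matrix of a session off the rank-truncated reconstruction:

import numpy as np

rng = np.random.default_rng(0)
n_products, n_sessions = 20, 50  # toy sizes; |S| = 20

# Axis 0: product s, axis 1: destination s', axis 2: session u.
counts = rng.poisson(0.3, size=(n_products, n_products, n_sessions)).astype(float)
row_sums = counts.sum(axis=1, keepdims=True)
T = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Truncated factorization and reconstruction; the slice at session u is the
# estimated transition matrix P_u of that session.
core, Us = hosvd(T, ranks=(10, 10, 10))
T_hat = core
for mode, U in enumerate(Us):
    T_hat = np.moveaxis(np.tensordot(T_hat, U, axes=([mode], [1])), -1, mode)
P_u = T_hat[:, :, 0]  # estimated transition matrix of session u = 0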
Example 9.6 Finally, we may also consider the transition probabilities as a function of the recommendation a. This corresponds to the approach of Example 9.5 with the recommendation a in lieu of the session u. Here, however, all dimensions have the same cardinality, and the third dimension does not grow dynamically. Therefore, the adaptive approach makes no sense in terms of content (though possibly as a technology for offline learning). We thus factorize