Table 8.4 Comparison of prediction rates of an LVP with variable rank with respect to the directly succeeding product and the remainder of the session

          Immediate           Remainder of the session
  k       p_1      p_3        p_1      p_3
  1       0.001    0.003      0.47     0.76
  5       0.55     2.05       1.28     2.50
  50      0.61     2.07       1.48     2.98
  100     0.60     1.89       1.40     2.62
  200     0.67     1.51       1.37     2.20
  500     0.64     1.32       1.34     1.79
A further result is that the factorization is at least a useful tool for generating new recommendations. Indeed, the better results of three recommendations as compared to one recommendation simply stem from the low-rank approximation generating more recommendations. Moreover, the SVD turns out to be the best procedure with respect not only to approximation error but also to quality of prediction. The expectation that the nonnegativity inherent to NMF gives rise to better prediction results turns out to be false. The deterioration of the approximation quality of NMF with increasing rank (at a constant number of iterations) also reveals that the number of iterations of ALS must increase with growing problem size, which results in bad scaling properties of the method, as the iteration steps themselves are computationally expensive. At the end of the day, LVP turns out to be the best choice, since it exhibits considerably better scaling properties at an only slightly larger approximation error.
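The rank dependence of the SVD's approximation error can be illustrated with a small numpy sketch. By the Eckart-Young theorem, the truncated SVD is the best rank-k approximation in the Frobenius norm, and its error decreases monotonically with k. The matrix below is random toy data standing in for the 0/1 reward matrix, not the session data of the example; its dimensions and sparsity are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 0/1 reward matrix (sessions x products) standing in for A;
# size and click density are arbitrary assumptions.
A = (rng.random((60, 40)) < 0.1).astype(float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def svd_error(k):
    # Frobenius error of the best rank-k approximation (Eckart-Young)
    A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]
    return np.linalg.norm(A - A_k)

# The error is non-increasing in the rank k and vanishes at full rank.
errors = [svd_error(k) for k in (1, 5, 10, 20)]
```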
Example 8.7 Next, we would like to return to computing recommendations according to the profile-based approach from this chapter, in particular from Example 8.1, hence based upon all transactions of the session before the prediction. We shall, however, again restrict ourselves to the prediction of product transitions; consequently, all clicks are endowed with the reward 1, the remainder with 0. Thus, the training data act as the matrix A. We use the Lanczos vector projection, which has turned out to be very efficient in Example 8.6. Thus, we use Algorithm 8.2 to compute the Lanczos vectors Q_k from A and the left-projection approximation (8.25) to compute recommendations for the test data set.
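Since Algorithm 8.2 and formula (8.25) are not reproduced here, the following is only a minimal sketch of the procedure, under two assumptions: that the Lanczos vectors Q_k form an orthonormal basis of a Krylov subspace of A A^T, and that the left-projection approximation takes the form A_k = Q_k Q_k^T A. The random starting vector, the full reorthogonalization, and all function names are our own choices; the data are toy values.

```python
import numpy as np

def lanczos_vectors(A, k, seed=0):
    """Sketch of a symmetric Lanczos iteration on A @ A.T with full
    reorthogonalization, returning an m x k orthonormal basis Q_k.
    (An assumption about the essential structure of Algorithm 8.2,
    which is not reproduced in the text above.)"""
    m = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(m)
    Q = [q / np.linalg.norm(q)]
    for _ in range(k - 1):
        w = A @ (A.T @ Q[-1])
        for qi in Q:                  # full reorthogonalization
            w -= (w @ qi) * qi
        beta = np.linalg.norm(w)
        if beta < 1e-12:              # Krylov subspace exhausted
            break
        Q.append(w / beta)
    return np.column_stack(Q)

# Toy 0/1 reward matrix standing in for the training data A
rng = np.random.default_rng(1)
A = (rng.random((50, 30)) < 0.15).astype(float)

Q = lanczos_vectors(A, 10)
A_k = Q @ (Q.T @ A)                   # our reading of (8.25)

# Recommendations for session i: products with the largest entries of A_k[i]
top3 = np.argsort(A_k[0])[::-1][:3]
```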
We use the same data set as in Example 8.6. We again compute one or three recommendations, respectively, and evaluate their prediction rate. Here, we evaluate the recommendations with respect to the immediately following product, i.e., in analogy to Example 8.6, and, additionally, with respect to the entire remainder of the session, in analogy to Example 8.5. The result is displayed in Table 8.4.
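How rates such as p_1 and p_3 in Table 8.4 may be computed is sketched below. The exact metric definition, a hit rate of the actually clicked next product among the top-n recommendations, expressed in percent, is our reading of the text, and the data are toy values, not those of the example.

```python
def prediction_rate(recommendations, actual_next, top_n):
    """Percentage of test cases in which the actually clicked next product
    appears among the top_n recommended products (our reading of the
    rates p_1 and p_3; the metric definition is an assumption)."""
    hits = sum(1 for recs, nxt in zip(recommendations, actual_next)
               if nxt in recs[:top_n])
    return 100.0 * hits / len(actual_next)

# Toy data: ranked product lists per test transition and the true next product
recs = [[3, 7, 1], [2, 5, 9], [4, 8, 6]]
nxt = [7, 2, 0]

p1 = prediction_rate(recs, nxt, 1)    # hit only in the second case
p3 = prediction_rate(recs, nxt, 3)    # hits in the first two cases
```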
The prediction rates of direct product acceptance may be compared to those from Table 8.3. Although we see that the low-rank approximation works in principle (with an optimal rank of approximately 50), the prediction rates are so low that the approach turns out to be practically irrelevant. This is simply due to the fact that few sessions are sufficiently long for the prediction to work well. With respect to the entire remainder of the session, the results are naturally better, but still poor. The reason for the poor prediction rate as compared to Example 8.5 is that