Table 5.4 Prediction qualities for different prediction methods

| $k_{\min}$ | $n_s$ | Rate for $p^{\{\}}_{ss'}$ (1 rec) | Rate for $p^{\{\}}_{ss'}$ (3 recs) | Rate for $p_{ss'}$ (1 rec) | Rate for $p_{ss'}$ (3 recs) | Rate for $\bar p_{ss'}$ (1 rec) | Rate for $\bar p_{ss'}$ (3 recs) |
|---|---|---|---|---|---|---|---|
| 0 | 15,235 | 8.72 | 18.32 | 7.11 | 15.01 | 8.79 | 19.02 |
| 1 | 13,088 | 8.90 | 18.70 | 7.47 | 15.61 | 8.31 | 17.47 |
| 2 | 10,316 | 8.82 | 19.02 | 7.58 | 16.51 | 7.97 | 17.43 |
| 5 | 6,891 | 9.26 | 20.24 | 7.92 | 17.21 | 8.84 | 18.53 |
| 10 | 4,498 | 9.20 | 19.83 | 8.09 | 17.23 | 8.62 | 17.96 |
| 20 | 1,877 | 8.95 | 20.78 | 9.85 | 19.82 | 10.12 | 20.03 |
| 50 | 172 | 7.56 | 23.84 | 10.47 | 20.93 | 10.47 | 24.42 |
As we can see, the special treatment of multiple recommendations does not seem to have a great impact. Of course, the relation $\bar p^{\,a}_{ss'} > p^{\,a}_{ss'}$ holds, but the difference is relatively small.
Now we will compare our two Assumptions 5.1 and 5.2 with respect to their prediction quality for product views (clicks). For each recommendation-relevant product view, we first recommend the products s' having the highest unconditional probabilities $p_{ss'}$. We use Algorithm 4.1, but applied to all product transitions (instead of recommendations only). This corresponds to Assumption 5.1 and the P-Version. For Assumption 5.2 of the DP-Version, we secondly recommend the products with the highest probabilities $\bar p_{ss'}$ according to (5.3). Since we have multiple recommendations in the transaction data, we need the probabilities $\bar p_{ss'}$ instead of just $p^{\,a}_{ss'}$; their computation was done by Algorithm 5.2. In order to estimate the efficiency of our approach, we also include in the comparison the unconditional probabilities calculated by Algorithm 5.2, which we denote by $p^{\{\}}_{ss'}$ in order to avoid confusion with the unconditional probabilities $p_{ss'}$ of Assumption 5.1.
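The two versions thus differ only in which probability table is ranked when a recommendation is requested. The following minimal sketch illustrates this; the dictionary-based layout, the function name, and the product identifiers are our own illustrative assumptions, and the estimation of the probabilities themselves (Algorithms 4.1 and 5.2) is not reproduced here.

```python
from typing import Dict, List, Tuple

# Hypothetical containers for previously estimated probabilities:
#   p_uncond[(s, s2)] -> unconditional transition probability p_{ss'}    (Assumption 5.1, P-Version)
#   p_cond[(s, s2)]   -> conditional probability \bar p_{ss'} per (5.3)  (Assumption 5.2, DP-Version)

def top_n_products(probs: Dict[Tuple[str, str], float],
                   current_product: str,
                   n: int = 3) -> List[str]:
    """Return the n successor products s' with the highest probability from current_product."""
    candidates = [(s2, p) for (s, s2), p in probs.items() if s == current_product]
    candidates.sort(key=lambda item: item[1], reverse=True)
    return [s2 for s2, _ in candidates[:n]]

# P-Version recommendation:  top_n_products(p_uncond, s)
# DP-Version recommendation: top_n_products(p_cond, s)
```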
The comparison of the prediction methods is again carried out for different $k_{\min}$, by imposing the requirement that at least one of the recommendations must satisfy $k_{\min}$. The number of these valid product views is denoted by $n_s$. For $k_{\min} = 0$ we obtain all recommendation-relevant product views, $n_s = 15,235$. With increasing $k_{\min}$ this number decreases correspondingly. Furthermore, we test one and three recommendations. The result is given in Table 5.4.
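To make the evaluation protocol concrete, here is a sketch of how one entry of such a table could be computed. The data layout (a list of (current product, next product, k) triples, with k standing for the count against which the $k_{\min}$ requirement is checked) is a simplifying assumption of ours, as is attaching a single k to each view rather than checking all delivered recommendations; we also take the rate to be the percentage of valid views whose actually viewed successor appears among the top-1 or top-3 predictions, which is our reading of the prediction quality in Table 5.4.

```python
from typing import Dict, List, Tuple

def prediction_rate(views: List[Tuple[str, str, int]],
                    probs: Dict[Tuple[str, str], float],
                    k_min: int,
                    n_recs: int) -> Tuple[int, float]:
    """Return (n_s, hit rate in %) of the top-n_recs predictions derived from probs."""
    # keep only product views that satisfy the k_min requirement
    valid = [(s, s_next) for s, s_next, k in views if k >= k_min]
    n_s = len(valid)
    hits = 0
    for s, s_next in valid:
        # rank all successors of s by their estimated probability
        ranked = sorted((s2 for (s1, s2) in probs if s1 == s),
                        key=lambda s2: probs[(s, s2)],
                        reverse=True)
        # a hit: the product actually viewed next is among the n_recs predictions
        if s_next in ranked[:n_recs]:
            hits += 1
    return n_s, (100.0 * hits / n_s if n_s else 0.0)
```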
As we can see, $\bar p_{ss'}$ exhibits prediction rates comparable to those of $p_{ss'}$. At first sight this may look like a sad result. However, a deeper analysis leads to a more optimistic interpretation. First, we emphasize that our aim is not to make good predictions but to find good recommendations. This means that even if our model does not possess the highest prediction quality, as long as it is applicable in principle, the separation into unconditional and conditional probabilities and their correct treatment provides an increased return. We will see this impressively in the experiment of the next section, where the P-Version exhibits a slightly higher prediction quality than the DP-Version but nevertheless leads to a much lower return. Having said all this, we of course do not question the need for good predictions; they are integral to good recommendations.