Algorithm 2 Recommending articles based on similarity between user model and item features.

for all f ∈ a do
    if f matches a feature value in the user model um then
        countMatchingFeatures++        {increase count for matching feature}
    else
        countNotMatchingFeatures++     {increase count for not matching feature}
    end if
end for
if countMatchingFeatures ≥ thresholdPositive then
    Recommend Article
end if
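The matching loop above can be sketched in Python as follows. This is a minimal reading of the algorithm, not the authors' implementation; the representation of features as a set of strings and the names `user_model` and `threshold_positive` are assumptions.

```python
def recommend(article_features, user_model, threshold_positive):
    """Count article features that do (not) match the user model and
    recommend the article if enough features match.
    Assumed representation: features as sets of strings."""
    count_matching_features = 0
    count_not_matching_features = 0
    for f in article_features:           # for all f in a
        if f in user_model:              # feature value matches the user model
            count_matching_features += 1
        else:
            count_not_matching_features += 1
    return count_matching_features >= threshold_positive

# An article sharing at least one feature with the user model is recommended
# when the threshold is 1:
print(recommend({"color:black", "brand:acme"}, {"color:black", "size:m"}, 1))
```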
8.6 Evaluation of CBR
The CBR approach presented in the previous section is evaluated and compared using the described dataset. The main measure for comparison is Precision, more precisely the precision for the Not Returned class. The best recommender is thus the one with the best precision for items a user will keep. The decision to use Precision was made in cooperation with the company providing the dataset, since the prediction of which items a user will most likely keep has the biggest impact on revenue and profit. Precision is defined as the fraction of correctly predicted items (TP) over the number of all items the recommender predicted as bought, i.e., correct predictions (TP) plus incorrect predictions (FP). In our scenario, TP are all clothes which are bought and which the recommender predicted as likely to be bought. FP denotes the cases where the recommender predicted that the clothes are likely to be bought but the clothes were not bought.
Precision = TP / (TP + FP)    (8.1)
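As a quick numeric illustration of Eq. 8.1 (the counts below are invented for the example, not taken from the dataset):

```python
def precision(tp, fp):
    """Eq. 8.1: fraction of items predicted as bought that were actually bought."""
    return tp / (tp + fp)

# 80 items correctly predicted as bought, 20 predicted as bought but returned:
print(precision(80, 20))  # -> 0.8
```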
Optimizing and focusing on one measure, precision as described before, could imply that we miss other information about the data that another measure would have shown. Given our scenario, we have a two-class prediction problem: Not Returned or Returned. The measure described before only takes into account the prediction for class Not Returned. To not lose sight of the two-class problem, we also present results for the performance of predicting class Returned. The measure of choice for this is Accuracy. It also takes into account the correct predictions for the true negatives (TN). TN is here defined as items which are predicted as a probable return and which are returned. Accuracy is explained in Eq. 8.2. It is the fraction of all correctly predicted (TP and TN) instances versus all predictions made.
Accuracy = (TP + TN) / (P + N)    (8.2)
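Eq. 8.2 can be illustrated the same way; P and N denote the total numbers of actual positives and negatives, and the counts below are again invented for the example:

```python
def accuracy(tp, tn, p, n):
    """Eq. 8.2: fraction of all P positive and N negative instances
    that are predicted correctly (TP + TN)."""
    return (tp + tn) / (p + n)

# 80 of 100 purchases and 70 of 100 returns predicted correctly:
print(accuracy(80, 70, 100, 100))  # -> 0.75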