The running time of the more sophisticated methods, including ALS- and SGD-based collaborative filtering, depends on the stopping criterion. These methods stop either once the optimization objective reaches a threshold or after a predefined number of iterations.
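The following is a minimal sketch, not the chapter's implementation, of an SGD-based matrix factorization combining both stopping criteria: a fixed iteration budget and a convergence test on the objective. The hyperparameter values and the improvement-based convergence test are illustrative assumptions.

```python
# Minimal SGD matrix factorization illustrating the two stopping criteria
# named above. Hyperparameters (rank, lr, reg, max_epochs, tol) are
# illustrative assumptions, not values from the chapter.
import numpy as np

def sgd_factorize(interactions, n_users, n_items,
                  rank=10, lr=0.01, reg=0.02, max_epochs=100, tol=1e-4):
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, rank))   # user factors
    Q = rng.normal(scale=0.1, size=(n_items, rank))   # item factors
    prev_loss = float("inf")
    for epoch in range(max_epochs):                   # criterion 1: iteration budget
        for u, i, r in interactions:                  # one pass over (user, item, rating)
            p_u = P[u].copy()                         # keep old factors for the update
            err = r - p_u @ Q[i]
            P[u] += lr * (err * Q[i] - reg * p_u)
            Q[i] += lr * (err * p_u - reg * Q[i])
        loss = sum((r - P[u] @ Q[i]) ** 2 for u, i, r in interactions)
        if prev_loss - loss < tol:                    # criterion 2: objective converged
            break
        prev_loss = loss
    return P, Q
```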
Besides algorithmic optimization, a selection of frameworks enables systems to parallelize their computation, thus achieving considerable speed-ups. These frameworks include Hadoop,8 Spark,9 and Storm,10 amongst others. Additionally, news recommender system operators may consider pre-computing recommendations as soon as possible. For instance, they may estimate the probability that a novel article will become popular; a sketch of such an estimate follows. If the probability estimate is sufficiently high, the system could start recommending the article more often.
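As a hedged illustration of this idea, a system could track each article's early impressions and clicks and trigger pre-computation once a smoothed popularity estimate clears a threshold. The estimator, the prior counts, and the threshold below are our assumptions, not part of ORP.

```python
# Hedged sketch: estimate the probability that a fresh article becomes
# popular from its early impression and click counts. The smoothing
# prior and the threshold are illustrative assumptions.

def popularity_estimate(clicks, impressions,
                        prior_clicks=1, prior_impressions=20):
    """Smoothed click probability, robust when an article has little data."""
    return (clicks + prior_clicks) / (impressions + prior_impressions)

def should_precompute(clicks, impressions, threshold=0.05):
    """Flag an article for pre-computed recommendation lists."""
    return popularity_estimate(clicks, impressions) >= threshold

# Example: 12 clicks on 150 early impressions yields roughly 0.076,
# which clears the (assumed) threshold of 0.05.
print(should_precompute(12, 150))  # True
```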
6.6 Evaluation Criteria
This section treats aspects of news recommender systems' evaluation protocols. Section 6.2 discussed the aspects we need to consider when evaluating news recommender systems. First, we have to define quality criteria. These criteria relate to the use case introduced in Sect. 6.3. We aim to assess how visitors, advertisers, and operators benefit from having the recommender system in place. ORP does not reveal information about earnings or about users converted into customers. Hence, we rely on the interactions which we observe. These interactions represent implicit preference indicators. In contrast, users may explicitly rate items on a pre-defined scale. Lacking such graded feedback, we dismiss error-based metrics such as RMSE and MAE, and we likewise disregard ranking-based criteria including normalized discounted cumulative gain (nDCG) and mean reciprocal rank (MRR). Measures used in information retrieval dispense with numerical preferences. Recall and precision require knowing whether or not a certain item is relevant to a user. Our observations fail to provide such information for all (user, item) pairs. We may infer relevance when users select news articles. Still, articles remain ambiguous until we observe interactions with users. Have users failed to see the article? Have users seen the article and decided not to read it? We can evaluate search engines because each document's relevance to a query is predefined. Unfortunately, we have no analogous concept for recommender systems, owing to individual users' varying preferences. We cannot tell whether a specific news article interests a user unless the user reads it. Thus, we adhere to the notion of the click-through rate (CTR). The CTR relates the number of clicks to the number of requests which the recommender system received.
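Formally, writing $c$ for the number of observed clicks and $n$ for the number of recommendation requests served (this notation is assumed here for clarity, not taken from the chapter):

$$\mathrm{CTR} = \frac{c}{n}$$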
ORP supports evaluating recommendation algorithms by means of live interactions with users. Additionally, we may record such interactions. Subsequently, we can use these records to replay the stream of interactions. We can apply various recommendation methods and assess their quality, since the future click events have already been recorded; a sketch of this replay procedure follows.
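The sketch below shows one plausible replay loop. The event schema and the recommender interface (recommend, update) are assumptions for illustration, not the ORP API.

```python
# Hedged sketch of replay-based offline evaluation: stream recorded events
# in time order, ask the candidate recommender for a suggestion at each
# request, and count a click whenever the suggestion matches the article
# the user actually clicked. Event schema and recommender interface are
# illustrative assumptions.

def replay_ctr(events, recommender):
    clicks = requests = 0
    for event in events:                        # events sorted by timestamp
        if event["type"] == "request":
            requests += 1
            suggestion = recommender.recommend(event["user"])
            if suggestion == event["clicked_item"]:   # recorded future click
                clicks += 1
        recommender.update(event)               # let the model learn online
    return clicks / requests if requests else 0.0
```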
8 http://hadoop.apache.org/.
9 https://spark.apache.org/.
10 https://storm.incubator.apache.org/.