Algorithm 3 Random Recommender
Input: set of items I, number of items to recommend k
Output: list of k items to recommend
1: while |recommendations| < k do
2:   i ← rand(I)
3:   if i ∉ recommendations then
4:     recommendations ← recommendations ∪ {i}
5:   end if
6: end while
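A minimal Python sketch of Algorithm 3; the function name and the representation of items as an arbitrary sequence are our own choices:

```python
import random

def random_recommender(items, k):
    """Recommend k distinct items drawn uniformly at random (Algorithm 3)."""
    item_pool = list(items)
    recommendations = set()
    while len(recommendations) < k:        # while |recommendations| < k
        i = random.choice(item_pool)       # i <- rand(I)
        if i not in recommendations:       # only add unseen items
            recommendations.add(i)
    return list(recommendations)
```

In practice the rejection loop can be replaced by `random.sample(item_pool, k)`, which draws k distinct items directly.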
6.5.2 Collaborative Filtering
Collaborative filtering (CF) adopts the notion of taste-similarity continuity. In
other words, if two users exhibited similar tastes in the past, collaborative filter-
ing assumes that they will continue to prefer similar items. Previous research
provides an abundance of algorithms for collaborative filtering. Adomavicius and
Tuzhilin [1] distinguish memory-based and model-based collaborative filtering
algorithms. Memory-based CF uses all available data for recommendation. In con-
trast, model-based CF generalizes patterns apparent in interactions and provides
recommendations based on these models. Matrix factorization techniques have
become established as among the most successful model-based CF methods.
Algorithm 4 illustrates memory-based recommendation from the user perspective.
The method requires the sets of users and items, a similarity function, the number of
neighbors to consider, and the length of the recommendation lists to produce.
The algorithm first iterates over the set of users to determine whose taste resembles the
target user's taste. Subsequently, the method predicts the preferences for each item the
target user is unaware of. The algorithm returns the k items with the highest scores.
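The steps above can be sketched in Python as follows. The similarity-weighted average used for prediction is one common instantiation, not necessarily the exact formula of Algorithm 4, and all names are our own:

```python
def user_based_recommend(target, users, items, ratings, sim, n, k):
    """Memory-based CF from the user perspective (sketch of Algorithm 4).

    ratings: dict mapping (user, item) -> rating
    sim:     similarity function over two users
    n:       number of neighbors, k: length of the result list
    """
    # Step 1: find the n users whose taste most resembles the target's.
    neighbors = sorted(
        (u for u in users if u != target),
        key=lambda u: sim(target, u),
        reverse=True,
    )[:n]
    # Step 2: predict a score for every item the target is unaware of,
    # as a similarity-weighted average of the neighbors' ratings.
    scores = {}
    for item in items:
        if (target, item) in ratings:
            continue
        raters = [u for u in neighbors if (u, item) in ratings]
        den = sum(abs(sim(target, u)) for u in raters)
        if den > 0:
            num = sum(sim(target, u) * ratings[(u, item)] for u in raters)
            scores[item] = num / den
    # Step 3: return the k items with the highest predicted scores.
    return sorted(scores, key=scores.get, reverse=True)[:k]
```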
Algorithm 5 shows memory-based recommendation from the item perspective.
In contrast to Algorithm 4, the method computes similarities between items in terms
of their interactions. This is advantageous in cases where |I| ≪ |U|, since we skip
the computationally more expensive loops over the larger user dimension.
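The item perspective can be sketched analogously; again, the weighted average is one common instantiation rather than the literal body of Algorithm 5, and the names are ours:

```python
def item_based_recommend(target, users, items, ratings, sim, n, k):
    """Memory-based CF from the item perspective (sketch of Algorithm 5).

    sim compares two items in terms of the users who interacted with them.
    """
    rated = [j for j in items if (target, j) in ratings]
    scores = {}
    for i in items:
        if (target, i) in ratings:
            continue  # score only items the target is unaware of
        # The n already-rated items most similar to i form its neighborhood.
        neighbors = sorted(rated, key=lambda j: sim(i, j), reverse=True)[:n]
        den = sum(abs(sim(i, j)) for j in neighbors)
        if den > 0:
            num = sum(sim(i, j) * ratings[(target, j)] for j in neighbors)
            scores[i] = num / den
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Note that the loops here run over items and the target user's rated items only, which is exactly why this variant pays off when |I| ≪ |U|.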
Matrix factorization has become established as one of the most successful types of
collaborative filtering. These algorithms reduce the dimensionality of an M × N
interaction matrix R to a lower-rank approximation. Projecting user and item profiles
into this lower-dimensional space enables recommender systems to compute similarities
between them. We present two methods to learn these low-rank approximations.
Algorithm 6 learns low-rank approximations with an alternating least squares (ALS)
procedure. To this end, we randomly initialize two factor matrices, whose dimensions
follow the number of users, the number of items, and the desired number of latent
factors. Subsequently, the algorithm iteratively optimizes a target function that
measures how closely the predicted interactions match the observed interactions.
Root mean squared error (RMSE) represents a popular choice for such a function.
The algorithm keeps one factor matrix fixed while it optimizes the other, alternating
between the two until convergence.
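A compact NumPy sketch of such an alternating least squares procedure, assuming a fixed number of sweeps and a small (hypothetical) regularization weight `lam` for numerical stability; the function names are our own:

```python
import numpy as np

def als(R, mask, f, steps=100, lam=1e-3):
    """Factorize an M x N interaction matrix R into U (M x f) and V (N x f).

    mask marks observed entries of R; f is the number of latent factors;
    lam is an assumed regularization weight, not part of the text above.
    """
    M, N = R.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((M, f))   # user factor matrix
    V = 0.1 * rng.standard_normal((N, f))   # item factor matrix
    reg = lam * np.eye(f)
    for _ in range(steps):
        # Keep V fixed: each user row is an independent least squares problem.
        for m in range(M):
            obs = mask[m] > 0
            U[m] = np.linalg.solve(V[obs].T @ V[obs] + reg, V[obs].T @ R[m, obs])
        # Keep U fixed: same for each item row.
        for n in range(N):
            obs = mask[:, n] > 0
            V[n] = np.linalg.solve(U[obs].T @ U[obs] + reg, U[obs].T @ R[obs, n])
    return U, V

def rmse(R, mask, U, V):
    """Root mean squared error over the observed entries."""
    err = (R - U @ V.T)[mask > 0]
    return float(np.sqrt(np.mean(err ** 2)))
```

With V fixed, the objective is quadratic in U (and vice versa), so each half-step has a closed-form solution; alternating the two drives the RMSE on observed entries down monotonically.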
 