Beyond its simplicity, the left-projection approximation has another advantage. Since the orthoprojector $U_k U_k^T$ is multiplied from the left, due to
$$\left[\, a_k ,\ A'_k \,\right] = A_k = U_k U_k^T A = U_k U_k^T \left[\, a ,\ A' \,\right] = \left[\, U_k U_k^T a ,\ U_k U_k^T A' \,\right],$$
the property
$$a_k = U_k U_k^T a$$
holds. This means that for the calculation of the updated session vector $a_k$, only the current session vector $a$ is required. Thus, we can use this approach for arbitrary session vectors without recomputing the left singular vectors $U_k$ for each new $a$. This enables us to use an existing rank-$k$ SVD without updating, i.e., without learning, for the prediction of new sessions.
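As a small numerical illustration (a minimal NumPy sketch on synthetic data, under the assumption that the session matrix $A$ holds the sessions as its columns, as in the partition above), the projection of a single session column indeed coincides with the corresponding column of the left-projected matrix $A_k$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((50, 200))      # toy reward matrix: 50 products (rows) x 200 sessions (columns)

k = 10
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k = U[:, :k]                 # left singular vectors of the rank-k truncated SVD

A_k = U_k @ (U_k.T @ A)        # left-projection approximation of the whole session matrix
a = A[:, 0]                    # a single session vector (the first column)
a_k = U_k @ (U_k.T @ a)        # the same projection applied to that session alone

# both computations agree, so no refactorization per session is needed
assert np.allclose(a_k, A_k[:, 0])
```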
Therefore, we now generally want to apply the left-projection approximation to the SVD-based calculation of recommendations. We get
$$a_k = U_k U_k^T a \qquad (8.21)$$
and thus recommend the highest-rewarded products of the session vector $a_k$. It is so easy!
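A possible online recommendation step based on (8.21) could look as follows; this is only a sketch, and excluding products already contained in the session is an additional assumption not made in the text above:

```python
import numpy as np

def recommend(U_k: np.ndarray, a: np.ndarray, n: int = 5) -> np.ndarray:
    """Top-n products of a_k = U_k U_k^T a, per (8.21)."""
    profile = U_k.T @ a             # map the session into the k-dimensional feature space
    a_k = U_k @ profile             # map the profile back into the product space
    ranked = np.argsort(-a_k)       # products ordered by descending reward
    seen = set(np.flatnonzero(a))   # assumption: skip products already in the session
    return np.array([p for p in ranked if p not in seen][:n])
```

Note that only the matrix $U_k$ from the precomputed factorization enters this computation.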
We will give a descriptive interpretation: the transposed left singular vector matrix $U_k^T$ provides a mapping into the $k$-dimensional feature space, resulting in a profile vector of our session. Then it is mapped by $U_k$ back into the product space.
For the special case of a full-rank SVD, i.e., $k = \operatorname{rank} A$, the left singular vector matrix $U_k = U$ is unitary, and thus we get again
$$a_k = U U^T a = a,$$
which, of course, would be of little help. The essence behind the projection approach
is that we map our session vector by a low-rank approximation onto its “generalized
profile” and then assign “characteristic rewards” to this profile. Hence, this procedure corresponds to the previous one but is much easier.
In a nutshell, (8.21) allows the direct computation of recommendations for arbitrary sessions. It is noteworthy that here the matrices of the singular values $S_k$ and right singular vectors $V_k$ are not required at all! This makes our approach in every respect more computationally efficient than the truncated SVD.
Finally, we mention that the truncated SVD also gives rise to a nice factorized version of the item-to-item collaborative filtering described in Sect. 8.2. Thus, we are looking for a factorized version $S_k$ of the similarity matrix $S = A A^T$ over all products.
Obviously, it is obtained by $S_k = A_k A_k^T$, where $A_k$ is the rank-$k$ SVD $A_k = U_k S_k V_k^T$ normalized along all of its columns. Introducing the factor matrix $L_k := U_k S_k$, we can express the inner products $A_k A_k^T$ through $L_k$ as
$$A_k A_k^T = U_k S_k V_k^T V_k S_k U_k^T = U_k S_k S_k U_k^T = L_k L_k^T.$$
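As a brief sketch of this factorization (again NumPy with synthetic data; the column normalization mentioned above is omitted here for simplicity), the factor matrix $L_k$ reproduces $A_k A_k^T$ without ever forming the product-by-product similarity matrix explicitly:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((50, 200))                    # 50 products x 200 sessions (toy data)

k = 10
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, S_k, V_k = U[:, :k], np.diag(s[:k]), Vt[:k].T

L_k = U_k @ S_k                              # factor matrix L_k = U_k S_k, one row per product
A_k = U_k @ S_k @ V_k.T                      # rank-k SVD of A

# the factorized similarity A_k A_k^T collapses to L_k L_k^T because V_k^T V_k = I
assert np.allclose(L_k @ L_k.T, A_k @ A_k.T)

# item-to-item similarity of products i and j from the factors alone
i, j = 3, 7
sim_ij = L_k[i] @ L_k[j]
```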