Experience has shown that update algorithm 8.1 can in general also be applied successfully to truncated SVDs of any rank r, including small ones. However, unlike for the truncated full-rank SVD presented there, the resulting SVD may not be optimal, i.e., it may lead to decompositions different from (8.14).
8.3.3 Computing Recommendations
There remains the question of how to compute recommendations via the truncated singular value decomposition (8.14). Let A_k = U_k S_k V_k^T be the rank-k SVD of the previous session data and a ∈ R^m be the current session vector, i.e., the vector containing the rewards of the products of the current session.
One way is to proceed as follows. In each step of the session, we add the current session vector a ∈ R^m to the (fixed) SVD A_k = U_k S_k V_k^T and execute one incremental step of the singular value decomposition as described in the previous section, i.e.,

A := [A_k, a] = [U_k S_k V_k^T, a] ≈ A'_k =: [Ā_k, a_k],   (8.17)

where A'_k is the rank-k approximation of A delivered by the incremental step and a_k is its last column. Now we use the updated session vector a_k ∈ R^m and recommend the products
of the row numbers with the largest entries of a_k, provided they are not already included in a. After the termination of the current session, we set A_k := [A_k, a_k].
Then we proceed in the same way with the next session vector.
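The step above can be sketched in NumPy. A plain rank-k SVD recomputation stands in for the incremental update of algorithm 8.1 (which is not reproduced here); the function name and the toy data are illustrative assumptions:

```python
import numpy as np

def recommend_from_session(A_k, a, k, n_rec=3):
    """One step of the procedure described above. A plain rank-k SVD
    recomputation stands in for the incremental update of algorithm 8.1."""
    # Append the current session vector a to the previous session data.
    A = np.column_stack([A_k, a])
    # Rank-k approximation A'_k of the extended matrix.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_prime = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    # The updated session vector a_k is the last column of A'_k.
    a_k = A_prime[:, -1]
    # Recommend the products (row numbers) with the largest entries of
    # a_k that are not already contained in a.
    candidates = [int(i) for i in np.argsort(-a_k) if a[i] == 0]
    return a_k, candidates[:n_rec]

# Illustrative toy data: 4 products, 3 previous sessions, rank 2.
rng = np.random.default_rng(0)
A_k = rng.random((4, 3))
a = np.array([1.0, 0.0, 0.0, 0.0])  # current session rewarded product 0
a_k, recs = recommend_from_session(A_k, a, k=2)
print(recs)  # recommended product indices, product 0 excluded
```

After the session terminates, one would append a_k as a new column of A_k and continue with the next session vector, as described above.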
Thus, in each step we conduct a low-rank approximation of the whole data matrix and use its generalization for prediction in the current session, by means of the last column. For the special case of a full-rank SVD, i.e., k = rank A, we would always obtain a_k = a and would not be able to provide a meaningful prediction.
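This full-rank remark can be checked numerically: with k = rank A the truncated reconstruction is exact, so the last column simply reproduces a. A small NumPy check (not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.column_stack([rng.random((4, 3)), rng.random(4)])  # [A_k, a]
a = A[:, -1]

# For k = rank A the rank-k reconstruction is exact, so the last
# column a_k of the reconstruction coincides with a itself.
k = np.linalg.matrix_rank(A)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
a_k = (U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :])[:, -1]
print(np.allclose(a_k, a))  # True
```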
The described procedure essentially complies with the typical approach used in the literature on factorization for recommendations, although there learning and evaluation are mostly carried out separately, i.e., offline. In this setting, the transactions of each session (in the literature mostly users) are subdivided into two disjoint training and test sets such that for each session vector it holds that a = a_train + a_test. This way the data matrix A is split into a training matrix A_train and a test matrix A_test. Next, a factorization is applied to A_train and thereafter evaluated on A_test. The adaptive behavior of our incremental SVD, in contrast, allows the more flexible procedure described above, which, however, is still computationally intensive. Thus, we are looking for further alternatives.
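The offline split a = a_train + a_test can be sketched as follows; the function name and the half-half split per session are illustrative assumptions:

```python
import numpy as np

def split_sessions(A, test_frac=0.5, seed=0):
    """Split each session vector (column of A) into disjoint parts so
    that A = A_train + A_test entry-wise (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    A_train = A.copy()
    A_test = np.zeros_like(A)
    for j in range(A.shape[1]):
        nonzero = np.flatnonzero(A[:, j])
        test_idx = rng.choice(nonzero,
                              size=int(len(nonzero) * test_frac),
                              replace=False)
        A_test[test_idx, j] = A[test_idx, j]  # move rewards to the test set
        A_train[test_idx, j] = 0.0
    return A_train, A_test

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0],
              [2.0, 1.0, 0.0],
              [1.0, 2.0, 2.0]])
A_train, A_test = split_sessions(A)
print(np.allclose(A_train + A_test, A))  # True
```

A factorization would then be fitted to A_train and its predictions compared against the held-out entries in A_test.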
The aforementioned "classical" offline approach has in practice the additional disadvantage that it can only be applied to existing users (in our case even sessions!) that already have a transaction history. To overcome this problem, the following approach is pursued, especially in the older literature. First, for a, the feature vector

f_k = S_k^{-1} U_k^T a

is computed. Now, in the feature space, we search for that existing feature vector which is closest to f_k and recommend the highest-rewarded products of its
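Assuming the folding-in formula takes its usual form f_k = S_k^{-1} U_k^T a (the inverse and transpose appear to have been lost in extraction), the projection into the feature space and the nearest-neighbor search can be sketched as follows; all data and sizes are illustrative:

```python
import numpy as np

# Rank-k factorization A ~ U_k S_k V_k^T of past session data
# (5 products, 6 sessions; all data illustrative).
rng = np.random.default_rng(2)
A = rng.random((5, 6))
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, S_k, V_k = U[:, :k], np.diag(s[:k]), Vt[:k, :].T

# Fold a new session vector into the feature space:
# f_k = S_k^{-1} U_k^T a (inverse/transpose assumed, see above).
a = rng.random(5)
f_k = np.linalg.solve(S_k, U_k.T @ a)

# The rows of V_k are the feature vectors of the existing sessions;
# take the nearest one and recommend its highest-rewarded products.
nearest = int(np.argmin(np.linalg.norm(V_k - f_k, axis=1)))
recommended = [int(i) for i in np.argsort(-A[:, nearest])[:3]]
print(nearest, recommended)
```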