9.4 Summary
We have generalized the factorization concept, introduced in the previous chapter for
matrices, to the higher-dimensional case of tensors. We were also able to develop
an incremental tensor factorization approach, the incremental HOSVD, based on
the incremental SVD. Initial experimental results indicate that tensor factorization
is useful for our RE approach.
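To make the construction concrete, the following is a minimal, non-incremental HOSVD sketch in NumPy. It is an illustration under our own naming, not the chapter's implementation; in particular, the incremental variant based on Brand's incremental SVD is not reproduced here.

import numpy as np

def unfold(tensor, mode):
    # Mode-n unfolding: move axis `mode` to the front and flatten the rest.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    # Factor matrices: leading left singular vectors of each mode unfolding.
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(u[:, :r])
    # Core tensor: project the tensor onto the factor matrices, mode by mode.
    core = tensor
    for mode, u in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Example: approximate a random 8 x 9 x 10 tensor with multilinear ranks (3, 3, 3).
core, factors = hosvd(np.random.rand(8, 9, 10), (3, 3, 3))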
The main challenge of tensor factorization is its complexity. This is a highly demanding
task that will certainly keep researchers occupied for many years to come. For
the Tucker model, and specifically for the HOSVD, there are algorithms that are
efficient in principle and come with rigorous convergence results. In particular, this
applies to the Lanczos method as well as to Brand's incremental SVD. Yet these
methods are complex to implement and computationally intensive in the high-
dimensional case. Moreover, the Tucker model itself suffers from the curse of
dimensionality, since its complexity grows exponentially with the number of
dimensions d.
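To make the exponential growth concrete (with illustrative numbers of our own, not taken from the chapter): for equal mode sizes n_1 = ... = n_d = n and multilinear ranks r_1 = ... = r_d = r, the Tucker format stores

d n r + r^d

parameters, and the r^d entries of the core tensor quickly dominate. For n = 1000, r = 10, and d = 10, the core alone already holds 10^10 entries.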
Thus, simpler factorization models, above all the canonical decomposition, appear
quite appealing at first sight. The complexity of the canonical decomposition
grows only linearly with d. However, it poses problems of its own: unlike the
Tucker decomposition, it is not stable with respect to the rank, and no general
algorithms for its efficient computation exist.
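For reference, the canonical format writes a tensor as a sum of r rank-one terms,

a_{i_1 \dots i_d} = \sum_{k=1}^{r} u^{(1)}_{i_1 k} u^{(2)}_{i_2 k} \cdots u^{(d)}_{i_d k},

so that only r(n_1 + ... + n_d) parameters are stored; the price is the rank instability just mentioned.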
This turns the attention of researchers back to the Tucker decomposition and
SVD-type approaches. Here, hierarchical decompositions appear to be the
solution. In particular, the tensor-train (TT) decomposition has emerged as an
efficient instrument for breaking the curse of dimensionality. It is relatively
simple: like the canonical decomposition, its number of parameters is linear in d.
At the same time, it shares the favorable properties of the Tucker model: TT
decompositions have stable ranks, they can be computed by means of SVD
procedures, rank reduction can be carried out efficiently, and so on. The main
remaining problem is thus the development of algorithms that compute TT
decompositions efficiently. This development is currently being pursued
intensively, and a number of interesting methods have already emerged.
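As an illustration of how SVD procedures yield a TT decomposition, here is a minimal sketch of the TT-SVD scheme in NumPy. It is a simplified variant with a fixed rank cap rather than an error-based truncation, and all names and parameters are ours, not the chapter's.

import numpy as np

def tt_svd(tensor, max_rank):
    # Sweep over the modes; each step is a truncated SVD of a reshaped matrix.
    shape = tensor.shape
    cores, rank = [], 1
    c = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        r_new = min(max_rank, len(s))  # simple rank cap instead of error tolerance
        cores.append(u[:, :r_new].reshape(rank, shape[k], r_new))
        c = (s[:r_new, None] * vt[:r_new]).reshape(r_new * shape[k + 1], -1)
        rank = r_new
    cores.append(c.reshape(rank, shape[-1], 1))  # final TT core
    return cores

def tt_to_full(cores):
    # Contract the cores left to right to recover the full tensor.
    full = cores[0]
    for g in cores[1:]:
        full = np.tensordot(full, g, axes=1)
    return full.reshape([g.shape[1] for g in cores])

# Example: decompose a random 4-way tensor with all TT ranks capped at 3.
cores = tt_svd(np.random.rand(4, 5, 6, 7), max_rank=3)
approx = tt_to_full(cores)

Each core has shape (r_{k-1}, n_k, r_k), so the total number of parameters is indeed linear in d for bounded mode sizes and ranks.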
The same holds for many other tensor decomposition algorithms. Many of them,
however, are fairly empirical, lacking theoretically valid estimates of convergence
rates or, indeed, any proof of convergence at all. Hence, it is still difficult to assess
the practical suitability of these recent methods.
In spite of all the difficulties outlined above, the importance of tensor factorization
as an approximation tool for future recommendation engines is obvious. The
field is currently undergoing rapid development. We shall therefore return to
tensor factorization in the next chapter and attempt to combine it with
reinforcement learning.