In some sense, this result tells us that we have been doing a special case of the
tensor factorization approach all along. Together with Proposition 10.1, moreover, it
ensures that, at least mathematically, assuming an MDP of order k and approximating
the state-value function according to the aggregation-based tensor model with an
arbitrary partition is no worse than simply assuming order 1. Hence, the approach
introduced above is consistent with what we have been doing all along.
10.5 Clustering Sequences of Products
At the end of the day, it all comes down to a judicious choice of the partition
underlying the aggregation prolongator deployed in the factorization. At first
glance, choosing an error-minimizing partition with a prescribed number of classes
seems most reasonable. As is well known among computer scientists, however, the
arising optimization problem is equivalent to the clustering problem mentioned in
the introductory part of the previous chapter, and is thus NP-hard. Of course, we can
use the k-means algorithm proposed in Sect. 10.2, also in conjunction with an
SVD or NMF to find a good initial guess. Nevertheless, we must keep in mind that,
with regard to an application to recommendation engines, an explicit representation
of the transition probabilities is typically neither available nor favorable. This
situation leaves us with essentially two options: the first consists in devising
an adaptive online method, and the second in an a priori choice of the partition based
on additional knowledge about the situation described by the model rather than on
purely mathematical reasoning.
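To make the first option above concrete, the following sketch clusters the states of a toy MDP into a partition by running Lloyd's k-means algorithm on the rows of a transition matrix. This is purely illustrative: it assumes the transition probabilities are explicitly available (which, as noted above, is typically not the case for recommendation engines), the toy data and all names are our own, and the deterministic farthest-point seeding stands in for the SVD- or NMF-based initial guess of Sect. 10.2.

```python
def kmeans_partition(rows, k, iters=20):
    """Partition row indices into k classes via Lloyd's algorithm,
    seeded with a deterministic farthest-point initialization.
    (An SVD/NMF-based seed, as in Sect. 10.2, could be used instead.)"""
    # farthest-point initial guess: start from row 0, repeatedly add the
    # row farthest (in squared Euclidean distance) from all chosen centres
    centers = [list(rows[0])]
    while len(centers) < k:
        far = max(rows, key=lambda r: min(
            sum((a - b) ** 2 for a, b in zip(r, c)) for c in centers))
        centers.append(list(far))
    assign = [0] * len(rows)
    for _ in range(iters):
        # assignment step: each row goes to its nearest centre
        for i, r in enumerate(rows):
            assign[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(r, centers[c])))
        # update step: class means (empty classes keep their old centre)
        for c in range(k):
            members = [rows[i] for i in range(len(rows)) if assign[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    classes = [[i for i in range(len(rows)) if assign[i] == c] for c in range(k)]
    return [cl for cl in classes if cl]

# toy 4-state transition matrix: states 0 and 1 behave alike, as do 2 and 3
P = [[0.9, 0.1, 0.0, 0.0],
     [0.8, 0.2, 0.0, 0.0],
     [0.0, 0.0, 0.1, 0.9],
     [0.0, 0.0, 0.2, 0.8]]
partition = kmeans_partition(P, k=2)
```

On this toy instance the method recovers the partition {{0, 1}, {2, 3}}; for realistic state spaces, the NP-hardness noted above means k-means only yields a local optimum whose quality depends on the initial guess.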
10.5.1 An Adaptive Approach
Since it is the subject of forthcoming research, the question of how to obtain a
partition that minimizes the approximation error has been left open so far.
Nevertheless, we shall outline some ideas and directions that we consider promising.
A possible adaptive method may be based upon a genetic algorithm (GA). GAs,
introduced by Holland [Hol92], are a biologically inspired search heuristic framework
for discrete optimization. Specifically, these schemes mimic a natural evolution
process consisting of selection, crossover, and mutation on a population of elements
of the search space. In the selection step, a subset of the population is picked
whose elements maximize some fitness function, which, more often than not, coincides
with the objective function. The crossover step consists in crossing elements of the
selected subset so as to obtain a new generation of individuals, each of which unites
properties of different fittest individuals of the previous generation. In the last step of
a GA iteration, each individual is exposed to mutation, which, mathematically,
amounts to perturbing the individual slightly with respect to a suitably chosen
metric. More details and references concerning genetic algorithms may be found in
[Zim06].
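The selection/crossover/mutation loop described above can be sketched as follows for the partition-search setting. Everything here is an illustrative assumption rather than the method of this text: a partition is encoded as one class label per state, selection keeps the fitter half of the population, crossover mixes two parents label-wise, mutation relabels each state with small probability, and the toy fitness function merely rewards one particular grouping.

```python
import random

def genetic_partition_search(n_states, k, fitness, pop_size=20,
                             generations=40, mut_rate=0.1, seed=1):
    """Search over partitions of n_states states into at most k classes,
    maximizing the given fitness function (illustrative sketch)."""
    rng = random.Random(seed)
    # an individual assigns each state a class label in {0, ..., k-1}
    pop = [[rng.randrange(k) for _ in range(n_states)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half of the population (elitist)
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # crossover: uniform label-wise mixing of two selected parents
        children = []
        while len(parents) + len(children) < pop_size:
            p, q = rng.sample(parents, 2)
            children.append([rng.choice(pair) for pair in zip(p, q)])
        # mutation: perturb each label of a child with small probability
        for ind in children:
            for i in range(n_states):
                if rng.random() < mut_rate:
                    ind[i] = rng.randrange(k)
        pop = parents + children
    return max(pop, key=fitness)

# toy fitness (an assumption): reward grouping states {0,1} together and
# {2,3} together, penalize merging the two groups
def toy_fitness(ind):
    return (ind[0] == ind[1]) + (ind[2] == ind[3]) - (ind[1] == ind[2])

best = genetic_partition_search(n_states=4, k=2, fitness=toy_fitness)
```

Note that because selected parents survive unchanged into the next generation, the scheme is elitist: the best partition found so far is never lost, which matters when the fitness landscape over partitions is as rugged as the NP-hardness result suggests.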