• Optimize U while V is fixed.
• Optimize V while U is fixed.
• Keep doing the preceding two steps until you're not changing very much at all. To be precise, you choose an ϵ, and if your coefficients are each changing by less than ϵ, then you declare your algorithm “converged.” (A sketch of this loop follows the list.)
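Here's a minimal sketch of that loop, assuming for simplicity that every entry of the preference matrix P is observed (the per-user handling of missing preferences is what the update described later is for); the function name als and the NumPy-based least-squares updates are illustrative, not from the text:

import numpy as np

def als(P, d, eps=1e-4, max_iters=100, seed=0):
    """Alternate between solving for U and V until nothing moves much."""
    rng = np.random.default_rng(seed)
    n_users, n_items = P.shape
    U = rng.normal(size=(n_users, d))   # user latent factors
    V = rng.normal(size=(n_items, d))   # item latent factors
    for _ in range(max_iters):
        U_old, V_old = U.copy(), V.copy()
        # optimize U while V is fixed: for each user i, fit p_i ~ V u_i
        U = np.linalg.lstsq(V, P.T, rcond=None)[0].T
        # optimize V while U is fixed: for each item j, fit p_j ~ U v_j
        V = np.linalg.lstsq(U, P, rcond=None)[0].T
        # "converged": every coefficient changed by less than eps
        if max(np.abs(U - U_old).max(), np.abs(V - V_old).max()) < eps:
            break
    return U, V

Calling something like als(np.random.rand(50, 30), d=5) returns the two factor matrices; the stopping test is exactly the ϵ check described above.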
Theorem with no proof: The preceding algorithm will converge if your prior is large enough.
If you enlarge your prior, you make the optimization easier because
you're artificially creating a more convex function—on the other hand,
if your prior is huge, then all your coefficients will be zero anyway, so
that doesn't really get you anywhere. So actually you might not want
to enlarge your prior. Optimizing your prior is philosophically
screwed because how is it a prior if you're back-fitting it to do what
you want it to do? Plus you're mixing metaphors here to some extent
by searching for a close approximation of X at the same time you are
minimizing coefficients. The more you care about coefficients, the less
you care about X. But in actuality, you only want to care about X.
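To make that trade-off concrete, here's a sketch of how the prior typically enters (the passage doesn't write this out; the quadratic-penalty form and the weight λ are assumptions):

$$\arg\min_{U,V} \; \sum_{\text{observed } (i,j)} \left( p_{i,j} - u_i \cdot v_j \right)^2 \;+\; \lambda \left( \sum_i \lVert u_i \rVert^2 + \sum_j \lVert v_j \rVert^2 \right)$$

The first term is the part that cares about approximating X; the second is the part that cares about small coefficients; and a larger λ makes the optimization better behaved while pulling every coefficient toward zero, which is exactly the tension described above.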
Fix V and Update U
The way you do this optimization is user by user. So for user i, you want to find:

$$\arg\min_{u_i} \sum_{j \in P_i} \left( p_{i,j} - u_i \cdot v_j \right)^2$$

where $v_j$ is fixed. In other words, you just care about this user for now.
But wait a minute, this is the same as linear least squares, and has a
closed form solution! In other words, set:
$$u_i = \left( V_{*,i}^{\tau} V_{*,i} \right)^{-1} V_{*,i}^{\tau} P_{*,i},$$

where $V_{*,i}$ is the subset of V for which you have preferences coming from user i. Taking the inverse is easy because it's d × d, which is small.
And there aren't that many preferences per user, so solving this many
times is really not that hard. Overall you've got a doable update for
U.
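As a sketch of that per-user solve (assuming the preferences for user i arrive as a list of (item index, rating) pairs; the names update_user and prefs_i are illustrative, and the prior term is left out to match the formula above):

import numpy as np

def update_user(prefs_i, V):
    """Closed-form least-squares update for one user's vector u_i, V held fixed.

    prefs_i : list of (item index j, preference p_ij) pairs for this user
    V       : (n_items, d) array of fixed item vectors
    """
    items = [j for j, _ in prefs_i]
    V_i = V[items]                            # V_{*,i}: rows of V this user rated
    p_i = np.array([p for _, p in prefs_i])   # P_{*,i}: this user's preferences
    # u_i = (V_i^tau V_i)^{-1} V_i^tau p_i; the system is only d x d, so it's cheap
    return np.linalg.solve(V_i.T @ V_i, V_i.T @ p_i)

Looping this over all users gives the whole U update; the V update is the mirror image, item by item, with U held fixed.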