e.g. NDCG. For a commercial search engine company, the real scenario is more
complex: the ranking function must be updated and improved periodically with
more training data, newly developed ranking features, or more sophisticated
ranking algorithms; at the same time, the ranking results should not change
dramatically. Such requirements bring a new challenge of robustness to learning
to rank.
To develop a robust ranking model, the very first step is to measure robustness.
Running multiple evaluations over time is one method, but it is very costly. A
practical alternative is to measure robustness as the probability that neighboring
pairs in a search result switch positions when the ranking scores are perturbed [ 10 ].
From the perspective of evaluation measures, incorporating such robustness factors
into existing measures, e.g. NDCG, could yield new measures that evaluate relevance
and robustness at the same time. However, the efforts on robustness measurement
are still very preliminary.
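The pair-switching probability from [ 10 ] can be estimated empirically. The sketch below is one illustrative reading of that idea, not the measure from the paper: it perturbs each ranking score with Gaussian noise of a chosen standard deviation and counts how often neighboring documents swap order. The noise model, its scale, and the function name are all assumptions for illustration.

```python
import random

def swap_probability(scores, sigma=0.05, trials=2000, seed=0):
    """Monte Carlo estimate of the probability that a pair of
    neighboring documents (by score) switches order when every
    score is perturbed by Gaussian noise with std `sigma`.
    The Gaussian turbulence model is an illustrative assumption."""
    rng = random.Random(seed)
    ranked = sorted(scores, reverse=True)
    swaps, pairs = 0, 0
    for _ in range(trials):
        for hi, lo in zip(ranked, ranked[1:]):
            pairs += 1
            # A swap occurs when the lower-scored document overtakes
            # its higher-scored neighbor after perturbation.
            if hi + rng.gauss(0, sigma) < lo + rng.gauss(0, sigma):
                swaps += 1
    return swaps / pairs

# Rankings with small score gaps are less robust: their neighboring
# pairs swap far more often under the same turbulence.
fragile = swap_probability([0.80, 0.79, 0.50])
robust = swap_probability([0.90, 0.60, 0.30])
```

Under this reading, a lower swap probability indicates a more robust ranking, which is exactly the quantity one could fold into an NDCG-style measure.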
How to learn a robust ranking function is another interesting topic. Intuitively, if
an algorithm could learn the parameters that control the sensitivity of the measure
to score perturbations, the resulting ranking functions would be more robust. Another
possible solution is related to incremental learning, which guarantees that the new
model remains largely similar to previous models.
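One simple way to realize the incremental-learning idea is a proximity-regularized update: fit the new training data while penalizing deviation from the previous model's weights. The minimal sketch below assumes a linear scoring model with squared loss; the penalty weight `lam`, the gradient-descent fitting, and all names are illustrative choices, not a method from the text.

```python
def incremental_update(w_prev, X, y, lam=1.0, lr=0.01, epochs=500):
    """Fit a linear ranking score w.x on new data (X, y) while
    penalizing deviation from the previous model:
        loss = sum_i (w.x_i - y_i)^2 + lam * ||w - w_prev||^2
    A large `lam` keeps the updated model close to w_prev, so the
    new ranking does not change dramatically. Plain gradient
    descent; an illustrative sketch, not a production recipe."""
    w = list(w_prev)
    for _ in range(epochs):
        # Gradient of the proximity penalty.
        grad = [2 * lam * (w[j] - w_prev[j]) for j in range(len(w))]
        # Gradient of the squared loss on the new data.
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            for j in range(len(w)):
                grad[j] += 2 * err * xi[j]
        w = [wj - lr * g for wj, g in zip(w, grad)]
    return w

# With a strong penalty the model stays near the old weights; with a
# weak penalty it moves toward the fit of the new data.
w_stay = incremental_update([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                            [0.0, 1.0], lam=10.0, lr=0.05)
w_move = incremental_update([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                            [0.0, 1.0], lam=0.1, lr=0.05)
```

The trade-off is explicit: `lam` plays the role of a stability knob, directly trading accuracy on new data against similarity to the deployed model.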
20.8 Online Learning to Rank
Traditional learning-to-rank algorithms for web search are trained in a batch mode
to capture the stationary relevance of documents to queries, and therefore have
limited ability to track dynamic user intentions in a timely manner. For time-sensitive
queries, the relevance of documents to a query about breaking news often changes
over time, which shows that batch-learned ranking functions do have limitations.
Real-time user click feedback can be a better and more timely proxy for this varying
relevance than the editorial judgments provided by human annotators. In other
words, an online learning-to-rank algorithm can quickly learn the best re-ranking
of the top portion of the original ranked list based on real-time user click
feedback.
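A minimal sketch of this idea, assuming a linear blend of the batch score with an exponentially decayed click estimate, might look as follows. The class name, the decay/blend parameters, and the blending rule are all illustrative assumptions; real online learning-to-rank methods are considerably more involved.

```python
class OnlineReranker:
    """Re-rank the top-k of a batch-ranked list using an exponentially
    decayed per-document click estimate. `alpha` controls how strongly
    recent feedback dominates; `beta` blends the click estimate with
    the original batch-learned score. All names are illustrative."""

    def __init__(self, alpha=0.1, beta=0.5):
        self.alpha = alpha
        self.beta = beta
        self.ctr = {}  # doc id -> decayed click estimate

    def record(self, doc, clicked):
        """Fold one impression's click outcome into the estimate."""
        prev = self.ctr.get(doc, 0.0)
        obs = 1.0 if clicked else 0.0
        self.ctr[doc] = (1 - self.alpha) * prev + self.alpha * obs

    def rerank(self, ranked_docs, scores, k=5):
        """Reorder only the top-k by the blended score; the tail of
        the batch ranking is left untouched."""
        top, rest = ranked_docs[:k], ranked_docs[k:]
        blended = {d: (1 - self.beta) * scores[d]
                      + self.beta * self.ctr.get(d, 0.0)
                   for d in top}
        return sorted(top, key=blended.get, reverse=True) + rest
```

For example, if users repeatedly click the second-ranked document and skip the first, the decayed click estimate quickly overrides the batch score and promotes the clicked document within the top-k, while the rest of the list stays stable.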
At the same time, using real-time click feedback would also benefit recency
ranking [ 6 ], whose goal is to balance the relevance and freshness of the top-ranked
results. By comparing real-time click feedback against the click history, we can
observe signals that can be leveraged both for ranking and for time-sensitive query
classification [ 8 ].
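One concrete form such a comparison signal could take, sketched under assumed names and thresholds (none of which come from [ 8 ]): flag a query as time-sensitive when its click rate in a recent window is a large multiple of its long-term historical rate, i.e. when clicks burst.

```python
def is_time_sensitive(recent_clicks, historical_rate, window,
                      threshold=3.0):
    """Flag a query as time-sensitive when its recent click rate is
    at least `threshold` times its long-term historical rate. A
    sudden burst of clicks suggests breaking-news intent. The
    threshold and the tiny smoothing floor are illustrative."""
    recent_rate = recent_clicks / max(window, 1)
    return recent_rate >= threshold * max(historical_rate, 1e-6)

# A query with 50 clicks in the last 100 impressions, against a
# historical rate of 1%, bursts well past the 3x threshold.
burst = is_time_sensitive(50, 0.01, 100)
quiet = is_time_sensitive(1, 0.01, 100)
```

Such a flag could then switch the ranker toward freshness-heavy features for the affected queries only, leaving ordinary queries under the batch-learned model.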
We have to admit that research on online learning and recency ranking is still
relatively preliminary. How to effectively incorporate time-sensitive features into
the online learning framework is still an open question for the research community.
Furthermore, as a ranking function is frequently updated according to real-time
click feedback, how to maintain its robustness and stability is another important
challenge that cannot be ignored.