since changing the hypothesis in the wrong direction may even increase the expected test error.
In [27], different ways of presenting search results to users are studied in order to actively control the collection of click-through data and obtain labels that are as informative as possible.
According to [27], passively collected click-through data suffer from the following limitation. After submitting a query, users very rarely evaluate results beyond the first page. As a result, the click-through data that are obtained are strongly biased toward documents already ranked at the top. Highly relevant results that are not initially ranked at the top may never be observed and evaluated.
To avoid this presentation effect, the ranking presented to users should be optimized to elicit useful feedback, rather than strictly according to estimated document relevance. A naive approach is to intentionally present unevaluated results in the top few positions, aiming to collect more feedback on them. However, such an ad hoc approach is unlikely to be useful in the long run and would hurt user satisfaction. To tackle this problem, [27] systematically discusses modifications of the search results that do not substantially reduce the quality of the ranking shown to users, yet produce much more informative user feedback.
In total, four different modification strategies are studied and evaluated in [27]; a code sketch of these strategies follows the list.
Random exploration: Select a random pair of documents and present them first and second, then rank the remaining documents according to the original ranking results.
Largest expected loss pair: Select the pair of documents d_i and d_j that have the largest pairwise expected loss contribution, and present them first and second. Then rank the remaining documents according to the original ranking results.
One step lookahead: Find the pair of documents whose contribution to the expected loss is likely to decrease most after obtaining users' feedback, and present them first and second.
Largest expected loss documents: For each document d_i, compute the total contribution of all pairs including d_i to the expected loss of the ranking. Present the two documents with the highest total contributions at the first and second positions, and rank the remainder according to the original ranking results.
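The following Python sketch illustrates how three of these strategies might be implemented. It is a minimal sketch under two assumptions that are not taken from [27]: the ranking model exposes a function prob(a, b) returning its estimated probability that document a should outrank document b, and a pair's expected loss contribution is taken to be the probability of ordering it wrongly, min(p, 1 - p). The one step lookahead strategy is omitted, since it additionally requires simulating the model update for each possible feedback outcome.

import itertools
import random

def pair_loss(p):
    # Expected loss contribution of one pair, given the model's
    # probability p that the first document should outrank the second.
    # Illustrative definition: the chance of ordering the pair wrongly,
    # which peaks when the model is most uncertain (p = 0.5).
    return min(p, 1.0 - p)

def promote(ranking, d_i, d_j):
    # Show d_i and d_j at positions one and two; keep the original
    # order for all remaining documents.
    rest = [d for d in ranking if d not in (d_i, d_j)]
    return [d_i, d_j] + rest

def random_exploration(ranking):
    # Strategy 1: promote a uniformly random pair.
    d_i, d_j = random.sample(ranking, 2)
    return promote(ranking, d_i, d_j)

def largest_expected_loss_pair(ranking, prob):
    # Strategy 2: promote the pair with the largest expected loss
    # contribution.
    d_i, d_j = max(itertools.combinations(ranking, 2),
                   key=lambda pair: pair_loss(prob(*pair)))
    return promote(ranking, d_i, d_j)

def largest_expected_loss_documents(ranking, prob):
    # Strategy 4: promote the two documents whose pairs contribute
    # most to the expected loss in total.
    def total(d):
        return sum(pair_loss(prob(d, o)) for o in ranking if o != d)
    first, second = sorted(ranking, key=total, reverse=True)[:2]
    return promote(ranking, first, second)

For instance, with documents ["d1", "d2", "d3", "d4"] and a model that is confident only about the pair ("d1", "d2"), largest_expected_loss_pair promotes one of the uncertain pairs instead, yielding a ranking such as ["d1", "d3", "d2", "d4"].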
According to the experimental results in [27], compared to the passive collection strategy used in previous work, the last three active exploration strategies lead to more informative feedback from users, and thus much faster convergence of the learning-to-rank process. This indicates that actively controlling what to present to users can help improve the quality of the click-through data.
13.3.2 Document and Query Selection for Training
Suppose we have already obtained the training data, whose labels are either from
human annotators or from click-through log mining. Now the question is as follows.