Using Incentives to Obtain Truthful Information
Boi Faltings
Artificial Intelligence Laboratory (LIA)
Swiss Federal Institute of Technology (EPFL), IN-Ecublens
CH-1015 Ecublens, Switzerland
boi.faltings@epfl.ch
Abstract. There are many scenarios where we would like agents to report their observations or expertise in a truthful way. Game-theoretic principles can be used to provide incentives to do so. I survey several approaches to eliciting truthful information, in particular scoring rules, peer prediction methods and opinion polls, and discuss possible applications.
1 Introduction
The internet has opened many new possibilities for gathering information from large numbers of individual agents. For example, people rate services in reputation forums, they annotate maps with location information, and they answer questions in online forums. In the future, software agents will control networks of sensors and report measurements such as air quality, radio spectrum, or traffic congestion.
An implicit assumption is that agents will make their best effort to report such information truthfully. However, when they are self-interested, this cannot always be assumed. For example, in online reputation forums, leaving a rating is a time-consuming operation, and most users will not do this unless they have a motive. Thus, one can often observe skewed distributions of ratings indicating that most reviews were left by users who either loved or hated the item they rated [1]. It is not clear whether ranking items by taking averages of such reviews is very helpful. Similarly, sensors may save energy by providing inaccurate measurements or no measurements at all, or they may be manipulated to provide skewed reports that serve the interests of their owners.
To obtain better-quality information, it is important to reward agents who contribute ratings, and thus to increase participation even among agents without ulterior motives. Such reward schemes could be useful both as incentives for human agents and for software agents operating sensors: rewards could finance the operation of the sensors and direct their deployment towards the most useful measurements [2].
Furthermore, it is possible to scale the rewards so that they specifically reward truthful reporting, and can even counter outside incentives to report false information. These mechanisms are based on scoring rules that reward correct prediction of a future outcome once that outcome becomes known. Peer prediction methods extend these rules to situations where the true outcome never becomes known: instead, they take the reports of other agents as the ground truth to compare against. This makes truthfulness an equilibrium, i.e. the best response strategy when all other agents are also truthful. Finally, I show how to design mechanisms that achieve this independently of
 