However, if the agent considers the poll unreasonably high, its best strategy is to
report bad, independently of its own observation. While this behavior is not truthful, it
can be considered helpful in that the agent drives the outcome of the poll closer to its
own opinion. For example, suppose A observes the plumber at work and realizes that
he is completely incompetent, but still by chance receives good service. A might then
hold a private probability that B will receive bad service that is much higher than the
10% suggested by the poll, say 50%. In that case A is better off reporting poor service:
its reward would be 1/0.1 = 10 with probability 0.5, an expectation of 5, which is much
higher than its expectation under truthful reporting. However, the report can still be
considered helpful in that it drives the value of the opinion poll towards A's true opinion.
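The comparison above can be sketched numerically. This is a minimal sketch, assuming a peer-matching scheme in which a report that matches the peer's pays the inverse of that report's poll frequency (the rule implied by the 1/0.1 = 10 figure); the function name and numbers are illustrative, not part of the original mechanism description:

```python
def expected_reward(report_bad, p_bad, poll_bad):
    """Expected payoff of one report under the assumed peer-matching
    scheme: a report that matches the peer's outcome pays 1/frequency
    of that report in the poll; a mismatch pays nothing."""
    if report_bad:
        # matches only if the peer actually receives bad service
        return p_bad * (1.0 / poll_bad)
    # matches only if the peer actually receives good service
    return (1.0 - p_bad) * (1.0 / (1.0 - poll_bad))

# A's private belief: 50% that B receives bad service; the poll says 10%.
print(expected_reward(True, 0.5, 0.1))   # reporting bad:  0.5 * 10 = 5.0
print(expected_reward(False, 0.5, 0.1))  # reporting good: 0.5 * (1/0.9), about 0.56
```

With these numbers, misreporting yields an expected reward roughly nine times that of truthful reporting, which is exactly why A deviates.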
The advantage of this mechanism is that agents can have different and unknown prior
distributions for the signal, whereas scoring rules require this distribution to be known
to the mechanism designer.
5 Applications
The techniques reported here have numerous applications. The most obvious ones are
reputation and review forums. Leaving feedback there is cumbersome and thus often
done by agents who have ulterior motives and therefore do not leave honest reports.
Here it would be useful to reward raters for their effort, and it would be even better to
scale these payments to encourage honest feedback.
Another range of applications is in ensuring quality of crowdsourcing. For example,
consider an image labeling task as in the ESP game proposed in [11]: two people are
independently asked to give keywords that describe the content of an image. They get a
reward when they provide matching keywords. This game has the flaw that people will
tend to use very common words, and so these have to be explicitly excluded. A more
general strategy based on the opinion poll mechanism given above would be to scale
the rewards according to the frequency of the matching word: a less common word
would fetch a higher reward. One can imagine many other applications in crowdsourcing
where rewards depend on agreement with other workers' results.
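The frequency-scaled reward for matching labels can be sketched as follows. This is an illustrative sketch only: the function `match_reward` and the sample label counts are made up, and the inverse-frequency rule is one simple way to realize the scaling described above:

```python
from collections import Counter

def match_reward(word_a, word_b, counts: Counter):
    """Reward a pair of labelers only when their keywords match,
    scaled inversely with how common the matched word is overall."""
    if word_a != word_b:
        return 0.0
    total = sum(counts.values())
    return total / counts[word_a]   # = 1 / empirical frequency of the word

# Hypothetical label counts collected so far for one image corpus.
labels = Counter({"dog": 50, "animal": 48, "dalmatian": 2})
print(match_reward("dog", "dog", labels))              # common word, low reward: 2.0
print(match_reward("dalmatian", "dalmatian", labels))  # rare word, high reward: 50.0
```

Under this rule, coordinating on a very common word is no longer attractive, so common words need not be excluded by hand as in the original ESP game.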
Further applications can be found in sensor networks. The peer prediction method
can be generalized to settings where agents do not measure exactly the same signal. It is
sufficient that measurements are correlated in a known way ([8]). Thus, one can design
a reward scheme that rewards truthful operation of a network of sensors that sense
related values, for example air pollution ([2]). This is particularly relevant when sensors
are operated by different entities who might save costs through inaccurate measurements,
or who might even maliciously want to manipulate them.
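A toy illustration of the idea behind scoring correlated sensors: the sketch below applies a logarithmic scoring rule to the modeled distribution of a correlated peer sensor's report. The correlation table and the reading names are entirely made up for illustration; the point is only that, under the assumed correlation, truthful reporting maximizes the expected score:

```python
import math

# Assumed (made-up) correlation model between two nearby pollution sensors:
# COND[own][peer] = modeled probability of the peer's reading given own reading.
COND = {"low":  {"low": 0.8, "high": 0.2},
        "high": {"low": 0.2, "high": 0.8}}

def score(report, peer_report):
    """Log scoring rule evaluated on the peer's actual report."""
    return math.log(COND[report][peer_report])

def expected_score(report, true_reading):
    """Expected score when the sensor's true reading determines the
    distribution of the (truthfully reporting) peer's report."""
    return sum(p * score(report, peer)
               for peer, p in COND[true_reading].items())

print(expected_score("low", "low"))   # truthful report
print(expected_score("high", "low"))  # misreport: strictly lower expected score
```

Because the log scoring rule is proper, reporting the true reading is the best response whenever the correlation model is accurate, which is the essence of rewarding truthful operation of correlated sensors.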
Services such as internet access, cloud computing, or wireless communications require
monitoring of the quality of service. This would be most easily done by the customers
themselves, but the difficulty is that they often have an incentive to misreport, since
they stand to gain refunds or other claims if service is deemed to be insufficient.
Somewhat surprisingly, it turns out that incentive mechanisms are entirely sufficient to
solve this problem, as shown in [12]. Provided that the entire user population is suffi-
ciently large, it would take a significant coalition of users to shift the average reported
 