Proposition 1), outputs a private query for each agent (Q is some set of available queries, including an empty query). Agents reply to these queries by publicly committing to their answer on a broadcast channel (without revealing their answer). Thus, γ_i is defined as the set of commitments agent i has made so far. Whenever no more information is needed from a particular agent i, q(i, γ_i) is the empty query, to which the agent replies by committing to an "empty
answer".
Agents then jointly compute the Boolean stop function s : Γ → {0, 1} and proceed to the next round (by asking more queries) if it is 0, or compute the outcome function f : Γ → O if s(γ) = 1.
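As a rough illustration, the round structure just described (private query, public commitment, stop function, outcome function) might be sketched as follows. All names and the loop structure are illustrative assumptions, not part of the text, and the cryptographic commitments on the broadcast channel are abstracted away as plain recorded values.

```python
# Illustrative sketch only: a real protocol would use cryptographic
# commitments on a broadcast channel; here an "answer" stands in for
# a commitment, and None stands in for the empty query/answer.
EMPTY = None

def run_protocol(agents, q, s, f):
    """agents: dict mapping agent id -> answer function,
    q(i, gamma_i): next private query for agent i (EMPTY if none),
    s(gamma): Boolean stop function on the joint commitments,
    f(gamma): outcome function, computed once s(gamma) == 1."""
    gamma = {i: [] for i in agents}            # commitments per agent
    while s(gamma) == 0:                       # jointly computed stop test
        for i in agents:
            query = q(i, gamma[i])             # private query for agent i
            answer = agents[i](query)          # agent commits to an answer
            gamma[i].append((query, answer))   # recorded commitment
    return f(gamma)                            # jointly computed outcome
```

A toy instantiation (e.g. one value query per agent, outcome = maximum reported value) runs one round of real queries and then computes the outcome.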
So far, all agents learn the number of rounds, i.e., the maximal number of queries asked to a single agent, and could infer information from it. Sometimes this information can be inferred from the outcome (e.g., in an English or Dutch auction).
However, as this is not the general case, the number of rounds needs to be hidden. For
this reason, we execute the protocol for the maximal number of rounds that could be
needed to compute the outcome and use the modified query function
q′(i, γ) = q(i, γ) if s(γ) = 0, and the empty query otherwise.
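Under the same illustrative assumptions as before, the padding construction might be sketched like this; the names `padded_query`, `run_fixed_rounds`, and `r_max` are hypothetical, not from the text.

```python
EMPTY = None  # stands in for the empty query / empty answer

def padded_query(q, s):
    """Modified query function: ask the real query while s(gamma) == 0,
    and the empty query afterwards, so every agent sees the same number
    of rounds regardless of when elicitation actually finished."""
    def q_mod(i, gamma):
        return q(i, gamma[i]) if s(gamma) == 0 else EMPTY
    return q_mod

def run_fixed_rounds(agents, q, s, f, r_max):
    """Execute exactly r_max rounds (the maximal number of rounds that
    could be needed), hiding the true round count from every agent."""
    gamma = {i: [] for i in agents}
    q_mod = padded_query(q, s)
    for _ in range(r_max):                  # always run r_max rounds
        for i in agents:
            query = q_mod(i, gamma)
            answer = agents[i](query)       # "empty answer" to an empty query
            gamma[i].append((query, answer))
    return f(gamma)
```

After the stop function first evaluates to 1, the remaining rounds consist only of empty queries and empty answers, so the transcript length is the same in every execution.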
The only information an agent learns is the sequence of queries he is asked. According to Definition 3, the agent can infer this information from the mechanism outcome anyway, thus giving him no information advantage over simply being informed of the outcome.
What remains to be shown is that there always is a protocol that hides some part of the
preferences unconditionally. If we define the query function to ask completely at random
for information that has not been revealed by that agent so far (satisfying the second
condition of Definition 3), some preferences always remain unelicited (in expectation).
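A minimal sketch of such a random query function, assuming each agent's preference information is a finite set of atomic items that can be asked for individually (a simplification not made explicit in the text):

```python
import random

def random_query(gamma_i, items):
    """Pick, uniformly at random, one piece of the agent's preference
    information that has not been asked for so far; return the empty
    query (None) once everything has been elicited."""
    asked = {qr for (qr, _) in gamma_i}                  # queries already posed
    remaining = [it for it in items if it not in asked]  # unrevealed items
    return random.choice(remaining) if remaining else None
```

Because the next target is drawn uniformly at random, any particular item has positive probability of still being unasked when the stop function fires, which is what leaves some preferences unelicited in expectation.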
There certainly exist particular mechanisms that allow for more efficient elicitation
protocols than this general proof construction. Also, in some specific mechanisms,
queries may depend on others' preferences if the information revealed through the
queries can be inferred from the outcome.
Together with Proposition 1 and Proposition 2 this result gives a nice classification of
private mechanisms: All mechanisms can be emulated guaranteeing computational privacy, privately elicitable mechanisms can additionally provide unconditional privacy
of a (non-empty) subset of preferences, and simple mechanisms provide unconditional
privacy of all preferences. A striking advantage of private elicitation over the approach
given in Proposition 2 is that it enables unconditional privacy of some preferences with-
out assuming private channels.
It is important to note that elicitation can invalidate strategy equilibria existing in the
single-step version of a mechanism if the queries asked to an agent depend on other
agents' preceding answers [CS02b]. When preference elicitation is used to implement
a mechanism that would be a dominant-strategy direct-revelation mechanism if imple-
mented as a single-step mechanism, then each agent's best (even in hindsight) strategy is
to act truthfully if the other agents act truthfully [CS01]. In other words, truthful strate-
gies form an ex post equilibrium. Ex post equilibria are not as robust as dominant-strategy
equilibria, but are more robust than Bayesian Nash equilibria in that they are prior-free.
The emulation of multi-step mechanisms is a powerful tool to guarantee strong pri-
vacy (partially unconditional) and reduce the amount of agents' deliberation required to