evaluate their preferences. For example, consider a veto voting mechanism: Preferences
are single bits (veto or not) and the outcome function is defined as f(θ) = θ_1 ∨ θ_2 ∨ … ∨ θ_n. This
mechanism is not simple according to Definition 2. As a consequence, there is no un-
conditionally fully private veto protocol. However, we can construct a protocol in which
most preferences remain unconditionally private by emulating a preference elicitor. The
protocol consists of n rounds. In each round, a randomly selected agent that has not been
queried so far is privately asked for his preference (veto or not). All other agents receive
empty queries and reply with empty responses. Once an agent privately submits a veto, all
agents receive empty queries in the following rounds. Since the query function is com-
puted fully privately, only some agents (those who are queried) learn some probabilistic
information on the number of vetoers.
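The round structure of this protocol can be sketched as follows. The simulation below is an illustrative abstraction (the function name and the use of Python's random module are ours, not the paper's); it elides the fully private computation of the query function and only shows which agents' bits are ever elicited:

```python
import random

def veto_protocol(preferences, rng=random.Random(0)):
    """Emulate the preference elicitor for veto voting (illustrative sketch).

    Agents are queried one at a time in random order; once a veto is
    submitted, all remaining agents receive only empty queries, so
    their preference bits are never elicited.
    """
    order = list(range(len(preferences)))
    rng.shuffle(order)                # randomly selected query order
    elicited = []                     # agents whose bit was actually asked for
    outcome = 0                       # f(theta) = OR of the veto bits
    for i in order:
        elicited.append(i)
        if preferences[i] == 1:       # a veto ends the elicitation
            outcome = 1
            break
    return outcome, elicited

outcome, elicited = veto_protocol([0, 0, 1, 0, 1])
# The bits of agents outside `elicited` remain unconditionally private.
```

Note that the number of elicited agents itself reveals only probabilistic information about the number of vetoers, matching the claim in the text.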
4.3 Untruthful Mechanisms
Untruthful mechanisms may not only lead to greater social welfare in some settings (re-
lying on computational assumptions) [CS03b], but they can also support the protection
of preferences. In fact, probably the most prominent truthful mechanism, the
Vickrey auction, is said to be rare because bidders are reluctant to reveal their
preferences truthfully as required by the dominant strategy [RTK90]. This problem and the
possibility of an untruthful mechanism center (which is stated as the other major rea-
son for the Vickrey auction's rareness) can be tackled by the techniques presented in
Section 3.2. Yet, even in the centralized model, preferences can be protected (at least
partially) by inducing strategic behavior in a mechanism. We sketch two ways to
achieve this, of which the second relies on computational intractability.
When the mapping from preferences to a strategy is a non-injective function, i.e. ,
different preferences can yield the same strategy, it is obviously impossible to uniquely
invert a strategy. This is naturally the case when the set of strategies S_i is smaller
than the set of preferences (|S_i| < |Θ_i|). For instance, in most voting protocols with more
than two candidates, a complete ranking of candidates (the preferences) is mapped to
a single strategic vote (the strategy). For example, when considering the US presidential
election in 2000 (plurality voting with three candidates), it is impossible to tell
whether someone who voted for Gore truthfully preferred Gore over Nader (even given
the prior beliefs of that voter, e.g. , that Nader would be far behind). The same argument
applies to most other voting protocols.
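This non-injectivity can be made concrete with a small sketch. The `front_runners` set below stands in for the voter's prior beliefs about which candidates have a realistic chance; it is our illustrative assumption, not part of the paper:

```python
from itertools import permutations

def plurality_strategy(ranking, front_runners={"Gore", "Bush"}):
    """Strategic plurality vote: vote for the highest-ranked candidate
    believed to be a front-runner (a toy model of the voter's beliefs)."""
    for candidate in ranking:
        if candidate in front_runners:
            return candidate
    return ranking[0]

candidates = ["Gore", "Bush", "Nader"]
votes = {}
for ranking in permutations(candidates):
    # group all complete rankings (preferences) by the vote they induce
    votes.setdefault(plurality_strategy(ranking), []).append(ranking)

# Several distinct rankings collapse onto the same vote, so observing
# the vote cannot uniquely reveal the underlying preferences.
assert len(votes["Gore"]) > 1
```

Here three of the six possible rankings map to a vote for Gore, so even an observer who knows the voter's beliefs cannot invert the vote back to a unique ranking.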
Based on these considerations, it might be interesting to construct mechanisms that
induce equilibria consisting of “one-way strategies”. Here, the mapping from preferences
to a strategy is computationally easy while the inversion is intractable (preferably
even given the beliefs that the agent had). This requires that the preference space Θ_i,
with |Θ_i| > |S_i|, is exponentially large or somehow enriched, possibly by random data
padding. We do not know whether such mechanisms can be constructed (for relevant
problems), but note that this might be another interesting application of computational
intractability in mechanism design.
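A toy illustration of why the preference space must be large: if |Θ_i| is small, any strategy mapping can be inverted by brute-force enumeration, whereas random padding enlarges the search space exponentially. The hash-based mapping below is purely illustrative and is not a usable mechanism (a real mechanism center would still need to evaluate the outcome function on the strategies):

```python
import hashlib

def strategy(preference, padding):
    """Toy 'one-way strategy': easy to compute from the preference,
    but only hard to invert once the domain is enriched with padding.
    (Illustration only; not a construction from the paper.)"""
    data = f"{preference}|{padding.hex()}".encode()
    return hashlib.sha256(data).hexdigest()

# Without padding, a tiny preference domain is invertible by brute force:
target = strategy(preference=3, padding=b"")
recovered = [p for p in range(8) if strategy(p, b"") == target]
assert recovered == [3]
# Enriching each preference with k random padding bytes multiplies the
# search space by 256**k, so the same enumeration attack becomes infeasible.
```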
5 Conclusion and Future Work
In this paper, we suggested that the fields of cryptography and mechanism design have
several similarities and can both greatly benefit from each other. We proposed two se-