
Commitment schemes are digital analogs of sealed envelopes (or, better, locked boxes). Sending a commitment means sending a string that binds the sender to a unique value without revealing this value to the receiver (as when getting a locked box). Decommitting to the value means sending some auxiliary information that allows the receiver to read the uniquely committed value (as when sending the key to the lock).
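The locked-box analogy can be sketched in code. The following is a toy hash-based commitment (a minimal illustration, not one of the provably secure constructions discussed in the text); the function names `commit` and `decommit` are chosen here for exposition.

```python
import hashlib
import secrets

def commit(value: int) -> tuple[bytes, bytes]:
    """Bind the sender to `value`: return (commitment, opening key).

    The random key hides the value; the hash binds the sender to it.
    """
    key = secrets.token_bytes(32)  # kept secret until decommitment
    com = hashlib.sha256(key + value.to_bytes(4, "big")).digest()
    return com, key

def decommit(com: bytes, key: bytes, value: int) -> bool:
    """Check that (key, value) opens the commitment `com`."""
    return hashlib.sha256(key + value.to_bytes(4, "big")).digest() == com

com, key = commit(2)
assert decommit(com, key, 2)       # the committed value opens correctly
assert not decommit(com, key, 3)   # a different value is rejected
```

The commitment `com` reveals nothing useful about the value (hiding), and the sender cannot later open it to a different value without finding a hash collision (binding).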

Common Input: A graph G = (V, E). Suppose that V ≡ {1, ..., n} for n def= |V|.

Auxiliary Input (to the prover): A 3-coloring φ : V → {1, 2, 3}.

The following four steps are repeated t · |E| many times so as to obtain soundness error exp(−t).

Prover's first step (P1): Select uniformly a permutation π over {1, 2, 3}. For i = 1 to n, send the verifier a commitment to the value π(φ(i)).

Verifier's first step (V1): Select uniformly an edge e ∈ E and send it to the prover.

Prover's second step (P2): Upon receiving e = (i, j) ∈ E, decommit to the i-th and j-th values sent in Step (P1).

Verifier's second step (V2): Check whether or not the decommitted values are different elements of {1, 2, 3} and whether or not they match the commitments received in Step (P1).

Fig. 4.2 The zero-knowledge proof of Graph 3-Colorability (of (75)). Zero-knowledge proofs for other NP-sets can be obtained using the standard reductions.
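One round of the four steps above can be sketched as follows. This is an illustrative simulation only: it uses the toy hash commitment from before in place of a real commitment scheme, and it plays both parties in one process, so the function names and data layout are choices made here, not part of the protocol specification.

```python
import hashlib
import random
import secrets

def commit(value: int, key: bytes) -> bytes:
    # toy hash-based commitment (illustrative, not a secure scheme)
    return hashlib.sha256(key + bytes([value])).digest()

def one_round(edges, coloring, n):
    """Run steps (P1)-(V2) once; return the verifier's verdict."""
    # (P1) prover picks a uniform permutation pi of {1, 2, 3} and
    #      commits to pi(phi(i)) for every vertex i
    perm = dict(zip((1, 2, 3), random.sample((1, 2, 3), 3)))
    keys = {i: secrets.token_bytes(32) for i in range(1, n + 1)}
    coms = {i: commit(perm[coloring[i]], keys[i]) for i in range(1, n + 1)}
    # (V1) verifier selects a uniformly random edge e = (i, j)
    i, j = random.choice(edges)
    # (P2) prover decommits to the i-th and j-th values
    vi, vj = perm[coloring[i]], perm[coloring[j]]
    # (V2) verifier checks the values differ and match the commitments
    return (vi != vj
            and commit(vi, keys[i]) == coms[i]
            and commit(vj, keys[j]) == coms[j])

# a triangle with a legal 3-coloring: every round should accept
edges = [(1, 2), (2, 3), (1, 3)]
coloring = {1: 1, 2: 2, 3: 3}
assert all(one_round(edges, coloring, 3) for _ in range(100))
```

A cheating prover without a legal coloring must assign some edge two equal colors, so a single round catches it with probability at least 1/|E|; repeating t · |E| times drives the soundness error down to exp(−t), as stated above.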

and we will confine ourselves to presenting an adequate simulator. We

start with three simplifying conventions (which are useful in general):

(1) Without loss of generality, we may assume that the cheat-

ing verifier strategy is implemented by a
deterministic

polynomial-time algorithm with an auxiliary input. This is

justified by fixing any outcome of the verifier's coins (as part

of the auxiliary input), and observing that our (uniform)

simulation of the various (residual) deterministic strategies

yields a simulation of the original probabilistic strategy.

(2) Without loss of generality, it suffices to consider cheating

verifiers that (only) output their view of the interaction (i.e.,

their input, coin tosses, and the messages they received). In

other words, it suffices to simulate the view of the cheat-

ing verifier rather than its output (which is the result of a

polynomial-time post-processing of the view).

(3) Without loss of generality, it suffices to construct a “weak

simulator” that produces an output with some noticeable