4.4.4.2. Knowledge Tightness
The foregoing efficiency measures are generic in the sense that they are applicable to
any protocol (independent of whether or not it is zero-knowledge). Because security
and efficiency are often convertible from one to the other (especially in this context),
one should consider refined measures of efficiency only in conjunction with a refined
measure of security.
In contrast to the generic (efficiency) measures, we consider a (security) measure
specific to zero-knowledge, called knowledge tightness. Intuitively, knowledge tightness is a refinement of zero-knowledge that is aimed at measuring the “actual security”
of the proof system, namely, how much harder the verifier needs to work, when not
interacting with the prover, in order to compute something that it can compute after in-
teracting with the prover. Thus, knowledge tightness is the ratio between the (expected)
running time of the simulator and the running time of the verifier in the real interaction
simulated by the simulator. Note that the simulators presented thus far, as well as all
known simulators, operate by repeated random trials, and hence an instructive measure
of tightness should consider their expected running times (assuming they never err, i.e., never output the special symbol ⊥), rather than the worst case. (Alternatively, one can consider the running time of a simulator that outputs ⊥ with probability at most 1/2.)
Definition 4.4.13 (Knowledge Tightness): Let t: N → N be a function. We say that a zero-knowledge proof for language L has knowledge tightness t(·) if there
exists a polynomial p ( · ) such that for every probabilistic polynomial-time verifier
V there exists a simulator M (as in Definition 4.3.2) such that for all sufficiently
long x ∈ L we have

    (Time_M(x) − p(|x|)) / Time_V(x) ≤ t(|x|)

where Time_M(x) denotes the expected running time of M on input x, and Time_V(x) denotes the running time of V on common input x.
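The bound in Definition 4.4.13 can be checked mechanically once concrete timings are fixed. The following Python sketch is illustrative only (the function name and all numbers are hypothetical, not from the text):

```python
# Illustrative sketch (not from the text): a mechanical check of the
# knowledge-tightness bound (Time_M(x) - p(|x|)) / Time_V(x) <= t(|x|).

def satisfies_tightness(time_M, time_V, n, p, t):
    """Return True if the simulator's expected running time time_M,
    after subtracting the generic overhead p(n), is within a factor
    t(n) of the verifier's running time time_V (input length n)."""
    return time_M - p(n) <= t(n) * time_V

# Hypothetical timings: overhead p(n) = 10*n, constant tightness t(n) = 2.
print(satisfies_tightness(time_M=2_000, time_V=900, n=50,
                          p=lambda n: 10 * n, t=lambda n: 2))  # True: 1500 <= 1800
```

The subtraction of p(n) before comparing captures the definition's intent: a fixed simulation overhead should not count against a verifier that runs extremely fast.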
We assume a model of computation that allows one machine to emulate another
machine at the cost of only the running time of the latter machine. The purpose of
the polynomial p(·) in the foregoing definition is to take care of the generic overhead created
by the simulation (this is important only in case the verifier V is extremely fast). We
remark that the definition of zero-knowledge does not guarantee that the knowledge
tightness is polynomial. Yet all known zero-knowledge proofs, and, more generally, all
zero-knowledge properties demonstrated using a single simulator with black-box access
to V , have polynomial knowledge tightness. In particular, Construction 4.3.8 (like the
construction in Exercise 20) has knowledge tightness 2, whereas Construction 4.4.7 has
knowledge tightness approximately 3/2. We believe that knowledge tightness is a very
important efficiency consideration and that it is desirable to have it be a constant.
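As a sanity check on such constants: a simulator that operates by repeated random trials, succeeding in each independent trial with probability q and paying roughly one emulated interaction per trial, makes 1/q trials in expectation (a geometric distribution). The constants quoted above are consistent with per-trial success probabilities of 1/2 and 2/3, respectively. A Python sketch (illustrative, not from the text) estimating this empirically:

```python
import random

def expected_trials(q, runs=100_000, seed=0):
    """Estimate the expected number of independent trials until a
    simulator whose trial succeeds with probability q first succeeds
    (geometric distribution; the true mean is 1/q)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        trials = 1
        while rng.random() >= q:  # trial fails with probability 1 - q
            trials += 1
        total += trials
    return total / runs

# If each trial costs about one emulated interaction with the verifier,
# the knowledge tightness is roughly the expected trial count:
print(round(expected_trials(1/2), 2))  # close to 2
print(round(expected_trials(2/3), 2))  # close to 3/2
```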
We comment that the notion of knowledge tightness is also instructive in reconciling
statements like the following: