probabilistic polynomial-time algorithm to one that runs in strict polynomial time is
not suitable for the current context. 8
We prefer to adopt Definition 4.3.1, rather than Definition 4.3.6, because we want
to avoid the notion of expected polynomial time. The main reason for our desire to
avoid the latter notion is that the correspondence between average polynomial time and
efficient computations is more controversial than the widely accepted association of
strict polynomial time with efficient computations. Furthermore, the notion of expected
polynomial time is more subtle than one realizes at first glance:
The naive interpretation of expected polynomial time is having an average run-
ning time that is bounded by a polynomial in the input length. This definition of
expected polynomial time is unsatisfactory because it is not closed under reductions and is (too) machine-dependent. Both aggravating phenomena follow from the fact that a function can have an average (say, over {0,1}^n) that is bounded by a polynomial (in n) and yet squaring the function will yield a function that is not bounded by a polynomial (in n). For example, the function f defined by f(x) def= 2^|x| if x ∈ {0}^* (i.e., x is all zeros), and f(x) def= |x|^2 otherwise, satisfies E[f(U_n)] < n^2 + 1, but E[f(U_n)^2] > 2^n.
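This counterexample can be checked exactly for small n. The sketch below takes f and the two expectations from the text; the helper name `moments` is illustrative, not from the source:

```python
# Exact check of the counterexample: f(x) = 2^|x| if x is all zeros,
# and f(x) = |x|^2 otherwise.  Over a uniform x in {0,1}^n,
#   E[f(U_n)]   = 2^-n * 2^n     + (1 - 2^-n) * n^2  < n^2 + 1
#   E[f(U_n)^2] = 2^-n * 2^(2n)  + (1 - 2^-n) * n^4  > 2^n
from fractions import Fraction

def moments(n):
    p_zero = Fraction(1, 2**n)                       # Pr[x = 0^n]
    e_f  = p_zero * 2**n      + (1 - p_zero) * n**2  # E[f(U_n)]
    e_f2 = p_zero * 2**(2*n)  + (1 - p_zero) * n**4  # E[f(U_n)^2]
    return e_f, e_f2

for n in range(1, 21):
    e_f, e_f2 = moments(n)
    assert e_f < n**2 + 1   # the average is polynomially bounded...
    assert e_f2 > 2**n      # ...but the average of the square is exponential
```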
Hence, a better interpretation of expected polynomial time is having a running
time that is bounded by a polynomial in a function that has an average linear
growth rate. That is, using the naive definition of linear on the average, we say that f is polynomial on the average if there exist a polynomial p and a linear-on-the-average function ℓ such that f(x) ≤ p(ℓ(x)) for all sufficiently long x's. Note that if f is polynomial on the average, then so is f^2 (indeed, f(x)^2 ≤ p(ℓ(x))^2, and the square of a polynomial is again a polynomial).
An analogous discussion applies to computational zero-knowledge. More specifically,
Definition 4.3.2 requires that the simulator work in polynomial time, whereas a more
liberal notion would allow it to work in expected polynomial time.
We comment that for the sake of elegance it is customary to modify the definitions
that allow expected polynomial-time simulators by requiring that such simulators also
exist for the interaction of expected polynomial-time verifiers with the prover.
4.3.1.7. Honest-Verifier Zero-Knowledge
We briefly discuss a weak notion of zero-knowledge. The notion, called honest-verifier
zero-knowledge , requires simulatability of the view of only the prescribed (or honest )
verifier, rather than simulatability of the view of any possible (probabilistic polynomial-
time) verifier. Although this notion does not suffice for typical cryptographic appli-
cations, it is interesting for at least a couple of reasons: First, this weak notion of
zero-knowledge is highly non-trivial and fascinating by itself. Second, public-coin
8 The naive transformation truncates runs of the algorithm (in our case, the simulator) that take more than t
times the expected number of steps. (Such a truncated run is said to produce some fixed output.) The statistical
difference between the output distribution of the original algorithm and the output distribution of the modified
algorithm is at most 1 / t . The problem is that t must be bounded by a fixed polynomial in the running time, and
so the statistical difference is not negligible. To see that the analysis of this naive transformation is tight, consider
its effect on the following algorithm: On input 1^n, the algorithm first selects uniformly r ∈ {0,1}^n, next takes 2^i idle steps, where i is the length of the longest all-zero prefix of r, and finally runs S(1^n), where S is an arbitrary
(strict) probabilistic polynomial-time algorithm.
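The idle-step distribution in this example can be computed exactly. The sketch below ignores S's own running time (an assumption made for illustration) and checks that the expected number of idle steps is n/2 + 1, while the probability mass cut off by truncating at t times the expectation respects the footnote's Markov-style 1/t bound:

```python
# i is the length of the longest all-zero prefix of a uniform r in {0,1}^n,
# so Pr[i = k] = 2^-(k+1) for k < n, and Pr[i = n] = 2^-n.
# The run then takes 2^i idle steps (S's own running time is ignored here).
from fractions import Fraction

def idle_step_stats(n, t):
    dist = {k: Fraction(1, 2**(k + 1)) for k in range(n)}
    dist[n] = Fraction(1, 2**n)
    expected = sum(p * 2**k for k, p in dist.items())   # E[2^i] = n/2 + 1
    # probability that a run exceeds t times the expectation, i.e. the
    # mass truncated away by the naive transformation
    truncated = sum(p for k, p in dist.items() if 2**k > t * expected)
    return expected, truncated

n = 30
for t in (2, 4, 8):
    expected, truncated = idle_step_stats(n, t)
    assert expected == Fraction(n, 2) + 1
    assert truncated <= Fraction(1, t)   # the 1/t bound from the footnote
```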