Three variants of zero-knowledge, which differ in how close the two probability ensembles are required to be, have been commonly considered in the literature (cf. (81; 59)):
(1) Perfect Zero-Knowledge requires that the two probability ensembles be identical.7
(2) Statistical Zero-Knowledge requires that these probability ensembles be statistically close (i.e., the variation distance between them is negligible).
(3) Computational (or rather general) Zero-Knowledge requires that these probability ensembles be computationally indistinguishable.
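For concreteness, the three requirements can be sketched in symbols as follows; the notation below (the verifier's view of the real interaction versus the simulator's output, indexed by inputs x in the language L, with \mu denoting a negligible function) is introduced here only for illustration and need not match the notation of Definition 4.1.
\[
\begin{aligned}
&\text{(1) Perfect:} && \{\mathrm{view}^{P}_{V^*}(x)\}_{x\in L} \;\equiv\; \{S^*(x)\}_{x\in L}\\[2pt]
&\text{(2) Statistical:} && \tfrac{1}{2}\sum_{\alpha}\Bigl|\Pr[\mathrm{view}^{P}_{V^*}(x)=\alpha]-\Pr[S^*(x)=\alpha]\Bigr| \;\le\; \mu(|x|)\\[2pt]
&\text{(3) Computational:} && \Bigl|\Pr[D(x,\mathrm{view}^{P}_{V^*}(x))=1]-\Pr[D(x,S^*(x))=1]\Bigr| \;\le\; \mu(|x|)\ \ \text{for every efficient distinguisher } D
\end{aligned}
\]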
Indeed, Computational Zero-Knowledge is the most liberal notion, and is the notion considered in Definition 4.1. We note that the class of problems having statistical zero-knowledge proofs contains several problems that are considered intractable. The interested reader is referred to (121).
Strict versus expected probabilistic polynomial-time. So far, we have not specified what exactly we mean by the term probabilistic polynomial-time. Two common interpretations are:
(1) Strict probabilistic polynomial-time. That is, there exists a bound (polynomial in the length of the input) on the number of steps in each possible run of the machine, regardless of the outcome of its coin tosses.
(2) Expected probabilistic polynomial-time. The standard approach is to look at the running-time as a random variable and bound its expectation (by a polynomial in the length of the input). As observed by Levin (cf. (65, Sec. 4.3.1.6) and (12)), this definitional approach is quite problematic (e.g., it is not model-independent and is not closed under algorithmic composition), and an alternative treatment of this random variable is preferable.
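In symbols, writing t_A(x; r) for the number of steps taken by machine A on input x with coin sequence r (notation introduced here only for illustration), the two interpretations can be sketched as:
\[
\begin{aligned}
&\text{Strict:} && \exists\ \text{polynomial } p\ \ \forall x\ \forall r:\ \ t_A(x;r)\;\le\;p(|x|)\\[2pt]
&\text{Expected:} && \exists\ \text{polynomial } p\ \ \forall x:\ \ \mathbb{E}_{r}\bigl[t_A(x;r)\bigr]\;\le\;p(|x|)
\end{aligned}
\]
The first bound must hold for every possible outcome of the coin tosses, whereas the second is only an average over them, which is why the expected-time notion is sensitive to the underlying machine model and to composition, as noted above.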
7 The actual definition of Perfect Zero-Knowledge allows the simulator to fail (while outputting a special symbol) with negligible probability, and the output distribution of the simulator is conditioned on its not failing.
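In symbols (with the same illustrative notation as above, and \bot denoting the special failure symbol), this relaxation can be sketched as:
\[
\Pr[S^*(x)=\bot]\;\le\;\mu(|x|)
\qquad\text{and}\qquad
\{S^*(x)\mid S^*(x)\neq\bot\}_{x\in L}\;\equiv\;\{\mathrm{view}^{P}_{V^*}(x)\}_{x\in L}
\]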
 