efficiently accomplished. A third important property that the chosen problem should fulfill is that the verification procedure for any solution be as simple as possible. The identification method described above is valid only if it does not allow anyone else to impersonate the legitimate user Alice, including the entity Bob carrying out the identification process. Such strong identification is possible because zero-knowledge theory has shown that knowledge of a solution to a problem can be proven without revealing anything about that solution, which implies that the verifier Bob learns nothing beyond the conviction that the prover Alice knows the secret solution.
Since their introduction, zero-knowledge proofs have proven to be very useful as a building block in the construction of cryptographic protocols, especially after Goldreich, Micali and Wigderson [2] showed that all languages in NP have zero-knowledge proofs assuming the existence of secure encryption functions.
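To make this result concrete, the following is a minimal sketch of one round of the graph 3-coloring protocol underlying that construction. The graph, the coloring and the hash-based commitment are illustrative assumptions only; a real protocol requires a commitment scheme with proven hiding and binding properties.

import hashlib, secrets

# Public input: a graph; the prover privately knows a proper 3-coloring of it.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # toy graph, known to both parties
coloring = {0: 0, 1: 1, 2: 2, 3: 0}        # prover's secret proper coloring

def commit(value):
    # Hash-based commitment to a small integer; returns (commitment, opening).
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + bytes([value])).digest(), nonce

# 1. The prover randomly permutes the three colors and commits to every vertex.
perm = [0, 1, 2]
secrets.SystemRandom().shuffle(perm)
permuted = {v: perm[c] for v, c in coloring.items()}
commitments, openings = {}, {}
for v, c in permuted.items():
    commitments[v], openings[v] = commit(c)

# 2. The verifier challenges with a random edge.
u, w = secrets.choice(edges)

# 3. The prover opens only the two endpoint commitments.
# 4. The verifier checks the openings and that the two revealed colors differ.
for v in (u, w):
    assert hashlib.sha256(openings[v] + bytes([permuted[v]])).digest() == commitments[v]
assert permuted[u] != permuted[w]

Because a fresh random permutation of the colors is used in every round, the two opened colors are just a random pair of distinct values, so the verifier learns nothing about the coloring itself, while a prover without a proper coloring is caught with probability at least 1/|E| per round.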
Due to their importance, the efficiency of zero-knowledge proofs has received considerable attention. The first zero-knowledge protocols (Fiat-Shamir [3] and its variants [4], [5], [6], and Schnorr [7]) were based on the two number-theoretic problems that also underlie the best-known public-key cryptosystems, namely factoring large integers and the discrete logarithm problem. Hence, the major disadvantages of those protocols are the unproven hardness of the underlying problems and the considerable computational cost of the required arithmetic operations.
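For concreteness, here is a minimal sketch of one round of the basic Fiat-Shamir protocol [3]. The primes are toy values chosen purely for illustration; an actual deployment needs a modulus of cryptographic size whose factorization is kept secret.

import secrets

# Setup: public modulus n = p*q; secret s and public key v = s^2 mod n.
p, q = 2267, 3623                  # toy primes (real ones are hundreds of digits)
n = p * q
s = secrets.randbelow(n - 2) + 2   # Alice's secret
v = (s * s) % n                    # Alice's public key

# 1. Commitment: Alice picks a random r and sends x = r^2 mod n.
r = secrets.randbelow(n - 2) + 2
x = (r * r) % n

# 2. Challenge: Bob sends a random bit e.
e = secrets.randbelow(2)

# 3. Response: Alice sends y = r * s^e mod n.
y = (r * pow(s, e, n)) % n

# 4. Verification: Bob accepts the round iff y^2 = x * v^e (mod n).
assert (y * y) % n == (x * pow(v, e, n)) % n

A cheater who does not know s can prepare a commitment that answers one of the two possible challenges, but not both, so each round can be passed fraudulently with probability 1/2; this is why the protocol must be iterated.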
Since the definition of zero-knowledge does not strictly require trapdoor functions (which are essential in public-key cryptography), but relies instead on one-way functions (a less stringent requirement), the way is open to techniques not based on number-theoretic problems. Thus, since 1989, new and more efficient schemes have appeared whose security is based on NP-complete combinatorial problems. Shamir published [8] the first identification scheme based on an NP-hard problem, the Permuted Kernel Problem. Two other approaches based on NP-hard problems came from Stern, who considered a coding problem, Syndrome Decoding [9], and the combinatorial problem of Constrained Linear Equations [10]. Later, Pointcheval [11] proposed another identification scheme based on the NP-hard Permuted Perceptrons Problem from learning theory. More recently, a general framework for designing zero-knowledge proofs based on any NP-complete graph problem was proposed in [12]. In all these identification schemes, the NP-hardness of the underlying problems guarantees the fulfillment of the three properties mentioned above [13]. However, a common drawback of these schemes is the high communication rate between the prover and the verifier, which considerably reduces the efficiency of the protocols. This is due to the number of iterations required by the algorithms, which is closely tied to the high probability of fraud at each iteration.
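A standard calculation, not specific to any one of the schemes above, quantifies this trade-off. If a cheating prover succeeds in a single iteration with probability p, then t independent iterations bound the overall impersonation probability by p^t, so reaching a security level of 2^{-k} requires

\[
  p^{t} \le 2^{-k}
  \quad\Longleftrightarrow\quad
  t \ge \frac{k}{\log_2(1/p)} .
\]

For example, a three-pass scheme with per-round fraud probability p = 2/3, as in Stern's syndrome decoding scheme [9], needs t >= 16 / log_2(3/2), that is, 28 iterations, to push the impersonation probability below 2^{-16}; since every iteration carries its own commitments and responses, the communication cost grows accordingly.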
On the other hand, NP-completeness only ensures that there is no polynomial-time algorithm for solving a problem in the worst case. However, with the development of Computational Complexity Theory, and specifically thanks to the advances made in Average-Case analysis, it has been proved that many