Again, the variance Var(M) = Var(n − ξ_{n1,n2,T}) = Var(ξ_{n1,n2,T}) = λ (since n is a constant), giving sd(M) = sd(ξ_{n1,n2,T}) = √λ ≈ 8.33.
Simulation with 10 million trials, each time with 1024 consecutive words
of keystream generation for the complete arrays P and Q, gives the average
of the number ξ_{n1,n2,T} of isolated vertices of the index graph of the state of
HC-128 as 69.02, with a standard deviation of 6.41. These values closely match
the theoretical estimates of the mean λ ≈ 69.41 and the standard deviation
√λ ≈ 8.33 of ξ_{n1,n2,T} derived in Corollary 10.5.5.
From Corollary 10.5.5, the theoretical estimates of the mean and standard
deviation of the size M of the largest component are 442.59 and 8.33, respectively.
From the same simulation described above, the empirical average and
standard deviation of M are found to be 407.91 ≈ 408 and 9.17, respectively.
The small gap between the theoretical estimate and the empirical result
arises from the fact that the theory takes n as infinity, but its value in the
context of HC-128 is 512. In the limit when n →∞, each vertex is either an
isolated one or part of the single giant component. In practice, on the other
hand, except for the isolated vertices (≈ 69 in number) and the vertices of the
giant component (≈ 408 in number), the remaining few (≈ 512−69−408 = 35
in number) vertices form some small components. However, the low (9.17)
empirical standard deviation of M implies that the empirical estimate 408 of
E(M) is robust. We will see later that as a consequence of Theorem 10.5.7,
any M > 200 is sufficient for our purpose.
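For illustration, the component statistics discussed above (the number of isolated vertices and the size of the largest component) can be computed from any concrete index graph with a standard union-find pass. The sketch below is generic: the edge list shown is a toy stand-in, since the actual edges of the index graph are determined by the HC-128 state as described earlier in Section 10.5.

```python
from collections import Counter

def component_sizes(n, edges):
    """Sizes of the connected components of an undirected graph
    on n vertices, via union-find with path compression."""
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    for u, v in edges:
        parent[find(u)] = find(v)

    # Map each vertex to its root and count how many share each root.
    return Counter(find(v) for v in range(n))

# Toy example on n = 512 vertices (the size of Q); the edge list is illustrative.
sizes = component_sizes(512, [(0, 1), (1, 2), (5, 6)])
isolated = sum(1 for s in sizes.values() if s == 1)  # → 507
largest = max(sizes.values())                        # → 3
```

On a simulated index graph, `isolated` and `largest` would play the roles of ξ_{n1,n2,T} and M in the statistics quoted above.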
If C = {y_1, y_2, ..., y_M} is the largest component of G, then we can guess
the word corresponding to any fixed index, say y_1. As explained in the
proof of Lemma 10.5.3, each guess of Q[y_1] uniquely determines the values
of Q[y_2], ..., Q[y_M]. According to Corollary 10.5.5 and the discussion following
it, we can guess around 408 words of Q by this method. This is the Second
Phase of the recovery algorithm.
The following result, known as the Propagation Theorem, can be used to
determine the remaining unknown words.
Theorem 10.5.6 (Propagation Theorem). If Q[y] is known for some y in
[0, 499], then m = ⌊(511 − y)/12⌋ more words of Q, namely, Q[y + 12],
Q[y + 24], ..., Q[y + 12m], can all be determined from Q[y] in a time
complexity that is linear in the size of Q.
Proof: Consider block B 1 . Following the notation in Section 10.5.1, the
equation for keystream generation is
s_{1,i} = h_2(Q[i−12]) ⊕ Q[i], for 12 ≤ i ≤ 511.
Written in another way, it becomes

Q[i] = s_{1,i} ⊕ (P[(Q[i−12])^{(0)}] + P[256 + (Q[i−12])^{(2)}]).

Setting y = i − 12, we have, for 0 ≤ y ≤ 499,

Q[y + 12] = s_{1,y+12} ⊕ (P[(Q[y])^{(0)}] + P[256 + (Q[y])^{(2)}]).    (10.18)
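Equation (10.18) can be iterated directly. The following is a minimal Python sketch of the Propagation Theorem, under the assumption that the block-B_1 keystream words and the full array P are known; the names `h2`, `propagate`, and the list `s1` are illustrative, and byte extraction follows the convention that x^{(0)} and x^{(2)} denote the least significant and third bytes of the 32-bit word x.

```python
MASK32 = 0xFFFFFFFF  # all arithmetic is modulo 2^32

def h2(P, x):
    # h_2 as used in HC-128 keystream generation: indexes P with
    # byte 0 and byte 2 of the 32-bit input x.
    return (P[x & 0xFF] + P[256 + ((x >> 16) & 0xFF)]) & MASK32

def propagate(Q_y, y, s1, P):
    """Recover Q[y+12], Q[y+24], ... from a single known word Q[y].

    Q_y : the known word Q[y]
    s1  : the 512 keystream words of block B_1 (s1[i] for 0 <= i < 512)
    P   : the 512-word array P (assumed fully known)

    Returns a dict {index: word} including the starting word itself.
    """
    known = {y: Q_y}
    i = y + 12
    while i <= 511:
        # Eq. (10.18): Q[i] = s_{1,i} XOR h_2(Q[i-12])
        known[i] = s1[i] ^ h2(P, known[i - 12])
        i += 12
    return known
```

Starting from one known Q[y], a single linear pass recovers the m = ⌊(511 − y)/12⌋ further words promised by Theorem 10.5.6.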
This is a recursive equation, in which all s_1 values and the array P are