Sattler and Schnorr show (subject to a conjecture) that if n_S ≥ 8 then the expected running time for the
rho algorithm is c·√(πr/2) group operations for an explicit constant c. Teske shows, using
results of Hildebrand, that the additive walk should approximate the uniform distribution
after fewer than √r steps once n_S ≥ 6. She recommends using the additive walk with
n_S ≥ 20 and, when this is done, conjectures that the expected cycle length is
1.3·√r (compared with the theoretical √(πr/2) ≈ 1.253·√r).
Further motivation for using large n_S is given by Brent and Pollard [ 94 ], Arney and
Bender [ 13 ] and Blackburn and Murphy [ 55 ]. They present heuristic arguments that the
expected cycle length when using n_S partitions is c_{n_S}·√(πr/2), where c_{n_S} = √(n_S/(n_S − 1)).
This heuristic is supported by the experimental results of Teske [ 542 ].
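The partition constant c_{n_S} = √(n_S/(n_S − 1)) is easy to evaluate numerically; the following minimal sketch (the sample values of n_S are my own choices for illustration) shows how quickly it approaches 1:

```python
import math

# Heuristic expected cycle length with n_S partitions is
# c_{n_S} * sqrt(pi*r/2), where c_{n_S} = sqrt(n_S / (n_S - 1)).
def c_ns(n_S):
    return math.sqrt(n_S / (n_S - 1))

for n_S in (3, 8, 20, 100):
    print(f"n_S = {n_S:3d}: c = {c_ns(n_S):.4f}")
# The penalty over a truly random walk shrinks quickly:
# c_3 ≈ 1.2247, c_8 ≈ 1.0690, c_20 ≈ 1.0260, c_100 ≈ 1.0050,
# which supports the recommendation to take n_S large (e.g. n_S = 20).
```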
Finally, when using equivalence classes (see Section 14.4 ) there are further advantages
in taking n S to be large.
14.3 Distributed Pollard rho
In this section we explain how the Pollard rho algorithm can be parallelised. Rather than
a parallel computing model we consider a distributed computing model. In this model
there is a server and N_P ≥ 1 clients (we also refer to the clients as processors ). There is
no shared storage or direct communication between the clients. Instead, the server can send
messages to clients and each client can send messages to the server. In general, we prefer
to minimise the amount of communication between server and clients. 7
To solve an instance of the discrete logarithm problem the server will activate a number
of clients, providing each with its own individual initial data. The clients will run the rho
pseudorandom walk and occasionally send data back to the server. Eventually, the server
will have collected enough information to solve the problem, in which case it sends all
clients a termination instruction. The rho algorithm with distinguished points can very
naturally be used in this setting.
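To make the server/client interaction concrete, here is a small simulation in a toy group. All parameters here — the prime p = 107, subgroup order r = 53, generator g = 4, the "divisible by 4" distinguished-point property and the restart cap — are assumptions of this sketch, not choices made in the text, and a sequential loop stands in for genuinely parallel clients:

```python
import random

# Toy instance (assumed for illustration): p = 2*53 + 1 is a safe prime,
# g = 4 generates the subgroup of prime order r = 53, and we seek the
# x with h = g^x (mod p).
p, r, g = 107, 53, 4
x_secret = 17
h = pow(g, x_secret, p)

n_S = 20                      # number of partitions for the additive walk
random.seed(1)
# Additive-walk multipliers m_j = g^{u_j} * h^{v_j}
us = [random.randrange(r) for _ in range(n_S)]
vs = [random.randrange(r) for _ in range(n_S)]
ms = [pow(g, u, p) * pow(h, v, p) % p for u, v in zip(us, vs)]

def is_distinguished(x):
    return x % 4 == 0         # crude distinguished-point property (assumed)

def client_walk(max_steps=10_000):
    """One client: walk from a random start until a distinguished point."""
    a, b = random.randrange(r), random.randrange(r)
    x = pow(g, a, p) * pow(h, b, p) % p    # invariant: x = g^a * h^b
    for _ in range(max_steps):
        if is_distinguished(x):
            return x, a, b    # the triple reported to the server
        j = x % n_S           # partition selected by the current point
        x = x * ms[j] % p
        a, b = (a + us[j]) % r, (b + vs[j]) % r
    return None               # cycled without a distinguished point; restart

# Server: collect triples until two distinct walks hit the same point.
seen = {}
while True:
    report = client_walk()
    if report is None:
        continue
    x, a, b = report
    if x in seen and seen[x][1] != b:
        a2, b2 = seen[x]
        # g^a h^b = g^{a2} h^{b2}  =>  x = (a - a2) / (b2 - b)  (mod r)
        x_found = (a - a2) * pow(b2 - b, -1, r) % r
        break
    seen[x] = (a, b)

print(x_found)  # recovers x_secret = 17
```

Because every reported triple satisfies x = g^a · h^b, any two triples with the same x and distinct b immediately yield the discrete logarithm; the restart-on-failure loop models clients being handed fresh starting points.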
The best one can expect for any distributed computation is a linear speedup compared
with the serial case (since if the overall total work in the distributed case was less than the
serial case then this would lead to a faster algorithm in the serial case). In other words, with
N_P clients we hope to achieve a running time proportional to √r/N_P.
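To get a feel for the numbers, suppose (an illustrative choice of parameters, not the text's) a group of order r ≈ 2^160 attacked with N_P = 10^4 clients:

```python
import math

r = 2**160        # illustrative group order (assumption for this example)
N_P = 10**4       # number of clients
total = math.sqrt(math.pi * r / 2)   # expected total group operations
per_client = total / N_P             # hoped-for cost per client under linear speedup
print(f"total ≈ {total:.2e}, per client ≈ {per_client:.2e}")
```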
14.3.1 The algorithm and its heuristic analysis
All processors perform the same pseudorandom walk (x_{i+1}, a_{i+1}, b_{i+1}) = walk(x_i, a_i, b_i)
as in Section 14.2.1 , but each processor starts from a different random starting point.
Whenever a processor hits a distinguished point it sends the triple (x_i, a_i, b_i) to the
server and re-starts its walk at a new random point (x_0, a_0, b_0). If one processor ever visits a
7 There are numerous examples of such distributed computation over the internet. Two notable examples are the Great Internet
Mersenne Prime Search (GIMPS) and the Search for Extraterrestrial Intelligence (SETI). One observes that the former search
has been more successful than the latter.