graph is an expander, we know that #X_i = #X_{i-1} + #δ_v(X_{i-1}) ≥ (1 + c)#X_{i-1} when X_{i-1} is small, and so #X_i ≥ (1 + c)^i. In the supersingular case we have #X_{E,k,S} = O(q) and in the ordinary case we have #X_{E,k,S} = h(O) = O(q^{1/2} log(q)). In both cases, one expects the algorithm to terminate after O(log(q)) steps. Step i involves, for every vertex j ∈ X_i (or j ∈ δ_v(X_{i-1})) and every ℓ ∈ S, computing roots of Φ_ℓ(j, y) in F_q. One can check that if all ℓ ∈ S are polynomially bounded in log(q) then the expected number of bit operations is bounded by #X_{E,k,S} times a polynomial in log(q).
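Since the frontier grows geometrically, the number of breadth-first steps needed before the visited set reaches the size of the graph is logarithmic in q. The toy calculation below makes this concrete; the expansion constant c = 0.5 and the target set size q^{1/2} (the ordinary-graph size up to log factors) are illustrative assumptions, not values from the text.

```python
import math

def steps_to_cover(q, c=0.5):
    """Smallest i with (1 + c)**i >= sqrt(q): the number of
    breadth-first steps before the visited set X_i has grown to
    ~q^(1/2) vertices, assuming growth factor (1 + c) per step."""
    return math.ceil(math.log(math.sqrt(q)) / math.log(1 + c))

for bits in (64, 128, 256):
    q = 2 ** bits
    # step count grows linearly in log(q), i.e. O(log(q)) steps
    print(bits, steps_to_cover(q))
```

Doubling the bit length of q doubles the step count, matching the O(log(q)) termination claim.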
In the supersingular case the algorithm is expected to perform O(q^{1/2}) bit operations. In the ordinary case, by Bach's result (and therefore, assuming various generalisations of the Riemann hypothesis) we can restrict to isogenies of degree at most 6 log(q)^2, and so each step is polynomial-time (the dominant cost of each step is finding roots of the modular polynomial; see Exercise 25.2.2). The total complexity is therefore an expected O(q^{1/4}) bit operations. The storage required is expected to be O(q^{1/4} log(q)^2) bits.
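To get a feel for the sizes involved, the sketch below evaluates the degree bound 6 log(q)^2 and the q^{1/4} work factor for a 256-bit field. Taking the natural logarithm is an assumption about the convention intended here.

```python
import math

def bach_degree_bound(q):
    # Restrict to isogeny degrees at most 6 log(q)^2 (under
    # GRH-style hypotheses); natural log assumed.
    return math.floor(6 * math.log(q) ** 2)

def work_factor_bits(q):
    # Expected O(q^(1/4)) bit operations, expressed as a bit length.
    return math.log2(q) / 4

q = 2 ** 256
print(bach_degree_bound(q))   # ~1.9e5: degrees stay small
print(work_factor_bits(q))    # 64.0: a 2^64 work factor
```

So even for cryptographic field sizes the isogeny degrees used remain tiny compared to q, while the overall search cost is the fourth root of the field size.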
Exercise 25.5.2 Let m ∈ N. Suppose all ℓ ∈ S are such that ℓ = O(log(q)^m). Let φ be the isogeny output by the Galbraith algorithm. Show, under the same heuristic assumptions as above, that the isogeny can be evaluated at P ∈ E(F_q) in polynomial-time.
Exercise 25.5.3
Isogenies of small degree are faster to compute than isogenies of large
degree. Hence, the average cost to compute an
-isogeny can be used as a weight for
the edges in the isogeny graph corresponding to
-isogenies. It follows that there is a
well-defined notion of shortest path in the isogeny graph between two vertices. Show how
Dijkstra's algorithm (see Section 24.3 of [
136
]) can be used to find a chain of isogenies
between two elliptic curves that can be computed in minimal time. What is the complexity
of this algorithm?
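One way to approach Exercise 25.5.3 is the standard Dijkstra routine below, run on a graph whose vertices stand in for j-invariants and whose edge weights model per-degree isogeny costs. The toy graph and weights are illustrative placeholders, not actual isogeny timings.

```python
import heapq

def dijkstra(graph, src, dst):
    """graph: dict mapping vertex -> list of (neighbour, weight) pairs,
    where weights model the cost of computing each isogeny.
    Returns (total cost, path) for a minimal-cost chain, or (inf, None)."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]       # priority queue of (distance, vertex)
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:      # reconstruct the path back to src
            path = [u]
            while u != src:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return float("inf"), None

# Toy weighted isogeny graph: the two-edge route is cheaper overall.
g = {"a": [("b", 2), ("c", 7)], "b": [("c", 3)], "c": []}
print(dijkstra(g, "a", "c"))  # (5, ['a', 'b', 'c'])
```

With a binary heap this runs in O((V + E) log V) time, where V and E are the numbers of vertices and edges actually explored.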
25.5.2 The Galbraith-Hess-Smart algorithm
We now restrict to the ordinary isogeny graph and sketch the algorithm of Galbraith, Hess
and Smart [
204
]. The basic idea is to replace the breadth-first search by a random walk,
similar to that used in the kangaroo algorithm.
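The deterministic-walk-with-distinguished-points structure shared with the kangaroo method can be sketched generically. Everything concrete below is a toy assumption: the step function is a simple quadratic map over integers mod a small prime rather than the ideal-class walk, and "distinguished" means divisible by 4 (in practice a much rarer property is used, so that little storage is needed).

```python
def walk(start, step, distinguished, max_steps=10**6):
    """Iterate x -> step(x) until a distinguished point is reached;
    return that point and the number of steps taken."""
    x = start
    for i in range(max_steps):
        if distinguished(x):
            return x, i
        x = step(x)
    raise RuntimeError("no distinguished point found")

# Toy instantiation (not the GHS walk itself):
p = 10007
step = lambda x: (x * x + 1) % p       # stand-in for the hash-driven step
distinguished = lambda x: x % 4 == 0   # a 1/4 fraction of points

d1, n1 = walk(123, step, distinguished)
d2, n2 = walk(step(123), step, distinguished)  # starts one step ahead
assert d1 == d2  # walks on the same trajectory reach the same point
```

The key property is that the step depends only on the current value, so two walks that ever land on the same value follow identical paths from then on and report the same distinguished point, which is how a collision is detected without storing every step.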
Let H be a hash function from F_q to a set S of prime ideals of small norm. One starts two random walks at x_0 = j(E) and y_0 = j(Ẽ), and stores ideals a_0 = (1) and b_0 = (1). One can think of (x_0, a_0) as a “tame walk” and (y_0, b_0) as a “wild walk”. Each step of the algorithm computes new values x_i and y_i from x_{i-1} and y_{i-1}: to compute x_i, set l = H(x_{i-1}) and ℓ = N(l), find the roots of Φ_ℓ(x_{i-1}, z), choose the root corresponding to the ideal l (using the trick mentioned in Remark 25.3.9) and call it x_i. The same process is used (with the same function H) for the sequence y_i. The ideals are also updated as a_i = a_{i-1}l (reduced in the ideal class group to some short canonical representation of ideals). If x_i = y_j then the walks follow the same path. We designate certain elements of F_q as being distinguished points, and if x_i or y_i is distinguished then it is stored together with the corresponding ideal