and to classify these problems according to their difficulty. An intrinsic estimation
of the difficulty of a problem is hard to come by and so usually one has to settle
for estimating the cost of solving the problem by means of a specific algorithm.
Computational complexity theory classifies algorithms in terms of the resources
(such as time, space, etc.) needed to run them. The most important of these resources
is time, which is usually evaluated by means of a function that estimates how running
time grows with input size. This function does not attempt to measure the real
time the algorithm will spend on a given input but, rather, the number of elementary
operations used, which will be roughly proportional to the time actually spent (with
the proportionality constant depending on many additional factors, such as the power
of the machine or machines running it, the specifics of the implementation, etc.).
The important thing here is that this estimation should be precise enough to allow
us to predict, given the time required to execute the algorithm with a given input
on a concrete machine, how long it will take to run the algorithm with the same
implementation and a larger input.
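For instance, the following Python sketch (an illustration added here; the algorithm and input sizes are arbitrary choices, not taken from the text) counts the comparisons performed by a simple quadratic-time sorting algorithm. The count depends only on the input size, not on the machine, so a time measured at one size lets us extrapolate to larger sizes up to the machine-dependent proportionality constant.

    import random
    import time

    def selection_sort(a):
        """Sort the list a in place, returning the number of comparisons made."""
        ops = 0
        n = len(a)
        for i in range(n):
            m = i
            for j in range(i + 1, n):
                ops += 1          # each comparison counts as one elementary operation
                if a[j] < a[m]:
                    m = j
            a[i], a[m] = a[m], a[i]
        return ops

    # The comparison count is exactly n*(n-1)/2, independent of the machine,
    # so doubling n multiplies it (and hence the running time) by about 4.
    for n in (1000, 2000):
        data = [random.random() for _ in range(n)]
        t0 = time.perf_counter()
        ops = selection_sort(data)
        print(n, ops, time.perf_counter() - t0)

A time measured at n = 1000 thus predicts roughly four times that figure at n = 2000, with the same implementation on the same machine.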
2.3.1 Asymptotic Notation
In order to achieve the estimation just mentioned it will be enough to know the
asymptotic behavior of the running time function, i.e., the growth rate of this function
as the input size gets large. This growth rate might be a complicated function as it
will depend on many different factors, but the idea is to approximate it by simpler
functions. For this, the following asymptotic notation is used:
Definition 2.8 Let f, g : N → R⁺ (where R⁺ denotes the positive reals; more generally, we can consider functions defined for sufficiently large positive integers that are eventually positive, i.e., f(x), g(x) > 0 for sufficiently large x). Then we say that:

1. f = O(g) (f is big-O of g) if there exist c ∈ R⁺ and x₀ ∈ N such that f(x) ≤ c·g(x) for all x ≥ x₀. In this case we also say that f is of order g.
2. f = Ω(g) (f is big-Omega of g) means that there exist c ∈ R⁺ and x₀ ∈ N such that f(x) ≥ c·g(x) for all x ≥ x₀; equivalently, g = O(f).
3. f = Θ(g) (f is big-Theta of g) means that there exist c, d ∈ R⁺ and x₀ ∈ N such that c·g(x) ≤ f(x) ≤ d·g(x) for all x ≥ x₀; equivalently, f = O(g) and g = O(f).
4. f = o(g) (f is little-o of g) if lim_{x→∞} f(x)/g(x) = 0; equivalently, for every ε > 0, f(x) ≤ ε·g(x) for every sufficiently large x. Intuitively, this means that g grows much faster than f, and it clearly implies that f = O(g).
5. f ∼ g if lim_{x→∞} f(x)/g(x) = 1. In this case we say that f is asymptotically equal (or asymptotic) to g.
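A quick numerical sanity check makes these definitions concrete: computing the ratio f(x)/g(x) for growing x, a ratio tending to a positive constant signals f = Θ(g), while a ratio tending to 0 signals f = o(g). The following Python sketch (the sample functions are arbitrary illustrative choices, not taken from the text) shows both cases.

    # Ratios f(x)/g(x) for growing x (sample functions chosen arbitrarily).
    def ratios(f, g, xs):
        return [f(x) / g(x) for x in xs]

    xs = [10, 20, 40, 60]

    # 3x^2 + 5x + 7 versus x^2: the ratio tends to 3, so f = Theta(g)
    # (in Definition 2.8 one may take, e.g., c = 3 and d = 4 for x ≥ 7).
    print(ratios(lambda x: 3*x**2 + 5*x + 7, lambda x: x**2, xs))

    # x^2 versus 2^x: the ratio tends to 0, so f = o(g), and hence f = O(g).
    print(ratios(lambda x: x**2, lambda x: 2**x, xs))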