5.4 general big-oh rules
Now that we have the basic ideas of algorithm analysis, we can adopt a
slightly more formal approach. In this section, we outline the general rules for
using Big-Oh notation. Although we use Big-Oh notation almost exclusively
throughout this text, we also define three other types of algorithm notation
that are related to Big-Oh and used occasionally later on in the text.
definition: (Big-Oh) T(N) is O(F(N)) if there are positive constants c and N₀ such that T(N) ≤ cF(N) when N ≥ N₀.
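As an illustration (not from the text), the definition can be checked numerically for a concrete pair of functions. The sketch below assumes T(N) = 3N² + 10N and F(N) = N², with the hypothetical witness constants c = 4 and N₀ = 10:

```java
// Sketch: verify the Big-Oh definition numerically for one concrete case.
// Assumed example functions: T(N) = 3N^2 + 10N and F(N) = N^2.
// With c = 4 and N0 = 10, T(N) <= c*F(N) holds for all N >= N0, because
// 3N^2 + 10N <= 4N^2 exactly when N >= 10.
public class BigOhCheck {
    static long t(long n) { return 3 * n * n + 10 * n; }
    static long f(long n) { return n * n; }

    public static void main(String[] args) {
        final long c = 4, n0 = 10;
        boolean holds = true;
        for (long n = n0; n <= 100_000; n++) {
            if (t(n) > c * f(n)) { holds = false; break; }
        }
        System.out.println("T(N) <= c*F(N) for all checked N >= N0: " + holds);
    }
}
```

A finite loop cannot prove the definition for all N, of course; it only spot-checks the algebraic claim in the comments.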
definition: (Big-Omega) T(N) is Ω(F(N)) if there are positive constants c and N₀ such that T(N) ≥ cF(N) when N ≥ N₀.
definition: (Big-Theta) T(N) is Θ(F(N)) if and only if T(N) is O(F(N)) and T(N) is Ω(F(N)).
definition: (Little-Oh) T(N) is o(F(N)) if and only if T(N) is O(F(N)) and T(N) is not Θ(F(N)).⁴
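One informal way to see the difference between Big-Oh and Little-Oh is to watch the ratio T(N)/F(N) as N grows. A minimal sketch under assumed example functions (not from the text): for T(N) = 2N against F(N) = N² the ratio shrinks toward zero, suggesting 2N is o(N²), while for T(N) = 3N² the ratio settles at the constant 3, so 3N² is Θ(N²) and therefore not o(N²).

```java
// Sketch: contrast Little-Oh with Big-Theta by printing T(N)/F(N).
// Assumed example functions (illustrative only):
//   T1(N) = 2N   -> ratio 2N / N^2 = 2/N shrinks toward 0  (o(N^2))
//   T2(N) = 3N^2 -> ratio stays at 3                       (Theta(N^2))
public class LittleOhDemo {
    public static void main(String[] args) {
        for (long n = 10; n <= 1_000_000; n *= 100) {
            double f = (double) n * n;          // F(N) = N^2
            System.out.printf("N=%-8d  2N/N^2=%.6f  3N^2/N^2=%.1f%n",
                              n, 2 * n / f, 3.0 * n * n / f);
        }
    }
}
```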
The first definition, Big-Oh notation, states that there is a point N₀ such that for all values of N that are past this point, T(N) is bounded by some multiple of F(N). This is the sufficiently large N mentioned earlier. Thus, if the running time of an algorithm is O(N²), then, ignoring constants, we are guaranteeing that at some point we can bound the running time by a quadratic function. Notice that if the true running time is linear, then the statement that the running time is O(N²) is technically correct because the inequality holds. However, O(N) would be the more precise claim. If we use the traditional inequality operators to compare growth rates, then the first definition says that the growth rate of T(N) is less than or equal to that of F(N).

(Big-Oh is similar to less than or equal to, when growth rates are being considered.)
The second definition, T(N) = Ω(F(N)), called Big-Omega, says that the growth rate of T(N) is greater than or equal to that of F(N). For instance, we might say that any algorithm that works by examining every possible subsequence in the maximum subsequence sum problem must take Ω(N²) time because a quadratic number of subsequences are possible. This is a lower-bound argument that is used in more advanced analysis. Later in the text, we will see one example of this argument and demonstrate that any general-purpose sorting algorithm requires Ω(N log N) time.

(Big-Omega is similar to greater than or equal to, when growth rates are being considered.)
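The lower-bound claim can be made concrete with a sketch of the brute-force approach (the array contents and variable names here are illustrative). Because the nested loops enumerate every (start, end) pair, the method does Ω(N²) work no matter what values the array holds:

```java
// Sketch: brute-force maximum subsequence sum. Every subsequence
// a[i..j] is examined, so the nested loops alone force Omega(N^2) work.
public class MaxSubSum {
    static int maxSubSum(int[] a) {
        int maxSum = 0;                        // empty subsequence allowed
        for (int i = 0; i < a.length; i++) {
            int thisSum = 0;
            for (int j = i; j < a.length; j++) {
                thisSum += a[j];               // sum of subsequence a[i..j]
                if (thisSum > maxSum)
                    maxSum = thisSum;
            }
        }
        return maxSum;
    }

    public static void main(String[] args) {
        int[] a = { -2, 11, -4, 13, -5, 2 };
        System.out.println(maxSubSum(a));      // prints 20 (from 11 - 4 + 13)
    }
}
```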
4. Our definition for Little-Oh is not precisely correct for some unusual functions, but it is the simplest way to express the basic concepts used throughout this text.