Fig. 3.9 Probability of error P_e versus E_s/η (dB); P_e runs from 0.5 down to 10^-5 as E_s/η goes from -4 to 8 dB
Also, at the end of the bit interval, S0(T) = VT/τ for logic 1, and the error will occur if n0(T) < -VT/τ. Hence, the probability of error in this case is given by the marked area (1) shown in Fig. 3.9. Because of the symmetry of the Gaussian curve, the marked areas (0) and (1) are equal, and so the probability of error defined by Eq. (3.14) is sufficient to deduce the entire error probability.
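As a quick numerical illustration of that symmetry argument, the sketch below (not from the text) checks that the two shaded tails of Fig. 3.9 are equal, assuming n0(T) is zero-mean Gaussian; the values of V, T, τ and the noise standard deviation sigma are illustrative placeholders.

```python
import math
import random

V, T, tau = 1.0, 1e-3, 1e-3       # illustrative values, not from the text
sigma = 0.5 * V * T / tau         # assumed noise standard deviation
a = V * T / tau                   # magnitude of S0(T) at the sampling instant

def phi(x):
    """Standard Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

p_area_0 = 1.0 - phi(a / sigma)   # P(n0(T) > +VT/tau): error on logic 0
p_area_1 = phi(-a / sigma)        # P(n0(T) < -VT/tau): error on logic 1
print(p_area_0, p_area_1)         # equal, by symmetry of the Gaussian pdf

# Monte Carlo confirmation of the same symmetry.
samples = [random.gauss(0.0, sigma) for _ in range(200_000)]
print(sum(n > a for n in samples) / len(samples),
      sum(n < -a for n in samples) / len(samples))
```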
The graph of P_e versus E_s/η in dB is a clear representation of the improvement in error probability with signal strength [2]. The maximum value of P_e is 1/2, which implies that even if the signal is entirely lost in the noise, the receiver is wrong no more than half the time on average.
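The curve of Fig. 3.9 can be reproduced numerically. The sketch below assumes the usual integrate-and-dump result P_e = (1/2) erfc(sqrt(E_s/η)); the exact form of Eq. (3.14) is not reproduced in this excerpt, so treat the expression as an assumption for illustration only.

```python
import math

def pe(es_over_eta_db):
    """Error probability for a given Es/eta in dB (assumed closed form)."""
    r = 10.0 ** (es_over_eta_db / 10.0)   # convert dB to a power ratio
    return 0.5 * math.erfc(math.sqrt(r))

for db in (-40, -4, 0, 4, 8):
    print(f"Es/eta = {db:>4} dB  ->  Pe = {pe(db):.3e}")

# As Es/eta decreases (signal lost in the noise), Pe tends to 0.5,
# matching the stated maximum value of 1/2.
```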
3.3 The Optimum Filter
As discussed in the previous section, the probability of error is the characterizing parameter of the filter. The filter (or integrator) that provides the minimum probability of error is called an optimum filter.
Fig. 3.10 shows a generalized receiver for binary-coded PCM. If the input is S0(t), the output is v0(T) = S00(T) + n0(T), and if the input is S1(t), the output is v0(T) = S01(T) + n0(T). Therefore, if no noise were present, the output of the entire receiver assembly would be either S00(T) or S01(T). Thus, in the presence of noise, the decision rests on whether v0(T) lies closer to S00(T) or to S01(T). The decision boundary is, therefore, the average of S00(T) and S01(T), i.e. [S00(T) + S01(T)]/2.
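A minimal sketch of that threshold decision is given below; the function name and the sampled values S00(T), S01(T), v0(T) are hypothetical placeholders introduced only to make the midpoint rule concrete.

```python
def decide(v0_T, s00_T, s01_T):
    """Pick the symbol whose noise-free output is closer to v0(T),
    i.e. compare v0(T) with the midpoint (S00(T) + S01(T)) / 2."""
    threshold = 0.5 * (s00_T + s01_T)
    if s01_T > s00_T:
        return 1 if v0_T >= threshold else 0
    return 0 if v0_T >= threshold else 1

# Example: with S00(T) = -1 and S01(T) = +1 the boundary sits at 0.
print(decide(0.3, -1.0, 1.0))    # -> 1
print(decide(-0.2, -1.0, 1.0))   # -> 0
```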