The signal processing electronic stages represented in Figure 2 are based on a Digital Signal Processing (DSP) integrated circuit. The DSP performs a number of arithmetic operations. Figure 3 illustrates the two main functional blocks that must be implemented on the DSP. The first one, labeled 'functional block A', corresponds to the set of signal processing stages that compensate for the hearing loss. The second one, 'functional block B', is the classification system itself. Note that this classification system conceptually consists of two basic parts:
- A feature extraction stage to properly characterize the signal to be classified (see Sections 3 and 5.2).
- The three-class (speech, music and noise) classifier itself (see Section 5.3).
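As a rough illustration of these two parts, a toy pipeline might look like the following sketch. The feature choices, thresholds and decision rule here are purely hypothetical stand-ins; the actual features and classifier are the subject of Sections 3, 5.2 and 5.3.

```python
def extract_features(frame):
    """Toy feature extraction stage: reduce a frame of samples to a
    short descriptor vector (here: mean energy and zero-crossing rate)."""
    energy = sum(s * s for s in frame) / len(frame)
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    zcr = crossings / (len(frame) - 1)
    return energy, zcr

def classify(features):
    """Toy three-class decision (speech / music / noise) with invented
    thresholds, standing in for the real classifier of Section 5.3."""
    energy, zcr = features
    if energy < 1e-4:
        return "noise"      # near-silence treated as noise in this toy rule
    return "speech" if zcr > 0.3 else "music"

# An alternating frame has maximal zero-crossing rate.
frame = [0.5, -0.5, 0.5, -0.5, 0.5, -0.5, 0.5, -0.5]
print(classify(extract_features(frame)))  # -> speech
```

In a real hearing aid both stages would run frame by frame on the band outputs of the filter bank rather than on raw samples.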
The aforementioned restrictions arise mainly from the tiny size of the hearing aid
(especially for the ITC and CIC models), which, as illustrated in Figure 2, must contain
not only the electronic equipment but also the battery. In fact, the DSP usually integrates the A/D and D/A converters, the filter bank, the RAM, ROM and EPROM memories, the input/output ports, and the core. The immediate consequence is that the hearing aid has
to work at very low clock frequencies in order to minimize the power consumption and
thus maximize the life of its battery. The power consumption must be low enough to
ensure that neither the size of the battery pack nor the frequency of battery changes
will annoy the user.
Another key point is that a considerable part of the computational capacity is already used for running the algorithms of 'functional block A', which compensate for the hearing loss. Therefore, designers are constrained to use the remaining part to implement the embedded sound classifier. Roughly speaking, the
computational power available does not exceed 3 Million Instructions Per Second
(MIPS), with only 32 Kbytes of internal memory. The time/frequency decomposition
is performed by using an integrated Weighted Overlap-Add (WOLA) filter bank, with
64 frequency bands. Note that the filter bank alone requires about 33% of the computation time of the DSP. Since functional block A requires another 33%, only about 1 MIPS remains free for the classifier. Within the framework imposed by this constraint, the success of the classification strongly depends on the sound-describing features used to characterize the signal to be classified.
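To make the budget concrete: of roughly 3 MIPS, about 33% goes to the filter bank and another 33% to functional block A, leaving roughly 3 × (1 − 0.66) ≈ 1 MIPS for the classifier. The sketch below shows, under simplifying assumptions, the analysis step of a weighted overlap-add filter bank: the windowed input block is time-aliased (folded) down to the number of bands and then transformed with a DFT. A naive DFT and toy values (4 bands instead of 64, a rectangular window) stand in for the hardware-efficient implementation of the actual chip.

```python
import cmath

def dft(x):
    """Naive N-point DFT (stands in for the optimized transform on the DSP)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def wola_analysis_frame(block, window, n_bands):
    """One analysis frame of a weighted overlap-add (WOLA) filter bank:
    window the input block, fold (time-alias) it to n_bands samples,
    then take an n_bands-point DFT to get one output sample per band."""
    assert len(block) == len(window) and len(block) % n_bands == 0
    folded = [0.0] * n_bands
    for n, (s, w) in enumerate(zip(block, window)):
        folded[n % n_bands] += s * w
    return dft(folded)

# Toy example: a DC input with a rectangular window concentrates all
# of its energy in band 0.
bands = wola_analysis_frame([1.0] * 16, [1.0] * 16, n_bands=4)
print([round(abs(b), 6) for b in bands])  # -> [16.0, 0.0, 0.0, 0.0]
```

The fold-then-DFT structure is what makes WOLA attractive at very low clock rates: the transform length equals the number of bands, not the (longer) window length.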
3 Fundamentals of Selection and Extraction of Features
3.1 Intuitive Approach
As previously stated, these processes aim at finding and extracting the kind of information in the audio signal that helps the classifier distinguish between the acoustic classes. The performance of the system depends heavily on the features used. For instance, the classifier may not perform properly if its input features do not contain the essential information. Additionally, the features not only have to describe the sound properly (allowing its subsequent classification), but also have to make efficient use of the DSP resources. This means that computing