make some of the issues related to hard drive reliability so critical—namely, timing. Engineers and
designers are constantly pushing the envelope to stuff more and more bits of information into the
limited quantity of magnetic flux reversals per inch. What they've come up with, essentially, is a
design in which the bits of information are decoded not only from the presence or absence of flux
reversals, but from the timing between them. The more accurately they can time the reversals, the
more information that can be encoded (and subsequently decoded) from that timing information.
In any form of binary signaling, the use of timing is significant. When a read or write waveform is
interpreted, the timing of each voltage transition event is critical. Timing is what defines a particular
bit or transition cell—that is, the time window within which the drive is either writing or reading a
transition. If the timing is off, a given voltage transition might be recognized at the wrong time as
being in a different cell, which would throw the conversion or encoding off, resulting in bits being
missed, added, or misinterpreted. To ensure that the timing is precise, the transmitting and receiving
devices must be in perfect synchronization. For example, if recording a 0 is done by placing no
transition on the disk for a given time period or cell, imagine recording ten 0 bits in a row—you
would have a long period of time (ten cells) with no activity, no transitions at all.
Imagine now that the clock in the decoder ran at a slightly different rate while reading the data than the clock used when the data was originally written. If the reading clock were fast, its cell windows would be shorter, and the decoder might count 11 cells during this long transition-free stretch instead of 10; if it were slow, it might count only nine. In either case, the result would be a read error: the bits read back would not match the bits originally written. To prevent timing errors in drive encoding/decoding, perfect synchronization is necessary between the reading and writing processes.
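The effect of this drift can be sketched in a few lines of code. This is an illustration only, not drive firmware: the function, its names, and the drift amounts are all hypothetical, chosen to mirror the ten-zeros example above.

```python
# Hypothetical sketch: how many bit cells a reader believes elapsed
# during a transition-free stretch, when its clock drifts relative
# to the clock that wrote the data.

def cells_seen(stretch_cells, writer_period, reader_period):
    """Count the cells a reader perceives in a silent stretch that
    was written as `stretch_cells` cells of `writer_period` each."""
    duration = stretch_cells * writer_period  # real elapsed time
    return round(duration / reader_period)    # cells by the reader's clock

WRITER_T = 1.0  # nominal cell time, arbitrary units

print(cells_seen(10, WRITER_T, 1.0))  # clocks in sync: 10 cells, correct
print(cells_seen(10, WRITER_T, 0.9))  # reader clock 10% fast: counts 11
print(cells_seen(10, WRITER_T, 1.1))  # reader clock 10% slow: counts 9
```

Even a 10% drift turns ten written zeros into nine or eleven read zeros, which is exactly the read error described above.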
This synchronization often is accomplished by adding a separate timing signal, called a clock signal,
to the transmission between the two devices. The clock and data signals also can be combined and
transmitted as a single signal. Most magnetic data-encoding schemes use this type of combination of
clock and data signals.
Adding a clock signal to the data ensures that the communicating devices can accurately interpret the
individual bit cells. Each bit cell is bounded by two other cells containing the clock transitions.
Because clock information is sent along with the data, the clocks remain in sync, even if the medium
contains a long string of identical 0 bits. Unfortunately, the transition cells used solely for timing take
up space on the medium that could otherwise be used for data.
Because the number of flux transitions a drive can record in a given space on a particular medium is
limited by the physical nature or density of the medium and the head technology, drive engineers have
developed various ways of encoding the data by using a minimum number of flux reversals (taking
into consideration the fact that some flux reversals used solely for clocking are required). Signal
encoding enables the system to make the maximum use of a given drive hardware technology.
Although various encoding schemes have been tried over the years, three basic types have seen the widest use:
• Frequency Modulation
• Modified Frequency Modulation
• Run Length Limited
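Before examining each scheme in detail, the trade-off between clock transitions and data capacity can be sketched for the two simplest schemes named above. This is a simplified illustration, not the exact cell layout the chapter's figures use: it assumes the textbook rules that FM writes a clock transition in every bit cell, whereas MFM writes a clock transition only between two consecutive 0 bits. Each cell is modeled as a (clock slot, data slot) pair, with "T" marking a flux transition and "N" marking none.

```python
# Simplified sketch (assumed textbook rules, not a drive's exact format):
# compare how many flux transitions FM and MFM spend on the same bits.

def fm_encode(bits):
    # FM: every cell carries a clock transition; the data slot
    # holds a transition only for a 1 bit.
    return [("T", "T" if b else "N") for b in bits]

def mfm_encode(bits):
    # MFM: a clock transition is written only between two 0 bits,
    # so long zero runs still carry timing information.
    out, prev = [], 1  # assume the preceding bit was a 1
    for b in bits:
        clock = "T" if (prev == 0 and b == 0) else "N"
        out.append((clock, "T" if b else "N"))
        prev = b
    return out

data = [1, 0, 1, 1, 0, 0]
print(fm_encode(data))   # 6 clock + 3 data transitions = 9 total
print(mfm_encode(data))  # 1 clock + 3 data transitions = 4 total
```

For the same six bits, FM spends nine flux transitions where MFM spends four, yet MFM still guarantees a transition within any run of zeros. Using fewer transitions per bit is what lets the denser schemes record more data in the same number of flux reversals per inch.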
The following sections examine these codes: how they work, where they are used, and the advantages or disadvantages of each. It will help to refer to Figure 8.10 (later in this chapter) as you read the descriptions of these encoding schemes, because that figure depicts how each of them would encode the same data.