reversals. Run-length limited (RLL) encoding takes this technique one step
further. It considers groups of several bits instead of encoding one bit at a time.
The idea is to mix clock and data flux reversals to allow for even denser packing of
encoded data and thus improve efficiency. The two parameters that define RLL
are the run length and the run limit (and hence the name). The word ''run'' here
refers to a sequence of spaces in the output data stream without flux reversals. The
run length is the minimum spacing between flux reversals, and the run limit is the
maximum spacing between them. As mentioned before, the amount of time
between reversals cannot be too large or the read head can get out of sync and
lose track of which bit is where. Finally, the so-called partial-response maximum
likelihood (PRML) sequence detection is even more advanced and is today's most
common method. Partial-response means controlled inter-symbol interference
(ISI). That is, the data represented by the received waveform are packed so closely
together that they overlap (interfere). The ''controlled'' part means that there is
some identifiable structure to the overlapping. This structure is ''taught'' to a
sophisticated sequence detector that looks only for the possible controlled
patterns of ISI in the received waveform. This provides a much more robust
detection method in the presence of noise. This type of detection is often used with
trellis coding to further improve detection. Trellis coding encodes the data so
that valid sequences which would otherwise be most easily confused with one
another cannot both occur. This provides extra capability to distinguish between
received waveforms in the presence of noise and distortion.
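The run length and run limit described above can be stated as a simple check: if a 1 marks a flux reversal and a 0 marks its absence, a recorded sequence satisfies an RLL(d, k) constraint when every run of 0s between reversals is at least d and at most k long. The sketch below is purely illustrative (the function name and the list-of-integers representation are assumptions, not any drive's actual interface):

```python
def satisfies_rll(bits, d, k):
    """Check an RLL(d, k) constraint: every two consecutive 1s
    (flux reversals) must be separated by at least d and at most
    k zeros, and no trailing run of zeros may exceed k."""
    run = None  # zeros seen since the last 1; None until the first 1
    for b in bits:
        if b == 1:
            # A reversal: the run since the previous reversal must fit.
            if run is not None and not (d <= run <= k):
                return False
            run = 0
        elif run is not None:
            run += 1
            if run > k:  # too long without a reversal: clock loses sync
                return False
    return True
```

For example, with a (2, 7) constraint, `[1, 0, 0, 1, 0, 0, 1]` is acceptable, while adjacent reversals or a run of more than seven zeros would not be.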
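The maximum-likelihood sequence detection behind PRML can be sketched with the Viterbi algorithm on a very simple partial-response channel. The example below assumes a dicode (1 - D) channel, where the ideal noiseless output is y_k = x_k - x_{k-1} with inputs x in {-1, +1}; the two-state trellis, the squared-error branch metric, and the convention that the sequence starts from state -1 are all illustrative choices, not the detector of any particular drive:

```python
def viterbi_dicode(received):
    """Maximum-likelihood sequence detection for a dicode (1 - D)
    partial-response channel. The trellis state is the previous
    input bit; for each received sample, every (state, input) branch
    is scored by squared error against its ideal output, and only
    the best-metric survivor path into each state is kept."""
    states = (-1, 1)
    metric = {-1: 0.0, 1: float('inf')}  # assume we start in state -1
    path = {-1: [], 1: []}
    for r in received:
        new_metric, new_path = {}, {}
        for x in states:                    # candidate current input
            best, best_prev = float('inf'), None
            for prev in states:             # previous input (trellis state)
                # Ideal dicode output for this branch is x - prev.
                m = metric[prev] + (r - (x - prev)) ** 2
                if m < best:
                    best, best_prev = m, prev
            new_metric[x] = best
            new_path[x] = path[best_prev] + [x]
        metric, path = new_metric, new_path
    final = min(states, key=lambda s: metric[s])
    return path[final]
```

With a noiseless input the detector recovers the transmitted sequence exactly, and because the metric accumulates over the whole sequence, it still does so when the samples are moderately perturbed, which is the robustness to noise described above.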
6.3.2. New Error-Correcting Techniques for Multilevel
Magnetic Recording
In the previous sections, we showed that multilevel recording can substantially
increase the capacity of storage systems compared with its conventional two-level
counterpart. On the other hand, multilevel recording reduces the separation
between amplitude levels (given the same power constraints), which can make the
entire system more error-prone, since more data are recorded or retrieved
incorrectly in the ''read-write'' channel. To address this problem, in this
chapter we employ powerful error-correcting codes. In general, efficient error
correction can reduce the overall error rates by a few orders of magnitude, and
this can also lead to higher recording densities in the entire storage system. Thus,
error correction becomes an efficient tool that can greatly reduce the errors caused
by multilevel recording and higher densities.
The main overhead of many error-correcting algorithms is their excessive
complexity or, equivalently, their prohibitively slow data processing. Therefore,
our main goal in this part of the chapter is to design error-correcting
techniques that combine powerful error-correcting performance with fast, feasible
processing. Namely, we intend to design algorithms that approach optimum
maximum-likelihood (ML) decoding without significantly degrading the data
transmission rate of the overall system.
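As a toy illustration of how an error-correcting code trades a small amount of redundancy for the ability to repair errors in the read-write channel, the sketch below implements the classic Hamming(7,4) code, which corrects any single flipped bit in a 7-bit codeword. It is far simpler than the codes developed in this chapter and is chosen purely for illustration:

```python
def hamming74_encode(data):
    """Encode 4 data bits into a 7-bit Hamming codeword.
    Bit positions 1..7 hold p1 p2 d1 p3 d2 d3 d4, with each
    parity bit covering the positions whose index has that
    power of two set."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Correct at most one flipped bit, then return the 4 data bits.
    The three parity checks form a syndrome that is exactly the
    1-based position of the error (0 means no error)."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]
```

Flipping any single bit of a codeword and decoding still yields the original data, which is the few-orders-of-magnitude error-rate reduction mechanism in miniature: the codes targeted in this chapter pursue the same trade-off at much greater length and power.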