In PSTN, communication from the originating CO or DLC to the destination DLC is synchronous digital communication. This process has been well established over the last several decades. PSTN transmission follows common standards such as ITU and IEEE recommendations and country-specific standards like TR-57 and the TIA standards [TR-NWT-000057 (1993), TIA/EIA-470C (2003), IEEE-STD-743 (1995)]. PSTN manufacturers in North America use TR-57 to specify the signaling and transmission requirements for DLC. The FXS interface on VoIP adapters works similarly to a DLC or analog CO in providing the TIP-RING interface. Assuming other conditions are favorable for VoIP voice quality, as explained in topic 20, ensuring TR-57 transmission characteristics helps in improving voice quality. TR-57 is used in North America, and the current standard is GR-57 [GR-57-CORE (2001)]. Several other standards are followed in different countries. For example, Japan makes use of the NTT PSTN specification [URL (NTT-E)], and France uses the France Telecom STI series [FT ITS-1 (2007)] of specifications. Most of these country specifications are closely related to each other, with only a few deviations in end-to-end losses, transmission filter characteristics, impedance, and so on. TR-57 transmission specifications can also be followed in achieving voice quality in VoIP systems. In VoIP, some signaling events, such as the call progress tones used in establishing end-to-end calls, may go through longer delays than the PSTN specifications allow. These deviations tend to develop during the call establishment phase. Meeting the transmission characteristics [TIA/EIA-912 (2002)] takes care of voice quality for established VoIP calls.

TR-57 Transmission Tests

TR-57 or GR-57 transmission summary points and their high-level interpretation are given in this section. [Courtesy: Text and parameter values with TR-57 references are from "TR-NWT-000057 (1993), Functional criteria for digital loop carrier systems—topic 6"; used with permission from Telcordia, NJ.] It is suggested to refer to TR-57 [TR-NWT-000057 (1993)] or the recent document GR-57 [GR-57-CORE (2001)] for the complete requirements and detailed specifications referred to in this topic. Integrating these test specifications into PSTN and VoIP helps in maintaining the quality of voice and fax calls. Refer to instrument manuals [URL (Sage-options)] for details on measurements and interpretation.
Return Loss. Return loss occurs because of an impedance mismatch between the terminations and the source. It is expressed as the ratio of the outgoing signal to the reflected signal. It is given as echo return loss (ERL) and singing return loss (SRL), with SRL at both high (Hi) and low (Lo) frequencies. ERL is the frequency-weighted average of the return losses over the 3-dB bandwidth points of 560 to 1965 Hz. Higher ERL is advantageous and gives lower echo. ERL has to be greater than 18 dB, and this level avoids echo issues in intraregional calls. In numerical terms, 18-dB ERL means 1/64th of the power or 1/8th of the voltage returning as echo. It is possible to achieve ERL up to 24 dB in practical short-loop applications with VoIP gateways. On a relative scale, an ERL of 24 dB is better than an ERL of 18 dB by 6 dB. SRL is created by a resonance effect at selected frequencies. At some frequencies, the round trip of the echo accumulates multiples of 360° of phase shift, which makes oscillations sustain for a longer duration; the line is then said to be singing. SRL-Lo is the singing return loss for the 3-dB bandwidth low frequencies in the range of 260 to 500 Hz. SRL-Hi is the high-frequency singing return loss measured in the band of 2200 to 3400 Hz. Details on the band-pass characteristics are given in IEEE STD-743 [(1995), URL (Midcom)]. The SRL has to be greater than 10 dB. In general, SRL-Hi will be higher than SRL-Lo. Lower SRL-Hi and SRL-Lo values create annoyance for both the talker and the listener of the voice call.
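The dB-to-ratio arithmetic used above can be checked with a short sketch (the helper name is an illustration, not part of TR-57):

```python
import math

def return_loss_ratios(erl_db):
    """Convert a return loss in dB to the fraction of power and
    voltage that comes back as echo."""
    power_ratio = 10 ** (-erl_db / 10)    # reflected/outgoing power
    voltage_ratio = 10 ** (-erl_db / 20)  # reflected/outgoing voltage
    return power_ratio, voltage_ratio

p, v = return_loss_ratios(18)
# 18-dB ERL: roughly 1/64 of the power and 1/8 of the voltage return
print(round(1 / p), round(1 / v))  # 63 8
```

Each additional 6 dB of ERL halves the returned voltage, which is why 24 dB is noticeably better than 18 dB.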
Longitudinal Balance. Balance is the match between the TIP and RING lines. It provides the discrimination against common mode signals on the TIP-RING interface, or indicates how well TIP and RING are matched. It is evaluated as the ratio of the longitudinal signal to the metallic signal, where longitudinal is the common mode signal and metallic is the differential signal. In the measurements, a common or ground point is required in addition to the TIP-RING lines, as given in reference [URL (Sage-Balance)]. The preferred balance is greater than 65 dB at 1 kHz, but the acceptable balance is 63 dB. In good implementations, a balance of 68 dB is achieved.
Total Loss. This test verifies that the transmitter and receiver have the required loss. The end-to-end losses have to be within acceptable limits, including the network cable losses. PSTN systems have a maximum end-to-end loss of 8 dB in the off-hook state, and these losses vary with country-specific requirements. These losses also influence the loudness rating (given in topics 17 and 20), which has a close relationship with voice quality.
Off-Hook Frequency Response. Frequency response has to be flat to within set tolerances over most of the voice band. This test also ensures that frequency response roll-off does not happen in the 400- to 2800-Hz band. Frequency response is checked at 0 dBm, and the flatness goal is ±0.5 dB. The reference for comparison of flatness is 1004 Hz. With reference to the 1004-Hz frequency response, deviations from 400 to 2800 Hz have to be within ±0.5 dB as an objective and within -0.5 to +1.0 dB as a requirement. In actual practice, the frequency response is taken from 100 Hz to 3900 Hz in suitable frequency steps. The values between 400 and 2800 Hz are used for comparing the frequency response.
60-Hz Signal Loss. A 60-Hz signal is not desirable in speech transmission, but this "mains hum" can unfortunately be introduced from the AC mains power supply (110 V, 60 Hz). This test is to ensure that the 60-Hz contribution is attenuated at least 20 dB more than the response at 1004 Hz. This 60-Hz contribution has to be treated before sampling. If the end-to-end loss at 1004 Hz is "X" dB, the loss at 60 Hz has to be greater than X + 20 dB. In some countries, especially in Europe, 50 Hz is used for power transmission; the measurements therefore have to be conducted at 50 Hz in those countries even though the standard does not state this explicitly.
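The X + 20 dB rule can be expressed as a small check (the function name and sample values are illustrative, not from the standard):

```python
def hum_loss_ok(loss_1004_db, loss_60_db, margin_db=20.0):
    """TR-57 60-Hz rule: the loss at 60 Hz must exceed the
    1004-Hz loss by at least 20 dB."""
    return loss_60_db > loss_1004_db + margin_db

# With 6 dB of end-to-end loss at 1004 Hz, the 60-Hz loss must exceed 26 dB
print(hum_loss_ok(6.0, 30.0))  # True
print(hum_loss_ok(6.0, 24.0))  # False
```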
Off-Hook Amplitude Tracking. This test ensures transmission gain flatness for different amplitudes at 1004 Hz, which is similar to amplitude linearity at the selected frequency. Three different ranges are defined in TR-57. At lower signal levels, A-law and μ-law create more quantization noise, which causes gain deviation to increase below the -37-dBm input level. For inputs greater than or equal to -37 dBm, the maximum deviation is ±0.5 dB (average of ±0.25 dB). For inputs of -37 to -50 dBm, the maximum deviation is ±1.0 dB. For inputs of -50 to -55 dBm, the maximum deviation is ±3.0 dB, which implies that AGC cannot improve linearity for small signals. Most instruments can measure down to a minimum of -60 dBm. Test instruments use a C-message band-pass filter [URL (Sage935)] for the voice band. In some countries, psophometric (P)-message filters are used. The differences are outlined in the ideal channel noise section.
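The three tracking ranges can be captured in a small lookup (a hypothetical helper, with the limits taken from the summary above):

```python
def tracking_limit_db(level_dbm):
    """Maximum allowed gain deviation (dB) at a 1004-Hz input level,
    per the TR-57 amplitude-tracking summary."""
    if level_dbm >= -37:
        return 0.5
    if level_dbm >= -50:
        return 1.0
    if level_dbm >= -55:
        return 3.0
    return None  # below the specified range

print(tracking_limit_db(-10), tracking_limit_db(-45), tracking_limit_db(-52))
# 0.5 1.0 3.0
```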
Overload Compression Tests. The A-law can accept 3.14 dBm and the μ-law can accept 3.17 dBm of a sine wave without distortion. With increased signal power, the sine wave is clipped at the top and bottom, making it look like a trapezoidal waveform. Overload compression measures distortions at overloaded power levels up to 9 dBm. These distortions are measured as loss in simple measurements. There are set limits on the maximum loss at power levels of 0, +3, +6, and +9 dBm. If the system loss is "X" dB at a 0-dBm input, the loss at +3 dBm has to be <X + 0.5 dB. The loss at +6 dBm has to be <X + 1.8 dB. The loss at the highest signal of +9 dBm has to be <X + 4.5 dB. These relations hold for the no-loss case of X = 0 dB. When loss is significant, the measurements may show much better results. With loss at the analog interface, even a higher-power signal may pass without any distortion.
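The overload limits can be checked programmatically (a sketch with assumed names; the per-level allowances are the ones quoted above):

```python
# Extra loss allowed (dB) at each overload level, relative to the
# loss "X" measured at the 0-dBm reference
OVERLOAD_LIMITS_DB = {3: 0.5, 6: 1.8, 9: 4.5}

def overload_ok(loss_at_0dbm, measured_losses):
    """measured_losses: input level in dBm -> measured loss in dB.
    True when every overload level stays within its allowance."""
    return all(measured_losses[lvl] < loss_at_0dbm + extra
               for lvl, extra in OVERLOAD_LIMITS_DB.items())

print(overload_ok(0.0, {3: 0.3, 6: 1.2, 9: 3.9}))  # True
```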
Off-Hook Ideal Channel Noise. In its simplest form, ideal channel noise is the noise heard when the microphones are muted. The noise can be heard in a silent room. Instruments in North America use a C-message filter for the voice band and measure the noise power in dBrnc. Power in dBrnc (also referred to as dBrnC) is C-message-weighted noise power with a reference unit of 1 pW. The popular notation is dBrnc for noise power; for voice power, dBm units are used. In a simple mapping, 90 dBrn = 0 dBm (1 mW), and with C-message weighting, the mapping is 88 dBrnc = 0 dBm. Psophometric weighting is used in Europe, with 87.5 dBp = 0 dBm. The symbol "dBp" is used for dBrn with psophometric weighting. The variation is 2 to 2.5 dB between the weighting windows. The ideal channel noise power has to be better than 18 dBrnc, which is equivalent to 18 - 88 = -70 dBm. An accepted noise power level is 20 dBrnc = -68 dBm. Long-distance analog telephony accepted higher ideal channel noise [Bellamy (1991)]. After the conversion to digital telephony with short analog loops and DLCs, ideal channel noise achieved the better levels stated in this section.
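The dBrnc-to-dBm arithmetic is a fixed 88-dB offset, as the mappings above show (helper names are illustrative):

```python
def dbrnc_to_dbm(dbrnc):
    """C-message-weighted noise power: 88 dBrnc corresponds to 0 dBm."""
    return dbrnc - 88.0

def dbm_to_dbrnc(dbm):
    return dbm + 88.0

print(dbrnc_to_dbm(18))  # -70.0 (TR-57 ideal channel noise goal)
print(dbrnc_to_dbm(20))  # -68.0 (accepted level)
```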
SNR or Distortion Ratio. The SNR and distortion measure identifies the SNR limits at given signal levels. Lower signal power levels and their SNR results are also important in transmission. A-law or μ-law compression is used in PSTN and VoIP. Both the highest and the lowest signals are coded with a 4-bit mantissa, 3 bits of level scaling, and 1 bit for sign (polarity of the signal), which limits the possible SNR to 38 to 41 dB with self-quantization effects. With low amplitudes, SNR starts degrading. It is measured with an input sine wave of 1004 to 1020 Hz, with signal power varied from 0 to -45 dBm. For inputs of 0 to -30 dBm, the required SNR is more than 33 dB (usually >37 dB is achieved in most systems). For inputs of -30 to -40 dBm, SNR is >27 dB. For -40 to -45 dBm, SNR is >22 dB. When end-to-end losses are higher, or the ideal channel noise does not meet the specifications, SNR will be lower for the same input signal levels. The signal-to-total-distortion (STD) ratio takes into account total distortion, including second and third harmonics.
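To illustrate why companded SNR stays roughly flat at high levels, the following sketch quantizes a sine through a smooth μ-law curve. This is a simplified companding model, not the exact G.711 segmented coder, so the numbers are indicative only:

```python
import math

MU = 255.0

def mulaw_roundtrip(x):
    """Compress with the mu-law curve, quantize to 8 bits
    (sign + 7 magnitude bits), and expand back."""
    y = math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
    q = round(y * 127) / 127
    return math.copysign(math.expm1(abs(q) * math.log1p(MU)) / MU, q)

def sine_snr_db(amplitude, n=8000, freq=1004.0, fs=8000.0):
    """SNR of a quantized sine: signal power over quantization-error power."""
    sig = [amplitude * math.sin(2 * math.pi * freq * i / fs) for i in range(n)]
    err = [s - mulaw_roundtrip(s) for s in sig]
    ps = sum(s * s for s in sig)
    pe = sum(e * e for e in err)
    return 10 * math.log10(ps / pe)

# SNR near full scale versus a much smaller input (about -30 dB relative)
print(round(sine_snr_db(1.0), 1), round(sine_snr_db(0.03), 1))
```

The full-scale result lands near the 38-dB region the text describes, and the small-signal result stays above the 27-dB limit for that range.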
Impulse Noise. Impulse noise is short-duration noise power that exceeds set thresholds, whereas ideal channel noise is steady low-level noise. Impulse noise can create audible tick sounds, and fax/modems may fall back to a lower bit rate. Impulse noise durations are comparable with a 10-ms duration. Impulse noise, or signal loss from phase or amplitude distortion, can be noticed during VoIP packet loss, coupling from adjacent channels, ringing, line reversals, clock drifts, and changes in power supply load conditions in the hardware. Noise on the interfaces may occupy a wider bandwidth. Impulse noise is measured with a C-message filter, which means it is limited to the voice band.
The number of impulse counts in 15 minutes should not exceed 15; therefore, one impulse count per minute is the maximum. The measurements are conducted on both ends of the system, with tones and with a simple ideal channel configuration. In the no-tone case, impulses are detected with a threshold of 47 dBrnc (equivalent to -41 dBm). In the tone case, a 1004-Hz tone at -13 dBm is sent, and impulse noise is measured with a threshold of 65 dBrnc (equal to -23 dBm). Several conditions for impulse noise exist, as given in TR-57 [TR-NWT-000057 (1993)]. The impulse noise counts are also referred to as hits.
Intermodulation Distortion. This test makes use of four tones as per IEEE STD-743 (1995) with a total input level of -13 dBm. This test is mainly used for measuring undesired nonlinearity. A pair of tones around 860 Hz and another pair around 1380 Hz are used. In this test, total second and third harmonic distortions are measured. The second harmonic has to be better than 43 dB, and the third harmonic has to be better than 44 dB.
Single-Frequency Distortion. This test measures the distortions to a single pure tone. The distortion is measured in two frequency bands: 0-4 kHz and 0-12 kHz. In the narrowband case, a 1004- to 1020-Hz tone at 0 dBm is sent, and the observed tones outside the sent tone have to be lower than -40 dBm in the 0-4-kHz band. In the wideband case, for any input at 0 dBm in the 0-12-kHz range, the observed output at any frequency other than the sent tone has to be less than -28 dBm in the 0-12-kHz band. This type of test requires a spectrum analyzer.
System-Generated Tones. This test measures any undesired system-generated tones. The send and receive ends can be either terminated with the proper impedance or left unterminated. Tones measured at either end have to be less than -50 dBm in the frequency range of 0-16 kHz.
PAR. PAR captures amplitude and phase distortion over time because of transmission impairments [URL (Sage-PAR)]. A PAR waveform is a complex signal consisting of about 16 non-harmonically related tones with a spectrum that approximates modem signals in the voice frequency band. In general, PAR takes care of attenuation distortion, envelope distortion, four-tone modulation distortion, and phase distortion. PAR is measured on both ends. Most systems achieve a PAR of 94, and an acceptable PAR is 90. The PAR measurement cannot reveal what caused the distortion. In many types of equipment, the PAR test is replaced with the better 23-tone test as per IEEE STD-743 (1995).
Clarification Note on PAR Measurement and PAR Speech Levels. In specifying speech characteristics, peak-to-average levels are specified as 15.8 to 18 dB. Usual speech levels are around -16 dBm, and peak levels can go up to +2 dBm with an 18-dB speech-level PAR. The PAR specified in TR-57 and the measurements using instruments from [URL (Sage-PAR)] provide the distortion measure for voice-band signals. A PAR of 100 is the best, and a PAR of 90 to 94 is the goal in digital loop carrier [TR-NWT-000057 (1993)]. A PAR of 94 corresponds to about 6% overall transmission distortion.
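The speech-level peak-to-average ratio mentioned here (distinct from the TR-57 PAR distortion rating) is simple to compute; a sketch with an illustrative helper:

```python
import math

def peak_to_average_db(samples):
    """Peak power over average power, in dB."""
    peak = max(abs(s) for s in samples)
    avg_power = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(peak * peak / avg_power)

# A pure sine has a peak-to-average ratio of about 3 dB;
# speech-like signals reach the 15.8- to 18-dB range quoted above
sine = [math.sin(2 * math.pi * 200 * i / 8000) for i in range(8000)]
print(round(peak_to_average_db(sine)))  # 3
```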
Channel Crosstalk. This test ensures that no significant end-to-end coupling occurs into adjacent channels. Tones at 0-dBm power from 200 to 3400 Hz are sent, and power is measured in the adjacent channels. This crosstalk power has to be 65 dB lower, or below -65 dBm. With this low a coupling, crosstalk will be submerged in the ideal channel noise.
Frequency Offset. This test measures the frequency drift from end to end, which happens mainly because of a parts-per-million (PPM) difference in sampling clocks between the source and destination. The frequency drift has to be contained within ±0.4 Hz at 1004 Hz. The instruments have to support a 0.1-Hz measurement resolution.
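A minimal way to estimate such an offset from a captured tone is to count zero crossings over a long window; a 10-s capture keeps the estimate well inside 0.1-Hz resolution for a clean tone. This is an illustrative sketch, not a standardized measurement method:

```python
import math

def estimate_freq(samples, fs):
    """Estimate a single tone's frequency by counting positive-going
    zero crossings across the whole capture."""
    crossings = [i for i in range(1, len(samples))
                 if samples[i - 1] < 0 <= samples[i]]
    cycles = len(crossings) - 1
    duration = (crossings[-1] - crossings[0]) / fs
    return cycles / duration

fs = 8000.0
# Simulated received tone: 1004 Hz sent, 0.3 Hz of drift on receive
tone = [math.sin(2 * math.pi * 1004.3 * i / fs) for i in range(int(10 * fs))]
offset = estimate_freq(tone, fs) - 1004.0
print(abs(offset) <= 0.4)  # True: within the TR-57 limit
```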

IEEE STD-743-Based Tests

IEEE STD-743 (1995) had many tests that overlapped with TR-57 measurements. Instrument manufacturers were following the IEEE specifications. Several tests of TR-57 are contained in IEEE STD-743. The 23-tone test has been adopted in recent test equipment. It takes care of transmission and distortion tests. The 23-tone test signal consists of 23 equally spaced tones [URL (Sage935)] from 203.125 to 3640.625 Hz with known phase relationships. The tones are placed on the 8000/512 = 15.625-Hz grid to allow easy fast Fourier transform (FFT) analysis with 512 and 256 points at 8000-Hz sampling. The signal repeats once every 64 ms (1/15.625 Hz = 64 ms). Frequency response is measured at the 23 tones, and phase relations are observed at 22 frequencies.
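Assuming the 23 tones are equally spaced between the stated endpoints, they fall 156.25 Hz apart on the 15.625-Hz FFT-bin grid (bins 13 through 233 in steps of 10 would match; the bin indices here are inferred from the endpoints, not quoted from the standard). A quick sketch confirms the arithmetic:

```python
FS = 8000.0
N = 512
BIN_HZ = FS / N  # 15.625-Hz FFT bin spacing

# 23 tones from bin 13 (203.125 Hz) to bin 233 (3640.625 Hz)
tones = [BIN_HZ * k for k in range(13, 234, 10)]

print(len(tones), tones[0], tones[-1])  # 23 203.125 3640.625
# Every tone sits on an exact 512-point FFT bin, and the composite
# signal repeats every N/FS seconds
print(N / FS * 1000)  # 64.0 (ms)
```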

Summary on Association of TR-57, IEEE, and TIA Standards

Many equipment manufacturers built instruments as per the 1984 version of IEEE STD-743, which outlines the accuracy of various measurements and the measurement procedures. It has many measurements that overlap with TR-57, so IEEE STD-743-based instruments can also perform TR-57 tests. The IEEE STD-743 specifications were revised in 1995, and this revision had many tests suitable for TR-57. The standard became obsolete in 2004; equipment manufacturers, however, still follow IEEE STD-743. Now the TIA-470 series is followed in many types of new equipment. TIA-470 has had several revisions and extensions for acoustic measurements, such as for cordless phones and wireless systems, that were not available in the TR-57- and IEEE STD-743-based tests. TR-57 is limited to a narrowband of 0-4 kHz. IEEE STD-743 (1995) and TIA/EIA-470C (2003) take care of wideband voice tests. In general, all these standards have several matching and overlapping specifications that determine proper voice quality on PSTN-based systems and their extensions across many different types of telephone and fax equipment.
