To address these challenges, the following link efficiency mechanisms have been introduced:
Key Topic
Payload Compression: Compresses application data being sent over the network
so the router sends less data across the slow WAN link.
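As a sketch, payload compression is enabled per interface in IOS; the interface name and the choice of the Stacker (STAC) algorithm here are illustrative:

```
! Hypothetical slow serial WAN link using Stacker payload compression
interface Serial0/0
 encapsulation ppp
 compress stac
```

Both ends of the link must be configured with the same compression algorithm, or the neighbors cannot decompress each other's traffic.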
Header Compression: Some traffic (such as VoIP) may have a small amount of ap-
plication data (RTP audio) in each packet but send many packets overall. In this case,
the amount of header information becomes a significant factor and often consumes
more bandwidth than the data itself. Header compression addresses this issue directly
by eliminating many of the redundant fields in the header of the packet. Amazingly,
RTP header compression (also called Compressed Real-time Transport Protocol,
or cRTP) reduces a 40-byte header down to 2 to 4 bytes!
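A minimal sketch of enabling cRTP on a slow serial link (the interface name is illustrative; both ends of the link should be configured):

```
! Compress the RTP/UDP/IP headers of voice packets on this link
interface Serial0/0
 encapsulation ppp
 ip rtp header-compression
```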
Link Fragmentation and Interleaving (LFI): LFI addresses the issue of serializa-
tion delay by chopping large packets into smaller pieces before they are sent. This al-
lows the router to move critical VoIP traffic in between the now-fragmented pieces of
the data traffic (which is called “interleaving” the voice). You can use LFI on PPP con-
nections (by using multilink PPP) or on Frame Relay connections (using FRF.12 or
FRF.11 Annex C).
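On a PPP connection, LFI is configured through multilink PPP. The sketch below assumes a single serial link bundled into an illustrative `Multilink1` interface; the fragment delay and addressing are example values:

```
! Multilink PPP bundle with fragmentation and interleaving
interface Multilink1
 ip address 10.1.1.1 255.255.255.252
 ppp multilink
 ppp multilink fragment delay 10   ! fragment so no piece takes more than ~10 ms to serialize
 ppp multilink interleave          ! let voice packets slip between the data fragments
!
interface Serial0/0
 encapsulation ppp
 no ip address
 ppp multilink
 ppp multilink group 1             ! assign this link to the Multilink1 bundle
```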
Tip: One major thing to understand: Link efficiency mechanisms are not a magic way to
get more bandwidth. Each of them has its own drawback: compression adds delay and
processor load, and link fragmentation increases the total amount of data sent on the
line (because each fragment now needs its own header information). Cisco does not
recommend using these methods on links faster than T1 speed.
Queuing Algorithms
Queuing defines the rules the router should apply when congestion occurs. The majority of
network interfaces use basic First-in, First-out (FIFO) queuing by default. In this method,
whatever packet arrives first is sent first. Although this seems fair, not all network traffic is
created equal. The primary goal of queuing is to ensure that the network traffic servicing
your critical or time-sensitive business applications gets sent before non-essential network
traffic. Beyond FIFO queuing, there are three primary queuing algorithms in use today:
Key Topic
Weighted Fair Queuing (WFQ): WFQ tries to balance available bandwidth among
all senders evenly (thus the “fair” queuing). By using this method, a high-bandwidth
sender gets less priority than a low-bandwidth sender. On Cisco routers, WFQ is often
the default method applied to serial interfaces.
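Where WFQ is not already the default, it can be enabled explicitly with a single interface command (interface name illustrative):

```
! Enable Weighted Fair Queuing on a slow serial link
interface Serial0/0
 fair-queue
```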
Class-Based Weighted Fair Queuing (CBWFQ): This queuing method allows you
to specify guaranteed amounts of bandwidth for your various classes of traffic. For
example, you could specify that web traffic gets 20 percent of the bandwidth,
whereas Citrix traffic gets 50 percent of the bandwidth (you can specify values as a
percent or a specific bandwidth amount). WFQ is then used for all the unspecified
traffic (the remaining 30 percent, in the previous example).
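The 20/50 percent example above can be sketched with the MQC class-map/policy-map syntax; the class and policy names are illustrative, and the `match protocol` lines assume NBAR classification:

```
! Classify web and Citrix traffic (names and match criteria are examples)
class-map match-all WEB
 match protocol http
class-map match-all CITRIX
 match protocol citrix
!
policy-map WAN-EDGE
 class WEB
  bandwidth percent 20      ! guarantee web traffic 20 percent during congestion
 class CITRIX
  bandwidth percent 50      ! guarantee Citrix traffic 50 percent
 class class-default
  fair-queue                ! WFQ for everything unspecified
!
interface Serial0/0
 service-policy output WAN-EDGE
```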
Low Latency Queuing (LLQ): LLQ is often referred to as PQ-CBWFQ because it is
exactly the same thing as CBWFQ, but adds a priority queuing (PQ) component.
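A minimal LLQ sketch: the configuration is identical to CBWFQ except that the priority class uses the `priority` command instead of `bandwidth`. The class name, match criterion, and 128-kbps figure are illustrative:

```
! Give voice a strict-priority queue, policed to 128 kbps during congestion
class-map match-all VOICE
 match protocol rtp audio
!
policy-map VOICE-WAN
 class VOICE
  priority 128
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output VOICE-WAN
```

The policing behavior of `priority` is what keeps the strict-priority queue from starving the other classes when the link is congested.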