First-In-First-Out, Priority Queuing, Round-Robin, and Weighted Round-Robin Queuing (Congestion Management and Queuing)

FIFO is the default queuing discipline on most interfaces, except those at E1 speed (2.048 Mbps) or lower. The hardware queue (TxQ) also processes packets in FIFO order, and each queue within a multiqueue discipline is itself a FIFO queue. FIFO is a simple algorithm that requires no configuration effort. Packets line up in a single queue; packet class, priority, and type play no role. Without multiple queues and without a scheduling and dropping algorithm, high-volume, ill-behaved applications can fill the FIFO queue and consume all the interface bandwidth. As a result, packets from other applications (for example, low-volume, less aggressive traffic such as voice) might be dropped or experience long delays. On fast interfaces that are unlikely to become congested, FIFO is often considered an appropriate queuing discipline.
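The behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not router code: a single queue with a finite depth and tail drop, where an aggressive flow that arrives first crowds out a later voice packet regardless of its importance. All names are illustrative.

```python
from collections import deque

class FifoQueue:
    """Minimal single FIFO queue with tail drop (illustrative sketch)."""
    def __init__(self, depth):
        self.depth = depth
        self.q = deque()
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.depth:
            self.dropped += 1   # queue full: tail drop, regardless of packet class
        else:
            self.q.append(pkt)

    def dequeue(self):
        return self.q.popleft() if self.q else None

# A high-volume flow fills the queue before a voice packet arrives.
fifo = FifoQueue(depth=4)
for i in range(6):
    fifo.enqueue(("bulk", i))   # ill-behaved, high-volume data flow
fifo.enqueue(("voice", 0))      # low-volume voice packet finds the queue full
print(fifo.dropped)             # -> 3 (two bulk packets and the voice packet)
```

The voice packet is dropped not because of any policy decision, but simply because it arrived after the queue was already full, which is exactly the weakness FIFO has on congested links.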

PQ, which has been available for many years, requires configuration. PQ has four queues: high-, medium-, normal-, and low-priority. You must assign packets to one of the queues; otherwise, they are assigned to the normal queue. Access lists are often used to define which types of packets are assigned to which of the four queues. As long as the high-priority queue holds packets, the PQ scheduler forwards packets only from that queue. If the high-priority queue is empty, one packet from the medium-priority queue is processed. If both the high- and medium-priority queues are empty, one packet from the normal-priority queue is processed; and if the high-, medium-, and normal-priority queues are all empty, one packet from the low-priority queue is processed. After processing (dequeuing) one packet from any queue, the scheduler always starts over by checking whether the high-priority queue has packets waiting before it checks the lower-priority queues in order. When you use PQ, you must understand, and indeed intend, that as long as packets arrive and are assigned to the high-priority queue, no other queue gets any attention. Similarly, even if the high-priority queue is not busy, a medium-priority queue that receives a lot of traffic can leave the normal- and low-priority packets without service, and so on. This is the well-known danger of PQ: it can starve the lower-priority queues. Figure 4-3 shows a PQ when all four queues are holding packets.
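The strict-priority scheduling logic described above is compact enough to sketch directly. The key point, captured in the loop below, is that after every dequeued packet the scheduler restarts its scan from the high-priority queue. Queue and packet names are illustrative.

```python
from collections import deque

QUEUES = ["high", "medium", "normal", "low"]

def pq_dequeue(queues):
    """Strict-priority dequeue: after every packet, restart at the high queue."""
    for name in QUEUES:            # always scanned highest-first
        if queues[name]:
            return queues[name].popleft()
    return None                    # all four queues empty

queues = {name: deque() for name in QUEUES}
queues["high"].extend(["H1", "H2"])
queues["medium"].append("M1")
queues["normal"].append("N1")

order = []
while (pkt := pq_dequeue(queues)) is not None:
    order.append(pkt)
print(order)   # -> ['H1', 'H2', 'M1', 'N1']
```

Note that if new packets kept arriving in the high queue between calls, `pq_dequeue` would never reach the other queues: that is the starvation behavior described in the text.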


Figure 4-3 Priority Queuing


In the situation depicted in Figure 4-3, no packets from the medium-, normal-, or low-priority queues are processed until all the packets in the high-priority queue have been processed and forwarded to the hardware queue. Using the Cisco IOS priority-list command, you define the traffic that is assigned to each of the four queues. A priority list might match traffic directly, or it might reference an access list. In this fashion, packets can be assigned to one of the four queues based on their protocol, source address, destination address, size, source port, or destination port. Priority queuing is often suggested for low-bandwidth interfaces on which you want to give absolute priority to mission-critical or valued application traffic.
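As a sketch of what such a configuration might look like, the fragment below classifies Telnet traffic (matched by an access list) into the high-priority queue and lets everything else default to the normal queue. The list numbers, interface name, and the choice of Telnet are illustrative assumptions, not a recommendation:

```
access-list 101 permit tcp any any eq telnet
priority-list 1 protocol ip high list 101
priority-list 1 default normal
!
interface Serial0/0
 priority-group 1
```

The priority-group command attaches the priority list to the interface; without it, the list has no effect.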

RR is a queuing discipline that stands in sharp contrast to priority queuing. In simple RR, you have a few queues, and you assign traffic to them. The RR scheduler processes one packet from the first queue, then one packet from the next queue, and so on; then it returns to the first queue and repeats the process. No queue has priority over the others, and if packet sizes are (roughly) the same across all queues, the interface bandwidth is effectively shared equally among the RR queues. If one queue consistently holds larger packets than the others, however, that queue ends up consuming more than its share of the bandwidth. With RR, no queue is in real danger of starvation, but the limitation of RR is that it has no mechanism for traffic prioritization.
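A simple RR pass can be sketched as follows: one packet is taken from each nonempty queue per round, with no queue favored over any other. Names and packet labels are illustrative.

```python
from collections import deque

def rr_dequeue_round(queues):
    """One round-robin pass: take at most one packet from each queue in turn."""
    out = []
    for q in queues:
        if q:                      # skip empty queues; no queue is starved
            out.append(q.popleft())
    return out

queues = [deque(["A1", "A2"]), deque(["B1"]), deque(["C1", "C2"])]
rounds = []
while any(queues):
    rounds.append(rr_dequeue_round(queues))
print(rounds)   # -> [['A1', 'B1', 'C1'], ['A2', 'C2']]
```

Because the scheduler counts packets rather than bytes, a queue whose packets are consistently larger still receives more bandwidth per round, which is exactly the bias described in the text.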

A modified version of RR, Weighted Round-Robin (WRR), allows you to assign a weight to each queue; based on that weight, each queue effectively receives a portion of the interface bandwidth, not necessarily equal to that of the other queues. Custom Queuing (CQ) is an example of WRR, in which you configure the number of bytes that must be processed from each queue before the turn passes to the next queue.

Basic WRR and CQ have a common weakness: if the byte count (weight) assigned to a queue is close to the MTU of the interface, the division of bandwidth among the queues might not turn out to be what you planned. For example, imagine that on an interface with a 1500-byte MTU, you set up three queues and decide to process 3000 bytes from each queue in each round. If a queue holds a 1450-byte packet and two 1500-byte packets, all three of those packets are forwarded in one round: after the first two packets, only 2950 bytes have been processed for the queue, so more bytes (50 bytes) may still be processed. Because it is not possible to forward only a portion of the next packet, the whole 1500-byte packet is sent. Therefore, in this round 4450 bytes are processed from this queue rather than the planned 3000. If this happens often, that queue consumes much more than one-third of the interface bandwidth. On the other hand, if the byte counts assigned to the queues are made much larger than the interface MTU, each round takes longer to complete, and queuing delay rises.
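The worked example above can be reproduced with a short sketch of CQ's per-queue serving rule: keep sending whole packets until at least the configured byte count has been sent, because packets cannot be split. The function name and numbers mirror the example in the text and are illustrative only.

```python
from collections import deque

def cq_serve_queue(queue, byte_count):
    """Serve one queue for one CQ round: send whole packets until at least
    byte_count bytes have gone out (a packet can never be partially sent)."""
    sent = 0
    while queue and sent < byte_count:
        sent += queue.popleft()    # each entry is a packet size in bytes
    return sent

# The scenario from the text: 1500-byte MTU, 3000-byte count per round.
queue = deque([1450, 1500, 1500])
print(cq_serve_queue(queue, 3000))  # -> 4450, not the planned 3000
```

After 2950 bytes the budget is not yet exhausted, so the third packet is sent in full, overshooting the 3000-byte plan by 1450 bytes, just as the text describes.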
