Queuing and Scheduling (QOS-Enabled Networks) Part 3

Priority-Based Deficit Weighted Round Robin

Introducing the deficit counter makes the WRR algorithm aware of bandwidth and improves fairness. But for certain traffic types, fairness is not the desired behavior. What is needed is priority scheduling similar to PQ, but one that preserves the benefits of DWRR. To achieve predictable service for sensitive, real-time traffic, a priority level for scheduling must be introduced. By enabling strict priority, or by offering several priority levels and using DWRR scheduling between queues with the same priority level, service assurance with regard to delay and loss protection can be achieved for demanding traffic types, such as voice and real-time broadcasting. To illustrate this, consider three queues, one of which has a priority of strict-high, as shown in Figure 7.21.


Figure 7.21 PB-DWRR scheduling, with one strict-high priority queue



Figure 7.22 PB-DWRR scheduling, policed priority-high queue

When service cycles are scheduled, queue two (Q2), which has strict-high priority, always has preference over the two low-priority queues. In each scheduling turn, Q2 is drained first. If the strict-high queue is filled faster than it is cleared, the result is that the packets in queue zero (Q0) and queue one (Q1) become stale and can age out, because these queues are not being serviced within their defined weights. This aging out of packets is a side effect of the fact that a strict-high queue never goes into negative credits. However, a strict-high priority queue system can work if the rate of packets entering Q2 is controlled, limited, for example, to 20% of the total bandwidth; the credit state is then reflected accordingly. One way to control the rate into a queue is to add a policer that drops packets exceeding a defined rate limit. This scheme prevents the strict-priority queue from entering a runaway state in which it monopolizes all the scheduling cycles. Another way to control the rate into the queue is to control the delay for the queue. Figure 7.22 illustrates rate control: assigning a weight of 25% of the scheduling bandwidth to Q2 prevents more than one packet from being in Q2 at a time, so Q2 drops one of the 200-byte packets at the tail of the queue.
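The policing idea above can be sketched as a token bucket that admits packets into the strict-high queue only while tokens remain. This is a minimal illustration, not a vendor implementation; the class name, parameters, and the 1 Gb/s link speed are assumptions chosen to match the 20% figure in the text.

```python
# Hypothetical token-bucket policer guarding a strict-high queue.
# Real policers run in hardware and differ per vendor.

class Policer:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0       # refill rate in bytes per second
        self.burst = burst_bytes         # bucket depth in bytes
        self.tokens = burst_bytes        # bucket starts full
        self.last = 0.0                  # timestamp of the last check

    def allow(self, pkt_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True      # packet admitted to the strict-high queue
        return False         # packet dropped: the queue cannot run away

# Limit the strict-high queue to 20% of an assumed 1 Gb/s link.
p = Policer(rate_bps=0.2 * 1_000_000_000, burst_bytes=10_000)
```

With a 10,000-byte burst, a back-to-back blast of 200-byte packets is cut off after 50 packets until the bucket refills, which is exactly the behavior that keeps Q0 and Q1 from starving.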


Figure 7.23 PB-DWRR scheduling with two priority-high queues

Instead of using a strict-high priority queue, an alternative way to rate-limit the queue is to use an alternating priority scheme. This method allows the priority-high queue to be scheduled between every other queue. For example:

Q2, Q0, Q2, Q1, Q2, Q0, Q2, Q1, …

An alternating priority mode can, however, cause some unpredictable delay, because the other queues could introduce delay depending on the traffic that they schedule. Also, with alternating priority, the rate of the queue is not controlled. Another way to handle fairness is to have queues behave according to their weight, ignoring priority once they exceed their weight. This solution prevents the previously illustrated scenario of runaway higher-priority queues causing packets in lower-priority queues to become stale. Consider the PB-DWRR configuration and queue status in Figure 7.23.
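The alternating service order can be generated mechanically: the high-priority queue is inserted before each low-priority queue in turn. The helper below is a hypothetical illustration of that visiting order, not an actual scheduler API.

```python
# Sketch of the alternating-priority visiting order: the priority-high
# queue (Q2) is serviced between every other queue.

def alternating_order(high_q, low_qs):
    order = []
    for lq in low_qs:
        order.extend([high_q, lq])   # high queue visited before each low queue
    return order

print(alternating_order("Q2", ["Q0", "Q1"]))  # → ['Q2', 'Q0', 'Q2', 'Q1']
```

Note that this bounds how long Q2 waits between visits, but, as the text points out, it does not bound Q2's rate: each visit can still drain a large packet.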

This scenario shows four queues with two different priority levels. The different priority levels mean that there are multiple levels of scheduling. Queue two (Q2) and queue three (Q3) have a high priority level. If there are packets in these two queues and they have sufficient credit, they are serviced before the two low-priority queues.

Let us evaluate the first scheduling turn. Because DWRR arbitrates between queues with the same priority level, Q2 is scheduled to receive service in the first cycle. The quantum for Q2 is 200, so its credit state is 200 and it can clear one 200-byte packet from the queue. Q2's deficit counter is now 0, and as explained earlier, the deficit counter must be greater than zero for a queue to receive service in the same scheduling cycle. The next queue visited is the other priority-high queue, Q3. This queue schedules one packet but then enters a negative credit state because of its quantum: Q3's credit state is 100, and removing 1000 bytes from the queue brings the deficit counter to -900 (see Figure 7.24).
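The bookkeeping for one DWRR service turn can be written in a few lines. This is a minimal sketch of the deficit-counter arithmetic described above, reproducing the turn-1 numbers from Figure 7.24; it is illustrative only, not a vendor implementation.

```python
# One DWRR service turn for a single queue. Queues are lists of
# packet sizes in bytes; the deficit counter is in bytes.

def serve_turn(queue, deficit, quantum):
    """Add the quantum, then drain packets while credit remains positive."""
    deficit += quantum
    while queue and deficit > 0:
        deficit -= queue.pop(0)   # dequeue the head packet
    return deficit

# Turn 1: Q2 (quantum 200) and Q3 (quantum 100), as in Figure 7.24.
q2 = [200, 200]
q3 = [1000]
d2 = serve_turn(q2, 0, 200)   # drains one 200-byte packet, deficit becomes 0
d3 = serve_turn(q3, 0, 100)   # drains the 1000-byte packet, deficit becomes -900
print(d2, d3)                 # → 0 -900
```

Note the loop condition: a queue in service may send one packet that exceeds its remaining credit, which is exactly how Q3 ends up at -900.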


Figure 7.24 PB-DWRR scheduling, turn 1 Q2 and Q3


Figure 7.25 PB-DWRR scheduling, turn 2 Q2 and Q3

At the next scheduling turn, Q2 is visited again because its priority level is high and because it contains packets. The quantum is 200 bytes, which is also the number of bytes waiting to be cleared from the queue, as illustrated in Figure 7.25.

Now something interesting happens: Q3 is in a negative credit state, which means it receives no scheduling cycles because its deficit counter is not greater than zero. As a result, turn 2 now jumps to the low-priority queues. Because Q1 has a higher weight than Q0, the scheduling cycle services Q1 first, removing one 1000-byte packet from the queue and leaving its deficit counter at 0. The next queue to be scheduled in turn 2 is Q0, which removes 800 bytes before running into a negative credit state, as illustrated in Figure 7.26.


Figure 7.26 PB-DWRR scheduling, turn 2 Q0 and Q1

Now we have a scenario in which one of the priority-high queues, Q3, and the low-priority Q0 are in a negative credit state. If we honor the state of the queues, the queue to receive scheduling service in the next turn is Q1, because it is not in a negative credit state. Because Q2 has no packets queued, the two queues in negative credit state can use Q2's weight and thereby update their credit states based on the available weight. Because a queue can receive service only once its deficit counter is greater than zero, such queues need to be bypassed in some scheduling rounds so that their credits can build up. If no other queues with a positive credit state have packets waiting, this refill of credits happens faster, because the punished queues can reuse the weights not being used by other queues.
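Putting the pieces together, a full PB-DWRR pass visits priority groups from highest to lowest and runs DWRR within each group, bypassing queues whose deficit counter is not yet positive. The sketch below is a hypothetical structure under those assumptions; real implementations are hardware- and vendor-specific, and the demo quanta are invented for illustration.

```python
# Illustrative PB-DWRR scheduler pass. Each queue is a dict with a
# priority, a quantum, a deficit counter, and a list of packet sizes.

def pb_dwrr_turn(queues):
    sent = []
    # Visit priority groups from highest to lowest.
    for prio in sorted({q["prio"] for q in queues}, reverse=True):
        for q in (x for x in queues if x["prio"] == prio):
            if not q["packets"]:
                continue                      # idle queues are skipped
            q["deficit"] += q["quantum"]      # credit builds up each visit
            # A queue is drained only while its deficit counter is positive,
            # so a queue deep in negative credit is bypassed this turn.
            while q["packets"] and q["deficit"] > 0:
                pkt = q["packets"].pop(0)
                q["deficit"] -= pkt
                sent.append(pkt)
    return sent

demo = [
    {"prio": 1, "quantum": 200, "deficit": 0, "packets": [200, 200]},
    {"prio": 0, "quantum": 300, "deficit": 0, "packets": [500]},
]
print(pb_dwrr_turn(demo))   # → [200, 500]
```

After this turn the low-priority queue sits at a deficit of -200 and, like Q3 and Q0 in the walkthrough, must accumulate quantum over later turns before it is served again.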

What is the most efficient way to schedule queues with the same priority level? A good design is probably a combination of strict-high and DWRR scheduling between queues with the same level of priority. This works well in the core of the network, where the strict-priority queue may need more bandwidth for a short time because of a network failure and the resulting reconvergence. Limiting real-time traffic should instead be done at the edge of the network, where a combination of strict-high and DWRR scheduling is likely also a good design. But if there are multiple priority-high queues, some restrictions or control, for example, DWRR, are needed to avoid possible resource clashes between the high-priority queues. PB-DWRR has the following benefit:

• PB-DWRR incorporates the lessons learned from most other queuing algorithms, including features such as byte-based deficit scheduling and priority levels.

PB-DWRR has the following limitation:

• It is not a standard, so routers and switches that implement it may not all act in a similar fashion. PB-DWRR is highly dependent on the hardware it runs on, the available resources, and, of course, the actual vendor implementation.

Conclusions about the Best Queuing Discipline

As of this writing, the most popular queuing and scheduling mechanism is PB-DWRR. The reason behind its success and acceptance in the industry is that PB-DWRR is built on the lessons learned from its predecessors. This topic has described PB-DWRR concepts; actual vendor implementations may vary slightly. As with any queuing and scheduling mechanism, there is always a dependency on resources and hardware. However, one piece of the queuing and scheduling puzzle is still missing: the WRED profiles. We discuss these in the next topic.
