The QOS Tools Part 3

Delay and Jitter Insertion

Two QOS tools can insert delay: the shaper, and the combination of queuing and scheduling. Because the amount of delay inserted is not constant, both tools can also introduce jitter.

The shaper can sustain excess traffic at the expense of delaying it, so there is a trade-off: the more excess traffic the shaper can store, the greater the maximum delay it can insert.

Let us now focus on the queuing and scheduling tool. The delay it inserts results from the multiplexing operation this QOS tool applies, as follows. Multiple queues containing packets are presented in parallel to the scheduler, which selects one queue at a time and removes one or more packets from it. While the scheduler is servicing the selected queue, all the other packets in that queue, as well as all packets in the other queues, must wait until it is their turn to be removed.

Let us illustrate this behavior with the example in Figure 2.17, which shows two queues named A and B and a scheduler that services them in a round-robin fashion, starting by servicing queue A.

Packet X is the last packet inside queue B, and we are interested in calculating the delay introduced into its transmission. The clock starts ticking.

The first action taken by the scheduler is to remove black packet 1 from queue A. Then, because the scheduler is working in a round-robin fashion, it next turns to queue B and removes white packet 3 from the queue. This leads to the scenario illustrated in Figure 2.18, in which packet X is sitting at the head of queue B.


Figure 2.17 Two queues and a round-robin scheduler

Continuing its round-robin operation, the scheduler now services queue A by removing black packet 2. It again services queue B, which finally results in the removal of packet X. The clock stops ticking.

In this example, the delay introduced into the transmission of packet X is the time that elapses while the scheduler removes black packet 1 from queue A, white packet 3 from queue B, and black packet 2 from queue A.
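The round-robin walk-through above can be sketched in code. This is a simplified model rather than a router implementation: it assumes every packet takes exactly one time unit to remove, and the packet names simply mirror the figure.

```python
from collections import deque

def round_robin_delay(queues, order, target):
    """Count the service slots that elapse before the target packet is
    removed, assuming one time unit per removed packet (an assumption)."""
    elapsed = 0
    while any(queues.values()):
        for name in order:
            if not queues[name]:
                continue  # empty queues are skipped by the scheduler
            pkt = queues[name].popleft()
            if pkt == target:
                return elapsed
            elapsed += 1
    raise ValueError("target packet not found")

# Figure 2.17: queue A holds black packets 1 and 2, queue B holds
# white packet 3 and packet X; servicing starts at queue A.
queues = {"A": deque(["black-1", "black-2"]),
          "B": deque(["white-3", "X"])}
print(round_robin_delay(queues, ["A", "B"], "X"))  # 3
```

Three packets (black 1, white 3, black 2) are removed before X, matching the figure sequence.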

Considering a generic queuing and scheduling scenario, when a packet enters a queue, the time it takes until the packet reaches the head of the queue depends on two factors: the queue fill level, that is, how many packets are in front of it, and how quickly it can move forward to the head of the queue. For a packet to move forward to the queue head, all the packets queued in front of it must first be removed.

As a side note, a packet does not wait indefinitely at the queue head to be removed by the scheduler. Most queuing algorithms apply the concept of packet aging. If the packet is at the queue head for too long, it is dropped.

The speed at which packets are removed from the queue is determined by the scheduler’s properties regarding how fast and how often it services the queue, as shown with the example in Figures 2.17 and 2.18. But for now, let us concentrate only on the queue fill level, and we will return to the scheduler’s removal speed shortly.

It is not possible to predict the queue fill level, so we focus on the worst-case scenario of a full queue, which allows us to calculate the maximum delay that can be inserted into the packet's transmission. When a packet's arrival fills the queue, that packet becomes the last one inside a full queue, so the delay inserted into its transmission is the time needed to drain the total queue length, as illustrated in Figure 2.19.
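In practice, a queue length is often configured in bytes or packets rather than milliseconds, so the worst-case delay follows from the queue size and the rate at which the queue drains. A minimal sketch, where the 125 kB queue and 100 Mbps link are invented values for illustration:

```python
def max_queue_delay_ms(queue_bytes, link_rate_bps):
    """Worst-case queuing delay: a packet that enters a full queue must
    wait for the entire queue to drain at the link rate."""
    return queue_bytes * 8 / link_rate_bps * 1000

# A 125 kB queue on a 100 Mbps link holds 10 ms worth of traffic.
print(max_queue_delay_ms(125_000, 100_000_000))  # 10.0
```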

Figure 2.18 Packet X standing at the queue B head

Figure 2.19 Worst-case delay scenario with a full queue

Figure 2.20 Two equal queues

Figure 2.21 Uneven scheduler operation

To summarize our conclusions so far, the maximum delay that can be inserted in a packet’s transmission by the queuing and scheduling tools is the length of the queue into which the packet is placed.

Now, if we take into account the role played by the scheduler in determining the speed at which packets are removed, the conclusion drawn in the previous paragraph turns out to be only an approximation of the maximum delay.

Let us demonstrate how the accuracy of this conclusion varies according to the scheduler's properties. The example in Figure 2.20 shows two queues that are equal in the sense that both have a length of 10 milliseconds, each contains two packets, and both are full.

Packets X and Y are the last packets in the full queues A and B, respectively. In this case, the scheduler implements the following property: as long as packets are present in queue A, that queue is always serviced first. This behavior leads the scheduler to first remove black packets 1 and X. Only then does it turn to queue B, leading to the scenario illustrated in Figure 2.21.

This example shows that the previous conclusion (that the maximum delay that can be inserted by the queuing and scheduling tools is the length of the queue in which the packet is placed) is much more accurate for packet X than for packet Y, because the scheduler favors queue A at the expense of penalizing queue B.

So, for packets using queue A, to state that the maximum possible delay that can be inserted is 10 milliseconds (the queue length) is a good approximation in terms of accuracy. However, this same statement is less accurate for queue B.

In a nutshell, the queue length can be seen as the maximum amount of delay inserted, and this approximation is more accurate for a queue that the scheduling scheme favors and less accurate for the other queues.
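The strict-priority behavior of Figures 2.20 and 2.21 can be sketched as follows. The 5 ms per-packet service time is an assumption chosen so that two packets fill a 10 ms queue, and the labels for queue B's packets are hypothetical:

```python
from collections import deque

def strict_priority_waits(q_a, q_b, ms_per_packet=5):
    """Service queue A whenever it holds packets, otherwise queue B,
    recording how long each packet waits before being serviced."""
    waits, elapsed = {}, 0
    while q_a or q_b:
        pkt = q_a.popleft() if q_a else q_b.popleft()
        waits[pkt] = elapsed
        elapsed += ms_per_packet
    return waits

w = strict_priority_waits(deque(["1", "X"]), deque(["2", "Y"]))
print(w["X"], w["Y"])  # 5 15
```

Packet X waits 5 ms, within its 10 ms queue length, while packet Y waits 15 ms, beyond its queue length: the queue-length bound is accurate for the favored queue A and optimistic for queue B.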

Regarding jitter, the main factors that affect it are also the queue length and the queue fill level, along with a third phenomenon, called scheduler jumps, that we discuss later in this section. As the gap between the minimum and the maximum possible delay widens, the possible jitter inserted also increases.

As illustrated in Figure 2.22, the best-case scenario for the delay parameter is for the packet to be mapped into an empty queue. In this case, the packet is automatically placed at the queue head, and it has to wait only for the scheduler to service this queue. The worst-case scenario, already discussed, is the one in which the packet entering a queue is the last packet that the queue can accommodate. The result is that as the queue length increases, the maximum possible variation of delay (jitter) increases as well.

Scheduler jumps are a consequence of having multiple queues, so the scheduler services one queue and then needs to jump to service other queues. Let us illustrate the effect such a phenomenon has in terms of jitter by considering the example in Figure 2.23, in which a scheduler services three queues.

In its operation, the scheduler removes three packets from queue 1 (Q1), then two packets from queue 2 (Q2), and then two packets from queue 3 (Q3). Only then does it jump back to queue 1 to remove packets from that queue. As illustrated in Figure 2.23, the implication in terms of jitter is that the time elapsed between the transmission of black packets 2 and 3 is smaller than the time elapsed between the transmission of black packets 3 and 4. This is jitter. Scheduler jumps are inevitable; the only way to minimize them is to use the minimum number of queues, without compromising the traffic differentiation achieved by splitting traffic into different queues.
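A small sketch of the Figure 2.23 service pattern makes this jitter visible. The one-time-unit-per-packet service time is an assumption for illustration:

```python
# Service pattern from Figure 2.23: three packets from Q1, then two
# from Q2, then two from Q3, before jumping back to Q1.
PATTERN = ["Q1", "Q1", "Q1", "Q2", "Q2", "Q3", "Q3"]

def q1_departure_gaps(pattern, cycles=2):
    """Transmission times of Q1 packets over repeated scheduler cycles,
    returned as the gaps between consecutive Q1 departures."""
    times = [t for t, q in enumerate(pattern * cycles) if q == "Q1"]
    return [b - a for a, b in zip(times, times[1:])]

print(q1_departure_gaps(PATTERN))  # [1, 1, 5, 1, 1]
```

Within a cycle, consecutive Q1 packets depart 1 time unit apart, but across the jump to Q2 and Q3 the gap grows to 5 units: the variation between those gaps is the inserted jitter.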

Figure 2.22 Best-case and worst-case scenarios in terms of jitter insertion

Figure 2.23 Jitter insertion due to scheduler jumps

As previously discussed, a queue that carries real-time traffic typically has a short length, and the scheduler prioritizes it with regard to the other queues, meaning it removes more packets from this queue than from the others to minimize the delay and jitter introduced. However, this scheme should be implemented in a way that ensures the other queues do not suffer complete resource starvation.

Packet Loss

At first glance, packet loss seems like something to be avoided, but as we will see, this is not always the case: in certain scenarios, it is preferable to drop packets rather than transmit them.

Three QOS tools can cause packet loss: the policer, the shaper, and queuing. From a practical perspective, QOS packet loss tools can be divided into two groups depending on whether traffic is dropped because of an explicitly defined action or is dropped implicitly because not enough resources are available to cope with it.

The policer belongs to the first group. When traffic exceeds a certain rate and if the action defined is to drop it, traffic is effectively dropped and packet loss occurs. Usually, this dropping of packets by the policer happens for one of two reasons: either the allowed rate has been inaccurately dimensioned or the amount of traffic is indeed above the agreed or expected rate and, as such, it should be dropped.

The shaper and queuing tools belong to the second group. They drop traffic only when they run out of resources, where the term resources refers to the maximum amount of excess traffic that can be sustained for the shaper and the queue length for the queuing tool (assuming that the dropper drops traffic only when the fill level is 100%).
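As a sketch of the first group, a minimal token-bucket policer illustrates the explicit drop action. The rate, burst size, and packet arrivals below are invented values for illustration, not a real policer configuration; the actual policer mechanics are covered in Part Two:

```python
def police(sizes_bits, arrivals_s, rate_bps, burst_bits):
    """Token-bucket policer sketch: the bucket refills at rate_bps up to
    burst_bits, and a packet passes only if enough tokens are available;
    otherwise it is dropped (an explicit drop action, never delayed)."""
    tokens, last = burst_bits, 0.0
    verdicts = []
    for t, size in zip(arrivals_s, sizes_bits):
        tokens = min(burst_bits, tokens + (t - last) * rate_bps)
        last = t
        if size <= tokens:
            tokens -= size
            verdicts.append("pass")
        else:
            verdicts.append("drop")
    return verdicts

# Three 1500-byte packets against a 1 Mbps rate and a one-packet burst:
# the second packet arrives before the bucket refills and is dropped.
print(police([12000, 12000, 12000], [0.0, 0.001, 0.02],
             1_000_000, 12000))  # ['pass', 'drop', 'pass']
```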

As previously discussed, there is a direct relationship between the amount of resources assigned to the shaper and queuing tools (effectively, the queue length) and the maximum amount of delay and jitter that can be inserted. Thus, limiting the amount of resources implies lowering the maximum values of delay and jitter that can be inserted, which is crucial for real-time traffic because of its sensitivity to these parameters. However, limiting the resources has the side effect of increasing the probability of dropping traffic.

Suppose a real-time traffic stream crossing a router has the requirement that a delay greater than 10 milliseconds is not acceptable. Also suppose that inside the router, the real-time traffic is placed in a specific egress queue whose length is set to less than 10 milliseconds to comply with the desired requirements. Let us further suppose that, in a certain period of time, the amount of traffic arriving at the router has an abrupt variation in volume, thus requiring 30 milliseconds' worth of buffering, as illustrated in Figure 2.24.

The result is that some traffic is dropped. However, the packet loss introduced can be seen as a positive situation, because we are not transmitting packets that violate the established service agreement.
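The arithmetic of this example can be captured in a short sketch: with only 10 ms of buffering available, a burst needing 30 ms worth of buffering leaves 20 ms worth of traffic to be dropped.

```python
def admitted_and_dropped(burst_ms, queue_ms):
    """With queue_ms of buffering, traffic beyond it is dropped rather
    than delayed past the service requirement."""
    admitted = min(burst_ms, queue_ms)
    return admitted, burst_ms - admitted

print(admitted_and_dropped(30, 10))  # (10, 20)
```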

Figure 2.24 Traffic dropped due to the lack of queuing resources

Also, as previously discussed, for a real-time stream, a packet that arrives outside the time window in which it is considered relevant not only adds no value, but also causes more harm than good because the receiver must still spend cycles processing an already useless packet.

This perspective can be expanded to consider whether it makes sense to even allow this burst of 30 milliseconds worth of traffic to enter the router on an ingress interface if, on egress, the traffic is mapped to a queue whose length is only 10 milliseconds.

Conclusion

The focus of this topic has been to present the QOS tools as building blocks, where each one plays a specific role in achieving various goals. As the demand for QOS increases, the tools become more refined and granular. As an example, queuing and scheduling can be applied in a multilevel, hierarchical manner. However, the key starting point is understanding what queuing and scheduling can achieve as building blocks in the QOS design.

Some tools are more complex than others. Thus, in Part Two of this topic we dive more deeply into the internal mechanics of classifiers, policers, and shapers and the queuing and scheduling schemes.

We have also presented an example of how all these tools can be combined. However, the reader should always keep in mind that the required tools are a function of the desired end goal.

In the next topic, we focus on some of the challenges and particulars involved in a QOS deployment.
