Challenges (QOS-Enabled Networks) Part 1

In the previous topic, we discussed the QOS toolkit that is available as part of a QOS deployment on a router. We now move on, leaving behind the perspective of an isolated router and considering a network-wide QOS deployment. Such deployments always have peculiarities, depending on the business requirements, that make each one unique. However, the challenges that are likely to be present across most deployments are the subject of this topic.

Within a QOS network, and on each particular router in that network, multiple traffic types compete for the same network resources. The role of QOS is to provide each traffic type with the behavior that fits its needs. So the first challenge to consider is how providing the required behavior to a particular traffic type impacts, and places limits on, the behavior that can be offered to the other traffic types. For example, a scheduler can lower the delay experienced by one queue by serving it more often, but that is achieved at the expense of increasing the delay for the traffic present in the other queues. The unavoidable fact that something will be penalized holds for any QOS tool that combines a greater number of inputs into a smaller number of outputs.

This description of QOS behavior can also be stated in a much more provocative way: a network in which all traffic is equally "very important and top priority" has no room for QOS.

Defining the Classes of Service

The main foundation of the entire QOS concept is applying different behavior to different traffic types. Achieving traffic differentiation is mandatory, because it is only by splitting traffic into different classes of service that different behavior can be selectively applied to each.


We presented the classifier tool that, based on its own set of rules, makes decisions regarding the class of service to which traffic belongs. Let us now discuss the definition of the classes of service themselves.

In the DiffServ model, each router first classifies traffic and then, according to the result of that classification, applies a specific per-hop behavior (PHB) to it. Consistency is achieved by ensuring that each router present along the path that the traffic takes across the network applies the same PHB to the traffic.
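As a minimal sketch of this per-hop operation, assume the class marking is carried in the standard DSCP field of the IP header; the two PHB labels below are illustrative, not taken from the original text:

```python
# Per-hop DiffServ classification sketch: every router along the path
# runs the same rule set, so the same packet receives the same PHB hop
# by hop. DSCP code points are the standard ones (RFC 2474 / RFC 3246).
DSCP_EF = 46    # Expedited Forwarding: low-delay PHB
DSCP_BE = 0     # Default code point: best-effort PHB

def classify(dscp: int) -> str:
    """Map a packet's DSCP value to a per-hop behavior (PHB) label."""
    if dscp == DSCP_EF:
        return "low-delay"
    return "best-effort"
```

Because every router derives the PHB from the same field with the same rules, consistency across the network falls out of the classification itself.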

A class of service represents a traffic aggregation group, in which all traffic belonging to a specific class of service has the same behavioral requirements in terms of the PHB that should be applied. This concept is commonly called a Behavior Aggregate.

Consider the example in Figure 3.1, in which the router can apply two different behaviors in terms of the delay inserted in the traffic transmission, and the classifier decides which one is applied by mapping traffic into one of the two available classes of service. Continuing this example, on one side of the equation we have traffic belonging to different services or applications, with their requirements, and on the other we have the two classes of service, each corresponding to a specific PHB, which in this case is characterized by the amount of delay introduced, as illustrated in Figure 3.2.

The relationship between services or applications and classes of service should be seen as N:1, not as 1:1, meaning that traffic belonging to different services or applications, but with the same behavior requirements, should be mapped to the same class of service.
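The N:1 shape of this mapping can be sketched as a simple lookup table; the service names here are hypothetical and chosen only to show several services collapsing into fewer classes:

```python
# N:1 mapping of services/applications to classes of service: services
# with the same behavior requirements share one class. All names are
# illustrative assumptions, not from the original text.
SERVICE_TO_CLASS = {
    "voip":             "real-time",    # two real-time services with the
    "video-conference": "real-time",    # same requirements -> same class
    "web":              "best-effort",
    "email":            "best-effort",
}

def class_of(service: str) -> str:
    """Return the class of service a given service is mapped to."""
    return SERVICE_TO_CLASS[service]
```

Four services map onto only two classes of service; the number of classes follows from the number of distinct behaviors, not from the number of applications.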


Figure 3.1 Prioritizing one class of service


Figure 3.2 Mapping between services and classes of service

For example, two packets belonging to two different real-time services but having the same requirements in terms of the behavior they should receive from the network should be classified into the same class of service. The only exception to this is network control traffic, as we see later in this topic.

The crucial question then becomes how many and what different behaviors need to be implemented. As with many things in the QOS realm, there is no generic answer, because the business drivers tend to make each scenario unique.

Returning to Figure 3.2, the approach is to first identify the various services and applications the network needs to support, and then take into account any behavior requirements, and similarities among them, to determine the number of different behaviors that need to be implemented.

Something commonly seen in the field is the creation of as many classes of service as possible. Conceptually, this is the wrong approach. The approach should indeed be the opposite: create only the minimum number of classes of service. There are several reasons behind this logic:

• The more different behaviors the network needs to implement, the more complex it becomes, which has implications in terms of network operation and management.

• As previously stated, QOS does not make the road wider, so although traffic can be split into a vast number of classes of service, the amount of resources available for traffic as a whole remains the same.

• The number of queues and their lengths are limited (discussed later in this topic).

• As we will see later, the classifier granularity imposes limits on the maximum number of classes of service that can exist in the network.

Plenty of standards and information are available in the networking world to advise the reader on what classes of service should be used, and some even suggest names. While this information can be useful as a guideline, the reader should view it critically, because a generic solution is very rarely appropriate for a particular scenario. That is why this topic offers no generic recommendations regarding the classes of service that should exist in a network.

Business drivers shape the QOS deployment, and not the other way round, so only when the business drivers are present, as in the case studies in Part Three of this topic, do the authors provide recommendations and guidelines regarding the classes of service that should be used.

Classes of Service and Queues Mapping

As presented in the previous topic, the combination of the queuing and scheduling tools directs traffic from several queues into a single output, and the queue properties, allied with the scheduling rules, dictate specific behavior regarding delay, jitter, and packet loss, as illustrated in Figure 3.3.


Figure 3.3 Each queue associated with the scheduling policy provides a specific behavior


Figure 3.4 Green and yellow traffic in the same queue

As also discussed in the previous topic, other tools can have an impact on delay, jitter, and packet loss. However, the queuing and scheduling stage is special in the sense that it is where the traffic from different queues is combined into a single output.

So if, after taking into account the required behavior, traffic is aggregated into classes of service, and if each queue associated with the scheduling policy provides a specific behavior, then mapping each class of service to a specific queue is recommended. A 1:1 mapping between queues and classes of service aligns with the concept that traffic mapped to each class of service should receive a specific PHB.

Also, if each class of service is mapped to a unique queue, the inputs to the scheduler rules that define how the queues are served should themselves be the class of service requirements.
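A rough sketch of this idea, assuming a weighted round-robin style scheduler; the class names, queue numbers, and weights are all illustrative assumptions:

```python
# 1:1 mapping of classes of service to queues, with scheduler weights
# derived from each class's requirements: the class needing the lowest
# delay gets the largest share of the scheduler's service.
CLASS_TO_QUEUE = {
    "real-time":   0,   # strictest delay requirement
    "business":    1,
    "best-effort": 2,
}

QUEUE_WEIGHT = {0: 60, 1: 30, 2: 10}  # percent of service per queue

def queue_for(cos: str) -> int:
    """Return the single queue serving this class of service."""
    return CLASS_TO_QUEUE[cos]
```

The point is the direction of the derivation: the class requirements drive the weights, rather than the weights being tuned first and classes fitted to them afterwards.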

When we previously discussed the creation of the classes of service, we considered that all traffic classified into a specific class of service has the same behavior requirements. A 1:1 mapping between queues and classes of service can become challenging if some traffic in a queue is green (in contract) and other traffic is yellow (out of contract). The concern is how to protect resources for green traffic. Figure 3.4 illustrates this problem, showing a case in which both green and yellow traffic are mapped to the same queue and this queue is full.

As shown in Figure 3.4, the queue is full with both green and yellow packets. When the next packet arrives at this queue, the queue is indifferent to whether the packet is green or yellow, and the packet is dropped. Because yellow packets inside the queue are consuming queuing resources, any newly arrived green packets are discarded because the queue is full. This behavior is conceptually wrong because, as previously discussed, the network must protect green traffic before accepting yellow traffic.


Figure 3.5 Different dropper behaviors applied to green and yellow traffic

There are two possible solutions for this problem. The first is to differentiate between green and yellow packets within the same queue. The second is to use different queues for green and yellow packets and then differentiate at the scheduler level.

Let us start by demonstrating how to differentiate between different types of traffic within the same queue. The behavior shown in Figure 3.4 is called tail drop: when the queue fill level is at 100%, the dropper block associated with the queue drops all newly arrived packets, regardless of whether they are green or yellow. To differentiate between packets according to their color, the dropper needs to be more granular so that it can apply different drop probabilities based on the traffic color. As exemplified in Figure 3.5, the dropper block can implement a behavior such that once the queue fill level is at X% (or goes above that value), no more yellow packets are accepted in the queue, while green packets are dropped only when the queue is full (fill level of 100%).

Comparing Figures 3.4 and 3.5, the striking difference is that, in Figure 3.5, once the queue fill level passes the percentage value X, all yellow packets are dropped and only green packets are queued. This mechanism defines a threshold so that when queuing resources start to become scarce, they are accessible only to green packets. This dropper behavior is commonly called Weighted Random Early Discard (WRED).
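A minimal sketch of this color-aware drop decision follows. The threshold value is an illustrative assumption, and real WRED implementations typically ramp the drop probability up gradually rather than using the hard cut-offs shown here:

```python
# Color-aware dropper sketch: yellow (out-of-contract) packets are
# refused once the queue fill level reaches the X% threshold, while
# green (in-contract) packets are dropped only when the queue is full.
YELLOW_THRESHOLD = 0.7   # the "X%" fill level; illustrative value

def accept(fill_level: float, color: str) -> bool:
    """Decide whether a packet of the given color may be enqueued,
    given the current queue fill level (0.0 = empty, 1.0 = full)."""
    if color == "green":
        return fill_level < 1.0           # tail drop only when full
    return fill_level < YELLOW_THRESHOLD  # yellow dropped early
```

Above the threshold, the remaining queue space is effectively reserved for green traffic, which is exactly the protection that plain tail drop fails to provide.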

The second possible solution is to place green and yellow traffic in separate queues and then differentiate using scheduling policy. This approach conforms with the concept of applying a different behavior to green and yellow traffic. However, it comes with its own set of challenges.

Let us consider the scenario illustrated in Figure 3.6, in which three sequential packets, numbered 1 through 3 and belonging to the same application, are queued. However, the metering and policing functionality marks the second packet, the white one, as yellow.

As per the scenario of Figure 3.6, there are three queues into which different types of traffic are mapped. Queue 1 is used for out-of-contract traffic belonging to this and other applications, so packet number 2 is mixed with yellow packets belonging to the same class of service, represented in Figure 3.6 as inverted triangles.


Figure 3.6 Using a different queue for yellow traffic

Queue 2 is used by green packets of another class of service. Finally, queue 3 is used by packets 1 and 3, and also by green packets that belong to the same class of service.

So we have green and yellow packets placed in different queues, which ensures that the scenario illustrated in Figure 3.4, in which a queue full with green and yellow packets leads to tail dropping of any newly arrived green packet, is not possible. However, in solving one problem we are potentially creating another.

The fact that green and yellow traffic is placed into two different queues can lead to a scenario in which packets arrive at the destination out of sequence. For example, packet number 3 can be transmitted before packet number 2.

The scheduler operation is totally configurable. However, it is logical for the scheduler to favor queues 2 and 3, to which green traffic is mapped, over queue 1. Returning to Figure 3.6, this preference has the potential of delaying packet number 2 long enough for it to arrive out of sequence at the destination, that is, after packet 3 has arrived.

The choice between the two solutions presented above is a question of analyzing the different drawbacks of each. Using WRED increases the probability of dropping green traffic, and using a different queue increases the probability of introducing packet reordering issues at the destination.

In addition, the queuing resources – how many queues there are and their maximum lengths – are always finite, so dedicating one queue to carrying yellow traffic may pose a scaling problem as well. An interface can support only a maximum number of queues, and the total sum of the queue lengths supported by an interface is also limited to a maximum value (we will call it X), as exemplified in Figure 3.7 for a scenario of four queues.

The strategy of "the more queues, the better" can have its drawbacks because, besides the existence of a maximum number of queues, the value X must be divided across all the queues that exist on the interface. Suppose queues A and B each require 40% of the value X. By simple arithmetic, the sum of the lengths of all the remaining queues is limited to 20% of X, which can be a problem if any other queue also requires a large length.
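The arithmetic of this example can be checked with a short sketch; the absolute value of X (100 units) is an assumption chosen purely for illustration:

```python
# Queue-length budget on one interface: the sum of all queue lengths
# cannot exceed the interface maximum, called X in the text.
X = 100                        # total buffer budget (illustrative units)
lengths = {"A": 40, "B": 40}   # queues A and B each take 40% of X

remaining = X - sum(lengths.values())
print(remaining)  # budget left for every other queue on this interface
```

With A and B consuming 80% of X between them, only 20% remains to be shared by every other queue on the interface, which is why each additional queue tightens the budget for all the rest.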


Figure 3.7 Maximum number of queues and maximum length
