Identifying and Comparing QoS Models (IP Quality of Service)

This section discusses the three main QoS models: best-effort, Integrated Services, and Differentiated Services. The key features, benefits, and drawbacks of each model are explained in turn.

Best-Effort Model

The best-effort model means that no QoS policy is implemented. It is natural to wonder why this model was not called no-effort. Within this model, packets belonging to voice calls, e-mails, file transfers, and so on are treated as equally important; indeed, these packets are not even differentiated. The basic mail delivery by the post office is often used as an example for the best-effort model, because the post office treats all letters as equally important.

The best-effort model has some benefits as well as some drawbacks. Following are the main benefits of this model:

■ Scalability—The Internet is a best-effort network. The best-effort model has no inherent scalability limit; only the bandwidth of router interfaces constrains throughput.

■ Ease—The best-effort model requires no special QoS configuration, making it the easiest and quickest model to implement.

The drawbacks of the best-effort model are as follows:

■ Lack of service guarantee—The best-effort model makes no guarantees about packet delivery/loss, delay, or available bandwidth.

■ Lack of service differentiation—The best-effort model does not differentiate packets that belong to applications that have different levels of importance from the business perspective.


Integrated Services Model

The Integrated Services (IntServ) model, developed in the mid-1990s, was the first serious attempt to provide the end-to-end QoS demanded by real-time applications. IntServ is based on explicit signaling and on managing and reserving network resources for the applications that require them. IntServ is often referred to as Hard QoS, because it guarantees characteristics such as bandwidth, delay, and packet loss, thereby providing a predictable service level. Resource Reservation Protocol (RSVP) is the signaling protocol that IntServ uses. An application with a specific bandwidth requirement must wait for RSVP to run along the path from source to destination, hop by hop, and request a bandwidth reservation for the application flow. If the RSVP attempt to reserve bandwidth along the path succeeds, the application can begin operating; while it is active, the routers along its path provide the bandwidth they have reserved for it. If RSVP fails to reserve bandwidth hop by hop all the way from source to destination, the application cannot begin operating.
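As a rough illustration, RSVP reservations are enabled per interface on a Cisco IOS router with the ip rsvp bandwidth command. The interface name, addressing, and bandwidth figures below are illustrative assumptions, not values taken from this text:

```
! Enable RSVP on each interface along the reservation path
interface Serial0/0
 ip address 10.1.1.1 255.255.255.252
 ! Allow RSVP to reserve up to 128 kbps in total, at most 32 kbps per flow
 ip rsvp bandwidth 128 32
```

Every router on the path from source to destination needs a comparable configuration; a single hop without RSVP support breaks the end-to-end reservation.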

IntServ mimics the public switched telephone network (PSTN) model, where every call entails end-to-end signaling and the securing of resources along the path from source to destination. Because each application can make a unique request, IntServ is a model that can provide multiple service levels. Within the Cisco QoS framework, RSVP can act both as a signaling mechanism and as a call admission control (CAC) mechanism. If an RSVP attempt to secure and reserve resources for a voice call fails, the call does not go through. Controlled Volume service within the Cisco IOS QoS feature set is provided by RSVP and advanced queuing mechanisms such as Low Latency Queuing (LLQ). The Guaranteed Rate service type is offered by deploying RSVP and LLQ. Controlled Load service is provided by RSVP and Weighted Random Early Detection (WRED).
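As a minimal sketch of the Controlled Load combination mentioned above, RSVP can be enabled alongside WRED on an interface. The interface name and bandwidth values here are assumptions for illustration:

```
interface Serial0/1
 ! Illustrative values: RSVP may reserve up to 256 kbps, 64 kbps per flow
 ip rsvp bandwidth 256 64
 ! WRED supplies the congestion-avoidance behavior behind Controlled Load
 random-detect
```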

For a successful implementation of IntServ, in addition to support for RSVP, enable the following features and functions on the routers or switches within the network:

■ Admission control—Admission control responds to application requests for end-to-end resources. If the resources cannot be provided without affecting existing applications, the request is turned down.

■ Classification—The traffic belonging to an application that has made resource reservations must be classified and recognized by the transit routers so that they can furnish the appropriate service to those packets.

■ Policing—It is important to measure and monitor applications so that they do not exceed their set resource-utilization profiles. Rate and burst parameters are used to measure the behavior of an application. Depending on whether an application conforms to or exceeds its agreed-upon resource utilization, appropriate action is taken.

■ Queuing—It is important for network devices to be able to hold packets while processing and forwarding others. Different queuing mechanisms store and forward packets in unique ways.

■ Scheduling—Scheduling works in conjunction with queuing. If there are multiple queues on an interface, the algorithm that decides how much data is dequeued and forwarded from each queue in each cycle, and hence the relative attention that each queue receives, is called the scheduling algorithm. Scheduling is enforced based on the queuing mechanism configured on the router interface.
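The classification, policing, and queuing functions listed above map naturally onto Cisco's Modular QoS CLI (MQC). The class name, DSCP value, rates, and interface in this sketch are hypothetical:

```
! Classification: recognize a hypothetical application by its DSCP marking
class-map match-all APP-RESERVED
 match ip dscp af31
! Queuing plus policing: guarantee 64 kbps, but drop traffic beyond profile
policy-map INTSERV-FUNCTIONS
 class APP-RESERVED
  bandwidth 64
  police 64000 8000 conform-action transmit exceed-action drop
! Attach the policy so the scheduler enforces it on this interface
interface Serial0/0
 service-policy output INTSERV-FUNCTIONS
```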

When IntServ is deployed, new application flows are admitted only while the requested resources can still be furnished; beyond that point, a new application fails to start because its RSVP request for resources is rejected. In this model, RSVP makes the QoS request for each flow. This request includes an identification for the requestor, also called the authorized user or authorization object, and the needed traffic policy, also called the policy object. So that all intermediate routers between source and destination can identify each flow, RSVP provides the flow parameters, such as IP addresses and port numbers. The benefits of the IntServ model can be summarized as follows:

■ Explicit end-to-end resource admission control

■ Per-request policy admission control

■ Signaling of dynamic port numbers

Some drawbacks to using IntServ exist, the most important of which are these:

■ Each active flow requires continuous signaling because of the stateful architecture of RSVP. This overhead can become substantial as the number of flows grows.

■ Because each flow is tracked and maintained, IntServ as a flow-based model is not considered scalable for large implementations such as the Internet.

Differentiated Services Model

Differentiated Services (DiffServ) is the newest of the three QoS models, and it was developed to overcome the limitations of its predecessors. DiffServ is not a guaranteed QoS model, but it is a highly scalable one. The Internet Engineering Task Force (IETF) description and discussion of DiffServ appear in RFCs 2474 and 2475. Whereas IntServ has been called the "Hard QoS" model, DiffServ has been called the "Soft QoS" model: IntServ, through the use of signaling and admission control, can either deny an application's request for resources or admit it and guarantee the requested resources, whereas DiffServ offers differentiated, but not guaranteed, treatment.

Pure DiffServ does not use signaling; it is based on per-hop behavior (PHB). PHB means that each hop in a network must be preprogrammed to provide a specific level of service for each class of traffic. PHB then does not require signaling as long as the traffic is marked to be identified as one of the expected traffic classes. This model is more scalable because signaling and status monitoring (overhead) for each flow are not necessary. Each node (hop) is prepared to deal with a limited variety of traffic classes. This means that even if thousands of flows become active, they are still categorized as one of the predefined classes, and each flow will receive the service level that is appropriate for its class. The number of classes and the service level that each traffic class should receive are decided based on business requirements.
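For traffic to be identified as one of the expected classes at each hop, it must be marked at the network edge. The following sketch marks assumed voice traffic with DSCP EF; the access-list number, UDP port range, and policy names are illustrative assumptions:

```
! Match voice RTP traffic by a commonly used UDP port range (an assumption)
access-list 101 permit udp any any range 16384 32767
class-map match-all VOICE-RTP
 match access-group 101
! Mark matching packets with DSCP EF at the network edge
policy-map MARK-EDGE
 class VOICE-RTP
  set ip dscp ef
! Apply marking inbound where traffic enters the DiffServ domain
interface FastEthernet0/0
 service-policy input MARK-EDGE
```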

Within the DiffServ model, traffic is first classified and marked. As the marked traffic flows through the network nodes, the type of service it receives depends on its marking. DiffServ can protect the network from oversubscription by using policing and admission control techniques as well. For example, in a typical DiffServ network, voice traffic is assigned to a priority queue that has reserved bandwidth (through LLQ) on each node. To prohibit too many voice calls from becoming active concurrently, you can deploy CAC. Note that all the voice packets that belong to the admitted calls are treated as one class.
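A minimal sketch of the voice example above uses MQC to place DSCP EF traffic into an LLQ priority queue on each node; the class and policy names and the 256-kbps figure are illustrative:

```
! Classify voice by its DSCP marking (EF, Expedited Forwarding)
class-map match-all VOICE
 match ip dscp ef
! LLQ: give the voice class a strict-priority queue capped at 256 kbps
policy-map WAN-EDGE
 class VOICE
  priority 256
 class class-default
  fair-queue
interface Serial0/0
 service-policy output WAN-EDGE
```

Note that the priority command also polices the class to its configured rate during congestion, which keeps the priority queue from starving the other classes.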

Remember the following three points about the DiffServ model:

■ Network traffic is classified.

■ QoS policies enforce differentiated treatment of the defined traffic classes.

■ Classes of traffic and the policies are defined based on business requirements; you choose the service level for each traffic class.

The main benefit of the DiffServ model is its scalability. The second benefit of the DiffServ model is that it provides a flexible framework for you to define as many service levels as your business requirements demand. The main drawback of the DiffServ model is that it does not provide an absolute guarantee of service. That is why it is associated with the term Soft QoS. The other drawback of this model is that several complex mechanisms must be set up consistently on all the elements of the network for the model to yield the desired results.

Following are the benefits of DiffServ:

■ Scalability

■ Ability to support many different service levels

The drawbacks of DiffServ are as follows:

■ It cannot provide an absolute service guarantee.

■ It requires implementation of complex mechanisms throughout the network.
