As discussed in the VoIP Quality of Service (QoS) Basics article, one of the methods of controlling the amount of congestion on the network is to utilize congestion management techniques, primarily queue configuration. This article addresses the various congestion management techniques and reviews the steps required to configure them on a network.
Congestion management techniques can be used alongside other Quality of Service (QoS) techniques to ensure that the traffic which requires a high level of service is able to get it. Congestion management techniques work by reducing the effect that network congestion has on the flow of traffic across the network; this is done with a number of different queueing mechanisms.
Congestion management, like classification and marking, is another subject area that is required for the completion of the CVOICE (642-437) exam. Congestion management, along with a carefully planned overall QoS plan, is essential when implementing multiservice networks.
Even when a network is set up without any explicit QoS or congestion management configuration, a queueing mechanism is still being used to process traffic through a device. By default, all interfaces at or below 2.048 Mbps (E1 speed) utilize Weighted Fair Queueing (WFQ), while all interfaces above 2.048 Mbps utilize First-In, First-Out (FIFO) queueing; both of these will be covered in more detail later in this section.
So, what is a queue? One of the most recognizable examples of queueing is at a large store, where a number of different lanes are created so that people can check out with their merchandise. Using the FIFO method, all individuals check out in the order in which they got in line. What happens if there is only one line and the person in front has 100 items to check out? In this situation, everyone waits for that person to check out completely before the next person in line is able to go. An extension of this concept works well as an example of weighted fair queueing. What if a separate "20 items or fewer" lane could be created to speed the rate at which shoppers with smaller amounts of merchandise can check out? In this circumstance, shoppers with a small number of items are not held up by those with a larger number of items.
When implementing FIFO or WFQ on a networking device, the concepts in the examples hold true. When using FIFO, the traffic that arrives first gets sent first, regardless of what else is waiting to be forwarded. What if a telnet session were happening at the same time as a large file transfer? With FIFO, if the file transfer traffic was received first, then the interactive traffic would be stuck waiting for that traffic to pass out of the queue. If the queueing method was changed to WFQ, then both types of traffic would be weighted so that each got a fair share of the interface regardless of which arrived first; this enables the telnet session to continue to have a satisfactory response time while allowing the file transfer to continue.
There are a couple of other queueing mechanisms that can be used as well; these include Priority Queueing, Custom Queueing, and Class-Based WFQ, among others.
The configuration of advanced queueing can be quite complicated. This article will limit its scope to the basic commands used to configure FIFO, WFQ, priority and custom queueing; if seriously considering using any non-default queueing mechanism, please reference the Cisco website.
The configuration of basic FIFO and WFQ is rather simple as both require very little or no configuration.
The command syntax required to configure WFQ on an interface is as follows:
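Based on the classic Cisco IOS interface-level command, the WFQ syntax looks roughly like this (the bracketed parameters are optional tuning values):

```
Router(config-if)# fair-queue [congestive-discard-threshold [dynamic-queues [reservable-queues]]]
```

Entering `fair-queue` with no parameters enables WFQ with the default thresholds, which is how it is typically used.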
In most situations the default parameters are adequate.
The configuration of FIFO queueing is even easier, as by default most interfaces use it. If there is an interface that has an existing queueing mechanism other than FIFO, simply remove it, and the interface will then use FIFO queueing.
For example, if an interface is using WFQ, the following command would configure the interface with FIFO queueing.
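A minimal sketch, assuming the interface in question is Serial0/0 (a hypothetical interface name used here only for illustration):

```
Router(config)# interface serial 0/0
Router(config-if)# no fair-queue
```

With WFQ removed, the interface falls back to FIFO queueing.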
Priority queueing provides the ability to manually assign the traffic from specific interfaces or specific running protocols into separate low, normal, medium or high priority queues. The configuration of priority queueing begins by configuring a priority list that contains the different parameters used with priority queueing. The different commands that can be used to configure a priority list are shown below.
To configure a priority list based on the protocol, the command syntax includes:
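The classic IOS global-configuration syntax is roughly as follows; the second line is a hypothetical example that places telnet traffic (TCP port 23) into the high-priority queue of list 1:

```
Router(config)# priority-list list-number protocol protocol-name {high | medium | normal | low} [queue-keyword keyword-value]
Router(config)# priority-list 1 protocol ip high tcp 23
```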
To configure a priority list based on the interface, the command syntax includes:
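The interface-based classification syntax looks roughly like this; the example line (hypothetical) assigns traffic arriving on Serial0/1 to the medium queue of list 1:

```
Router(config)# priority-list list-number interface interface-type interface-number {high | medium | normal | low}
Router(config)# priority-list 1 interface serial 0/1 medium
```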
To configure the priority given to all traffic that does not match any of the other commands, the command syntax includes:
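The default rule uses roughly the following syntax; in this hypothetical example, all unmatched traffic in list 1 is placed in the low-priority queue:

```
Router(config)# priority-list list-number default {high | medium | normal | low}
Router(config)# priority-list 1 default low
```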
The priority list is then applied to an interface; the same priority list can be applied to multiple interfaces at the same time. Each of the rules in the priority lists are consulted in order, until a match is found; any traffic that does not match any of the rules is assigned the priority configured in the default rule.
The command syntax used to apply a priority list to an interface includes:
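The list is applied with the interface-level `priority-group` command; the example assumes the hypothetical interface Serial0/0 and priority list 1:

```
Router(config)# interface serial 0/0
Router(config-if)# priority-group list-number
Router(config-if)# priority-group 1
```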
Custom queueing works quite similarly to priority queueing, but provides a mechanism for more fairness between high- and low-priority traffic. With priority queueing, traffic in the high-priority queue is always sent ahead of all other traffic, regardless of how long it takes to transmit. With custom queueing, each queue is configured with a queue depth and a byte count; when it is a queue's turn to be serviced, packets are forwarded until approximately that byte count has been sent, and then the next queue is serviced in round-robin fashion.
The configuration of a custom queue is similar to that of priority queueing; the commands include:
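The classic IOS classification syntax for custom queueing parallels the priority-list commands, but assigns traffic to a numbered queue rather than a named priority level:

```
Router(config)# queue-list list-number protocol protocol-name queue-number [queue-keyword keyword-value]
Router(config)# queue-list list-number interface interface-type interface-number queue-number
Router(config)# queue-list list-number default queue-number
```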
The commands used to configure the queue depth and the number of bytes forwarded per queue include:
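The per-queue tuning syntax is roughly as follows; `limit` sets the queue depth in packets, and `byte-count` sets how many bytes are forwarded each time the queue is serviced:

```
Router(config)# queue-list list-number queue queue-number limit limit-number
Router(config)# queue-list list-number queue queue-number byte-count byte-count-number
```

Like a priority list, the finished queue list is then applied to an interface, in this case with the interface-level `custom-queue-list list-number` command.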
The queueing mechanisms discussed in this article are not all of the ones available, but they are the simplest to configure. For example, Class-Based WFQ and Low Latency Queueing are available for applications with very specific traffic requirements. In summary, congestion management techniques are used alongside other Quality of Service (QoS) techniques, primarily through queue configuration, to reduce the effect that network congestion has on the flow of traffic across the network.