
Friday, December 12, 2014

Queue Scheduling Mechanisms

We will introduce several common queue-scheduling mechanisms here.

I. FIFO queuing

FIFO (First In First Out) determines the forwarding order of packets by their arrival time. On a firewall, the resources assigned to user traffic depend on the arrival time of the packets and the current load of the network. Best-Effort services use the FIFO queuing policy.
If an interface of the firewall has only one FIFO-based output/input queue, malicious applications may occupy all network resources and seriously affect mission-critical data.
Within each queue, packets are sent in FIFO order by default.
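As a rough illustration, the following Python sketch models a single FIFO output queue with tail drop when the queue is full; the class name FifoQueue, the max_len parameter and the packet labels are illustrative, not taken from any particular product.

```python
from collections import deque

class FifoQueue:
    """A single first-in-first-out output queue with tail drop."""

    def __init__(self, max_len=64):
        self.max_len = max_len            # queue depth; beyond this, packets are tail-dropped
        self.packets = deque()

    def enqueue(self, packet):
        """Append a packet in arrival order; drop it if the queue is full."""
        if len(self.packets) >= self.max_len:
            return False                  # tail drop
        self.packets.append(packet)
        return True

    def dequeue(self):
        """Send the packet that has waited longest (first in, first out)."""
        return self.packets.popleft() if self.packets else None

# Packets leave in exactly the order they arrived, regardless of importance.
q = FifoQueue(max_len=3)
for pkt in ["bulk-1", "voice-1", "bulk-2", "bulk-3"]:
    q.enqueue(pkt)                        # "bulk-3" is tail-dropped
print([q.dequeue() for _ in range(3)])    # ['bulk-1', 'voice-1', 'bulk-2']
```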

II. PQ (Priority Queuing)

Priority queuing is designed for mission-critical applications. Such applications share an important characteristic: when congestion occurs, they require preferential service to reduce response delay. PQ can classify traffic flexibly according to network protocol (e.g. IP or IPX), the interface on which packets are received, packet length, source/destination IP address, and so on. Priority queuing classifies packets into four queues: top, middle, normal and bottom, in descending order of priority. By default, traffic enters the normal queue.
During dispatching, PQ strictly follows the priority order from high to low and sends packets in the highest-priority queue first. Only when that queue is empty does PQ begin to send packets in the next lower-priority queue. The system places packets of mission-critical applications in a higher-priority queue and packets of normal applications in a lower-priority queue. This guarantees that packets of mission-critical applications are sent first, while packets of normal applications are sent in the intervals when no mission-critical traffic is waiting.
The disadvantage of PQ is that packets in the lower-priority queues are starved if the higher-priority queues remain occupied for a long time.
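The strict-priority dispatching described above can be sketched as follows; the four queue names mirror the top/middle/normal/bottom classes, while the packet labels and the very simple classification (a cls argument) are assumptions for illustration.

```python
from collections import deque

# Strict priority dispatch over the four PQ classes, highest first.
PRIORITY_ORDER = ["top", "middle", "normal", "bottom"]

class PriorityQueuing:
    def __init__(self):
        self.queues = {name: deque() for name in PRIORITY_ORDER}

    def enqueue(self, packet, cls="normal"):
        """Place a packet into its class queue; unclassified traffic defaults to 'normal'."""
        self.queues[cls].append(packet)

    def dequeue(self):
        """Always serve the highest-priority non-empty queue; lower queues wait."""
        for name in PRIORITY_ORDER:
            if self.queues[name]:
                return self.queues[name].popleft()
        return None

pq = PriorityQueuing()
pq.enqueue("bulk-1")                      # normal by default
pq.enqueue("voice-1", cls="top")
pq.enqueue("voice-2", cls="top")
# Both 'top' packets are sent before the 'normal' one; if 'top' never
# empties, 'normal' and 'bottom' traffic is starved.
print([pq.dequeue() for _ in range(3)])   # ['voice-1', 'voice-2', 'bulk-1']
```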

III. CQ (Custom Queuing)

CQ classifies packets into 17 classes according to configurable rules (corresponding to 17 queues). Based on its class, each packet enters the corresponding custom queue, which itself follows the FIFO policy.
Of these 17 queues, queue 0 is a system queue, which is not configurable, and queues 1 through 16 are user queues. Users can set the traffic-classification rules and assign the proportion of interface bandwidth that each of the 16 user queues may occupy. During dispatching, packets in the system queue are sent first, until that queue is empty. Then, by polling, a certain number of packets are taken from user queues 1 through 16 according to the preconfigured bandwidth proportions and sent out. In this way, packets of different applications can be assigned different amounts of bandwidth, which not only ensures that mission-critical applications obtain more bandwidth but also prevents normal applications from being starved of bandwidth altogether. By default, traffic enters queue 1.
Another advantage of custom queuing is that bandwidth can be assigned according to how busy each application is, which suits applications with particular bandwidth requirements. Although the 16 user queues are dispatched by polling, the service time for each queue is not fixed, so when certain classes have no packets, the CQ dispatching mechanism automatically increases the bandwidth available to the classes that do have packets.
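A minimal sketch of the CQ polling idea follows, assuming a per-queue byte budget stands in for the configured bandwidth proportion; queue 0 is drained first as the system queue. The class CustomQueuing, its parameters and the example numbers are illustrative.

```python
from collections import deque

class CustomQueuing:
    """CQ sketch: queue 0 is the system queue, queues 1-16 are user queues
    served round-robin with a per-queue byte budget modelling its bandwidth share."""

    def __init__(self, byte_counts):
        # byte_counts[i] is the number of bytes queue i may send per polling round.
        self.byte_counts = byte_counts
        self.queues = {i: deque() for i in range(17)}

    def enqueue(self, packet_len, queue_id=1):
        """By default traffic enters user queue 1."""
        self.queues[queue_id].append(packet_len)

    def dispatch_round(self):
        """One polling round: drain the system queue, then take up to the
        configured byte count from each user queue in turn."""
        sent = []
        while self.queues[0]:                         # system queue first, until empty
            sent.append((0, self.queues[0].popleft()))
        for i in range(1, 17):
            budget = self.byte_counts.get(i, 1500)
            while self.queues[i] and budget > 0:
                pkt = self.queues[i].popleft()
                budget -= pkt
                sent.append((i, pkt))
        return sent

# Queue 1 gets roughly twice the bandwidth of queue 2 in each round.
cq = CustomQueuing(byte_counts={1: 3000, 2: 1500})
for _ in range(4):
    cq.enqueue(1500, queue_id=1)
    cq.enqueue(1500, queue_id=2)
print(cq.dispatch_round())   # [(1, 1500), (1, 1500), (2, 1500)]
```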

IV. WFQ (Weighted Fair Queuing)

Before moving on to Weighted Fair Queuing, FQ (Fair Queuing) should be introduced first. FQ is designed to share network resources fairly and tries to reduce the delay and jitter of all traffic flows to their optimum levels. It takes all flows into consideration and has the following features:
- Different queues have a fair opportunity to be dispatched, which balances the delay of each flow overall.
- Short packets and long packets are treated fairly when dequeuing: if long packets in one queue and short packets in another queue are waiting to be sent at the same time, the short packets are also taken care of and, statistically, are treated preferentially, which reduces the jitter between packets of every flow overall.
Compared with FQ, WFQ also takes priority into account when calculating the dispatching sequence of packets. Statistically, with WFQ, higher-priority traffic takes precedence over lower-priority traffic in dispatching. WFQ automatically classifies traffic by its "session" information (protocol type, source/destination TCP or UDP port numbers, source/destination IP addresses, precedence bits of the ToS field, etc.), and tries to provide enough queues so that each flow is placed evenly into its own queue, balancing the delay of every flow as a whole. While dequeuing, WFQ assigns each flow a share of the egress interface bandwidth according to its precedence: the larger the precedence value, the more bandwidth the flow obtains.
For example, if there are five flows on the interface with precedence values 0, 1, 2, 3 and 4, the total bandwidth quota is the sum of (precedence + 1) over all flows:
1 + 2 + 3 + 4 + 5 = 15
The bandwidth proportion for each flow is (precedence + 1) / total quota, so the bandwidth available to the five flows is 1/15, 2/15, 3/15, 4/15 and 5/15, respectively.
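The arithmetic above can be reproduced in a few lines; the function name wfq_bandwidth_shares is illustrative only.

```python
def wfq_bandwidth_shares(precedences):
    """Each flow's share is (precedence + 1) divided by the sum of (precedence + 1) over all flows."""
    total = sum(p + 1 for p in precedences)
    return [f"{p + 1}/{total}" for p in precedences]

print(wfq_bandwidth_shares([0, 1, 2, 3, 4]))   # ['1/15', '2/15', '3/15', '4/15', '5/15']
```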
Because WFQ balances the delay and jitter of every flow when congestion occurs, it is applied effectively in particular scenarios. For instance, in assured services that use RSVP (Resource Reservation Protocol), WFQ is generally used as the dispatching policy; it is also used to dispatch buffered packets in traffic shaping (TS).
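The per-session classification described above can also be sketched as hashing the session fields so that packets of one flow always map to the same queue. The queue count of 256 and the CRC32 hash are assumptions for illustration; real devices use their own hash function and a configurable number of queues.

```python
import zlib

NUM_WFQ_QUEUES = 256    # assumed queue count; real devices make this configurable

def wfq_queue_index(protocol, src_ip, dst_ip, src_port, dst_port, tos):
    """Hash the 'session' fields so that packets of the same flow always land in the
    same queue, while different flows are spread evenly across the queues."""
    key = f"{protocol}|{src_ip}|{dst_ip}|{src_port}|{dst_port}|{tos}".encode()
    return zlib.crc32(key) % NUM_WFQ_QUEUES

# Two packets of the same TCP session map to the same queue index.
print(wfq_queue_index("tcp", "10.0.0.1", "10.0.0.2", 5000, 80, 0))
print(wfq_queue_index("tcp", "10.0.0.1", "10.0.0.2", 5000, 80, 0))
```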

V. CBQ (Class-Based Queuing)

CBQ is an extension of WFQ that supports user-defined classes. CBQ allocates an independent FIFO reserved queue for each user-defined class to buffer data of that class. In case of network congestion, CBQ matches outgoing packets against the user-defined class rules and places them in the corresponding queues. Before the packets enter the queues, the congestion-avoidance mechanism (namely, whether tail drop or weighted random early detection, WRED, is used) and the bandwidth restriction are checked. When the packets leave their queues, WFQ is performed among the queues of the different classes.
CBQ provides an emergency queue for urgent packets; this queue is dispatched with FIFO and has no bandwidth restriction. However, if CBQ performed only WFQ on the queues of all classes, delay-sensitive traffic such as voice packets might not be transmitted in time. For this reason, PQ is added to CBQ, forming LLQ (Low Latency Queuing), which provides a strict priority (SP) mechanism for delay-sensitive streams such as voice packets.
LLQ combines the SP mechanism with CBQ. When defining a class, the user can specify that it is subject to the SP mechanism; such a class is known as a priority class. All packets of priority classes enter the same priority queue. Bandwidth restriction is checked before packets enter the queues. When packets are dequeued, those in the priority queue are transmitted first, and then the packets in the queues of the other classes follow, dispatched in weighted fair mode.
So that packets in the other queues are not delayed too long, a maximum available bandwidth can be specified for each priority class when LLQ is used. This bandwidth value is used to police the traffic only in case of congestion: if no congestion occurs, the priority class is allowed to use more than the allocated bandwidth; if congestion occurs, packets of the priority class that exceed the allocated bandwidth are discarded. LLQ can also specify a burst size.
When matching rules against packets, the system always matches the priority classes first and then the other classes. Multiple priority classes are matched in their configuration order, and the same applies to the other classes. Multiple rules within a class are also matched in their configuration order.
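A simplified sketch of LLQ dispatching follows: one strict-priority queue served before a set of weighted class queues. A deficit counter stands in for the weighted fair stage, and the policing of the priority class under congestion and the WRED checks are omitted; all names and numbers are illustrative.

```python
from collections import deque

class Llq:
    """LLQ dispatch sketch: one strict-priority queue plus weighted class queues."""

    def __init__(self, class_weights):
        self.priority = deque()                            # all priority classes share one queue
        self.classes = {c: deque() for c in class_weights}
        self.weights = class_weights                       # per-class quantum in bytes per round
        self.deficit = {c: 0 for c in class_weights}

    def enqueue(self, packet_len, cls=None, priority=False):
        if priority:
            self.priority.append(packet_len)
        else:
            self.classes[cls].append(packet_len)

    def dequeue_round(self):
        """Serve the priority queue exhaustively, then one weighted pass over the classes."""
        sent = []
        while self.priority:                               # strict priority first
            sent.append(("priority", self.priority.popleft()))
        for cls, q in self.classes.items():
            self.deficit[cls] += self.weights[cls]         # add the class quantum
            while q and q[0] <= self.deficit[cls]:
                pkt = q.popleft()
                self.deficit[cls] -= pkt
                sent.append((cls, pkt))
        return sent

llq = Llq(class_weights={"video": 3000, "bulk": 1500})
llq.enqueue(200, priority=True)                            # e.g. a voice packet
llq.enqueue(1500, cls="video")
llq.enqueue(1500, cls="bulk")
print(llq.dequeue_round())   # voice first, then video and bulk per their weights
```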

VI. RTP (Real-time Transport Protocol) priority queuing

RTP priority queuing is used to solve the QoS problems of real-time services (including audio and video). Its principle is to put the RTP packets carrying audio or video into a high-priority queue and send them first, thereby minimizing delay and jitter and ensuring the quality of audio and video services, which are sensitive to delay.
An RTP packet is placed into the high-priority queue if it is a UDP packet whose destination port number is even and falls within a configurable range. The RTP priority queue can be used together with any other queuing mechanism (e.g. FIFO, PQ, CQ, WFQ or CBQ), and it always has the highest priority. Since the LLQ of CBQ can also serve real-time traffic, it is not recommended to use RTP priority queuing together with CBQ.
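A minimal classifier sketch, assuming the even UDP ports fall in 16384-32767 (a common range for RTP media, not stated above) and a simplified (protocol, port) packet representation.

```python
# Hypothetical, simplified packet representation: (protocol, udp_dst_port).
RTP_PORT_RANGE = range(16384, 32768)      # assumed configurable range, not from the text

def is_rtp(packet):
    """Match UDP packets whose destination port is even and inside the configured range;
    such packets go to the RTP priority queue, everything else to the normal queues."""
    protocol, port = packet
    return protocol == "udp" and port % 2 == 0 and port in RTP_PORT_RANGE

print(is_rtp(("udp", 16384)))   # True  -> RTP priority queue
print(is_rtp(("udp", 16385)))   # False -> odd port (e.g. RTCP), normal queuing
print(is_rtp(("tcp", 16384)))   # False -> not UDP
```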
