Queue Configuration in pfSense 2.1


Configuring the qVoIP traffic shaping queue under pfSense 2.1.

In the previous articles, I covered the basics of the traffic shaper and how to configure it using the wizard. In this article, we’ll examine how to monitor traffic shaping and some advanced configuration techniques.

To verify that the traffic shaper is working as it should, you can monitor it by navigating to Status -> Queues. This screen lists each queue by name, along with its current usage and other statistics. The graphical bar shows how full a queue is. The rate of data in each queue is shown both in packets per second (pps) and bits per second (b/s). Borrows happen when a queue needs more than its allocation and borrows spare capacity from a neighboring queue that is not yet full. Drops happen when traffic in a queue is dropped in favor of higher-priority traffic. Seeing drops is normal, and it does not mean that a whole connection is dropped, just a packet; usually, one side of the connection will notice the missing packet and resend it, often slowing down to avoid future drops. The suspends counter indicates when a delay action happens.
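
The same counters can also be read from the command line. As a sketch (this assumes shell access to the pfSense box, for example via SSH or Diagnostics -> Command Prompt), pfctl can dump the queue statistics directly:

```shell
# Show the ALTQ queues with verbose statistics (packets, bytes,
# drops, borrows, suspends). Adding a second -v makes pfctl
# refresh the counters periodically, much like Status -> Queues.
pfctl -vs queue
pfctl -vvs queue
```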

If you find that the rules generated by the traffic shaper do not quite fit your requirements, you may want to edit the shaper queues manually. Perhaps you have a service the wizard does not handle, a game that uses a different port, or some other case the wizard does not cover. It should be relatively easy to handle these cases.

The queues are where bandwidth and priorities are actually allocated. In pf, each queue is assigned a priority from 0 to 7. When there is an overload of traffic, the higher-numbered queues are preferred over the lower-numbered ones. Each queue is also assigned either a hard bandwidth limit or a percentage of the total link speed, and queues can be given other attributes that control how they behave. To change a queue, navigate to Firewall -> Traffic Shaper and click on the Interfaces tab. You will see the queues listed here, sorted by interface; clicking on one of them will enable you to edit its settings.
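
To make the relationship between queues, priorities, and matching rules concrete, here is a rough sketch of the underlying pf rules (illustrative ALTQ syntax only; the interface name, rates, and queue names are invented, and pfSense normally generates these rules for you):

```
# Illustrative ALTQ configuration: a PRIQ scheduler on em0 with
# three queues. Under contention, higher priority numbers win.
altq on em0 priq bandwidth 10Mb queue { qvoip, qacks, qdefault }
queue qvoip    priority 7
queue qacks    priority 6
queue qdefault priority 1 priq(default)

# Traffic is steered into a queue by matching pass rules:
pass out on em0 proto udp to any port 5060 keep state queue qvoip
```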

Queue Configuration Options

Editing queues is not easy, and if you do not have a thorough understanding of the settings involved, it is best to leave them alone. That having been said, here are the options. “Enable/Disable queue and its children” and “Queue Name” are self-explanatory. “Priority” assigns a priority to the queue from 0 to 7 (7 is highest). The “Queue limit” field specifies the maximum length of the queue in packets; packets that arrive at a full queue are dropped. Next is “Scheduler options”; there are five different options here:

  • Default Queue: Selects this queue as the default, the one which will handle all unmatched packets. Each interface should have one and only one default queue.
  • Random Early Detection (RED): This is a method to avoid congestion on a link; it will actively attempt to ensure that the queue does not get full. If the bandwidth is above the maximum given for the queue, drops will occur. Also, drops may occur if the average queue size approaches the maximum. Dropped packets are chosen at random, so the more bandwidth in use by a given connection, the more likely it is to see drops. The net effect is that the bandwidth is limited in a fair way, encouraging a balance. RED should only be used with TCP connections; TCP is capable of handling lost packets and can resend when needed.
  • Random Early Detection In and Out (RIO): Enables RED with in/out, which will result in having queue averages being maintained and checked against for incoming and outgoing packets.
  • Explicit Congestion Notification (ECN): Along with RED, it allows the sending of control messages that will throttle connections if both ends support ECN. Instead of dropping the packets as RED will normally do, it will set a flag in the packet indicating network congestion. If the other side sees and obeys the flag, the speed of the ongoing transfer will be reduced.
  • CoDel Active Queue: A newish form of AQM (Active Queue Management) designed to solve, at least in part, the problem of bufferbloat (persistently full buffers). The three primary innovations of CoDel are: [1] using the local minimum queue as the queue measure; [2] simplified single-state-variable tracking of how long the minimum has been above or below the target value for standing queue delay, rather than keeping a window of values to compute the minimum; and [3] using queue-sojourn time rather than measuring queue size in bytes or packets. This brief description does not do justice to the CoDel AQM, but you can read more about it in the ACM white paper on CoDel (linked under External Links below).
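
To give a feel for how RED chooses which packets to drop, here is a minimal Python sketch of the drop decision. It is simplified from the real algorithm: the threshold and probability values are illustrative, and the exponentially weighted computation of the average queue length is omitted.

```python
import random

def red_drop(avg_qlen, min_th, max_th, max_p):
    """Simplified RED drop test: below min_th, never drop; at or
    above max_th, always drop; in between, drop with a probability
    that grows linearly toward max_p as the average queue fills."""
    if avg_qlen < min_th:
        return False
    if avg_qlen >= max_th:
        return True
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p

# A queue averaging 5 packets against a 10-packet floor is safe:
print(red_drop(5, 10, 30, 0.1))   # False
# A queue averaging 35 packets against a 30-packet ceiling drops:
print(red_drop(35, 10, 30, 0.1))  # True
```

Because the middle region is probabilistic, connections sending more packets see proportionally more drops, which is what makes RED's throttling roughly fair.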

The next two fields are fairly self-explanatory: at “Description”, you can enter a brief description, and at “Bandwidth”, you can specify the total bandwidth allocated to this queue. The options for “Service Curve” are interesting because they enable you to configure a Hierarchical Fair Service Curve (HFSC) for the queue. I covered HFSC in somewhat greater depth in a previous article, but here are the options in a nutshell:

  • m1: Burstable bandwidth limit
  • d: Time limit for the bandwidth burst, specified in milliseconds
  • m2: Normal bandwidth limit

In other words, the queue may burst up to m1 for the first d milliseconds; within the time set by d, m1 is the maximum. After d milliseconds have expired, any traffic still above m2 will be shaped. If you read my article about HFSC or any of the other material relating to it, you will see that m1 and m2 are the two different slopes of the curve.
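
As a worked example, the two-slope behavior can be written out in a few lines of Python (the rates and times below are made-up illustrative values, not pfSense defaults):

```python
def hfsc_service(t_ms, m1, d_ms, m2):
    """Total service (in Kbit) guaranteed by time t_ms under a
    two-piece HFSC service curve: slope m1 (Kbit/s) for the first
    d_ms milliseconds, slope m2 (Kbit/s) afterwards."""
    if t_ms <= d_ms:
        return m1 * t_ms / 1000.0
    return (m1 * d_ms + m2 * (t_ms - d_ms)) / 1000.0

# Burst at 1000 Kbit/s for 200 ms, then settle to 500 Kbit/s:
# after one second the queue has been guaranteed
# 1000 * 0.2 + 500 * 0.8 = 600 Kbit of service.
print(hfsc_service(1000, 1000, 200, 500))  # 600.0
```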

Each of the values m1, d and m2 can be set for the following uses:

  • Upper Limit: Maximum bandwidth allowed for the queue. Will do hard bandwidth limiting.
  • Real Time: Minimum bandwidth guarantee for the queue. This is valid only for child queues.
  • Link Share: The bandwidth share of a backlogged queue. It will share the bandwidth between classes if the Real Time guarantees have been satisfied.
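
In raw pf terms, these three uses map onto the hfsc() options of a queue definition. Here is a hedged sketch (illustrative ALTQ-style syntax; the queue name and figures are invented, and pfSense writes these rules for you):

```
# Illustrative HFSC queue: a real-time guarantee of 500 Kbit/s
# after an initial 200 ms burst at 1 Mbit/s, a 30% link share
# once guarantees are met, and a hard 5 Mbit/s upper limit.
queue qvoip bandwidth 25% hfsc( realtime(1Mb 200 500Kb) \
                                linkshare 30% \
                                upperlimit 5Mb )
```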

Finally, at the bottom of the page, there is a “Save” button to save the changes to a queue, “Add new queue” if you want to add a queue, and “Delete queue” to delete it. This covers the queue options in pfSense 2.1. In writing this article, it has occurred to me that I have not yet addressed some of the configuration and troubleshooting issues involved with traffic shaping, so these subjects will likely be topics for future blog postings.

Other Articles in This Series:

Traffic Shaping in pfSense: What it Does
Traffic Shaping Wizard: Introduction
QoS Management Using the Traffic Shaper Wizard
Traffic Shaping Rules in pfSense 2.1
Layer 7 Groups in pfSense 2.1
Bandwidth Limiting with the pfSense Limiter
Deep Packet Inspection Using Layer 7 Traffic Shaping

External Links:

Traffic Shaper in pfSense 2.0 – doesn’t cover 2.1, but it’s still a good article.

Controlling Queue Delay at queue.acm.org – this is the white paper on the CoDel AQM.

pfSense Traffic Shaping: Part Two (Hierarchical Fair Service Curve)

In Part One, I gave a brief overview of the traffic shaping wizard. Underlying pfSense’s traffic shaping capabilities are several traffic shaping algorithms. In this article, I will cover one of them: Hierarchical Fair Service Curve.

Hierarchical Fair Service Curve (HFSC)

Traditional Packet Fair Queuing algorithms such as Weighted Fair Queueing allocate to each backlogged flow at least a weighted fair share of service and achieve instantaneous fairness. When used with reservation (in which resources are reserved), PFQ can provide a minimum bandwidth guarantee. When used without reservation, PFQ can provide fair service and protection against malicious flows for best-effort traffic. For these reasons, PFQ has become the most popular QoS mechanism for integrated services.

While PFQ remains popular, it has two limitations: [1] limited resource management capability; and [2] coupled delay and bandwidth allocation. Hierarchical Fair Service Curve (HFSC) is designed to address both these limitations.

With PFQ, each traffic flow is treated as an independent entity, and excess bandwidth is distributed among all backlogged flows in the system rather than distributing excess bandwidth among flows within an organizational boundary. Thus, PFQ has limited resource management capability.

Hierarchical Packet Fair Queuing was proposed to extend basic PFQ algorithms to support link-sharing. Under this model, hierarchical link-sharing is implemented, whereby there is a class hierarchy associated with each link that specifies the resource allocation policy for the link. A class represents a traffic stream or some aggregate of traffic streams that are grouped according to administrative affiliation, protocol, traffic type, or other criteria.

Example of a concave service curve.

Hierarchical link-sharing can be illustrated using a hypothetical scenario involving a 45 Mbps link that is shared by two organizations, Carnegie Mellon University (CMU) and the University of Pittsburgh (U.Pitt.). [1] The hierarchical link-sharing service is aimed at several important goals. First, each class should receive a certain minimum amount of resource if there is enough demand. In this example, CMU’s traffic should receive at least 25 Mbps of bandwidth during a period when the aggregate traffic from CMU has a higher arrival rate. Similarly, if there is resource contention between traffic classes within CMU, the video traffic should get at least 10 Mbps. Also, if there are only audio and video streams from CMU, the audio and video traffic should receive all the bandwidth allocated to CMU if the demand is high enough.

While the above policy specifies that the CMU audio and video traffic classes have priority to use any excess bandwidth unused by the data traffic, there is also the issue of how the excess bandwidth will be distributed between the audio and video classes. The second goal of the hierarchical link-sharing service is therefore to have a policy for distributing the excess bandwidth unused by a class to other classes. In this case, the policy could be set so that if the CMU Video class is not using all of its 10 Mbps, the CMU Audio and CMU Data classes have priority to receive the excess bandwidth first. Only if CMU cannot use all the excess is the remaining bandwidth distributed to the University of Pittsburgh. HFSC is based on this hierarchical link-sharing model and thus provides more powerful resource management than PFQ. However, HFSC has an important additional benefit over HPFQ: it can decouple the allocation of delay and bandwidth.
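
The CMU/U.Pitt. hierarchy could be expressed as a queue tree. Purely as an illustration (invented interface name and ALTQ-style syntax; this is not a rule set from the paper or from pfSense):

```
# 45 Mbps link shared by two organizations, each with child classes.
altq on em0 hfsc bandwidth 45Mb queue { cmu, upitt }
queue cmu   bandwidth 25Mb hfsc { cmu_video, cmu_audio, cmu_data }
queue  cmu_video bandwidth 10Mb hfsc
queue  cmu_audio bandwidth  5Mb hfsc
queue  cmu_data  bandwidth 10Mb hfsc(default)
queue upitt bandwidth 20Mb hfsc
```

Excess bandwidth unused by one child (say, cmu_video) is first offered to its siblings under cmu; only what CMU as a whole cannot use spills over to upitt.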

Under PFQ (or for that matter, HPFQ), only one parameter is used to characterize the performance of a flow: its guaranteed bandwidth. A worst case packet delay bound can be derived from a given guaranteed bandwidth. In this case, the only way to reduce worst case delay is to increase the bandwidth reservation. This may lead to inefficient resource utilization when there are low-bandwidth low-delay flows.

Example of a convex service curve.

The solution to this problem used by HFSC is the service curve model. The service curve is a two-piece linear curve, in which m1 is the slope of the first segment, m2 is the slope of the second segment, and d is the point where the two segments meet. Delay is plotted on the x-axis, while bandwidth is on the y-axis. We call the curve concave if m1 > m2, and convex if m2 > m1. By using a concave service curve, HFSC can achieve a lower delay than under PFQ without over-reserving link bandwidth. As a result, HFSC provides decoupled delay and bandwidth allocation in an integrated fashion.
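
A quick Python sketch of the slope comparison described above (the units are arbitrary, but m1 and m2 must use the same ones):

```python
def classify_service_curve(m1, m2):
    """Classify a two-piece linear service curve by its slopes:
    concave (m1 > m2) front-loads service and lowers delay;
    convex (m2 > m1) defers it; equal slopes are a plain rate."""
    if m1 > m2:
        return "concave"
    if m2 > m1:
        return "convex"
    return "linear"

print(classify_service_curve(1000, 500))  # concave
print(classify_service_curve(500, 1000))  # convex
```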

With these advantages, it is not surprising that HFSC is gaining acceptance and is one of the traffic shaping algorithms implemented in pfSense. In part three of this series on traffic shaping, I will cover class based queuing and priority queuing.


[1] This was given as an example in “A Hierarchical Fair Service Curve Algorithm for Link-Sharing, Real-Time and Priority Services”, Ion Stoica, Hui Zhang, T. S. Eugene Ng, Carnegie Mellon University, Pittsburgh, PA.

External Links:

Ion Stoica et al., “A Hierarchical Fair Service Curve Algorithm for Link-Sharing, Real-Time and Priority Services”

Hierarchical Packet Schedulers

Hierarchical Fair Service Curve at Wikipedia

© 2013 David Zientara. All rights reserved. Privacy Policy