pfSense Multi-WAN Configuration: Part One

pfSense incorporates the ability to set up multiple WAN interfaces (multi-WAN), which allows you to utilize more than one Internet connection. This in turn enables you to achieve higher uptime and greater throughput capacity (for example, if you have one 1.5 Mbps connection and a second 2.5 Mbps connection, your total bandwidth in a multi-WAN setup would be 4 Mbps). It has been reported that some pfSense deployments have used as many as 12 WAN connections, and pfSense may scale even higher than that with the right hardware.

Any additional WAN interfaces are referred to as OPT WAN interfaces. In what follows, WAN refers to the primary WAN interface, and OPT WAN to any additional WAN interfaces.

There are several factors to consider in a multi-WAN deployment. First, you will want your connections to follow different cabling paths, so that a single cable cut does not take out multiple Internet connections. If you have one connection coming in over a copper pair, you probably want to choose a secondary connection utilizing a different type and path of cabling. In most cases, you cannot rely upon two or more connections of the same type to provide redundancy. Additional connections from the same provider are typically a solution only for additional bandwidth; the redundancy they provide is minimal at best.

Another consideration is the path from your connection to the Internet. With larger providers, two different types of connections will traverse significantly different networks until they reach core parts of the network. These core network components are generally designed with high redundancy, and problems are addressed quickly because they have widespread effects.

Whether an interface is marked as down or not is determined by the following ping command:

ping -t 5 -oqc 5 -i 0.7 [IP ADDRESS]

In other words, pfSense sends 5 pings (-c 5) to your monitor IP, waiting 0.7 seconds between each ping (-i 0.7). It waits up to 5 seconds (-t 5) for a response and exits successfully if even one reply is received (-o). This detects nearly all failures without being overly sensitive. Since the check succeeds even at 80 percent packet loss, it is possible for your connection to be experiencing so much packet loss that it is unusable yet still not be marked as down. Making the ping settings stricter, however, would result in false positives and flapping. Some of the ping options are configurable in pfSense 2.2.4.
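
To illustrate the logic in code, here is a minimal Python sketch of the same decision: send up to five probes 0.7 seconds apart and declare the gateway up as soon as a single reply arrives. The send_probe callable is a hypothetical stand-in for an actual ICMP echo request (raw ICMP sockets require root privileges); pfSense's own monitoring daemon works differently, so this only models the thresholds described above.

import time

def gateway_is_up(send_probe, count=5, interval=0.7, timeout=5.0):
    # Model of the monitor logic: the gateway is considered up if at
    # least one of `count` probes, sent `interval` seconds apart, is
    # answered before the overall `timeout` expires. `send_probe` is a
    # hypothetical callable standing in for an ICMP echo request/reply.
    deadline = time.monotonic() + timeout
    for i in range(count):
        remaining = max(0.0, deadline - time.monotonic())
        if send_probe(remaining):
            return True            # one reply is enough (the -o option)
        if i < count - 1:
            time.sleep(interval)   # -i 0.7: wait between probes
    return False                   # all probes lost -> interface marked down

# Even 80 percent packet loss passes: four lost probes and one reply
# still count as "up", which is why a badly degraded link may not be
# marked down.
replies = iter([False, False, False, False, True])
print(gateway_is_up(lambda remaining: next(replies)))   # True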

In the next article, we’ll cover WAN interface configuration in a multi-WAN setup.


External Links:

Network Load Balancing on Wikipedia

Bandwidth Monitoring with BandwidthD

BandwidthD

Configuring BandwidthD in pfSense 2.1.5.

BandwidthD tracks usage of TCP/IP subnets and builds HTML files with graphs to display utilization. Charts are built for individual IP addresses, and by default display utilization over 2 day, 8 day, 40 day, and 400 day periods. Furthermore, each IP address’s utilization can be logged at intervals of 3.3 minutes, 10 minutes, 1 hour or 12 hours in CDF format, or to a backend database server. HTTP, TCP, UDP, ICMP, VPN, and P2P traffic are color-coded.

BandwidthD can produce output in two ways. The first is as a standalone application that produces static HTML and PNG output every 200 seconds. The second is as a sensor that transmits its data to a backend database which is then reported on by dynamic php pages. The visual output of both is similar, but the database-driven system allows for searching, filtering, multiple sensors and custom reports. The BandwidthD plugin for pfSense can present the output in both ways.
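
As a rough illustration of what this kind of per-IP accounting involves, the sketch below (plain Python, not BandwidthD's actual code) buckets byte counts into the same 200-second intervals and keeps a running total per source address; the packet tuples are invented sample data.

from collections import defaultdict

INTERVAL = 200  # seconds, matching the static-output period described above

def bucket_traffic(packets):
    # Aggregate (timestamp, source_ip, byte_count) tuples into per-IP,
    # per-interval byte totals -- a toy model of per-host usage graphing.
    usage = defaultdict(int)   # (interval_start, ip) -> bytes
    for ts, ip, nbytes in packets:
        bucket = int(ts // INTERVAL) * INTERVAL
        usage[(bucket, ip)] += nbytes
    return usage

sample = [
    (10,  "192.168.1.10", 1500),
    (180, "192.168.1.10", 900),
    (250, "192.168.1.20", 400),   # falls into the next 200-second bucket
]
for (start, ip), total in sorted(bucket_traffic(sample).items()):
    print(f"{start:>4}s  {ip:<15} {total} bytes")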

BandwidthD Configuration and Installation

To install BandwidthD in pfSense, navigate to System -> Packages and scroll down to BandwidthD. Press the “plus” button on the right side, and on the next page, press “Confirm” to confirm installation. The package should finish installing within a few minutes.


Once installation is complete, there should be a new item on the “Services” menu called “Bandwidthd”. Navigating there reveals two tabs: “BandwidthD”, which allows you to configure the settings, and “Access BandwidthD”, which allows you to view data. The “BandwidthD” tab has several settings. The “Enable bandwidthd” check box simply enables BandwidthD. The “Interface” drop-down box allows you to select the interface to which BandwidthD will bind. “Subnet” allows you to specify the subnet (or subnets) on which BandwidthD will report; the subnet for the interface selected in “Interface” is automatically added to the configuration, so you do not have to specify it here. Subnets are specified in CIDR notation: the network address in dotted decimal, followed by a slash and the number of bits in the network prefix (e.g. 192.168.1.0/24). The next setting is “Skip Intervals”, which sets the number of intervals to skip between graphing; the default is 0, and each interval is 200 seconds (3 minutes, 20 seconds). The next setting, “Graph cutoff”, is how many kilobytes (KB) must be transferred by an IP address before it is graphed (the default is 1024).
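
If you are unsure whether a subnet string is in the expected form, Python's standard ipaddress module can check it; this is just a convenience snippet for validating CIDR notation, not part of pfSense or BandwidthD.

import ipaddress

def valid_subnet(spec):
    # Return True if `spec` is a valid network in CIDR notation,
    # e.g. "192.168.1.0/24". strict=True rejects addresses with host
    # bits set past the mask, such as "192.168.1.5/24".
    try:
        ipaddress.ip_network(spec, strict=True)
        return True
    except ValueError:
        return False

print(valid_subnet("192.168.1.0/24"))   # True
print(valid_subnet("192.168.1.5/24"))   # False: host bits set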

Viewing bandwidth usage in the BandwidthD web GUI.

The “Promiscuous” check box will put the interface in promiscuous mode to see traffic that may not be routed through the host machine. This will only work on a hub, where all packets are sent to all ports; if the interface is connected to a switch, then the interface will only see the traffic on its own port. The “output_cdf” check box allows you to log data to CDF files, while “recover_cdf”, if enabled, reads the CDF files back in on startup.

The “output PostgreSQL” check box allows you to log the data to a PostgreSQL database. If you enable this option, you need to specify a hostname, database name, username and password in the next four edit boxes. In the “sensor_id” field you can enter an arbitrary sensor name. In “Filter” you can specify a libpcap-format filter string to control what traffic BandwidthD sees. The “Draw graphs” check box, if enabled, draws graphs of the traffic; you can disable this if you only want CDF or database output. Finally, “Meta Refresh” sets the interval in seconds at which the browser graph display refreshes. The default is 150; specifying 0 disables it.

Clicking on the “Access BandwidthD” tab will open up a separate browser tab showing a table summarizing the types of traffic on the specified interface (FTP, HTTP, P2P, TCP, UDP, and ICMP), as well as graphs for each of the IP addresses on the interface.


External Links:

The official BandwidthD home page


Bandwidth Limiting with the pfSense Limiter

Creating a limiter in pfSense 2.1

Although we have covered a number of powerful features of pfSense’s traffic shaper, we haven’t yet covered one of its most interesting and useful capabilities: the ability to limit users’ upload and download speeds. In this article, I will describe how to use the pfSense bandwidth limiter.

Using the Bandwidth Limiter

To invoke the bandwidth limiter, first navigate to Firewall -> Traffic Shaper and click on the “Limiter” tab. On this tab, click on the “plus” button to add a new limiter. Check the “Enable limiter and its children” check box, and in the “Name” field, enter a name for the new limiter. At “Bandwidth”, click on the “plus” button to add a bandwidth limit. There are four options: “Bandwidth”, “Burst”, “Bw type” and “Schedule”. “Bandwidth” is the maximum transfer rate, while “Burst” is the total amount of data that will be transferred at full speed after an idle period; Burst is apparently a new setting under pfSense 2.1. “Bw type” allows you to select between Kbit/s, Mbit/s, Gbit/s, and bit/s. “Schedule” does not seem to have any options.

In the next section, “Mask”, you can select “Source address” or “Destination address” in the drop-down box. If either one is chosen, a dynamic pipe with the bandwidth, delay, packet loss and queue size specified for this limiter will be created for each source or destination IP address encountered, respectively. This makes it possible to easily specify bandwidth limits per host. In the next two fields, you can specify the IPv4 and IPv6 mask bits. At “Description”, you can enter a description, which will not be parsed.
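
To make the “Mask” option more concrete, here is a sketch of how per-host pipes can be keyed: each source address, truncated to the configured number of mask bits, gets its own bucket with the limiter's parameters. The Pipe class and its fields are invented for illustration and do not correspond to dummynet's internal structures.

import ipaddress
from dataclasses import dataclass

@dataclass
class Pipe:
    # Hypothetical stand-in for a dynamic pipe created per masked address.
    bandwidth_kbit: int
    queued_bytes: int = 0

pipes = {}

def pipe_for(src_ip, bandwidth_kbit=1024, mask_bits=32):
    # /32 gives one pipe per host; /24 would give one pipe per
    # 256-address block, and so on.
    key = ipaddress.ip_network(f"{src_ip}/{mask_bits}", strict=False)
    return pipes.setdefault(key, Pipe(bandwidth_kbit))

# With a /32 mask, two LAN hosts each get their own 1024 Kbit/s pipe:
print(pipe_for("192.168.1.10") is pipe_for("192.168.1.20"))   # False
print(pipe_for("192.168.1.10") is pipe_for("192.168.1.10"))   # True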


Underneath “Description” is the “Show advanced options” button. Pressing this button reveals some additional settings. “Delay” allows you to specify a delay before packets are delivered to their destination (leaving it blank or entering 0 means there is no delay). “Packet loss rate” allows you to specify the rate at which packets are dropped (e.g. 0.001 means 1 packet per 1000 gets dropped). Again, you can leave this blank. “Queue size” allows you to specify a number of slots for the queue, and “Bucket size” allows you to set the hash size. Finally, press the “Save” button to save the limiter or “Delete virtual interface” to delete it. Press “Apply changes” on the next page to apply the changes.
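
The relationship between “Bandwidth” and “Burst” can be pictured with a simple token-bucket model: tokens accumulate at the configured rate, an idle period lets them pile up to the burst size, and a packet passes immediately only when enough tokens are available. This is a conceptual sketch under those assumptions, not the dummynet scheduler that pfSense actually uses.

class TokenBucketLimiter:
    # Toy token bucket: rate_bps is the sustained limit ("Bandwidth"),
    # burst_bytes the amount that may pass at full speed after an idle
    # period ("Burst").
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0          # bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes           # start idle, bucket full
        self.last = 0.0

    def allow(self, nbytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True                     # passes immediately
        return False                        # would be queued, delayed or dropped

lim = TokenBucketLimiter(rate_bps=1_000_000, burst_bytes=125_000)
print(lim.allow(100_000, now=0.0))   # True: the burst absorbs it
print(lim.allow(100_000, now=0.1))   # False: only about 37,500 tokens remain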

Creating a firewall rule to limit upload bandwidth. Note that we are using the limiter created in the previous step.

Now, the limiter that we just created should be available when we create or edit firewall rules. As an example, we can use the limiter created in the previous step to limit upload bandwidth to 1 Gbit/s. Navigate to Firewall -> Rules, and click on the “LAN” tab. Press the “plus” button to add a new rule. Leave the “Action” as Pass, the “Interface” as LAN, and the “TCP/IP Version” as IPv4. The “Source” should be set to “LAN subnet”, and the “Destination” should be left as Type: any. After entering a “Description”, scroll down to Advanced features, press the “Advanced” button next to “In/Out”, and set the “In” queue to the limiter created in the previous step. Then press “Save” to save the rule and “Apply changes” on the next page.

Now the upload bandwidth on the LAN interface should be limited to 1 Gbit/s. When you navigate to Firewall -> Rules and click on the “LAN” tab, you should see a small purple circle next to the newly-created rule, indicating that the rule invokes the limiter. If you wanted to limit the download bandwidth as well, this could easily be done: just create another limiter specifying the maximum download bandwidth, and set the “Out” queue in the rule to the new limiter (or, if you just want to make the upload and download limits the same, use the original limiter).


Other Articles in This Series:

Traffic Shaping in pfSense: What it Does
Traffic Shaping Wizard: Introduction
Queue Configuration in pfSense 2.1
Traffic Shaping Rules in pfSense 2.1
Layer 7 Rules Groups in pfSense 2.1
Deep Packet Inspection Using Layer 7 Traffic Shaping

External Links:

PFSense 2.0 – Limiting users Upload and Download Speeds by Limiting Bandwidth at www.squidworks.net

pfSense 2.0 – Limit Download & Upload bandwidth per IP at YouTube

Queue Configuration in pfSense 2.1

Configuring the qVoIP traffic shaping queue under pfSense 2.1.

In the previous articles, I covered the basics of the traffic shaper and how to configure it using the wizard. In this article, we’ll examine how to monitor traffic shaping and some advanced configuration techniques.

To make sure the traffic shaper is working as it should, you can monitor it by navigating to Status -> Queues. This screen will show each queue listed by name, its current usage, and some other statistics. The graphical bar on this screen shows you how full a queue is. The rate of data in the queue is shown both in packets per second (pps) and bits per second (b/s). Borrows happen when a neighboring queue is not yet full and capacity is borrowed from it when needed. Drops happen when traffic in a queue is dropped in favor of higher-priority traffic. It is normal to see drops, and this does not mean that a full connection is dropped, just a packet. Usually, one side of the connection will notice that a packet was missed and resend it, often slowing down to avoid future drops. The suspends counter indicates when a delay action happens.

If you find that the rules generated by the traffic shaper do not quite fit your requirements, you may want to edit the shaper queues manually. You may have a service that is not handled by the wizard, a game that uses a different port, or some other case the wizard does not cover. It should be relatively easy to handle these cases.

The queues are where bandwidth and priorities are actually allocated. In pf, each queue is assigned a priority from 0 to 7. When there is an overload of traffic, the higher-numbered queues are preferred over the lower-numbered ones. Each queue is assigned either a hard bandwidth limit or a percentage of the total link speed. The queues can also be assigned other attributes that control how they behave. Queues may be changed by navigating to Firewall -> Traffic Shaper and clicking on the Interfaces tab. You will see a list of queues here, sorted by interface. Clicking on one of the queues will enable you to edit its settings.
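
The priority behaviour described above can be modelled in a few lines: when several queues hold packets, the one with the highest priority number is serviced first. This is a simplified model of priority-queue scheduling for illustration only, not the ALTQ code itself, and it ignores bandwidth accounting entirely.

from collections import deque

queues = {p: deque() for p in range(8)}   # priorities 0 (lowest) to 7 (highest)

def enqueue(priority, packet):
    queues[priority].append(packet)

def dequeue():
    # Serve the highest-numbered non-empty queue first.
    for priority in range(7, -1, -1):
        if queues[priority]:
            return queues[priority].popleft()
    return None

enqueue(1, "bulk download packet")
enqueue(7, "VoIP packet")
print(dequeue())   # "VoIP packet" -- the higher-priority queue wins
print(dequeue())   # "bulk download packet"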


Queue Configuration Options

Editing queues is not easy, and if you do not have a thorough understanding of the settings involved, it is best to leave them alone. That having been said, here are the options. “Enable/Disable queue and its children” and “Queue Name” are self-explanatory. “Priority” assigns a priority to the queue from 0 to 7 (7 is highest). The “Queue limit” field specifies the maximum length of the queue in packets. Next is “Scheduler options”; there are five different options here:

  • Default Queue: Selects this queue as the default, the one which will handle all unmatched packets. Each interface should have one and only one default queue.
  • Random Early Detection (RED): This is a method to avoid congestion on a link; it actively attempts to ensure that the queue does not get full. If the bandwidth is above the maximum given for the queue, drops will occur. Drops may also occur if the average queue size approaches the maximum. Dropped packets are chosen at random, so the more bandwidth a given connection uses, the more likely it is to see drops. The net effect is that the bandwidth is limited in a fair way, encouraging a balance. RED should only be used with TCP connections; TCP is capable of handling lost packets and can resend when needed. (A minimal model of the RED drop decision appears in the sketch after this list.)
  • Random Early Detection In and Out (RIO): Enables RED with in/out, which will result in having queue averages being maintained and checked against for incoming and outgoing packets.
  • Explicit Congestion Notification (ECN): Along with RED, it allows the sending of control messages that will throttle connections if both ends support ECN. Instead of dropping the packets as RED will normally do, it will set a flag in the packet indicating network congestion. If the other side sees and obeys the flag, the speed of the ongoing transfer will be reduced.
  • CoDel Active Queue: A newish form of AQM (Active Queue Management) designed to solve, at least in part, the problem of bufferbloat (persistently full buffers). The three primary innovations of CoDel are: [1] using the local minimum queue as the queue measure; [2] simplified single-state-variable tracking of how long the minimum has been above or below the target value for standing queue delay, rather than keeping a window of values to compute the minimum; and [3] use of queue-sojourn time rather than measuring queue size in bytes or packets. This simple description does not do justice to the CoDel AQM, but you can read more about it in the ACM white paper on CoDel (see External Links below).
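
As promised in the RED bullet above, here is a minimal sketch of the drop decision: the drop probability rises from zero at a minimum threshold to a configured maximum at a maximum threshold of the average queue length, and beyond that every packet is dropped. The threshold and max_p values are arbitrary illustration values, not ALTQ's or pfSense's defaults.

import random

def red_drop_probability(avg_qlen, min_th=5, max_th=15, max_p=0.1):
    # Classic RED: no drops below min_th, certain drop at or above
    # max_th, and a linearly increasing probability in between.
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

def admit(avg_qlen):
    # True if the packet is enqueued, False if RED drops it early.
    return random.random() >= red_drop_probability(avg_qlen)

for qlen in (3, 10, 20):
    print(qlen, red_drop_probability(qlen))
# 3 -> 0.0 (queue short, never drop), 10 -> 0.05, 20 -> 1.0 (always drop)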

The next two fields are fairly self-explanatory: at “Description“, you can enter a brief description, and at “Bandwidth“, you can specify the total bandwidth allocated to this queue. The options for “Service Curve” are interesting because they enable you to configure a Hierarchical Fair Service Curve (HFSC) for the queue. I covered HFSC in somewhat greater depth in a previous article, but here are the options in a nutshell:

  • m1: Burstable bandwidth limit
  • d: Time limit for the bandwidth burst, specified in milliseconds
  • m2: Normal bandwidth limit

In other words, the queue may use up to m1 bandwidth for the first d milliseconds, after which the normal maximum of m2 applies. Within the time set by d, m1 is the maximum; after d milliseconds have expired, if the traffic is still above m2, it will be shaped. If you read my article about HFSC or any of the other material relating to it, you will see that m1 and m2 are the two different slopes of the service curve.
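
As a small worked example of how the three parameters interact, the sketch below returns the bandwidth cap in effect at a given time after a queue becomes active: m1 applies for the first d milliseconds, m2 afterwards. This deliberately glosses over the real service-curve mathematics HFSC uses internally; it only illustrates the relationship described above, and the example values are made up.

def allowed_rate_kbit(t_ms, m1_kbit, d_ms, m2_kbit):
    # Bandwidth cap in effect t_ms milliseconds after the queue becomes
    # active: the burst value m1 within the first d milliseconds, the
    # normal value m2 afterwards.
    return m1_kbit if t_ms < d_ms else m2_kbit

# Illustrative values: burst to 2000 Kbit/s for the first 500 ms, then
# settle to a normal limit of 1000 Kbit/s.
for t in (100, 499, 500, 2000):
    print(t, allowed_rate_kbit(t, m1_kbit=2000, d_ms=500, m2_kbit=1000))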


Each of the values m1, d and m2 can be set for the following uses:

  • Upper Limit: Maximum bandwidth allowed for the queue. Will do hard bandwidth limiting.
  • Real Time: Minimum bandwidth guarantee for the queue. This is valid only for child queues.
  • Link Share: The bandwidth share of a backlogged queue. It will share the bandwidth between classes if the Real Time guarantees have been satisfied.

Finally, at the bottom of the page, there is a “Save” button to save the changes to a queue, “Add new queue” if you want to add a queue, and “Delete queue” to delete it. This covers the queue options in pfSense 2.1. In writing this article, it has occurred to me that I have not yet addressed some of the configuration and troubleshooting issues involved with traffic shaping, so these subjects will likely be topics for future blog postings.

Other Articles in This Series:

Traffic Shaping in pfSense: What it Does
Traffic Shaping Wizard: Introduction
QoS Management Using the Traffic Shaper Wizard
Traffic Shaping Rules in pfSense 2.1
Layer 7 Groups in pfSense 2.1
Bandwidth Limiting with the pfSense Limiter
Deep Packet Inspection Using Layer 7 Traffic Shaping

External Links:

Traffic Shaper in pfSense 2.0 – doesn’t cover 2.1, but it’s still a good article.

Controlling Queue Delay at queue.acm.org – this is the white paper on the CoDel AQM.
