Siproxd: Part One

Siproxd is a proxy/masquerading daemon for the SIP protocol. It handles registrations of SIP clients on a private IP network and performs rewriting of the SIP message bodies to make SIP connections work via a masquerading firewall (NAT). It allows SIP software clients or SIP hardware clients to work behind an IP masquerading firewall or NAT router.

SIP, or Session Initiation Protocol, is a signaling protocol used to initiate, control, and terminate interactive unicast or multicast user sessions with multimedia services such as Internet telephone calls, video conferencing, chat, file transfer, and online games. Originally developed in 1996 and standardized by the Internet Engineering Task Force in 2002 as RFC 3261, SIP is one of the most widely used signaling protocols and is the protocol of choice for most VoIP phones to initiate communication. By itself, SIP does not work via masquerading firewalls, as the transferred data contains IP addresses and port numbers. Thus, using SIP across a firewall requires some form of NAT traversal, but such solutions have disadvantages of their own or may not be applicable in certain situations. Siproxd does not aim to be a replacement for these solutions; however, in some situations it may bring advantages.

Siproxd: Installation and Configuration

Siproxd runs on a variety of Unix variants. It is currently known to work on:

  • Linux
  • FreeBSD
  • OpenBSD
  • SunOS
  • Mac OS X

Installation of the siproxd package in pfSense is easy. Navigate to System -> Packages, scroll down to siproxd in the packages list, and press the “plus” button (+) to install siproxd. On the next page, press the “Confirm” button to confirm installation, which should take less than two minutes.


Once siproxd is installed, you can begin configuration by navigating to Services -> siproxd. There are three tabs: “Settings”, “Users”, and “Registered Phones”. The “Settings” tab holds several general settings. Check the “Enable siproxd” check box to enable siproxd. The “Inbound interface” dropdown box allows you to select the interface facing your SIP clients (usually LAN), while the “Outbound interface” dropdown selects the interface facing the outside world (usually WAN). The “Listening port” edit box allows you to enter the port on which to listen for SIP traffic (the default port is 5060). The “Default expiration timeout” specifies the registration timeout (in seconds) used when a REGISTER request does not contain an Expires header or an “expires=” parameter.
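
For reference, these general settings correspond to directives in siproxd’s own configuration file, which the pfSense package generates behind the scenes. The following is only a rough sketch; the directive names are taken from siproxd’s example configuration (siproxd.conf.example), and the interface names and values shown here are illustrative, so check the file shipped with your version:

if_inbound  = em1        # interface facing the SIP clients (LAN side)
if_outbound = em0        # interface facing the Internet (WAN side)
sip_listen_port = 5060   # port on which to listen for SIP traffic
default_expires = 600    # default registration expiration, in seconds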

Under “RTP Settings“, you can configure settings for the Real-time Transport Protocol (RTP), a standardized packet format for delivering audio and video over IP networks. The “Enable RTP proxy” dropdown box allows you to enable or disable the RTP proxy (default is enabled). In the next two edit boxes, you can enter the lower and upper bounds for the RTP port range (the default is 7070 to 7079). Finally, you can enter the “RTP stream timeout”, the number of seconds after which an RTP stream will be considered dead and proxying it will stop (the default is 300 seconds).

The next section is “Dejittering Settings“, which allows you to set an artificial delay to de-jitter RTP data streams (both input and output). The default is zero (no dejittering).

Next is “SIP over TCP Settings”. Here you can configure a “TCP Inactivity timeout” to set the amount of time (in seconds) after which an idle TCP connection will be disconnected. The “TCP Connect Timeout” defines how many milliseconds siproxd will wait for a successful connect when establishing an outgoing SIP signaling connection. “TCP Keepalive” is used for TCP SIP signaling: if this parameter is greater than zero, empty SIP packets will be sent every n seconds (where n is the number specified) to keep the connection alive. The default value is zero (off).
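
Continuing the configuration-file sketch from above, the RTP, dejittering, and TCP-related settings map onto directives along these lines (again, names and defaults are as found in siproxd’s example configuration and may vary between versions):

rtp_proxy_enable = 1       # 1 enables the RTP proxy, 0 disables it
rtp_port_low  = 7070       # lower bound of the RTP port range
rtp_port_high = 7079       # upper bound of the RTP port range
rtp_timeout   = 300        # seconds of inactivity before an RTP stream is considered dead
rtp_input_dejitter  = 0    # artificial delay for incoming RTP streams; 0 = no dejittering
rtp_output_dejitter = 0    # artificial delay for outgoing RTP streams; 0 = no dejittering
tcp_timeout = 600          # seconds after which an idle TCP SIP connection is dropped
tcp_connect_timeout = 500  # milliseconds to wait for an outgoing TCP connect to succeed
tcp_keepalive = 0          # send empty SIP packets every n seconds; 0 = off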

In the next article, we will continue our look at configuring siproxd.


External Links:

The official Siproxd site.

Siproxd on SourceForge.

Real-time Transport Protocol on Wikipedia

netio: A Network Benchmark Tool


netio in action under pfSense 2.1.5.

netio is a network benchmark utility for OS/2 2.x, Windows, Linux and Unix. It measures the net throughput of a network over the TCP and UDP protocols using various packet sizes. For netio to run a benchmark, one instance has to run on one computer as a server process, while another instance on a second computer performs the benchmark. Starting with version 1.20, multi-threading support is required. While this does not affect anyone using the program under Linux or BSD, it did mean that DOS was no longer supported.

netio: Installation and Use

To install netio under pfSense, navigate to System -> Packages, and scroll down to netio in the list. Press the “plus” button to begin installation, and on the next screen, press “Confirm” to confirm installation. netio should complete installation within a few minutes.


Once netio is installed, there will be a new item on the Diagnostics menu called “netio“. If you navigate to it, you will find two tabs: “Client” and “Server“. The “Client” tab, appropriately enough, is to configure netio to run as a client, while “Server” will allow it to act as a server. On the “Client” tab there are two settings: “Server” (for the IP address or hostname netio will connect to) and “Port” (for the port that netio will connect to). On the “Server” tab, there is only one field: “Port“, to specify the port netio will bind to (the default is 18767). Press the “Save” button at the bottom to save settings.

Running netio at the command prompt under Windows 8.1.

Whether you run netio as a client or server, netio requires another node with which to connect. As a result, you are going to have to download netio, which you can do from the official netio site. The zip file contains both the source code and binaries for several platforms, including Windows, Linux, BSD, OS/2 and Mac OS X. Select the right binary for your platform and run netio from your system’s command prompt/shell.

At the risk of stating the obvious, if you are running netio under pfSense as a server, then you want to be running it under the other system as a client, and vice-versa. To test netio, I decided to run it under pfSense as a server (I kept the default port and just pressed “Save”). In Windows, I typed:

win32-i386 -t 192.168.2.1

where win32-i386 is the name of the Windows executable, -t specifies the TCP protocol, and 192.168.2.1 is the IP address of the server (my pfSense box). The output of netio can be seen in the screenshot on the right.
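
For completeness, here is roughly what the two sides of a netio benchmark look like when run by hand from a shell. Substitute the binary name for your platform (e.g. win32-i386 on Windows), and note that the exact flags may differ between versions, so check the usage text printed when you run netio without arguments:

netio -s -t              # on the server: listen for TCP benchmark connections
netio -t 192.168.2.1     # on the client: run a TCP benchmark against the server
netio -u 192.168.2.1     # on the client: run the same benchmark over UDP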

And here we are running it under Linux Mint 17.

One problem with this program is that if you connect with one protocol (e.g. TCP), you apparently cannot connect to the server again with another protocol (e.g. UDP). If you try to do this and get an “error code 10060” message, try restarting the server and then attempting the client connection again.

Did I mention that netio supports several platforms? This last screenshot shows what happened when I ran netio under Linux on an old IBM Lenovo M51 running Mint Linux 17. The only shortcoming is that the binary for Linux is version 1.30 of the program, not the latest version (1.32). Thus if you want the latest version under Linux, you’ll have to compile it yourself.


External Links:

The official netio site

HAProxy Load Balancing: Part Two

HAProxy

Listener configuration in HAProxy under pfSense 2.1.5.

In the previous article, we introduced HAProxy as a load balancing solution for TCP and HTTP-based applications. In this article, we will continue our look at HAProxy configuration.

The next setting on the “Settings” tab is “Global Advanced pass thru”, which is for text that you would like to pass through to the global settings area. The next section is “Configuration synchronization”. The first check box allows you to synchronize the HAProxy configuration to backup CARP members via XML-RPC, a remote procedure call protocol which uses XML to encode its calls and HTTP as a transport mechanism. The next two fields are for the username and password that will be used during configuration synchronization. The username is generally “admin” or an admin-level privileged account on the target system, and the password is generally the remote web configurator password. The next three fields are for sync hosts 1, 2, and 3. HAProxy will synchronize settings to a host if its IP address is specified here. Finally, there are two buttons at the bottom. “Save” will just save the configuration, whereas “Save and Check Config” will parse the automatically generated config file and check it for errors.


HAProxy: Configuring Listener Settings

The next tab is “Listener”. By pressing the “plus” button on the right side, you can add a listener on which HAProxy will accept connections. “Name” is the name of the listener, and you can also enter an optional “Description”. “Status” indicates whether the server pool is active or disabled. In the next field, “External address”, if you want the rule to apply to an IP address other than the IP address of the interface chosen here, you can select it; you need to define virtual IP addresses for it first. If you are trying to redirect connections on the LAN, select the “any” option. You can also specify the port to listen on at “External port”, a backend server pool, the default server port, and the protocol at “Type” (HTTP, HTTPS, TCP, or a health check). You can specify an ACL at “Access Control lists”.

Under “Advanced settings”, you can specify several other parameters. “Connection timeout” allows you to specify the amount of time in milliseconds HAProxy will wait for a connection to complete, while “Server timeout” indicates the amount of time to wait for data from the server. “Retries” indicates the number of retry attempts. “Balance” indicates which method is used to load balance. If “Round robin” is selected, each server is used in turn, according to its weight. The algorithm is dynamic, which means that server weights may be adjusted on the fly for slow starts. If “Source” is selected, the source IP address is hashed and divided by the total weight of the running servers to designate which server will receive the request. As a result, the same client IP address will always reach the same server as long as no server goes up or down. If the hash result changes because the number of running servers changes, many clients will be directed to a different server.
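
Under the hood, the pfSense package turns these listener and backend settings into an ordinary haproxy.cfg. As a rough illustration (this is a hand-written fragment with made-up names and addresses, not what the package actually generates), a simple HTTP listener with a round-robin backend might look like this:

frontend http-in
    bind 203.0.113.10:80           # external address and port of the listener
    mode http
    default_backend webfarm        # backend server pool

backend webfarm
    mode http
    balance roundrobin             # use "balance source" to hash the client IP instead
    timeout connect 30000          # connection timeout, in milliseconds
    timeout server  30000          # server timeout, in milliseconds
    retries 3                      # number of retry attempts
    server web1 192.168.1.10:80 weight 1 check
    server web2 192.168.1.11:80 weight 1 check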

In the next article, we will conclude our look at HAProxy configuration.


External Links:

The official HAProxy site

HAProxy on Wikipedia

Port Enumeration and Fingerprinting

Port Enumeration

Port enumeration is based on the ability to gather information from an open port, by either straightforward banner grabbing when connecting to an open port, or by inference from the construction of a returned packet. There is not much true magic here, as services are supposed to respond in a predictable manner.

Once the open ports have been identified by running a port scanner such as nmap, you need to verify what is actually running on those ports and thus move one step closer to completing port enumeration. For example, you might assume SMTP is running on TCP port 25, but perhaps the system administrator is trying to obfuscate the service and is running telnet on that port instead. The easiest way to check the status of a port is a banner grab: upon connecting to a service, the target’s response is captured and compared to a list of known services, such as the response when connecting to an OpenSSH server.
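
A quick way to try a banner grab yourself is with netcat, or you can let nmap’s service detection do the comparison against its signature database for you (the target address below is just an example):

nc -v 192.168.1.1 25                 # connect to port 25 and read whatever banner the service presents
nmap -sV -p 22,25,80 192.168.1.1     # probe the listed ports and match responses against known services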

Some services are wrapped in other frameworks, such as Remote Procedure Call (RPC). On UNIX-like systems, an open TCP port 111 indicates this. UNIX-style RPC can be queried with the rpcinfo command, or a scanner can send NULL commands on the various RPC-bound ports to enumerate what functions a particular RPC service performs.
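
For example, if TCP port 111 (the portmapper) is open on a UNIX-like target, a single rpcinfo command will list the RPC programs registered behind it:

rpcinfo -p 192.168.1.1     # list registered RPC programs, their versions, and the ports they use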


Fingerprinting

The next step after port enumeration is system fingerprinting. The goal of system fingerprinting is to determine the operating system type and version. There are two common methods of performing system fingerprinting: active and passive scanning. The more common active methods examine the responses a host sends to crafted TCP or ICMP packets. The TCP fingerprinting process involves setting flags in the header that different operating systems and versions respond to differently. Usually, several different TCP packets are sent and the responses are compared to known baselines to determine the remote operating system (OS). Typically, ICMP-based methods use fewer packets than TCP-based methods, so when you need to be more stealthy and can afford a less specific fingerprint, ICMP is a viable alternative. Higher degrees of accuracy can often be achieved by combining TCP/UDP and ICMP methods, assuming that no device between you and the target is reshaping packets and mismatching the signatures.
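
nmap’s OS detection is a typical example of active fingerprinting: it sends a series of crafted probes and compares the responses against its fingerprint database. For instance:

nmap -O 192.168.1.1                  # active OS fingerprinting of the target
nmap -O --osscan-guess 192.168.1.1   # ask nmap to make a guess even when the match is not exact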

Passive fingerprinting provides the ultimate in stealthy detection. Unlike the active method, this style of fingerprinting does not send any packets, but depends on sniffing techniques to analyze the information sent in normal network traffic. If your target is running publicly available services, passive fingerprinting may be a good way to start your fingerprinting. A drawback of passive fingerprinting is that it is less accurate than a targeted active fingerprinting session and relies on an existing traffic stream.
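
A commonly used passive fingerprinting tool is p0f, which simply listens on an interface and infers operating systems from the traffic it sees (the interface name here is illustrative):

p0f -i em0     # passively fingerprint hosts seen on interface em0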


External Links:

Defining Footprinting, Fingerprinting, Enumeration and SNMP Enumeration at the World of Information Technology and Security blog

Router Hacking Part 2 (Service Enumeration, Fingerprinting, And Default Accounts) at www.securitytube.net

Fingerprinting at projects.webappsec.org

Port Forwarding with NAT in pfSense

Firewall Configuration: NAT port forwarding

Firewall -> NAT configuration page in the pfSense web GUI.

In computer networking, Network Address Translation (NAT) is the process of modifying IP address information in IPv4 headers while in transit across a traffic routing device. In most cases, it involves translating from the WAN IP address to the 192.168.x.x addresses of your local network. In this article, I will describe how to set up NAT port forwarding.

NAT and firewall rules are distinct and separate. NAT rules forward traffic, while firewall rules block or allow traffic. In the next article, I will cover firewall rules, but for now keep in mind that just because a NAT rule is forwarding traffic does not mean the firewall rules will allow it.

NAT Port Forwarding

NAT port forwarding rules can differ in complexity, but in this example, let’s assume we set up an Apache server at 192.168.1.125 on the local network, and we want to direct all HTTP traffic (port 80) to that address. First, browse to Firewall -> NAT. The options are “Port Forward”, “1:1” and “Outbound”. Select the “Port Forward” tab. Click the “plus” button in order to create a new NAT port forward rule. “Disable the rule” and “No RDR” can be left unchanged. For “Interface” you can choose WAN or LAN; we are concerned with incoming requests from the Internet, so you can keep it as WAN.


For “Protocol”, there are five choices: TCP, UDP, TCP/UDP, GRE, and ESP. TCP stands for Transmission Control Protocol, and is one of the core transport-layer protocols of the Internet protocol suite. This is usually what we want to use. Next is UDP, or User Datagram Protocol, another transport-layer protocol in the Internet protocol suite. It is suitable for purposes where error checking and correction are either not necessary or are performed in the application. GRE stands for Generic Routing Encapsulation, a tunneling protocol that can encapsulate a wide variety of network layer protocols inside virtual point-to-point links. It can be used, among other things, in conjunction with PPTP to create VPNs. ESP stands for Encapsulating Security Payload, a member of the IPsec protocol suite which provides authenticity, integrity and confidentiality protection of packets. In this port forwarding scenario, you can leave the protocol unchanged (TCP).

Firewall Configuration: NAT

Adding a NAT port forwarding rule.

For “Source“, you can specify the allowed client source. Typically you can leave it as “any”, but there are several choices: “Single host or alias“, “Network“, “PPTP clients“, “PPPoE clients“, “L2TP clients“, “WAN subnet“, “WAN address“, “LAN subnet“, and “LAN address“. In this case, you can leave the default (any) unchanged.

“Source port range” can be left at the default (any); clients connect from random ephemeral ports, so this should generally not be set to a specific port. “Destination” offers the same choices as “Source” and can be left unchanged. “Destination port range” should be changed to HTTP in both the from and to drop-down boxes, since we want to redirect HTTP traffic (port 80). For “Redirect target IP”, specify the web server to which the traffic is to be forwarded (in our case, 192.168.1.125). For “Redirect target Port”, choose HTTP. Next is “No XMLRPC Sync”; enable this option to prevent this rule from being applied to any redundant firewalls using CARP. This option can be left unchecked for now. “NAT Reflection” can be enabled or disabled; usually it is disabled. “Filter Rule association” will automatically create a firewall rule and associate it with this NAT rule. Check this box to avoid having to create a separate firewall rule. Add a description if you wish, and press “Save” to save the changes. The port forwarding rule should now be in effect.

NAT Port Redirection

In this case, we passed traffic from port 80 on the source to port 80 on the destination, which is the classic port forwarding scenario. But there’s no reason you can’t redirect traffic to a different port. There are two reasons you might want to do this:

[1] Security: A good way to thwart hackers is to put services on non-standard ports. For example, everyone knows the standard port for FTP is 21, but an outsider is less likely to find your FTP server if you place it on a non-standard port, preferably a high port number (e.g. 51782). The same can be said of SSH. Users will have to know the port in order to access it.

[2] Single Public IP Address, more than one computer with the same services: Smaller networks with only a single public IP address may be stuck if they want to expose several public services. For example, imagine that we want to have two separate FTP servers, but on two separate computers. With port redirection, we create two different NAT rules: the first rule will redirect port 51782 to port 21 on FTPServer1, and the second will redirect port 51783 to port 21 on FTPServer2. We can then reach two separate FTP servers on two different computers using the same public IP address, as sketched below.
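
In other words, the two rules establish a mapping along these lines (all addresses and hostnames here are illustrative):

WAN:51782  ->  192.168.1.10:21   (FTPServer1)
WAN:51783  ->  192.168.1.11:21   (FTPServer2)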


External Links:

Port Forwarding Troubleshooting at doc.pfsense.org
