Specifying Specific SSH Keys for Git Remotes

When using Git over SSH with services such as GitHub and Gitorious, it can be useful to use a specific SSH key rather than your default.

This is accomplished with an entry such as the following in your ~/.ssh/config:

Host some-alias
    IdentityFile /home/username/.ssh/id_rsa-some-alias
    IdentitiesOnly yes
    HostName git.example.com
    User git

You then specify this remote as follows in .git/config:

url = some-alias:project/repository.git
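For an existing checkout, the same can be done from the command line (a quick sketch; the remote name origin and the repository path are placeholders):

    git remote set-url origin some-alias:project/repository.git

Git will then connect as git@git.example.com with the specified identity file whenever you fetch from or push to that remote.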

Monitoring Asterisk via SNMP (with OSS_SNMP)

Over on the company blog, we just announced that we have committed a complete Asterisk MIB implementation to OSS_SNMP, as well as a proof-of-concept Asterisk wallboard to NOCtools, which includes active channels, uptime, version information and more.

Full details are on the company post but here’s a snippet of how easy it is to query Asterisk over SNMP:

 $host = new \OSS_SNMP\SNMP( $asteriskIP, $community );
 echo "Asterisk version: " . $host->useAsterisk()->version();
 echo "Active calls: " . $host->useAsterisk()->callsActive();
 echo "Calls processed: " . $host->useAsterisk()->callsProcessed();

And here’s a screenshot of the proof of concept Asterisk wallboard:

[NOCtools Asterisk Wallboard]
NOCtools – Asterisk Wallboard

A PHP SNMP Library for People Who HATE SNMP, MIBs and OIDs!

I hate SNMP! But I have to use it on a daily basis with my company, Open Solutions.

Don’t get me wrong, it’s an essential tool in the trade of network engineering but it’s also a serious PITA. Finding MIBs, OIDs, making them work, translating them, cross-vendor translations, etc., blah, blah. And then, when you do find what you need, you’ll have forgotten it months later when you need it again.

Anyway, while trying to create some automatic L2 topology graphing tools (via Cisco/Foundry Discovery Protocol, for example) and also some per-VLAN RSTP tools to show port states, I started writing this library. As I wrote it, I realised it was actually very useful, and I present it here now in the hope that the wider network engineering community will find it useful and also contribute back MIBs.

An example may best illustrate the library:

First, we need to instantiate an SNMP object with a hostname / IP address and a community string:

$ciscosw = new \OSS_SNMP\SNMP( $ip, $community );

Assuming the above is a standard Cisco switch, let’s say we want to get an associative array of VLAN names indexed by the VLAN IDs:

print_r( $ciscosw->useCisco_VTP()->vlanNames() ); 

This yields something like the following:

Array (
    [1] => default
    [2] => mgmt
    [100] => cust-widgets
    [1002] => fddi-default
    ...
)

It really is that easy.

See the GitHub project page: https://github.com/opensolutions/OSS_SNMP

“Go Faster” Websites – Introducing Minify

We’ve been minifying and bundling CSS and JS for years to ensure quick page loads of the applications we build. We’ve now generalised, documented and packaged the tool we use for this and released it under a BSD license so others can benefit.

Most web developers know that including lots of JS and CSS files in a site slows page load times down. Most also know that these files should be minified and bundled into a single file on production sites. Most developers don’t do this though – it adds a lot of extra steps to putting new changes live.

Also, using CDNs or setting expiry times well into the future for mostly static files such as CSS and JS significantly improves page load, as clients will grab these files once and use their local cache until they expire. This poses an issue for web developers that is easily overcome by versioning these files – literally adding a version number to the bundles – so that, for example, min.bundle-v6.css would be version 6 of the minified and bundled CSS.
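For example, version 6 of the bundles might be referenced in a page template as follows (the paths are illustrative):

    <link rel="stylesheet" type="text/css" href="/css/min.bundle-v6.css" />
    <script type="text/javascript" src="/js/min.bundle-v6.js"></script>

Bumping the version number in the file name forces every client to fetch the new bundle, even when the old one is cached with a far-future expiry date.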

We’ve been doing both of these for a long time with the sites we build. See our page on GitHub to download this tool and for examples of its use:

https://github.com/opensolutions/Minify

This tool will:

  • automatically find all CSS/JS files in a given directory named xxx-blah.css, where xxx is a three digit ordering / sequence number (see the example after this list);
  • minify these files and create a single file bundle, including them in the correct order;
  • automatically generate template include files allowing production / development mode (i.e. use individual CSS/JS files or bundles based on an application option);
  • version bundles for those using CDNs, future expiry dates, etc., to ensure clients load fresh JS/CSS bundles.
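For instance, a source directory following this naming convention might look like the following (a hypothetical layout):

    css/010-reset.css
    css/020-layout.css
    css/030-site.css

These would be minified and combined, in sequence order, into a single versioned bundle such as min.bundle-v1.css.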

If you use it, please drop us a note to let us know how you get on! 

We’ve Released Some of our Nagios Plugins

We create a lot of Nagios installations: for our own systems, for customer systems which we manage, and as a managed service over at Open Solutions. We’ve written a lot of custom Nagios plugins over the years as part of this process.

We are now making a concerted effort to find them, clean them, maintain them centrally and release them for the good of others.

To that end, we have created a repository on GitHub for the task, with a detailed readme file.

The main goals for the Nagios plugins that we write and release are:

  • BSD (or BSD-like) license so you can hack away and mould them into something that may be more suitable for your own environment;
  • scalable: if we are polling power supply units (PSUs) in a Cisco switch, it should not matter whether there is one or a hundred – the script should handle them all;
  • WARNINGs are designed for email notifications during working hours; CRITICAL means an out of hours text / SMS message;
  • each script should be an independent unit with no dependencies on the others or unusual Perl module requirements;
  • the scripts should all be run with --verbose on new kit. This will provide an inventory of what they find as well as show anything that is being skipped. OIDs that a script searches for but that the target device reports as unsupported should be skipped via the various --skip-xxx options (see the illustration after this list);
  • useful help available via --help or -?
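As an illustration, a first run on new kit might look like the following (the script name and options here are hypothetical placeholders rather than a specific plugin from the repository):

    ./check_switch_psus.pl -H switch01.example.com -C public --verbose

The verbose output should inventory every PSU found (and anything skipped), after which --skip-xxx style options can be used to silence OIDs the device reports as unsupported.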

Using Doctrine ORM with Zend Application

In this first of a series of articles where we delve into some of the hidden treasures in our ViMbAdmin application, we look at how to integrate Doctrine ORM with Zend – and specifically Zend_Application and Zend_Controller.

The first assumption (and requirement) we are going to make is that you are using Zend_Application. If you want to see a working application set up and configured for this, please check out or browse our ViMbAdmin source code – which we’ll reference throughout this document.

Zend Application has a resource framework which allows us to bootstrap various resources on demand. We have created a Doctrine resource for this very purpose which you can download from here (and you may need to edit the class name and change the plugin path in the config code below to match your setup). Our implementation does many things:

  • instantiates the Doctrine object;
  • sets up an autoloader for Doctrine models;
  • instantiates the Doctrine manager;
  • opens the connection to the database;
  • sets all collations and character sets to UTF8 (this is hard coded but can easily be changed);
  • sets various hard coded Doctrine attributes which can also be changed.
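A minimal sketch of the shape of such a resource (the class name, option keys and the specific Doctrine 1.x calls used here are assumptions – see the ViMbAdmin source for the full implementation):

    class OSS_Resource_Doctrine extends Zend_Application_Resource_ResourceAbstract
    {
        public function init()
        {
            $options = $this->getOptions();

            // register Doctrine's own autoloader
            spl_autoload_register( array( 'Doctrine', 'autoload' ) );

            // instantiate the manager and open the database connection
            $manager = Doctrine_Manager::getInstance();
            $conn    = Doctrine_Manager::connection( $options['connection_string'] );

            // force UTF8 character set and collation (hard coded, as noted above)
            $conn->setCharset( 'utf8' );
            $conn->setCollate( 'utf8_unicode_ci' );

            // load the model classes from the configured path
            Doctrine::loadModels( $options['models_path'] );

            return $conn;
        }
    }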

We then add various configuration parameters to the application.ini file:
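A representative set of entries might look like the following (the plugin path, resource prefix and credentials here are assumptions to be adapted to your setup):

    ; tell Zend_Application where to find the Doctrine resource plugin
    pluginPaths.OSS_Resource = APPLICATION_PATH "/../library/OSS/Resource"

    ; options consumed by the resource
    resources.doctrine.connection_string = "mysql://user:password@localhost/vimbadmin"
    resources.doctrine.models_path       = APPLICATION_PATH "/models"

With this in place, you can bootstrap the resource from within your Bootstrap class via $this->bootstrap( 'doctrine' );.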

 

Or the following where $application is the instance of Zend_Application:

$application->getBootstrap()->bootstrap( 'doctrine' );

From that, you can use Doctrine to your heart’s content!

We also have a Doctrine CLI script which works from the same resource. See:

http://code.google.com/p/vimbadmin/source/browse/trunk/bin/doctrine-cli.php

 

Useful RANCID Debugging Tips

I always find it difficult to find a good reference for RANCID debugging strategies and, after spending the afternoon on doing same on one installation, put together my own list.

Note that in the following I use clogin and rancid, which assumes a Cisco device. Change to the appropriate variants if you’re not working with a Cisco.

  1. Test logging into a device:
    > clogin rtr1.example.com
  2. Test logging into a device and running a single command:
    > clogin -t 90 -c"show version" rtr1.example.com
  3. Test logging into a device and run a sequence of commands:
    > clogin -t 90 -c"show version;show calendar" rtr1.example.com
  4. Show what RANCID does with debugging output:
    > rancid -d rtr1.example.com

    If the above throws some errors (especially a list of missed commands) and you’re using TACACS, ensure you have authorisation to run all the commands RANCID tries by logging into the router as the RANCID user and executing them one at a time.

  5. Same as (4) but record all router / switch output for analysis:
    > setenv NOPIPE YES
    > rancid -d rtr1.example.com

    The complete output can then be found in the file rtr1.example.com.raw (in this example).

  6. Run RANCID on a single switch / router tree rather than all:
    > /usr/local/bin/rancid-run [tree]
  7. Run RANCID normally:
    > /usr/local/bin/rancid-run
  8. Don’t forget that logs are available in RANCID’s logs/ directory.

Combatting SPAM in ISP Mail Servers

Fighting SPAM is not easy. Here’s an example of how we configured an ISP mail server cluster to reduce the SPAM load…

Introduction

ISPs must provide a mail relay service to their end users. However, mail relay isn’t a revenue generating service, and ISPs can’t allocate unlimited staff resources to it. The problem administrators face is finding a balance between unobtrusively allowing their clients to send legitimate mail and stopping them from spamming.

The main source of SPAM from clients (SMTP clients in the ISP’s IP space) is typically computers infected with bots / malware behind the user’s router. It may be their own machine or even a neighbour taking advantage of an unsecured wireless network. There is a smaller secondary source also – spammers who have somehow gotten hold of a customer’s SMTP authentication details and are logging in from outside of the ISP’s IP space to relay mail through it.

Now, the problem this causes is that when the ISP’s mail servers pass this SPAM on to destinations such as Yahoo! or GMail recipients, these companies examine the ratio of SPAM to HAM coming from the ISP’s servers. When this ratio goes the wrong way, they quite legitimately block the mail servers, meaning that customers of the ISP cannot get their mail delivered to these destinations. This is a big problem.

In this article, I’ll describe a new mail cluster set-up for an ISP to try and ensure their SPAM:HAM ratio stays on the right side.

Solution Overview

The mail cluster comprises a number of SMTP servers all configured identically (in fact all booting from the same image using PXE and an NFS root file system with dedicated local /var partitions). They are running Debian Stable (Lenny at the time of writing) and the mail server is the excellent Postfix.

Postfix is configured to accept authentication using SASL with Dovecot (configured only for authentication; no POP3 or IMAP services) over TLS on port 25 or SSL on port 465. Users on the ISP’s address space do not need to authenticate to relay mail. This is actually a legacy configuration and quite common among ISPs. If we were to go back many years and start all over again, I’d certainly evaluate the merits of only allowing relay after authentication, irrespective of the source address.
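As a rough sketch, the relevant main.cf glue for Dovecot SASL looks something like this (the certificate paths are placeholders, and smtpd_sasl_type = dovecot assumes Postfix 2.3 or later):

    smtpd_sasl_auth_enable = yes
    smtpd_sasl_type = dovecot
    smtpd_sasl_path = private/auth
    smtpd_tls_security_level = may
    smtpd_tls_cert_file = /etc/ssl/certs/mail.example.com.pem
    smtpd_tls_key_file = /etc/ssl/private/mail.example.com.key

The SSL-on-connect service on port 465 is then enabled via the smtps entry in master.cf with smtpd_tls_wrappermode=yes.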

Mail server selection is made via DNS round robin, but this could easily be changed to a load balancer. As part of this retrofit, we took the opportunity to IPv6-enable all the servers.

We now need to look at securing these boxes to ensure the level of SPAM is controlled. There are three points at which we examine incoming mail:

  1. before it hits the SMTP daemon at all;
  2. after the SMTP envelope commands (MAIL FROM, RCPT TO) have been passed to the daemon but before the mail has been accepted (i.e. before the SMTP DATA command);
  3. after the daemon has accepted the mail.

We’ll examine each of those in the sections below.

Rate Limiting Connections

In order to limit repeated connections to the mail server – such as from a SPAM bot that creates a new SMTP session for every mail (rather than sending many mails in one session) – we use iptables’ recent match module.

This module keeps count of the number of connections in a given time period, allowing us to take alternative action if the number is above a threshold. In our case, we just drop additional connections until the connection rate drops below that threshold. For example:

-A INPUT -p tcp -m state --state NEW -m multiport
        --dports 25,465 -j SMTP_IN

-A SMTP_IN -p tcp --syn -m recent --name SMTP_RATE_LIMIT --set
-A SMTP_IN -p tcp --syn -m recent --name SMTP_RATE_LIMIT --update
        --seconds 60 --hitcount 10 -j SMTP_IN_LIMIT
-A SMTP_IN -j ACCEPT

-A SMTP_IN_LIMIT -j LOG --log-prefix "SMTP_IN: LIMIT EXCEEDED:: "
-A SMTP_IN_LIMIT -j DROP

We have some additional rules where, for example, we have whitelisted core infrastructure IP addresses and also mail from non-ISP IP addresses (the purpose of the limit being to stop SPAM bots within the ISP’s own network).

Checks Before Accepting the Mail

Once the client connects to Postfix we check a number of other items.

We of course use some of Postfix’s own policy checks (such as reject_non_fqdn_sender, reject_non_fqdn_recipient, etc) but that’s pretty par for the course and not discussed below.

The ISP uses a commercial subscription to the Spamhaus data feed service. This is a very reliable database of IP addresses that are known to be sending SPAM (and other classifications). This is the first thing we check – customer or not. If you’re on this list, we do not accept mail from you.

If you’re not coming from the ISP’s address space, we then initiate greylisting using a Debian package called postfix-gld (see this site), which uses a MySQL database as its storage engine. See greylisting.org for more details on this useful anti-SPAM measure.

Now, if you’ve gotten this far, we use a package called postfix-policyd (see this site) to initiate rate limiting. Clients are limited, within an hourly window, on how many mails they can send and on how many recipients in total those mails can reach. The exact numbers are a moving target for now while we tweak them.

We could of course have used policyd for greylisting also, but this posed a significant problem. Our Postfix configuration looks like this:

smtpd_recipient_restrictions =
    ...
    check_policy_service inet:127.0.0.1:10031,
    permit_mynetworks,
    permit_sasl_authenticated,
    check_policy_service inet:127.0.0.1:2525
    ...

As you can see, the issue is that we do not want to greylist our own IP range or users with a valid username and password, but we do want to rate limit them. Hence the rate limiter is the first check_policy_service, placed before permit_mynetworks and permit_sasl_authenticated so it applies to everyone, while the greylister is the second, which our own and authenticated clients never reach. Using policyd for both is unfortunately not possible.

At this point, your mail will be accepted for relay. We now start the next batch of checks.

Content Filtering

We now need to examine these mails to discard SPAM and mails infected with a virus. For this we use the excellent amavisd-new package with SpamAssassin and ClamAV installed on the servers.

We’re still tweaking the rules for best effect but, as it stands, if a virus is detected in a mail it is silently discarded. A mail with a reasonable SPAM score is marked as SPAM in the subject, while mails with a very high score are also just discarded. DSNs are not sent.
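As a sketch, the corresponding amavisd-new settings look something like the following (the thresholds are illustrative rather than our production values):

    $sa_tag2_level_deflt = 6.0;        # add the SPAM marker to the Subject at this score
    $sa_kill_level_deflt = 15;         # "kill" mails above this score
    $final_spam_destiny  = D_DISCARD;  # discard silently – no DSN is generated
    $final_virus_destiny = D_DISCARD;  # likewise for virus-infected mail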

If a mail passes all these checks, it’ll be relayed onwards.

Fighting SPAM is far from easy but the above will make a big dent. Unfortunately we can’t stop just there. We also have scripts to analyse the logs and the rate limiting database to pull out the bad guys, so the ISP’s tech support team can chase them up retroactively and instruct them on protecting their machines.

SIP Brute Force Attacks on the Increase

On our own Asterisk PBX server for our office and on some customer boxes with open SIP ports, we have seen a dramatic rise in brute force SIP attacks.

They all follow a very common pattern – just over 41,000 login attempts on common extensions such as 200, 201, 202, etc. About two weeks ago, we were even asked to provide some consultancy for a company using an Asterisk PBX who had seen strange (irregular) calls to African countries.

They were one example of such a brute force attack succeeding because of a combination of common mistakes:

  1. an open SIP port with:
  2. common extensions with:
  3. a bad password.

(1) is often unavoidable. (2) can be mitigated by not using the predictable three or four digit extension as the username. (3) is inexcusable. We’ve even seen entries such as extension 201, username 201, password 201. The password should always be a random string mixing alphanumeric characters. A good recipe for generating these passwords is to use openssl as follows:

openssl rand -base64 12

Users should never be allowed to choose their own passwords, and dictionary words should not be used. The brute force attack tried >41k common passwords.
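Applied to a SIP peer definition in sip.conf, the result might look something like this (the peer name, secret and context are illustrative):

    [sales-polycom-01]          ; a peer name that is not a predictable extension number
    type = friend
    host = dynamic
    secret = m7PCFdS3nv1Q9xEj   ; random, e.g. the output of: openssl rand -base64 12
    context = external-sip      ; a restricted context limiting what this peer can dial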

Preventing or Mitigating These Attacks

You can mitigate these attacks by putting external SIP users into dedicated contexts which limit the kinds of calls they can make (internal only, local and national, etc.); asking for a PIN for international calls; limiting time and cost; etc.

However, the above might be a lot of work when simply blocking users after a number of failed attempts can be much easier and more effective. Fail2ban is a tool which can scan log files such as /var/log/asterisk/full and firewall IP addresses that make too many failed authentication attempts.

See VoIP-Info.org for generic instructions or below for a quick recipe to get it running on Debian Lenny.

Quick Install for Fail2ban with Asterisk SIP on Debian Lenny

  1. apt-get install fail2ban
  2. Create a file called /etc/fail2ban/filter.d/asterisk.conf with the following (thanks to this page):
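    A representative filter, based on the Asterisk failregex patterns commonly
    circulated at the time (adjust to your Asterisk version):

    [Definition]
    failregex = NOTICE.* .*: Registration from '.*' failed for '<HOST>' - Wrong password
                NOTICE.* .*: Registration from '.*' failed for '<HOST>' - No matching peer found
                NOTICE.* .*: Registration from '.*' failed for '<HOST>' - Username/auth name mismatch
                NOTICE.* <HOST> failed to authenticate as '.*'
    ignoreregex =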
  3. Put the following in /etc/fail2ban/jail.local:
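    A sketch of the jail definition (replace the example e-mail addresses with
    your own):

    [asterisk-iptables]
    enabled  = true
    filter   = asterisk
    action   = iptables-allports[name=ASTERISK, protocol=all]
               sendmail-whois[name=ASTERISK, dest=root@example.com, sender=fail2ban@example.com]
    logpath  = /var/log/asterisk/full
    maxretry = 5
    bantime  = 600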
  4. Edit /etc/asterisk/logger.conf such that the date format under [general] reads:
    dateformat=%F %T
  5. Also in /etc/asterisk/logger.conf, ensure full logging is enabled with a line such as the following under [logfiles]:
    full => notice,warning,error,debug,verbose
  6. Restart / reload Asterisk.
  7. Start Fail2ban: /etc/init.d/fail2ban start and check:
    • Fail2ban’s log at /var/log/fail2ban.log for:
      INFO   Jail 'asterisk-iptables' started
    • The output of iptables --list -v for something like:
      Chain fail2ban-ASTERISK (1 references)
       pkts bytes target     prot opt in     out     source               destination
        227 28683 RETURN     all  --  any    any     anywhere             anywhere

You should now receive emails (assuming you replaced the example addresses above with your own and your MTA is correctly configured) when Fail2ban starts or stops and when a host is blocked.

HTTP Streaming with Encryption under Linux

For a customer of ours, we need to mass encode thousands of video files and also segment and encrypt them for use with Apple’s HTTP Streaming, using Amazon EC2 instances for the legwork.

On his blog, Carson McDonald has put together a good overview of how HTTP Streaming can work under Linux, along with a segmenter.

The one piece of the jigsaw we were missing was encryption and, after some work ourselves and with the help of a Stack Overflow question, we have a working sequence of commands to successfully and compatibly encrypt segments for playback on Safari and other supported HTTP streaming clients:

  1. Create a key file:
    openssl rand 16 > static.key
  2. Convert the key into hex:
    key_as_hex=$(cat static.key | hexdump -e '16/1 "%02x"')
  3. At this point, let’s assume we have segmented a file of 30 seconds called video_low.ts into ten 3 second segments called video_low_X.ts where X is an integer from 1 to 10. We can then encrypt these as follows:
    for i in {0..9}; do
        init_vector=`printf '%032x' $i`
        openssl aes-128-cbc -e -in video_low_$(($i+1)).ts \
            -out video_low_enc_$(($i+1)).ts -p -nosalt \
            -iv $init_vector -K $key_as_hex
    done

With a matching m3u8 file such as the following, the above worked fine:

#EXTM3U
#EXT-X-TARGETDURATION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:3,
#EXT-X-KEY:METHOD=AES-128,URI="http://www.example.com/static.key"
http://www.example.com/video_low_enc_1.ts
#EXTINF:3,
http://www.example.com/video_low_enc_2.ts
...
#EXT-X-ENDLIST

What caught us out was the initialisation vector, which is described in the draft IETF document as follows:

128-bit AES requires the same 16-octet Initialization Vector (IV) to
be supplied when encrypting and decrypting. Varying this IV
increases the strength of the cipher.

If the EXT-X-KEY tag has the IV attribute, implementations MUST
use the attribute value as the IV when encrypting or decrypting
with that key. The value MUST be interpreted as a 128-bit
hexadecimal number and MUST be prefixed with 0x or 0X.

If the EXT-X-KEY tag does not have the IV attribute,
implementations MUST use the sequence number of the media
file as the IV when encrypting or decrypting that media file.
The big-endian binary representation of the sequence number
SHALL be placed in a 16-octet buffer and padded (on the left)
with zeros.