Apple OS X as an NFS Server (with Linux Clients)

For a customer, I had to set up a Linux-based virtualised environment on a MacBook Pro using VirtualBox. This environment included making a couple of 8TB external hard drives available over NFS to the Linux guests.

In all fairness, what better use can one put OS X to than to virtualise Linux?!?  Just kidding fanboys… well, sort of 😉

Let’s begin with a quick description of the environment:

  • A MacBook Pro (MBP) with OS X 10.8.2
  • VirtualBox with its own host-only network (MBP: 192.168.56.1/24) for NFS, as well as bridged adapters for general Internet access;
  • Multiple external HDDs – for simplicity, let’s just consider one here, mounted under /Volumes/DATA-1.

We want to export the DATA-1 volume to the Linux clients. That bit’s actually not too hard (see below); the main issue is that we needed to match what on Linux is called no_root_squash – i.e. so that the root user on the Linux clients would have root access to the NFS shares. That bit was harder.

I’ll assume root access / sudo use in the following commands.

To configure NFS, we edit / create /etc/exports (e.g. nano /etc/exports) with an entry such as:

/Volumes/DATA-1 -maproot=root:wheel -network 192.168.56.0 -mask 255.255.255.0

In other words:

  • export /Volumes/DATA-1
  • map the clients’ root user to the local root user and the clients’ root group to the local group wheel (gid = 0)
  • allow the export to be accessed by any host on the private VirtualBox network.

With that entry, NFS can be enabled at boot and started via:

nfsd enable
nfsd start
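
Before mounting from the clients, it’s worth sanity-checking the exports file and confirming what is actually being exported – OS X’s nfsd can validate the file directly:

nfsd checkexports
showmount -e localhost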

On a Linux client, this can then be mounted at boot with an /etc/fstab entry:

192.168.56.1:/Volumes/DATA-1 /mnt/data-1 nfs defaults 0 0
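
To test without rebooting, the same mount can first be performed by hand on the client:

mkdir -p /mnt/data-1
mount -t nfs 192.168.56.1:/Volumes/DATA-1 /mnt/data-1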

The problem was that no matter what variation of options I used, I could not get root access from the Linux clients.

The answer came by chance when I noticed an odd mount option on the external HDD:

/dev/disk2s2 on /Volumes/DATA-1 (hfs, NFS exported, local, nodev, nosuid, journaled, noowners)

noowners? What, pray tell, is this? The internet provided some insight:

In Leopard, due to an unfortunate design decision by Apple, “admin” authentication is now required to make this change (no noowners) and non-admin users are no longer able to use “Get Info” to change this setting, even on devices they own and have mounted themselves.

An unfortunate design decision indeed. The temporary solution is to execute:

mount -u -o owners /Volumes/DATA-1
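
To confirm the flag has been cleared, check the volume’s mount options again – noowners should no longer be listed:

mount | grep DATA-1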

Thereafter, I have root access (i.e. an effective root UID) from the Linux clients. This of course needs to be repeated each time the volume is mounted – if someone has a more permanent solution, I’m all ears (see below for a cron script I have implemented in the meantime).

Just as an aside, we have a lot of NFS activity, which required some tuning. First, I added additional NFS threads by setting nfs.server.nfsd_threads=16 in /etc/nfs.conf (execute nfsd restart after that). I’ve also added the following line to /etc/rc.local:

sysctl -w kern.aiomax=64 kern.aioprocmax=32 kern.aiothreads=4
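
Putting the thread tuning together, the steps look like this (the 16-thread figure is simply what suited our workload – adjust to taste):

echo "nfs.server.nfsd_threads=16" >> /etc/nfs.conf
nfsd restart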

Cron Script for Automatically Removing noowners

As mentioned above, removing this mount option every time you connect these HDDs is damn annoying at best and error prone at worst. I now have a script for this, located at /var/root/bin/mount-check.sh:

#!/bin/bash

# Count mount table entries for the volume that still carry the noowners flag
NOOWNERS=`/sbin/mount | grep "/Volumes/DATA-1" | grep noowners | wc -l`

# wc -l pads its output with whitespace on OS X, so strip it before comparing
if [[ "X${NOOWNERS//[[:space:]]/}X" = "X1X" ]]; then
    /sbin/mount -u -o owners /Volumes/DATA-1;
fi

This is then executed via a new line in /etc/crontab:

* * * * *    root    /var/root/bin/mount-check.sh


Monitoring SSL Certificate Expiry Dates with Nagios

It is good practice to separate Nagios checks of your web server’s availability from checks of SSL certificate expiry. The latter need only run once per day and should not add unnecessary noise to the more immediately important alerts about web service failures.

To use check_http to monitor SSL certificate expiry dates, first ensure you have a daily service template – let’s call this service-daily. Then create two command definitions as follows:
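
For reference, such a template might look something like the following – a minimal sketch in which the generic-service parent and the retry interval are assumptions; the key point is the once-per-day check interval:

define service{
    name                    service-daily
    use                     generic-service
    check_interval          1440    ; in minutes, i.e. once per day
    retry_interval          60
    register                0       ; template only, never registered directly
}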

define command{
    command_name check_cert
    command_line /usr/lib/nagios/plugins/check_http -S -I $HOSTADDRESS$ -w 5 -c 10 -p $ARG1$ -C $ARG2$
}

define command{
    command_name check_named_cert
    command_line /usr/lib/nagios/plugins/check_http -S -I $ARG3$ -w 5 -c 10 -p $ARG1$ -C $ARG2$
}

The second is useful for checking named certificates on additional IP addresses on web servers serving multiple SSL domains.

We can use these to check SSL certificates for POP3, IMAP, SMTP and HTTP:

define service{
    use service-daily
    host_name mailserver
    service_description POP3 SSL Certificate
    check_command check_cert!995!21
}

define service{
    use service-daily
    host_name mailserver
    service_description IMAP SSL Certificate
    check_command check_cert!993!21
}

define service{
    use service-daily
    host_name mailserver
    service_description SMTP SSL Certificate
    check_command check_cert!465!21
}

define service{
    use service-daily
    host_name webserver
    service_description SSL Cert: www.example.com
    check_command check_named_cert!443!21!www.example.com
}

define service{
    use service-daily
    host_name webserver
    service_description SSL Cert: www.example.net
    check_command check_named_cert!443!21!www.example.net
}
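
Before wiring these into Nagios, it’s worth running the plugin by hand to confirm the expected output – for example, checking the IMAP certificate directly (the hostname here is illustrative):

/usr/lib/nagios/plugins/check_http -S -I mail.example.com -p 993 -C 21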

Nagios Plugin for Checking Backups via rsnapshot

We’ve just added a check_rsnapshot.php script to our nagios-plugins bundle on Github. This script will verify rsnapshot backups via Nagios using a number of checks / tests:

  • minfiles – checks the number of files in a snapshot against a minimum expected number;
  • minsize – checks the size of a snapshot against a minimum expected size;
  • log – parses the rsnapshot log to ensure the most recent runs for each retention period completed successfully;
  • timestamp – checks for files created server side containing a timestamp and thus ensuring snapshots are succeeding;
  • rotation – checks that retention directories are being rotated; and
  • dir-creation – checks that retention directories are being created.

Please see the project’s GitHub wiki page for more information, including instructions.

Analysing MySQL Slow Query Logs

MySQL has a really useful feature that allows it to log slow queries, where slow means a minimum execution time defined by you (with microsecond resolution). It helps a lot in diagnosing website outages or slow responsiveness issues after the fact.
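
For example, enabling it might look like this in my.cnf – a sketch for MySQL 5.1 and later, where the log file path and the 0.5 second threshold are illustrative (older versions use the log-slow-queries directive and integer seconds instead):

[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 0.5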

Unfortunately, I couldn’t find any nice graphical tools for analysing these, but there are a few command line tools:

mysqldumpslow

MySQL’s own tool, mysqldumpslow, aggregates queries and allows you to sort them by: query time or average query time; lock time or average lock time; rows sent or average rows sent; or the number of queries.
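
For example, to show the top 10 queries sorted by average query time (the log path is illustrative):

mysqldumpslow -s at -t 10 /var/log/mysql/mysql-slow.log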

Percona’s MySQL Slow Query Log Analyser

In a post dating from 2006, Percona’s Peter Zaitsev wrote about their own version of a slow query log analyser (local copy), which has given me good results. Note that their micro time patch has since been incorporated into mainstream MySQL.

One of the main differences over MySQL’s own version is that, as well as printing the aggregated query (with number and string literals wildcarded), it also prints a real example of the query, allowing a copy and paste into MySQL for execution with EXPLAIN.

Example output with query details redacted:

### 230 Queries 
### Total time: 4708.948293, Average time: 20.4736882304348
### Taking 0.093420 to 203.693466 seconds to complete
### Rows analyzed 0 - 141008
SET timestamp=XXX;
SELECT ... FROM ... AS A 
        INNER JOIN ... AS C ON C.item_id = A.item_id 
    WHERE XXX AND C.item_lang = 'XXX' AND ... 
    ORDER BY CATALOG.item_sort LIMIT XXX;

SET timestamp=1348032761;
SELECT ... FROM ... AS A 
        INNER JOIN ... AS C ON C.item_id = A.item_id 
    WHERE 1 AND C.item_lang = '1' AND ... 
    ORDER BY C.item_sort LIMIT 1;


Specifying Specific SSH Keys for Git Remotes

When using Git via SSH with services such as GitHub and Gitorious, it can be useful to use a specific SSH key that differs from your default.

This is accomplished with an entry such as the following in your ~/.ssh/config:

Host some-alias
    IdentityFile /home/username/.ssh/id_rsa-some-alias
    IdentitiesOnly yes
    HostName git.example.com
    User git
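
You can verify that the right key is being offered by connecting to the alias directly (with most Git hosting services this prints a greeting rather than opening a shell):

ssh -T some-alias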

You then specify this remote as follows in .git/config:

url = some-alias:project/repository.git
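
Equivalently, rather than editing .git/config by hand, the remote can be added from the command line:

git remote add origin some-alias:project/repository.git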

Migrating SVN with Branches and Tags to Git

Following my love affair with Git, I’ve also started using a local install of Gitorious for private and commercial projects at Open Solutions. Before Gitorious, creating a new Git repository meant setting up authentication and Apache aliases each time, which left us pretty disinclined to create repositories when we should have.

With Gitorious, it’s just a couple of clicks and we have internal public repositories, team repositories or individual developer private repositories. It’s grrrrrrreat!

Last night and this morning, I’ve started a process of finding the many SVN repositories I / we have scattered around and importing them into Git (with all branches and tags). Here’s the process:

Importing Subversion Repositories with Branches and Tags to Git

1. Create a users file so you can correctly map SVN commit usernames to Git users (the format is svnuser = Full Name <email>). For example, users.txt:

user1 = First Last Name <user1@example.com>
user2 = First Last Name <user2@example.com>
...

2. Now clone the Subversion repository (-s indicates the standard trunk/branches/tags layout):

git svn clone -s -A users.txt svn://hostname/path dest_dir-tmp
cd dest_dir-tmp
git svn fetch --fetch-all

3. Now we have the entire SVN repository. We need to create Git tags to match the SVN tags:

git branch -r | sed -rne 's, *tags/([^@]+)$,\1,p' | \
while read tag; do \
 echo "git tag $tag 'tags/${tag}^'; git branch -r -d tags/$tag"; \
done | sh

4. Now we need to create matching branches:

git branch -r | grep -v tags | sed -rne 's, *([^@]+)$,\1,p' | \
 while read branch; \
 do echo "git branch $branch $branch"; \
done | sh

5. To help speed up a remote push, we’ll compact the repository:

git repack -d -f -a --depth=50 --window=100

6. Then we remove the meta-data that was used by git-svn:

git config --remove-section svn-remote.svn
git config --remove-section svn
rm -r .git/svn

7. And finally, we push it to our own Gitorious server:

git remote add origin git@example.com:zobie/my_great_app.git
git push --all && git push --tags

References:

  1. http://stackoverflow.com/questions/3239759/checkout-remote-branch-using-git-svn
  2. http://stackoverflow.com/questions/79165/how-to-migrate-svn-with-history-to-a-new-git-repository
  3. http://blog.zobie.com/2008/12/migrating-from-svn-to-git/

Ubuntu 12.04 (Precise Pangolin) and PHP 5.4 (again)

My previous post, Ubuntu 12.04 (Precise Pangolin) and PHP 5.4, has been extremely popular but I left some work for the user to figure out.

In a nutshell, here is how you install PHP 5.4 in Ubuntu 12.04 (Precise Pangolin).

1. Install the signing key for the PPA (which also adds the sources to apt):

add-apt-repository ppa:ondrej/php5

If the above command is not available, install it using:

apt-get install python-software-properties

2. Now update the package database and then upgrade the system. As part of upgrading, PHP 5.4 will be installed automatically:

apt-get update
apt-get upgrade
apt-get dist-upgrade
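
If PHP was not already installed (i.e. there was nothing to upgrade), it can be pulled in from the PPA explicitly instead – for example:

apt-get install php5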

Now, you should have PHP 5.4 installed:

# php -v
PHP 5.4.3-4~precise+1 (cli) (built: May 17 2012 13:00:25)
Copyright (c) 1997-2012 The PHP Group
Zend Engine v2.4.0, Copyright (c) 1998-2012 Zend Technologies

As an aside, at the time of my previous post, PHP Memcache packages were not available from the PPA but that has since been rectified.

Ubuntu 12.04 (Precise Pangolin) and PHP 5.4

UPDATED in Ubuntu 12.04 (Precise Pangolin) and PHP 5.4 (again) (May 22nd 2012).

I was looking forward to trying out some of the new features in PHP 5.4 when I upgraded to Ubuntu 12.04 this morning. Unfortunately, Ubuntu decided to stick with 5.3 for this release.

There is an upgrade path available though, via a PPA (Personal Package Archive) from Ondřej Surý at https://launchpad.net/~ondrej/+archive/php5.

I’ve just installed it and it’s working fine with Apache:

$ php -v
PHP 5.4.0-3~precise+4 (cli) (built: Mar 27 2012 08:50:50)
Copyright (c) 1997-2012 The PHP Group
Zend Engine v2.4.0, Copyright (c) 1998-2012 Zend Technologies


Essential Windows Software for a Fresh Install

I get landed with new computers from time to time by colleagues, friends and family to install / set up. Here is my list of essential Windows software for these.

The first thing I do is a fresh install of Windows to remove the crazy and ridiculous amount of bloatware that comes pre-installed and makes the system as slow as a wet week. After re-installing Windows and the necessary hardware drivers (Windows Update will get most of these automatically), I then:

“Go Faster” Websites – Introducing Minify

We’ve been minifying and bundling CSS and JS for years to ensure quick page loads of the applications we build. We’ve now generalised, documented and packaged the tool we use for this and released it under a BSD license so others can benefit.

Most web developers know that including lots of JS and CSS files in their sites slows page load times down. Most also know that these files should be minified and bundled into one file on production sites. Most developers don’t do this though – it adds a lot of extra steps to putting new changes live.

Also, using CDNs or setting expiry times far into the future for mostly static files such as CSS and JS significantly improves page load, as clients will grab these files once and use their local cache until they expire. This poses issues for web developers that are easily overcome by versioning these files – literally adding a version number to the bundles – for example, min.bundle-v6.css would be version 6 of the minified and bundled CSS file.
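
As an illustration of the far-future expiry side of this, an Apache mod_expires snippet along these lines does the trick – a sketch in which the match pattern and the one-year window are assumptions, and which is only safe because the version number in the bundle filename changes on each release:

<IfModule mod_expires.c>
    <FilesMatch "\.(css|js)$">
        ExpiresActive On
        ExpiresDefault "access plus 1 year"
    </FilesMatch>
</IfModule>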

We’ve been doing both of these for a long time with the sites we build. See our page on GitHub to download this tool and for examples of its use:

https://github.com/opensolutions/Minify

This tool will:

  • automatically find all CSS/JS files in a given directory named xxx-blah.css where xxx is a three digit ordering / sequence number;
  • minify these files and create a single file bundle including them in the correct order;
  • automatically generate template include files allowing production / development mode (i.e. use individual CSS/JS or bundles based on an application option);
  • support versioning for those using CDNs, future expiry dates, etc., ensuring clients load fresh JS/CSS bundles.

If you use it, please drop us a note to let us know how you get on!