Upgrading Legacy Versions of Ubuntu

The standard (non-LTS) Ubuntu releases go EOL quite quickly and it’s easy to miss the upgrade window, at which point running do-release-upgrade yields:

An upgrade from 'groovy' to 'impish' is not supported with this tool.

In this example, do-release-upgrade is trying to jump from Ubuntu 20.10 (groovy) to 21.10 (impish), skipping 21.04 (hirsute), which is not supported. You’ve probably also reached a situation where you cannot even upgrade your current packages, as the repositories for EOL’d releases are removed from the standard mirrors.

To upgrade step-wise, we need to upgrade our current platform first. You need to be logged in as root or using sudo for all of the following.

Start by changing your mirror in /etc/apt/sources.list to old-releases.ubuntu.com, which hosts the repositories for EOL’d releases. In my case the mirror was ie.archive.ubuntu.com and so I could replace it via:

sed -i -e 's/ie.archive.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
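
Your sources.list entries should then look something like the following (groovy in my case – your release name will differ):

deb http://old-releases.ubuntu.com/ubuntu groovy main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu groovy-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu groovy-security main restricted universe multiverse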

Once that’s done, upgrade your current system as usual:

apt-get update
apt-get dist-upgrade
shutdown -r now

Now that your current system is up-to-date, we need to do a distribution upgrade to 21.04 hirsute (in my case). do-release-upgrade will still not work, so we need to manually download the upgrade tool and run it ourselves. Find the appropriate UpgradeTool file via Ubuntu’s meta-release page (http://changelogs.ubuntu.com/meta-release). In my case the appropriate upgrade file was hirsute.tar.gz and I downloaded that via:

wget http://archive.ubuntu.com/ubuntu/dists/hirsute-updates/main/dist-upgrader-all/current/hirsute.tar.gz

You now need to extract and run the tool:

mkdir hirsute_files
cd hirsute_files
tar zxf ../hirsute.tar.gz
./hirsute

If you’re as fortunate as I was, this will run cleanly, just as it would have via do-release-upgrade. Once there are no more intermediate versions to step through, you can do the final upgrade via do-release-upgrade as normal. Remember also that upgrading from one LTS version to the next is supported directly by that tool.

Laravel 8 + Jetstream 2 + Livewire + Mix + NPM + Datatables

A post for Google’s crawler, as this took me far too long to get working and the many, many Google hits on the topic all proved mostly unhelpful.

On a simple Laravel + Jetstream (Livewire version) installation, I needed DataTables to work. This also means jQuery needs to be installed.

What follows are the various changes to the default files to make this work:

package.json (versions as at Jan 2022; run npm install after updating the file):

    "devDependencies": {
        ...
        "datatables.net-dt": "^1.11.3",
        "jquery": "^3.4.1",
        ...

webpack.mix.js – Alpine.js has an idiosyncrasy which causes issues for jQuery: it needs to be loaded via <script src="..." defer> – emphasis on the defer. If jQuery is bundled into that same deferred app.js, jQuery won’t be available to inline scripts within your Blade templates and, specifically, $(document).ready() will throw an exception because the jQuery function, $, will not yet be defined. To solve this, we split Alpine.js out into a second JavaScript bundle by adding the following line:

mix.js( "resources/js/alpine.js", "public/js" )

resources/js/app.js – we need to remove the Alpine.js lines from here and place them in a new file, resources/js/alpine.js. Specifically – move the following lines:

import Alpine from 'alpinejs';
window.Alpine = Alpine;
Alpine.start();
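
The new resources/js/alpine.js file then contains just those moved lines:

// resources/js/alpine.js - Alpine split out into its own deferred
// bundle so that app.js (with jQuery) no longer needs to be deferred
import Alpine from 'alpinejs';

window.Alpine = Alpine;
Alpine.start();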

resources/js/bootstrap.js – here I added the jQuery and DataTables lines (the window.$ / window.jQuery assignment and the window.DataTable require), shown below in context:

window._ = require('lodash');

/**
 * We'll load jQuery and the Bootstrap jQuery plugin which provides
 * support for JavaScript based Bootstrap features such as modals 
 * and tabs. This code may be modified to fit the specific needs of
 * your application.
 */

window.$ = window.jQuery = require('jquery');


// Datatables
window.DataTable = require('datatables.net-dt');

...

Lastly, edit the template to not defer app.js and to load your new alpine.js file. In resources/views/layouts/app.blade.php, remove the script line for app.js and replace it with the following two lines:

<script src="{{ mix('js/app.js') }}"></script>
<script src="{{ mix('js/alpine.js') }}" defer></script>

I think the biggest confusion here was that no changes were required to webpack.mix.js specifically for DataTables, despite many online resources making changes there – perhaps that applied to earlier versions of Mix? The second confusing issue was Alpine.js and the effects of deferring app.js.

With the above, DataTables (and jQuery) are now available as usual in your Blade templates:

<script>
    $(document).ready(function () {
        $('#clients-table').DataTable();
    });
</script>

Upgrading Legacy Versions of IXP Manager

Legacy installations of IXP Manager can be very difficult to upgrade as you can find yourself in a dependency nightmare whereby the old version of IXP Manager will not run on modern versions of PHP; and vice versa.

In case you missed it, we have a new modern website for IXP Manager – find it at https://www.ixpmanager.org/. One of the features of this new website is that we now gather IXP Manager usage statistics on a daily basis – including the distribution of versions in use.

Reassuringly, of the installs we can poll for version used, ~65% are using one of the latest three minor versions (5.5.0 – 5.7.0). This is reassuring for a number of reasons including: knowing that IXPs stay current; knowing that IXPs are concerned about security updates; and knowing that the upgrade process is not especially difficult.

Of the 145 installs we know about, we can poll 116 and collect the version in use, which yields the following table:

  Major version    Installs
  5.x.y            83
  4.x.y            26
  3.x.y            7

Distribution of major IXP Manager versions in use as at September 8th 2020.

Community IX Atlanta (CIX-ATL) are in the process of upgrading from 4.9.3 to the latest (5.7.0) and they graciously allowed me to record the process:

The video is a real-life experience – it wasn’t scripted or rehearsed in advance, allowing the viewer to see the mistakes and thought processes throughout. Also, if you weren’t aware of it, we have an on-going series of IXP Manager tutorials here.

When considering a legacy upgrade, there are two main approaches:

  1. Build a new IXP Manager installation on a new (modern) server and migrate the database (this is what we’ve done here – see the sketch after this list).
  2. Attempt an in-place upgrade, alternating between IXP Manager upgrades and operating system upgrades. This is probably more awkward, with more scope for issues to crop up (especially for non-IXP Manager applications which may be on the same server).
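
On the database side, approach 1 is essentially a dump-and-restore. A minimal sketch, assuming MySQL/MariaDB with a database and user both named ixp (adjust names, hosts and credentials for your own setup); the schema is then brought up to date by stepping through each release’s upgrade instructions as described below:

# on the legacy server: dump the IXP Manager database
mysqldump --opt -u ixp -p ixp > ixpmanager.sql

# copy it to the new server and load it into a freshly created database
scp ixpmanager.sql newserver:/tmp/
mysql -u ixp -p ixp < /tmp/ixpmanager.sql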

Remember, what’s covered here is “just” the IXP Manager and database upgrade. There’s a bunch of other things that would also need to be done including:

  • Working through the various upgrade actions in the release notes (mentioned throughout the video). Essentially you’ll need to step through each set of release notes for the versions you cycle through (and jump over).
  • If building a new server, pointing elements such as route server cron jobs and other API consumers at the new server.
  • Migrating other applications from the legacy server (e.g. maybe you have mrtg co-installed there).

In a production environment, my goal would be to build the new IXP Manager installation with the copied and upgraded database and run them in parallel. NB: either avoid or duplicate changes made in the UI across both installations of IXP Manager for this period of time.

Once the new installation of IXP Manager is ready for production use, you will then step through all the external tools that consume data from it (sflow, mrtg, route servers, route collectors, etc.) and migrate them to the new installation. Sometimes simply updating DNS can achieve most of this, but you’ll probably want to take it piecemeal and ensure each external service works as expected.

Take particular care with essential services such as route servers. This is an opportune time to upgrade to Bird v2 and add RPKI. At INEX, we did one route server at a time with 1-2 weeks between upgrades. This allowed time to ensure the new system was stable and also to ensure no member issues arose due to RPKI filtering, etc. (spoiler alert: it was uneventful!).

As you complete the migration, you can also consider whether some services should be left on the “old” server. Separating tasks between different servers is good practice, so ask yourself if everything really should be migrated over to the new server.

More than anything, I hope this video entices you to keep current with your IXP Manager installations!

Using IXP Manager’s Grapher API

We call IXP Manager’s statistics and graphing architecture Grapher. It’s a backend agnostic way to collect and present data. Out of the box, we support MRTG for standard interface graphs, sflow for peer to peer and per-protocol graphs, and Smokeping for latency/packet loss graphs. You can see some of this in action on INEX’s public statistics section.

Internet Exchange Points (IXPs) play a significant role in national internet infrastructures and IXP Manager is used in nearly 100 of these IXPs worldwide. In the last couple of weeks we have received a number of queries from those IXPs asking for suggestions on how they can extract traffic data to address queries from their national governments, regulators, media and members. We just published our own analysis of this for traffic over INEX here.

Grapher has a basic API interface (documented here) which we use to help those IXP Manager users address the queries they are getting. What we have provided to date are mostly quick rough-and-ready solutions but we will pull all these together over the weeks (and months) to come to see which of them might be useful permanent features in IXP Manager.

How to Use These Examples

The code snippets below are expected to be placed in a PHP file in the base directory of your IXP Manager installation (e.g. /srv/ixpmanager) and executed on the command line (e.g. php myscript.php).

Each of these scripts needs the following header, which is not repeated below for brevity:

<?php

require 'vendor/autoload.php';

use Carbon\Carbon;

$data = json_decode( file_get_contents( 
    'https://www.inex.ie/ixp/grapher/ixp?period=year&type=log&category=bits' 
) );

We’ve placed a working API endpoint for INEX above – change this for your own IXP / scenario.

Data Volume Growth

An IXP was asked by their largest national newspaper to provide daily statistics of traffic growth due to COVID-19. For historical reasons linked to MRTG graph images, the periods in IXP Manager for this data are: day is the last 33.3 hours; week is the last 8.33 days; month is the last 33.33 days; and year is the last 366 days.

This is fine within IXP Manager when comparing averages and maximums as we are always comparing like with like. But if we’re looking to sum up the data exchanged in a proper 24hr day then we need to process this differently. For that we use the following loop:

$start = new Carbon('2020-01-01 00:00:00');
$bits = 0;
$last = $data[0][0];
$startu = $start->format('U');
$end = $start->copy()->addDay()->format('U');

foreach( $data as $d ) {
  // if the row is before our start time, skip
  if( $d[0] < $startu ) { $last = $d[0]; continue; }

  if( $d[0] > $end ) {
    // if the row is in the next day, print the completed day's total
    echo $start->format('Y-m-d') . ',' 
        . $bits/8 / 1024/1024/1024/1024 . "\n";

    // and reset for next day        
    $bits  = $d[1] * ($d[0] - $last);
    $startu = $start->addDay()->format('U');
    $end    = $start->copy()->addDay()->format('U');
  } else {
    $bits += $d[1] * ($d[0] - $last);
  }

  $last = $d[0];
}

The output is comma-separated (CSV) with the date and the data volume exchanged in that 24 hour period (in TB via the 8/1024/1024/1024/1024 division). This can, for example, be pasted into Excel to create a simple graph.

The elements of the $d[] array mirror what you would expect to find in an MRTG log file (but the data unit reflects the API request – e.g. bits/sec, pkts/sec, etc.):

  • $d[0] – the UNIX timestamp of the data sample.
  • $d[1] and $d[2] – the average incoming and outgoing transfer rates in bits per second. These are valid for the time between the $d[0] value of the current entry and the $d[0] value of the previous entry. For an IXP, where traffic is exchanged, we expect to see $d[1] roughly the same as $d[2].
  • $d[3] and $d[4] – the maximum incoming and outgoing transfer rates in bits per second for the current interval. These are calculated from all the updates which have occurred in the current interval. If the current interval is 1 hour and updates have occurred every 5 minutes, it will be the biggest 5 minute transfer rate seen during the hour.

Traffic Peaks

The above snippet uses the average traffic values and the time between samples to calculate the overall volume of traffic exchanged. If you just want to know the traffic peaks in bits/sec on a daily basis, you can do something like this:

$daymax = 0;
$day    = null;

foreach( $data as $d ) {

    // note: Carbon's constructor does not accept a raw UNIX timestamp,
    // so use createFromTimestamp() to convert the sample time to a date
    $c = Carbon::createFromTimestamp( $d[0] )->format('Y-m-d');

    if( $c !== $day ) {
        if( $day !== null ) {
            echo $day . ',' . $daymax / 1000/1000/1000 . "\n";
        }
        $day = $c;
        $daymax = $d[3];
    } else if( $d[3] > $daymax ) {
        $daymax = $d[3];
    }
}

The output is comma-separated (CSV) with the date and the peak traffic rate for that 24 hour period (in Gbps via the 1000/1000/1000 division). This can also be pasted into Excel to create a simple graph.

Import to Carbon / Graphite / Grafana

Something that is on our development list for IXP Manager is to integrate Graphite as a Grapher backend. Using this stack, we could create much more visually appealing graphs as well as time-shifted comparisons. In fact, this is how we created the graphs for this article on INEX’s website.

To create those graphs, we needed to get the data into Carbon (Graphite’s time-series database – no relation to the PHP Carbon date library used above). Carbon accepts data via UDP, so we used a script of the form:

foreach( $data as $d ) {
    echo "echo \"inex.ixp.run1 " . $d[1] . " " . $d[0] 
        . "\" | nc <carbon-ip-address> 2003\n";
}

This will output lines like the following which can be piped to sh:

echo "inex.ixp.run1 387495973600 1585649700" | nc -u 192.0.2.23 2003

The Carbon / Graphite / Grafana stack is quite complex so unless you are familiar with it, this option for graphing could prove difficult. To get up and running quickly, we used the docker-grafana-graphite Docker image. Beware that the default graphite/storage-schemas.conf in this image limits data retention to only 7 days.
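
If you do need longer retention, the relevant stanza in storage-schemas.conf looks something like the following sketch (the section name, pattern and retention values are assumptions – adjust them for your own metric names and needs):

# keep 5-minute samples for two years for all metrics beginning "inex."
[inex]
pattern = ^inex\.
retentions = 5m:2y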

Using Laravel Eloquent Models for API Results

There’s a very interesting package called calebporzio/sushi for Laravel that allows one to use arrays as Eloquent drivers / sources of data. @calebporzio posted his own example of using this to front API results here.

It’s a very interesting proof of concept for this use case (it probably needs more work and more knobs for production use). So interesting that I had a quick look myself with a bare-bones Laravel app:

$ laravel new test-sushi
$ cd test-sushi
$ composer require calebporzio/sushi
$ composer require kitetail/zttp
$ php artisan make:model IxpdbProvider

The only interesting part of the model, IxpdbProvider, is the getRows() function:

public function getRows()
{
    return Cache::remember( 'IxpdbProvider::rows', 3600, function() {

        // Sushi requires flat rows, so strip any sub-arrays
        // (sub-objects) from the API response:
        return array_map( function( $a ) {
            foreach( $a as $k => $v ) {
                if( is_array( $v ) ) {
                    unset( $a[$k] );
                }
            }
            return $a;
        }, Zttp::get('https://api.ixpdb.net/v1/provider/list')->json() );

    });
}
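
For context, the rest of the model is just boilerplate plus the Sushi trait. A minimal sketch, assuming the standard calebporzio/sushi setup:

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Support\Facades\Cache;
use Sushi\Sushi;
use Zttp\Zttp;

class IxpdbProvider extends Model
{
    // the Sushi trait is what lets Eloquent treat getRows() as a table
    use Sushi;

    public function getRows()
    {
        // ... as above ...
    }
}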

There’s a few interesting things happening here:

  1. I’m using the cache to store the array result of:
    • the fairly large API response for one hour;
    • the array_map() which is required to remove sub-arrays (sub-objects) within the response as Sushi requires flat rows.
  2. Using Zttp out of curiosity rather than Guzzle directly.
  3. Sushi then takes the array of IXPs (the result of the API call) and stores these in a dedicated in-memory Sqlite database for the duration of the request.

We can now query this as if it were a typical database table:

$ php artisan tinker

>>> App\IxpdbProvider::count();
=> 581

>>> App\IxpdbProvider::where( 'name', 'like', 'inex%')->pluck('name')
=> Illuminate\Support\Collection {#3002
     all: [
       "INEX LAN1",
       "INEX LAN2",
       "INEX Cork",
     ],
   }

2FA and User Session Management in IXP Manager

We’ve just released IXP Manager v5.3.0. The headline features in this release are two-factor authentication (2fa) and user session management. This blog post overviews the PHP elements of how we did that.

While IXP Manager is a Laravel framework application, it uses Doctrine ORM as its database layer via the Laravel Doctrine bridge. For those curious, this really is a carry over from when IXP Manager was a Zend Framework application. For the migration, we concentrated on the controller and view elements of the MVC stack leaving the model layer on Doctrine. Over time we’ll probably migrate the model layer over to Laravel’s Eloquent.

Before reading on, it would be useful to first read the official documentation we have written around 2fa and user session management.

Hopefully this description of how we did it will be useful for anyone else in the same boat, or even for anyone just trying to understand the Laravel authentication stack.

Two factor authentication (2fa) strengthens access security by
requiring two methods (also referred to as factors) to verify your
identity. Two factor authentication protects against phishing, social
engineering and password brute force attacks and secures your logins
from attackers exploiting weak or stolen credentials.

User session management allows a user to be logged in and remembered from multiple browsers / devices and to manage those sessions from within IXP Manager.

For 2fa, we used the antonioribeiro/google2fa-laravel package which is built on antonioribeiro/google2fa. If we were 100% within Laravel’s ecosystem this would have been easier but, because we use Doctrine, we needed to override a number of classes.

Structurally we need a database table to indicate if a user has 2fa enabled and to hold their 2fa secret – for this we created Entities\User2FA. Similarly, we have a controller to handle the UI interaction of enabling, configuring and disabling 2fa: User2FAController – this also includes generating QR codes for the typical 2fa activation process.

On the user session management side, we created Entities\UserRememberToken to hold multiple tokens per user (rather than Laravel’s default single token held in a column of the user’s database entry). For the frontend UI, UserRememberTokenController allows a user to view their active sessions and invalidate (delete) them if required.

The actual mechanism of enforcing 2fa is via middleware: IXP\Http\Middleware\Google2FA. This is added, as appropriate, to web routes via the RouteServiceProvider. This will check the user’s session and if 2fa is enabled but has not been completed, then the middleware will enforce 2fa before granting access to any routes covered by it.
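
For illustration, 2fa-enforcing middleware takes roughly the following shape. This is a sketch only – the is2faEnabled() method, the 2fa.passed session key and the 2fa.form route are hypothetical names, not IXP Manager’s actual identifiers:

// assumes: use Closure; use Illuminate\Support\Facades\Auth;
public function handle( $request, Closure $next )
{
    $user = Auth::user();

    // if the user has 2fa enabled but this session has not yet passed
    // a 2fa challenge, divert to the 2fa form before any covered route:
    if( $user && $user->is2faEnabled() && !$request->session()->get( '2fa.passed' ) ) {
        return redirect()->route( '2fa.form' );
    }

    return $next( $request );
}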

Note that because we also implemented user session management via long-lived cookies, and because the fact that a user has passed 2fa is held in the session, we need to persistently store that fact in the user’s specific remember token database entry. This is done via the Google2FALoginSucceeded listener. It is then later checked in the SessionGuard – where, if we log a user in via the long-lived cookie, we also mark them as having passed 2fa if so recorded.

Speaking of the SessionGuard, this was one of the bigger changes we had to make – we overrode the Illuminate\Auth\SessionGuard as we needed to replace a few functions to make 2fa and user session management work. We have kept these to a minimum:

  1. The user() function – Laravel’s long-lived session uses a single token but we require a token per device / browser. We also need to side-step 2fa for existing sessions as discussed above, and allow for features such as letting a user delete other long-lived sessions and having those sessions expire.
  2. The ensureRememberTokenIsSet() to actually create per-browser tokens (and to expire old ones).
  3. The queueRecallerCookie() so we can insert our own token rather than the default Laravel version.
  4. The cycleRememberToken() which Laravel uses to invalidate a token by changing it. We override it to delete the token instead.

Similarly we have to override the DoctrineUserProvider class to:

  1. Change retrieveByToken() to use our new database table, in which a user may have multiple sessions across different browsers / devices.
  2. Add addRememberToken() and purgeExpiredRememberTokens() to add and remove tokens.

We of course had to amend the AuthServiceProvider to use our new overridden classes.

The above constitutes the bulk of the changes. Because 2fa can be enforced via middleware, it doesn’t really touch the core Laravel authentication process. The user session management was more invasive and is responsible for the bulk of the changes required in the DoctrineUserProvider and SessionGuard.

What’s not mentioned above is the views – these are mainly covered in the views/user-remember-token directory (with a lot of inheritance from views/frontend) and the views/user/2fa directory.

While there are a lot more changes between v5.2.0 and v5.3.0 than 2fa and user session management, you can see the complete set of changes here.

Upgrading to PHP 7.3 on Ubuntu Bionic 18.04 LTS

Ubuntu 18.04 ships with PHP 7.2 by default but there are various reasons why you may wish to upgrade to newer versions. For example, active support for PHP 7.2 ends later this year – far sooner than the 2023 support window for the OS.

In addition, applications will be released in that 2018 – 2023 window that require newer PHP versions. For IXP Manager, we are releasing v5 this month and mandating PHP 7.3. We do this to stay current and to prevent developer apathy – insisting on legacy frameworks and packages that have been EOL’d is a major stumbling block for bringing on new developers and contributors. There’s also a real opportunity cost – I have a couple of free hours, will I work on project A or project B? If project A uses an old, stale toolchain where everything is that much more awkward than in project B, which would you choose?

So, from a typical LAMP stack install of Ubuntu 18.04, you’ll find something like the following packages for PHP:

root@ubuntu:/var/www/html# dpkg -l | grep php | cut -b 1-65
 ii  libapache2-mod-php                    1:7.2+60ubuntu1
 ii  libapache2-mod-php7.2                 7.2.17-0ubuntu0.18.04.1
 ii  php-common                            1:60ubuntu1
 ii  php-mysql                             1:7.2+60ubuntu1
 ii  php7.2-cli                            7.2.17-0ubuntu0.18.04.1
 ii  php7.2-common                         7.2.17-0ubuntu0.18.04.1
 ii  php7.2-json                           7.2.17-0ubuntu0.18.04.1
 ii  php7.2-mysql                          7.2.17-0ubuntu0.18.04.1
 ii  php7.2-opcache                        7.2.17-0ubuntu0.18.04.1
 ii  php7.2-readline                       7.2.17-0ubuntu0.18.04.1

Obviously your exact list will vary depending on what you installed. I find the easiest way to upgrade is to start by removing all installed PHP packages. Based on the above:

dpkg -r libapache2-mod-php libapache2-mod-php7.2 php-common   \
  php-mysql php7.2-cli php7.2-common php7.2-json php7.2-mysql \
  php7.2-opcache php7.2-readline

The go-to place for current versions of PHP on Ubuntu is Ondřej Surý’s PPA (Personal Package Archive). Ondřej maintains this in his own time so don’t be afraid to tip him here.

It’s easy to add this to 18.04 as follows:

add-apt-repository ppa:ondrej/php
apt-get update

Then install the PHP 7.3 packages you want / need. For example we can just take the package removal line above and install the 7.3 equivalents with:

apt install libapache2-mod-php libapache2-mod-php7.3 php-common \
    php-mysql php7.3-cli php7.3-common php7.3-json php7.3-mysql \
    php7.3-opcache php7.3-readline
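
If Apache doesn’t pick up the new module automatically, enable it and restart (assuming the stock Ubuntu Apache packaging):

# enable the PHP 7.3 Apache module and restart Apache
a2enmod php7.3
systemctl restart apache2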

And voilà:

php -v
 PHP 7.3.5-1+ubuntu18.04.1+deb.sury.org+1 (cli) (built: May  3 2019 10:00:24) ( NTS )

One post-installation check is to replicate any custom php.ini changes you may have made (max upload size, max post size, max memory usage, etc.).
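
To locate where those changes belong, PHP will list exactly which ini files the new version parses:

# list the ini files the new PHP is loading
php --ini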

Evaluating zsh

I’ve always been a bash user but I’ve recently decided to give zsh a whirl. It has some pretty useful features, such as path expansion and replacement (see this slideshare). And yes, I’m well aware of bash-completion, thank you very much.

It also has a nice ecosystem of extensions, including oh-my-zsh, with which I’m using plugins for git, composer (php), laravel5, brew, bower, vagrant, node and npm. I went with the agnoster theme and, for iTerm2 (my terminal application of choice), I installed the Solarized Dark and Light themes. Both work well with the agnoster theme. I also installed and use the Meslo font.

One issue I did find immediately is that things like file type colourisation with ls were not as good as in bash. To resolve this, I installed the warhol plugin (as well as brew install zsh-syntax-highlighting grc). Now I find my ls, ping, traceroute, etc. output all nicely coloured.

We use Dropbox at work and, to keep my work and home office laptops in sync, I moved the configs into Dropbox and symlinked to them:

cd ~
mv .oh-my-zsh Dropbox/ 
mv .zshrc Dropbox
ln -s Dropbox/.oh-my-zsh
ln -s Dropbox/.zshrc

This all works really well. My bash aliases are fully compatible so I just pull them in at the end of .zshrc (source ~/.bash_aliases). Lastly, to prevent the prompt including my username and hostname on my local laptop, I set the following in .zshrc:

export DEFAULT_USER=barryo

So far, so happy.

PhpStorm and Xdebug – macOS / Homebrew

After many years of Sublime Text and, latterly, Atom, I’ve decided to give an integrated IDE another look – this time PhpStorm. I’ve always dropped IDEs in the past as they tended to crash (hello Zend Studio) and were slow as hell (hello again Zend Studio). But so far so good – I’m only a couple of days into an evaluation license but it’s fast (admittedly I have fast laptops – Intel i7s with four cores, PCI SSDs and 16GB RAM) and it’s yet to crash.

One of the key advantages of IDEs is integrated debugging. This was ridiculously easy to set up with PhpStorm. I use Homebrew for PHP:

$ brew ls --full-name
 homebrew/completions/composer-completion
 homebrew/php/composer
 homebrew/php/php71
 homebrew/php/php71-intl
 homebrew/php/php71-mcrypt
 homebrew/php/php71-xdebug

I’ve then configured xdebug as follows:

$ cat /usr/local/etc/php/7.1/conf.d/ext-xdebug.ini
[xdebug]
zend_extension="/usr/local/opt/php71-xdebug/xdebug.so"
xdebug.remote_enable=1
xdebug.remote_host=localhost
xdebug.remote_port=9001
xdebug.remote_autostart=1
xdebug.idekey=PHPSTORM
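
Before going near the IDE, you can confirm the extension is actually loading:

$ php -m | grep -i xdebug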

If you’re not using Laravel’s Valet for local development then you should check it out immediately: https://laravel.com/docs/5.3/valet. If you are using it, issue a valet restart.

Port 9001 was chosen above as Valet tends to use 9000 also. We now need to reconfigure PhpStorm to listen on this port. Open preferences and type xdebug into the search box. Then find Languages & Frameworks -> PHP -> Debug in the left-hand navigation pane and change the port to 9001.

That’s pretty much it for PhpStorm. They really mean zero-configuration debugging. When editing a project in the IDE, there’s a Start Listening for PHP Debug Connections toggle icon in the top left – it looks like a phone. Just turn it on.

The last thing we need to do is have an easy way to enable Xdebug when we want it while testing our applications in the browser. Chrome has a very useful plugin for this: Xdebug-helper. Just install it, edit its options, and change the IDE from Eclipse to PhpStorm. You can now use this to start a debug session from within Chrome to your listening PhpStorm IDE.

Oh, I also just found this useful resource covering similar topics with a CGI/CLI xdebug split.

Linux (Ubuntu 16.04), PHP and MS SQL

In the many years I’ve been using the traditional LAMP stack, I’ve successfully managed to avoid having anything to do with MS SQL server. Until 2016. This year I’ve had to work quite a bit with it – administration, backups and, now, scripted queries from Linux with PHP.

I suspect I’m (a) lucky I haven’t had to do this before now; and (b) that Azure seems to have pushed Microsoft into greater Linux-based support for MS SQL. The evidence? This open source Microsoft repository with an MS SQL PHP binary driver for Linux, released just a few months ago.

NB: installing the Microsoft PHP driver is different to installing the Microsoft ODBC driver for SQL Server on Linux. These may even be incompatible.

For me, I just took a standard Ubuntu 16.04 install (64bit obviously) with PHP 7.0 and downloaded the latest MS PHP SQL extension (for me, at time of writing, this was 4.0.6). When you untar the Ubuntu16.tar file, copy the .so files to /usr/lib/php/20151012/ and then create a /etc/php/7.0/mods-available/msphpsql.ini file with contents:

extension=php_pdo_sqlsrv_7_nts.so
extension=php_sqlsrv_7_nts.so

Note that the tar also contains two ‘ts’ (thread-safe) versions of these files. Trying to use those resulted in errors. Link this for Apache2 / CLI as required. E.g. for PHP CLI:

cd /etc/php/7.0/cli/conf.d/
ln -s ../../mods-available/msphpsql.ini 20-msphpsql.ini
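
And similarly for Apache (or use Ubuntu’s phpenmod helper, which links it for all SAPIs in one go):

cd /etc/php/7.0/apache2/conf.d/
ln -s ../../mods-available/msphpsql.ini 20-msphpsql.ini
service apache2 restart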

You can confirm it’s working via:

$ php -i | grep sqlsrv
Registered PHP Streams => https, ftps, compress.zlib, php, file, glob, data, http, ftp, sqlsrv, phar
PDO drivers => sqlsrv
pdo_sqlsrv
pdo_sqlsrv support => enabled
pdo_sqlsrv.client_buffer_max_kb_size => 10240 => 10240
pdo_sqlsrv.log_severity => 0 => 0
sqlsrv
sqlsrv support => enabled
sqlsrv.ClientBufferMaxKBSize => 10240 => 10240
sqlsrv.LogSeverity => 0 => 0
sqlsrv.LogSubsystems => 0 => 0
sqlsrv.WarningsReturnAsErrors => On => On

And, finally, for using it, following the sample scripts from the repository worked a charm.
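
As a flavour of what that looks like, here’s a minimal sketch of a PDO connection and query (the server name, database and credentials are hypothetical placeholders):

<?php

// connect using the pdo_sqlsrv driver (placeholder server / credentials)
$db = new PDO( 'sqlsrv:Server=mssql.example.com;Database=exampledb', 'username', 'password' );
$db->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION );

// simple smoke test: list five table names from the system catalogue
$stmt = $db->query( 'SELECT TOP 5 name FROM sys.tables ORDER BY name' );

foreach( $stmt->fetchAll( PDO::FETCH_ASSOC ) as $row ) {
    echo $row['name'] . "\n";
}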