Nagios / Icinga Alerts via Pushover

I came across Pushover recently, a service which makes it easy to send real-time notifications to your Android and iOS devices. And easy it is. It also allows you to set up applications with logos so that you can have multiple Nagios installations shunting alerts to you via Pushover, each one easily identifiable. After just a day playing with this, it’s much nicer than SMS.

So, to set up Pushover with Nagios, first register for a free Pushover account. Then create a new application for your Nagios instance. I set the type to Script and also uploaded a logo. After this, you will be armed with two crucial pieces of information: your application API token/key ($APP_KEY) and your user key ($USER_KEY).

To get the notification script, clone this GitHub repository or just download this file – notify-by-pushover.php.

You can test this immediately with:

echo "Test message" | \
    ./notify-by-pushover.php HOST $APP_KEY $USER_KEY RECOVERY OK

The parameters are:

USAGE: notify-by-pushover.php <HOST|SERVICE> <$APP_KEY> \
    <$USER_KEY> <NOTIFICATIONTYPE> <STATE>

Now, set up the new notifications in Nagios / Icinga:

# 'notify-by-pushover-service' command definition
define command{
    command_name notify-by-pushover-service
    command_line /usr/bin/printf "%b" "$NOTIFICATIONTYPE$: \
        $SERVICEDESC$@$HOSTNAME$: $SERVICESTATE$           \
        ($SERVICEOUTPUT$)" |                               \
      /usr/local/nagios-plugins/notify-by-pushover.php     \
        SERVICE $APP_KEY $CONTACTADDRESS1$                 \
        $NOTIFICATIONTYPE$ $SERVICESTATE$
}

# 'notify-by-pushover-host' command definition
define command{
  command_name notify-by-pushover-host
  command_line /usr/bin/printf "%b" "Host '$HOSTALIAS$'    \
        is $HOSTSTATE$: $HOSTOUTPUT$" |                    \
      /usr/local/nagios-plugins/notify-by-pushover.php     \
        HOST $APP_KEY $CONTACTADDRESS1$ $NOTIFICATIONTYPE$ \
        $HOSTSTATE$
}

Then, in your contact definition(s) add / update as follows:

define contact{
  contact_name ...
  ...
  service_notification_commands ...,notify-by-pushover-service
  host_notification_commands ...,notify-by-pushover-host
  address1 $USER_KEY
}

Make sure you break something to test that this works!

Recovering MySQL Master-Master Replication

MySQL Master-Master replication is a common practice and is implemented by having the auto-increment on primary keys increase by n, where n is the number of master servers. For example (in my.cnf):

auto-increment-increment = 2
auto-increment-offset = 1
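
The second master in a two-master setup carries the complementary offset (a sketch; each master needs a unique offset):

# my.cnf on the second master
auto-increment-increment = 2
auto-increment-offset = 2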

This article is not about implementing this but rather about recovering from it when it fails. A word of caution – this form of master-master replication is little more than a useful hack that tends to work. It is typically used to implement hot stand-by master servers along with a VRRP-like protocol on the database IP. If you implement this with a high volume of writes, or with the expectation of writing to both masters without the application being aware of this, you can expect a world of pain!

It’s also essential that you use Nagios (or another tool) to monitor the slave replication on all masters so you know when an issue crops up.

So, let’s assume we have two master servers and one has failed. We’ll call these the Good Server (GS) and the Bad Server (BS). It may be the case that replication has failed on both and then you’ll have the nightmare of deciding which to choose as the GS!

  1. You will need the BS to not process any queries from here on in. This may already be the case in an FHRP (e.g. VRRP) environment; but if not, use a combination of stopping services, firewalls, etc. to stop / block access to the BS. It is essential that the BS does not process any queries besides our own during this process.
  2. On the BS, execute STOP SLAVE to prevent it replicating from the GS during the process.
  3. On the GS, execute:
    1. STOP SLAVE; (to stop it taking replication information from the bad server);
    2. FLUSH TABLES WITH READ LOCK; (to stop it updating for a moment);
    3. SHOW MASTER STATUS; (and record the output of this);
  4. Switch to the BS and import all the data from the GS via something like: mysqldump -h GS -u root -psoopersecret --all-databases --quick --lock-all-tables | mysql -h BS -u root -psoopersecret. Note that I am assuming that you are replicating all databases here. Change as appropriate if not.
  5. You can now switch back to the GS and execute UNLOCK TABLES to allow it to process queries again.
  6. On the BS, set the master status with the information you recorded from the GS via: CHANGE MASTER TO master_log_file='mysql-bin.xxxxxx', master_log_pos=yy;
  7. Then, again on the BS, execute START SLAVE. The BS should now be replicating from the GS again and you can verify this via SHOW SLAVE STATUS.
  8. We now need to have the GS replicate from the BS again. On the BS, execute SHOW MASTER STATUS and record the information. Remember that we have stopped the execution of queries on the BS in step 1 above. This is essential.
  9. On the GS, using the information just gathered from the BS, execute: CHANGE MASTER TO master_log_file='mysql-bin.xxxxxx', master_log_pos=yy;
  10. Then, on the GS, execute START SLAVE. You should now have two way replication again and you can verify this via SHOW SLAVE STATUS on the GS.
  11. If necessary, undo anything from step 1 above to put the BS back into production.
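
For convenience, here is the SQL side of the procedure condensed into one listing (a sketch; substitute the log file and position values you record as you go):

-- Step 2, on the BS:
STOP SLAVE;
-- Step 3, on the GS:
STOP SLAVE;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- record File and Position
-- Step 4: import the data into the BS (mysqldump, as above), then step 5, on the GS:
UNLOCK TABLES;
-- Steps 6-8, on the BS:
CHANGE MASTER TO master_log_file='<GS File>', master_log_pos=<GS Position>;
START SLAVE;
SHOW MASTER STATUS;   -- record File and Position
-- Steps 9-10, on the GS:
CHANGE MASTER TO master_log_file='<BS File>', master_log_pos=<BS Position>;
START SLAVE;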

There is a --master-data switch for mysqldump which would remove the requirement to lock the GS above but, in our practical experience, there are various failure modes for the BS and the --master-data method does not work for them all.
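
For reference, that variant looks something like this (a sketch; --master-data=1 writes the matching CHANGE MASTER TO statement into the dump itself):

mysqldump -h GS -u root -psoopersecret --master-data=1 \
    --all-databases --quick | mysql -h BS -u root -psoopersecret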

Synchronising Microsoft Exchange to Another IMAP Server

This post is less a detailed how-to and more a collection of useful links. We were tasked with the job of syncing about 1,000 MS Exchange mailboxes to a Dovecot server. This needed to be done via an administrator account on the Exchange end as individual user passwords were not available.

The tool of choice for this is imapsync. Unfortunately, there is no single formula that will work for everyone as it can depend on the Exchange configuration and version as well as the use of domains on the Exchange and ActiveDirectory servers.

To help understand the various combinations of logins for Exchange, I found the following invaluable: Understanding login strings with POP3/IMAP.

Also invaluable is the imapsync FAQ – just search for mentions of Exchange.

In the end, the following worked for me (but your mileage will most definitely vary!):

./imapsync --host1 exchange-server \
    --user1 'domain/adminuser/user' --password1 'admin-password' \
    --authmech1 LOGIN \
    --host2 dovecot-server --user2 user@example.com \
    --password2 userpassword

One key element here is that when logging into Exchange as an individual user I had to use --authmech1 NTLM; but if you use that auth method with the admin login string above, you will always end up logging into the admin’s mailbox, not the user’s. That, at least, was my experience.
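
With around 1,000 mailboxes you will want to drive imapsync from a script. A minimal sketch, assuming a hypothetical users.txt containing one 'username destination-password' pair per line:

#! /bin/bash
# Hypothetical wrapper: sync each mailbox via the Exchange admin account
while read user pass2; do
    ./imapsync --host1 exchange-server \
        --user1 "domain/adminuser/${user}" --password1 'admin-password' \
        --authmech1 LOGIN \
        --host2 dovecot-server --user2 "${user}@example.com" \
        --password2 "${pass2}"
done < users.txt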

Adventures with LDAP (OpenLDAP) – SSL, Multi-Master Replication and Monitoring

In my career to date, I successfully managed to avoid all but peripheral engagement with OpenLDAP. Until recently, that is – we had to build a Microsoft Exchange-like environment with open source software in a way that was closely integrated and easily managed. But more on that another time. For anyone else diving into OpenLDAP, here are some articles I have penned on my experiences:

Securing LDAP with TLS / SSL

This is a continuation of a previous post, Creating an LDAP Addressbook / Directory where we add SSL encryption to the directory.

In our case, we used a signed Unified Communications Certificate (UCC) (also known as a Subject Alternative Names (SAN) Certificate) from GoDaddy. The following will work for those as well as standard signed certificates. I have not tested with wildcard certificates. If you want to use a self-signed certificate, see the TLS and SSL section of Ubuntu’s OpenLDAP documentation as well as notes at the end of this document.

GoDaddy (or any other signing authority) will, when presented with a CSR (Certificate Signing Request), return a signed certificate as well as their own CA cert. You will already have your private key which you used to generate the CSR. With this information, prepare a file called tls.ldif with (for example):

dn: cn=config
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/ssl/gd_bundle.crt
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ssl/webmail.opensolutions.ie.crt
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ssl/webmail.opensolutions.ie.key

And apply the change via:

ldapmodify -Y EXTERNAL -H ldapi:/// -f tls.ldif

On Ubuntu (your own distribution may vary here), you need to add the SSL service by editing /etc/default/slapd and updating the SLAPD_SERVICES line to read:

SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///"

and then restart the server (/etc/init.d/slapd restart). You should now consider firewalling the standard port (389) to force users to use the encrypted SSL port.
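
For example, with iptables that might look like the following (a sketch; adjust for your firewall of choice and permit any hosts that genuinely need plain LDAP):

# allow local unencrypted LDAP, block it from everywhere else
iptables -A INPUT -i lo -p tcp --dport 389 -j ACCEPT
iptables -A INPUT -p tcp --dport 389 -j DROP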

Following our example with Thunderbird, you can now update your LDAP directory configuration by setting the hostname to match the subject name in your UCC / certificate (e.g. abook.opensolutions.ie) and the port to 636.

Notes for Self-Signed Certificates

If you are using a self-signed certificate, you need to ensure a couple of things. Let’s assume you created a self-signed certificate for abook.opensolutions.ie. Clients need a special configuration parameter for untrusted / self-signed certificates. Copy your self-signed certificate (e.g. /etc/ssl/webmail.opensolutions.ie.crt above) to the client machine(s) – say /etc/ssl/certs/abook.crt.

Now, on the client machine, add the following line to /etc/ldap/ldap.conf:

TLS_CACERT /etc/ssl/certs/abook.crt

Secondly, the hostname you use to access the LDAP server must also match the certificate subject name – i.e. use abook.opensolutions.ie in this example rather than an IP address / alternative hostname.
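
You can then verify the TLS handshake and certificate chain from the client with OpenSSL:

openssl s_client -connect abook.opensolutions.ie:636 -CAfile /etc/ssl/certs/abook.crt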

Creating an LDAP Addressbook / Directory

This article will describe my experiences in creating a read-only LDAP address book (with Thunderbird as a proof of concept); also known as a corporate directory. This is written by someone who has (to put it mildly) hated LDAP for years and dies a little every time he reads an introduction to LDAP that describes it in terms of DNS.

There is one important point to make before we start – while these instructions should apply to any *nix distribution, they use OpenLDAP/slapd version 2.4, which uses the newer runtime dynamic configuration engine. All of the below was performed on Ubuntu 12.10.

Installing OpenLDAP is as easy as (root user is assumed in all of the following):

apt-get install slapd ldap-utils

As part of this process, you’ll be asked to enter an admin password – record this as it will be stored in hashed format.

You can immediately run some LDAP queries to test / get to know your system:

  • Dump your entire configuration:
ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config
  • List all configuration objects:
ldapsearch -Q -LLL -Y EXTERNAL -H ldapi:/// -b cn=config dn
  • List all installed schemas:
ldapsearch -Q -LLL -Y EXTERNAL -H ldapi:/// -b cn=schema,cn=config dn

In the output from the last command, you’ll see core, cosine, nis and inetorgperson. These are all we need for an address book directory. If you are so inclined, there is a published but neglected schema for Thunderbird specifically but it is not a standard and those fields may not (and probably will not) be supported by other clients.

One thing you might want to do before you start is up the logging level (from none by default) as follows. Don’t forget to change it back when you’re up and running as your logs will fill up fast.

cat <<EOF | ldapmodify -Y EXTERNAL -H ldapi:///
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: 296
EOF

The installation will have created an organisation object based on your domain (or nodomain). E.g.

 ldapsearch -Q -LLL -Y EXTERNAL -H ldapi:/// -b dc=nodomain
dn: dc=nodomain
objectClass: top
objectClass: dcObject
objectClass: organization
o: nodomain
dc: nodomain

dn: cn=admin,dc=nodomain
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: admin
description: LDAP administrator

You can find out what domain yours is under by examining the olcSuffix field of the output of:

ldapsearch -Q -LLL -Y EXTERNAL -H ldapi:/// -b olcDatabase={1}hdb,cn=config

You may want to modify this to suit or add new objects. We’re going to add new objects – which will work fine as long as the new olcSuffix does not conflict with the output from the above.

Let’s start with creating a database for our directory. First we need a directory on the filesystem:

mkdir -p /var/lib/ldap/opensolutions
chown openldap: /var/lib/ldap/opensolutions

Now create a file (say db-create.ldif) with something like:

# Database creation
dn: olcDatabase=hdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: hdb
olcSuffix: dc=opensolutions,dc=ie
olcDbDirectory: /var/lib/ldap/opensolutions
olcRootDN: cn=admin,dc=opensolutions,dc=ie
olcRootPW: gOeBTo5vfBdUs
olcDbConfig: set_cachesize 0 2097152 0
olcDbConfig: set_lk_max_objects 1500
olcDbConfig: set_lk_max_locks 1500
olcDbConfig: set_lk_max_lockers 1500
olcDbIndex: cn,sn,uid,mail pres,eq,approx,sub
olcDbIndex: objectClass eq
olcLastMod: TRUE
olcDbCheckpoint: 512 30
olcAccess: to attrs=userPassword 
  by dn="cn=ldapadmin,dc=opensolutions,dc=ie" write 
  by anonymous auth 
  by self write 
  by * none
olcAccess: to attrs=shadowLastChange 
  by self write 
  by * read
olcAccess: to dn.base="" by * read
olcAccess: to * 
  by dn="cn=admin,dc=opensolutions,dc=ie" write 
  by * read

And instruct LDAP to create the database:

ldapadd -Y EXTERNAL -H ldapi:/// -f db-create.ldif

The above creates a new LDAP database with some useful indexes. You can ignore the olcAccess entries for now as we’ll come back and address these later. What is above is fairly typical of a default installation.
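
You can confirm the new database exists with something like:

ldapsearch -Q -LLL -Y EXTERNAL -H ldapi:/// -b cn=config dn | grep hdb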

Now, we need to add an organization object, an admin user to manage that (i.e. add, edit and remove entries from the corporate database) and an organisationalUnit object to hold our staff information. Create a file (say opensolutions.ldif) containing:

# Organisation object
dn: dc=opensolutions,dc=ie
dc: opensolutions
description: Open Solutions Corporate Directory
objectClass: top
objectClass: dcObject
objectClass: organization
o: Open Source Solutions Limited

# Admin user
dn: cn=ldapadmin,dc=opensolutions,dc=ie
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: ldapadmin
description: Corporate Directory Administrator
userPassword: Jh90Ckb.c.Tp6

# Unit for our corporate directory
dn: ou=people,dc=opensolutions,dc=ie
ou: people
description: All people in Open Solutions
objectclass: organizationalUnit

And add these objects to the database:

ldapadd -x -D cn=admin,dc=opensolutions,dc=ie -W -f opensolutions.ldif

Note the password is as specified in the database creation object (i.e. gOeBTo5vfBdUs in this case).

A quick word on security and access control. By default, anonymous users / anyone can read all your entries. If you are publishing a public directory, this may be okay. If not, create an auth.ldif file with (for example):

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to attrs=userPassword,shadowLastChange
  by dn="cn=ldapadmin,dc=opensolutions,dc=ie" write
  by self read
  by anonymous auth
  by * none
olcAccess: {1}to dn.subtree="dc=opensolutions,dc=ie"
  by dn="cn=ldapadmin,dc=opensolutions,dc=ie" write
  by users read

And apply it with:

ldapmodify -Y EXTERNAL -H ldapi:/// -f auth.ldif

This will:

  • allow access to user password fields for authentication purposes (not for reading);
  • allow any authenticated user to read the corporate directory;
  • allow the ldap admin to make changes;
  • deny all other access to this database (implicit rule).

See OpenLDAP’s Access Control page for more information.
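
A quick way to verify the new ACLs is an anonymous search, which should now return no entries:

ldapsearch -x -LLL -b dc=opensolutions,dc=ie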

Now, let’s add two sample entries. Create a file people.ldif with:

dn: cn=Barry O'Donovan,ou=people,dc=opensolutions,dc=ie
objectClass: inetOrgPerson
uid: barryo
sn: O'Donovan
givenName: Barry
cn: Barry O'Donovan
cn: barry odonovan
displayName: Barry O'Donovan
userPassword: testpw123
mail: sample-email@opensolutions.ie
mail: sample-email@barryodonovan.com
o: Open Solutions
mobile: +353 86 123 456
title: Chief Packet Pusher
initials: BOD
carlicense: HISCAR 123
ou: Computer Services

dn: cn=Joe Bloggs,ou=people,dc=opensolutions,dc=ie
objectClass: inetOrgPerson
uid: joeb
sn: Bloggs
givenName: Joe
cn: Joe Bloggs
displayName: Joe Bloggs
userPassword: testpw124
mail: joeb@opensolutions.ie
o: Open Solutions
title: Chief Coffee Maker
ou: Kitchen

Add these to the directory using the ldapadmin user:

ldapadd -x -D "cn=ldapadmin,dc=opensolutions,dc=ie" -w Jh90Ckb.c.Tp6 -f people.ldif

You can test this with a couple of searches:

ldapsearch -xLLL -D "cn=Barry O'Donovan,ou=people,dc=opensolutions,dc=ie" -w testpw123 \
    -b dc=opensolutions,dc=ie
ldapsearch -xLLL -D "cn=Barry O'Donovan,ou=people,dc=opensolutions,dc=ie" -w testpw123 \
    -b ou=people,dc=opensolutions,dc=ie mail=sample-email@opensolutions.ie

slapd will listen on all interfaces on the standard port (389) when installed on Ubuntu. So, to test, we turn to Thunderbird:

  1. Open the address book (e.g. Tools -> Address Book)
  2. Add a new directory (File -> New -> LDAP Directory…)
  3. In the General tab (assuming we’re setting up Barry O’Donovan’s Thunderbird), set:

Name: Corporate Directory (whatever you like)
Hostname: 127.0.0.1 (or as appropriate)
Base DN: ou=people,dc=opensolutions,dc=ie
Port number: 389
Bind DN: cn=Barry O'Donovan,ou=people,dc=opensolutions,dc=ie

  4. In the Advanced tab:

Don’t return more than 100 results – change if you wish

Scope: Subtree

Login method: Simple

  5. Click OK to save the settings
  6. Right-click on the directory in the left pane and select Properties
  7. Test by going to the Offline tab and clicking Download Now
  8. Enter your password (and use the password manager) – the password is as per the person object above, so in this case: testpw123
  9. Test by typing Joe into the search bar on the top right
  10. Joe Bloggs should appear in the results.

Congratulations! You have a corporate directory.

Apple OS X as an NFS Server (with Linux Clients)

For a customer, I had to set up a Linux-based virtualised environment on a MacBook Pro using VirtualBox. This environment included making a couple of 8TB external hard drives available under NFS to the Linux hosts.

In all fairness, what better use can one put OS X to than to virtualise Linux?!?  Just kidding fanboys… well, sort of 😉

Let’s begin with a quick description of the environment:

  • A MacBook Pro (MBP) with OS X 10.8.2
  • VirtualBox with its own network (MBP: 192.168.56.1/24) for NFS as well as bridged adapters for general Internet access;
  • Multiple external HDDs – for simplicity, let’s just do one here which is mounted under /Volumes/DATA-1.

We want to export the DATA-1 volume to the Linux clients. That bit’s actually not too hard (see below); the main issue is that we needed to match what on Linux is called no_root_squash – i.e. so the root user on the Linux clients would have root access to the NFS shares. That bit was harder.

I’ll assume root access / sudo use in the following commands.

To configure NFS, we edit / create /etc/exports (e.g. nano /etc/exports) such as:

/Volumes/DATA-1 -maproot=root:wheel -network 192.168.56.0 -mask 255.255.255.0

In other words:

  • export /Volumes/DATA-1
  • map the clients’ root user to the local root user and the clients’ root group to the local group wheel (gid = 0)
  • allow the export to be accessed by any host on the private VirtualBox network.

With that entry, NFS can be enabled at boot and started via:

nfsd enable
nfsd start

On a Linux client, this can then be mounted at boot with an /etc/fstab entry:

192.168.56.1:/Volumes/DATA-1 /mnt/data-1 nfs defaults 0 0
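
After mounting, a quick root-squash test from the Linux client looks like this (a sketch):

mount /mnt/data-1
touch /mnt/data-1/root-write-test   # run as root; fails if root is being squashed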

The problem was that no matter what variation of options I used, I could not get root access from the Linux clients.

The answer came by chance when I glanced at an odd mount option on the external HDD:

/dev/disk2s2 on /Volumes/DATA-1 (hfs, NFS exported, local, nodev, nosuid, journaled, noowners)

noowners? What, pray tell, is this? The internet provided some insight:

In Leopard, due to an unfortunate design decision by Apple, “admin” authentication is now required to make this change (no noowners) and non-admin users are no longer able to use “Get Info” to change this setting, even on devices they own and have mounted themselves.

An unfortunate design decision indeed. The temporary solution is to execute:

mount -u -o owners /Volumes/DATA-1

Thereafter, I have root access / an effective UID of 0 from the Linux clients. This of course needs to be entered each time the volume is mounted – if someone has a more permanent solution, I’m all ears (see below for a cron script I have implemented for this).

Just as an aside, we have a lot of NFS activity which required some tuning. First, we added additional NFS threads by adding nfs.server.nfsd_threads=16 to /etc/nfs.conf (execute nfsd restart after that). I’ve also added the following line to /etc/rc.local:

sysctl -w kern.aiomax=64 kern.aioprocmax=32 kern.aiothreads=4

Cron Script for Automatically Removing noowners

As mentioned above, removing this mount option every time you connect these HDDs is damn annoying at best and error prone at worst. I now have a script for this, located at /var/root/bin/mount-check.sh:

#! /bin/bash

# count mounts of /Volumes/DATA-1 that still have the noowners option set
NOOWNERS=`/sbin/mount | grep "/Volumes/DATA-1" | grep noowners | wc -l`

# if so, remount it with owners enabled
# (the substitution strips the whitespace that wc pads its output with)
if [[ "X${NOOWNERS//[[:space:]]/}X" = "X1X" ]]; then
    /sbin/mount -u -o owners /Volumes/DATA-1;
fi

This is then executed via a new line in /etc/crontab:

* * * * *    root    /var/root/bin/mount-check.sh


Monitoring SSL Certificate Expiry Dates with Nagios

It is good practice to separate Nagios checks of your web server being available from checks of SSL certificate expiry. The latter need only run once per day and should not add unnecessary noise to the more immediately important alert of a web service failure.

To use check_http to monitor SSL certificate expiry dates, first ensure you have a daily service definition – let’s call this service-daily. Now create two service commands as follows:

define command{
    command_name check_cert
    command_line /usr/lib/nagios/plugins/check_http -S \
        -I $HOSTADDRESS$ -w 5 -c 10 -p $ARG1$ -C $ARG2$
}

define command{
    command_name check_named_cert
    command_line /usr/lib/nagios/plugins/check_http -S \
        -I $ARG3$ -w 5 -c 10 -p $ARG1$ -C $ARG2$
}

The second is useful for checking named certificates on additional IP addresses on web servers serving multiple SSL domains.
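
Before wiring these into Nagios, you can run the plugin by hand to sanity-check a certificate (a sketch, using a hypothetical mail server):

/usr/lib/nagios/plugins/check_http -S -I mail.example.com -p 993 -C 21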

We can use these to check SSL certificates for POP3, IMAP, SMTP and HTTP:

define service{
    use service-daily
    host_name mailserver
    service_description POP3 SSL Certificate
    check_command check_cert!995!21
}

define service{
    use service-daily
    host_name mailserver
    service_description IMAP SSL Certificate
    check_command check_cert!993!21
}

define service{
    use service-daily
    host_name mailserver
    service_description SMTP SSL Certificate
    check_command check_cert!465!21
}

define service{
    use service-daily
    host_name webserver
    service_description SSL Cert: www.example.com
    check_command check_named_cert!443!21!www.example.com
}

define service{
    use service-daily
    host_name webserver
    service_description SSL Cert: www.example.net
    check_command check_named_cert!443!21!www.example.net
}

Nagios Plugin for Checking Backups via rsnapshot

We’ve just added a check_rsnapshot.php script to our nagios-plugins bundle on Github. This script will verify rsnapshot backups via Nagios using a number of checks / tests:

  • minfiles – checks the number of files in a snapshot against a minimum expected number;
  • minsize – checks the size of a snapshot against a minimum expected size;
  • log – parses the rsnapshot log to ensure the most recent runs for each retention period completed successfully;
  • timestamp – checks for files created server side containing a timestamp and thus ensuring snapshots are succeeding;
  • rotation – checks that retention directories are being rotated; and
  • dir-creation – checks that retention directories are being created.

Please see this Github wiki page for more information including instructions.

Analysing MySQL Slow Query Logs

MySQL has a really useful feature that allows it to log slow queries, where slow means a minimum execution time defined by you (with microsecond resolution). It helps a lot in diagnosing website outages or slow responsiveness issues after the fact.
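
Enabling the log is a my.cnf change along these lines (a sketch; fractional long_query_time values need a reasonably recent MySQL or the Percona patch mentioned below):

# log queries taking longer than half a second (hypothetical threshold)
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 0.5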

Unfortunately I couldn’t find any nice graphical tools for analysing these but there are a few command line tools:

mysqldumpslow

MySQL’s own tool, mysqldumpslow, aggregates queries and allows you to sort them by: query time or average query time; lock time or average lock time; rows sent or average rows sent; or the number of queries.
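
For example, to show the top ten queries sorted by total query time (a sketch):

mysqldumpslow -s t -t 10 /var/log/mysql/mysql-slow.log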

Percona’s MySQL Slow Query Log Analyser

Back in 2006, Percona’s Peter Zaitsev wrote about their own version of a slow query log analyser (local copy), which has given me good results. Note that their microsecond timing patch has since been incorporated into mainstream MySQL.

One of the main differences over MySQL’s own version is that, as well as printing the aggregated query (with number and string literals wildcarded), it also prints a real example of the query, allowing a copy and paste into MySQL for execution with EXPLAIN.

Example output with query details redacted:

### 230 Queries 
### Total time: 4708.948293, Average time: 20.4736882304348
### Taking 0.093420 to 203.693466 seconds to complete
### Rows analyzed 0 - 141008
SET timestamp=XXX;
SELECT ... FROM ... AS A 
        INNER JOIN ... AS C ON C.item_id = A.item_id 
    WHERE XXX AND C.item_lang = 'XXX' AND ... 
    ORDER BY CATALOG.item_sort LIMIT XXX;

SET timestamp=1348032761;
SELECT ... FROM ... AS A 
        INNER JOIN ... AS C ON C.item_id = A.item_id 
    WHERE 1 AND C.item_lang = '1' AND ... 
    ORDER BY C.item_sort LIMIT 1;