Monitoring the Health of your Security Certificates

Security Certificates

Most modern websites are protected by some form of Security Certificate that ensures the data transmitted from your computer to it (and vice versa) is encrypted. You can usually tell if you are interacting with an encrypted website via the presence of a padlock symbol (usually green in colour) near the website address in your browser.

These certificates are more commonly known as SSL Certificates but in actual fact, the more technically correct name for them is TLS Certificates (it’s just that nobody really calls them that as the older name has quite a sticky sound/feel to it).

Certificate Monitoring

In any case, one of the things we do a lot of at Red Hat Mobile is monitoring, and over the years we’ve designed a large collection of security certificate monitoring checks. The majority of these are powered by the OpenSSL command-line utility (on a Linux system), which contains some pretty neat features.

This article explains some of my favourite things you can do with this utility, targeting a website secured with an SSL Certificate.

Certificate Analysis

Certificate Dumps

Quite often, in order to extract interesting data from a security certificate, you first need to dump its contents to a file (for subsequent analysis):

$ echo "" | openssl s_client -connect <server-address>:<port> > /tmp/cert.txt

You will, of course, need to replace <server-address> and <port> with the details of the particular website you are targeting.

Determine Expiry Date

Using the dump of your certificate from above, you can then extract its expiry date like this:

$ openssl x509 -in /tmp/cert.txt -noout -enddate|cut -d"=" -f2
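
For monitoring purposes, the -checkend option can save you the date arithmetic: it exits with status 0 if the certificate will still be valid after the given number of seconds. The sketch below is illustrative only; rather than using a dump from a live server, it generates a throwaway self-signed certificate (the file paths and Common Name are placeholders, not real infrastructure):

```shell
# Illustrative sketch: create a throwaway self-signed certificate
# (valid for 365 days) so we have something local to check against.
# File paths and the CN are placeholders, not from a real server.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/test-key.pem -out /tmp/test-cert.pem \
  -days 365 -subj "/CN=test.example.com" 2>/dev/null

# -checkend takes a number of seconds; exit status 0 means the
# certificate will still be valid after that period (30 days here).
openssl x509 -in /tmp/test-cert.pem -noout -checkend $((30 * 24 * 3600)) \
  && echo "Certificate OK" \
  || echo "Certificate expires within 30 days"
```

Against a real website, you would run the same -checkend test on the certificate dumped to /tmp/cert.txt instead.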

Determine Common Name (Subject)

The Common Name (sometimes called Subject) for a security certificate is the website address by which the secured website is normally accessed. The reason it is important is that, in order for your website to operate securely (and seamlessly to the user), the Common Name in the certificate must match the server’s Internet address.

$ openssl x509 -in /tmp/cert.txt -noout -subject | sed -n '/^subject/s/^.*CN=//p'

Extract Cipher List

This is a slightly more technical task but useful in some circumstances nevertheless (e.g. when determining the level of encryption being used).

$ echo "" | openssl s_client -connect <server-address>:<port> 2>/dev/null | grep -i "cipher"

View Certificate Chain

In order that certain security certificates can be verified more thoroughly, a trust relationship between them and the Certificate Authority (CA) that created them must be established. Most desktop web browsers are able to do this automatically (as they are pre-programmed with a list of authorities they will always trust) but some mobile devices need some assistance with establishing this trust relationship.

To assist such devices with this, the website administrator can opt to install an additional set of certificates on the server (sometimes known as Intermediate Certs) which will provide a link (or chain) from the server to the CA. This chain can be verified as follows:

$ echo "" | openssl s_client -connect <server-address>:<port> -showcerts -status -verify 0 2>&1 | egrep "Verify return|subject=/serial"

The existence of the string “0 (ok)” in the response indicates a valid certificate chain.

OpenSSL

You can find out more about the openssl command-line utility at http://manpages.ubuntu.com/manpages/xenial/en/man1/openssl.1ssl.html.

The True Value of a Modern Smartphone

While vacationing with my family recently, I stumbled into a conversation with my 11-year-old daughter about smartphones and the ever-growing number of other devices they are replacing as they digitally transform our lives.

For fun, we decided to compare the relative cost of the vacation with and without my smartphone at the time (a Samsung Galaxy S3, by the way). Imagining we’d taken the same vacation a mere 10 years earlier, how much extra would it have cost without that smartphone?

Smart Cost Savings

I was actually quite shocked at the outcome, both in terms of the number of other devices the modern smartphone now replaces (we managed to count 10) and at the potential cost savings it can yield, which we estimated at a whopping $3,450!

Smart Cost Analysis

The estimations below are really just for fun and are not based on very extensive research on my part (more of a gut feeling about a moment in time, plus some quick Googling). You can also assume a 3-week vacation near some theme parks in North America.

Telephony: $100

Assuming two 15 minute phone calls per week, from USA to Ireland, at mid-week peak rates, you could comfortably burn $100 here.

Camera: $1,000

Snapping around 1,000 old-school, non-digital photos (at 25 photos per 35mm roll of film) would require approximately 40 rolls of film (remember, no live preview). Then, factoring in the cost of a decent SLR camera, plus the cost of developing those 40 rolls of film, you could comfortably spend well in excess of $1,000 here.

Of course, digital cameras would indeed have been an option 10 years ago too, but it’s not unreasonable to suggest that a decent digital camera (with optical zoom, of sufficient portability and quality for a 3-week family vacation) could also have set you back $1,000.

Music Player: $300

The cost of an Apple iPod in 2005 was around $299.

GPS / Satellite Navigation: $400

It’s possible that in 2005, the only way to obtain a GPS system for use in North America was to rent one from the car rental company. Today, this costs around $10 per day, so let’s assume it would have cost around/under $20 per day in 2005.

Games Console: $300

The retail price for a Nintendo DS in 2005 was $149.99 but you also need to add in the cost of a selection of games, which cost around $50 each. Let’s be reasonable and suggest 3 games (one for each week of the vacation).

Laptop Computer: $1,000

I’m not entirely sure how practical/easy it would have been to access the Internet (at the same frequency) while on vacation in 2005 (i.e. how many outlets offered WiFi at all, never mind free WiFi). Internet cafés would have been an option too, but would not have offered the levels of flexibility I’d have needed to catch up on emails and update/author some important business documents, so let’s assume the price of a small laptop here.

Mobile Hotspot / MiFi: $200

Again, I’m not quite sure if these were freely available (or feasible) in 2005, but let’s nominally assume they were and price them at double what they cost today, plus $100 for Internet access itself.

Alarm Clock: $50

I guess you could request a wake-up call in your hotel, but if you were not staying in a hotel and needed an alarm clock, you’d either have needed a watch with one on it or had to purchase an alarm clock.

Compass: $50

Entirely optional of course, but if you’re the outdoor type and fancy a little roaming in some of the national parks, you might like to have a decent compass with you.

Torch: $50

Again, if you’re the outdoor type, or just like to have some basic/emergency tools with you on vacation, you might have brought (or purchased) a portable torch or Maglite Flashlight.

10 Ways to Analyse your Apache Access Logs using Bash

Since its original launch in 1995, the Apache Web Server continues to be one of the most popular web servers in use today. One of my favourite parts of working with Apache during this time has been discovering and analysing the wealth of valuable information contained in its access logs (i.e. the files produced on foot of general usage, as users view the websites served by Apache on a given server).

And while there are plenty of free tools available to help analyse and visualise this information (in an automated or report-driven way), it can often be quicker (and more efficient) to just get down and dirty with the raw logs and fetch the data you need that way instead (subject to traffic volumes and log file sizes, of course). This is also a lot more fun.

For the purposes of this discussion, let’s assume that you are working with compressed backups of the access logs and have a single, compressed file for each day of the month (e.g. the logs were rotated by the operating system). I’m also assuming that you are using the standard “combined” LogFormat.

1. Total Number of Requests

This is just a simple word count (wc) on the relevant file(s):

Requests For Single Day

$ zcat access.log-2014-12-31.gz | wc -l

Requests For Entire Month

$ zcat access.log-2014-12-*.gz | wc -l

2. Requests per Day (or Month)

This will show the number of requests per day for a given month:

$ for d in {01..31}; do echo "$d = `zcat access.log-2014-12-$d.gz | wc -l`"; done

Or the requests per month for a given year:

$ for m in {01..12}; do echo "$m = `zcat access.log-2014-$m-*.gz | wc -l`"; done

3. Traffic Sources

Show the list of unique IP addresses the given requests originated from:

$ zcat access.log-2014-12-31.gz | cut -d" " -f1 | sort | uniq

List the unique IP addresses along with the number of requests from each, from lowest to highest:

$ zcat access.log-2014-12-31.gz | cut -d" " -f1 | sort | uniq -c | sort -n
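
If you want to try these pipelines without access to real logs, you can fabricate a few lines in combined log format first. Everything below (IP addresses, paths, timestamps) is made up purely for illustration:

```shell
# Fabricate three sample requests in combined log format and compress
# them, mimicking a rotated daily log file.
printf '%s\n' \
  '10.0.0.1 - - [31/Dec/2014:10:00:01 +0000] "GET / HTTP/1.1" 200 512 "-" "curl"' \
  '10.0.0.2 - - [31/Dec/2014:10:00:02 +0000] "GET /a HTTP/1.1" 200 512 "-" "curl"' \
  '10.0.0.1 - - [31/Dec/2014:10:00:03 +0000] "GET /b HTTP/1.1" 404 512 "-" "curl"' \
  | gzip > /tmp/access.log-2014-12-31.gz

# Unique client IPs with request counts, lowest to highest; 10.0.0.1
# should appear last, with a count of 2.
zcat /tmp/access.log-2014-12-31.gz | cut -d" " -f1 | sort | uniq -c | sort -n
```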

4. Traffic Volumes

Show the number of requests per minute over a given day:

$ zcat access.log-2014-12-31.gz | cut -d" " -f4 | cut -d":" -f2,3 | uniq -c

Show the number of requests per second over a given day:

$ zcat access.log-2014-12-31.gz | cut -d" " -f4 | cut -d":" -f2,3,4 | uniq -c

5. Traffic Peaks

Show the peak number of requests per minute for a given day:

$ zcat access.log-2014-12-31.gz | cut -d" " -f4 | cut -d":" -f2,3 | uniq -c | sort -n

Show the peak number of requests per second for a given day:

$ zcat access.log-2014-12-31.gz | cut -d" " -f4 | cut -d":" -f2,3,4 | uniq -c | sort -n
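
To reduce that output to the single busiest minute, append tail -1 and let awk label the fields. The sketch below fabricates its own tiny sample log (the data is entirely made up) so it can be run anywhere:

```shell
# Made-up sample: two requests in minute 10:05, one in 10:06.
printf '%s\n' \
  '10.0.0.1 - - [31/Dec/2014:10:05:01 +0000] "GET / HTTP/1.1" 200 1 "-" "x"' \
  '10.0.0.1 - - [31/Dec/2014:10:05:02 +0000] "GET / HTTP/1.1" 200 1 "-" "x"' \
  '10.0.0.1 - - [31/Dec/2014:10:06:01 +0000] "GET / HTTP/1.1" 200 1 "-" "x"' \
  | gzip > /tmp/access.log-sample.gz

# Same peaks pipeline as above, reduced to the single peak minute.
zcat /tmp/access.log-sample.gz | cut -d" " -f4 | cut -d":" -f2,3 \
  | uniq -c | sort -n | tail -1 \
  | awk '{print "Peak:", $1, "requests during minute", $2}'
# prints: Peak: 2 requests during minute 10:05
```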

Commands Reference

The selection of Unix commands used to conduct the above analysis (along with what they’re doing above) was:

  • zcat – prints the contents of a compressed file.
  • wc – counts the number of lines in a file (or input stream).
  • cut – splits data based on a delimiting character.
  • sort – sorts data, either alphabetically or numerically (via the -n option).
  • uniq – removes duplicates from a list, or shows how much duplication there is.

Injecting Data into Scanned Vintage Photos

In a previous post, entitled Vintage Photo Scanning – A Journey of Discovery, I shared my experiences while digitising my parents’ entire vintage photo collection.

As part of that pet project, I also took the liberty of digitally injecting some of the precious data I had learned about them (i.e. when and where the photos were taken, and who was in them) into the photo files themselves, so that it would be preserved forever along with the photo it pertained to. This article explains how I did that.

Quick Recap

My starting point was a collection of around 750 digital images, arranged into a series of folders and subdirectories, and named in accordance with the following convention:

<Date>~<Title>~<People>~<Location>.jpg

So the task at hand was how to programmatically (and thus, automatically) inject data into each of these files, starting over if required, so that sorting them or importing them into photo management software later becomes much easier.

Ready, Steady, Bash!

In order to process many files at a time, you’re going to need to do a little programming. My favourite language for this sort of stuff is Bash, so expect to see plenty of snippets written in Bash from here on. I am also going to assume that you are running this on a Unix/Linux-based system (sorry, Mr. Gates).

Setup

While each photo will have a different date, title, list of people and location, there are some other pieces of data you may like to store within them (while you’re at it), which you (or your grandchildren) may be glad of later on. To this end, let’s set up some assumptions and common fields.

Default Data

If you only know the year of a photo, you’ll need to make some assumptions about the month and day:

DEFAULT_MONTH="06"
DEFAULT_DAY="01"
DEFAULT_LOCATION="-"

Reusable File Locations

Define the location of the exiftool utility, or any other programs you’re using (if they are located in a non-standard part of your system), along with any other temporary files you’ll be creating:

EXIFTOOL=/usr/local/bin/exiftool
CSVFILE=$(dirname "$0")/photos.csv
GPSFILE=$(dirname "$0")/gps.txt
GPS_COORDS=$(dirname "$0")/gps-coords.txt
PHOTO_FILES=/tmp/photo-files.txt
PHOTO_DESC=/tmp/photo-description.txt

Common Metadata

Most photo files support other types of metadata, including the make/model of the camera used, the applicable copyright statement and the owner of the files. If you wish to use these, you could define their values in some variables that can be used (for each file) later on:

EXIF_MAKE="HP"
EXIF_MODEL="HP Deskjet M4500"
EXIF_COPYRIGHT="Copyright (c) 2014, James Mernin"
EXIF_OWNER="James Mernin"

Data Injection

List, Iterate & Extract

Firstly, compile a list of the files you wish to process:

find "$IMAGE_DIR" -name "*.jpg" > "$PHOTO_FILES"

Now iterate over that list, processing one file at a time:

while read -r line; do
 # See below for what to do...
done < "$PHOTO_FILES"

Now split the input line to extract the 4 fields of data:

BASENAME=`basename "$line" .jpg`
ID_DATE=`echo $BASENAME|cut -d"~" -f1`
ID_TITLE=`echo $BASENAME|cut -d"~" -f2`
ID_PEOPLE=`echo $BASENAME|cut -d"~" -f3`
ID_LOCATION=`echo $BASENAME|cut -d"~" -f4`

Prepare Date & Title

Prepare a suitable value for Year, Month and Day (taking into account that the month and day may be unknown):

DATE_Y=`echo $ID_DATE|cut -d"-" -f1`
DATE_M=`echo $ID_DATE|cut -d"-" -f2`
if [ -z "$DATE_M" ] || [ "$DATE_M" = "$DATE_Y" ]; then DATE_M=$DEFAULT_MONTH; fi
DATE_D=`echo $ID_DATE|cut -d"-" -f3`
if [ -z "$DATE_D" ] || [ "$DATE_D" = "$DATE_Y" ]; then DATE_D=$DEFAULT_DAY; fi
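
A quirk worth noting: when ID_DATE contains only a year, cut returns the whole string for the missing fields (there is no “-” delimiter to split on), which is exactly why the comparison against $DATE_Y triggers the defaults. A quick self-contained illustration:

```shell
# Self-contained illustration of the fallback logic for a year-only
# date. DEFAULT_MONTH and DEFAULT_DAY are as per the Setup section.
DEFAULT_MONTH="06"
DEFAULT_DAY="01"
ID_DATE="1985"    # year-only date, as found on many vintage photos

DATE_Y=`echo $ID_DATE|cut -d"-" -f1`
DATE_M=`echo $ID_DATE|cut -d"-" -f2`    # no "-" present, so this is also "1985"
if [ -z "$DATE_M" ] || [ "$DATE_M" = "$DATE_Y" ]; then DATE_M=$DEFAULT_MONTH; fi
DATE_D=`echo $ID_DATE|cut -d"-" -f3`
if [ -z "$DATE_D" ] || [ "$DATE_D" = "$DATE_Y" ]; then DATE_D=$DEFAULT_DAY; fi

echo "$DATE_Y:$DATE_M:$DATE_D"    # prints 1985:06:01
```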

It’s possible the title of some photos may contain a numbered prefix in order to separate multiple photos taken at the same event (e.g. 01-School Concert, 02-School Concert). This can be handled as follows:

TITLE=$ID_TITLE
TITLE_ORDER=`echo $ID_TITLE|cut -d"-" -f1`
if [ -n "$TITLE_ORDER" ]; then
 if [ $TITLE_ORDER -eq $TITLE_ORDER ] 2>/dev/null; then TITLE=`echo $ID_TITLE|cut -d"-" -f2-`; fi
fi

Location Processing

This is a somewhat complex process but essentially boils down to trying to determine the GPS coordinates for the location of each photo. This is because most photo files only support GPS coordinates inside their metadata. I have used the Google Maps APIs for this step, with an assumption that you get an exact match first time. You can, of course, complete this step as a separate exercise beforehand and store the results of that in a separate file to be used as input here.

In any case, the following snippet will attempt to fetch the GPS coordinates for the location of the given photo. Pardon also the crude use of Python for post-processing of the JSON data returned by the Google Maps APIs.

ENCODED_LOCATION=`python -c "import urllib; print urllib.quote(\"$ID_LOCATION\");"`
GPSDATA=`curl -s "http://maps.google.com/maps/api/geocode/json?address=$ENCODED_LOCATION&sensor=false"`
NUM_RESULTS=`echo $GPSDATA|python -c "import json; import sys; data=json.load(sys.stdin); print len(data['results'])"`
if [ $NUM_RESULTS -eq 0 ]; then
 GPS_LAT=0
 GPS_LNG=0
else 
 GPS_LAT=`echo $GPSDATA|python -c "import json; import sys; data=json.load(sys.stdin); print data['results'][0]['geometry']['location']['lat']"`
 GPS_LNG=`echo $GPSDATA|python -c "import json; import sys; data=json.load(sys.stdin); print data['results'][0]['geometry']['location']['lng']"`
fi

Convert any negative Latitude and Longitude values to North/South or West/East, so that your coordinates end up in the correct hemisphere and on the correct side of Greenwich, London:

if [ "`echo $GPS_LAT|cut -c1`" = "-" ]; then GPS_LAT_REF=South; else GPS_LAT_REF=North; fi
if [ "`echo $GPS_LNG|cut -c1`" = "-" ]; then GPS_LNG_REF=West; else GPS_LNG_REF=East; fi
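
As a quick sanity check of that hemisphere logic, here are some made-up coordinates roughly corresponding to Dublin, which sits north of the equator and west of Greenwich:

```shell
# Dublin-ish coordinates: positive latitude, negative longitude.
GPS_LAT="53.35"
GPS_LNG="-6.26"
if [ "`echo $GPS_LAT|cut -c1`" = "-" ]; then GPS_LAT_REF=South; else GPS_LAT_REF=North; fi
if [ "`echo $GPS_LNG|cut -c1`" = "-" ]; then GPS_LNG_REF=West; else GPS_LNG_REF=East; fi
echo "$GPS_LAT_REF $GPS_LNG_REF"    # prints: North West
```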

Inject Data

You now have all the data you need to prepare the exiftool command:

EXIF_DATE="${DATE_Y}:${DATE_M}:${DATE_D} 12:00:00"
echo "Title: $TITLE" > $PHOTO_DESC
echo "People: $ID_PEOPLE" >> $PHOTO_DESC
echo "Location: $ID_LOCATION" >> $PHOTO_DESC

if [ -n "$ID_LOCATION" ]; then
 $EXIFTOOL -overwrite_original \
  -Make="$EXIF_MAKE" -Model="$EXIF_MODEL" -Credit="$EXIF_CREDIT" -Copyright="$EXIF_COPYRIGHT" -Owner="$EXIF_OWNER" \
  -FileSource="Reflection Print Scanner" -Title="$TITLE" -XMP-iptcExt:PersonInImage="$ID_PEOPLE" "-Description<=$PHOTO_DESC" \
  -AllDates="$EXIF_DATE" -DateTimeOriginal="$EXIF_DATE" -FileModifyDate="$EXIF_DATE" \
  -GPSLatitude=$GPS_LAT -GPSLongitude=$GPS_LNG -GPSLatitudeRef=$GPS_LAT_REF -GPSLongitudeRef=$GPS_LNG_REF \
  "$line"
else
 $EXIFTOOL -overwrite_original \
  -Make="$EXIF_MAKE" -Model="$EXIF_MODEL" -Credit="$EXIF_CREDIT" -Copyright="$EXIF_COPYRIGHT" -Owner="$EXIF_OWNER" \
  -FileSource="Reflection Print Scanner" -Title="$TITLE" -XMP-iptcExt:PersonInImage="$ID_PEOPLE" "-Description<=$PHOTO_DESC" \
  -AllDates="$EXIF_DATE" -DateTimeOriginal="$EXIF_DATE" -FileModifyDate="$EXIF_DATE" \
  "$line"
fi

Cross your fingers and hope for the best!

Apple iPhoto Integration

Personally, I manage my portfolio of personal photographs using Apple iPhoto, so I wanted to import these scanned photos there too. The Data Injection measures I took above simplified this process greatly (especially the Date and Location fields, which iPhoto understands natively).

While I did then go on to use iPhoto’s facial recognition features and manually tag each of the people in the photographs (adding several weeks to my project), the metadata injected into the files helped make this a lot easier (as it was visible in the information panel displayed beside each photo) in iPhoto.

Return on Investment

Timescales

In all, this entire project took almost 9 months to complete (with an investment of 2-3 hours per evening, 1-2 nights per week). The oldest photograph I scanned was from 1927 and the most precious one was one from my early childhood holding a teddy bear that my own children now play with.

Benefits

The total number of photos processed was somewhere in the region of 750. And while that may appear to be a very long time for relatively few photographs, the return on investment is likely to last for many, many multiples of that time.

Upon Reflection

I’ve also been asked a few times since then, “Would I do it again?”, to which the answer is an emphatic “Yes!” as the rewards will last far longer than I could ever have spent doing the work.

However, when asked, “Could I do it again for someone else?”, that has to be a “No”. And not because I would not have the time or the energy, but simply because I would not have the same level of emotional attachment to the subject matter (either the people or the occasions in the photos) and I believe that this would ultimately be reflected in the overall outcome. So hopefully these notes will help others to take on the same journey with their own families.

Vintage Photo Scanning – A Journey of Discovery

I recently undertook a project to scan and digitally convert a collection of vintage photographs belonging to my parents and wanted to share some of my findings, both from a technical and an emotional perspective. So if, like me, you discover a treasure trove of old photographs buried in a drawer somewhere in your parents’ house, don’t put them back, but do keep reading!

Parental Archives

Like many of my generation and the generations before me, I grew up in an almost exclusively non-digital era with an unwilling reliance on a minimal selection of analog TV and radio channels, cassette tapes and film cameras.

And while the invention of the Internet, coupled with services like YouTube, iTunes and Spotify, has meant that many of the TV, radio and musical memories of my youth can be resurrected in the blink of an eye, alas the same does not hold true for photographic memories. These are way harder to resurrect (impossible in some cases) as they cannot be reproduced or digitally remastered without the original content itself, which in most cases is in the possession of a single entity – your parents!

And this also means that you will need your parents to have done two things:

  1. Taken the time to capture photographs of your childhood in the first place;
  2. Ensured that these (and others of their own) were preserved intact over the intervening years.

And indeed the exact same applies to your parents and to the memories of the life they had before your arrival.

Equipment

In terms of the equipment used to conduct this year-long exercise, here is what I needed:

  • Digital Photo Scanner: You don’t need to pay a lot of money for this (mine was an HP Deskjet F4580 that I bought for just €50); it just needs to support both Greyscale and Colour scanning (which most do) at a decent resolution (300dpi).
  • Computer: Again, a relatively inexpensive laptop/desktop will be fine, although the scanning software can demand a little extra RAM at times (when you’re scanning a lot of photos in a single session). Mine (actually, my wife’s) was a Dell Latitude running Microsoft Windows.
  • Scanning Software: This may depend on your scanning device. I used the software that came with my scanner.
  • Graphics Software: As you are highly likely to want to crop some of the scanned images afterwards, you may need some additional graphics software for this. My personal open source favourite is GIMP, but there are lots to choose from and your scanning software may even do this for you anyway.
  • Post-its: These could prove really handy for cataloging and sorting the photographs so they can be reinserted into their original albums afterwards.
  • Exiftool: This is a Unix command-line utility for injecting metadata into digital image files, such as GPS location, date & time and the names of those in the photograph.

Copious amounts of patience, coffee and beer are also strongly recommended.

Planning & Sorting

Strangely, one of the first challenges you’ll face is exactly how to remove the photos from their albums without damaging them and in such a way that you’ll be able to reinsert them in roughly the same order afterwards.

And don’t forget that, while you may feel that your project is complete once you’ve scanned the photos and have them on your laptop, your parents may want them restored to their original setting, and you need to respect that.

So in my view, here is the best way to approach this:

  1. Devise an album/page numbering scheme and attach some post-its (or equivalent) to the pages in the various albums.
  2. Remove all of the Black & White photos first, because it’ll be more efficient to scan these together using the same scanner resolution/quality settings.
  3. As you remove each photo, write the album and page number on the rear, preferably using a pencil (which can easily be removed afterwards if required).
  4. Once removed, arrange the photographs into bundles of roughly the same size. This will also make for more efficient scanning (and cropping) of images later on.

Culling

Some of the photos may also be too faded, blurred, cropped or too small to be worth scanning so you may wish to omit those from the process early on. Similarly, keeping multiple (but very similar) photos of the same occasion (with the same people in them) can sometimes dilute the power of just one photo of that occasion.

This is just something you’ll need to make a personal judgement call on but you could use the following logic:

  1. Is there another, similar photo of the same occasion with the same people in it?
  2. Although it’s blurred, or of poor quality, is this the only photo with a particular person or group in it?
  3. Is there a favourite piece of music that this photo could go with, if you were to include it in a musical slide show or movie?

In my case, the percentage success rate here was actually only around 50% (i.e. I ended up skipping roughly half the entire collection) but given the nature of photographic technology at the time, this is not entirely surprising.

Testing, Trial and Error

The first thing you need to do once you think you are ready to start scanning is to stop and do some testing (with just a couple of photos) to be sure you are going to be happy with the results. Here, you are looking to settle on your optimum scanning technique and preferred resolution, file format, compression ratio, colour balance etc.

The file format is important too, because not all formats are supported by the popular exiftool utility; if you plan to inject metadata into the scanned images later on, you need to test this now so you do not use a format you will later regret. For example, I had scanned several hundred photos in PNG format before I realised that I could not inject metadata into them using the exiftool utility. I found that the JPEG format worked best for me.

So trust me, testing beforehand will save you a huge amount of time (and stress) later on, and you will thank me for warning you now.

Scanning & Cropping

In terms of the scanning effort itself, I found that scanning multiple (similarly sized) images at the same time was way more efficient. I also found it more efficient to crop the images from within the scanning software (that came with my scanner) before saving them to disk as separate images.

You might be forgiven for thinking this is the longest phase of the journey, but for me it wasn’t – the dating of the photos, naming of the image files and insertion of meta-data took a lot longer.

Naming Convention

In terms of how you name the image files produced by the scanning exercise, this is really a matter of personal preference. You could just stick with the arbitrary names assigned by the scanning software, but based on my experience you are far better off to invest a little extra time in devising a naming scheme for the files so that you can search for (and/or rearrange) them more easily later on.

What worked for me here was to construct the name of each file using 4 basic pieces of data, separated by a tilde character:

<Date>~<Title>~<People>~<Location>.jpg

where

  • <Date> follows the standard YYYY-MM-DD date format. This means that the files will naturally sort themselves chronologically on most standard file browsing applications.
  • <Title> is some sort of snappy, 4-5 word title for the photo or event, possibly prefixed by a number if there are multiple photos taken on the same day at the same event.
  • <People> is a comma-separated list of the names of the people in the photo (as they would be commonly known to your family).
  • <Location> is a succinct description of where the photograph was originally taken (e.g. something that would match a search in Google Maps).

The use of a tilde character as the field separator (as opposed to a comma or hyphen, for example) is also optional, of course, but works well for me in many situations as it is rarely used within any of the other field/data types, thus allowing you to have commas and hyphens in those other fields without confusion.

Filing

Personally, I would not advise storing several hundred photos in a single directory as I think it would make them harder to manage, find and sort. I therefore decided to store batches of related files in a series of hierarchical subdirectories, some of which themselves included dates in their name. This is again a personal preference thing but it may work in your favour if you are planning to share a copy of the finished photo collection (on a USB stick or CD or via Dropbox) with friends and family.

Dating & Facial Recognition

This was by far the most enjoyable part of the journey. Not only did I learn so much about my wider family (and about myself) but the time I shared with my parents while undertaking this phase was hugely rewarding, both for them and for me. More mature readers will already know this, of course.

The facial recognition itself is relatively straightforward, in that your parents will either recognise the people or not, and it really doesn’t have to be any more complicated than that.

However, putting a date on an old photograph can be a lot more difficult, especially when the folks are that little bit older. There are some tricks you can use to help with the accuracy here too, which essentially boil down to asking one or more of the following questions:

  1. Were you married when this photo was taken?
  2. Was it before or after an important event in your life (e.g. Holy Communion, Confirmation, 21st Birthday, Wedding)?
  3. Was I (or any of my siblings) born when it was taken?
  4. Were your parents still alive when it was taken?
  5. Where did you/we live when that was taken?

By trying to evaluate the date of the photo in the context of seemingly unrelated milestones in their lives, you may find yourselves able to home in on the real date with a reasonable degree of accuracy.

Are We There Yet?

At this point, you should have all of the photos scanned, cropped and named according to when they were taken, what the event was, who was in the photo and where it was taken. And for many people that would be more than enough.

However, the engineer in me was of course not happy to leave it at that. So watch out for my next blog post on how to inject metadata into your scanned images and use that to aid the importing of the photos into popular photo management software.


Debugging network connectivity issues using telnet

Introduction

Ah, the joys and simplicity of the humble telnet utility when it comes to debugging network connectivity issues. Ever-present in any Unix-based operating system worth its salt, but sorely missed in Windows systems post-XP (although it can be manually installed).

It has long since been superseded by its more secure cousin, SSH, but did you know that it can still be used to great effect to help determine the most likely reason you are unable to connect to another server in your network, or on the wider Internet?

What does success look like?

What many people don’t realise is that, by specifying an additional parameter, you can instruct the program to try to establish a connection on a specific port number (rather than the default port of 23). Take the following example, which shows a successful connection to a fairly standard web server:

$ telnet myhost.example.com 80
Trying 192.168.1.10...
Connected to myhost.example.com.
Escape character is '^]'.

The key thing to note here is the presence of the message, “Connected to myhost.example.com” and the fact that the session remains connected (i.e. you don’t get bumped back to the command prompt immediately). This is telling you that you have successfully established a valid connection to the required port on the server you’re interested in, which also (and most importantly) confirms that:

  1. You are not being blocked by a firewall.
  2. The service you’re trying to connect to is alive and well, listening on port 80.
  3. There are no software-based access rules preventing you from accessing the service on that port.
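
Incidentally, on systems where telnet is not installed, bash can attempt the same TCP connection itself via its /dev/tcp pseudo-device, which makes the check easy to script. The host and port below are placeholders; substitute your own:

```shell
# Scriptable equivalent of the telnet connectivity check, using
# bash's /dev/tcp pseudo-device plus a timeout.
# HOST and PORT are placeholders, not real infrastructure.
HOST=myhost.example.com
PORT=80
if timeout 3 bash -c "cat < /dev/null > /dev/tcp/$HOST/$PORT" 2>/dev/null; then
  echo "Connection to $HOST:$PORT succeeded"
else
  echo "Connection to $HOST:$PORT failed"
fi
```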

Help, it didn’t work!

In my experience, the above list represents the three most common reasons why things may not be working the way you hoped or expected. The neat thing about the telnet command is that it usually hints at this in the response it gives you, consistently so across the wide range of operating systems it runs on. The following table summarises the meaning of each such response from telnet:

Response                              Meaning
“Trying…” (hangs)                     Firewall issue
“Unable to connect to remote host”    Service not running
“Connection closed by foreign host”   Software-based access rule

Keep reading for a more detailed explanation of each scenario:

1. You are blocked by a firewall

If the response to your telnet command is simply a “Trying x.x.x.x…” message (which eventually times out) then there is most likely a firewall rule (somewhere along the route to the remote server) blocking you:

$ telnet myhost.example.com 80
Trying 192.168.1.10...

Resolving this normally requires the services of a network engineer to grant the correct access through the firewall that is blocking you.

2. Nothing is running on the specified port

If the response to your telnet command ends swiftly with “Unable to connect to remote host”, then you can be confident that you are not being blocked by a firewall, but the server you’ve reached does not appear to have a process listening on the port you specified:

$ telnet myhost.example.com 80
Trying 192.168.1.10...
telnet: connect to address 192.168.1.10: Connection refused
telnet: Unable to connect to remote host

Resolving this normally requires the services of the server administrator (typically having them start the offending service for you).

3. You’re not allowed to talk to the service anyway

If your telnet command appears to connect successfully but immediately returns to the command prompt with a “Connection closed by foreign host” message, then you can be confident that you are not being blocked by a firewall and that there is a valid service listening on the specified port, but some form of software-based access rule is preventing you from communicating with that service:

$ telnet myhost.example.com 80
Trying 192.168.1.10...
Connected to myhost.example.com.
Escape character is '^]'.
Connection closed by foreign host.
$

Resolving this normally requires the services of the support team that manages the service you are trying to connect to (and having them add a rule allowing traffic from your network).
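The three scenarios above can be approximated in a small helper function. This is only a sketch: it uses Bash’s /dev/tcp device rather than telnet itself, so it can distinguish the firewall and service-not-running cases but not the access-rule one (which needs a full session):

```shell
# diagnose_port: best-guess reason a host:port is (or is not) reachable.
# A sketch using Bash's /dev/tcp device instead of telnet, so the
# "Connection closed by foreign host" case cannot be detected here.
diagnose_port() {
  local host="$1" port="$2" err
  err=$(timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>&1)
  if [ $? -eq 0 ]; then
    echo "connected: something is listening on port ${port}"
  elif echo "$err" | grep -qi "refused"; then
    echo "service not running on port ${port}"
  else
    echo "likely blocked by a firewall (connection attempt timed out)"
  fi
}

# Usage (myhost.example.com is a placeholder):
# diagnose_port myhost.example.com 80
```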

Summary

There are of course many other connectivity-debugging tools (e.g. netcat), and indeed telnet is only suited to TCP-based protocols. However, its presence on almost every operating system out there, along with the fact that its syntax and responses have not changed for so long, makes it well suited for use in multi-platform environments and multi-layered networks.

Git basics for Subversion command-line users

For anyone who has used the Subversion version control system and wants to consider switching to Git, here is a simple comparison of the more common tasks you’re likely to need to get started.

Please excuse the somewhat simplistic narrative in some items. This article is intended to get users accustomed to the basic Git commands without being too specific about the terminology of either world. Some items show more than one Git command for the equivalent Subversion command (not uncommon in Git), and there are of course shortcuts for several of these tasks (for simplicity, I’ve not covered them here).

 

1. Check out a repository
i.e. Fetch a copy of the files in a repository that I’ve never fetched before…

$ svn checkout <url>
$ git clone <url>

 

2. What files have I changed?
i.e. Which of my locally checked out files have changed since I last checked them out?

$ svn status
$ git status

 

3. What’s changed on the server?
i.e. Has someone else committed changes that I don’t yet have locally?

$ svn status -u
$ git fetch origin
$ git diff <branch> origin/<branch>
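Item 3 can be tried end-to-end with two throwaway local repositories. Everything here (paths, user details, commit messages) is purely illustrative:

```shell
# Simulate "what's changed on the server?" with two temporary repositories.
tmp=$(mktemp -d)

# A "server-side" repository containing one commit.
git init -q "$tmp/origin"
git -C "$tmp/origin" -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "first commit"

# Our local clone of it.
git clone -q "$tmp/origin" "$tmp/work"

# Someone else commits to the server after our clone.
git -C "$tmp/origin" -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "second commit"

# Back in our clone: fetch, then list the commits we don't yet have.
git -C "$tmp/work" fetch -q origin
incoming=$(git -C "$tmp/work" log --oneline HEAD..@{u} | wc -l)
echo "Incoming commits: $incoming"
```

The `HEAD..@{u}` range means “commits on my upstream branch that I don’t have locally”, which is a handy shorthand when the branch name doesn’t matter.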

 

4. Show my changes
i.e. What are the changes I’ve made to my files?

$ svn diff
$ git diff

 

5. Fetch latest files
i.e. Fetch the very latest copy of my repository from the server

$ svn update
$ git pull origin <branch>

 

6. Commit my changes
i.e. Push all my changes back to the server

$ svn commit -m "A comment describing your changes"
$ git add <files>
$ git commit -m "A comment describing your changes"
$ git push origin <branch>

 

7. Change logs
i.e. View a list of my most recent changes

$ svn log
$ git log

 

Some other useful Git commands to note
Show all available branches

$ git branch -a

Show differences made in last commits

$ git log -p -<n>

Prune any local copies of branches that have been removed from master copy of repo

$ git remote prune origin

Free Mobile apps for LEGO fans

In my endless pursuit of different ways to enjoy LEGO products, I found the following free mobile apps in the Android app store (a.k.a. Google Play).

  • LEGO Creationary – Excellent series of build and guess games, created by the LEGO Group themselves. A bit heavyweight in terms of performance though.
  • myBrickset: LEGO Set Guide – Allows you to search for LEGO sets by number and create a catalog of the ones you own (or want to own).
  • LEGO Instructions – Lots of simple LEGO sets for younger children, complete with interactive step-by-step instructions.
  • LEGO Scans – Quickly browse over 4,000 LEGO sets by name or theme (no instructions included, though).
  • LEGO Minifig Collector – Search and browse through the full series of Minifigure collections. You can also tell the app which ones you have and it’ll then tell you which ones you’re missing (a potentially expensive feature).
  • Lego Mini Figure Identifier – Very clever app for helping you identify which minifigure is inside the package before you buy it. However, LEGO have made it a lot harder to use this app from Minifigure Series 5 onwards, so bring a microscope with you if you intend to use it.

There are loads more, some of which are just mobile games inspired by the LEGO brand, while others are aimed at much younger children or at specific LEGO sets.

Simple JSON parsing from Bash using Python

Have a Bash command you like to use a lot but need to parse some JSON data from another script? Here’s a quick inline Python command you might find helpful:

COUNT=`jscript.sh|python -c "import json, sys; data=json.load(sys.stdin); print(data['count'])"`

Just substitute the name of your other shell script (jscript.sh) and the JSON key (count) as appropriate and you’re all set!
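If you want to see the idea working without a real jscript.sh to hand, here is a self-contained variant you can paste straight into a terminal. The inline echo standing in for jscript.sh, and the use of python3, are my substitutions:

```shell
# A runnable sketch of the same idea. `echo` stands in for the jscript.sh
# script from the article, and "count" is the JSON key we want to extract.
COUNT=$(echo '{"count": 42, "items": []}' | \
  python3 -c "import json, sys; print(json.load(sys.stdin)['count'])")
echo "$COUNT"
```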