Compare a photo of yourself all cleaned up for a night out with another one first thing the next morning, and you’ll begin to appreciate the problems that people working on face recognition software encounter.
While some unfeasibly lucky people look great from all angles, most of us have to contend with a lottery of lighting conditions, odd angles, stupid expressions, stupider poses and the ravages of age. Faced with this unavoidable variability, it’s no wonder that automatic software flounders when tasked with comparing images to stock photos, like those in passports.
Now, Rob Jenkins and Mike Burton from the University of Glasgow have beaten the problem by creating a face recognition system that, so far, has proved to be 100% accurate. This level of accuracy is unheard of in the technological world. It is matched only by that most sophisticated of computers – the human brain – and indeed, it’s the brain that provided Jenkins and Burton with the inspiration for their method.
Two years ago, the duo proposed that we learn to recognise faces by averaging the features of different images that we know belong to the same face. The more images we see, the better our averaged template image becomes. This explains why we are poor at recognising unfamiliar faces but astonishingly good at identifying familiar ones across different moods, environments and stages of drunkenness.
As Jenkins and Burton found out, if it’s good enough for the brain, it’s good enough for a PC. They worked with a pre-existing face recognition tool called FaceVACS, used in airport scanners belonging to the Australian Customs Service, among other places.
They fed FaceVACS with over 31,000 shots of famous faces, taken from a website called MyHeritage.com and representing photos taken by different photographers and cameras over several decades. The website uses this library to tell users which celebrities they look like; Jenkins and Burton used it to test their image averaging theory.
First, they gave FaceVACS a collection of 500 other celebrity photos taken from the Internet and asked it to match these to the MyHeritage library. On average, it scored a measly 54%. Even for celebrities represented by the most images, the hit rate was only 89%.
These are not numbers that would put you at ease if national security depended on them. Scale the experiment up to the numbers of people who pass through transport hubs, and the misses and false alarms would run into the thousands.
Next, the duo created new images by averaging the features of all the photos of any individual celebrity in their test database. For example, the 20 photos of Bill Clinton were combined to produce a shot that represented the essence of the former President. These combined photos highlight the important features that are consistent across the individual shots and play down traits like lighting and expression that act like red herrings.
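The core of the averaging step can be sketched in a few lines. A caveat: Jenkins and Burton’s actual method warps each face to a common shape before averaging, whereas this minimal sketch simply assumes the photos are already aligned, equal-sized arrays; the function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def average_face(images):
    """Average a stack of pre-aligned face images.

    `images` is a list of equal-shaped arrays (one per photo of the
    same person). Averaging reinforces features that are consistent
    across shots and washes out one-off quirks of lighting and
    expression. NOTE: the published method aligns faces to a common
    shape first; here we assume that alignment has already been done.
    """
    stack = np.stack([img.astype(np.float64) for img in images])
    return stack.mean(axis=0)

# Toy demo: three tiny 2x2 "photos" with different overall brightness.
imgs = [np.full((2, 2), v) for v in (0.0, 60.0, 120.0)]
avg = average_face(imgs)
# The brightness differences cancel out: every pixel averages to 60.0.
```

In practice each celebrity’s 20-odd photos would be fed through a function like this to produce the single “essence” image handed to FaceVACS.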
They challenged FaceVACS with these averaged faces and amazingly, this simple process raised its success rate from a paltry 54% to a perfect 100%.
Like all good scientists, Jenkins and Burton didn’t let things lie; they challenged their promising result by hobbling their new system as best they could. Perhaps averaging only worked because it leaned on a few key, easily recognisable photos; to rule this out, they re-created the averaged images without the 54% of the sample that FaceVACS had initially recognised.
Without those shots, the programme should have scored 0%. As it was, averaging lifted the score to 80%, and that massive improvement was down to averaging alone. A larger sample and a bigger test are the next obvious steps.
Reference: Jenkins, R., Burton, A.M. (2008). 100% Accuracy in Automatic Face Recognition. Science, 319(5862), 435-435. DOI: 10.1126/science.1149656