The potential and pitfalls of biometric technology


Your face could soon be the easiest way for companies to identify you. Facial recognition technology may not quite be ready for mass market adoption, but it’s becoming increasingly prevalent as more and more companies experiment with it.

For example, Live Nation Entertainment, the company behind the Ticketmaster brand, is developing a facial recognition system in an attempt to replace gig tickets. The company has teamed up with Texas-based Blink Identity to develop biometric tech which, it claims, can identify someone within half a second as they walk past, by taking an image of their face and comparing it to a database.

Financial services firms are investing in the use of biometrics, too. In 2017, HSBC began trialling a voice ID authentication service, in an attempt to replace traditional passwords. The aim was to prevent fraud, but an investigation by the BBC revealed that the system could be tricked by twins, with one imitating the other’s voice.

More recently, the bank rolled out facial-recognition software to its mobile banking app, allowing customers in 24 countries – including the US, UK and China – to log in using Face ID, which works by recognising facial features and analysing over 30,000 facial reference points to create a ‘depth map’ of the face.

Startups in the UK are also working to spread the technology’s adoption. London startup iProov uses biometric authentication for security, including working with governments around the world to provide passport and ID verification. AimBrain also provides cloud-based authentication for voice, facial and behavioural biometrics for security purposes.

Onfido, the brainchild of Oxford graduates, uses machine learning to validate identity documents with facial biometrics, in a bid to reduce identity fraud. Then there is Yoti, a London-based startup, which offers an app for both business and personal use that can be deployed to verify identities securely. For instance, it allows banks to gather the data of people and businesses.

Identifying criminals

The tech is also being leveraged to identify potential criminals. Back in 1998, London became one of the first cities in the world to use CCTV with facial recognition in public areas. Fast forward 20 years and the software is more sophisticated as it spreads further across the UK.

South Wales Police have been trialling automated facial recognition technology to track down criminals, and have used it at 10 real-world events. The cameras scan faces in a crowd and compare them against a database of custody images to try to make a match and, eventually, an arrest. However, the results have shown just how far the technology has to go before it can be fully implemented without causing potentially devastating mistakes.

It was revealed that at the June 2017 Champions League football final in Cardiff, more than 2,000 people were wrongly identified as potential criminals. The South Wales Police force have since defended their use of it by stating that “no facial recognition system is 100% accurate”, but that the technology had led to more than 450 arrests since its introduction.
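To see why a high false-positive count matters, it helps to look at precision: the fraction of alerts that turn out to be genuine matches. The numbers below are purely illustrative, not the force's actual statistics, but they show how a system can generate thousands of false alerts even while producing real arrests.

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of all alerts that were genuine matches."""
    return true_positives / (true_positives + false_positives)

# Illustrative figures only: 100 genuine matches against 2,000 false alerts
# means fewer than 1 in 20 alerts points at the right person.
print(round(precision(100, 2000), 3))
```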

The South Wales Police force said it had considered privacy issues “from the outset”, and had built in checks to ensure its approach was justified and proportionate.

However, this has been met with criticism, with the civil liberties campaign group Big Brother Watch posting on Twitter: “Not only is real-time facial recognition a threat to civil liberties, it is a dangerously inaccurate policing tool.”

Still, there is evidence that the tech works, as six correct matches were made at a Liam Gallagher concert in Cardiff in December.

Simon King, an early-stage investor at Octopus Ventures, weighed in: “The technology has already developed as much as it can, the industry will now grow with new applications (eg ID verification) and through peripheral technologies such as augmented reality.”

“Facial recognition is an area where AI and machine learning has already been applied extensively over the last few years. From my perspective, the next exciting area for growth in facial recognition lies in recognising emotion,” he said.

Privacy: a key concern

All things considered, it seems the widespread implementation of facial recognition technology hinges on a wider debate about the balance between citizens’ privacy rights and security.

David Emm, principal security researcher at Kaspersky Lab, spoke to UKTN about the major downside of the technology:

“Any security breach resulting in leakage of biometric information is likely to have much more serious consequences than the theft of a password: after all, we can change a weak password, but we can’t change a compromised fingerprint, iris scan or other biometric.”

He also noted the importance of recognising biometrics’ inherent reliance on data: “In an open society, it’s important that citizens are informed about the way personal data is used and held, and under what circumstances it might be passed on to other agencies – and this is no less true of biometrics.”

Steven Murdoch, security architect at the VASCO Data Security Innovation Centre in Cambridge, told us that current security concerns can be alleviated by understanding the tech: “The technology works by storing a template of the user’s face during enrolment, and then comparing the face presented to this template during authentication.

“Many people feel uneasy that these templates might fall into the wrong hands, however in reality, it’s unlikely that the template will be recognisable to someone just by looking at it – converting it back to a photograph is not simple. Biometric templates are also commonly stored securely – for example the templates for Apple’s Face ID are stored in the iPhone’s secure enclave. Also, our fingerprints are left on anything we touch and our photos are scattered all over the internet, so these concerns about whether templates might leak out should be put into proportion compared to other risks.”
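The enrol-then-compare flow Murdoch describes can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation: it assumes a face has already been converted into a numeric embedding vector (by some upstream model), and uses cosine similarity with a hypothetical threshold to decide whether the presented face matches the stored template.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def enrol(face_embedding: np.ndarray) -> np.ndarray:
    """Store the user's template at enrolment (here, simply the embedding)."""
    return face_embedding.copy()

def authenticate(template: np.ndarray, presented: np.ndarray,
                 threshold: float = 0.8) -> bool:
    """Accept only if the presented face is close enough to the template."""
    return cosine_similarity(template, presented) >= threshold
```

Note that the stored template is just a vector of numbers, which is why, as Murdoch says, converting it back into a recognisable photograph is not straightforward.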

There are, however, some legitimate issues to consider, Murdoch highlighted: “Face ID is susceptible to impersonation by close relatives, like siblings or even children, and particularly between identical twins.

“There is also a risk that someone could trick the device into unlocking when presented with a photograph instead of a real face. Modern face recognition systems use techniques like checking for movement and looking at the 3D shape, as well as a normal photo, to make such tricks more difficult but still not impossible. The highest security option is likely to be a combination of multiple, layered authentication methods,” he added.
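The "combination of multiple, layered authentication methods" Murdoch recommends amounts to requiring several independent checks to pass before unlocking. A minimal sketch, with each layer represented as a hypothetical check function:

```python
from typing import Callable, List

def layered_authenticate(checks: List[Callable[[], bool]]) -> bool:
    """Grant access only if every layer passes, e.g. a liveness check,
    a face match, and then a PIN or passcode as a final factor."""
    return all(check() for check in checks)
```

The point of layering is that a photograph might defeat the face match alone, but it cannot also pass a liveness check and supply the correct PIN.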

Lack of awareness

Ojas Rege, chief strategy officer at MobileIron, said privacy and security protocols shouldn’t be underestimated: “The absolutely essential security and privacy question is ‘how does the phone store and secure that data, and can anyone else get access to it?’ Phone and operating system manufacturers have done a good job of answering this question, but unfortunately it is a question that users rarely ask and businesses often overlook.

“With the recent attention on Cambridge Analytica and Facebook, I expect the attention to privacy across both the user community and the governmental level will continue to increase. Don’t ignore this issue – be an informed consumer and make sure you feel comfortable about how the biometric data is stored and used,” he recommended.

Despite the many arguments for and against, the fact remains that the technology is still nascent. Although new applications are quickly surfacing, it will take time for them to mature, and for citizens to be fully educated on the tech’s potential and its ramifications.