Facial recognition software has become so affordable that it is entering mainstream technology, as demonstrated by the launch of the iPhone X last year. This shift means facial recognition is set to become an everyday part of life in the coming years, which will naturally bring seismic changes for both business and society at large. While we’re still in the early stages, organisations and governments are eagerly testing its potential, with interesting results, writes Professor Steven Van Belleghem.
Making things safer
One use of facial recognition software is improving security and making the world a safer place. The USA’s Transportation Security Administration has used facial recognition software to screen foreign travellers since 2015, and will soon roll this out to domestic travellers to improve homeland security.
The retail sector is similarly using facial scanning to fight back against shoplifters. A programme called FaceFirst can scan people’s faces from 50 to 100 feet away to check whether they are known shoplifters, and has been shown to reduce shoplifting by 30 per cent. In a more extreme use of the technology, Taylor Swift used facial recognition to monitor a concert in California last year to see if any of her stalkers were in the crowd.
Fast and frictionless
Companies are also deploying facial recognition software to improve processes for customers. Many mobile banking apps such as Belgium’s KBC Bank, Singapore’s OCBC Bank and Japan’s Seven Bank use facial recognition as a fast and frictionless form of identification. Lufthansa now has automated kiosks with facial recognition technology at Los Angeles International Airport that recognise passengers and allow them to board the plane within a few seconds – making boarding efficient, secure and convenient.
Taking this to the next stage, 7-Eleven is opening a trial shop in Tokyo where your face is all that’s required for payment. This shows that we’re moving from a one-button interface to a situation where our face is the new interface – and the possibilities for this are endless.
The darker side
But while the development of this technology is exciting, there is also a real danger that it could be manipulated and used against people. A logical next step is for companies to use facial recognition for context- and emotion-driven hyper-personalisation. This would mean that the communication we receive, the products we’re offered and even the prices we’re shown could be based on our current mood and context. If enriched with data on our past behaviour, this could become even more personal.
It’s this use, however, where manipulation could easily creep in if we’re not careful. A multitude of studies have shown that our face can reveal a surprising amount of information about us, from our health and mood to our sexual preference and even our social status — information that could easily be used by companies to tailor the products and services we receive.
While on the one hand this could be beneficial, such as ensuring people with health issues receive the right information and support, it could also be used in a darker, more sinister way. For example, insurance companies could refuse policies to individuals whose face indicates a higher likelihood of developing certain diseases, or charge wealthier-looking customers higher premiums.
Companies could also harvest data without consent. Camera ‘eyes’ on computers, iPads, mobile phones and smart TVs could be used by companies to monitor physical data such as your heart rate and emotional state, and in turn adapt the products and prices offered to you. Pictures or videos shared on Facebook, Instagram and other apps could be used without our knowledge or consent to develop facial recognition software, such as showing how people age. Facebook’s recent ‘10 year photo challenge’ was questioned by some users as a way for the tech giant to do just this.
Getting the balance right
So how can we ensure that facial recognition software is used for good and not ill? One reassuring piece of news is that companies are taking pre-emptive steps, presumably to ward off any government intervention along the lines of last year’s GDPR legislation. Microsoft, for example, has a dedicated Principal Ethics Strategist within its AI Perceptions and Mixed Reality Group. It’s also likely that governments will insist on algorithmic transparency from companies very soon, in the same way they insist on financial transparency, meaning that businesses may get a visit from an algorithmic auditor in the not-too-distant future.
The possibilities to improve our lives with facial recognition software are endless, but it’s also vital that we ensure the scales are tipped towards efficiency and convenience instead of the darker side of manipulation.
Prof. Steven Van Belleghem is an expert in customer focus in the digital world. He is an award-winning author, and his new book Customers The Day After Tomorrow is out now. Follow him on Twitter @StevenVBe, subscribe to his videos at www.youtube.com/stevenvanbelleghem or visit www.stevenvanbelleghem.com