Could Facial Recognition Privacy Issues Constrain Augmented Reality?
22-07-2019 | By Christian Cawley
Facial recognition is a key element of augmented reality, processed using various artificial intelligence (AI) and machine learning systems. But with so much photographic data being collected, there is a risk to privacy. The FBI, Microsoft, and others are struggling with facial recognition privacy and consent issues.
Can augmented reality meet its potential, or will misunderstood laws and overzealous privacy advocates constrain it?
Privacy Concerns and Augmented Reality
Augmented reality (AR) should be awesome. It has already been used to enhance GPS navigation, direct advertising to individuals, boost sales (e.g. virtual try-ons in fashion retail), improve maintenance and repair, support medical staff in the operating theatre and in the field, and even revive the Pokémon video game franchise.
Yet the artificial intelligence and machine learning elements of AR are at serious risk of being hamstrung by privacy concerns, laws, and inexplicably stupid application of said laws.
FBI Testing Fails to Comply With Auditor Recommendations
Take the FBI, for example, which in 2016 was found by the US Government Accountability Office (GAO) to have poor accuracy metrics for its facial recognition technology. The GAO made six recommendations; just one was adopted in full.
The FBI can search 641 million images through its Facial Analysis, Comparison and Evaluation (FACE) system. By maintaining inaccurate metrics, the FBI puts at risk not only its own reputation but also that of facial recognition technology as a whole. At a hearing held by the House Committee on Oversight and Reform, critics made their disillusionment with the technology clear. Among them was Clare Garvie, a senior associate at the Center on Privacy & Technology at Georgetown Law, who concluded that "federal, state, and local governments should place a moratorium on police use of face recognition", placing the onus on communities to decide whether they want the technology.
Meanwhile, Microsoft has bizarrely stumbled over the issue of consent in its own facial recognition systems. An update to the Windows 10 Photos app requires the user to obtain "all appropriate consents from the people in your photos and videos" in order to use the facial recognition software in the app.
Facial recognition in consumer software is common, with Google Photos, Adobe Photoshop Elements, and others relying on it. However, Microsoft is the first to take a step that could hit the wider industry.
This leaves the entire facial recognition and AR development industry with a problem. Just how can facial data be used? Is consent required? And if the FBI isn't meeting its own requirements, why should anyone else?
Could Privacy Laws Disrupt AR Use?
In almost all uses, AR records information not just about the user but about their surroundings, which invariably includes other people. Yet while legislation seems concerned with protecting individuals, the behaviour of people observed around AR systems seems to suggest that maintaining social norms matters more to them.
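The scale of the issue is easy to underestimate. As a rough illustration (a minimal sketch using OpenCV's stock Haar-cascade face detector; the filename is a placeholder, and real AR pipelines rely on far more sophisticated deep-learning models), a handful of lines of Python is enough to extract every bystander's face from a single captured frame:

```python
import cv2

# Load the Haar-cascade face detector bundled with OpenCV
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# "street_scene.jpg" is a placeholder for a single frame captured by an AR device
frame = cv2.imread("street_scene.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Every face in shot -- user and bystanders alike -- becomes a data point
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Crop and save each detected face; no consent is sought at any point
for i, (x, y, w, h) in enumerate(faces):
    cv2.imwrite(f"face_{i}.jpg", frame[y:y + h, x:x + w])

print(f"Extracted {len(faces)} face(s) from one frame")
```

Each of those cropped faces is exactly the kind of photographic data the consent debate turns on, gathered without any of the people in frame being asked.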
Research into this phenomenon has settled on the term "privacy paradox": "the observation that many people are aware of privacy risks and claim to care about it, but do not behave in that way."
The promise of augmented reality to enhance everything from shopping to storytelling is considerable. As the privacy paradox demonstrates, it is legislation, rather than individual consent issues, that will need to be overcome for the technology to reach its potential.