How Clearview AI is helping the Ukrainian effort
22-03-2022 | By Robin Mitchell
At Electropages, we have argued in two separate articles that Clearview AI engaged in immoral practices by supplying law enforcement with a searchable database of billions of faces. But that same system is now being used to identify Russian agents active in Ukraine, where it could provide a significant edge against terror and insurgency.
Why Clearview AI has been widely seen as immoral
Clearview AI is a software company specialising in AI facial recognition trained on billions of images scraped from the internet, and there is no doubt that the system is powerful. While facial recognition is nothing new, Clearview AI has come under fire for building a database that links scraped faces to profiles and personal information. Access to this database has been sold to law enforcement agencies, who can use the system to track individuals down and potentially link people to crime scenes.
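For readers unfamiliar with how such systems work at a high level, the sketch below shows the basic pattern: each photo is reduced to a numerical embedding, and identification is a nearest-neighbour search over those embeddings. Everything here is an illustrative stand-in; Clearview AI's actual models, data, and infrastructure are proprietary and not public.

```python
# Minimal sketch of how a face-matching database works in principle.
# The embedding function and data are hypothetical stand-ins, NOT
# Clearview AI's actual pipeline.
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Map a face image to a fixed-length feature vector (embedding).
    A real system would use a trained neural network; here we fake a
    deterministic embedding so the example is self-contained."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)  # unit vector for cosine similarity

# "Database": one embedding per scraped photo, plus where it was found.
db_embeddings = []  # list of 128-d unit vectors
db_sources = []     # list of source URLs / profile links

def enroll(image: np.ndarray, source_url: str) -> None:
    """Add a scraped photo and its source to the database."""
    db_embeddings.append(embed_face(image))
    db_sources.append(source_url)

def identify(query_image: np.ndarray, threshold: float = 0.8):
    """Return the source of the closest match, or None if nothing is close."""
    q = embed_face(query_image)
    sims = np.stack(db_embeddings) @ q  # cosine similarity (unit vectors)
    best = int(np.argmax(sims))
    return db_sources[best] if sims[best] >= threshold else None

# Example, using a random array as a stand-in for a real photo:
img = np.random.rand(64, 64)
enroll(img, "https://example.com/profile/123")  # hypothetical URL
print(identify(img))  # -> "https://example.com/profile/123"
```

The key design point is that the photos themselves need not be searched at query time; once reduced to embeddings, matching anyone against billions of records is just a vector lookup, which is what makes such databases scale so easily.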
The problem with this practice is that, traditionally, law enforcement only creates records on citizens once they have been arrested for a crime; random innocent people on the street would never have their details held by local police. The Clearview AI database effectively does exactly that. Even though the pictures are downloaded from publicly available sources, repurposing that data for law enforcement and compiling it into a database arguably violates privacy law, GDPR, and copyright (as individuals retain rights over their likeness in photos).
In fact, the public reaction to Clearview AI has been so adverse that governments worldwide have launched investigations and issued fines, along with ultimatums to remove their citizens' photos from the database. For example, the UK's Information Commissioner's Office recently announced its intention to fine Clearview AI just over £17m for breaches of GDPR, while Italy's data protection authority has issued a fine of €20m.
How Clearview AI is helping the Ukrainian effort
As noted above, we have argued on Electropages in two separate articles that the use of Clearview AI is unethical: holding the personal data and faces of billions of people for law enforcement use effectively makes anyone and everyone trackable by police. However, questions of morality can become murky when something immoral is able to deliver a moral good.
In the case of Clearview AI, its software and databases are being used by the Ukrainian army (provided free of charge) to identify potential Russian agents. One of the significant challenges faced by Ukrainian forces is that undercover Russian troops look Ukrainian and speak the same language; this has even been noted by Russian soldiers, who have messaged home saying, “they don’t know who to shoot as they look like us”.
Thus, Clearview AI and its enormous database can be used to quickly identify individuals, revealing where their faces have appeared online, their names, and potentially their countries of origin. This allows saboteurs and potential assassins to be identified and detained before they can complete their missions.
According to Clearview AI, its database of over 10 billion photos includes more than 2 billion images from the Russian social media site VKontakte, meaning that many Russians who have uploaded photos of their faces are identifiable. The use of Clearview AI also has the added benefit of identifying family members, helping to reunite families separated during attacks.
How should engineers approach such sensitive topics?
The system is not perfect, and people scanned by Clearview AI may be misidentified; even so, it could be a critical tool for stopping terror attacks against civilians and helping refugees. This is where the morality of Clearview AI becomes murky: on the one hand, it is a powerful tool that can do a lot of good; on the other, it can be a tool for oppression and illicit activities.
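To make the misidentification risk concrete, the toy example below shows how the match threshold in any recognition system trades false accepts (flagging the wrong person) against false rejects (missing a real match). The score distributions are invented purely for illustration and say nothing about Clearview AI's real accuracy.

```python
# Illustrative only: how a match threshold trades false accepts against
# false rejects. The similarity scores are synthetic, not real data.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical similarity scores: genuine pairs (same person) tend to
# score high, impostor pairs (different people) tend to score lower.
genuine = rng.normal(0.85, 0.06, 10_000)
impostor = rng.normal(0.55, 0.10, 10_000)

for threshold in (0.6, 0.7, 0.8):
    false_reject = np.mean(genuine < threshold)    # real matches missed
    false_accept = np.mean(impostor >= threshold)  # wrong people flagged
    print(f"threshold={threshold:.1f}  "
          f"false-reject={false_reject:.1%}  false-accept={false_accept:.1%}")
```

No threshold eliminates both error types at once, which is exactly why a misidentification in a high-stakes setting such as a warzone carries real consequences.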
As technology progresses, so does the potential for it to be misused. As such, engineers are increasingly required to consider the moral impact of their developments and their broader effect on human society. So, how should an engineer approach such a topic?
When starting a new project, engineers should begin by understanding its goal and final application. From there, time should be spent thinking about how the project could be misused, whether for stealing information, profiling individuals, or violating privacy. Once problem areas are identified, they should be documented, and methods for mitigating them developed, as sketched below.
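One concrete (and entirely hypothetical) way to keep such documentation alive is a simple machine-readable misuse-risk register that travels with the project's design documents. The fields and entries below are illustrative examples, not an industry standard.

```python
# A hypothetical, minimal "misuse risk register" a team could keep
# alongside a project's design documents. Entries are examples only.
from dataclasses import dataclass

@dataclass
class MisuseRisk:
    scenario: str    # how the system could be abused
    severity: str    # e.g. "low" / "medium" / "high"
    mitigation: str  # planned countermeasure

RISK_REGISTER = [
    MisuseRisk(
        scenario="Bulk identification of people who never consented",
        severity="high",
        mitigation="Rate-limit queries; require audited, logged access",
    ),
    MisuseRisk(
        scenario="Profiling individuals from linked public profiles",
        severity="medium",
        mitigation="Strip profile links from results by default",
    ),
]

# Print the register as a quick review checklist.
for risk in RISK_REGISTER:
    print(f"[{risk.severity.upper()}] {risk.scenario} -> {risk.mitigation}")
```

Keeping the register in the codebase itself means it gets reviewed alongside the code, rather than forgotten in a document no one reopens.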