Intel Develops Real-Time Deepfake Detector with an Accuracy of 96%
05-12-2022 | By Robin Mitchell
With the rise of AI and deepfakes, many researchers are increasingly concerned about their ability to spread false information and mislead the public, but Intel has recently developed a tool to detect deepfakes in real time. What challenges do deepfakes present, what has Intel developed, and how can this help defend against malicious AI in the future?
What challenges do deepfakes present?
There is no doubt that AI has advanced by leaps and bounds over the past 30 years. The first AI networks were designed to recognise basic patterns and often required vast computing resources simply unavailable to consumers. As researchers developed new AI techniques, advances in hardware allowed AI to be accelerated on consumer equipment such as graphics cards, which quickly led to the integration of basic AI systems into modern devices. Now, engineers design hardware specifically to run AI algorithms, with cutting-edge processors integrating dedicated AI cores that can run at extremely low power.
But while some AI researchers have focused on identifying objects in images and processing sensory data, others have turned their attention to the generation of images and sound. The result of all this work has been the creation of deepfakes, which can map one person's face onto another's and be virtually indistinguishable from reality. Deepfakes have numerous practical uses, one notable example being the digital recreation of actors in film. Whether an actor needs to look younger or is no longer alive, deepfakes can bring performers from the past into the present, and audio deepfakes can give these recreations a voice.
However, deepfakes present numerous challenges that are causing growing concern in society. One of the biggest is the ability to commit fraud and identity theft. Such scams are often found on sites such as YouTube, where fraudsters take someone famous (such as Elon Musk) and have a deepfake of them urge viewers to purchase cryptocurrency or phoney stock. If the deepfake is convincing enough, it can be difficult to prove that the video is a scam.
Deepfakes can also be challenging for political figures, whose likenesses can be used to create false narratives. For example, it is perfectly possible for one political group to deepfake an opponent, put words in that opponent's mouth, and damage the victim's integrity and character. This would allow a party to gain power through sheer manipulation, thereby undermining the electoral system.
Finally, deepfakes are also gaining popularity in the pornographic industry, whereby unsuspecting individuals have their likenesses placed onto professional actors. If the victim is lucky, the resulting images generated by deepfake algorithms never leave the creator's machine, but in many cases they are shared publicly, which not only has the potential to damage the individual's reputation but also to cause deep humiliation.
Intel has developed a real-time deepfake detector
AI is clearly a threat to society in many ways, so it's only fitting to ban it and prevent its use, right? Well, maybe the solution to problematic AI systems is to fight back with more AI, and this is exactly what Intel has done. Recognising the challenges presented by deepfakes, Intel has announced the development of a real-time deepfake detector that can monitor video streams and determine whether a video is a deepfake.
The new solution, called FakeCatcher, utilises Intel hardware and software to identify blood flow in the target video, a technique known as photoplethysmography (PPG). Simply put, the colour of skin changes subtly as blood flows through it, and this flow varies with each heartbeat. At the same time, the movement of muscles and skin causes blood to redistribute, which in turn changes skin colour, and these subtle changes are rarely reproduced by deepfake generators. Thus, the new system can check whether this blood-flow signal exists and, from there, identify whether the video is fake.
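To make the idea concrete, the minimal sketch below shows the general remote-PPG principle that this class of detector builds on. It is not Intel's implementation: the face region, frame rate, frequency band, and decision threshold are all illustrative assumptions. It averages the green channel of a face region over time and checks whether the resulting trace contains a plausible heartbeat frequency:

    import numpy as np

    def ppg_trace(frames, face_box):
        # Crude remote-PPG signal: mean green-channel intensity of
        # the face region in each frame (frames: float H x W x 3
        # arrays, values in 0-255).
        x, y, w, h = face_box
        return np.array([f[y:y+h, x:x+w, 1].mean() for f in frames])

    def heartbeat_band_power(trace, fps=30.0, low=0.7, high=4.0):
        # Fraction of spectral power in a plausible heart-rate band
        # (roughly 42 to 240 bpm). A real face tends to show a clear
        # peak here; many synthetic faces do not.
        trace = trace - trace.mean()
        power = np.abs(np.fft.rfft(trace)) ** 2
        freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
        band = (freqs >= low) & (freqs <= high)
        return power[band].sum() / (power.sum() + 1e-9)

    # Illustrative decision rule; the 0.5 threshold is an assumption,
    # not a figure published by Intel.
    def looks_real(frames, face_box, fps=30.0):
        return heartbeat_band_power(ppg_trace(frames, face_box), fps) > 0.5

FakeCatcher itself reportedly goes much further, extracting PPG signals from many facial regions and feeding the resulting spatiotemporal maps into a deep-learning classifier, but the underlying biological cue is the same.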
According to Intel, preliminary tests show an accuracy of 96%, which is significant in the fight against deepfake technologies. Furthermore, the technology works on real-time video, allowing it to defend against live streams (deepfake tools can already generate video in real time).
While the system is still in development, users can now upload video files of suspected deepfakes to see whether they are indeed fake. But while the underlying system can operate in real time, the public service returns a result within several hours (likely due to resource allocation and the infrastructure required).
How can this help combat AI in the future?
Using AI to fight AI is already being done, especially by cybersecurity teams, who increasingly find themselves targeted by AI-driven attacks. But in everyday life, few tools are available to consumers to help identify deepfakes. If Intel can find a way to integrate its FakeCatcher technology into mobile devices and graphics processors, consumer devices could authenticate video streams in real time and warn users when they are being shown fake content.
One challenge, however, with using good AI to defend against bad AI is that the bad AI can use the good AI's output to improve itself. Thus, the two sides find themselves in a vicious cycle of constantly trying to outdo each other (the same adversarial dynamic used to train generative adversarial networks), and this could quickly produce AI systems that far surpass humans at creating, and spotting, fake imagery. In fact, it may turn out that future humans will be unable to live with technology without numerous AI aids that filter out misleading and/or dangerous information, but of course, who decides what information is dangerous?
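To illustrate that cat-and-mouse loop using the hypothetical detector sketched earlier, the toy counter-move below shows how a forger who can query such a detector might defeat a naive heartbeat check simply by injecting a faint pulse-like flicker into fake footage. This is purely illustrative and not a description of any real attack on FakeCatcher:

    import numpy as np

    def inject_fake_pulse(frames, face_box, fps=30.0, bpm=72, amp=1.0):
        # Adversarial counter-move: add a faint periodic brightness
        # oscillation at a plausible heart rate to the face region so
        # that a naive band-power check "sees" a heartbeat.
        x, y, w, h = face_box
        t = np.arange(len(frames)) / fps
        flicker = amp * np.sin(2.0 * np.pi * (bpm / 60.0) * t)
        for frame, delta in zip(frames, flicker):
            region = frame[y:y+h, x:x+w, 1]
            frame[y:y+h, x:x+w, 1] = np.clip(region + delta, 0.0, 255.0)
        return frames

The defender's natural response is to examine the signal's spatial structure rather than a single global average, which is exactly why robust detectors analyse many facial regions at once, and so the cycle continues.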