
Intel says it can sort the living human beings from the deepfakes in real time

Intel claims it has developed an AI model that can detect in real time whether a video is using deepfake technology by looking for subtle changes in color that would be evident if the subject were a live human being.

The chipmaking giant claims FakeCatcher can return results in milliseconds and has a 96 percent accuracy rate.

There has been concern in recent years over so-called deepfake videos, which use AI algorithms to generate faked footage of people. The main concern has centered on the technology being used to make politicians or celebrities appear to say or do things they never actually said or did.

“Deepfake videos are everywhere now. You have probably already seen them; videos of celebrities doing or saying things they never actually did,” said Intel Labs staff research scientist Ilke Demir. And it isn’t just celebrities who are affected; ordinary citizens have been victims too.

According to the chipmaker, some deep learning-based detectors analyze raw video data to try to find tell-tale signs that identify it as a fake. FakeCatcher takes a different approach, analyzing real videos for visual cues that indicate the subject is genuine.

These cues include subtle changes in the color of pixels caused by blood flow as the heart pumps blood around the body. The blood-flow signals are collected from across the face, and algorithms translate them into spatiotemporal maps, Intel said, enabling a deep learning model to determine whether a video is real or fake. By contrast, some detection tools require video content to be uploaded for analysis, with results taking hours to come back, it claimed.
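To make the idea concrete, here is a minimal, illustrative sketch of how per-region color signals could be collected from a face and stacked into a spatiotemporal map. Intel has not published FakeCatcher's exact pipeline, so the face detector, the grid of regions, the green-channel averaging, and the map layout below are all assumptions for demonstration, not the company's method.

```python
# Illustrative sketch only: a crude remote-photoplethysmography (rPPG) style
# feature extractor. The region grid, green-channel averaging and the
# "spatiotemporal map" layout here are assumptions, not Intel's pipeline.
import cv2
import numpy as np

def face_roi(frame, cascade):
    """Return the largest detected face region in a frame, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return frame[y:y + h, x:x + w]

def region_signals(face, grid=4):
    """Average the green channel over a grid of face patches.

    Blood flow causes tiny periodic changes in skin color; the green
    channel is a common crude proxy for that signal.
    """
    h, w = face.shape[:2]
    vals = []
    for i in range(grid):
        for j in range(grid):
            patch = face[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            vals.append(patch[:, :, 1].mean())  # green channel (BGR order)
    return np.array(vals)

def spatiotemporal_map(video_path, max_frames=128):
    """Stack per-region signals over time into a 2D (regions x frames) map."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    columns = []
    while len(columns) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        face = face_roi(frame, cascade)
        if face is not None:
            columns.append(region_signals(face))
    cap.release()
    if not columns:
        return None
    # Each column corresponds to one frame of the video.
    return np.stack(columns, axis=1)

if __name__ == "__main__":
    st_map = spatiotemporal_map("input.mp4")  # hypothetical file path
    if st_map is not None:
        print("spatiotemporal map shape:", st_map.shape)
```

A classifier would then be trained on maps like these, the idea being that genuine footage carries a consistent pulse-like signal across facial regions while generated footage does not.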

However, it isn’t beyond the realm of possibility that anyone with the motive to create fake videos could, given enough time and resources, develop algorithms capable of fooling FakeCatcher.

Intel has naturally enough made extensive use of its own technologies in developing FakeCatcher, including the OpenVINO open-source toolkit for optimizing deep learning models and OpenCV for processing real-time images and videos. The development teams also used the Open Visual Cloud platform to provide an integrated software stack for Intel’s Xeon Scalable processors. The FakeCatcher software can run up to 72 detection streams simultaneously on 3rd Gen Xeon Scalable processors.
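For a rough sense of how that stack fits together, the sketch below uses OpenCV to decode frames and the OpenVINO runtime to run a classifier on the CPU. The model file, input size, and preprocessing are hypothetical placeholders; Intel has not released the FakeCatcher model itself.

```python
# Minimal sketch of the kind of stack Intel describes: OpenCV for frame
# capture and OpenVINO for optimized CPU inference. The IR model file
# "deepfake_detector.xml" and its 224x224 input layout are hypothetical.
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("deepfake_detector.xml")   # hypothetical model
compiled = core.compile_model(model, device_name="CPU")
output_port = compiled.output(0)

cap = cv2.VideoCapture("stream.mp4")               # hypothetical input
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Assume a 224x224 RGB input; real preprocessing depends on the model.
    blob = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    blob = np.expand_dims(blob.transpose(2, 0, 1), axis=0)  # NCHW layout
    score = compiled([blob])[output_port]
    print("real-vs-fake score:", float(score.ravel()[0]))
cap.release()
```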

According to Intel, there are several potential use cases for FakeCatcher, including preventing users from uploading harmful deepfake videos to social media, and helping news organizations to avoid broadcasting manipulated content. ®
