Imagine a video of you surfaces online, a video in which you say things you'd never say and do things you'd never do. Imagine said video sways how others perceive you. This is just one of the fears associated with Deepfakes, AI replicas of people so accurate that it's difficult to discern the difference between an actual person and their AI counterpart. And the technology is getting better all the time, while laws and their enforcement are struggling to catch up.
Thankfully, Intel has developed FakeCatcher, an AI that can detect Deepfakes with 96 percent accuracy.
Intel recently uploaded a video to YouTube, shining a light on the FakeCatcher AI. The video opens with Ilke Demir, a senior staff research scientist at Intel Labs, discussing Deepfake videos.
“Deepfake videos are everywhere now. You have probably already seen them; videos of celebrities doing or saying things they never actually did,” says Demir. Or does she? The camera then pans out to reveal the real Demir watching a Deepfake of herself on-screen, driving home Intel’s point.
Demir — the real Demir this time, we think — goes on to explain how Deepfake technology has evolved since 2014. While the technology has benign uses, like allowing actors to appear in films without being on location, it's also been used for nefarious purposes, like creating misleading political ads and even revenge porn. And then there are Deepfakes that illegally use celebrity likenesses. No stranger to being trolled on his newly purchased social media platform, Elon Musk was made fun of in a more visual way when a Deepfake of him appeared in a promo video for a crypto scam.
FakeCatcher, which has been trained via deep learning to detect the visual difference between Deepfakes and actual humans, can analyze and detect Deepfakes within milliseconds. The AI is trained to notice things that may not be apparent to the human eye, like blood flow beneath the skin. FakeCatcher, which runs on a combination of Intel's 3rd Gen Xeon Scalable processors and a variety of proprietary software, will hopefully allow consumers to easily confirm whether they are watching a Deepfake or an actual human being.
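Intel hasn't published FakeCatcher's internals, but the blood-flow cue it describes resembles a technique called remote photoplethysmography (rPPG): real skin subtly changes color with each heartbeat, while many synthetic faces don't. The toy sketch below (all function names and the demo data are hypothetical, not Intel's code) shows the core idea — average a face region's green channel across frames and check whether a pulse-like oscillation is present:

```python
import math

# Illustrative rPPG-style sketch, NOT Intel's implementation.
# A frame here is a list of rows, each row a list of (r, g, b) pixel tuples.

def mean_green(frame):
    """Average green-channel value over a (cropped) face region."""
    total = count = 0
    for row in frame:
        for _r, g, _b in row:
            total += g
            count += 1
    return total / count

def rppg_signal(frames):
    """One sample per frame: a crude 'blood flow' trace over time."""
    return [mean_green(frame) for frame in frames]

def pulse_strength(signal):
    """Peak-to-peak amplitude; a value near zero suggests no pulse."""
    return max(signal) - min(signal)

# Synthetic demo: a 'real' face whose green channel oscillates slightly
# frame to frame, versus a 'fake' face with a perfectly flat green channel.
real_frames = [[[(100, 120 + 2 * math.sin(t / 3), 90)]] for t in range(60)]
fake_frames = [[[(100, 120, 90)]] for t in range(60)]

print(pulse_strength(rppg_signal(real_frames)))  # small but nonzero
print(pulse_strength(rppg_signal(fake_frames)))  # 0.0 — no pulse signal
```

A real detector would, of course, track the face across frames, filter the trace to plausible heart-rate frequencies, and feed such features to a trained classifier rather than thresholding raw amplitude.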