- By James Clayton
- North America technology reporter
In March last year, a video emerged showing President Volodymyr Zelensky telling the people of Ukraine to lay down their arms and surrender to Russia.
It was a pretty obvious deepfake – a type of fake video that uses artificial intelligence to swap faces or create a digital version of someone.
But as developments in AI make it easier to produce deepfakes, their rapid detection has become all the more important.
Intel thinks it has a solution, and it’s all about the blood in your face.
The company named the system “FakeCatcher”.
At Intel’s lavish and mostly empty offices in Silicon Valley, we meet Intel Labs research scientist Ilke Demir, who walks us through how it works.
“We ask what is real about authentic videos. What is real about us? What is the watermark of being human?” she says.
At the center of the system is a technique called photoplethysmography (PPG), which detects changes in blood flow.
The faces created by deepfakes do not transmit these signals, she says.
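The broad idea behind PPG is well documented in the research literature: skin colour shifts very slightly with each heartbeat, and averaging the green channel over a patch of facial skin across frames recovers a pulse-like signal. The sketch below is not Intel’s implementation – the fixed region of interest, frame rate and frequency band are illustrative assumptions – but it shows how such a signal might be extracted and checked for a heart-rate peak.

```python
# A minimal sketch of remote photoplethysmography (rPPG), the general
# technique FakeCatcher builds on. This is NOT Intel's code: the region
# of interest, frame rate and frequency band below are assumptions.
import cv2
import numpy as np

def ppg_signal_strength(video_path, fps=30):
    cap = cv2.VideoCapture(video_path)
    means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Hypothetical fixed forehead patch; a real system would track the face.
        roi = frame[50:120, 100:220]
        # Average green-channel intensity: it varies subtly with blood volume.
        means.append(roi[:, :, 1].mean())
    cap.release()

    signal = np.array(means) - np.mean(means)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    # Look for a dominant peak in the heart-rate band
    # (roughly 0.7-4 Hz, i.e. about 42-240 beats per minute).
    band = (freqs > 0.7) & (freqs < 4.0)
    return spectrum[band].max() / (spectrum[1:].mean() + 1e-9)
```

On a real face the spectrum tends to show a clear peak at the heart rate; the claim is that synthesised faces lack that coherent signal.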
The system also analyzes eye movement to verify authenticity.
“So normally, when humans look at a point, when I look at you, it’s like I’m shooting rays from my eyes towards you. But for deepfakes, it’s like googly eyes, they’re divergent,” she says.
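Her description maps onto a simple geometric check: if the two eyes’ estimated gaze rays converge on a point in front of the face, the gaze is consistent; if they diverge or never come close to meeting, something is off. The sketch below only illustrates that geometry – it is not FakeCatcher’s method, and it assumes the eye positions and gaze directions come from a separate gaze-estimation model that isn’t shown.

```python
# Illustrative geometry for gaze convergence, not Intel's implementation.
# left_eye/right_eye are 3D eye positions, left_dir/right_dir are gaze
# directions, all assumed to come from some gaze-estimation model.
import numpy as np

def gaze_convergence(left_eye, left_dir, right_eye, right_dir):
    """
    Find where the two gaze rays pass closest to each other.
    Returns (depth, gap): depth > 0 means the rays converge in front of
    the face, as real fixation does; a large gap or negative depth
    suggests the inconsistent, 'divergent' gaze described for fakes.
    """
    p1, d1 = np.asarray(left_eye, float), np.asarray(left_dir, float)
    p2, d2 = np.asarray(right_eye, float), np.asarray(right_dir, float)
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)

    # Closest points on two lines (standard line-line geometry).
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:            # near-parallel gaze: looking far away
        return float("inf"), np.linalg.norm(w - (w @ d2) * d2)
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    c1, c2 = p1 + t1 * d1, p2 + t2 * d2
    depth = 0.5 * (t1 + t2)          # rough distance to the fixation point
    gap = np.linalg.norm(c1 - c2)    # how near the rays come to meeting
    return depth, gap
```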
By looking at these two characteristics, Intel believes it can tell the difference between real and fake video in seconds.
The company claims that FakeCatcher is 96% accurate. So we asked to test the system. Intel agreed.
We used a dozen clips of former US President Donald Trump and President Joe Biden.
Some were real, others were deepfakes created by the Massachusetts Institute of Technology (MIT).
Watch: BBC’s James Clayton puts a deepfake video detector to the test
In terms of finding deepfakes, the system seemed pretty good.
We mainly chose lip-synced fakes – real videos where the mouth and voice had been edited.
And it got every answer right, except one.
However, when we moved on to real, authentic videos, it started to have problems.
Several times the system declared a video to be fake when it was actually real.
The more pixelated a video is, the harder it is to capture blood flow.
The system also does not analyze audio. So some videos that sounded quite obviously real from the voice alone were labeled as fake.
The concern is that if the program says a video is fake when it is actually genuine, it could cause real problems.
When we put this point to Ms. Demir, she says that “verifying that something is fake, versus ‘be careful, it may be fake’, is weighted differently”.
She argues the system is erring on the side of caution. Better to catch all the fakes – and flag some real videos too – than to miss the fakes.
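That trade-off is easy to see with a toy example. The scores below are made up for illustration – they are not FakeCatcher’s outputs – but they show how lowering the “call it fake” threshold catches more fakes while also flagging more genuine videos.

```python
# A toy illustration (not Intel's code) of the trade-off described above:
# a lower "call it fake" threshold catches more fakes but raises more
# false alarms. All scores are invented for the example.
fake_scores = [0.92, 0.85, 0.78, 0.40]   # model scores for known fakes
real_scores = [0.10, 0.22, 0.35, 0.55]   # model scores for genuine videos

for threshold in (0.7, 0.5, 0.3):
    caught = sum(s >= threshold for s in fake_scores)
    false_alarms = sum(s >= threshold for s in real_scores)
    print(f"threshold {threshold}: caught {caught}/4 fakes, "
          f"{false_alarms}/4 real videos wrongly flagged")
```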
Deepfakes can be incredibly subtle: a two-second clip in a political campaign ad, for example. They can also be of poor quality. A fake can be made by only changing the voice.
In this regard, FakeCatcher’s ability to work “in the wild” – in real-world contexts – has been questioned.
A collection of deepfakes
Matt Groh is an assistant professor at Northwestern University in Illinois and an expert on deepfakes.
“I don’t doubt the stats they listed in their initial assessment,” he says. “But what I doubt is that the statistics are relevant to real-world contexts.”
This is where it becomes difficult to assess FakeCatcher’s technology.
Programs like facial recognition systems often give extremely generous statistics for their accuracy.
However, when actually tested in the real world, they may be less accurate.
Essentially, the accuracy depends entirely on the difficulty of the test.
Intel claims that FakeCatcher has undergone rigorous testing, including a “wild” test in which the company compiled 140 fake videos and their real counterparts.
In this test, the system had a success rate of 91%, says Intel.
However, Matt Groh and other researchers want to see the system analyzed independently. They don’t think it is good enough for Intel to set its own test.
“I would love to evaluate these systems,” says Groh.
“I think that’s really important when we’re designing audits and trying to understand how accurate something is in a real-world setting,” he says.
It’s surprising how difficult it can be to tell a fake video from a real one – and this technology certainly has potential.
But based on our limited testing, it still has some way to go.