A UK scaleup this week unveiled a novel approach to identity verification: asking users to turn their heads.
Onfido, a University of Oxford spin-out, launched the software amid rising identity fraud. Economic pressures, increasing digitization, and pandemic-fueled upheaval recently led politicians to warn that an “epidemic of fraud” is sweeping across the UK, Onfido’s home country.
Similar developments have been seen around the world. In the United States, for example, around 49 million consumers fell victim to identity theft in 2020, costing them a total of about $56 billion.
These trends have triggered a boom in the identity verification market. Increasingly sophisticated fraudsters are also forcing providers to develop more advanced detection methods.
Onfido gave TNW an exclusive demo of its newest entry into the field: a turn-based capture experience dubbed Motion.
Adoption of biometric onboarding has been held back by two key issues. “Active” detection methods, which require users to perform a sequence of gestures in front of a camera, are known to cause high dropout rates.
“Passive” approaches, on the other hand, remove this friction because they do not require specific user actions, but this often creates uncertainty in the process. A little friction can reassure customers, but too much scares them away.
Motion attempts to address both of these issues. Giulia Di Nola, Onfido’s product manager, told TNW that the company tested more than 50 prototypes before deciding that the head turn offered the best balance.
“We experimented with device movements, different pattern movements, end-user feedback, and worked with our research team,” she said. “It was the sweet spot that we found easy to use, secure enough, and giving us all the signals we needed.”
Onfido claims that the system’s false rejection and acceptance rates are both below 0.1%. Verification, meanwhile, takes 10 seconds or less for 95% of users. That’s fast enough for client onboarding, but too slow for frequent use, which may explain why Onfido doesn’t yet offer the service for regular authentication.
In our demo, the process seemed quick and seamless. After sharing a photo ID, the user is prompted to provide their facial biometrics through a smartphone. They are first asked to position their face in the frame, then to turn their head slightly to the right and left – the order does not matter.
As they move, the system provides feedback to ensure correct alignment. A few moments later, the application makes its decision: clear.
Under the hood
While the user turns their head, AI compares the face on camera with the face on the ID.
The video is split into a sequence of frames, which are then separated into different sub-components. A series of deep learning networks then analyzes both the individual parts and the video as a whole.
The networks detect patterns in the image. In facial recognition, these patterns range from the shape of a nose to the color of the eyes. For anti-spoofing, the patterns can be reflections from a replayed video, the glare of a digital screen, or the sharp edges of a mask.
Each network constructs a representation of the input image. All the information is then aggregated into a single score.
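The aggregation step described above can be sketched in a few lines. This is a minimal, hypothetical illustration of combining per-network scores into one decision; the analyzer names, signals, and threshold are invented for the example and are not Onfido's models.

```python
from statistics import mean

# Each function stands in for a deep network scoring one sub-component
# of the capture (placeholders, not Onfido's actual networks).
def texture_score(frames):
    # e.g. looks for screen glare or the sharp edges of a mask
    return mean(f["texture"] for f in frames)

def motion_score(frames):
    # e.g. checks the consistency of the head turn across frames
    return mean(f["motion"] for f in frames)

def aggregate(frames, threshold=0.5):
    """Combine the per-network scores into a single genuine-vs-spoof score."""
    score = mean([texture_score(frames), motion_score(frames)])
    return {"score": score, "genuine": score >= threshold}

# Toy capture: two frames, each with per-signal scores in [0, 1].
frames = [
    {"texture": 0.90, "motion": 0.80},
    {"texture": 0.95, "motion": 0.85},
]
result = aggregate(frames)  # {'score': 0.875, 'genuine': True}
```

In a real system the per-frame signals would come from the networks themselves and the aggregation would likely be learned rather than a fixed average, but the shape of the pipeline — many specialized scores reduced to one — is the same.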
“That’s what our customers see: whether or not we think the person is genuine or a spoof,” said Romain Sabathe, Onfido’s applied science manager for machine learning.
Onfido’s confidence in Motion stems, in part, from an unusual division within the company: a fraud creation unit.
In a space resembling a photo studio, the team tested various masks, lighting conditions, resolutions, videos, manipulated images, refresh rates, and angles. In total, they created over 1,000,000 different spoof examples, which were used to train the algorithm.
Each case was tested against the system. If one passed the checks, the team probed Motion further with similar spoof types, such as different versions of a mask. This created a feedback loop to find problems, fix them, and improve the mechanism.
Motion also had to work for a wide range of users. Despite victim stereotypes, fraud affects most demographics fairly evenly. To ensure the system serves them all, Onfido used diverse training datasets and extensive testing. The company claims this has reduced algorithmic bias and false rejections across all geographies.
Sabathe demonstrated how Motion works when a fraudster uses a mask.
When the system captures his face, it extracts information from the image and plots the result as a point on a 3D graph.
The graph is made up of colored clusters, which correspond to genuine user characteristics and fraud types. When Sabathe puts on the mask, the system plots the image within the spoof cluster. As soon as he removes it, the point moves into the genuine cluster.
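The cluster view above amounts to placing each capture's embedding near a labeled group of points. Here is a minimal sketch of that idea, assuming a nearest-centroid decision; the centroid coordinates and labels are made up for illustration and are not Onfido's actual representation.

```python
import math

# Hypothetical cluster centroids in the 3D embedding space, one per
# genuine/spoof category (values invented for the example).
CENTROIDS = {
    "genuine": (0.9, 0.1, 0.2),
    "mask":    (0.1, 0.8, 0.7),
    "replay":  (0.2, 0.2, 0.9),
}

def classify(embedding):
    """Return the label of the cluster centroid closest to the embedding."""
    return min(CENTROIDS, key=lambda label: math.dist(embedding, CENTROIDS[label]))

# A capture embedded near the genuine cluster...
no_mask = classify((0.85, 0.15, 0.25))   # "genuine"
# ...and the same person after putting on a mask.
with_mask = classify((0.15, 0.75, 0.65))  # "mask"
```

In practice the embeddings are learned by the networks and live in far more than three dimensions; the 3D graph Sabathe showed is a projection that makes the separation between genuine users and spoof types visible.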
“We can begin to understand how the network interprets the different types of spoofs and the different genuine users it sees based on this representation,” he said.
Onfido’s head rotation technique resembles that revealed last month by Metaphysic.ai, a startup behind the viral deepfakes of Tom Cruise. The company’s researchers found that a sideways glance could expose deepfake video callers.
Di Nola notes that such synthetic media attacks remain rare – for now.
“It’s definitely not the most common type of attack we see in production,” she said. “But it is an area that we are aware of and in which we invest.”
Identity theft attacks and defenses will continue to evolve rapidly.