Researchers have cracked facial recognition systems

Just nine ‘master faces’ can pass for 40% of people’s faces.

Getting the keys to the kingdom is a boon for any kind of criminal, which is why master keys, the kind that open every door in a building, are so sought after. They’re the surefire way for criminals to reach anything they want once inside, which is why master keys are so carefully handled and access to them so tightly controlled.

For our digital lives, there is no such thing as a master password.

But there is the opportunity for ‘master faces’: the facial recognition equivalent of a master key. Master faces look enough like a large proportion of the population to hoodwink facial recognition systems into granting access to devices, even though they don’t belong to the actual owner. They exploit a reality of human biology: while we’re all different, the differences between one face and another can be minute, and many of us look eerily alike.

Think of people you’d be convinced are twins, or celebrity lookalikes, and you get the idea. Master faces are used to impersonate users, with a high probability of success, without the attacker ever being the user in question.

AI unlocks the code

Traditionally, you’d have to scour the world to find someone who looks similar enough to a broad range of people to act as a master face. But advances in AI and neurally generated faces, best known in the form of deepfakes, mean that it’s now possible to generate entire faces from little more than some training data and a command.

And Israeli researchers based at Tel Aviv University have harnessed that technology to generate a series of master faces that are able to fool three leading facial recognition systems to the extent that they offer hackers the keys to the kingdom – no questions asked.

It’s a worrying development that throws into question the security of what was considered one of the more reliable lines of defence for people’s devices. “Face-based authentication is extremely vulnerable, even if there is no information on the target identity,” the authors conclude.

How it works

The researchers borrowed the power of an off-the-shelf face-generating AI model: StyleGAN. They then tweaked the process slightly to predict how competitive a given sample is, in other words how many identities it is likely to pass for, and to optimise towards that prediction. The master face generation process is repeated sequentially, each time covering the identities that were not covered by the previously generated faces. They found that just a handful of generated faces could pass for individuals across a wide range of the Labeled Faces in the Wild dataset: between 40% and 60%.
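
To make the sequential idea concrete, here is a minimal Python sketch of the coverage loop. It is an illustration under assumptions, not the authors’ code: sample_latent(), mutate() and covers() are hypothetical stand-ins for the StyleGAN latent sampler, the evolutionary perturbation step and the facial recognition verifier, and the simple hill-climbing search below only approximates the predictor-guided evolutionary search the researchers describe.

from typing import Callable, List, Sequence, Set

def coverage(latent, identities: Sequence, covers: Callable) -> Set:
    # Identities in the gallery that the face generated from this latent passes for.
    return {ident for ident in identities if covers(latent, ident)}

def find_master_faces(identities: Sequence,
                      sample_latent: Callable,  # hypothetical: draws a random generator latent
                      mutate: Callable,         # hypothetical: returns a perturbed copy of a latent
                      covers: Callable,         # hypothetical: True if the verifier accepts the face as this identity
                      n_faces: int = 9,
                      n_candidates: int = 200,
                      n_steps: int = 50) -> List:
    # Greedy, set-cover-style loop: each new master face is optimised only
    # against the identities that earlier faces failed to cover.
    remaining = set(identities)
    masters = []
    for _ in range(n_faces):
        if not remaining:
            break
        # Seed with the best of several random candidates, then hill-climb.
        best = max((sample_latent() for _ in range(n_candidates)),
                   key=lambda z: len(coverage(z, remaining, covers)))
        for _ in range(n_steps):
            candidate = mutate(best)
            if len(coverage(candidate, remaining, covers)) > len(coverage(best, remaining, covers)):
                best = candidate
        masters.append(best)
        remaining -= coverage(best, remaining, covers)
    return masters

The greedy structure is what lets a handful of faces go a long way: each round spends its effort only on the people the previous faces could not impersonate.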

The generated faces were then tested against different facial recognition systems. Each works in a subtly different way, but the nine fictional faces were nonetheless broad and generic enough to fool every one of them, granting access to devices without any issue.
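
For context on what ‘fooling’ a system means here, most face verifiers compare an embedding of the probe image against the enrolled user’s embedding and accept the probe if the two are similar enough. Below is a minimal sketch of that acceptance test, assuming generic embedding vectors and an illustrative threshold rather than the settings of the systems the researchers actually tested.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_accepted(probe_embedding: np.ndarray,
                enrolled_embedding: np.ndarray,
                threshold: float = 0.6) -> bool:
    # A master face "covers" an identity when its embedding clears the
    # verifier's similarity threshold for that enrolled user.
    # The 0.6 value is illustrative; real systems tune their own thresholds.
    return cosine_similarity(probe_embedding, enrolled_embedding) >= threshold

A master face succeeds by sitting close to many enrolled embeddings at once, which is exactly the property the coverage loop above is searching for.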

Generating more faces would likely cover an even broader share of the population, making the attack more effective still, an even more worrying development.

Currently, the AI used by the academics generates static images of faces, which some facial recognition systems’ anti-spoofing technology could pick up. But, the researchers point out, it wouldn’t be too difficult for them to animate the generated master faces and overcome liveness detection methods. It all makes for worrying reading for those who rely on facial recognition to keep their devices safe. While the experiment is a proof of concept, rather than a real-life vulnerability being targeted in the wild, it’s only a matter of time before it becomes a reality.

Source: https://cybernews.com/security/researchers-have-cracked-facial-recognition-systems/
