Adversarial algorithms — when the brawny brains of cybersecurity’s fight clubs slug it out in a battle of good vs. evil

Jeff Elder
3 min read · Dec 21, 2019

In movies like “Transformers” and “Terminator 2,” good robots fight bad robots with the fate of humanity hanging in the balance. The spectacle can seem melodramatic and downright inhuman. If only it weren’t so close to the truth.

Experts say adversarial algorithms, the brawling brains of artificial intelligence, are already duking it out in the fight clubs of cybersecurity. Some of them even cast strange spells that make their foes hallucinate.

Here’s how it works: Cybersecurity companies and other researchers train their AI algorithms by feeding them many examples of benign and malicious files, captured with antivirus software and other cybersecurity tools. Cybercriminals, meanwhile, train their AI algorithms to generate malicious files that look harmless.
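
To make the defender’s half of that loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical: synthetic feature vectors (file size, byte entropy, import count) stand in for real captured samples, and an off-the-shelf scikit-learn classifier stands in for a production detector.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-file features: [size_kb, byte_entropy, import_count].
benign = rng.normal([200, 5.0, 40], [80, 0.5, 15], size=(500, 3))
malicious = rng.normal([150, 7.2, 12], [60, 0.4, 8], size=(500, 3))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

# The white-hat side: fit a detector on the labeled examples.
detector = RandomForestClassifier(n_estimators=100, random_state=0)
detector.fit(X, y)

# Score a new, unseen file by its feature vector.
sample = np.array([[180.0, 6.9, 14.0]])
print("malware probability:", detector.predict_proba(sample)[0, 1])
```

Real detectors use far richer features and far more data, but the shape of the training step is the same: labeled examples in, a verdict-producing model out.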

But just as a flu vaccine may stop all kinds of threats and still let one bug through that makes you sick, a white-hat algorithm can miss an outlier that sneaks into systems. Each outlier makes the black-hat algorithm smarter, because now it knows what can break through.
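
The attacker’s half of the loop can be sketched the same way. In this hypothetical black-box scenario, the black-hat side repeatedly mutates a malicious sample, asks the detector for a verdict, and keeps every variant that comes back labeled benign. The toy detector is rebuilt exactly as in the sketch above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Rebuild the same toy detector as in the previous sketch.
rng = np.random.default_rng(0)
benign = rng.normal([200, 5.0, 40], [80, 0.5, 15], size=(500, 3))
malicious = rng.normal([150, 7.2, 12], [60, 0.4, 8], size=(500, 3))
detector = RandomForestClassifier(n_estimators=100, random_state=0).fit(
    np.vstack([benign, malicious]), [0] * 500 + [1] * 500
)

base = np.array([150.0, 7.2, 12.0])  # a typical malicious feature vector
evasive = []
for _ in range(1000):
    # Mutate the sample, then query the detector like any other user would.
    candidate = base + rng.normal(scale=[50, 1.2, 12])
    if detector.predict(candidate.reshape(1, -1))[0] == 0:  # scored benign
        evasive.append(candidate)

print(f"{len(evasive)} of 1000 mutated samples slipped past the detector")
```

Every sample on the `evasive` list marks a blind spot, and the blind spots are exactly what the next generation of black-hat training aims for.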

“Adversarial attacks on cybersecurity’s malware detectors might seem theoretical, but actually they are already happening in the wild,” said Sadia Afroz, a researcher at Avast and U.C. Berkeley, at the recent CyberSec & AI Prague conference in the Czech Republic.

And just like in the movies, one robot gets the upper hand, and then the other battles back. It is a seesaw of enormous computing power, all taking place without humans playing an active role.

If this sounds remote, it isn’t. You deal with adversarial algorithms every day via spam email.

“A strong example of both adversarial data scenarios is spam email. Spammers have evolved to have sophisticated adversarial toolkits. For instance, we know that spam email will transform content such that a filtering mechanism cannot read it,” says academic and technologist Jason M. Pittman.
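
A toy version of the transformation Pittman describes: a naive keyword filter on one side, and on the other a homoglyph swap that leaves the message readable to a human while making it unreadable to the filter. Both the blocklist and the substitution table are invented for illustration.

```python
BLOCKLIST = {"viagra", "lottery", "winner"}

def naive_filter(message: str) -> bool:
    """Return True if the message looks like spam to a keyword filter."""
    words = message.lower().split()
    return any(word.strip(".,!") in BLOCKLIST for word in words)

# The spammer's countermove: swap Latin letters for look-alike Cyrillic ones.
HOMOGLYPHS = str.maketrans({"a": "\u0430", "e": "\u0435", "o": "\u043e"})

original = "Congratulations winner! Claim your lottery prize"
obfuscated = original.translate(HOMOGLYPHS)

print(naive_filter(original))    # True: caught by the blocklist
print(naive_filter(obfuscated))  # False: same text to a human, invisible to the filter
```

Modern filters normalize look-alike characters and weigh many other signals, which is precisely why the toolkits on both sides keep evolving.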

In the heat of battle an algorithm, like a weary boxer, may get a bit woozy and see things that are not there, says Battista Biggio, an assistant professor at Italy’s University of Cagliari. In his presentation at CyberSec & AI Prague, Biggio said data-driven artificial intelligence and machine-learning models suffer from hallucinations known as “adversarial examples”: carefully crafted inputs that make them perceive images, text, and audio that are not there, throwing off their logic. That, too, is a tactic of this computer vs. computer warfare. Algorithms learn to use patterns that can deceive their rivals.
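
A bare-bones illustration of an adversarial example, using a made-up linear scorer rather than any real detector: a small, deliberate nudge to the input flips the model’s verdict even though the sample has barely changed.

```python
import numpy as np

# Weights of a made-up linear malware scorer (not any real product).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def malicious_probability(x):
    return 1 / (1 + np.exp(-(w @ x + b)))  # logistic score

x = np.array([0.6, -0.4, 0.2])               # original sample
print("before:", malicious_probability(x))   # ~0.83, flagged malicious

# Fast-gradient-style evasion: step each feature against the gradient of
# the score, which for a linear model is simply w.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)
print("after: ", malicious_probability(x_adv))  # ~0.46, waved through
```

The same gradient trick, scaled up to deep networks, produces the hallucinations Biggio describes: inputs that look unchanged to a person but mean something entirely different to the model.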

This fight of good vs. evil may be the pinnacle of AI, and of a whole branch of technology. But it is no movie. The stakes, researchers say, could hardly be higher.

“This active arms race makes AI in security particularly challenging,” says Rajarshi Gupta, Avast’s head of AI.
