
SHOWFUN - Show & Fun & More!

How the US military is training machines to protect its digital assets

By now we’re all familiar with the story of the unassuming computer hacker-turned-superhero. It’s a favorite Hollywood trope, played out in movies like The Matrix and WarGames, and on the small screen with Mr. Robot, a hit USA Network television drama. But if the Defense Advanced Research Projects Agency (DARPA) has its way, security analysts and hackers alike could well find themselves replaced by machines. An ongoing DARPA project, one with the goal of using artificial intelligence to tackle security issues, is now beginning to bear fruit and may soon muscle out the human competition in these arenas.

To get a view of the front lines in this story, one need not penetrate any secret government bunker or fortified data center. Instead, the action is proceeding directly beneath the halogen-lit glare of casino lights, to the accompaniment of cocktail waitresses’ clicking heels. Welcome to the Cyber Grand Challenge, a DARPA-hosted event at the Paris Hotel and Casino in Las Vegas, Nevada. In the plush confines of an event center on August 4, 2016, seven artificial intelligence systems will square off against one another to determine which is best at patching a penetrated computer network. It’s the hacker equivalent of the World Series, live-streamed and open to the public.

Call it insidious design, or just brilliant stratagem. DARPA knows the easiest way to get something done is to have someone else do it for you, and that the best place to hide something is in plain view. Hence the Grand Challenges, held periodically and under full media scrutiny. The general idea is to dangle large cash prizes in front of various private-sector groups in the hope of motivating them to develop some sought-after bit of scientific wizardry.

The results of past years have generally vindicated DARPA’s approach. Much of the technology behind the self-driving car came out of a DARPA Grand Challenge, as did robotic expertise that will likely assist in future missions to Mars. Nice from a public-good perspective, but it’s worth remembering that DARPA is under no obligation to disclose the exact role for which it wants a technology, and since it is a wing of the military, it’s probably safe to assume the purposes are not altogether peaceful. (For a more penetrating discussion of the ways DARPA has managed to hoodwink scientists into participating in its programs, read Confronting society’s absurd enthusiasm for DARPA’s murdering ‘mad science’.)

There’s good reason DARPA wants to automate the business of cybersecurity. The time lag between detecting a network vulnerability and designing a patch for it gives hackers a distinct advantage over security personnel. In the intervening months between when a vulnerability is first detected and when a suitable patch is ready for release, hackers have free rein to move through many thousands of systems. Automating the patching process would tilt the landscape in favor of the security team.

But as usual, there is a sinister side to this seemingly benign goal of automating computer security. An artificial intelligence capable of writing code to patch a security hole wouldn’t be far from one capable of exploiting that hole; imagine an AI system producing computer viruses more sophisticated than any a human could write. A virus-spewing artificial intelligence, whether in the hands of the military or accidentally leaked into the public domain, would be a scourge of biblical proportions. Even the mere idea of such a creation could prove problematic. As Nick Bostrom reasons in his seminal work Superintelligence: Paths, Dangers, Strategies, the belief by foreign governments that America is pursuing an AI hacking program could be enough to spur them to build one of their own, quickly escalating into a kind of AI arms race.


There is already a rich history of governments using hacking tools to advance their political agendas: China has made little secret of its army of state-sponsored hackers, and Stuxnet, a virus famously used to undermine Iran’s uranium-enrichment program, suggests Western governments are not above trying their own hand at hacking. In light of this trend, a government-sponsored AI hacking program seems likely to be a secondary, if not primary, motivation behind DARPA’s Cyber Grand Challenge.

While there is plenty of room for dark speculation about the Cyber Grand Challenge, let’s get down to the nuts and bolts of what one can expect to see at the Paris Hotel on August 4th. The event centers on a game hackers call Capture the Flag. This is not the schoolyard game of yore; rather, it is a highly developed form of cyberwarfare in which contestants reverse engineer their opponents’ systems to expose flaws and steal a specific file (the flag), while simultaneously patching security holes in their own systems and protecting the file the opposition is attempting to commandeer.

DEFCON, the hacking conference held annually in Las Vegas, has hosted such Capture the Flag (CTF) events almost since its inception, popularizing them as the gold standard for assessing hackers and security analysts alike. In a case of odd bedfellows if ever there was one, the DARPA Cyber Grand Challenge will take place alongside DEFCON this year, further showing the US government’s willingness to court the fringe elements of the cyber community to advance its agenda.

However, there will be one major difference between the CTF game at the Cyber Grand Challenge and the one routinely hosted at DEFCON: the competitors will be AI systems instead of humans. For the moment, DARPA has made no mention of pitting these algorithms against human foes in CTF, but such a clash seems likely to be in the offing. And while the DEFCON hacker community seems to be treating its DARPA guests with benign camaraderie, this bonhomie could quickly unravel were its star hackers to find themselves on the losing end of a confrontation with a DARPA-sponsored AI adversary. Whatever the outcome of the Cyber Grand Challenge, the worlds of hacking and computer security will likely never be the same.

In time for Black Hat and DEFCON, we’re covering security, cyberwar, and online crime all this week; check out the rest of our Security Week stories for more in-depth coverage.

