With the many meteoric victories artificial intelligence has scored in the recent past, it’s easy to forget the story line has basically been one of evolution rather than revolution. The neural networks upon which the likes of Apple’s Siri and Google Now are built have roots stretching back to at least the late 1950s, when Frank Rosenblatt pioneered the perceptron, a simple single-layer neural network, and suggested that stacking additional layers could extend its power.
Much of the progress in the field since then can be traced to better data sets for training neural nets and more sophisticated applications of the underlying technology. One stumbling block, however, has always remained: no matter how good at prediction a neural network became, it could not generalize that ability to learning new tasks. Now, for the first time, a neural network has done just that. The ramifications could be truly seismic.
Many experts have seen the ability to generalize learning as one of the defining differences between how a neural network attacks a learning problem and how a human does. Humans can apply models they have learned from one task to a second, previously untried endeavor. For instance, the first time you learned to whisk eggs, you could subsequently apply that knowledge to whisking cream, or any other whisking-based task. Not so for a deep neural network, which needs to be trained anew for each activity.
A computer that could generalize between learned activities would fundamentally alter the intelligence landscape, conceivably igniting the kind of “hard takeoff” scenario espoused by Nick Bostrom in his seminal book Superintelligence: Paths, Dangers, Strategies. In a hard takeoff scenario, a self-improving AI recursively augments its learning ability to the point where humans no longer really pose any competition. An AI that can generalize between learned activities could use its vast storehouse of learned models to attack any new activity with a level of sophistication only dreamed of by humans.
We shouldn’t be surprised that this breakthrough comes from the folks at DeepMind, the London-based AI firm responsible for AlphaGo, the Go-playing supercomputer. Writing in the journal Nature last week, the team described the theory behind the new AI, which they have dubbed a Differentiable Neural Computer (DNC). It relies upon a high-throughput external memory to store previously learned models, combined with a system for generating new neural networks based upon the archived models.
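The key trick that makes an external memory usable by a neural network is that reads and writes are soft, differentiable weightings over memory rows rather than discrete lookups, so gradients can flow through the memory during training. The sketch below illustrates content-based addressing, one of the mechanisms the DNC uses; it is a minimal NumPy illustration of the general technique, not DeepMind’s code, and the variable names and the sharpening parameter `beta` are chosen here for clarity.

```python
import numpy as np

def content_read(memory, key, beta):
    """Differentiable content-based read: compare a query key against
    every memory row by cosine similarity, sharpen with strength beta,
    and return a weighted blend of the rows. Illustrative sketch only."""
    # Cosine similarity between the key and each memory row
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    sim = memory @ key / norms
    # Softmax with sharpening strength beta -> normalized read weights
    w = np.exp(beta * sim)
    w /= w.sum()
    # Read vector: a soft, differentiable mixture of memory rows
    return w @ memory, w

# Three stored rows; the query most closely matches the first one
memory = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
read_vec, weights = content_read(memory, np.array([1.0, 0.0]), beta=10.0)
```

Because every step is a smooth function, the whole read can sit inside a larger network and be trained end to end by backpropagation, which is what distinguishes this from an ordinary database lookup.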
DeepMind’s achievement also undercuts previous claims that computers are nowhere near achieving human-level general intelligence, a possibility that has prompted much hand-wringing among those who see artificial intelligence as posing an extinction-level threat to humanity.
Given the tensions surrounding AI at the moment, it’s not surprising DeepMind is couching this breakthrough in the most mundane terms. The example offered in their Nature paper was the DNC’s ability to navigate a London subway map from previous experience, finding the shortest path between specified points and inferring missing links in randomly generated graphs. Finding an optimal route between locations is something we are already familiar with computers doing, so it’s calculated to underwhelm.
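To see why the demo underwhelms on its face, note that a conventionally programmed computer solves the same toy task with a few lines of breadth-first search. The sketch below does exactly that; the station names are illustrative, and this is the classical hand-written algorithm, not the DNC’s learned method. The point of DeepMind’s result is that the network arrived at this behavior from examples rather than from code like this.

```python
from collections import deque

# A toy unweighted graph standing in for a few London Underground
# stations (adjacencies illustrative, not an accurate Tube map).
graph = {
    "Oxford Circus": ["Bond Street", "Tottenham Court Road", "Green Park"],
    "Bond Street": ["Oxford Circus", "Baker Street"],
    "Tottenham Court Road": ["Oxford Circus", "Holborn"],
    "Green Park": ["Oxford Circus", "Victoria"],
    "Baker Street": ["Bond Street"],
    "Holborn": ["Tottenham Court Road"],
    "Victoria": ["Green Park"],
}

def shortest_path(graph, start, goal):
    """Classic breadth-first search: explores stations in order of
    distance from the start, so the first path reaching the goal
    is a shortest one."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists

route = shortest_path(graph, "Baker Street", "Victoria")
# → ['Baker Street', 'Bond Street', 'Oxford Circus', 'Green Park', 'Victoria']
```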
But make no mistake: The way a DNC goes about doing this is fundamentally different from how AIs have done it in the past. This new form of generalized learning could pave the way for an era of artificial intelligence the likes of which will strain the human imagination.