Way back in 2005, when Will Wright unveiled Spore to an astonished crowd at GDC, there was one particular part of the demo that seemed to generate buzz: procedural dance. Spore uses player input to generate everything from a creature’s walk animation to its mode of social interaction, but it was the ability to take a novel body shape and make it dance that seemed to excite people’s imaginations the most. Yet Spore took the easy road: None of the creatures look like human beings, which means we have no idea what they’re supposed to dance like. Spore basically just defined dancing as rhythmic movement, sometimes around a fire; that’s not a bad definition by any means, but it’s also nowhere near good enough to generate lifelike human dance.
Now, with the introduction of advanced machine learning techniques, computers are starting to do the real thing: learn and generate dance moves for a human skeleton that look realistic and genuinely dance-like to the average human observer. The neural network software lab Peltarion has teamed up with the Lulu Art Group to create the catchily-named “Chor-rnn,” a self-taught, dancing human skeleton. Here’s the final animation, after 48 hours of learning:
Those two days were spent learning from motion-captured interpretive dance performed by human beings in front of depth-sensing Kinect cameras. The five-hour collection (“corpus”) of dance data fed to the neural network determined the overall choreographic style of the output, but the generated motions themselves are novel — new variations in the same style, not copies of the original performances.
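To make the idea concrete, here is a minimal sketch of what “learning from motion capture” looks like as sequence modeling. This is not the authors’ actual architecture or code — the hidden size, the vanilla-RNN structure, and the 25-joint skeleton layout are illustrative assumptions — but it shows the core mechanism: a recurrent network reads a sequence of skeleton poses frame by frame and predicts the next pose, which is how it can eventually generate movement of its own.

```python
import numpy as np

# Illustrative sketch only: a vanilla recurrent network that reads a
# sequence of skeleton poses and predicts the next one. A Kinect-style
# skeleton with 25 joints and (x, y, z) per joint gives a 75-number pose.
N_JOINTS, DIMS = 25, 3
POSE_SIZE = N_JOINTS * DIMS   # 75 values per frame
HIDDEN = 128                  # hidden-state size (an arbitrary choice here)

rng = np.random.default_rng(0)
# Randomly initialized weights stand in for parameters a real system
# would learn from hours of motion-capture data.
W_xh = rng.normal(0, 0.01, (HIDDEN, POSE_SIZE))  # input -> hidden
W_hh = rng.normal(0, 0.01, (HIDDEN, HIDDEN))     # hidden -> hidden (the recurrence)
W_hy = rng.normal(0, 0.01, (POSE_SIZE, HIDDEN))  # hidden -> predicted next pose

def predict_next_pose(pose_sequence):
    """Run the RNN over a (frames, 75) array of poses and return the
    predicted next 75-dimensional pose."""
    h = np.zeros(HIDDEN)
    for pose in pose_sequence:
        # The hidden state carries a summary of the motion seen so far.
        h = np.tanh(W_xh @ pose + W_hh @ h)
    return W_hy @ h

# Ten frames of fake motion data stand in for the real corpus.
sequence = rng.normal(size=(10, POSE_SIZE))
next_pose = predict_next_pose(sequence)
print(next_pose.shape)  # one predicted skeleton pose: (75,)
```

Training would adjust the weights so the predicted pose matches the corpus’s actual next frame; generation then works by feeding each predicted pose back in as input, producing an open-ended dance one frame at a time.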
This has two major implications. One, it allows users to get truly unique generated content, distinct to every artist and the corpus of dance moves they show to the computer; two, it means that the computer made its new dance in roughly the same way a new human student might, by imitating a role model and introducing slight changes to be either reinforced or forgotten.
Below, you’ll see the state of the animation before any real training had been done: it’s a big bundle of lines.
After about six hours of training, however, the network has learned the sorts of movements and joint rotations that give rise to the motions in the corpus — in other words, the bundle of lines is now a person made of lines. A big improvement!
The study’s authors want their work to be a “creativity catalyst” for artists, allowing them to have their own work parroted back to them by an unbiased observer. They envision a future in which artists might be inspired by a computer’s take on their own work, creating new computer-inspired art and feeding that back to the computer for more auto-innovation, and perhaps another round of the cycle. That’s achievable: rather than a Google-scale supercomputer, this team crunched their motion-capture data with four of Nvidia’s Titan X consumer graphics cards, putting the approach within the means of most groups.
Just in general, though, the ability of a computer to even seem to generate true art breaks through a huge conceptual barrier. Linguistic arts contain so much specific, nested meaning that arts like poetry will likely remain locked off to computers for some time. But visual art is often more abstract, allowing blind, computer-style experimentation to produce aesthetically pleasing results.
Quite frankly, a possible outcome of auto-generating slightly innovative art like this may not be to kill the human creation of art, but to kill the human creation of boring art. Once neural networks have removed all need for humans to do obvious, purely iterative progression on past work, perhaps all we’ll have left is true inspiration and genius. That sounds great — but it could also reveal the depressingly low proportion of people who are truly capable of that level of creativity.