Apple has been busy snapping up dozens of AI startups, and its Siri product is one of the most widely used AI applications on the planet, but recent AI headlines have focused on more glamorous efforts, especially from competitors Google and Facebook. Both Google and Facebook have released major AI technologies into open source, and Google’s DeepMind famously tackled the challenge of Go by beating one of the world’s best players. So Apple took the very unusual step of spending an entire day bragging about its AI efforts to journalist Steven Levy, who published a very thorough account on Backchannel.
The article reads more like an inventory than a coherent strategy document: there are long lists of AI technologies, AI-infused Apple products, acquired startups, and key hires. It is therefore a little hard to connect all the dots and discern exactly which technologies are in use in which products, but one clear takeaway is that the neural network renaissance has had the same disruptive effect on Siri (which moved to neural nets in 2014) and other Apple efforts as it has on Google’s speech recognition, Facebook’s facial recognition, and plenty of other fields. Apple says Siri’s accuracy more than doubled when it made the switch, which is fairly consistent with the progress Google made on its “OK Google” voice recognition when it adopted similar technologies a couple of years earlier.
Less obviously, Apple uses AI to power its predictive technology in the iPhone (showing you a reminder for an appointment you didn’t put in your calendar, the phone number for someone not in your contacts list, a map to a hotel before you ask for it, etc.). The “brain” that drives this behavior takes up about 200MB on your phone or tablet, although a lot of the processing power clearly comes from Apple’s massive GPU farm in the cloud.
The technologies covered range from fairly simple machine learning (essentially fancy optimization systems) to so-called deep learning, where many layers of neuron-like software structures are chained together and trained on a large number of examples to draw accurate conclusions, such as recognizing or classifying objects. Deep learning still requires quite a bit of clever design and optimization on the part of its programmers, as distinct from the emerging field of unsupervised learning, where large neural networks attempt to analyze data with little or no guidance.
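To make the "layers of neuron-like structures trained on examples" idea concrete, here is a minimal, illustrative sketch in plain Python. This is a toy, not anything Apple ships: the XOR task, the single hidden layer, the layer sizes, and the learning rate are all assumptions chosen for brevity. It shows the core loop shared by all such systems: run examples forward through the chained layers, measure the error, and nudge each layer's weights to reduce it.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    """Squash a neuron's weighted input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# XOR: a classic task a single neuron cannot learn, but chained layers can.
DATA = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

H = 4  # number of hidden "neurons" (an arbitrary choice for this toy)
# Layer 1: input -> hidden weights and biases; Layer 2: hidden -> output.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """Chain the layers: inputs feed the hidden layer, which feeds the output."""
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def loss():
    """Total squared error over all training examples."""
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

loss_before = loss()

lr = 0.5  # learning rate (also an arbitrary toy setting)
for _ in range(5000):
    for x, t in DATA:
        h, y = forward(x)
        # Backpropagate: push the output error back through each layer,
        # adjusting every weight slightly in the direction that reduces it.
        dy = (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy

loss_after = loss()
```

Production systems like Siri's differ mainly in scale (many more layers and millions of weights, trained on huge datasets across GPU farms), but the train-on-examples loop is the same shape.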
MIT Technology Review published an analysis of Levy’s findings that isn’t nearly as rosy as Levy’s own, somewhat breathless, conclusions. In an article by Will Knight, it pointed out that Apple was still well behind Google and Facebook, both in its timeline for adopting technologies like neural networks and in advances in fundamental research areas such as unsupervised learning. Knight blames Apple’s notoriously secretive culture, and doesn’t see Apple besting its competitors in research unless that changes. As a bright spot, he points to Levy’s comment that Apple is now, finally, finding ways for its researchers to publish some results, a key to attracting top talent.
However you judge Apple’s progress in AI, it is a testament to how important the field has become that Apple has both felt the need to provide unprecedented access to its internal product process to promote its involvement and begun to open up its secretive culture to accelerate its research.