The age of robotic butlers may appear impossibly distant as we gaze, disheartened, at our Roomba bumping stupidly against the staircase. But the first glimmerings of a much different sort of robot helper are already apparent. Like protozoa emerging from the primordial soup, the features that will make up the next generation of home robots are present in the marketplace even now. As they begin to connect and form ever more complex automations, the results promise to be astounding. Hold on to your seat as we careen through the futuristic miasma that is the latest in robotic butlers.
One of the first myths worth dispelling as we embark on this journey is that home robotics is a single field. In fact, the next generation of home robots will be made possible by a portfolio of technologies that have gradually been maturing over the last decade. The mistake many have made is looking for a single technological threshold to be breached, marking the dawn of the robotic age. Instead, the robotic assistant of the future is being made possible through the gradual maturing of at least three different fields in robotics – speech and scene recognition, sensor capabilities, and power electronics. By browsing the latest developments in these arenas, we can catch a glimpse of the kind of robotic butler that will likely be serving us breakfast in the decades to come.
If there is a single technology that is most likely to be pointed to as enabling the dawn of the robotic butler, it’s machine learning. Machine learning is a branch of computer science that includes artificial neural networks – the technology behind Siri and Google’s speech recognition. This is the area that has probably received the biggest investment from heavyweight technology companies like Microsoft, Google, and Facebook. And it’s no surprise since it pertains directly to their business model – which is at the end of the day software rather than hardware. In looking at the advances made in machine learning we can, therefore, discern the “brains” of our robotic butler.
While it may seem like a very curtailed sort of butler, the Amazon Echo speaker and the soon-to-launch Google Home speaker are at the forefront of machine learning as applied to speech recognition and home automation – two of the key components we will look for in a robot butler. At its I/O conference last month, Google announced its latest brainchild, Google Home, a speaker that packs all the advanced AI we have come to expect from Google Now into a small-profile audio device. The speaker will contain far-field “always-on” microphones, poised day or night to respond to our commands, however absurd. (I can’t be the only one asking Google whether it’s better to peel a banana from the top or the bottom.)
The brains behind the speaker will be capable of controlling much of your home automation, including dimming the lights, changing thermostat settings, and unlocking smart door locks. In addition, it will have all of Google Now’s features, now repackaged as Google Assistant, including offering directions, sending text messages, and answering simple knowledge-based queries.
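To make the idea concrete, here is a toy sketch of the dispatch layer such a speaker might run after speech recognition has produced text. Everything here – the `handle_command` function, the device names, the intent tuples – is invented for illustration; it is not Google’s or Amazon’s actual API.

```python
# Toy sketch of a voice-command dispatcher. All device names and
# intents are invented for illustration, not any vendor's real API.

def handle_command(text):
    """Map a recognized utterance to a (device, action) pair."""
    text = text.lower()
    if "dim the lights" in text:
        return ("lights", "dim")
    if "thermostat" in text:
        # Pull out a target temperature if one was spoken.
        digits = [int(tok) for tok in text.split() if tok.isdigit()]
        return ("thermostat", digits[0] if digits else None)
    if "unlock" in text and "door" in text:
        return ("door_lock", "unlock")
    return ("assistant", text)  # fall through to general Q&A

print(handle_command("Please dim the lights"))     # ('lights', 'dim')
print(handle_command("Set the thermostat to 68"))  # ('thermostat', 68)
```

Real assistants replace the brittle keyword matching above with learned intent classifiers, but the overall shape – recognized text in, structured device command out – is the same.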
Though the price of the Google speaker will probably be comparable with the Amazon Echo’s, weighing in at just under two hundred dollars, the cost in terms of privacy will likely be far higher – a permanent eavesdropper lurking within our homes, controlled by one of the world’s largest corporations. But judging by the public’s reception of the Amazon Echo, it’s a tradeoff many people are willing to make.
The other area of machine learning that demands a closer look with regard to robotic butlers is scene recognition. While still in its infancy compared with speech recognition, scene recognition is essential to enabling robots to make sense of their visual surroundings. And it is orders of magnitude more difficult than speech recognition.
The old saw that a picture is worth a thousand words is literally true when it comes to scene recognition. Though we rarely stop to think about it, the amount of information digested by the human visual cortex is several times larger than what arrives through our auditory system. As an example, walk into an evening party, and in one swift glance you can gain more information about the relationships between the people present than a 10-minute spoken description of the same proceedings could convey.
Though we have fewer examples of cutting-edge scene recognition in consumer technology products compared with speech recognition, at least two are already in the wild: the consumer robots Jibo and Zenbo, and the face detection algorithms employed in many digital cameras and smartphones. Both Jibo and Zenbo possess limited scene recognition capabilities. For instance, in its promotional material for Zenbo, Asus demonstrates how the home robot can use its onboard video camera to recognize when an elderly person has fallen and respond by calling an emergency contact.
Meanwhile, many smartphones already ship with face recognition algorithms, a kind of primitive scene recognition that could allow a robot to differentiate between the members of the household in which it “lives,” and to recognize when a new face, perhaps belonging to an intruder, has been detected. For a more detailed breakdown of the latest in scene recognition, refer to ExtremeTech’s previous explorations of this topic.
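The matching step behind that intruder-spotting scenario is simple enough to sketch. Production systems feed a face image through a trained neural network to produce a high-dimensional numeric “embedding,” then compare it against known faces by distance; the tiny 3-D vectors and the distance threshold below are made up purely for illustration.

```python
import math

# Sketch of the matching step in face recognition: compare a face
# "embedding" (a numeric fingerprint from a neural network) against
# a database of household members. The 3-D vectors here are made up;
# real systems use 128-D or larger embeddings from a trained model.

HOUSEHOLD = {
    "alice": [0.1, 0.9, 0.3],
    "bob":   [0.8, 0.2, 0.5],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(embedding, threshold=0.3):
    """Return the closest known face, or 'intruder?' if none is close."""
    name, dist = min(((n, euclidean(embedding, e)) for n, e in HOUSEHOLD.items()),
                     key=lambda p: p[1])
    return name if dist < threshold else "intruder?"

print(identify([0.12, 0.88, 0.31]))  # alice
print(identify([0.5, 0.5, 0.9]))     # intruder?
```

The hard part, of course, is producing embeddings where the same person photographed in different lighting lands close together – that is what the years of neural-network research buy you.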
The other major advancement that will propel robotic butlers to the next level is happening in the domain of sensors. Three-dimensional cameras of the type pioneered by the Microsoft Kinect, along with those in next-generation iRobot Roombas, will allow the robotic butler to sense its surroundings with unparalleled finesse. iRobot is one of the companies pushing the envelope in this regard, as its latest Roomba demonstrates. Using vSLAM technology, a form of visual simultaneous localization and mapping that builds a layout of the environment from camera imagery, the Roomba 980 can traverse a living room in straight lines rather than with its predecessors’ characteristic bumping and wandering. The same technology lets it plot the most efficient vacuuming route, much more closely resembling the way a human would approach the task.
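It’s worth pausing on why a map changes the robot’s behavior so dramatically. Once the environment is reduced to an occupancy grid, straight-line coverage becomes a trivial planning problem rather than a matter of luck. The grid and the boustrophedon (“lawn mower”) sweep below are an invented, simplified sketch – not iRobot’s actual planner.

```python
# Sketch of planning on an occupancy grid (True = obstacle): with a
# map, a vacuum can sweep in straight lines row by row instead of
# bouncing at random. The 4x5 grid is invented for illustration.

GRID = [
    [False, False, False, False, False],
    [False, True,  True,  False, False],
    [False, False, False, False, False],
    [False, False, False, False, False],
]

def coverage_path(grid):
    """Visit every free cell row by row, alternating sweep direction."""
    path = []
    for r, row in enumerate(grid):
        cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
        for c in cols:
            if not grid[r][c]:        # skip obstacle cells
                path.append((r, c))
    return path

path = coverage_path(GRID)
free_cells = sum(not cell for row in GRID for cell in row)
print(len(path) == free_cells)  # True: every free cell covered once
```

A random-bounce robot covers the same floor only probabilistically, revisiting some spots many times; the mapped sweep touches each free cell exactly once.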
The third domain in which technological advancements will reap rewards for robotic butlers is power electronics and actuators. This is a more traditional engineering topic, and for the latest we can turn to an organization that has been tackling the thorniest engineering problems for decades: NASA. While its Valkyrie robot failed spectacularly during the DARPA Robotics Challenge, with regard to the power electronics and actuators that make up what we might think of as the brick and mortar of a robot, Valkyrie represented something of a high-water mark.
In robotics, the versatility of a limb is measured in degrees of freedom – the number of independent single-axis movements its joints allow. In general, the more degrees of freedom, the more physically versatile the robot. NASA’s Valkyrie robot boasted a whopping 44 degrees of freedom, compared with the 28 possessed by its closest rival, the Boston Dynamics Atlas. We should, therefore, look to robots resembling Valkyrie in design when it comes to mimicking the fluid, muscular movements humans exhibit while walking and picking up objects.
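The bookkeeping behind a figure like “44 degrees of freedom” is just joint-by-joint addition: each independent single-axis motion counts once. The joint breakdown below is a simplified, invented humanoid layout for illustration – it is not Valkyrie’s actual specification.

```python
# Degrees of freedom add up joint by joint: each independent
# single-axis motion counts once. This joint layout is a simplified,
# invented humanoid model, not Valkyrie's real spec sheet.

ARM = {"shoulder": 3, "elbow": 1, "wrist": 3}   # a 7-DOF arm
LEG = {"hip": 3, "knee": 1, "ankle": 2}         # a 6-DOF leg

def total_dof(*limbs):
    """Sum the single-axis motions across all joints of all limbs."""
    return sum(sum(joints.values()) for joints in limbs)

# Two arms, two legs, plus a 3-DOF torso/neck in this toy model:
print(total_dof(ARM, ARM, LEG, LEG) + 3)  # 29
```

A ball-and-socket joint like the shoulder contributes three degrees of freedom on its own, which is why arms dominate these counts and why dexterous hands drive the totals even higher.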
Having browsed the major areas of technology germane to robotic butlers, we can now see a dim outline of what the future holds. Imagine a robot possessing the body of NASA’s Valkyrie, the brains and hearing of a Google Home speaker, and the eyes of the Roomba 980. It’s a Frankenstein creation to be sure, and one that few of us could afford or even wish to have snooping about the kitchen. Nevertheless, with Mark Zuckerberg talking a big game about wanting a robotic butler to help him around the house, at least one billionaire is in the market for such a device. And if history teaches us anything, it’s that once a technology enters the possession of the ultra-wealthy, it won’t take long to filter into the aspirations of the common people.