Ever since the invention of the magnifying glass nearly 25 centuries ago, we’ve been using technology to help us see better. For most of us, the fix is fairly simple, such as a pair of glasses or contact lenses. But for the many with more seriously impaired vision — estimated at around 285 million people worldwide — technology was short on answers until fairly recently. Doctors and scientists are making up for lost time, though, with a slew of emerging technologies to help everyone from the mildly colorblind to the completely blind. They’re also part of a wide swath of new medical advances we’ll be covering all this week here at ExtremeTech in our new Medical Tech series.
We’re all familiar with the accessibility options available on our computers, including larger cursors, high-contrast fonts, and magnified screens. But those do nothing to help the vision-impaired navigate the rest of their day. Instead, a number of different “smart glasses” have been invented that help make the rest of the world more accessible.
These glasses work by taking the image from one or more cameras — often including a depth sensor — and processing it to pass along an enhanced version of the scene to a pair of displays in front of the eyes. Deciding on the best way to enhance the image — autofocus, zoom, object outlining, etc. — is an active area of research, as is the best way for the wearer to control those enhancements. Right now these devices tend to require an external box that does the image processing and has knobs for adjusting settings. Emerging technologies such as eye tracking will provide better ways to control them, and improved object recognition algorithms will add to their utility. One day it may be easy for the glasses to know enough to highlight house keys, a wallet, or other commonly needed but sometimes hard-to-locate possessions.
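To make the camera-to-display pipeline concrete, here is a minimal sketch of the kind of enhancement described above — a contrast stretch plus object outlining — written with NumPy. The function name, thresholds, and edge detector are illustrative assumptions on my part; real smart glasses fuse depth data and run far more sophisticated algorithms.

```python
import numpy as np

def enhance_frame(frame: np.ndarray, contrast: float = 2.0) -> np.ndarray:
    """Contrast-stretch a grayscale frame and overlay edge outlines.

    A toy stand-in for the processing smart glasses perform between
    camera and display: boost contrast, then highlight boundaries.
    """
    # Normalize to [0, 1] and stretch contrast about the mean brightness.
    f = frame.astype(np.float64) / 255.0
    stretched = np.clip((f - f.mean()) * contrast + 0.5, 0.0, 1.0)

    # Approximate image gradients with finite differences (crude edges).
    gy, gx = np.gradient(stretched)
    edges = np.hypot(gx, gy)

    # Paint strong edges white so object outlines stand out for the wearer.
    out = stretched.copy()
    out[edges > 0.15] = 1.0
    return (out * 255).astype(np.uint8)

# Synthetic test frame: a dim square on an even darker background.
frame = np.full((64, 64), 40, dtype=np.uint8)
frame[20:44, 20:44] = 90
enhanced = enhance_frame(frame)
```

In a real device this loop would run per-frame with the thresholds set by the wearer — which is exactly the control problem the research mentioned above is trying to solve.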
One of the more clever solutions comes out of Oxford, via Google Global Impact Challenge winner VA-ST. I had a chance to try out VA-ST’s prototype Smart Specs last year, and can see how they could be very helpful for those who otherwise can’t make out the details of a scene. It’s hard, though, to get a real feel for their effectiveness unless you actually suffer from a particular vision impairment. Some work is being done to simulate these conditions so that those with normal vision can evaluate solutions, but until then, willing participants with uncommon vision disorders remain a scarce resource for scientists attempting to run trials of their devices.
Most solutions available today suffer not only from technical issues, like how they are controlled, but also from social ones: they cut off eye contact and look awkward, which has hampered their adoption. Less-obtrusive devices using waveguides, like the ones developed by Israeli startup Lumus, will be needed to overcome this issue. Startup GiveVision is already demoing a version of its vision-assisting wearable that uses Lumus waveguides to make it more effective and less obtrusive. Similar advanced augmented reality display technology is being used in Microsoft’s HoloLens and Magic Leap’s much-rumored device. While it is mostly mainstream AR devices like those that are driving the technology to market, there is no doubt the medical device sector will be quick to take advantage of it.
Other efforts to enhance awareness of the visual world, including EyeMusic, render salient aspects of the scene — such as the distance to the closest object — as audible tones. The OrCam system takes a similar non-visual approach, recognizing text and reading it aloud to the wearer. These systems have the advantage that they don’t require placing anything over the wearer’s eyes, so they don’t interfere with eye contact.
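A distance-to-tone mapping of the sort these systems use can be sketched in a few lines. To be clear, the specific convention here — closer objects produce higher pitches, on a logarithmic scale between two assumed frequencies — is my own illustrative choice, not EyeMusic’s actual algorithm.

```python
import math

def distance_to_tone(distance_m: float,
                     min_d: float = 0.3, max_d: float = 5.0,
                     low_hz: float = 220.0, high_hz: float = 1760.0) -> float:
    """Map distance to the nearest object onto a tone frequency.

    Closer objects produce higher pitches; the mapping is logarithmic
    so equal ratios of distance give equal musical intervals.
    """
    d = min(max(distance_m, min_d), max_d)  # clamp to the usable range
    # t runs from 0.0 at max_d (far away) to 1.0 at min_d (very close).
    t = math.log(max_d / d) / math.log(max_d / min_d)
    return low_hz * (high_hz / low_hz) ** t

# A wall 5 m away hums at the low end; one at arm's length is shrill.
print(round(distance_to_tone(5.0)))   # 220
print(round(distance_to_tone(0.3)))   # 1760
```

A real system would synthesize and play these tones continuously as the depth camera updates, but the mapping itself is the core idea.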
In many blind people — particularly those suffering from retinitis pigmentosa and age-related macular degeneration — the retinal receptors may be missing, but the neurons that carry information from them to the brain are intact. In that case, it is sometimes possible to install a sensor — an artificial retina — that relays signals from a camera directly to the vision neurons. Since the pixels on the sensor (electrodes) don’t line up exactly with where the rods and cones would normally be, the restored vision isn’t directly comparable to what a natural retina provides, but the brain learns to make sense of the input and partial vision is restored.
Retinal implants have been in use for over a decade, but until recently have only provided a very minimal level of vision — equivalent to about 20/1250 — and have needed to be wired to an external camera for input. Now, though, industry leader Retina Implant has introduced a wireless version with 1,500 electrodes on its 3mm-square surface. Amazingly, previously blind patients suffering from retinitis pigmentosa have been able to recognize faces and even read the text on signs. Another wireless approach, based on research from Stanford professor Daniel Palanker’s lab, involves projecting the processed camera data into the eye as near-infrared light — and onto the retinal implants — from a special pair of glasses. The implants then convert that light into the electrical impulses transmitted to the brain’s neurons. The technology is being commercialized by vision tech company Pixium Vision as its PRIMA Bionic Vision Restoration System, and is currently in clinical trials.
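The basic data-reduction problem an implant faces — condensing a megapixel camera feed down to roughly 1,500 electrodes — can be illustrated with a block-averaging sketch. The grid size and number of stimulation levels below are assumptions for illustration; actual devices (particularly subretinal photodiode arrays) work very differently in the details.

```python
import numpy as np

def frame_to_electrodes(frame: np.ndarray, grid: int = 38,
                        levels: int = 16) -> np.ndarray:
    """Downsample a grayscale camera frame to an electrode grid.

    Each electrode gets the average brightness of its image patch,
    quantized to a handful of stimulation levels -- a toy model of
    condensing a camera feed into electrode currents.
    """
    h, w = frame.shape
    # Crop so the frame divides evenly into grid x grid blocks.
    bh, bw = h // grid, w // grid
    cropped = frame[:bh * grid, :bw * grid].astype(np.float64)
    # Block-average: group pixels into patches and average each patch.
    blocks = cropped.reshape(grid, bh, grid, bw).mean(axis=(1, 3))
    # Quantize 0..255 brightness into discrete stimulation levels.
    return np.clip(blocks / 256.0 * levels, 0, levels - 1).astype(np.int64)

frame = np.random.default_rng(0).integers(0, 256, (480, 640), dtype=np.uint8)
stim = frame_to_electrodes(frame)   # 38 x 38 = 1,444 "electrodes"
```

The coarse grid also makes vivid why restored vision starts out so limited — and why the brain has real learning to do with such sparse input.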
While severe vision disorders affect a large number of people, even more suffer from the much more common problem of color blindness. There are many types of color blindness — some caused by missing the cones needed to discriminate one or more of the primary colors. But many people with what is commonly called “red-green colorblindness” simply have cones whose sensitivities are too close together to distinguish between red and green. Startup EnChroma stumbled across the idea of filtering out some of that overlap after noticing that surgeons were often taking their OR glasses to the beach to use as sunglasses. From there, the company worked to tune the effect to assist with color deficiency — the result being less overall light let through its glasses, but a better ability to discriminate between red and green. If you’re curious whether the company’s glasses can help you, it offers an online test of your vision.
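The filtering idea can be demonstrated with a toy model: treat the two overlapping cone sensitivities as Gaussian curves and cut out the band where they overlap most. The peak wavelengths, curve shapes, and notch placement below are all simplifying assumptions of mine, not EnChroma’s actual filter design — the point is only that removing the shared band makes the two cones’ responses less similar.

```python
import numpy as np

wl = np.arange(400, 701)  # visible wavelengths in nm

def cone(peak, width=35.0):
    """Gaussian stand-in for a cone's spectral sensitivity curve."""
    return np.exp(-((wl - peak) ** 2) / (2 * width ** 2))

# In anomalous trichromacy the M and L cone peaks sit unusually close.
m_cone, l_cone = cone(535), cone(550)

def notch(center=542.5, half_width=15.0):
    """Transmission of a filter that blocks the overlap band."""
    return np.where(np.abs(wl - center) < half_width, 0.0, 1.0)

def overlap(filt):
    """Cosine similarity of the two cones' filtered responses (0..1)."""
    a, b = m_cone * filt, l_cone * filt
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

no_filter = np.ones_like(wl, dtype=float)
print(f"overlap unfiltered: {overlap(no_filter):.3f}")
print(f"overlap with notch: {overlap(notch()):.3f}")  # lower similarity
```

Lower similarity means red and green lights drive the two cone types more differently — the same trade the article describes: less total light through the lens, better discrimination.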
There are plenty of limits on what medical technology can currently accomplish for those who are blind or vision-impaired. Fortunately, accessibility technology has also continued to advance. Most of us are familiar with magnified cursors, zoomed-in text, and speech input and output, but more sophisticated tools are available — too many to list here. For example, startup blitab is creating a tablet for the world’s estimated 150 million braille users that features a tactile braille interface as well as speech input and output. On the lighter side, Pixar is developing an application that will provide a narrative description of the screen while viewers watch.
However good your vision, you’re likely to benefit from medical technology for improving it at some point, since the incidence of vision-related conditions increases dramatically with age. Nearly everyone eventually suffers from at least relatively minor conditions like presbyopia (the eye’s diminishing ability to accommodate between near and far focus), and over 25% of those who reach age 80 suffer from major vision impairment. Even for those of us with only minor vision issues, the advent of smartphone apps that help measure our vision and diagnose possible problems will help lower costs. And with the rapid advances in microelectronics, surgical technology, and augmented reality, there are likely to be some amazing treatments for these conditions in the future.