Blind can ‘see’ using sound, Hebrew U team shows
If the eyes don’t work properly, the ears are an appropriate substitute for vision, according to Prof. Amir Amedi
If there’s a credo that Hebrew University’s Prof. Amir Amedi lives by, it’s that blindness is no reason not to see. “The fact that the visual pathways for blind people are blocked doesn’t mean that their brains don’t function,” Amedi said in a recent TED talk in Jerusalem. “Their brains work just fine.”
And for most, so do their other senses – specifically hearing, which, with the help of modern technology and a unique training program, can enable the blind to use sound to “see.” Using a combination of sounds, tones, and noise bursts, blind people can be taught within weeks to identify everyday items, and even concepts like colors, via Google Glass-type devices equipped with a camera that scans items and translates them into a special musical code.
Since 2007, Amedi, of the Edmond and Lily Safra Center for Brain Sciences and the Institute for Medical Research Israel-Canada at the Hebrew University of Jerusalem Faculty of Medicine, has led research that uses sensory substitution devices (SSDs) to deliver visual information in the form of sound, enabling the blind to “see” what they hear. Amedi built his work on that of Dutch researcher Peter Meijer, who in 1992 came up with the idea that sound could substitute for sight. Amedi’s work has been patented by Yissum, Hebrew University’s technology transfer company.
The principles are rather simple once one tries them out, said Amedi. Users are taught to differentiate between “horizontal” and “vertical” sounds (long and short bursts), “curved” sounds, trellises and waffles, pitch changes, and combinations of sounds. A jarring cacophony to those hearing them for the first time, the sounds quickly fall into place – and into a rhythm. At the TED talk, for example, Amedi taught an audience of hundreds to aurally identify words with about five minutes of training.
After they understand the system, users can be equipped with an SSD, such as a miniature camera connected to a small computer (or smartphone) and stereo headphones. The images are converted into “soundscapes” using a predictable algorithm, allowing the user to listen to and then interpret the visual information coming from the camera.
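To make the idea concrete, here is a minimal sketch of how such an image-to-soundscape conversion can work, in the spirit of Meijer’s original left-to-right sweep: the columns of an image are played one after another, each pixel’s row sets the pitch of its tone, and its brightness sets the loudness. The constants, names, and mapping below are illustrative assumptions for the sketch, not Amedi’s actual algorithm.

```python
import numpy as np

SAMPLE_RATE = 22050             # audio samples per second (illustrative)
SWEEP_SECONDS = 2.0             # time to sweep one image left to right (assumed)
F_LOW, F_HIGH = 200.0, 2000.0   # pitch range for bottom/top rows (assumed)

def image_to_soundscape(image):
    """Convert a 2-D grayscale image (values 0..1, row 0 at the top)
    into a mono soundscape: columns are played left to right, each
    pixel's row sets its pitch and its brightness sets its loudness."""
    rows, cols = image.shape
    # One sinusoid per row; rows nearer the top get higher frequencies.
    freqs = np.linspace(F_HIGH, F_LOW, rows)
    samples_per_col = int(SAMPLE_RATE * SWEEP_SECONDS / cols)
    t = np.arange(samples_per_col) / SAMPLE_RATE
    chunks = []
    for c in range(cols):
        # Sum the row sinusoids for this column, weighted by brightness.
        column = image[:, c][:, None]                   # shape (rows, 1)
        tones = np.sin(2 * np.pi * freqs[:, None] * t)  # shape (rows, samples)
        chunks.append((column * tones).sum(axis=0))
    audio = np.concatenate(chunks)
    # Normalize to -1..1 so the result could be written out as a WAV file.
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio

# A toy "image": a bright line from bottom-left to top-right.
demo = np.zeros((16, 16))
np.fill_diagonal(np.fliplr(demo), 1.0)  # sets demo[i, 15 - i] = 1
print(image_to_soundscape(demo).shape)
```

Played back, the toy diagonal in this sketch would be heard as a single rising pitch sweep – essentially the kind of cue users learn to read as a shape.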
Amedi’s commercial product based on the system is called EyeMusic. Amedi has been conducting tests with blind and blindfolded sighted users of EyeMusic who, after training, are able to correctly perceive and interact with objects – recognizing different shapes and colors, for example, or reaching for a beverage. Several, in fact, have gotten to the point where they can pick up on features of the people around them, such as whether a person they are speaking to is slouching. The results of the tests have been written up in recent articles in several scientific journals, including Restorative Neurology and Neuroscience and Scientific Reports. Another report, in the journal Current Biology, described how the system was used to let blind trainees “see” the body shapes and sizes of people they were interacting with.
To demonstrate the system to the public at large, and to introduce the concept of sound training to those who want to learn it, Amedi’s lab developed an iPhone app, also called EyeMusic, which teaches the basic sounds and pitches. The app also teaches how to identify colors and shapes using different instruments – and even variations in color and shape within an image or object – by mixing pitches, tones, instruments, and the order of the sounds.
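For color, in other words, the system assigns timbres rather than only pitches. As a hedged illustration of the general idea – the actual color-to-instrument pairings EyeMusic uses are not reproduced here, and the palette and instrument names below are placeholders – a pixel’s color can be snapped to the nearest entry in a small named palette, with each named color tagged with its own instrument:

```python
# Placeholder pairings: which instrument EyeMusic actually assigns to
# each color is not specified here.
COLOR_TO_INSTRUMENT = {
    "white": "choir",
    "red": "organ",
    "green": "reed",
    "blue": "brass",
    "yellow": "strings",
}

PALETTE = {
    "white": (255, 255, 255),
    "red": (255, 0, 0),
    "green": (0, 255, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
}

def classify_color(r, g, b):
    """Snap an RGB pixel (0..255 per channel) to the nearest named color
    by squared distance in RGB space."""
    return min(
        PALETTE,
        key=lambda name: sum((x - y) ** 2 for x, y in zip((r, g, b), PALETTE[name])),
    )

name = classify_color(240, 10, 30)
print(name, "->", COLOR_TO_INSTRUMENT[name])  # prints: red -> organ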
The app represents a great leap forward in the development of Amedi’s system. If anything has held up widespread adoption until now, it’s been the high cost of the SSDs that act as artificial eyes to detect and scan objects. With more powerful smartphones equipped with higher resolution cameras, however, it will be easier to develop apps and input devices that can be widely distributed, Hebrew University said.
Amedi’s research, said Hebrew University, shows that the long-held conception of the cortex as divided into separate vision-processing areas, auditory areas, and so on is incorrect. New findings show that many brain areas are characterized instead by the computational task they perform, and can be activated by senses other than the one commonly used for that task – even in people who were never exposed to the “original” sensory information at all, such as people who were born blind.
“The human brain is more flexible than we thought,” said Amedi. “These results give a lot of hope for the successful regaining of visual functions using cheap non-invasive SSDs or other invasive sight restoration approaches. They suggest that in the blind, brain areas have the potential to be ‘awakened’ to processing visual properties and tasks even after years or maybe even lifelong blindness, if the proper technologies and training approaches are used.”