
Jewish Journal

An app that brings music to our eyes

by Jared Sichel

June 25, 2014 | 12:59 pm

Amir Amedi, a neurologist at The Hebrew University of Jerusalem, is certainly happy about his team’s recent breakthroughs, but that’s not why he was doodling on a notepad during a recent interview.

He was trying to demonstrate how his work could revolutionize the way neurologists understand the human brain, explaining how blind people really can see — by using sound, which, according to Amedi, can be processed by the brain’s “visual” area. 

That’s where the smiley face came in.

Sitting in the lounge of the Luxe Rodeo Drive Hotel in Beverly Hills, he drew two dots for eyes, and, as he drew the smile, made sounds that corresponded with each part he was drawing.

“Dee-da-da” — high to low.

“Da-dee-dee” — low to high.

“See?” Amedi said. “That’s a smile.”

He was no longer referring to his drawing, but to the sounds he was making, dipping and rising with the inverted arc to create an aural version of a smile. No vision required.
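The trick behind that aural smile can be sketched in a few lines of code: scan the shape from left to right and let its height at each moment set the pitch. The Python below is a hypothetical illustration of the principle, not the lab’s actual algorithm.

```python
# A minimal sketch (an illustration, not the lab's actual algorithm) of the
# "aural smile": scan a curve from left to right and map height to pitch.
import math

def sonify_curve(heights, low_hz=220.0, high_hz=880.0):
    """Map heights in [0, 1] (0 = bottom, 1 = top) to frequencies in Hz."""
    return [low_hz * (high_hz / low_hz) ** h for h in heights]

# A smile dips and then rises: high -> low -> high,
# matching Amedi's "dee-da-da ... da-dee-dee."
smile_heights = [1.0 - math.sin(math.pi * i / 10) for i in range(11)]
print([round(f) for f in sonify_curve(smile_heights)])
```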

The ability of all humans, whether or not they can see, to understand the language of sound as representing visual objects is evidence, Amedi said, that the brain is not a “sensory machine” in which a functioning visual cortex depends on eyes that work. Instead, he called it a “task machine,” meaning that even when there is no eyesight, the visual cortex is still useful.

In Los Angeles in April, on a short trip sponsored by American Friends of The Hebrew University, Amedi said one proof of this is that when blind people, including those who could not see from an early age, perform nonvisual tasks, such as reading Braille or hearing sounds, the activity in the part of their brains reserved for processing visual information nearly matches that of sighted people performing visual activities.

“This is very radical,” Amedi said excitedly. “If you train them to read using touch with Braille, they recruit the same visual system [the visual cortex].”

As Amedi put it, we may eventually no longer understand the visual cortex as purely visual, but rather as a part of the brain that can translate stimuli, like sounds, into words and shapes and forms.

Using sounds as tools for visual perception — a concept known as sonification — Amedi already has helped hundreds of blind people “see” by training them in his lab’s 70-hour “boot camp.” It uses “sensory substitution devices,” the technical term for instruments that help people use one sense — hearing, for example — to take in information normally delivered by another, such as sight. Amedi said that after just 30 minutes of instruction, a blind person can correctly identify any series of shapes as represented through sounds.

In one video provided by his team, a man named Oded, one of Amedi’s blind patients who had received the training, wore headphones and rested his head on his hands as he listened carefully to computer-generated sounds. As a cartoon image of “Nicolas” — white male, black eyes, yellow goatee and a comb-over on his receding hairline — popped up on the top right corner of the screen, various sounds, including “dee” and “da,” were played.

A female researcher asked him to identify the name of the person whose facial features were conveyed by the sounds coming out of his headphones. Oded, speaking in Hebrew, correctly identified the person as Nicolas. He said he knew it was Nicolas because those computer-generated sounds indicated someone with those attributes. 

But visualizing computer-generated sounds of static objects, like an apple resting on a table, or a face, is a long way from being able to do the same in a dynamic environment, with people coming and going and doors opening and closing. Bringing sensory substitution technology and research into the world, so that blind people can more fully appreciate their environments, is another thing entirely. 

That’s why Amedi’s lab created EyeMusic, a free app available in Apple’s App Store that, using an algorithm, translates the image or video captured by an iPhone’s camera into notes that, after enough training, can be understood by the user. About 3,000 people have downloaded the app so far.

Using it, a blind hiker holding a phone out in front of him may, with extensive training, be able to form a mental image of the bed of red roses he will walk into unless he turns left to get back onto the dirt trail, which is covered in green leaves fallen from overhanging branches.

Musical instruments such as pianos, trumpets and violins represent colors. Notes of higher pitch signify pixels that appear higher on the iPhone screen; those of lower pitch refer to pixels lower on the screen. To complete the mental picture, notes that sound earlier indicate objects farther to the left.
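That three-part mapping (timbre for color, pitch for height, time for horizontal position) can be sketched in code. The following Python is a hypothetical illustration of the scheme the article describes; the specific instruments, frequencies and timings are assumptions, not EyeMusic’s actual parameters.

```python
# A hedged sketch of the kind of image-to-sound mapping the article describes.
# The real EyeMusic algorithm is more sophisticated; these instrument and
# pitch choices are illustrative assumptions, not the app's actual parameters.

# Assumed color-to-instrument table (the article names pianos, trumpets, violins).
INSTRUMENTS = {"red": "trumpet", "green": "piano", "blue": "violin"}

def image_to_notes(pixels, column_seconds=0.1, low_hz=220.0, high_hz=880.0):
    """Convert a small grid of color names into (onset_time, frequency, instrument).

    pixels[row][col]: rows run top-to-bottom, columns left-to-right.
    Left columns sound first; higher rows get higher pitches.
    """
    notes = []
    n_rows = len(pixels)
    for col in range(len(pixels[0])):
        onset = col * column_seconds           # left-to-right sweep over time
        for row in range(n_rows):
            color = pixels[row][col]
            if color is None:                  # empty pixel: silence
                continue
            height = 1.0 - row / (n_rows - 1)  # top row = 1.0, bottom row = 0.0
            freq = low_hz * (high_hz / low_hz) ** height
            notes.append((onset, round(freq, 1), INSTRUMENTS[color]))
    return notes

# A red dot above a green line: the dot sounds as a high trumpet note at t = 0.1 s.
grid = [
    [None,    "red",   None   ],
    [None,    None,    None   ],
    ["green", "green", "green"],
]
for note in image_to_notes(grid):
    print(note)
```

Played back left to right, such a sweep would let a trained listener reconstruct where each colored region sits in the frame.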

Using only sound, a trained user can pick a red grape out of a bowl of green ones, or tell Mountain Dew from Pepsi.

Amedi said that his technology could, one day, be used by military and law enforcement during operations. Pulling out of his backpack a simple infrared heat sensor, he imagined aloud how Jack Bauer, the lead character on the TV series “24,” could pair it with this research to make out human shapes beyond a wall or inside a room.

“Let’s say I’m walking in this corridor. I’m Jack Bauer, and I’m looking for the bad guys in this hotel, and I don’t want to [enter] all the rooms,” Amedi said. “I can use my vision to look around, but at the same time, using headphones [and] information from a sensor of infrared, [I can] use our algorithm to give you the picture.”

Of course, Amedi also understands the limits of using sensory substitution devices. He’s quick to point out that while sighted people can identify objects and recognize subtle changes in an instant, blind patients who have mastered his technology still may take one second or more to identify a scene with different colors and shapes. That gap may seem minuscule, but it can be the difference between life and death when it comes to crossing a street. 

“For us it’s like this,” Amedi said, snapping his fingers. “We can judge tiny changes very fast.”

But that didn’t stop him from smiling as one of his patients in a video, a woman, correctly identified the computer-generated image hidden in the sounds she was hearing — a man standing upright, legs wide apart and arms spread out. 

“This,” he said, “is good enough for now.”

For video demonstrations of Amir Amedi’s blindness research, visit brain.huji.ac.il/site.
