
Reinventing hearing aids with deep learning

Illustration of a digital hearing aid behind the ear with a sound wave drawn in the background.

Inspired by his mother’s struggle to hear conversations at the dinner table, Computer Science and Engineering Professor DeLiang Wang has been working for two decades to help the hearing impaired understand speech in noisy environments.

Thanks to recent breakthroughs his team has achieved, Wang believes their new technology could be incorporated into next-generation hearing aids within five years.

Hearing loss affects 37.5 million Americans, estimates the National Institute on Deafness and Other Communication Disorders, but only one in five people who would benefit from a hearing aid use one.

Wang’s mother began to lose her hearing while he was away at college. Soon, it was difficult for her to hold a conversation if more than one person spoke at a time. Even with hearing aids, she still struggles to distinguish the sound of each voice.

This phenomenon is called the cocktail party problem. While the human auditory system can easily pick out a voice in a crowded room, hearing aids are unable to distinguish speech in noisy environments, like those with multiple talkers or other background noise.

“Interference really reduces human speech recognition for people with hearing impairment,” Wang explained, even for those wearing hearing aids. “The device cranks up the volume on both [speech and noise], creating an incoherent din.”

Wang and his team have made great progress toward solving this problem, which has perplexed scientists and engineers for decades.

In 2001, his team was the first to create a digital filter that effectively labels sounds as either speech or noise. In 2012, the researchers began developing a machine-learning program that runs on a deep neural network—one made up of many layers—to separate and remove interference from the speech signal.
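The labeling idea can be illustrated with the "ideal binary mask" concept from the computational auditory scene analysis literature: each time-frequency unit of a noisy spectrogram is marked as speech-dominant or noise-dominant, and noise-dominant units are discarded. The following is a minimal sketch using synthetic data in NumPy, not the team's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic magnitude spectrograms: rows = frequency bins, cols = time frames.
# In a real system these would come from a short-time Fourier transform.
speech = rng.gamma(shape=2.0, scale=1.0, size=(64, 100))
noise = rng.gamma(shape=2.0, scale=0.5, size=(64, 100))
mixture = speech + noise  # additive mixing of magnitudes (a simplification)

# Ideal binary mask: 1 where speech energy exceeds noise energy, else 0.
# Computable only when the clean speech is known, so it serves as a
# training target rather than something available at listening time.
ibm = (speech > noise).astype(float)

# Applying the mask keeps speech-dominant units and silences the rest.
separated = mixture * ibm
```

In practice the mask must be estimated from the noisy mixture alone, which is exactly the role the deep neural network plays.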

“This algorithm was the first, of any technique relying on monaural techniques, to achieve major improvements in hearing-impaired listeners’ ability to make sense of spoken phrases obscured by noise,” Wang wrote in an IEEE Spectrum cover article.

Their deep learning approach involves first training the system using large amounts of data.

DeLiang Wang

“In the beginning you teach very, very detailed output—here’s the input, here’s what the output should be,” Wang said. “After a while the machine is able to figure out the underlying patterns.”

By the end of the supervised training, the Buckeyes’ filter proved to be far superior to earlier methods at separating speech from noise. It not only amplifies sound, but can also isolate speech from background noise and automatically adjust the volumes of each separately.
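The supervised setup Wang describes, paired examples of noisy input and desired output, can be sketched as a toy: train a single sigmoid unit to predict a soft time-frequency mask from acoustic features. This is a hedged illustration under simplified assumptions (synthetic features, one unit instead of the many-layer network the team actually uses):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy supervised data: each row of X is a feature vector for one
# time-frequency unit; each target is an ideal ratio mask in [0, 1]
# (the fraction of energy belonging to speech). Both are synthetic.
n, d = 2000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
targets = 1.0 / (1.0 + np.exp(-(X @ w_true)))  # pretend ground-truth masks

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One sigmoid unit trained by gradient descent on mean squared error,
# standing in for the deep network described in the article.
w = np.zeros(d)
lr = 1.0
losses = []
for _ in range(500):
    pred = sigmoid(X @ w)
    err = pred - targets
    losses.append(float(np.mean(err ** 2)))
    grad = X.T @ (err * pred * (1 - pred)) * (2.0 / n)
    w -= lr * grad
```

The loss falls steadily as the unit learns the input-to-output mapping from the labeled pairs; scaling this idea up to many layers and many training hours is what lets the real system generalize to noises it has never heard.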

“We believe this approach can ultimately restore a hearing-impaired person’s comprehension to match—or even exceed—that of someone with normal hearing,” Wang added.

When researchers tested their filter with both hearing-impaired people and those with normal hearing, both groups showed major improvement in comprehension after sentences were processed through Wang’s program.

The algorithm was especially effective in clarifying words muddled by simultaneous speakers, improving hearing-impaired listeners’ comprehension from 29 percent to 84 percent. For phrases obscured by background noise, recognition improved from 36 percent to 82 percent.

Even people with normal hearing benefitted from the technology. Their comprehension of words spoken amid steady background noise rose from 37 percent to 80 percent, and from 42 percent to 78 percent with simultaneous speakers.

Supported by a new five-year, $1.5 million NIH grant, the researchers are now working to enhance their technology to handle more realistic conditions, including background noise and competing speech in reverberant, or echoing, spaces, as well as combating both at once.

Even in these more complex situations, Wang’s technology is proving successful.

“We already made another breakthrough, which is that we actually have tested subjects on speech signals with both background noise and room reverberation and we see a very clear improvement of the speech recognition rate,” he explained.

The new technology requires more computing power than hearing aids currently possess, due to size and battery limitations. But with the recent trend of incorporating hearing aid technology into smart phones, Wang feels this won’t be an issue for much longer.

“That will remove a lot of limitations with current hearing aids,” he said. “The computing power of a smartphone is tens or dozens of times that of a typical hearing aid.” 

Wang envisions next-generation hearing aids that pair with a smartphone, which handles the processing before wirelessly transmitting the enhanced signal to a small in-ear device that simply functions as a receiver, much like wireless headphones do today.

While Wang’s primary focus is improving life for the hearing-impaired, his technology also has other telecommunication applications, such as improving speech recognition by machines and alleviating background interference during phone and video calls.

“Our technology is being exploited and utilized for such applications,” he said.

After so many years of work, Wang is excited to see rapid progress toward his goal of improving the experience for hearing aid users.

“The ultimate goal is to remove this handicap,” Wang said. “We want the hearing-impaired to perform as well as people with no hearing impairment in adverse acoustic environments.”

That’s something Wang’s mom will be overjoyed to hear.

by Candi Clevenger, College of Engineering Communications
