Speech Neuroprosthesis Turns Brain Signals Into Words On A Screen
Researchers from UC San Francisco have developed a speech neuroprosthesis that enables a man with severe paralysis to communicate in complete sentences. The device translates signals sent from the brain to the vocal tract directly into words that appear as text on a screen. The breakthrough was developed in collaboration with a participant in a clinical research trial and builds on more than a decade of effort.
UCSF neurosurgeon Edward Chang, MD, says that to his knowledge this is the first successful demonstration of directly decoding full words from the brain activity of a paralyzed person who cannot speak. The result shows promise for restoring communication by tapping into the brain's natural speech machinery. Losing the ability to speak is unfortunately not uncommon, following stroke, accident, or disease.
Being unable to communicate takes a significant toll on a person's health and well-being. While most studies in the field of communication neuroprosthetics focus on restoring communication through spelling-based approaches, in which letters are typed one by one to form text, the new study takes a different path. Chang and his team are translating the signals intended to control the muscles of the vocal system for speaking words, rather than signals intended to move an arm or hand to enable typing.
Chang says his team's approach taps into the natural and fluid aspects of speech and promises more rapid and organic communication. During natural speech, people communicate at rates of up to 150 to 200 words per minute. No spelling-based method comes close to that speed, making that form of communication considerably slower.
By capturing the brain signals and going straight to words, the approach is much closer to how we normally speak. Chang has been working toward this speech neuroprosthesis for the past decade. He progressed toward the goal with the help of patients at the UCSF Epilepsy Center who underwent neurosurgery with electrode arrays placed on the surface of their brains to pinpoint the origins of their seizures. All of those patients had normal speech and volunteered to have their brain recordings analyzed for speech-related activity.
Chang and colleagues mapped the cortical activity patterns associated with the vocal tract movements that produce each consonant and vowel. They then translated those findings into recognition of full words, developing methods for real-time decoding of the patterns along with statistical language models to improve accuracy. The first patient in the trial was a man in his late 30s who suffered a brain stem stroke more than 15 years ago that damaged the connection between his brain and his vocal tract and limbs.
As a result, he has extremely limited head, neck, and limb movements and communicates by using a pointer attached to a baseball cap to poke letters on a screen. The patient worked with Chang and his team to build a 50-word vocabulary that the system could recognize from his brain activity, which was sufficient to create hundreds of sentences expressing concepts applicable to his daily life.
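The article does not describe the decoding pipeline in detail, but the general idea of combining a small fixed vocabulary, per-word classifier outputs, and a statistical language model can be illustrated. The Python below is a minimal sketch: the vocabulary words, bigram probabilities, and classifier scores are hypothetical placeholders invented for illustration, not the team's actual data or method.

```python
# Hypothetical subset of a 50-word vocabulary (illustrative only).
VOCAB = ["i", "am", "thirsty", "good", "hello", "nurse", "family", "water"]

# Hypothetical bigram language-model probabilities P(word | previous word).
# In a real system these would be estimated from a text corpus.
BIGRAM = {
    ("<s>", "i"): 0.4, ("<s>", "hello"): 0.3,
    ("i", "am"): 0.6,
    ("am", "thirsty"): 0.5, ("am", "good"): 0.3,
    ("hello", "nurse"): 0.5,
}
FLOOR = 1e-3  # smoothing probability for unseen bigrams


def decode_sentence(word_likelihoods):
    """Greedily decode a word sequence.

    `word_likelihoods` is a list of dicts, one per attempted word,
    mapping vocabulary words to the (hypothetical) probability that a
    neural classifier assigned them. Each step rescores the classifier
    output with the bigram language model, given the previous word.
    """
    sentence, prev = [], "<s>"
    for probs in word_likelihoods:
        scores = {
            w: probs.get(w, FLOOR) * BIGRAM.get((prev, w), FLOOR)
            for w in VOCAB
        }
        best = max(scores, key=scores.get)
        sentence.append(best)
        prev = best
    return sentence


if __name__ == "__main__":
    # At the third word the classifier slightly prefers "good",
    # but the language-model prior tips the decision toward "thirsty".
    classifier_output = [
        {"i": 0.7, "hello": 0.2},
        {"am": 0.8},
        {"good": 0.45, "thirsty": 0.40},
    ]
    print(" ".join(decode_sentence(classifier_output)))  # -> "i am thirsty"
```

The point of the sketch is the interplay the article describes: even when the neural classifier is uncertain between similar words, a language model over plausible sentences can push the decoder toward the intended word.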