Paralyzed Man Can Send Messages from His Brain Via Implant

New technology developed by University of California San Francisco scientists has achieved what was long thought impossible, allowing a man who lost his ability to speak years ago to relay messages directly from his brain.

(TNS) — UC San Francisco scientists have restored communication to a paralyzed patient, using a computer to send messages straight from his brain.

The extraordinary achievement is a step toward the day when implantable prostheses could help people who have lost the ability to speak due to stroke, spinal cord injury or neurodegenerative disease.

“This trial tells us that, yes, we can restore words to someone who’s lost speech,” said UCSF neurosurgeon Dr. Edward Chang, who led the study. “It’s the very beginning, but it definitely tells us that this is possible.”

The project is thought to be the first successful demonstration of direct decoding of full words from brain activity. Known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice), the study is published in Wednesday’s issue of The New England Journal of Medicine.

Just as the brain sends signals to move an arm or leg, it sends signals to the vocal cords to produce a sound. But people with vocal paralysis can’t control those muscles. Their brains prepare messages for delivery, but those messages are trapped.

The scientists have tapped into this system, placing a flexible pad of electrodes onto the part of the brain that controls these vocal muscles. Software then decodes those signals into words, which are displayed on a screen.
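
In rough outline, that decoding step is a pattern-matching problem: each attempted word produces a characteristic pattern of activity across the electrode array, and software matches new activity against known patterns. The Python sketch below only illustrates that idea and is not the team’s pipeline; the 128-channel dimension matches the study, but the simulated recordings, the per-word templates and the nearest-template classifier are invented stand-ins.

```python
# Toy illustration of decoding attempted words from electrode activity.
# Everything here is simulated; a real system learns word patterns from
# recorded cortical data rather than from random templates.
import numpy as np

rng = np.random.default_rng(0)
N_ELECTRODES = 128  # matches the array size in the study
VOCAB = ["good", "water", "music", "family", "computer"]

# Hypothetical per-word activity patterns that training would normally learn.
templates = {word: rng.normal(size=N_ELECTRODES) for word in VOCAB}

def decode_window(features):
    """Return the vocabulary word whose template best matches the features."""
    return min(VOCAB, key=lambda w: np.linalg.norm(features - templates[w]))

# Simulate one attempt to say "water": its pattern plus recording noise.
attempt = templates["water"] + 0.3 * rng.normal(size=N_ELECTRODES)
print(decode_window(attempt))  # -> "water"
```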

When he was asked “How are you today?” and “Would you like some water?” his answers appeared on a computer screen.

“I am very good,” he said. “No, I am not thirsty.”

The volunteer is a man in his late 30s who suffered a devastating stroke after surgery for injuries from a car crash 15 years ago. Since his injury, he has had extremely limited head, neck and limb movement. He uses an electric wheelchair and typically communicates by using a pointer attached to a baseball cap to poke letters on a screen.

The 128 electrodes sit gently on the surface of his brain without penetrating the tissue. This approach is safe and has been used for years to monitor seizure activity in patients with epilepsy.

The system can translate his words from brain activity at a rate of up to 18 words per minute, according to the journal report. Normal speech runs about 150 to 200 words per minute.

While much slower than ordinary speech, that is faster than other approaches in the field of communication neuroprosthetics, which restore communication by using eye movements or muscle twitches to type out letters one at a time.

Faster decoding is possible, according to Chang.

The decoded speech was up to 93% accurate, aided by “auto-correct” software similar to what is used in texting.
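
The report does not describe that software in detail, but a minimal texting-style corrector can be sketched: snap each decoded word to the closest entry in the allowed vocabulary. The version below uses only Python’s standard library; the word list and similarity cutoff are illustrative assumptions, not details from the study.

```python
# Hypothetical vocabulary-constrained auto-correct: snap a garbled
# decoder output to the closest word in the allowed list.
from difflib import get_close_matches

VOCAB = ["good", "water", "music", "family", "computer", "thirsty"]

def autocorrect(word, cutoff=0.6):
    """Return the nearest vocabulary word, or the input if nothing is close."""
    matches = get_close_matches(word, VOCAB, n=1, cutoff=cutoff)
    return matches[0] if matches else word

print(autocorrect("watter"))   # -> "water"
print(autocorrect("familly"))  # -> "family"
```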

According to Chang, “In many cases, information needed to produce fluent speech is still there in their brains. We just need the technology to allow them to express it.”

“With this pioneering demonstration of how a person can generate text just by attempting to speak, efforts to restore neurologic function for persons with amyotrophic lateral sclerosis, cerebral palsy, stroke, or other disorders move closer toward clinical benefit,” wrote neurologists Dr. Leigh Hochberg of Massachusetts General Hospital and Dr. Sydney Cash of Harvard Medical School in an accompanying editorial.

“Ultimately, success will be marked by how readily our patients can share their thoughts with all of us.”

The work builds on innovations in several fields.

For years, Chang’s lab focused on fundamental questions about how brain circuits interpret and produce speech, and specifically on how the brain controls the vocal tract to coordinate the lips, jaw, tongue and larynx.

“We knew enough to ask a very basic question: If we now know how speech works when people are speaking normally, could we use that information to help someone who has lost the ability to speak after they’ve become paralyzed?” said Chang.

Then, with colleagues in the UCSF Weill Institute for Neurosciences, the team learned to listen for the firing of brain cells as they told the vocal organs to move.

The team recorded this brain data while volunteers with normal speech, who had a small patch of tiny recording electrodes temporarily placed on the surface of their brains, answered simple questions.

Then they made a map of the pattern of brain signaling.

To reconstruct words or word sounds from the brain signals, postdoctoral engineer David Moses developed a set of machine-learning algorithms equipped with speech models. Statistical language models improved accuracy.
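
The paper’s algorithms are not reproduced in this article, but the role of a statistical language model can be sketched in general terms: the neural classifier proposes a probability for each vocabulary word at each step, a language model scores how plausible each word-to-word transition is, and a Viterbi search picks the sequence that best satisfies both. In the toy example below, every number (the classifier outputs, the bigram weights, the five-word vocabulary) is invented for illustration.

```python
# Toy Viterbi decoder combining invented classifier outputs with an
# invented bigram language model; not the study's actual algorithms.
import numpy as np

VOCAB = ["i", "am", "not", "thirsty", "good"]

# Per-attempt classifier outputs, roughly P(word | brain activity).
emissions = np.array([
    [0.5, 0.2, 0.1, 0.1, 0.1],    # probably "i"
    [0.2, 0.4, 0.2, 0.1, 0.1],    # probably "am"
    [0.1, 0.1, 0.25, 0.35, 0.2],  # ambiguous; greedy pick would be "thirsty"
    [0.1, 0.1, 0.2, 0.4, 0.2],    # probably "thirsty"
])

# Bigram weights favoring plausible transitions (unnormalized, for brevity).
bigram = np.full((5, 5), 0.05)
bigram[0, 1] = 0.8  # "i" -> "am"
bigram[1, 2] = 0.8  # "am" -> "not"
bigram[2, 3] = 0.8  # "not" -> "thirsty"

def viterbi(emissions, bigram):
    """Find the word sequence maximizing classifier plus language-model scores."""
    T, V = emissions.shape
    score = np.log(emissions[0])        # best log-score ending in each word
    back = np.zeros((T, V), dtype=int)  # best predecessor for each word
    for t in range(1, T):
        cand = score[:, None] + np.log(bigram) + np.log(emissions[t])[None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]        # backtrace from the best final word
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [VOCAB[i] for i in reversed(path)]

print(" ".join(viterbi(emissions, bigram)))  # -> "i am not thirsty"
```

A greedy word-by-word readout of these same classifier outputs would produce “i am thirsty thirsty”; the language model is what recovers “i am not thirsty,” which is the sense in which such models improve accuracy.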

In the new BRAVO study, they connected the electrodes to a computer by a cable attached to a port in the patient’s head.

Over a year and a half, they asked him to create a 50-word vocabulary, including “good,” “water,” “music,” “family” and “computer.” These words were enough to build hundreds of sentences about his daily life.

This approach is not reading people’s minds.

“Internal thoughts are really complex ... they’re distributed throughout the brain,” said Chang. “They’re not in one particular part of the brain, and we’re far from understanding how that works.”

Instead, it’s decoding what they are trying to say out loud. “Those are the signals that are disconnected from the vocal tract, either through a stroke or other kind of brain injury,” said Chang.

It is not yet known if this approach would be clinically practical for many people. It has been used only in a single individual, so it might not work as well in others.

There is still more work to be done to improve this approach, according to the team. They plan to build systems that have higher data resolution to record more information, more quickly, from the brain. They want to expand vocabularies. And they dream of creating systems that can translate these very complex brain signals into spoken words — not just text.

“What this means,” said Chang, “is that people who have been suffering because they’re not able to communicate with people they love, not able to communicate with their caretakers about their basic needs, will be able to essentially express some of those feelings or emotions.”

©2021 Palo Alto Daily News. Distributed by Tribune Content Agency, LLC.