Teaching the brains of paralyzed people to communicate

By Michael Cook

One of the most terrifying fates imaginable is to become paralyzed and unable to speak. Fortunately, scientists are gradually learning how to interpret signals from the brain and translate them into letters, words, and eventually speech.

In a milestone for brain-computer interfaces (BCI), researchers at UC San Francisco and UC Berkeley have shown that a speech-controlled BCI can be used to spell out intended sentences from a large vocabulary in real time with 94% accuracy.

The findings, published in Nature Communications, expand on previous work from a clinical trial led by UCSF neurosurgeon Edward Chang which demonstrated that it was possible to decode full words and sentences directly from neural signals sent from the brain to the vocal tract.

In the previous work, a high-density electrocorticography array was implanted over the sensorimotor cortex of a man who had suffered a severe brainstem stroke and subsequently lost his ability to produce intelligible speech.

In that study, Chang and his team developed a computer algorithm to decode neural signals corresponding to a vocabulary of 50 words, which could translate the signals into text on a screen as the man attempted to say the words out loud.

In the new study, the same man attempted to silently spell words using the NATO phonetic alphabet (Alfa for A, Bravo for B, and so on). The researchers chose the NATO code words after discovering that they produced a stronger signal, and greater decoding accuracy, than attempts to say the letters themselves.
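As a rough illustration only (not the study's actual decoder, which classifies neural signals directly), spelling through code words amounts to recognizing each silently attempted code word and mapping it back to its letter. A minimal sketch, assuming a hypothetical decoder has already produced a sequence of code words:

```python
# Illustrative sketch: map decoded NATO code words back to letters.
# The real system classifies cortical activity; here we assume a
# hypothetical upstream decoder has already emitted the code words.

NATO = {
    "alfa": "a", "bravo": "b", "charlie": "c", "delta": "d", "echo": "e",
    "foxtrot": "f", "golf": "g", "hotel": "h", "india": "i", "juliett": "j",
    "kilo": "k", "lima": "l", "mike": "m", "november": "n", "oscar": "o",
    "papa": "p", "quebec": "q", "romeo": "r", "sierra": "s", "tango": "t",
    "uniform": "u", "victor": "v", "whiskey": "w", "xray": "x",
    "yankee": "y", "zulu": "z",
}

def codewords_to_text(codewords):
    """Translate a sequence of decoded code words into spelled-out letters."""
    return "".join(NATO[w.lower()] for w in codewords)

# e.g. a decoder output spelling the word "hello":
print(codewords_to_text(["Hotel", "Echo", "Lima", "Lima", "Oscar"]))  # hello
```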

“The biggest advance over our previous work is the increase in vocabulary size with this new spelling approach,” says David Moses, a co-leader of the study. “Although he’s now spelling out the sentences letter-by-letter, our participant has access to over 1,000 words, and in offline analyses we showed that the system can generalize to over 9,000 words, which exceeds the threshold for basic fluency in English.”
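Generalizing to a larger vocabulary works because the decoder's letter guesses can be constrained to real words. The study used a neural language model for this; as a much simpler stand-in, one can picture snapping a noisily spelled string to the closest word in a fixed vocabulary by edit distance (`snap_to_vocab` and the sample vocabulary below are hypothetical, for illustration only):

```python
# Simplified stand-in for vocabulary constraints: snap a noisy spelling
# to the nearest vocabulary word by Levenshtein (edit) distance.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def snap_to_vocab(spelled, vocab):
    """Return the vocabulary word closest to the decoded spelling."""
    return min(vocab, key=lambda w: edit_distance(spelled, w))

vocab = ["hello", "help", "water", "thirsty", "family"]
print(snap_to_vocab("helko", vocab))  # hello
```

The larger the vocabulary, the more such a correction step matters: a single misclassified code word can still resolve to the intended word.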

As speech is a more innate form of communication than writing or typing, the hope is that further development of this technology will enable more rapid and natural expression. The researchers posit that a system combining spelling with direct decoding of whole words will allow for greater flexibility and utility in day-to-day use.

The current study also shows that the system can decode silent attempts at speech without any vocal output. “The effort to try to vocalize can be very fatiguing for people with speech paralysis, so silently mouthing the words helps them to flow faster and is less taxing,” says Sean Metzger, a co-author of the study. “It may also allow us to expand the technology to a wider pool of users and offer hope for individuals who are not able to produce any sounds at all.”

According to STAT:

These systems are still far from producing natural speech in real-time from continuous thoughts. But that reality is inching closer. “It’s likely in our reach now,” said Anna-Lise Giraud, director of the Hearing Institute at the Pasteur Institute in Paris, who is part of a European consortium on decoding speech from brain activity. “With each new trial we learn a lot about the technology but also about the brain functioning and its plasticity.”

Editor’s note. This appeared at BioEdge and is reposted with permission.