Brain-computer interfaces restore language ability

Patient training with the speech system. © Steve Fish

Brain-computer interfaces are meant to enable paralyzed people to communicate with their surroundings again. Two new publications now mark significant advances in this technology. The devices presented decode intended speech much faster than their predecessors and support a large vocabulary. One of the devices even combines the voice output with an avatar that mimics the matching facial expressions. Neither device is ready for the market yet, but both could serve as a basis for versions suitable for everyday use in the near future.

Diseases such as amyotrophic lateral sclerosis (ALS), injuries to the cervical spinal cord, or strokes in the brainstem can cause people to lose the ability to speak. Some of those affected can initially make themselves understood in other ways, but as the paralysis progresses, their options become increasingly limited. Researchers are therefore trying to use brain-computer interfaces to read signals directly from patients’ brains and convert them into speech, enabling those affected to communicate with the outside world. Most previous systems, however, have been slow and error-prone.

Recognition of sounds

Now two research teams have independently developed new brain-computer interfaces designed to enable more natural communication. Whereas most previous approaches relied on artificial intelligence to recognize entire words from a given vocabulary in the brain signals, the two new devices instead recognize individual phonemes, the sounds from which all words are built. English, the language in which the experiments took place, has 39 phonemes; German has 40.

So instead of having to recognize whole words outright, the algorithm infers from the test person’s brain signals which sound they are trying to form. Speech software then assembles the individual sounds into words, using mechanisms similar to those of an autocorrect program. “The system is trained to know which words should come before others and which phonemes form which words,” explains Francis Willett from Stanford University in California, who together with his team developed one of the two devices now being presented. “If some phonemes have been misinterpreted, it can still make a good guess.”
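
The principle can be sketched in a few lines of Python. In the toy decoder below, a phoneme-similarity score stands in for the neural classifier and a tiny bigram table stands in for the language model that knows “which words should come before others”; the lexicon, the probabilities, and all names are invented for illustration and appear in neither study.

```python
# Toy two-stage decoder: (1) score how well decoded phonemes match each
# lexicon entry, (2) weight candidates by how plausible they are after
# the previous word. Purely illustrative; not code from either study.
from difflib import SequenceMatcher

# Hypothetical lexicon: word -> simplified phoneme sequence.
LEXICON = {
    "i":     ["AY"],
    "want":  ["W", "AA", "N", "T"],
    "water": ["W", "AO", "T", "ER"],
    "word":  ["W", "ER", "D"],
}

# Hypothetical bigram probabilities P(word | previous word).
BIGRAMS = {
    ("i", "want"): 0.6,
    ("want", "water"): 0.5,
    ("want", "word"): 0.1,
}

def phoneme_match(predicted, reference):
    """Similarity (0..1) between decoded phonemes and a lexicon entry."""
    return SequenceMatcher(None, predicted, reference).ratio()

def decode_word(predicted_phonemes, previous_word=None):
    """Pick the word that best explains the possibly misrecognized
    phonemes, weighted by its plausibility after the previous word."""
    def score(word):
        acoustic = phoneme_match(predicted_phonemes, LEXICON[word])
        context = BIGRAMS.get((previous_word, word), 0.05)  # smoothing floor
        return acoustic * context
    return max(LEXICON, key=score)

# One phoneme is misrecognized ("D" instead of "T"), yet after "want"
# the most plausible completion is still "water".
print(decode_word(["W", "AO", "D", "ER"], previous_word="want"))  # water
```

In a real system, the acoustic score would come from a neural network classifying brain signals and the language model would be far larger, but the division of labor is the same: a noisy phoneme guess plus context yields a usable word.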

Higher speed

Willett and his team tested their implant with a 68-year-old woman who was diagnosed with ALS in 2012 and can no longer communicate verbally due to paralysis of her speech muscles. In March 2022, the researchers implanted two sensors with tiny electrodes at two locations of the subject’s brain associated with speech. For around 100 hours, the test person and the research team then trained the connected software: the test person repeatedly tried to speak given sentences while the software recorded the brain signals and assigned them to the sounds.

The test person is now able to communicate via the brain-computer interface at an average speed of 62 words per minute – more than three times as fast as was possible with previous devices. The natural speaking rate in English is about 160 words per minute. With a vocabulary of 125,000 words, the software had an error rate of 23.8 percent. If the researchers limited the vocabulary to 50 words, the error rate was 9.1 percent.

Avatar for facial expressions

A team led by Sean Metzger at the University of California, San Francisco, has developed another brain-computer interface intended to enable even more natural communication. “Our goal is to restore a full, embodied mode of communication, which is the most natural way for us to speak to others,” explains Metzger’s colleague Edward Chang. In addition to the text output, the researchers therefore created a virtual avatar that, based on the brain signals, displays the facial expressions matching what is said. To do this, the electrodes implanted in the brain record not only the signals to the speech muscles but also those to the other facial muscles.

The subject of this study, a stroke patient, was able to produce an average of 78 words per minute after appropriate training, with an error rate of 25 percent for a vocabulary of 1,024 words. “We’re reconnecting the connections between the brain and the vocal tract that were disrupted by the stroke,” says Kaylo Littlejohn. To reproduce the test person’s own voice, the researchers analyzed old recordings of her voice and programmed the device’s speech output to speak with it. “Our results present a multimodal speech neuroprosthetics approach that holds promise for enabling full, embodied communication in people with severe paralysis,” the research team writes.

So far only in the laboratory

Outside of scientific studies, the devices are not yet available. “This is a scientific proof of concept, not a device that you can use in everyday life,” says Willett. Apart from the still relatively high error rate, the computer has so far had to be connected to the electrodes in the brain by cable. However, both teams are working on wireless versions of their interface. “If people were given the ability to freely control their own computers and phones using this technology, it would have a profound impact on their independence and social interactions,” says Metzger’s colleague David Moses.

Sources: Francis Willett (Stanford University, California) et al., Nature, doi: 10.1038/s41586-023-06377-x; Sean Metzger (University of California, San Francisco) et al., Nature, doi: 10.1038/s41586-023-06443-4
