Neuroprosthesis converts brain activity into letters

Researchers can read imaginary letters from brain waves. © metamorworks/ iStock

Brain-computer interfaces can give people with severe paralysis, up to and including locked-in syndrome, a way to communicate with their environment again. A computer converts certain patterns of brain activity into language. Previous devices mostly rely on imagined movements. Researchers have now tested a system that does not require this detour: it decodes imagined letters directly. Combined with an extensive integrated dictionary, this should make operation more intuitive and faster.

Severe neurological damage, such as that caused by a stroke or the progressive disease amyotrophic lateral sclerosis (ALS), can leave people with no control over their body's muscles. People living with so-called locked-in syndrome are in full possession of their mental faculties, but can no longer make themselves understood because they can neither speak nor move. With the help of brain-computer interfaces, researchers are trying to restore their contact with the outside world. However, previous systems have the disadvantage that operation is usually not very intuitive and each individual input takes a long time.

Enabling natural communication

A team led by Sean Metzger from the University of California, San Francisco has now developed a system that is said to be faster and more intuitive to use than previous models, with a low error rate. "Existing brain-computer interfaces for communication typically rely on decoding imagined arm and hand movements into letters to enable intended sentences to be spelled," the researchers explain. "Although this approach has already delivered promising results, the direct decoding of attempted speech can be more natural and faster."

To make this possible, Metzger and his colleagues trained a system to recognize which letter a person is thinking of. The test subject was a 36-year-old man who was left severely paralyzed by a stroke and can no longer speak. Since he is still able to move his head, he communicates in everyday life with the help of a speech computer controlled by head movements. For experiments with brain-computer interfaces, the researchers implanted, with his consent, electrodes in the areas of his brain associated with language. In an earlier study, he had already used the implant to test a system in which the computer could decode up to 50 words when he attempted to say them out loud. Due to his paralysis, however, this required considerable effort, and his vocabulary was limited.

Imagined letters

The new system, on the other hand, recognizes imagined letters. Metzger and his colleagues had the subject use the NATO phonetic alphabet, for example "Alpha" for A, "Charlie" for C and "November" for N. They recorded his brain activity as he thought these letter code words and used it to train the self-learning artificial intelligence. For the actual experiment, they presented the test person with 75 different sentences, which he had to spell out one after the other. They also asked him some questions, which he was to answer using the brain-computer interface.
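The idea behind using the NATO phonetic alphabet is that each letter becomes a distinct multi-syllable code word, which is easier to tell apart than single letters. A minimal sketch of that mapping (the `encode` helper is purely illustrative, not from the study):

```python
# NATO phonetic alphabet: one distinctive code word per letter.
NATO = {
    "A": "Alpha", "B": "Bravo", "C": "Charlie", "D": "Delta",
    "E": "Echo", "F": "Foxtrot", "G": "Golf", "H": "Hotel",
    "I": "India", "J": "Juliett", "K": "Kilo", "L": "Lima",
    "M": "Mike", "N": "November", "O": "Oscar", "P": "Papa",
    "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango",
    "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-ray",
    "Y": "Yankee", "Z": "Zulu",
}

def encode(word: str) -> list[str]:
    """Spell a word as the sequence of code words the subject would imagine."""
    return [NATO[ch] for ch in word.upper()]

print(encode("no"))  # ['November', 'Oscar']
```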

The software evaluated his brain signals in real time and compared them against an integrated dictionary of 1,152 words to determine which letter and which word were most likely. In this way, the system achieved a relatively low character error rate of 6.13 percent. It was also significantly faster than the speech computer he uses in everyday life, with which he can enter around 17 characters per minute: with the new device, he managed 29.4 characters per minute on average. To start spelling, it was enough for the subject to attempt to speak. He could end the program with an imagined hand movement.
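The dictionary step can be pictured like this: the decoder produces a probability for each letter at each spelled position, and candidate words from the vocabulary are scored against those probabilities, with the best-scoring word winning. This is a simplified sketch under assumed inputs, not the paper's actual decoding algorithm:

```python
import math

def best_word(letter_probs: list[dict[str, float]], vocab: list[str]) -> str:
    """Pick the dictionary word most consistent with per-position letter
    probabilities (hypothetical classifier output)."""
    def score(word: str) -> float:
        if len(word) != len(letter_probs):
            return float("-inf")  # only words of the spelled length qualify
        # Sum of log-probabilities; unseen letters get a tiny floor value.
        return sum(math.log(dist.get(ch, 1e-9))
                   for dist, ch in zip(letter_probs, word))
    return max(vocab, key=score)

# Toy example: the first letter is ambiguous between "h" and "n",
# but the second position strongly favors "i", so "hi" wins.
probs = [{"h": 0.55, "n": 0.45}, {"i": 0.9, "o": 0.1}]
vocab = ["hi", "no", "in"]
print(best_word(probs, vocab))  # hi
```

Constraining the output to a fixed vocabulary is what lets noisy per-letter predictions still yield a low word-level error rate.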

Expanded dictionary

In further experiments, in which the researchers tested the software in simulations without the participant, they expanded the integrated dictionary to over 9,000 words. The character error rate rose only slightly, to 8.23 percent. "These results demonstrate the clinical utility of a speech prosthesis for generating sentences from a large vocabulary through a spelling-based approach and complement previous demonstrations of direct whole-word decoding," the authors summarize. In future studies, they want to validate the approach with other subjects.
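The character error rate figures above are, as commonly defined, the edit distance between the decoded text and the reference text divided by the reference length. A minimal sketch of that metric (assuming the standard Levenshtein definition, not the study's exact evaluation code):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein edit distance (insertions,
    deletions, substitutions) divided by reference length."""
    m, n = len(reference), len(hypothesis)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n] / m

# One substitution in an 8-character reference: 1/8 = 0.125
print(cer("november", "novembor"))  # 0.125
```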

Source: Sean Metzger (University of California, San Francisco) et al., Nature Communications, doi: 10.1038/s41467-022-33611-3
