Rock music reconstructed from brain activity

Researchers have reconstructed a piece of music that listeners heard solely from the pattern of their brain activity, with the help of an artificial intelligence. © Bellier et al., 2023/ PLOS Biology, CC by 4.0

Music activates a widespread network in our brain. Researchers have now used the brain activity of test subjects with electrodes implanted in their brains to reconstruct which piece of music they were listening to during the recording. An artificial intelligence translated the brain signals into a recognizable version of Pink Floyd's "Another Brick in the Wall." In the long term, the technology could help to build more powerful brain-computer interfaces for people who have lost the ability to speak. Instead of producing robotic-sounding sentences, such devices could then also reproduce the intended speech melody.

Music and language are closely related, and the melody of speech conveys important information about how something is meant and what emotions lie behind it. Previous speech computers for people who can no longer speak due to paralysis, however, output sentences in a monotonous, robotic-sounding voice. The so-called prosody, i.e. rhythm, stress, accent and intonation, is left out. Even reading the intended words from brain activity has so far been a great challenge. Melodic elements have been completely absent, especially since their processing and generation in the brain are only beginning to be understood.

Signals from the surface of the brain

A team led by Ludovic Bellier from the University of California, Berkeley has now succeeded in reconstructing a recognizable piece of music solely from the brain activity of music listeners. They used a data set from 29 test subjects who had electrodes implanted in their brains for the treatment of epilepsy. These electrodes allowed the researchers to record signals directly on the surface of the brain, which is much more precise than recording from the scalp.

For the experiments, the subjects listened to an approximately three-minute excerpt from Pink Floyd's rock song "Another Brick in the Wall" while their brain activity was recorded. The recordings took place in 2012 and 2013. With the technology available at the time, only information about the music genre could be derived from the brain activity. Bellier and his team, however, analyzed the data again using modern speech-recognition methods, supported by artificial intelligence.
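In broad strokes, such a decoding approach learns a mapping from the recorded electrode activity to a time-frequency representation (spectrogram) of the music and then converts the predicted spectrogram back into audio. The following is a minimal illustrative sketch of that idea, not the authors' actual pipeline: it assumes the neural features have already been extracted per time bin, uses a simple ridge regression as the decoder, and resynthesizes audio with the Griffin-Lim algorithm.

```python
# Minimal illustrative sketch of spectrogram-based decoding (not the authors' pipeline).
# Assumptions: `neural_train`/`neural_test` are matrices of electrode activity per time
# bin (time x electrodes), `spec_train` is the corresponding magnitude spectrogram of
# the music (time x frequency bins), and audio is resynthesized with Griffin-Lim.
import numpy as np
import librosa
from sklearn.linear_model import Ridge

def decode_song(neural_train, spec_train, neural_test, hop_length=64):
    # Learn a linear mapping from electrode activity to spectrogram bins.
    model = Ridge(alpha=1.0)
    model.fit(neural_train, spec_train)
    # Predict the spectrogram for unseen brain activity; magnitudes cannot be negative.
    spec_pred = np.clip(model.predict(neural_test), 0.0, None)
    # Infer the FFT size from the number of frequency bins and invert to a waveform.
    n_fft = 2 * (spec_pred.shape[1] - 1)
    audio = librosa.griffinlim(spec_pred.T, n_fft=n_fft, hop_length=hop_length)
    return audio, spec_pred
```

The comparison between the true and the predicted spectrogram, as in the figure below, is what allows the reconstruction quality to be judged.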

Spectrogram of the Pink Floyd excerpt heard by the subjects (top) and of the version reconstructed from their brain activity (bottom). © Bellier et al., 2023/ PLOS Biology, CC by 4.0

Recognizable piece of music reconstructed

And indeed: "We managed to reconstruct a recognizable song directly from the neural recordings," reports the team. The melody and rhythm were correct and the words were muddled but understandable. In order to find out which brain areas are particularly important for the decoding, the researchers excluded the signals from individual groups of the more than 2,500 electrodes from the evaluation in further analysis steps. In this way, they found out that three brain regions in particular react specifically to music: the superior temporal gyrus, the inferior frontal gyrus and the sensorimotor cortex. "In the superior temporal gyrus, we detected a previously unknown subregion that responds specifically to musical rhythm," reports the team.

They also identified structures that become particularly active when the vocals or an instrument set in again. And while the left hemisphere tends to dominate in language processing, the results show that responses to music take place primarily in the right hemisphere.

Musical brain-computer interfaces?

"As brain-computer interfaces advance, the new findings offer an opportunity to add musicality to future brain implants for people with neurological or developmental disorders that impair speech," says Bellier's colleague Robert Knight. "With that, you can decode not only the linguistic content, but also part of the prosodic content of language, part of the affect." While other research teams working on brain-computer interfaces for speech recognition often focus on motor areas that involved in the coordination of the tongue, lips and larynx, the current study focuses on auditory regions.

"Decoding from the auditory cortices, which are closer to the acoustics of sounds than the motor cortex, is very promising," adds Bellier. “That gives more color to what is decoded.” However, it is unclear whether the detection will ever work without electrodes implanted in the brain. "Today's non-invasive techniques just aren't accurate enough," says Bellier. "Let's hope for the patients that in the future we will be able to read the activity in deeper brain regions with a good signal quality with the help of electrodes that are attached to the outside of the skull. But we are still a long way from that.”

Source: Ludovic Bellier (University of California, Berkeley) et al., PLoS Biology, doi: 10.1371/journal.pbio.3002176
