What happens in our brains when we see something and then grab it? Researchers have now developed a model that, for the first time, depicts the complete chain of neural processes from visual perception to movement control. It is an artificial neural network that they created using information on brain activity and data from grasping experiments with rhesus monkeys. The model could contribute to the development of neuroprostheses that enable deliberately controlled movement sequences through interfaces between the nervous system and electronic components.
We grab coffee cups, our phones, our keys … These everyday movements seem simple and natural to us – but on closer inspection, they rest on an astonishingly complex interplay: the features of objects are processed in the visual center of the brain and then transformed into motor commands, which enable grasping movements adapted to the respective object. The importance of this system functioning correctly becomes particularly clear when it is disturbed by nerve injuries – or when a whole hand is lost, for example.
Scientists are working on so-called neuroprostheses in order to give people with such disabilities the most natural movement possible. These technical systems can transmit nerve stimuli or convert them into complex movements via electronic units. To imitate the natural processes as faithfully as possible, however, detailed insight into what happens in the brain during movement planning is necessary.
On the trail of grasping
The neuroscientists around Hansjörg Scherberger from the German Primate Center in Göttingen are dedicated to this research topic. They use rhesus monkeys as a model for humans because, like us, these animals have a highly developed nervous and visual system as well as pronounced fine motor skills when performing grasping movements. As the researchers report, previous studies have already provided detailed insights into which regions of the monkey brain play a role in grasping previously seen objects.
Accordingly, certain activity patterns occur in the so-called anterior intraparietal area, in the area of the ventral premotor cortex responsible for the hands and in the primary motor cortex. But how are these activity patterns linked to muscle control in the arm and hand? Until now, there has not been a detailed model that could map the entire process from processing visual information to grasping an object on a neural level, the scientists emphasize.
As they explain, precise data on grasping behavior were required to develop such a model. These were provided by two rhesus monkeys that had been trained to grasp 42 objects of different shapes and sizes. The monkeys wore a data glove that precisely recorded the movements of the arm, hand and fingers. The cue to grasp was given by briefly illuminating the respective object. The test conditions thus revealed at which point in time the different brain areas become active in order to generate, from the visual signals, the grasping movement and the associated muscle activation.
Simulation through artificial intelligence
The researchers then transferred the data obtained, together with the information about brain activity from the previous studies, to an artificial neural network in the computer whose functioning was modeled on the biological processes in the brain. The network model consisted of three interconnected modules, corresponding to the three brain areas involved in the monkeys. Images of the 42 objects, as seen from the monkey's perspective, served as a further data basis for the development of the artificial neural network.
What makes artificial neural networks so interesting is their ability to learn – and in this case, too, that ability paid off: after training with the behavioral data of the monkeys, the network was able to precisely reproduce the grasping movements of the rhesus monkeys, the scientists report. It could process images of recognizable objects and use them to reproduce, very precisely, the muscle dynamics the monkeys need to grasp those objects. The results obtained with the artificial network model were in close agreement with the neuronal dynamics of the monkeys' brain areas, the scientists say.
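The architecture described above can be pictured as three recurrent modules wired in sequence. The following is a minimal illustrative sketch, not the authors' actual model: module names (AIP, F5, M1), dimensions, and the simple rate-based dynamics are assumptions chosen for clarity, and the weights here are random rather than trained on behavioral data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the study)
N_VISUAL = 16    # visual feature inputs describing one object
N_UNITS = 32     # units per brain-area module
N_MUSCLES = 8    # modeled arm/hand muscle outputs
DT = 0.1         # integration step for the rate dynamics

class RecurrentModule:
    """One rate-based recurrent layer standing in for a brain area."""
    def __init__(self, n_in, n_units):
        self.W_in = rng.normal(0, 1 / np.sqrt(n_in), (n_units, n_in))
        self.W_rec = rng.normal(0, 1 / np.sqrt(n_units), (n_units, n_units))
        self.x = np.zeros(n_units)  # internal state

    def step(self, inp):
        # Leaky integration with tanh firing rates, a common choice
        # in recurrent models of cortical dynamics.
        self.x += DT * (-self.x + self.W_rec @ np.tanh(self.x)
                        + self.W_in @ inp)
        return np.tanh(self.x)

# Three interconnected modules along the grasp pathway:
# anterior intraparietal area -> ventral premotor -> primary motor cortex.
aip = RecurrentModule(N_VISUAL, N_UNITS)
f5 = RecurrentModule(N_UNITS, N_UNITS)
m1 = RecurrentModule(N_UNITS, N_UNITS)
W_out = rng.normal(0, 1 / np.sqrt(N_UNITS), (N_MUSCLES, N_UNITS))

def simulate(visual_features, n_steps=50):
    """Present one object's features and return the muscle activation traces."""
    muscles = []
    for _ in range(n_steps):
        r_aip = aip.step(visual_features)
        r_f5 = f5.step(r_aip)
        r_m1 = m1.step(r_f5)
        muscles.append(W_out @ r_m1)
    return np.array(muscles)  # shape: (n_steps, N_MUSCLES)

object_features = rng.normal(size=N_VISUAL)  # stand-in for one object image
traces = simulate(object_features)
```

In the study, a network of this general shape would be trained so that, given the object images, its output matches the recorded muscle dynamics; the sketch only shows the untrained forward pass through the three-module cascade.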
They therefore see considerable potential in their results, especially for medical technology. "This artificial model describes for the first time, in a biologically realistic way, the neural processing from seeing an object, through object recognition and action planning, to hand muscle control when grasping," summarizes Scherberger. "This model is now helping to better understand the processes taking place in the brain, and could in the long term be used to develop more efficient neuroprostheses," says the neurobiologist.
Source: German Primate Center GmbH – Leibniz Institute for Primate Research; specialist article: PNAS, doi: 10.1073/pnas.2005087117