How we access knowledge

If we think of the word “telephone” only in terms of its ringing or only in terms of its use, just those brain areas become active that would process the actual hearing (yellow) or the actual action (blue). If, on the other hand, we call up the overall concept of “telephone”, multimodal areas (green) come into action in addition to these two so-called modality-specific areas. (Image: MPI CBS)

“Telephone”, “book” or “guitar” – in order to function in our living environment, we carry certain concepts of objects in our heads that are linked to their characteristics. But how does the brain call up this knowledge when we cannot see, hear or feel the things directly, but only encounter their names? Studies of brain activity now show that the brain’s representation of a term is shaped by the aspect of the object we concentrate on. Based on the results, the researchers have developed a model of how we process our conceptual knowledge.

If you read these lines and classify the information, your brain does astonishing things: how complex information is processed through the interplay of neurons and brain areas is still largely a mystery. Basically, however, this much seems clear: in order to understand the world, we form mental concepts of objects, people and events. A concept such as “telephone” consists of visible features such as shape and color, as well as sounds – such as ringing. The concept also includes actions – information about how and why we use the object.

If we read the word “telephone”, our brain calls up the corresponding mental concept and seems to simulate the characteristics of the object. But how? So far it has been unclear whether the entire concept is activated in the brain, or only those individual features – such as sounds or actions – that are currently relevant. In other words: do we always think of all of a telephone’s features, or only of the part that is currently needed? Neuroscientists at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig investigated this question and related aspects.

Brain activity with concepts in mind

They examined 40 subjects using functional magnetic resonance imaging (fMRI). This imaging technique reveals which brain areas are activated during particular tasks. During the recording, the study participants were presented with numerous terms. These included words such as “telephone” or “guitar”, which denote objects that can be heard and used, but also terms such as “satellite”, which are associated with neither sounds nor actions. In one round, the subjects had to decide whether a term stands for an audible object; in another, whether the object in question can be used. In this way, the participants were primed to focus on one of the two aspects, even if an object had both.

The fMRI recordings of brain activity confirmed that the mental representation of an object depends on context: if the participants were primed on the sound aspect when calling up a concept such as “telephone”, the representation of the term emerged in the auditory areas of the cerebral cortex – the same areas that are active during actual hearing. If, on the other hand, the use of objects such as a telephone was in the foreground, so-called somatomotor areas came into action – areas that would also be active if the action were actually carried out. The researchers describe this type of processing as modality-specific.

When the researchers asked about both aspects – audibility and usability – in further test runs, another element of concept processing emerged: the left inferior parietal lobe (IPL) is responsible for integrating the features – the researchers therefore refer to this brain area as a multimodal area. In further experiments they were also able to show that the interaction between the modality-specific areas and the multimodal area is reflected in the participants’ assessment of an object: the more intensively the respective regions worked together, the more strongly the participants associated an object term with actions or sounds.
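This brain–behavior link can be thought of as a simple correlation across participants. The following Python sketch is purely illustrative: the per-subject connectivity values and sound ratings are invented, not the study’s data, and the variable names are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject values (invented for illustration):
# functional connectivity between an auditory area and the left IPL,
# and a rating of how strongly each subject associates "telephone"
# with sounds.
connectivity = np.array([0.12, 0.30, 0.25, 0.41, 0.18, 0.35])
sound_rating = np.array([2.0, 3.5, 3.0, 4.5, 2.5, 4.0])

# A positive Pearson correlation would mirror the reported pattern:
# the more intensively the regions work together, the stronger the
# participants' association of the term with sounds.
r, p = stats.pearsonr(connectivity, sound_rating)
print(f"r = {r:.2f}, p = {p:.3f}")
```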

A hierarchy of levels emerges

The experiments were rounded off by interspersing pseudo-words – invented words that the subjects had to distinguish from real terms. It turned out that this task engaged a brain region that is responsible for neither actions nor sounds – the so-called anterior temporal lobe (ATL). According to the scientists, this area seems to process concepts in an abstract, amodal manner, completely detached from sensory impressions.

Finally, they integrated the findings into a model that describes how conceptual knowledge is represented in the human brain. According to this model, information is passed on from one hierarchical level to the next, becoming more abstract with each step. At the lowest level are the modality-specific areas that process individual sensory impressions or actions. These pass their information on to multimodal regions such as the IPL, which integrate several linked perceptions – such as sounds and actions. At the highest level, the amodal ATL represents features that are detached from all sensory impressions.
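To make the proposed three-level hierarchy concrete, here is a minimal toy sketch in Python. Only the level structure – modality-specific areas feeding a multimodal integrator (IPL), topped by an amodal level (ATL) – follows the article; the function names and feature values are hypothetical and purely for illustration.

```python
# Toy sketch of the proposed hierarchy (names and values are invented;
# only the three-level structure follows the article).

# Level 1: modality-specific areas process single feature channels,
# and only the feature relevant to the current task is activated.
def modality_specific(word, context):
    features = {
        "telephone": {"sound": "ringing", "action": "dialling"},
        "satellite": {},  # neither audible nor usable by hand
    }
    return {context: features.get(word, {}).get(context)}

# Level 2: a multimodal area (e.g. the left IPL) integrates several
# linked feature channels into one representation.
def multimodal(word):
    return {ctx: modality_specific(word, ctx)[ctx]
            for ctx in ("sound", "action")}

# Level 3: the amodal area (e.g. the ATL) represents the concept
# detached from any sensory content, here reduced to a bare symbol.
def amodal(word):
    return f"concept:{word}"

print(modality_specific("telephone", "sound"))  # {'sound': 'ringing'}
print(multimodal("telephone"))   # {'sound': 'ringing', 'action': 'dialling'}
print(amodal("telephone"))       # concept:telephone
```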

“Ultimately, it is becoming apparent that our concepts of things, people and events are composed, on the one hand, of the sensory impressions and actions associated with them and, on the other hand, of abstract, symbol-like features. What is activated, in turn, depends heavily on the respective situation or task,” is how first author Philipp Kuhnke from the Max Planck Institute for Human Cognitive and Brain Sciences sums up the new findings.

Source: Max Planck Institute for Human Cognitive and Brain Sciences; Article: Cerebral Cortex, doi: 10.1093/cercor/bhab026; doi: 10.1093/cercor/bhaa010
