Simply observing another person can activate our sense of touch.

It’s a universal human experience: when your friend cuts their finger while cooking, you reflexively wince and grimace, and perhaps even feel a twinge of pain yourself.

That flash of empathy isn’t mere sympathy. Your brain is actually feeling what it sees.

On November 26, an international team of researchers published a new study revealing the mechanism behind this phenomenon. Neuroscientists have discovered how the brain turns what we see into what we feel, shaping our physical experience of the world.

The study was led by the University of Reading and the Netherlands Institute for Neuroscience & Vrije Universiteit Amsterdam. 

Eight maps, one shared language

To unlock this mystery, the researchers took an unusual approach. Rather than measuring reactions to simple, isolated stimuli, they immersed participants in the world of Hollywood movie narratives.

Participants were placed in fMRI scanners and shown clips from blockbusters like The Social Network and Inception. This allowed the team to pinpoint the underlying brain structures that enable us to “experience” what we observe.

The physical sensation (the one that makes you wince) arises from activation of your brain’s somatosensory cortex, the region that processes touch.

This cortex contains “maps” — organized regions where the entire body (from head to toe) is represented. These maps help the brain pinpoint the origin of a sensation.

The most surprising discovery? “We found not one, or two, but eight remarkably similar maps in the visual cortex! Finding so many shows how strongly the visual brain speaks the language of touch,” said study author Tomas Knapen.

These visual maps mirror the body’s layout in the brain’s touch center.

This suggests that when we see another person, our brain organizes that visual information in much the same bodily way it would if we were feeling the sensation ourselves. The visual brain, it seems, speaks the very language of touch.

Uses in autism research, AI tech

The multiple body maps serve specialized functions; for example, some maps identify body parts, while others determine their spatial locations.

These maps allow the brain to extract specific, relevant information (such as focusing on a hand for an action or a posture for an emotional state) from a visual scene.

The discovery of these visual-tactile body maps is important for future research and technology.

It could aid investigations of emotional processing and lead to better treatments for conditions such as autism, where this kind of processing is often a struggle.

Moreover, the ability to activate broad bodily processes using these maps could advance the development of brain-computer interfaces (BCIs).

“Training sets for brain implants often start with instructions like ‘try to think of a movement’. If these bodily processes can be activated in much broader ways, then there might be much broader possibilities to train and develop those brain-computer interfaces,” the researchers noted.

Knapen believes his findings will greatly advance AI development by providing the “bodily dimension” currently missing from systems that rely solely on text and video.

“This aspect of human experience is a fantastic area for AI development. Our work shows the potential for very large, precision brain imaging datasets to fuel this development: a beautiful synergy between neuroscience and AI,” he stated. 

The study was published in the journal Nature.
