Why are some images more memorable than others? – Neuroscience News

Summary: A new study shows that the brain prioritizes remembering images that are more difficult to explain. Researchers used a computational model and behavioral experiments to show that scenes that were difficult for the model to reconstruct were more memorable to participants.

This finding helps explain why certain visual experiences stick in our memory. The study could also inform the development of AI memory systems.

Key Facts:

  • Memory formation: The brain tends to remember images that are difficult to interpret or explain.
  • Computational model: A model that addresses visual signal compression and reconstruction was used.
  • AI implications: Insights can help create more efficient memory systems for artificial intelligence.

Source: Yale

The human brain filters through a stream of experiences to create specific memories. In this flood of sensory information, why do some experiences become “memorable,” while most are ignored by the brain?

A computational model and behavioral study developed by Yale scientists suggest a new clue to this age-old question, the researchers report in the journal Nature Human Behaviour.

The Yale team found that the harder it was for the computer model to reconstruct an image, the more likely it was that the image would be remembered by the participants. Credit: Neuroscience News

“The mind prioritizes remembering things it can’t explain very well,” says Ilker Yildirim, assistant professor of psychology at Yale’s Faculty of Arts and Sciences and senior author of the paper. “If a scene is predictable and unsurprising, it can be ignored.”

For example, a person may be momentarily confused by the presence of a fire hydrant in a remote natural setting, making the image difficult to interpret and therefore more memorable. “Our research explored the question of what visual information is memorable by combining a computational model of scene complexity with a behavioral study,” says Yildirim.

For the study, which was led by Yildirim and John Lafferty, the John C. Malone Professor of Statistics and Data Science at Yale, the researchers developed a computational model that addressed two steps in memory formation: the compression of visual signals and their reconstruction.

Based on this model, they designed a series of experiments in which people were asked whether they remembered specific images from a series of natural images shown in rapid succession. The Yale team found that the harder it was for the computer model to reconstruct an image, the more likely it was that the image would be remembered by the participants.
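To make the logic of that finding concrete, here is a minimal Python sketch of the kind of analysis it implies: relating a per-image reconstruction error from a model to how often participants recognize each image. The arrays below are synthetic stand-ins invented for illustration (they are not the study's data, and the variable names are assumptions), so the sketch only shows the shape of the analysis, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images = 200

# Stand-in for the model's reconstruction error on each image
# (larger value = harder for the model to reconstruct).
reconstruction_error = rng.gamma(shape=2.0, scale=1.0, size=n_images)

# Stand-in for behavioral memorability: the fraction of participants who
# correctly recognized each image. A positive relationship is built into
# this synthetic data purely to illustrate the analysis.
hit_rate = 0.4 + 0.1 * (reconstruction_error - reconstruction_error.mean())
hit_rate = np.clip(hit_rate + rng.normal(0.0, 0.05, n_images), 0.0, 1.0)

# The study's core claim corresponds to a positive correlation here:
# harder-to-reconstruct images are remembered more often.
r = np.corrcoef(reconstruction_error, hit_rate)[0, 1]
print(f"Correlation between reconstruction error and hit rate: {r:.2f}")
```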

“We used an AI model to shed light on people’s perception of scenes – this insight could help develop more efficient memory systems for AI in the future,” said Lafferty, who is also director of the Center for Neurocomputation and Machine Intelligence at the Wu Tsai Institute at Yale.

Former Yale students Qi Lin (Psychology) and Zifan Lin (Statistics and Data Science) are co-first authors of the paper.

About this visual memory research news

Author: Bill Hathaway
Source: Yale
Contact: Bill Hathaway, Yale
Image: The image is credited to Neuroscience News

Original research: Closed access.
“Images with more difficult-to-reconstruct visual representations leave stronger memory traces” by Ilker Yildirim et al. Nature Human Behaviour


Abstract

Images with more difficult-to-reconstruct visual representations leave stronger memory traces

Much of what we remember is not the result of intentional selection, but simply a byproduct of perception.

This raises a fundamental question about the architecture of the mind: how does perception interact with and influence memory?

Inspired by a classic proposal relating perceptual processing to memory durability, the levels-of-processing theory, here we present a sparse coding model for compressing feature embeddings of images, and show that the reconstruction residuals of this model predict how well images are encoded in memory.

In an open memorability dataset of scene images, we show that reconstruction errors explain not only memory accuracy but also response latencies during retrieval, with the latter capturing all variance explained by powerful vision-only models. We also confirm a prediction of this account with ‘model-driven psychophysics’.

This work establishes that reconstruction errors are an important signal linking perception and memory, possibly through adaptive modulation of perceptual processing.
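As a rough illustration of the kind of model the abstract describes, the sketch below compresses a single stand-in feature embedding with a basic sparse coding solver (ISTA) and reports the reconstruction residual. The dictionary, embedding, and hyperparameters are arbitrary placeholders chosen for the example, not the authors' implementation; in the paper, the embeddings come from a vision model applied to scene images.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_atoms = 512, 128

# Random stand-in dictionary (unit-norm atoms) and image embedding.
D = rng.normal(size=(n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)
x = rng.normal(size=n_features)

def sparse_code_ista(x, D, lam=0.1, n_iters=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ a - x)               # gradient of the quadratic term
        a = a - grad / L                       # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

a = sparse_code_ista(x, D)
residual = np.linalg.norm(x - D @ a)           # reconstruction error for this embedding
print(f"Sparse code nonzeros: {np.count_nonzero(a)}, residual: {residual:.3f}")
```

In the framing of the abstract, this residual is the per-image signal that would then be related to memory accuracy and retrieval latency.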
