Adaptive Changes in the Viewing Behavior of Faces Following Aversive Learning

I have decided to write a few paragraphs about each paper I publish from now on. These posts are targeted at a non-technical audience, and I hope they will make the published results more accessible.

Here is our latest work, which shows how eye-movement patterns during face viewing are modified when people learn to associate faces with an aversive outcome.

Eye movements can be effortlessly recorded while humans are engaged in all sorts of situations. This can provide important insights into what the nervous system is trying to achieve, as eye movements represent the final behavioral outcome of many complex neuronal processes that are difficult to record and understand.

We measured eye movements while humans were viewing faces and analyzed the resulting exploration patterns. These faces were calibrated to have a known similarity relationship. For example, faces A, B and C were physically organized in such a way that B was perceived as equally similar to A and C, whereas A and C formed the most dissimilar pair. First, using novel similarity-based analyses, we show that exploration patterns are dominated by the physical aspects of faces. That is, the physical similarity relationship between A, B and C could be estimated to a good degree from the similarity of the eye-movement patterns generated while viewing these faces.
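For readers who want a more concrete feel for this kind of similarity analysis, here is a rough sketch in Python. It is not the exact pipeline from the paper: the grid size, smoothing width and helper names are assumptions made purely for illustration. The idea is simply to turn fixations into smoothed heatmaps and then compare the heatmaps obtained for different faces with each other.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_heatmap(fix_x, fix_y, size=(250, 250), sigma=15):
    """Turn fixation coordinates (in pixels) into a smoothed density map."""
    heat = np.zeros(size)
    for x, y in zip(fix_x, fix_y):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < size[1] and 0 <= yi < size[0]:
            heat[yi, xi] += 1
    heat = gaussian_filter(heat, sigma=sigma)  # smooth counts into a continuous map
    return heat / heat.sum()                   # normalize so maps are comparable

def dissimilarity_matrix(heatmaps):
    """Pairwise dissimilarity (1 - Pearson correlation) between heatmaps."""
    flat = [h.ravel() for h in heatmaps]
    n = len(flat)
    dsm = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dsm[i, j] = 1 - np.corrcoef(flat[i], flat[j])[0, 1]
    return dsm

# Toy example with random fixations for 8 faces; real input would come from an eye tracker.
rng = np.random.default_rng(0)
heatmaps = [fixation_heatmap(rng.uniform(0, 250, 30), rng.uniform(0, 250, 30))
            for _ in range(8)]
print(dissimilarity_matrix(heatmaps).round(2))
```

If the physical make-up of the faces drives viewing, faces that sit close to each other on the continuum should end up with small dissimilarity values in such a matrix, and that is essentially what we observed before learning.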

Later in the experiment, we turned one face into a nasty one by pairing its presentation with a mild electric current delivered to the hand of the volunteers, strong enough to generate an unpleasant feeling without hurting them. Participants learned to associate this unpleasant outcome with only one face, while the other faces were kept exactly as before. This created a gradient of unpleasantness that was not present before, and led volunteers to generalize the unpleasant association to other faces to the extent that they were perceived as similar to the nasty face. This is a classic phenomenon, known as generalization since the early days of Pavlov.
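As a toy illustration of what such a generalization gradient looks like (these are made-up numbers, not data from the study, and the index of the nasty face is an arbitrary choice), one can imagine unpleasantness ratings that fall off smoothly with distance from the nasty face along the continuum:

```python
import numpy as np

faces = np.arange(8)                         # 8 faces along the similarity continuum
nasty = 3                                    # index of the face paired with the shock (arbitrary)
dist = np.abs(faces - nasty)                 # how far each face is from the nasty one
ratings = np.exp(-0.5 * (dist / 1.5) ** 2)   # a smooth, Gaussian-shaped generalization gradient
for f, r in zip(faces, ratings):
    print(f"face {f}: expected unpleasantness ~ {r:.2f}")
```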

How does this new situation modify the similarity relationships between exploration patterns? Following learning, the similarity relationships of eye-movement patterns started to mirror the newly learned categories of nasty vs. safe faces, even though nothing about the faces had physically changed. This is compatible with the idea that, after learning along an arbitrary continuum of stimuli, an internal categorization process emerges that distinguishes safe from nasty faces. This process then biases eye-movement patterns during face viewing so as to collect information specifically associated with the safe and nasty prototypes, leading faces that resemble these prototypes to be scanned similarly.
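For the curious, here is a hedged sketch of how one could ask which description fits the eye-movement dissimilarities better: a model matrix built from the physical continuum, or one built from the learned nasty vs. safe categories. This is a generic, representational-similarity-style comparison written purely for illustration; the model-based FPSA fit reported in the paper differs in its details, and the category split and the fake data below are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

n = 8
idx = np.arange(n)

# Model 1: physical similarity, i.e. distance between faces along the (circular) continuum.
gap = np.abs(idx[:, None] - idx[None, :])
physical_model = np.minimum(gap, n - gap) / (n // 2)

# Model 2: learned categories -- faces are either "nasty-like" or "safe-like" (assumed split).
category = (idx < n // 2).astype(int)
category_model = (category[:, None] != category[None, :]).astype(float)

def model_fit(data_dsm, model_dsm):
    """Rank correlation between the upper triangles of the data and model matrices."""
    iu = np.triu_indices(n, k=1)
    return spearmanr(data_dsm[iu], model_dsm[iu])[0]

# Fake "data" for demonstration: category structure plus noise. Real data would be the
# heatmap dissimilarity matrix computed from fixations recorded after learning.
rng = np.random.default_rng(1)
data_dsm = category_model + 0.3 * rng.random((n, n))
data_dsm = (data_dsm + data_dsm.T) / 2        # keep the matrix symmetric

print("fit of physical model:", round(model_fit(data_dsm, physical_model), 2))
print("fit of category model:", round(model_fit(data_dsm, category_model), 2))
```

It is this kind of shift, away from the purely physical description and towards the category description, that we observed in the scanning patterns after learning.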

This study provides a nice illustration of how eye-movement patterns can shed light on neuronal processes and help us understand what the brain is trying to achieve during learning.



Eye-movement patterns on 8 different but similar faces that were carefully calibrated to form a similarity continuum. These maps show the most attended locations for a single participant before learning. Similarity analysis of these heatmaps using the fixation-pattern similarity analysis (FPSA) method can detect learning-induced changes in scanning behavior.


Reference:

Aversive Learning Changes Face-Viewing Strategies, as Revealed by Model-Based Fixation-Pattern Similarity Analysis. Lea Kampermann, Niklas Wilming, Arjen Alink, Christian Buechel, Selim Onat.

All content in this post is released under CC BY 4.0.