Just submitted the following short paper to the 1st Cognitive Computational Neuroscience meeting. I am looking forward to participating!

Model-Based Fixation-Pattern Similarity Analysis Reveals Adaptive Changes in Face-Viewing Strategies Following Aversive Learning

Lea Kampermann (l.kampermann@uke.de)
Department of Systems Neuroscience
University Medical Center Hamburg-Eppendorf

Niklas Wilming (n.wilming@uke.de)
Department of Neurophysiology and Pathophysiology
University Medical Center Hamburg-Eppendorf

Arjen Alink (a.alink@uke.de)
Department of Systems Neuroscience
University Medical Center Hamburg-Eppendorf

Christian Büchel (c.buechel@uke.de)
Department of Systems Neuroscience
University Medical Center Hamburg-Eppendorf

Selim Onat (sonat@uke.de)
Department of Systems Neuroscience
University Medical Center Hamburg-Eppendorf


Abstract:


Learning to associate an event with an aversive outcome typically leads to generalization when similar situations are encountered. In real-world situations, generalization must be based on the sensory evidence collected through active exploration. However, our knowledge of how exploration can be adaptively tailored during generalization is scarce. Here, we investigated learning-induced changes in eye movement patterns using a similarity-based multivariate fixation-pattern analysis. Humans learnt to associate an aversive outcome (a mild electric shock) with one face along a circular perceptual continuum, whereas the most dissimilar face on this continuum was kept neutral. Before learning, eye-movement patterns mirrored the similarity characteristics of the stimulus continuum, indicating that exploration was mainly guided by subtle physical differences between the faces. Aversive learning increased the dissimilarity of exploration patterns, and this increase occurred specifically along the axis separating the shock-predicting face from the neutral one. We suggest that this separation of patterns results from an internal categorization process for the newly learnt harmful and safe facial prototypes.
Keywords: Eye movements; Generalization; Categorization; Face Perception; Aversive Learning; Multivariate Pattern Analysis; Pattern Similarity

To avoid costly situations, animals must be able to rapidly predict future adversity based on actively harvested information from the environment. In humans, a central part of active exploration involves eye movements, which can rapidly determine what information is available in a scene. However, we currently do not know the extent to which eye movement strategies are flexible and can be adaptive following aversive learning.

We investigated how aversive learning influences exploration strategies during viewing of faces that were designed to form a circular perceptual continuum (Fig. 1A). One randomly chosen face along this continuum (CS+; Fig. 1, red, see color wheel) was paired with a mild electric shock, which introduced an adversity gradient based on physical similarity to the CS+ face. The most dissimilar face (CS–; Fig. 1, cyan), separated by 180° on the circular continuum, was not reinforced and thus stayed neutral. Using this paradigm, we were able to investigate how exploration strategies were modified by both the physical similarity relationships between faces and the adversity gradient introduced through aversive learning.

Figure 1: (A) 8 exploration patterns (FDMs, colored frames) from a representative individual overlaid on 8 face stimuli (numbered 1 to 8) calibrated to span a circular similarity continuum across two dimensions. A pair of maximally dissimilar faces was randomly selected as CS+ (red border) and CS– (cyan border; see color wheel for color code). The similarity relationships among the 8 faces and the resulting exploration patterns are depicted as two 8×8 matrices. (B-E) Multidimensional-scaling representations (top row) and the corresponding dissimilarity matrices (bottom row) depicting four possible scenarios of how learning could change the similarity geometry between the exploration maps (same color scheme; red: CS+; cyan: CS–). These matrices are decomposed into covariate components (middle row) centered either on the CS+/CS– faces (specific component) or the +90°/–90° faces (unspecific component). A third component is centered uniquely on the CS+ face (adversity component). These components were fitted to the observed dissimilarity matrices, and a model selection procedure was carried out.

We used a variant of representational similarity analysis (Kriegeskorte, Mur, & Bandettini, 2008) that we term “fixation-pattern similarity analysis” (FPSA). FPSA considers exploration patterns as multivariate entities and assesses between-condition dissimilarity of fixation patterns for individual participants (Fig. 1A). We formulated four different hypotheses (bottom-up saliency, increased arousal, adversity categorization, adversity tuning) about how aversive learning might alter the similarity relationships between exploration patterns when one face on the continuum started to predict adversity (Fig. 1B-E).
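
As a minimal sketch of the core FPSA computation (variable names are hypothetical, and the dissimilarity metric used in the actual analysis may differ), the pairwise dissimilarity between vectorized fixation density maps can be computed as a correlation distance:

% fdm: hypothetical [nPixels x 8] matrix holding one vectorized fixation
% density map (FDM) per face condition for a single participant
observed_dsm = 1 - corrcoef(fdm);   % 8x8 dissimilarity matrix (correlation distance)

Each participant thus contributes one 8×8 dissimilarity matrix, which can then be compared against the model matrices sketched in Fig. 1B-E.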

Before learning, eye movement patterns mirrored the similarity characteristics of the stimulus continuum, indicating that exploration was mainly guided by subtle physical differences between the faces. Aversive learning resulted in a global increase in the dissimilarity of eye movement patterns. Model-based analysis of the similarity geometry indicated that this increase was specifically driven by a separation of patterns along the adversity gradient, in agreement with the adversity categorization model (Fig. 1D). These findings show that aversive learning can substantially and adaptively remodel exploration patterns during viewing of faces. In particular, we suggest that the separation of patterns for harmful and safe prototypes results from an internal categorization process operating along the perceptual continuum following learning.
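
As a rough sketch of how such a model-based comparison could be set up (variable names here are hypothetical, and the actual fitting and model-selection procedure may differ), the lower triangle of the observed dissimilarity matrix can be regressed onto the model components:

% observed_dsm: 8x8 dissimilarity matrix from the FPSA step above
% specific, unspecific: hypothetical 8x8 model component matrices (Fig. 1)
mask = tril(true(8), -1);                                      % 28 unique face pairs
y    = observed_dsm(mask);                                     % observed dissimilarities
X    = [ones(nnz(mask), 1), specific(mask), unspecific(mask)]; % design matrix of components
w    = X \ y;                                                  % least-squares component weights

Competing models could then be compared, for example, by the fit they achieve across participants.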

References

Kriegeskorte, N., Mur, M., & Bandettini, P. (2008). Representational Similarity Analysis – Connecting the Branches of Systems Neuroscience. Frontiers in Systems Neuroscience, 2. https://doi.org/10.3389/neuro.06.004.2008

 



Conventions on Folder Structures for Storing Experimental Data vs. Object-Oriented Programming

What is the best folder structure for storing data recorded during an experiment? Is this an important question? It certainly seems to be one that stirs up debates. On the fMRI side of the experimental spectrum (without loss of generality to other domains), we typically record enormous amounts of data per participant, covering BOLD acquisition, raw skin-conductance recordings, eye movements, and what not. While the difference between one folder structure and another can be as insignificant as the difference between apples and pears, some people have even proposed a standard folder structure.

However, I actually think this is an unnecessary question. Why? Because using object-oriented programming (OOP), one can simply design a data object (for example, an object representing a single Subject) that knows where the data is located.

In OOP, we start by defining the properties that our object will need to have. This approach forces us to plan beforehand what we would like to achieve with the object we are programming. For example, if our aim is to define an object representing individual subjects recorded in an experiment (a Subject object 😀), it might have the following properties defined:

classdef Subject < Project
    properties
        id                           %subject ID
        path                         %path to the participant's data folder
        path_fmri                    %path to the fMRI data
        path_scr                     %path to the skin-conductance recordings
        path_eye                     %path to the eye-tracking data
    end

...

My Subject object for recorded participant number 5 could then be constructed as follows:


>> s = Subject(5)
Subject 05 (/Users/onat/Documents/project_FPSA_FearGen/data//sub005/)
s = 
  Subject with properties:

           id: 5
         path: '/data//sub005/'
    path_fmri: '/data/sub005/run000/fmri'
     path_eye: '/data//sub005/run000/eye'
     path_scr: '/data//sub005/run000/scr'


path_fmri, path_eye, and path_scr are all properties of the participant, and their values are automatically filled in during the construction of the Subject object when Subject is called with the argument 5.

These values are filled in by methods included in the definition of the Subject object. For example, a one-liner method called get.path_fmri returns the path to the fMRI data for this subject. This can easily be changed or adapted to fit another data set (provided the folder layout is consistent across participants).
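
A hypothetical sketch of how such methods could look (the actual class differs in its details; data_path is an assumed property inherited from the parent Project class, and path_fmri is assumed to be declared Dependent so that MATLAB computes it on access):

methods
    %constructor: stores the ID and builds the participant's root data folder
    function self = Subject(id)
        self.id   = id;
        self.path = sprintf('%ssub%03d/', self.data_path, id);   %data_path: assumed to come from Project
    end
    %one-liner getter deriving the fMRI folder from the participant's root path
    function out = get.path_fmri(self)
        out = sprintf('%srun000/fmri', self.path);
    end
end

Pointing the class to a different data set then amounts to changing these few lines, rather than reorganizing the folders themselves.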

In my view, therefore, the key is not to settle on a folder-structure convention that everybody agrees on, but rather to represent scientific data with intelligent objects that already know where things are stored. This is one of the enormous benefits that OOP brings to the organization and analysis of scientific data.

Talk given at the EMHFC Conference


I gave this talk, titled "Temporal Dynamics of Aversive Generalization", at the European Meeting on Human Fear Conditioning.


Adaptive Changes in the Viewing Behavior of Faces Following Aversive Learning

I decided to write a few paragraphs about the papers I will be publishing from now on. These will be targeted at a non-technical audience and will, I hope, increase the accessibility of the published results.

Here is our latest work, which shows how eye-movement patterns during viewing of faces are modified when people learn to associate these faces with an aversive outcome.

Eye movements can be effortlessly recorded while humans are engaged in different situations. This can provide important insights into what the nervous system tries to achieve, as eye movements represent the final behavioral outcome of many complex neuronal processes that are difficult to record and understand.

We measured eye movements while humans were viewing faces and analyzed the resulting exploration patterns. These faces were calibrated to have known similarity relationships. For example, faces A, B and C were physically organised in such a way that B was perceived as equally similar to A and C, whereas A and C formed the most dissimilar pair. First, using novel similarity-based analyses, we show that exploration patterns are dominated by the physical aspects of faces. That is, the physical similarity relationships between A, B and C could be estimated to a good degree from the similarity of the eye-movement patterns generated during viewing of these faces.

Later in the experiment, we selected one face to be the nasty one by pairing its presentation with a mild electric current on the volunteers' hand, generating an unpleasant feeling without hurting them. Participants learnt to associate this unpleasant outcome with this one face only, while the other faces were kept the same as before. This introduced a gradient of unpleasantness that was not present before and led volunteers to generalize the unpleasant association to other faces to the extent that these were perceived as similar to the nasty face. This is a classic phenomenon known as generalization, documented since the early days of Pavlov.

How does this new situation modify the similarity relationships between exploration patterns? Following learning, the similarity relationships of eye-movement patterns started to mirror the newly learnt categories of nasty vs. safe faces, even though the faces themselves had not changed physically. This is compatible with the idea that, following learning along an arbitrary continuum of stimuli, an internal categorization process distinguishes safe from nasty faces. This process then biases eye-movement patterns during viewing of faces so as to collect information specifically associated with the safe and nasty prototypes, leading faces that resemble these prototypes to be scanned similarly.

This study provides a nice illustration of how eye-movement patterns can shed light on neuronal processes and help us understand what the brain is trying to achieve during learning.



Eye-movement patterns on 8 different but similar faces that were carefully calibrated to form a similarity continuum. These maps show the most attended locations for a single participant before learning. Similarity analysis of these heatmaps using the FPSA method can detect learning-induced changes in scanning behavior.


Reference:

Kampermann, L., Wilming, N., Alink, A., Büchel, C., & Onat, S. Aversive Learning Changes Face-Viewing Strategies, as Revealed by Model-Based Fixation-Pattern Similarity Analysis.

All content in this post is released under CC-BY 4.0.
