Categorical Representation of Visual Stimuli in the Primate Prefrontal Cortex

Freedman et al. generated pictures of complex objects as parametric blends of six prototype images, three cats and three dogs. Morphing between prototypes yielded stimuli that were graded in their category membership (some blends were clearly cat- or dog-like, while others lay closer to the boundary) while guaranteeing diversity within each category. Monkeys viewed these images in a delayed match-to-category task, which requires a level of abstraction from the mere physical appearance of the stimuli. Performance remained high even near the category boundary, where discrimination is most difficult.

The authors recorded the activity of single neurons in the prefrontal cortex, more precisely around the ventral part of the principal sulcus. Their results provide evidence for neurons that distinguish between the two, presumably learnt, categories: the responses are (1) not graded like the stimuli and (2) characterized by a sharp, step-like transition at the category boundary.

The data are clear and the interpretation is in line with them. I find it unfortunate that pictures lying exactly at the category boundary were not presented. The study would also gain if the similarity measure were defined in perceptual space rather than in the stimulus (morph-parameter) domain; the six prototype images could in principle be tested on humans using perceptual mapping techniques, such as multidimensional scaling of similarity judgements.
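To make the stimulus construction concrete, here is a minimal sketch of graded morphing between two prototypes. It blends raw pixel arrays linearly, which is a deliberate simplification: the actual study used a correspondence-based morphing system that blends shape rather than pixels. The array sizes and morph levels are illustrative assumptions.

    import numpy as np

    def morph(img_a, img_b, alpha):
        """Blend two prototype images: alpha=0 gives pure img_a,
        alpha=1 gives pure img_b. Pixel blending is only a toy
        stand-in for correspondence-based shape morphing."""
        return (1.0 - alpha) * img_a + alpha * img_b

    # Toy prototypes standing in for one cat and one dog image.
    cat = np.random.rand(128, 128)
    dog = np.random.rand(128, 128)

    # Graded morph levels along the cat->dog axis; the ambiguous
    # 50:50 blend is left out, matching the gap noted in the review.
    levels = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
    stimuli = [morph(cat, dog, a) for a in levels]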
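The contrast between a graded and a step-like response can be formalized by fitting both a linear and a sigmoid function to a neuron's mean firing rate along a morph axis and comparing the fits. The sketch below uses made-up firing rates for a hypothetical "dog-preferring" neuron; it is one way to quantify the contrast, not the paper's own analysis pipeline.

    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(x, lo, hi, x0, k):
        """Step-like response: plateaus lo/hi, transition at x0, slope k."""
        return lo + (hi - lo) / (1.0 + np.exp(-k * (x - x0)))

    def linear(x, a, b):
        return a * x + b

    # Hypothetical mean rates (spikes/s) at each level of a cat->dog axis.
    morph_level = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
    rate = np.array([5.1, 5.4, 6.0, 14.8, 15.3, 15.0])

    p_sig, _ = curve_fit(sigmoid, morph_level, rate,
                         p0=[5.0, 15.0, 0.5, 10.0], maxfev=10000)
    p_lin, _ = curve_fit(linear, morph_level, rate)

    sse_sig = np.sum((rate - sigmoid(morph_level, *p_sig)) ** 2)
    sse_lin = np.sum((rate - linear(morph_level, *p_lin)) ** 2)
    print(f"sigmoid SSE={sse_sig:.2f}  linear SSE={sse_lin:.2f}")
    # A clearly lower sigmoid SSE (flat within each category, sharp jump
    # at the boundary) is the signature of a categorical, not graded, code.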
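As for the suggested human experiment, a standard perceptual mapping technique is multidimensional scaling (MDS) of pairwise similarity judgements. A minimal sketch follows; the 6x6 dissimilarity matrix is entirely hypothetical and stands in for data one would collect from human observers rating the six prototypes.

    import numpy as np
    from sklearn.manifold import MDS

    # Hypothetical pairwise dissimilarities between the six prototypes
    # (rows/cols: 3 cats, then 3 dogs); all values are made up.
    D = np.array([
        [0.00, 0.30, 0.40, 0.90, 1.00, 0.80],
        [0.30, 0.00, 0.35, 0.85, 0.90, 0.95],
        [0.40, 0.35, 0.00, 0.80, 0.85, 0.90],
        [0.90, 0.85, 0.80, 0.00, 0.30, 0.40],
        [1.00, 0.90, 0.85, 0.30, 0.00, 0.35],
        [0.80, 0.95, 0.90, 0.40, 0.35, 0.00],
    ])

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(D)  # 2-D perceptual coordinates per prototype
    print(coords)
    # Distances in this recovered space, rather than distances in the
    # morph-parameter domain, would give the perceptually grounded
    # similarity measure the review argues for.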