Perceptual decision making

The neural encoding of guesses in the human brain.

Bode S, Bogler C, Soon CS, Haynes JD.

Published in NeuroImage. 2012 Jan 16;59(2):1924-31.

Human perception depends heavily on the quality of sensory information. When objects are hard to see, we often believe ourselves to be purely guessing. Here we investigated whether such guesses use the brain networks involved in perceptual decision making or independent networks. We used a combination of fMRI and pattern classification to test how visibility affects the signals that determine choices. We found that decisions regarding clearly visible objects were predicted by signals in sensory brain regions, whereas different regions in parietal cortex became predictive when subjects were shown invisible objects and believed themselves to be purely guessing. This parietal network overlapped substantially with regions that have previously been shown to encode free decisions. Thus, the brain might use a dedicated network for determining choices when insufficient sensory information is available.
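The pattern-classification logic behind such decoding can be sketched in miniature. The snippet below is an illustrative assumption, not the study's actual pipeline: it uses synthetic "voxel" patterns and a simple nearest-centroid rule with leave-one-out cross-validation as a stand-in for the multivariate classifiers typically applied to fMRI data.

```python
# Illustrative sketch (not the authors' pipeline): decoding a binary choice
# from multivoxel activity patterns with a nearest-centroid classifier and
# leave-one-out cross-validation. All data here are synthetic.
import random

random.seed(0)

N_VOXELS = 20   # hypothetical voxels in a local region of interest
N_TRIALS = 40   # trials per choice condition
SIGNAL = 0.5    # separation between the two choice-related patterns

# Two condition-specific mean patterns plus Gaussian noise per trial.
mean_a = [random.gauss(0, 1) for _ in range(N_VOXELS)]
mean_b = [m + SIGNAL for m in mean_a]

def make_trial(mean):
    return [m + random.gauss(0, 1) for m in mean]

trials = [(make_trial(mean_a), 'A') for _ in range(N_TRIALS)] + \
         [(make_trial(mean_b), 'B') for _ in range(N_TRIALS)]

def centroid(patterns):
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

# Leave-one-out cross-validation: train centroids on all-but-one trial,
# then predict the held-out trial from the nearer centroid.
correct = 0
for i, (pattern, label) in enumerate(trials):
    train = trials[:i] + trials[i + 1:]
    c_a = centroid([p for p, l in train if l == 'A'])
    c_b = centroid([p for p, l in train if l == 'B'])
    pred = 'A' if dist(pattern, c_a) < dist(pattern, c_b) else 'B'
    correct += (pred == label)

accuracy = correct / len(trials)
print(f"decoding accuracy: {accuracy:.2f}")
```

If the decoding accuracy exceeds the 50% chance level, the region's activity patterns carry choice information; in the study's logic, which regions pass this test shifts from sensory to parietal cortex as visibility drops.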

Perceptual learning and decision-making in human medial frontal cortex.

Kahnt T, Grueschow M, Speck O, Haynes JD.

Published in Neuron. 2011 May 12;70(3):549-59.

The dominant view that perceptual learning is accompanied by changes in early sensory representations has recently been challenged. Here we tested the idea that perceptual learning can be accounted for by reinforcement learning involving changes in higher decision-making areas. We trained subjects on an orientation discrimination task with feedback over four days, acquiring fMRI data on the first and last days. Behavioral improvements were well explained by a reinforcement learning model in which learning leads to enhanced readout of sensory information, thereby establishing noise-robust representations of decision variables. We found stimulus orientation encoded in early visual cortex and in higher cortical regions such as lateral parietal cortex and anterior cingulate cortex (ACC). However, only activity patterns in the ACC tracked changes in decision variables during learning. These results provide strong evidence for perceptual learning-related changes in higher-order areas and suggest that perceptual and reward learning are based on a common neurobiological mechanism.
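The core idea, that feedback improves the readout of sensory information rather than the sensory representation itself, can be illustrated with a toy model. Everything below is a hedged sketch, not the paper's fitted model: channel counts, the learning rate, and the delta-rule update are all hypothetical choices.

```python
# Illustrative sketch (an assumption, not the paper's exact model): feedback
# drives a reward-prediction-error update of the readout weights applied to
# noisy sensory channels, while the channels themselves stay unchanged.
import random

random.seed(1)

N_CHANNELS = 8
ALPHA = 0.05                     # learning rate (hypothetical)
# Only channel 0 carries the orientation signal; the rest are noise.
informative = [1.0] + [0.0] * (N_CHANNELS - 1)
w = [random.gauss(0, 0.1) for _ in range(N_CHANNELS)]  # initial readout

def sense(stim):  # stim is +1 or -1 (clockwise / counterclockwise tilt)
    return [stim * informative[i] + random.gauss(0, 1.0)
            for i in range(N_CHANNELS)]

def run_block(learn, n=500):
    global w
    correct = 0
    for _ in range(n):
        stim = random.choice([+1, -1])
        x = sense(stim)
        dv = sum(wi * xi for wi, xi in zip(w, x))   # decision variable
        choice = +1 if dv > 0 else -1
        reward = 1.0 if choice == stim else 0.0
        if learn:
            # Delta rule: the reward prediction error (relative to a fixed
            # 0.5 baseline) scales each channel's pull on the readout.
            pe = reward - 0.5
            w = [wi + ALPHA * pe * choice * xi for wi, xi in zip(w, x)]
        correct += (choice == stim)
    return correct / n

before = run_block(learn=False)
run_block(learn=True, n=2000)        # training with feedback
after = run_block(learn=False)
print(f"accuracy before: {before:.2f}, after: {after:.2f}")
```

Over training, the weight on the informative channel grows while the noise channels average out, so the decision variable becomes more noise-robust and discrimination accuracy improves, without any change to the sensory responses themselves.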

Decoding different roles for vmPFC and dlPFC in multi-attribute decision making.

Kahnt T, Heinzle J, Park SQ, Haynes JD.

Published in NeuroImage. 2011 May 15;56(2):709-15.

In everyday life, successful decision making requires precise representations of expected values. However, for most behavioral options more than one attribute can be relevant for predicting the expected reward. Thus, to make good or even optimal choices, the reward predictions of multiple attributes need to be integrated into a combined expected value. Importantly, the individual attributes of such multi-attribute objects can agree or disagree in their reward prediction. Here we address where the brain encodes the combined reward prediction (averaged across attributes) and where it encodes the variability of the value predictions of the individual attributes. We acquired fMRI data while subjects performed a task in which they had to integrate reward predictions from multiple attributes into a combined value. Using time-resolved pattern recognition techniques (support vector regression), we found that (1) the combined value is encoded in distributed fMRI patterns in the ventromedial prefrontal cortex (vmPFC) and that (2) the variability of the value predictions of the individual attributes is encoded in the dorsolateral prefrontal cortex (dlPFC). The combined value could be used to guide choices, whereas the variability of the value predictions of individual attributes indicates an ambiguity that makes value integration more difficult. These results demonstrate that the different features defining multi-attribute objects are encoded in non-overlapping brain regions and therefore suggest different roles for vmPFC and dlPFC in multi-attribute decision making.
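The two decoded quantities are simple summary statistics of an option's attribute-wise reward predictions: their mean (the combined value) and their spread (the attribute variability). The attribute values below are hypothetical, chosen only to show that two options can share a combined value while differing in how much their attributes disagree.

```python
# Illustrative sketch of the two quantities the study decodes: the combined
# value (mean across attribute-wise reward predictions) and the variability
# of those predictions. The example options are hypothetical.
from statistics import mean, pstdev

def combined_value(attribute_values):
    """Average reward prediction across an option's attributes."""
    return mean(attribute_values)

def prediction_variability(attribute_values):
    """How much the attributes disagree in their reward predictions."""
    return pstdev(attribute_values)

# Two options with the same combined value but different attribute agreement:
agreeing    = [0.6, 0.6]   # both attributes predict moderate reward
conflicting = [1.0, 0.2]   # attributes disagree -> harder value integration

for label, option in [("agreeing", agreeing), ("conflicting", conflicting)]:
    print(f"{label}: combined value = {combined_value(option):.2f}, "
          f"variability = {prediction_variability(option):.2f}")
```

In the study's terms, the first quantity is what vmPFC patterns were found to encode and the second is what dlPFC patterns were found to encode, which is why the two options above would be equally valuable yet unequally difficult to integrate.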