November 28, 2019
CiNet 1F Conference Room
National Institute for Physiological Sciences
Host: Shinji Nishimoto (PI)
Information about the external environment is represented in multiple domains in our brains, and the form of representation varies across regions: visual features of objects, such as orientation, spatial frequency, and color, are represented in the primary visual cortex, while integrated object information (e.g., the visual image of a face or a chair) is represented in the temporal cortex. In this presentation, I will show recent findings on the neural representation of value and taste quality. By applying multivoxel pattern analysis to functional MRI data obtained in visual and gustatory experiments, I revealed modality-independent value representations in the orbitofrontal cortex (Chikazoe et al., Nature Neuroscience, 2014), as well as local integration of gustatory quality information (e.g., sour and salty) in the primary gustatory cortex, the insula (Chikazoe et al., Nature Communications, 2019).

Furthermore, I will describe an ongoing study investigating how value emerges from visual information in the brain, using a deep neural network as a model of the vision-to-value transformation. By applying representational similarity analysis, we searched for brain regions corresponding to the respective layers of the network, and found that broad regions, including the insula and medial prefrontal cortex, are associated with intermediate representations between vision and value, rather than with value per se. These results suggest that the combination of fMRI and deep learning can be a useful tool for elucidating unknown mechanisms of information transformation in the brain. The presentation will close with a discussion of potential clinical applications of neuroimaging in psychiatric diagnosis as my future research direction.
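For readers unfamiliar with representational similarity analysis, the core comparison can be sketched in a few lines. The example below is a minimal illustration with synthetic data, not the study's actual pipeline: the array sizes, noise level, and the linear mapping from "layer activations" to "voxel patterns" are all assumptions made for demonstration.

```python
# Minimal RSA sketch: compare a model RDM (from DNN-layer activations)
# with a brain RDM (from multivoxel response patterns). All data synthetic.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli, n_units, n_voxels = 20, 50, 100

# Hypothetical DNN-layer activations for each stimulus
layer_acts = rng.normal(size=(n_stimuli, n_units))

# Hypothetical voxel patterns: a noisy linear readout of the layer
brain_patterns = (layer_acts @ rng.normal(size=(n_units, n_voxels))
                  + rng.normal(scale=5.0, size=(n_stimuli, n_voxels)))

# Representational dissimilarity matrices (condensed upper triangles),
# using correlation distance between stimulus-evoked patterns
model_rdm = pdist(layer_acts, metric="correlation")
brain_rdm = pdist(brain_patterns, metric="correlation")

# Spearman correlation between the two RDMs quantifies how similar the
# layer's representational geometry is to the brain region's geometry
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RSA correlation: {rho:.2f}")
```

In the actual analysis this comparison would be repeated for every network layer and every candidate brain region (e.g., via a searchlight), and the layer with the highest correlation indicates which stage of the vision-to-value transformation that region most resembles.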