Natural visual scenes induce rich perceptual experiences that vary widely from scene to scene and from person to person. In a new article published in NeuroImage, CiNet scientists Satoshi Nishida and Shinji Nishimoto propose a new framework for decoding such experiences from human brain activity using a distributed representation of words.
Nishida and Nishimoto used functional magnetic resonance imaging (fMRI) to measure brain activity evoked by natural movie scenes. They then constructed a high-dimensional feature space of perceptual experiences using skip-gram, a state-of-the-art distributed word embedding model, and built a decoder that associates brain activity with perceptual experiences via this distributed word representation. The decoder successfully estimated perceptual contents consistent with scene descriptions produced by multiple annotators.
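The article itself does not include code, but the overall pipeline can be sketched roughly as follows. This is a minimal illustration, assuming gensim (≥4.0) for the skip-gram embedding and a simple ridge regression as the decoder; the toy corpus, the random voxel data, and names such as fmri_responses and scene_vector are hypothetical placeholders, not the authors' actual training procedure or regularization.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import Ridge

# --- Step 1: build a distributed word representation (skip-gram). ---
# `corpus` is a hypothetical toy list of tokenized annotator sentences;
# the study trained skip-gram on a large text corpus to obtain vectors
# covering roughly 10,000 vocabulary words.
corpus = [["a", "man", "walks", "a", "dog"],
          ["a", "red", "car", "drives", "fast"]]
embedding = Word2Vec(corpus, vector_size=100, sg=1, window=5, min_count=1)

# --- Step 2: represent each movie scene in the word-vector space. ---
# Each scene's description is collapsed into one target vector by
# averaging the embeddings of its words.
def scene_vector(description_tokens, model):
    vecs = [model.wv[w] for w in description_tokens if w in model.wv]
    return np.mean(vecs, axis=0)

# Hypothetical training data: fMRI response patterns (scenes x voxels)
# paired with the matching scene descriptions.
fmri_responses = np.random.randn(2, 5000)  # placeholder voxel data
targets = np.stack([scene_vector(s, embedding) for s in corpus])

# --- Step 3: fit a linear decoder from brain activity to word vectors. ---
decoder = Ridge(alpha=1.0)
decoder.fit(fmri_responses, targets)

# Decoding a new scene yields a point in the word-embedding space.
decoded_vec = decoder.predict(fmri_responses[:1])[0]
```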
Their results illustrate three advantages of the new decoding framework: (1) three types of perceptual contents could be decoded, in the form of nouns (objects), verbs (actions), and adjectives (impressions) drawn from a 10,000-word vocabulary; (2) despite this large vocabulary, the decoder could recover novel words that were absent from the datasets used to train it (see the sketch after this paragraph); and (3) the inter-individual variability of the decoded contents co-varied with that of the scene descriptions. These findings suggest that the decoding framework can recover diverse aspects of perceptual experiences in naturalistic situations and could be useful in a range of scientific and practical applications.
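Advantage (2) follows from the geometry of the shared embedding space: because the decoder outputs a vector, any word with a known embedding, including words never seen during decoder training, can be ranked by similarity to that output. A minimal sketch, continuing the hypothetical decoded_vec and embedding objects from above:

```python
# Rank the full vocabulary (including words absent from decoder training)
# by cosine similarity to the decoded vector.
def nearest_words(vec, model, topn=5):
    vocab = list(model.wv.index_to_key)
    mat = model.wv[vocab]                      # (vocab_size, dim)
    sims = mat @ vec / (np.linalg.norm(mat, axis=1) * np.linalg.norm(vec))
    order = np.argsort(-sims)[:topn]
    return [(vocab[i], float(sims[i])) for i in order]

print(nearest_words(decoded_vec, embedding))
```

gensim's KeyedVectors.similar_by_vector performs the same lookup; the manual version above just makes the cosine ranking explicit.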
“Decoding naturalistic experiences from human brain activity via distributed representations of words”
Satoshi Nishida and Shinji Nishimoto
NeuroImage 2017, in press
doi: