68th CiNet Monthly Seminar: David Whitney “The aperture problem of emotion perception”

CiNet Monthly Seminar (held in English)
Venue: Main Conference Room, CiNet Building

Friday, June 7, 2024
16:00-17:00

“The aperture problem of emotion perception”

University of California, Berkeley, USA
Department of Psychology
Professor
David Whitney

Host: 村井 祐基

Abstract:
Understanding emotion is of paramount importance for humans. Although most work on emotion recognition focuses on face perception, my lab has taken a different approach and shown that there is a fundamental aperture problem in emotion perception. We developed a novel inferential emotion tracking (IET) task to measure observers’ abilities to track emotion information in natural movies when faces were completely masked and only the background context was visible (Chen & Whitney, PNAS, 2019). We found that observers use the spatial and temporal context to perceive emotion and that relying on face information alone is misleading, highlighting an aperture problem in emotion perception (Chen & Whitney, Emotion, 2020). Our finding that the use of context in emotion perception is not delayed relative to faces (Chen & Whitney, Cognition, 2021) indicates parallel pathways for face-based and context-based emotion recognition. The brain solves the emotion aperture problem by integrating different cues (i.e., background context and face information) but does so in a heuristic or naïve Bayesian manner, which, surprisingly, does not weigh cue reliability (Ortega et al, in revision). Some atypical populations, including those with autism, do not integrate background context information successfully, and because of this are not as sensitive or accurate in emotion recognition tasks (Ortega et al, Sci Reports, 2023). Indeed, individual differences reveal that those observers who are better able to code and incorporate the background context are more successful at accurately tracking the emotions of others (Ortega & Whitney, VSS, 2023). The aperture problem extends beyond just emotion perception to all visuo-social understanding. As an example, we have used it to measure the trustworthiness of faces in movies (Ortega, et al., submitted). 
Collectively, our research demonstrates a fundamental aperture problem in social and emotional perception, and it reveals why computer vision models of emotion and trustworthiness fail so spectacularly when tested on naturally dynamic scenes: they overemphasize facial expression information at the expense of dynamic background context. To address this shortcoming, we created the largest psychophysical dataset of continuous emotion tracking in natural movies (Ren et al., IEEE/CVF, 2024), which serves as a benchmark for improving computer vision models of emotion perception, diagnostic testing in atypical populations, and computational models of emotion recognition.