Ikuhisa Mitsugami: “Gaze from Head: Gaze Estimation without Observing Eyes”

April 20, 2018  Friday Lunch Seminar
12:15 〜 13:00

CiNet 1F Conference Room

“Gaze from Head: Gaze Estimation without Observing Eyes”

Ikuhisa Mitsugami

Associate Professor
Dept. of Systems Engineering
Graduate School of Information Sciences
Hiroshima City University

Host PI :  Noriko Yamagishi

We propose a novel approach to gaze estimation that does not require direct observation of the eyes. Humans commonly exhibit this ability in everyday life, judging the gaze direction of a person at a distance even when the eyes cannot be seen. Physiological research suggests that this ability relies on implicit knowledge of the coordination of the eyes and head. We propose a method that collects information about the motion of the eyes and head and trains a system to extract their temporal relations as a measure of eye-head coordination. This learned coordination is then used to estimate gaze direction from the head pose sequence alone.


In this talk, I present the following two topics from my recent work. (1) Phantom movement of a paralyzed hand is clinically important, but the biomarker representing the subjective mobility of the phantom hand remains unclear. Our previous study suggested that the accuracy of movement decoding from sensorimotor cortical potentials measured by electrocorticography (ECoG) was higher for paralyzed patients who could move the phantom hand easily than for those who could not move it subjectively. Here, we hypothesized that the accuracy of decoding phantom movements of a paralyzed hand from magnetoencephalography (MEG) represents the subjective mobility of the phantom hand. Hand posture (grasp or open) was inferred by decoding cortical potentials estimated in the sensorimotor cortices contralateral to the phantom hand. Decoding accuracy was negatively correlated with the time needed to grasp and open the phantom hand. (2) Recent studies using functional magnetic resonance imaging (fMRI) have enabled quantitative evaluation of the semantic space during processing of visual stimuli. In the semantic space of the natural language processing model Word2Vec, decoders were shown to generalize to natural movie scenes that were not included in the decoders' training data. Combined with ECoG, which has a higher sampling rate than fMRI, this approach is expected to aid the development of a practical brain-machine interface. Here, we decoded vector representations of scenes within the Word2Vec semantic space to assess whether a decoder trained on ECoG features still generalizes to words new to the decoder.

About CiNet’s Friday Lunch Seminars:
The Friday Lunch Seminar is CiNet's main regular meeting series, held every week at 12:15 in the beautiful main lecture theatre on the ground floor at CiNet. The talks are typically 40 minutes long and oriented towards an interdisciplinary audience. They are informal and social, and most people bring their own lunch to eat during the talk. The seminars are open to anyone who is curious and wants to come, regardless of where they work.