<Open to CiNet members>74th CiNet Monthly Seminar: Michael J. Frank “From Descriptive to Normative Accounts of Frontostriatal Control” & Thomas Serre “Aligning deep networks with human vision will require novel neural architectures, data diets and training algorithms”

CiNet Monthly Seminar (held in English)

Friday, March 14, 2025
15:00–17:00
Venue: Main Conference Room, CiNet Building


1. “From Descriptive to Normative Accounts of Frontostriatal Control”

Brown University, USA
Professor of Brain Science
Edgar L. Marston Professor of Psychology
Michael J. Frank

Abstract:
The basal ganglia and dopaminergic (DA) systems are well studied for their roles in reinforcement learning, but the underlying architecture is notoriously complex. First, I will present a computational account of how this complexity is optimized to provide robust advantages over traditional reinforcement learning models across a range of environments, and suggest that empirical observations of altered learning and decision making in patient populations reflect a byproduct of an otherwise normative mechanism. Second, I will show how this system, when interacting with prefrontal cortex, can learn to influence cognitive actions such as working memory updating and “chunking” strategies that are adapted as a function of task demands, mimicking human performance and normative models.



2. “Aligning deep networks with human vision will require novel neural architectures, data diets and training algorithms”

Brown University, USA
Departments of Cognitive & Psychological Sciences and Computer Science
Carney Institute for Brain Science
Thomas J. Watson, Sr. Professor of Science
Thomas Serre

Abstract:
Recent advances in artificial intelligence have been driven mainly by the rapid scaling of deep neural networks (DNNs), which now contain unprecedented numbers of learnable parameters and are trained on massive datasets covering large portions of the internet. This scaling has enabled DNNs to develop visual competencies that approach human levels. However, even the most sophisticated DNNs still exhibit strange, inscrutable failures that diverge markedly from human-like behavior, a misalignment that seems to worsen as models grow in scale.

In this talk, I will discuss recent work from our group addressing this misalignment via the development of DNNs that mimic human perception by incorporating computational, algorithmic, and representational principles fundamental to natural intelligence. First, I will review our ongoing efforts in characterizing human visual strategies in image categorization tasks and contrasting these strategies with modern deep nets. I will present initial results suggesting we must explore novel data regimens and training algorithms for deep nets to learn more human-like visual representations. Second, I will show results suggesting that neural architectures inspired by cortex-like recurrent neural circuits offer a compelling alternative to the prevailing transformers, particularly for tasks requiring visual reasoning beyond simple categorization.



Host: Masahiko Haruno (春野 雅彦)