{"id":4786,"date":"2025-02-18T14:52:25","date_gmt":"2025-02-18T05:52:25","guid":{"rendered":"http:\/\/cinetjp-static3.nict.go.jp\/japanese\/?post_type=event&p=4786"},"modified":"2025-03-11T16:55:36","modified_gmt":"2025-03-11T07:55:36","slug":"20250314_5783","status":"publish","type":"event","link":"http:\/\/cinetjp-static3.nict.go.jp\/japanese\/event\/20250314_5783\/","title":{"rendered":"\uff1cCiNet \u30e1\u30f3\u30d0\u30fc\u3092\u5bfe\u8c61\u306b\u958b\u50ac\uff1e74th CiNet Monthly Seminar: Michael J. Frank \u201cFrom Descriptive to Normative Accounts of Frontostriatal Control\u201d & Thomas Serre \u201cAligning deep networks with human vision will require novel neural architectures, data diets and training algorithms\u201d"},"content":{"rendered":"\n
CiNet Monthly Seminar (held in English)

Friday, March 14, 2025
15:00-17:00
The seminar will be held in the Large Conference Room of the CiNet Building.

1. "From Descriptive to Normative Accounts of Frontostriatal Control"
Brown University, USA
Professor of Brain Science
Edgar L. Marston Professor of Psychology
Michael J. Frank
Abstract:
The basal ganglia and dopaminergic (DA) systems are well studied for their roles in reinforcement learning, but the underlying architecture is notoriously complex. First, I will present a computational account of how this complexity is optimized to provide robust advantages over traditional reinforcement learning models across a range of environments, and suggest that empirical observations of altered learning and decision making in patient populations reflect a byproduct of an otherwise normative mechanism. Second, I will show how this system, when interacting with prefrontal cortex, can learn to influence cognitive actions such as working-memory updating and "chunking" strategies that are adapted as a function of task demands, mimicking human performance and normative models.

2. "Aligning deep networks with human vision will require novel neural architectures, data diets and training algorithms"
Brown University, USA
Departments of Cognitive & Psychological Sciences and Computer Science
Carney Institute for Brain Science
Thomas J. Watson, Sr. Professor of Science
Thomas Serre
Abstract:
Recent advances in artificial intelligence have been driven mainly by the rapid scaling of deep neural networks (DNNs), which now contain unprecedented numbers of learnable parameters and are trained on massive datasets covering large portions of the internet. This scaling has enabled DNNs to develop visual competencies that approach human levels. However, even the most sophisticated DNNs still exhibit strange, inscrutable failures that diverge markedly from human-like behavior, a misalignment that seems to worsen as models grow in scale.

In this talk, I will discuss recent work from our group addressing this misalignment via the development of DNNs that mimic human perception by incorporating computational, algorithmic, and representational principles fundamental to natural intelligence. First, I will review our ongoing efforts to characterize human visual strategies in image-categorization tasks and to contrast these strategies with those of modern deep nets. I will present initial results suggesting that we must explore novel data regimens and training algorithms for deep nets to learn more human-like visual representations. Second, I will show results suggesting that neural architectures inspired by cortex-like recurrent neural circuits offer a compelling alternative to the prevailing transformers, particularly for tasks requiring visual reasoning beyond simple categorization.
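The first abstract describes an opponent frontostriatal architecture that outperforms standard reinforcement learning across environments. The abstract does not give the model; the following is a minimal sketch, assuming an opponent Go/NoGo (D1/D2-like) actor-critic of the kind studied in this literature, applied to a toy two-armed bandit. All parameter names and values are illustrative assumptions, not the speaker's actual model.

```python
import numpy as np

# Toy two-armed bandit: arm 0 pays off 80% of the time, arm 1 only 20%.
rng = np.random.default_rng(0)
p_reward = [0.8, 0.2]

alpha_c = 0.1            # critic learning rate
alpha_g = alpha_n = 0.1  # Go / NoGo actor learning rates
beta_g = beta_n = 1.0    # pathway gains (an asymmetry here would model DA state)

V = 0.0                  # critic: running estimate of expected reward
G = np.ones(2)           # Go (D1-like) weights, one per action
N = np.ones(2)           # NoGo (D2-like) weights, one per action

def act_values():
    # Net action propensity: Go excitation minus NoGo suppression.
    return beta_g * G - beta_n * N

def choose():
    a = act_values()
    p = np.exp(a - a.max())
    p /= p.sum()
    return rng.choice(2, p=p)

for _ in range(2000):
    a = choose()
    r = float(rng.random() < p_reward[a])
    delta = r - V                        # dopaminergic prediction error
    V += alpha_c * delta
    # Three-factor updates: the weight change scales with the weight itself,
    # so Go weights come to emphasize benefits and NoGo weights costs.
    G[a] = max(G[a] + alpha_g * G[a] * delta, 0.01)
    N[a] = max(N[a] - alpha_n * N[a] * delta, 0.01)
```

After training, the net Go-minus-NoGo propensity favors the richer arm; separating the two pathways is what lets gain parameters like `beta_g`/`beta_n` re-weight benefits against costs without relearning.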
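The second abstract argues that cortex-like recurrent circuits suit visual reasoning better than feedforward or transformer stacks. A toy illustration of that computational point, not the speaker's architecture: in Pathfinder-style contour-grouping tasks, a recurrent circuit can answer a long-range connectivity question using only local operations iterated over time, whereas a fixed-depth feedforward net must unroll that propagation explicitly. The function and image below are hypothetical.

```python
import numpy as np

def trace_contour(image, seed, steps):
    """Iteratively propagate activity from a seed pixel along a contour mask.

    Each step applies only a local 3x3 operation, but recurrence lets the
    activity travel arbitrarily far along the contour.
    """
    h, w = image.shape
    state = np.zeros_like(image, dtype=float)
    state[seed] = 1.0
    for _ in range(steps):
        # Local spread: max over the 3x3 neighborhood of every pixel.
        padded = np.pad(state, 1)
        spread = np.max(
            [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)],
            axis=0,
        )
        state = spread * image   # gate by the contour: activity stays on the path
    return state

# Two disjoint contours; which one is connected to the seed?
img = np.zeros((8, 8))
img[0, 0:5] = 1   # contour A, along row 0
img[7, 0:5] = 1   # contour B, along row 7
out = trace_contour(img, seed=(0, 0), steps=10)
```

Activity reaches the far end of contour A but never touches contour B, so a single readout unit can report same-contour membership; the number of recurrent steps, not network depth, sets the reachable distance.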