{"id":1720,"date":"2019-02-20T16:58:00","date_gmt":"2019-02-20T07:58:00","guid":{"rendered":"http:\/\/cinetjp-static3.nict.go.jp\/english\/?p=1720"},"modified":"2022-09-19T21:10:30","modified_gmt":"2022-09-19T12:10:30","slug":"5th_cinetconf","status":"publish","type":"event","link":"http:\/\/cinetjp-static3.nict.go.jp\/english\/event\/5th_cinetconf\/","title":{"rendered":"The 5th CiNet Conference: Computation and representation in brains and machines (Registration-closed)"},"content":{"rendered":"\n
\n
\n

Recent advances in machine learning and artificial intelligence have enabled a quantitative and generalizable understanding of representations and computations in the brain. This interdisciplinary trend also opens new opportunities for lively discussion on topics such as the effective handling and interpretation of large-scale models and data, the design of brain-inspired machine intelligence, and real-world applications. This conference aims to bring together cognitive and systems neuroscientists as well as AI researchers to discuss cutting-edge findings and future directions by cross-referencing studies of brains and machines.<\/p>\n\n\n\n

Presentations and slides partially available<\/strong><\/a><\/p>\n\n\n\n

Date:
<\/strong>     February 20 (Wed.), 1:30 pm \u2013 February 22 (Fri.), 6:00 pm, 2019<\/h4>\n\n\n\n

Venue:
<\/strong>     Conference Room, CiNet Bldg. 
(1-4 Yamadaoka, Suita, Osaka, Japan)<\/a><\/p>\n<\/div>\n\n\n\n

\n
\"\"\/<\/a><\/figure>\n<\/div>\n<\/div>\n\n\n\n

Registration:
<\/strong>    Closed<\/p>\n\n\n\n

Program:<\/strong> download<\/a><\/p>\n\n\n\n

Speakers:
<\/strong>     Shun-ichi Amari, RIKEN
        \u201cStatistical Neurodynamics of Deep Networks: Signal Propagation and Fisher Information\u201d
     Matthew Botvinick, DeepMind
        \u201cA distributional code for value in dopaminergic reinforcement learning\u201d
     David Cox, MIT-IBM Watson AI Lab\/Harvard University
        \u201cPredictive Coding Models of Perception\u201d
     Dileep George, Vicarious
        \u201cUnderstanding the brain by building machines that generalize like the brain\u201d
     Marcel van Gerven, Radboud University
        \u201cAI-Driven Neuroscience\u201d
     Iris Groen, New York University
        \u201cMind the gap: comparing multiple models of scene representation in brain and behavior\u201d
     Michael Hanke, Otto-von-Guericke University\/Center for Behavioral Brain Sciences
        \u201cPerpetual, decentralized management of digital objects for collaborative open science \u2013 conclusions from the studyforrest.org project\u201d
     Uri Hasson, Princeton University
        \u201cRobust-fit to nature: taking evolutionary perspective for biological (and artificial) neural networks\u201d
     Aapo Hyv\u00e4rinen, University College London\/University of Helsinki
        \u201cNonlinear independent component analysis: A principled framework for unsupervised learning\u201d
     Yukiyasu Kamitani, Kyoto University\/ATR Computational Neuroscience Laboratories
        \u201cDeep image reconstruction from the human brain\u201d
     Shigeru Kitazawa, Osaka University\/CiNet
        \u201cError signals in reaching: neural representations and their roles in optimizing the movement\u201d
     Nikolaus Kriegeskorte, Columbia University
        \u201cCognitive computational neuroscience of vision\u201d
     Jun Morimoto, ATR\/CiNet
        \u201cModel-based approaches to humanoid motor learning\u201d
     Tomoya Nakai, NICT\/CiNet
        \u201cQuantitative models reveal the structure and organization of diverse cognitive functions in the human brain\u201d
     Satoshi Nishida, NICT\/CiNet
        \u201cBrain decoding of human natural perception using statistical language modeling\u201d
     Shinji Nishimoto, NICT\/CiNet
        \u201cRepresentation and computation in brains and machines\u201d
     Ana Lu\u00edsa Pinho, Inria, CEA, Paris-Saclay University
        \u201cIndividual Brain Charting, a high-resolution fMRI dataset for cognitive mapping of the human brain\u201d
     Odelia Schwartz, University of Miami
        \u201cImage statistics and cortical visual processing: V1, V2, and deep learning\u201d
     Taro Toyoizumi, RIKEN
        \u201cAn Optimization Approach to Understand Biological Search\u201d
     Kai Wang, NEC Corporation
        \u201cExperimental Platform for brain function model design\u201d
     Dan Yamins, Stanford University
        \u201cCognitively Inspired Artificial Intelligence for Neuroscience\u201d
     Takufumi Yanagisawa, Osaka University\/CiNet
        \u201cSemantic decoding of visual stimulus using electrocorticogram and application for BCI\u201d<\/p>\n\n\n\n

Organizers:
<\/strong>     National Institute of Information and Communications Technology (NICT)
     Center for Information and Neural Networks (CiNet)<\/p>\n\n\n\n

Sponsors:
<\/strong>     Grant-in-Aid for Scientific Research on Innovative Areas, MEXT, Japan
        \u201cChronogenesis: How the Mind Generates Time\u201d
     NEC Corporation
     NTT Data Institute of Management Consulting, Inc.<\/p>\n\n\n\n

Financial support:
<\/strong>     Ichimura Foundation for New Technology<\/p>\n\n\n\n

Meeting Chair: <\/strong>Shinji Nishimoto (NICT\/CiNet)
Co-chair: <\/strong>Shigeru Kitazawa (Osaka University\/CiNet), Takafumi Suzuki (NICT\/CiNet)
Meeting Director: <\/strong>Takahisa Taguchi (NICT\/CiNet)<\/p>\n\n\n\n

Language: <\/strong>English
Seating capacity: <\/strong>130<\/p>\n\n\n\n

Inquiries to: reg@ml.nict.go.jp<\/a>
(Japanese or English)<\/p>\n","protected":false},"featured_media":0,"template":"","acf":[],"_links":{"self":[{"href":"http:\/\/cinetjp-static3.nict.go.jp\/english\/wp-json\/wp\/v2\/event\/1720"}],"collection":[{"href":"http:\/\/cinetjp-static3.nict.go.jp\/english\/wp-json\/wp\/v2\/event"}],"about":[{"href":"http:\/\/cinetjp-static3.nict.go.jp\/english\/wp-json\/wp\/v2\/types\/event"}],"wp:attachment":[{"href":"http:\/\/cinetjp-static3.nict.go.jp\/english\/wp-json\/wp\/v2\/media?parent=1720"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}