{"id":3602,"date":"2023-09-01T15:06:58","date_gmt":"2023-09-01T06:06:58","guid":{"rendered":"http:\/\/cinetjp-static3.nict.go.jp\/japanese\/?post_type=event&p=3602"},"modified":"2023-11-15T13:11:10","modified_gmt":"2023-11-15T04:11:10","slug":"20230922_1550","status":"publish","type":"event","link":"http:\/\/cinetjp-static3.nict.go.jp\/japanese\/event\/20230922_1550\/","title":{"rendered":"Friday Lunch Seminar\uff1con-line\u00a0\u958b\u50ac\uff1e \u68ee\u5ca1 \u535a\u53f2 \uff1a”Identifiable nonlinear representation learning and some applications”"},"content":{"rendered":"\n

September 22, 2023  Friday Lunch Seminar (held in English)
12:15 \u2013 13:00
The seminar will be held on-line.
\u2192 Registration: here
(Deadline: noon, September 21. Participation details will be sent by e-mail on September 21.)<\/p>\n\n\n\n

Title: Identifiable nonlinear representation learning and some applications<\/p>\n\n\n\n

RIKEN
Center for Advanced Intelligence Project (AIP)
Researcher
Hiroshi Morioka<\/p>\n\n\n\n

Host PI: Okito Yamashita<\/a><\/p>\n\n\n\n

Abstract:
Revealing the fundamental representations (latent components) that generate observational data in a data-driven manner is called representation learning, and it has a long history that includes methods such as principal component analysis (PCA) and independent component analysis (ICA). In recent years, many deep-learning-based frameworks, including variational autoencoders (VAEs) and generative adversarial networks (GANs), have been proposed to extend these methods to nonlinear cases. However, such nonlinear representation learning is in general ill-posed, and there is no theoretical guarantee that these methods can estimate the \u201ctrue\u201d components. In this talk, I will introduce our work on nonlinear independent component analysis (NICA) and explain how the problem can be made identifiable by placing some assumptions on the latent components. I will also introduce some recent extensions of NICA to dynamical models and causal discovery, with some applications to neuroimaging data.<\/p>\n\n\n\n


<\/p>\n","protected":false},"featured_media":0,"template":"","acf":[],"_links":{"self":[{"href":"http:\/\/cinetjp-static3.nict.go.jp\/japanese\/wp-json\/wp\/v2\/event\/3602"}],"collection":[{"href":"http:\/\/cinetjp-static3.nict.go.jp\/japanese\/wp-json\/wp\/v2\/event"}],"about":[{"href":"http:\/\/cinetjp-static3.nict.go.jp\/japanese\/wp-json\/wp\/v2\/types\/event"}],"wp:attachment":[{"href":"http:\/\/cinetjp-static3.nict.go.jp\/japanese\/wp-json\/wp\/v2\/media?parent=3602"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}