Latent Variable
Latent variable modeling aims to uncover hidden factors underlying observed data, improving understanding of complex systems and enabling more accurate predictions. Current research focuses on developing robust and efficient algorithms for inferring these latent variables, particularly within variational autoencoders (VAEs), diffusion models, and generative adversarial networks (GANs), often incorporating techniques like disentanglement and causal discovery. These advancements are impacting diverse fields, from medical diagnostics (integrating genomic and imaging data) to recommender systems (mitigating bias) and neuroscience (interpreting neural activity), by providing more interpretable and informative representations of complex datasets.
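As a concrete illustration of how latent variables are inferred in VAEs (a generic sketch, not the method of any paper listed below), the two ingredients are the reparameterization trick, which makes sampling the latent `z` differentiable, and the closed-form KL term of the evidence lower bound that keeps the approximate posterior close to a standard-normal prior:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Draw z = mu + sigma * eps with eps ~ N(0, I).

    Writing the sample this way keeps it differentiable with respect
    to mu and log_var, which is what lets a VAE train by gradient descent.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims.

    This is the regularization term of the ELBO; it is zero exactly when
    the approximate posterior equals the standard-normal prior.
    """
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu = np.array([0.5, -0.2])        # encoder-predicted posterior mean (toy values)
log_var = np.array([0.0, -1.0])   # encoder-predicted log-variance (toy values)

z = reparameterize(mu, log_var, rng)   # one differentiable latent sample
kl = kl_to_standard_normal(mu, log_var)
```

In a full VAE these `mu` and `log_var` values would come from an encoder network, and the KL term would be added to a reconstruction loss; the sketch above isolates only the latent-variable machinery.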
Papers
Sparsity regularization via tree-structured environments for disentangled representations
Elliot Layne, Jason Hartford, Sébastien Lachapelle, Mathieu Blanchette, Dhanya Sridhar
Unraveling the Impact of Heterophilic Structures on Graph Positive-Unlabeled Learning
Yuhao Wu, Jiangchao Yao, Bo Han, Lina Yao, Tongliang Liu
Local Causal Structure Learning in the Presence of Latent Variables
Feng Xie, Zheng Li, Peng Wu, Yan Zeng, Chunchen Liu, Zhi Geng
Deep Causal Generative Models with Property Control
Qilong Zhao, Shiyu Wang, Guangji Bai, Bo Pan, Zhaohui Qin, Liang Zhao
From Orthogonality to Dependency: Learning Disentangled Representation for Multi-Modal Time-Series Sensing Signals
Ruichu Cai, Zhifang Jiang, Zijian Li, Weilin Chen, Xuexin Chen, Zhifeng Hao, Yifan Shen, Guangyi Chen, Kun Zhang
Causal Effect Identification in a Sub-Population with Latent Variables
Amir Mohammad Abouei, Ehsan Mokhtarian, Negar Kiyavash, Matthias Grossglauser
Poisson Variational Autoencoder
Hadi Vafaii, Dekel Galor, Jacob L. Yates
When predict can also explain: few-shot prediction to select better neural latents
Kabir Dabholkar, Omri Barak