Latent Causal Representation Learning
Latent causal representation learning aims to uncover hidden causal variables and their relationships from high-dimensional observational data, enabling causal inference and improved prediction across domains. Current research focuses on methods that address challenges such as non-linearity, partial observability, and unknown interventions, often employing autoencoders, variational inference, and graph neural networks to learn disentangled representations and identify causal structure. These advances improve the interpretability and robustness of machine learning models and have implications for diverse fields including healthcare, ecology, and reinforcement learning.
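To make the autoencoder/variational-inference ingredient concrete, below is a minimal sketch of a beta-VAE-style model that learns a low-dimensional latent representation from observed data. It is not the method of any paper listed here; the class name, layer sizes, and the synthetic data are illustrative assumptions, and full causal identifiability requires the additional structure (groupings, interventions, or distribution changes) studied in the papers below.

# Minimal, hedged sketch (PyTorch): variational autoencoder with a
# beta-weighted KL term, a common building block for learning
# disentangled latent representations. All names/sizes are illustrative.
import torch
import torch.nn as nn

class LatentVAE(nn.Module):
    def __init__(self, x_dim=50, z_dim=5, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)       # mean of q(z | x)
        self.logvar = nn.Linear(hidden, z_dim)   # log-variance of q(z | x)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar, beta=1.0):
    # Reconstruction error plus beta-weighted KL(q(z|x) || N(0, I));
    # beta > 1 pressures the latents toward independence (beta-VAE style).
    recon = ((x - x_hat) ** 2).sum(dim=1).mean()
    kl = 0.5 * (mu ** 2 + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()
    return recon + beta * kl

model = LatentVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 50)   # synthetic stand-in for high-dimensional observations
for step in range(200):
    x_hat, mu, logvar = model(x)
    loss = elbo_loss(x, x_hat, mu, logvar, beta=4.0)
    opt.zero_grad()
    loss.backward()
    opt.step()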
Papers
Causal Representation Learning Made Identifiable by Grouping of Observational Variables
Hiroshi Morioka, Aapo Hyvärinen
Identifiable Latent Polynomial Causal Models Through the Lens of Change
Yuhang Liu, Zhen Zhang, Dong Gong, Mingming Gong, Biwei Huang, Anton van den Hengel, Kun Zhang, Javen Qinfeng Shi
General Identifiability and Achievability for Causal Representation Learning
Burak Varıcı, Emre Acartürk, Karthikeyan Shanmugam, Ali Tajer