Causal Representation Learning
Causal representation learning aims to recover hidden causal variables and the relationships among them from high-dimensional data, enabling more robust and interpretable models. Current research focuses on algorithms and model architectures, such as variational autoencoders and graph neural networks, that can identify causal structure from observational and interventional data, often by exploiting principles like invariance and sparsity. The field matters because it promises to improve the reliability and generalizability of machine learning models across diverse domains, including healthcare, climate science, and reinforcement learning, by moving beyond correlation to the underlying causal mechanisms. Learning accurate causal representations is, in turn, a prerequisite for reliable causal inference and effective interventions in complex systems.
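To make the invariance principle mentioned above concrete, here is a minimal, self-contained sketch (not from any of the listed papers; the linear structural causal model, variable names, and intervention scheme are illustrative assumptions). In a cause → target → effect chain, the regression coefficient of the target on its causal parent stays stable when interventions change the cause's distribution across environments, while the coefficient on a downstream (anticausal) feature shifts — the kind of signal invariance-based methods exploit to distinguish causal from spurious features:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_env(n, cause_scale):
    # Hypothetical linear SCM: cause -> target -> effect.
    # Each "environment" intervenes on the cause's variance.
    cause = rng.normal(scale=cause_scale, size=n)
    target = 2.0 * cause + rng.normal(size=n)      # true causal coefficient: 2.0
    effect = -1.0 * target + rng.normal(size=n)    # downstream (anticausal) feature
    return cause, target, effect

def slope(x, y):
    # OLS slope of y regressed on x (with intercept, via centering).
    x = x - x.mean()
    return float(x @ (y - y.mean()) / (x @ x))

causal_slopes, anticausal_slopes = [], []
for cause_scale in (1.0, 2.0, 3.0):
    cause, target, effect = sample_env(50_000, cause_scale)
    causal_slopes.append(slope(cause, target))      # stable near 2.0 in every environment
    anticausal_slopes.append(slope(effect, target)) # drifts as the intervention changes

print(causal_slopes)
print(anticausal_slopes)
```

Running this shows the causal slopes clustering around 2.0 in every environment, while the anticausal slopes change with the intervention strength; an invariance-based learner would therefore keep the causal feature and discard the unstable one.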
Papers
Towards Generalizable Reinforcement Learning via Causality-Guided Self-Adaptive Representations
Yupei Yang, Biwei Huang, Fan Feng, Xinyue Wang, Shikui Tu, Lei Xu
DiffusionCounterfactuals: Inferring High-dimensional Counterfactuals with Guidance of Causal Representations
Jiageng Zhu, Hanchen Xie, Jiazhi Li, Wael Abd-Almageed