Latent Causal Representation

Latent causal representation learning aims to discover the hidden causal structure underlying observed data, focusing on identifying the latent variables and the causal relationships among them. Current research emphasizes theoretical conditions for identifiability—guarantees that the learned representation recovers the true causal mechanisms—under various model assumptions, including linear Gaussian models, polynomial models, and models in which observed variables modulate the latent causal effects. These efforts leverage distribution shifts and structural properties to establish identifiability, and often yield new algorithms, such as variational autoencoder variants tailored to causal inference; a minimal illustrative sketch follows. Reliably learning latent causal representations holds significant promise for improving prediction accuracy under changing conditions and for advancing our understanding of complex systems across scientific domains.
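As a loose, hypothetical illustration (not tied to any specific paper listed below), the sketch assumes the common linear Gaussian setting: exogenous noise feeds a latent structural causal model z = Bz + ε with a strictly lower-triangular weight matrix B (a DAG over the latents), and a decoder maps the latents to observations. A VAE-style encoder and ELBO objective are used for training; all names and dimensions here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LatentCausalVAE(nn.Module):
    """Illustrative sketch: VAE whose latents follow a linear Gaussian SCM.

    Assumed setup (hypothetical, for illustration only):
      epsilon ~ N(0, I)  exogenous noise inferred by the encoder
      z = B z + epsilon  with strictly lower-triangular B  =>  z = (I - B)^{-1} epsilon
      x ~ decoder(z)     Gaussian observation model
    """

    def __init__(self, x_dim: int = 10, z_dim: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 2 * z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                     nn.Linear(64, x_dim))
        # Learnable causal weights; masked to be strictly lower triangular (a DAG).
        self.B = nn.Parameter(torch.zeros(z_dim, z_dim))
        self.z_dim = z_dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Encoder infers the exogenous noise distribution q(epsilon | x).
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        eps = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization

        # Propagate noise through the linear SCM: z = (I - B_masked)^{-1} epsilon.
        mask = torch.tril(torch.ones(self.z_dim, self.z_dim), diagonal=-1)
        inv = torch.linalg.inv(torch.eye(self.z_dim) - self.B * mask)
        z = eps @ inv.T

        # ELBO: Gaussian reconstruction term plus KL to the standard normal prior.
        x_hat = self.decoder(z)
        recon = ((x - x_hat) ** 2).sum(-1).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon + kl

# Usage sketch on synthetic data.
model = LatentCausalVAE()
x = torch.randn(32, 10)
loss = model(x)
loss.backward()
```

Identifiability results in this area typically require more than the objective above, e.g. data from multiple environments or interventions that shift the noise distributions; the sketch only shows how a latent linear SCM can be embedded inside a variational autoencoder.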

Papers