Supervised Causal Disentanglement

Supervised causal disentanglement aims to separate the underlying causal factors that generate observed data, enabling a deeper understanding of complex systems and more accurate prediction. Current research focuses on self-supervised and weakly supervised methods that reduce reliance on expensive labeled datasets, using techniques such as masked structural causal models and contrastive regularization within latent variable models. These advances are proving valuable in diverse applications, including hate speech detection, video editing, and human mobility prediction, where they yield more robust and interpretable models that can handle noisy or incomplete data. The resulting disentangled representations generalize better and facilitate targeted interventions or manipulations within the modeled system.
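To make the idea of contrastive regularization within a latent variable model concrete, the sketch below shows one way it might be set up in PyTorch: an encoder splits the latent code into per-factor blocks, and an InfoNCE-style loss pulls together the blocks produced by two views of the same observation while pushing apart blocks from other samples. This is a minimal illustration under assumed choices; the module names, architecture, and hyperparameters (FactorEncoder, the 256-unit hidden layer, the 0.1 temperature) are not taken from any specific paper surveyed here.

```python
# Minimal, illustrative sketch of contrastive regularization on a
# factored latent space for weakly supervised disentanglement.
# All names and hyperparameters are assumptions, not a specific method.

import torch
import torch.nn as nn
import torch.nn.functional as F


class FactorEncoder(nn.Module):
    """Maps observations to a latent code split into K factor blocks."""

    def __init__(self, input_dim: int, num_factors: int, factor_dim: int):
        super().__init__()
        self.num_factors = num_factors
        self.factor_dim = factor_dim
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_factors * factor_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.net(x)
        # Reshape to (batch, K, d): one d-dimensional block per factor.
        return z.view(x.size(0), self.num_factors, self.factor_dim)


def contrastive_regularizer(z1: torch.Tensor, z2: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: matching factor blocks from two views of the
    same sample are positives; all other blocks act as negatives."""
    b, k, d = z1.shape
    z1 = F.normalize(z1.reshape(b * k, d), dim=-1)
    z2 = F.normalize(z2.reshape(b * k, d), dim=-1)
    logits = z1 @ z2.t() / temperature   # (b*k, b*k) cosine similarities
    targets = torch.arange(b * k)        # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    encoder = FactorEncoder(input_dim=64, num_factors=4, factor_dim=8)
    x = torch.randn(32, 64)
    # Two "views": noisy copies stand in for augmentations that are
    # assumed to preserve the causal factors of interest.
    z1 = encoder(x + 0.05 * torch.randn_like(x))
    z2 = encoder(x + 0.05 * torch.randn_like(x))
    loss = contrastive_regularizer(z1, z2)
    loss.backward()
    print(f"contrastive regularization loss: {loss.item():.4f}")
```

In practice such a regularizer would be added to a reconstruction or supervised objective; the choice of augmentations determines which variation is treated as nuisance and which is preserved in the factor blocks.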

Papers