Disentangled Causal Representation Learning
Disentangled causal representation learning aims to decompose complex data into underlying generative factors and to reveal the causal relationships among those factors rather than mere correlations. Current research focuses on models, such as variational autoencoders and graph neural networks, that can effectively disentangle these factors, often employing techniques like self-supervised learning and causal flows. This approach enhances model interpretability, improves generalization by mitigating spurious correlations and bias, and enables more accurate causal inference in diverse applications, including fraud detection, root cause analysis, and recommendation systems. The resulting disentangled representations offer significant advantages for both scientific understanding and practical decision-making.
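To make the VAE-based route concrete, here is a minimal NumPy sketch of the core mechanism many of these models build on: encoding data into a factorized Gaussian latent, sampling via the reparameterization trick, and upweighting the KL term (β > 1, as in β-VAE-style objectives) to pressure the latent dimensions toward independent, disentangled factors. The linear encoder, weight shapes, and β value are illustrative assumptions, not any specific paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Toy linear "encoder": maps inputs to the mean and log-variance
    # of a factorized Gaussian over the latent factors (hypothetical).
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # z = mu + sigma * eps, so gradients could flow through mu/logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # KL(q(z|x) || N(0, I)) per latent dimension; beta-VAE-style
    # objectives upweight this term to encourage factorized latents.
    return 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar)

x = rng.standard_normal((4, 8))           # batch of 4 inputs, 8 features
W_mu = rng.standard_normal((8, 3)) * 0.1  # 3 latent factors (assumed)
W_logvar = rng.standard_normal((8, 3)) * 0.1

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
beta = 4.0                                # beta > 1: extra pressure toward independence
kl = beta * kl_to_standard_normal(mu, logvar).sum(axis=1).mean()
print(z.shape, kl >= 0.0)  # → (4, 3) True
```

Causal variants replace the independent N(0, I) prior with a structured prior (e.g. a causal graph or flow over the factors), but the encode/sample/regularize loop stays the same.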
Papers
DisenGCD: A Meta Multigraph-assisted Disentangled Graph Learning Framework for Cognitive Diagnosis
Shangshang Yang, Mingyang Chen, Ziwen Wang, Xiaoshan Yu, Panpan Zhang, Haiping Ma, Xingyi Zhang
GDDA: Semantic OOD Detection on Graphs under Covariate Shift via Score-Based Diffusion Models
Zhixia He, Chen Zhao, Minglai Shao, Yujie Lin, Dong Li, Qin Tian