Disentangled Representation
Disentangled representation learning aims to decompose complex data into independent, interpretable latent factors, so that each factor can be inspected and manipulated on its own. Current research focuses on developing novel architectures, such as variational autoencoders and generative adversarial networks, often incorporating techniques like mutual information maximization and adversarial training to achieve effective disentanglement. The field matters because disentangled representations enhance model interpretability, improve generalization across diverse datasets (including those with domain shifts or missing modalities), and enable more precise control over data generation and manipulation in applications ranging from medical image analysis to music generation.
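To make the variational-autoencoder approach mentioned above concrete, here is a minimal NumPy sketch of a β-VAE-style objective: a reconstruction term plus a β-weighted KL divergence that pushes the latent posterior toward an isotropic Gaussian prior, which is one common way to pressure individual latent dimensions toward independent factors. The function name, shapes, and β value are illustrative assumptions, not code from any of the papers listed below.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Illustrative beta-VAE objective (hypothetical helper, not from
    the listed papers): reconstruction error plus beta * KL(q(z|x) || N(0, I)).
    A beta > 1 weights the KL term more heavily, encouraging each latent
    dimension to stay close to the factorized prior."""
    # Mean (over the batch) of the summed squared reconstruction error.
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    # Closed-form KL between a diagonal Gaussian N(mu, diag(exp(log_var)))
    # and the standard normal prior N(0, I).
    kl = np.mean(0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1))
    return recon + beta * kl

# Toy batch: 8 samples with 16 features, encoded into 4 latent factors.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
x_recon = x + 0.1 * rng.normal(size=x.shape)   # imperfect reconstruction
mu = rng.normal(scale=0.1, size=(8, 4))        # posterior means
log_var = np.zeros((8, 4))                     # unit posterior variances
loss = beta_vae_loss(x, x_recon, mu, log_var)
```

With a perfect reconstruction and a posterior exactly matching the prior (mu = 0, log_var = 0), both terms vanish and the loss is zero; increasing β trades reconstruction fidelity for a more factorized latent code.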
Papers
DRL-STNet: Unsupervised Domain Adaptation for Cross-modality Medical Image Segmentation via Disentangled Representation Learning
Hui Lin, Florian Schiffers, Santiago López-Tapia, Neda Tavakoli, Daniel Kim, Aggelos K. Katsaggelos
Transferring disentangled representations: bridging the gap between synthetic and real images
Jacopo Dapueto, Nicoletta Noceti, Francesca Odone
Dyn-Adapter: Towards Disentangled Representation for Efficient Visual Recognition
Yurong Zhang, Honghao Chen, Xinyu Zhang, Xiangxiang Chu, Li Song
DisenSemi: Semi-supervised Graph Classification via Disentangled Representation Learning
Yifan Wang, Xiao Luo, Chong Chen, Xian-Sheng Hua, Ming Zhang, Wei Ju