Representation Disentanglement
Representation disentanglement aims to decompose complex data into independent, interpretable factors of variation, improving both model generalization and interpretability. Current research centers on methods built around variational autoencoders (VAEs), contrastive learning, and diffusion models, with a strong emphasis on unsupervised or weakly supervised training to reduce reliance on labeled data. The area is important for building robust AI systems, with applications in multi-modal learning, out-of-distribution generalization, and bias mitigation in machine learning models.
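As a concrete illustration of the VAE-based line of work mentioned above, the sketch below shows a minimal β-VAE-style objective, where a KL term weighted by β > 1 pressures the approximate posterior toward a factorized prior and thereby encourages disentangled latent factors. This is not drawn from any specific paper listed here; the framework (PyTorch), layer sizes, β value, and input dimensions are illustrative assumptions.

```python
# Minimal beta-VAE sketch: a weighted KL term encourages factorized
# (disentangled) latent codes. Sizes, beta, and data are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=10, beta=4.0):
        super().__init__()
        self.beta = beta
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

    def loss(self, x):
        recon_logits, mu, logvar = self(x)
        # Reconstruction term (Bernoulli likelihood over inputs scaled to [0, 1]).
        recon = F.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum") / x.size(0)
        # KL(q(z|x) || N(0, I)); beta > 1 pushes the posterior toward the
        # factorized prior, which empirically encourages disentanglement.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
        return recon + self.beta * kl

# Example usage on random data standing in for flattened 28x28 images.
model = BetaVAE()
x = torch.rand(32, 784)
model.loss(x).backward()
```

Contrastive and diffusion-based approaches pursue the same goal through different inductive biases (e.g., aligning or separating views, or conditioning generation on isolated factors), but the β-weighted prior-matching term above is the simplest widely used unsupervised baseline.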