Disentangled Learning
Disentangled learning aims to decompose complex data into independent, interpretable factors of variation, improving both model performance and interpretability. Current research focuses on methods for achieving this disentanglement, often built on variational autoencoders, Siamese networks, and recurrent neural networks, with applications ranging from recommendation systems and speech emotion recognition to cross-domain adaptation in remote sensing and continual learning. The ability to isolate relevant features from confounding variables holds significant promise for improving the robustness, generalizability, and explainability of machine learning models across diverse fields.
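As a concrete illustration of the variational-autoencoder route to disentanglement, the sketch below implements a minimal β-VAE-style objective, where a KL term weighted by β > 1 pressures the latent dimensions toward independence from one another. This is a generic sketch, not the method of any listed paper; the network sizes and the names LATENT_DIM, BETA, ToyEncoder, and ToyDecoder are illustrative assumptions.

```python
# Minimal beta-VAE sketch (PyTorch): a KL term weighted by beta > 1
# encourages each latent dimension to capture an independent factor.
# All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 8   # assumed number of latent factors
BETA = 4.0       # beta > 1 strengthens the independence pressure on the latents

class ToyEncoder(nn.Module):
    def __init__(self, in_dim=64):
        super().__init__()
        self.net = nn.Linear(in_dim, 128)
        self.mu = nn.Linear(128, LATENT_DIM)
        self.logvar = nn.Linear(128, LATENT_DIM)

    def forward(self, x):
        h = F.relu(self.net(x))
        return self.mu(h), self.logvar(h)

class ToyDecoder(nn.Module):
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, out_dim)
        )

    def forward(self, z):
        return self.net(z)

def beta_vae_loss(x, encoder, decoder):
    mu, logvar = encoder(x)
    # Reparameterization trick: sample z while keeping gradients.
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
    recon = decoder(z)
    recon_loss = F.mse_loss(recon, x, reduction="sum") / x.size(0)
    # KL divergence to an isotropic Gaussian prior, averaged over the batch.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon_loss + BETA * kl

if __name__ == "__main__":
    enc, dec = ToyEncoder(), ToyDecoder()
    x = torch.randn(32, 64)  # stand-in batch of 64-dimensional inputs
    loss = beta_vae_loss(x, enc, dec)
    loss.backward()
    print(f"beta-VAE loss: {loss.item():.3f}")
```

Raising BETA trades reconstruction fidelity for more independent, interpretable latents; many of the surveyed methods refine this basic trade-off with architecture- or task-specific terms.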
Papers
17 papers, dated April 5, 2022 through October 15, 2024.