Latent Subspace
Latent subspace learning aims to discover lower-dimensional representations of high-dimensional data, revealing underlying structure and improving model interpretability and generalization. Current research focuses on algorithms such as variational autoencoders (VAEs) and contrastive learning methods that disentangle these subspaces, enabling the identification of meaningful factors of variation and mitigating issues such as shortcut learning and dimensional collapse. This work has significant implications for diverse fields, including image analysis, speech processing, and multi-task learning, by enhancing model explainability, improving performance on complex datasets, and facilitating robust cross-domain adaptation.
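To make the core idea concrete, here is a minimal sketch of recovering a linear latent subspace with PCA, the classic linear baseline (VAEs learn a nonlinear analogue of this projection). The synthetic data, dimensions, and noise level are illustrative assumptions, not drawn from any of the papers below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 2 true latent factors embedded in 50 dimensions plus noise.
latent = rng.normal(size=(500, 2))   # true low-dimensional factors
mixing = rng.normal(size=(2, 50))    # random linear embedding into 50-D
X = latent @ mixing + 0.05 * rng.normal(size=(500, 50))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (S ** 2) / np.sum(S ** 2)

# The top-2 components capture nearly all the variance, exposing the
# 2-dimensional latent subspace hidden inside the 50-dimensional data.
Z = Xc @ Vt[:2].T                    # 500 x 2 latent codes
print(f"variance explained by top-2 components: {explained[:2].sum():.3f}")
```

A nonlinear method (e.g. a VAE) replaces the fixed linear projection `Vt[:2].T` with a learned encoder, but the goal is the same: a compact code `Z` that retains the data's structure.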
16 papers