Concept Factorization
Concept factorization is a representation-learning paradigm that decomposes complex models or data into smaller, more interpretable components, each capturing a specific concept or task. Current research applies it to large language models, multi-view clustering, and few-shot learning, often combining it with techniques such as mixture-of-experts, hypergraph regularization, and latent-space factorization to achieve efficient knowledge representation and transfer. The approach promises gains in model efficiency, interpretability, and generalization across machine learning applications, particularly where data or compute is limited.
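The term originally refers to the matrix-factorization method of Xu and Gong (2004), which approximates a data matrix X by X W V^T with nonnegative factors W and V, so that each learned concept is a nonnegative combination of the data points themselves. Below is a minimal NumPy sketch of that classic formulation with its standard multiplicative updates; the function name and hyperparameters are illustrative, not taken from any particular paper surveyed here.

```python
import numpy as np

def concept_factorization(X, k, n_iter=200, eps=1e-9, seed=0):
    """Classic concept factorization (Xu & Gong, 2004): X ~ X @ W @ V.T
    with nonnegative W, V of shape (n_samples, k).

    Columns of X @ W are 'concepts' (nonnegative combinations of data
    points); row i of V gives sample i's weights over those concepts.
    Illustrative sketch, not tied to any specific paper above.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]          # samples are columns of X (features x samples)
    K = X.T @ X             # Gram matrix: the updates only need inner products
    W = rng.random((n, k))
    V = rng.random((n, k))
    for _ in range(n_iter):
        # Multiplicative updates keep W and V nonnegative;
        # eps guards against division by zero.
        W *= (K @ V) / (K @ W @ (V.T @ V) + eps)
        V *= (K @ W) / (V @ (W.T @ (K @ W)) + eps)
    return W, V

# Toy usage: factor a random nonnegative matrix into 5 concepts.
X = np.abs(np.random.default_rng(1).normal(size=(50, 200)))
W, V = concept_factorization(X, k=5)
print(np.linalg.norm(X - X @ W @ V.T) / np.linalg.norm(X))
```

Because the updates depend on the data only through K = X^T X, the same procedure works when just a kernel matrix between samples is available, which is one reason concept factorization is attractive for clustering in learned latent spaces.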