Sufficient Representation
Sufficient representation learning aims to identify the smallest set of features or parameters needed to model a system or phenomenon accurately, with the goal of improving efficiency and generalization. Current research focuses on learning such representations with neural networks, including variational autoencoders and transformers, often using objectives such as mutual information maximization and entropy bottlenecks to discard redundant information. This line of work matters for causal inference, self-supervised learning, and other machine learning applications because it improves model interpretability, reduces computational cost, and strengthens robustness to noise and distribution shifts.
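As a concrete illustration of the idea, the sketch below (not taken from any of the listed papers) trains a variational, information-bottleneck-style encoder: a cross-entropy term keeps the representation sufficient for the prediction task, while a KL penalty toward a standard normal prior acts as the bottleneck that discourages redundant bits. All module names, dimensions, and the `beta` weight are illustrative assumptions.

```python
# Minimal sketch of a sufficiency/minimality trade-off, assuming a
# variational information-bottleneck-style objective (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class IBEncoder(nn.Module):
    def __init__(self, in_dim=32, z_dim=8, n_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, z_dim)        # mean of q(z|x)
        self.log_var = nn.Linear(64, z_dim)   # log-variance of q(z|x)
        self.classifier = nn.Linear(z_dim, n_classes)

    def forward(self, x):
        h = self.backbone(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization trick: sample z ~ q(z|x) differentiably.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        return self.classifier(z), mu, log_var

def ib_loss(logits, y, mu, log_var, beta=1e-3):
    # Sufficiency: z must still predict the label y.
    task = F.cross_entropy(logits, y)
    # Minimality: KL(q(z|x) || N(0, I)) bounds the information z keeps about x.
    kl = -0.5 * torch.mean(torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=1))
    return task + beta * kl

# Toy usage with random data, just to show the training step.
model = IBEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(128, 32), torch.randint(0, 4, (128,))
for _ in range(5):
    logits, mu, log_var = model(x)
    loss = ib_loss(logits, y, mu, log_var)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Raising `beta` shrinks the representation toward the prior (more compression, less sufficiency); lowering it recovers an ordinary classifier on a stochastic embedding.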
Papers
Nineteen papers on sufficient representation, published between January 2, 2022 and August 30, 2024.