Information Bottleneck
The Information Bottleneck (IB) principle aims to learn compressed data representations that retain only the information relevant to a given task, discarding irrelevant details and noise. Current research applies IB to a range of machine learning problems, including multi-task learning, causal inference, and the robustness and interpretability of neural networks (e.g., via graph neural network and variational autoencoder architectures). The framework is proving valuable for improving model efficiency, generalization, fairness, and interpretability across applications ranging from molecular dynamics simulations to natural language processing and image generation.
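Formally, the principle is usually stated as a trade-off between compression and relevance: a stochastic encoder p(z|x) is chosen so that the representation Z carries little information about the input X while retaining information about the task variable Y, with a multiplier beta controlling the trade-off (the sign convention and the placement of beta vary across papers):

```latex
% Information Bottleneck Lagrangian (Tishby, Pereira, and Bialek):
% compress X into Z while preserving information relevant to Y.
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```

In deep networks, this objective is commonly optimized via a variational bound (as in the deep variational information bottleneck): a Gaussian encoder is regularized toward a standard normal prior with a KL term that upper-bounds I(X; Z), while a cross-entropy term stands in for the relevance term I(Z; Y). The sketch below is illustrative only; the names and sizes (VIBEncoder, the 128-unit backbone, beta = 1e-3) are assumptions, not a reference implementation.

```python
# Minimal sketch of a variational information bottleneck (VIB) loss term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBEncoder(nn.Module):
    """Stochastic encoder q(z|x) = N(mu(x), diag(sigma(x)^2))."""
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z ~ N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

def vib_loss(logits, targets, mu, logvar, beta=1e-3):
    # Relevance term: cross-entropy stands in for -I(Z; Y).
    task = F.cross_entropy(logits, targets)
    # Compression term: KL(q(z|x) || N(0, I)) upper-bounds I(X; Z).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    return task + beta * kl

# Usage (illustrative): any classifier head applied to z produces the logits.
# enc = VIBEncoder(in_dim=784, z_dim=32); head = nn.Linear(32, 10)
# z, mu, logvar = enc(x); loss = vib_loss(head(z), y, mu, logvar, beta=1e-3)
```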
Papers
Eighteen papers, published between December 28, 2022 and June 16, 2023.