Information Bottleneck
The Information Bottleneck (IB) principle aims to learn compressed representations of data that retain only the information relevant to a given task, discarding irrelevant detail and noise. Current research applies IB to diverse machine learning problems, including multi-task learning, causal inference, and improving the robustness and interpretability of neural networks (e.g., through graph neural networks and variational autoencoders). The framework is proving valuable for enhancing model efficiency, generalization, and fairness across applications ranging from molecular dynamics simulations to natural language processing and image generation.
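For discrete variables the trade-off can be made concrete: given a joint distribution p(x, y) and a stochastic encoder p(z|x), the IB Lagrangian is I(X;Z) − β·I(Z;Y), where β controls how much relevant information is kept relative to compression. A minimal numpy sketch (function names and the toy distribution below are illustrative, not from the source):

```python
import numpy as np

def mutual_information(p_joint):
    """I(A;B) in bits from a joint distribution table p(a, b)."""
    p_a = p_joint.sum(axis=1, keepdims=True)   # marginal p(a)
    p_b = p_joint.sum(axis=0, keepdims=True)   # marginal p(b)
    mask = p_joint > 0                          # avoid log(0)
    return float(np.sum(p_joint[mask] * np.log2(p_joint[mask] / (p_a @ p_b)[mask])))

def ib_lagrangian(p_xy, p_z_given_x, beta):
    """IB objective I(X;Z) - beta * I(Z;Y) for a discrete encoder.

    p_xy        : |X| x |Y| joint distribution of input and target
    p_z_given_x : |X| x |Z| row-stochastic encoder
    """
    p_x = p_xy.sum(axis=1)
    p_xz = p_z_given_x * p_x[:, None]   # joint p(x, z)
    # Markov chain Z <- X -> Y, so p(z, y) = sum_x p(z|x) p(x, y)
    p_zy = p_z_given_x.T @ p_xy
    return mutual_information(p_xz) - beta * mutual_information(p_zy)

# Toy example: X and Y are perfectly correlated fair bits.
p_xy = np.array([[0.5, 0.0],
                 [0.0, 0.5]])
identity_encoder = np.eye(2)                         # Z = X: no compression
constant_encoder = np.array([[1.0, 0.0],
                             [1.0, 0.0]])            # Z constant: full compression
print(ib_lagrangian(p_xy, identity_encoder, beta=2.0))  # keeps all 1 bit about Y
print(ib_lagrangian(p_xy, constant_encoder, beta=2.0))  # discards everything
```

With β = 2 the identity encoder scores 1 − 2·1 = −1 (better), while the constant encoder scores 0: a large β rewards retaining task-relevant information even at the cost of a less compressed representation.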