Information Bottleneck
The Information Bottleneck (IB) principle aims to learn compressed data representations that retain only information relevant to a specific task, discarding irrelevant details and noise. Current research focuses on applying IB to diverse machine learning problems, including multi-task learning, causal inference, and improving the robustness and interpretability of neural networks (e.g., through graph neural networks and variational autoencoders). This framework is proving valuable for enhancing model efficiency, generalization, fairness, and interpretability across various applications, from molecular dynamics simulations to natural language processing and image generation.
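The trade-off described above is often trained via the variational IB objective: a task loss (how much label-relevant information the representation retains) plus a weighted compression penalty, commonly the KL divergence between a stochastic encoder and a standard-normal prior. The sketch below is a minimal numpy illustration under assumed toy shapes and a linear encoder/decoder; the weight names (`W_enc`, `W_dec`) and `beta` value are hypothetical, not from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def vib_loss(x, y, W_enc, W_dec, beta=1e-3):
    """Variational Information Bottleneck objective (sketch).

    Task cross-entropy + beta * compression term, where compression is
    an upper bound on I(X; Z): the KL between the stochastic encoder
    q(z|x) = N(mu, sigma^2) and a standard-normal prior.
    """
    h = x @ W_enc                         # linear encoder
    d = h.shape[1] // 2
    mu, logvar = h[:, :d], h[:, d:]       # encoder mean and log-variance
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps   # reparameterized bottleneck sample
    logits = z @ W_dec                    # linear decoder to class logits
    # softmax cross-entropy: the "retain task-relevant information" term
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(y)), y].mean()
    # KL(N(mu, sigma^2) || N(0, I)): the "discard irrelevant detail" term
    kl = 0.5 * (mu**2 + np.exp(logvar) - logvar - 1.0).sum(axis=1).mean()
    return ce + beta * kl

# toy usage: 8 examples, 4 features, 3-dim bottleneck, 2 classes
x = rng.standard_normal((8, 4))
y = rng.integers(0, 2, size=8)
W_enc = rng.standard_normal((4, 6)) * 0.1   # outputs mu (3 dims) + logvar (3 dims)
W_dec = rng.standard_normal((3, 2)) * 0.1
loss = vib_loss(x, y, W_enc, W_dec)
print(float(loss))
```

Sweeping `beta` traces out the IB trade-off curve: larger values force more compression (a simpler representation), smaller values favor task accuracy.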