Information Bottleneck
The Information Bottleneck (IB) principle aims to learn compressed representations of data that retain only the information relevant to a given task, discarding irrelevant details and noise. Current research applies IB to a wide range of machine learning problems, including multi-task learning, causal inference, and improving the robustness and interpretability of neural networks (e.g., in graph neural networks and variational autoencoders). The framework is proving valuable for enhancing model efficiency, generalization, fairness, and interpretability across applications ranging from molecular dynamics simulations to natural language processing and image generation.
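At its core, IB casts representation learning as a trade-off: compress the input X into a representation Z (small mutual information I(X; Z)) while keeping Z informative about the task variable Y (large I(Z; Y)), with a coefficient β controlling the balance. The sketch below is a minimal, hypothetical PyTorch example of the variational approximation commonly used in deep networks; the architecture, dimensions, and β value are purely illustrative and not taken from any particular paper. The cross-entropy term acts as a surrogate for the relevance term I(Z; Y), and a KL penalty on a stochastic bottleneck bounds the compression term I(X; Z).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalIB(nn.Module):
    """Minimal variational IB sketch: encode x into a stochastic bottleneck z,
    predict y from z, and penalize the KL term that bounds I(X; Z)."""
    def __init__(self, in_dim=784, bottleneck_dim=32, num_classes=10, beta=1e-3):
        super().__init__()
        self.beta = beta
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, bottleneck_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(256, bottleneck_dim)  # log-variance of q(z|x)
        self.decoder = nn.Linear(bottleneck_dim, num_classes)

    def forward(self, x, y):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z ~ q(z|x) differentiably.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        logits = self.decoder(z)
        # Cross-entropy is a variational surrogate for maximizing I(Z; Y);
        # KL(q(z|x) || N(0, I)) upper-bounds the compression term I(X; Z).
        ce = F.cross_entropy(logits, y)
        kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
        return ce + self.beta * kl, logits

# Illustrative usage with random data.
model = VariationalIB()
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
loss, logits = model(x, y)
loss.backward()
```

Larger β values push the model toward stronger compression (more information about X discarded), while smaller values prioritize task accuracy; sweeping β traces out the compression-relevance trade-off that IB methods study.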