Information Bottleneck
The Information Bottleneck (IB) principle aims to learn compressed data representations that retain only the information relevant to a given task, discarding irrelevant details and noise. Current research applies IB to diverse machine learning problems, including multi-task learning, causal inference, and improving the robustness and interpretability of neural networks, often through graph neural networks and variational autoencoders. The framework is proving valuable for enhancing model efficiency, generalization, and fairness across applications ranging from molecular dynamics simulations to natural language processing and image generation.
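As a reference point for the papers below, a common way to instantiate this principle is the variational IB objective: maximize the task-relevant information I(Z; Y) while penalizing the compression cost I(X; Z). The sketch below is a minimal, illustrative implementation of that objective for classification; the class name, layer sizes, and beta value are assumptions for demonstration, not details from any listed paper.

```python
# Minimal variational Information Bottleneck (VIB) sketch for classification.
# All architecture choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBClassifier(nn.Module):
    def __init__(self, in_dim=784, z_dim=32, n_classes=10):
        super().__init__()
        # Encoder outputs the mean and log-variance of q(z|x).
        self.encoder = nn.Linear(in_dim, 2 * z_dim)
        # Decoder predicts the task label from the compressed code z.
        self.decoder = nn.Linear(z_dim, n_classes)

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        # Reparameterization trick: sample z ~ q(z|x) differentiably.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vib_loss(logits, y, mu, logvar, beta=1e-3):
    # Task term: minimizing cross-entropy maximizes a variational
    # lower bound on I(Z; Y), i.e. keeps label-relevant information.
    task = F.cross_entropy(logits, y)
    # Compression term: KL(q(z|x) || N(0, I)) is a variational upper
    # bound on I(X; Z), i.e. penalizes information kept about the input.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=-1).mean()
    return task + beta * kl
```

Minimizing this loss trades off predictive accuracy against compression of X into Z, with beta controlling how tight the bottleneck is; the papers listed below build task-specific variants of this kind of objective.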
Papers
Transfer Entropy Bottleneck: Learning Sequence to Sequence Information Transfer
Damjan Kalajdzievski, Ximeng Mao, Pascal Fortier-Poisson, Guillaume Lajoie, Blake Richards
Disentangled Generation with Information Bottleneck for Few-Shot Learning
Zhuohang Dang, Jihong Wang, Minnan Luo, Chengyou Jia, Caixia Yan, Qinghua Zheng