Information Bottleneck
The Information Bottleneck (IB) principle aims to learn compressed data representations that retain only information relevant to a specific task, discarding irrelevant details and noise. Current research focuses on applying IB to diverse machine learning problems, including multi-task learning, causal inference, and improving the robustness and interpretability of neural networks (e.g., through graph neural networks and variational autoencoders). This framework is proving valuable for enhancing model efficiency, generalization, fairness, and interpretability across various applications, from molecular dynamics simulations to natural language processing and image generation.
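Concretely, the IB objective trades compression against task relevance: a stochastic encoding Z of input X is trained to minimize I(X;Z) − β·I(Z;Y), where Y is the task variable and β controls the trade-off. In practice this is often optimized through the variational IB surrogate, where a cross-entropy term lower-bounds the relevance I(Z;Y) and a KL term to a standard Gaussian prior upper-bounds the compression I(X;Z). The sketch below illustrates that surrogate loss with a diagonal-Gaussian encoder; the function name, shapes, and β value are illustrative assumptions, not taken from any of the listed papers:

```python
import numpy as np

def vib_loss(mu, logvar, logits, targets, beta=1e-3):
    """Variational IB surrogate: cross-entropy + beta * KL to N(0, I).

    mu, logvar : (batch, latent_dim) Gaussian encoder parameters q(z|x)
    logits     : (batch, num_classes) decoder outputs
    targets    : (batch,) integer class labels
    """
    # Relevance term: cross-entropy lower-bounds I(Z;Y).
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    logp = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -logp[np.arange(len(targets)), targets].mean()

    # Compression term: KL(q(z|x) || N(0, I)) upper-bounds I(X;Z),
    # in closed form for a diagonal Gaussian encoder.
    kl = -0.5 * np.mean(np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1))

    return ce + beta * kl
```

With β = 0 the loss reduces to plain supervised cross-entropy; raising β forces the encoder toward the prior, discarding more input information.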
Papers
Tackling Distribution Shifts in Task-Oriented Communication with Information Bottleneck
Hongru Li, Jiawei Shao, Hengtao He, Shenghui Song, Jun Zhang, Khaled B. Letaief
TimeX++: Learning Time-Series Explanations with Information Bottleneck
Zichuan Liu, Tianchun Wang, Jimeng Shi, Xu Zheng, Zhuomin Chen, Lei Song, Wenqian Dong, Jayantha Obeysekera, Farhad Shirani, Dongsheng Luo