Mutual Information
Mutual information (MI), a measure of the statistical dependence between two random variables, is a cornerstone of information theory with broad applications in machine learning and beyond. Current research focuses on developing accurate MI estimators for high-dimensional data and on leveraging MI to improve machine learning tasks such as model training, feature selection, and representation learning. This work employs techniques like normalizing flows, variational inference, and contrastive learning within diverse model architectures, including neural networks and diffusion models. Accurate estimation and effective application of MI are crucial for advancing fields ranging from causal inference and image processing to natural language processing and federated learning.
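To make the estimation problem concrete, the sketch below computes a simple histogram (plug-in) MI estimate and checks it against the closed-form MI of correlated Gaussians, I(X;Y) = -0.5 ln(1 - rho^2). This is a minimal illustration, not a method from any of the listed papers; the function name and binning choice are illustrative assumptions.

```python
import numpy as np

def mutual_information_hist(x, y, bins=32):
    """Plug-in MI estimate (in nats) from a 2-D histogram.

    A minimal binned estimator: biased for small samples and
    impractical in high dimensions, which is why the variational
    and contrastive neural estimators mentioned above exist.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()            # empirical joint p(x, y)
    px = pxy.sum(axis=1, keepdims=True)  # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)  # marginal p(y)
    nz = pxy > 0                         # skip empty bins to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Sanity check on correlated Gaussians, where MI has a closed form:
# I(X; Y) = -0.5 * ln(1 - rho^2).
rho = 0.8
rng = np.random.default_rng(0)
cov = [[1.0, rho], [rho, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=100_000).T
print("analytic :", -0.5 * np.log(1 - rho**2))  # ~0.511 nats
print("estimated:", mutual_information_hist(x, y))
```

The gap between the binned estimate and the analytic value shrinks with more samples but widens quickly as dimensionality grows, which motivates the neural estimators surveyed above.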
Papers
Understanding Generalization via Leave-One-Out Conditional Mutual Information
Mahdi Haghifam, Shay Moran, Daniel M. Roy, Gintare Karolina Dziugaite
Auto-Encoder-Extreme Learning Machine Model for Boiler NOx Emission Concentration Prediction
Zhenhao Tang, Shikui Wang, Xiangying Chai, Shengxian Cao, Tinghui Ouyang, Yang Li
MaNi: Maximizing Mutual Information for Nuclei Cross-Domain Unsupervised Segmentation
Yash Sharma, Sana Syed, Donald E. Brown