Mutual Information
Mutual information (MI) quantifies the statistical dependence between random variables and is a cornerstone of information theory with broad applications in machine learning and beyond. Because exact computation requires the joint distribution, which is rarely available in practice, current research focuses on developing accurate MI estimators for high-dimensional data and on leveraging MI to improve machine learning tasks such as model training, feature selection, and representation learning. These estimators employ techniques like normalizing flows, variational inference, and contrastive learning within diverse architectures, including neural networks and diffusion models. Accurate estimation and effective application of MI are crucial for advancing fields ranging from causal inference and image processing to natural language processing and federated learning.
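The two regimes mentioned above can be made concrete. For discrete variables with a known joint table, MI follows directly from its definition; in the high-dimensional settings these papers target, the joint density is unknown and neural estimators instead maximize variational or contrastive lower bounds. The sketch below illustrates both: an exact plug-in computation in NumPy, and the InfoNCE contrastive bound commonly used by neural MI estimators. The function names and the critic-score interface are illustrative assumptions, not APIs from any of the listed papers.

```python
import math

import numpy as np
import torch
import torch.nn.functional as F

def discrete_mi(joint: np.ndarray) -> float:
    """Exact MI (in nats) from a discrete joint probability table:
    I(X;Y) = sum_{x,y} p(x,y) * log[ p(x,y) / (p(x) p(y)) ]."""
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x), shape (|X|, 1)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y), shape (1, |Y|)
    nz = joint > 0                          # skip zero-probability cells
    return float((joint[nz] * np.log(joint[nz] / (px * py)[nz])).sum())

def infonce_bound(scores: torch.Tensor) -> torch.Tensor:
    """Contrastive (InfoNCE) lower bound on I(X;Y) from a batch of critic
    scores, where scores[i, j] = f(x_i, y_j) and the diagonal pairs (x_i, y_i)
    are drawn from the joint: I(X;Y) >= log N - CE(scores, diagonal)."""
    n = scores.size(0)
    ce = F.cross_entropy(scores, torch.arange(n))  # classify the true partner y_i
    return math.log(n) - ce

# Toy check: two perfectly correlated bits share log 2 ≈ 0.693 nats.
print(discrete_mi(np.array([[0.5, 0.0],
                            [0.0, 0.5]])))
```

In practice the `scores` matrix would come from a learned critic network evaluated over all pairs in a batch; maximizing the bound trains the critic and yields the MI estimate simultaneously.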
Papers
Conditional Energy-Based Models for Implicit Policies: The Gap between Theory and Practice
Duy-Nguyen Ta, Eric Cousineau, Huihua Zhao, Siyuan Feng
Exploring Adversarial Examples and Adversarial Robustness of Convolutional Neural Networks by Mutual Information
Jiebao Zhang, Wenhua Qian, Rencan Nie, Jinde Cao, Dan Xu