Mutual Information
Mutual information (MI), a measure of the statistical dependence between two variables, is a cornerstone of information theory with broad applications in machine learning and beyond. Current research focuses on developing accurate MI estimators for high-dimensional data and on leveraging MI to improve machine learning tasks such as model training, feature selection, and representation learning. This work employs techniques such as normalizing flows, variational inference, and contrastive learning across architectures ranging from standard neural networks to diffusion models. Accurate estimation and effective application of MI are crucial for advancing fields from causal inference and image processing to natural language processing and federated learning.
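To make the estimation problem concrete, below is a minimal sketch, not drawn from any of the papers listed here, of a classical histogram plug-in MI estimator for two correlated Gaussian variables, checked against the closed-form value I(X;Y) = -0.5 * ln(1 - rho^2). The function name mi_histogram and the bin count are illustrative choices.

```python
import numpy as np

def mi_histogram(x, y, bins=32):
    """Plug-in MI estimate (in nats) from a 2D histogram of the samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()            # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)  # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)  # marginal of y, shape (1, bins)
    mask = pxy > 0                       # skip empty cells to avoid log(0)
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

# Correlated Gaussian pair with known ground truth MI = -0.5 * ln(1 - rho^2).
rng = np.random.default_rng(0)
rho = 0.8
samples = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=100_000)
x, y = samples[:, 0], samples[:, 1]

print(f"histogram estimate: {mi_histogram(x, y):.3f} nats")
print(f"closed form:        {-0.5 * np.log(1 - rho**2):.3f} nats")
```

Plug-in estimators like this are simple and adequate in one or two dimensions, but their bias grows quickly as dimensionality increases, which is precisely what motivates the neural and variational estimators discussed above.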
Papers
Weighted Point Cloud Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric
Toshimitsu Uesaka, Taiji Suzuki, Yuhta Takida, Chieh-Hsin Lai, Naoki Murata, Yuki Mitsufuji
Why does Knowledge Distillation Work? Rethink its Attention and Fidelity Mechanism
Chenqi Guo, Shiwei Zhong, Xiaofeng Liu, Qianli Feng, Yinglong Ma