Information Theoretic
Information theory provides a rigorous mathematical framework for quantifying and analyzing information, and its tools are increasingly used in machine learning and related fields. Current research applies information-theoretic principles to understand and improve model behavior, including generalization, privacy, and robustness, often through techniques such as mutual information estimation and rate-distortion theory. This perspective offers a lens for analyzing the fundamental limits of learning algorithms, guiding the design of more efficient and reliable models, and providing theoretical performance guarantees in applications such as multi-view learning and domain generalization. The resulting insights advance both the theoretical understanding and the practical application of machine learning.
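As a concrete illustration of one of the tools mentioned above, the following is a minimal sketch of a histogram-based (plug-in) mutual information estimator in Python. The function name, binning choice, and toy data are illustrative assumptions and are not taken from the listed papers; more sophisticated estimators (e.g., neural or k-nearest-neighbor based) are typically used in practice.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X; Y) in nats from paired samples.

    Bins both variables, forms the empirical joint distribution, and sums
    p(x, y) * log(p(x, y) / (p(x) * p(y))) over the non-empty cells.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                # empirical joint distribution
    px = pxy.sum(axis=1, keepdims=True)      # marginal over x
    py = pxy.sum(axis=0, keepdims=True)      # marginal over y
    nonzero = pxy > 0                        # avoid log(0) on empty cells
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=10_000)
    y = x + 0.5 * rng.normal(size=10_000)    # strongly dependent on x
    z = rng.normal(size=10_000)              # independent of x
    print(f"I(X; Y) ~ {mutual_information(x, y):.3f} nats")  # clearly positive
    print(f"I(X; Z) ~ {mutual_information(x, z):.3f} nats")  # close to zero
```

The dependent pair yields a substantially larger estimate than the independent pair, which is the basic signal such estimators provide when used, for example, to analyze representations or information flow in a model.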
Papers
On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation
Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, Raman Arora
Mutual Information Learned Regressor: an Information-theoretic Viewpoint of Training Regression Systems
Jirong Yi, Qiaosheng Zhang, Zhen Chen, Qiao Liu, Wei Shao, Yusen He, Yaohua Wang