Information-Theoretic Framework
Information-theoretic frameworks provide a powerful lens for analyzing and improving machine learning models, focusing on quantifying information flow and dependencies within data and models. Current research emphasizes applications in model interpretability, robust generalization (including out-of-distribution performance), and privacy-preserving techniques, often employing autoencoders, generative adversarial networks (GANs), and various information bottleneck methods. These frameworks offer valuable tools for understanding model behavior, enhancing model robustness and efficiency, and addressing critical issues like bias and privacy in machine learning applications across diverse fields.
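The central quantity in these frameworks is mutual information, I(X;Y), which measures how much knowing one variable reduces uncertainty about another; information bottleneck methods, for instance, train a representation Z to maximize I(Z;Y) while penalizing I(Z;X). As a minimal, hedged sketch (not any specific paper's method), the snippet below computes I(X;Y) in bits directly from a discrete joint distribution; the function name `mutual_information` and the example joint table are illustrative assumptions:

```python
import numpy as np

def mutual_information(p_xy: np.ndarray) -> float:
    """I(X;Y) in bits from a discrete joint distribution p(x, y).

    Illustrative helper, not from any cited paper.
    """
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal p(x), column vector
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal p(y), row vector
    mask = p_xy > 0                          # skip zero cells to avoid log(0)
    # I(X;Y) = sum_{x,y} p(x,y) log2[ p(x,y) / (p(x) p(y)) ]
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])))

# Perfectly correlated binary X and Y: I(X;Y) = H(X) = 1 bit
joint_corr = np.array([[0.5, 0.0],
                       [0.0, 0.5]])
print(mutual_information(joint_corr))  # → 1.0

# Independent binary X and Y: I(X;Y) = 0 bits
joint_indep = np.full((2, 2), 0.25)
print(mutual_information(joint_indep))  # → 0.0
```

In continuous, high-dimensional settings (autoencoders, GANs), this quantity is typically not computed exactly but bounded or estimated, which is where variational information bottleneck objectives come in.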