Theoretical Understanding
Theoretical work in artificial intelligence currently focuses on rigorous analysis of model capabilities and limitations, aiming to bridge the gap between empirical observations and formal guarantees. Research emphasizes theoretical frameworks that explain model behavior, particularly for large language models (LLMs), diffusion models, and graph neural networks, often drawing on information theory, optimization, and statistical learning theory to characterize performance and generalization. Such results guide model design, improve reliability, and address concerns about robustness, fairness, and explainability, which in turn supports the trustworthy and responsible deployment of AI systems across applications.
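To make the notion of a formal guarantee concrete, here is a textbook example of the kind of statement statistical learning theory supplies; it is a standard illustration, not a result from any of the papers listed below. For a finite hypothesis class $\mathcal{H}$, a loss bounded in $[0,1]$, and an i.i.d. sample of size $n$, Hoeffding's inequality combined with a union bound gives, with probability at least $1 - \delta$,

$$R(h) \;\le\; \widehat{R}(h) \;+\; \sqrt{\frac{\ln|\mathcal{H}| + \ln(1/\delta)}{2n}} \qquad \text{for all } h \in \mathcal{H},$$

where $R(h)$ is the population risk and $\widehat{R}(h)$ the empirical risk. Bounds of this form tie observed training performance to guaranteed generalization, which is exactly the empirical-to-formal bridge described above.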
Papers
An SDE for Modeling SAM: Theory and Insights
Enea Monzio Compagnoni, Luca Biggio, Antonio Orvieto, Frank Norbert Proske, Hans Kersting, Aurelien Lucchi (a sketch of the SAM update itself appears after this list)
Global Nash Equilibrium in Non-convex Multi-player Game: Theory and Algorithms
Guanpu Chen, Gehui Xu, Fengxiang He, Yiguang Hong, Leszek Rutkowski, Dacheng Tao
ComplAI: Theory of A Unified Framework for Multi-factor Assessment of Black-Box Supervised Machine Learning Models
Arkadipta De, Satya Swaroop Gudipudi, Sourab Panchanan, Maunendra Sankar Desarkar
MAUVE Scores for Generative Models: Theory and Practice
Krishna Pillutla, Lang Liu, John Thickstun, Sean Welleck, Swabha Swayamdipta, Rowan Zellers, Sewoong Oh, Yejin Choi, Zaid Harchaoui
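The first paper above derives a stochastic differential equation that models sharpness-aware minimization (SAM). For orientation, the following is a minimal NumPy sketch of the standard SAM update from Foret et al. (2021) that such analyses start from; it is not code from the paper, and the quadratic toy loss, learning rate, and perturbation radius are illustrative assumptions.

import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One step of sharpness-aware minimization (SAM), per Foret et al. (2021).

    grad_fn(w) returns the gradient of the training loss at w;
    rho is the radius of the adversarial weight perturbation.
    """
    g = grad_fn(w)
    # Ascent step: move toward higher loss within an L2 ball of radius rho.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Descent step: apply the gradient evaluated at the perturbed point.
    return w - lr * grad_fn(w + eps)

# Toy usage on the quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is w.
w = np.array([1.0, -2.0])
for _ in range(10):
    w = sam_step(w, grad_fn=lambda w: w)
print(w)

The two-step structure, an ascent step to a nearby high-loss point followed by a descent step using the gradient there, is what distinguishes SAM's dynamics from plain SGD and motivates continuous-time SDE models of its behavior.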