Theoretical Understanding
Theoretical work in artificial intelligence currently focuses on rigorously characterizing the capabilities and limitations of learning models, aiming to close the gap between empirical observations and formal guarantees. Research emphasizes frameworks that explain model behavior, particularly for large language models (LLMs), diffusion models, and graph neural networks, drawing on information theory, optimization, and statistical learning theory to analyze performance and generalization. These advances inform model design, improve reliability, and address concerns about robustness, fairness, and explainability, ultimately supporting the trustworthy and responsible deployment of AI systems across diverse applications.
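As one concrete illustration of the statistical-learning-theory toolkit mentioned above, the sketch below computes a classical Hoeffding-style generalization gap for a single fixed predictor with loss bounded in [0, 1]: with probability at least 1 - delta, the empirical risk deviates from the true risk by at most sqrt(ln(2/delta) / (2n)). This is a minimal textbook example for intuition, not a method from any of the papers listed here; the function name and parameters are illustrative choices.

```python
import math

def hoeffding_gap(n: int, delta: float = 0.05) -> float:
    """Width of a two-sided Hoeffding confidence bound.

    For a fixed predictor with loss values in [0, 1] evaluated on n
    i.i.d. samples, with probability >= 1 - delta the gap between
    empirical risk and true risk is at most sqrt(ln(2/delta) / (2n)).
    """
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# The guaranteed gap shrinks at rate O(1/sqrt(n)) as the sample grows.
for n in (100, 1_000, 10_000):
    print(f"n={n:>6}  gap <= {hoeffding_gap(n):.4f}")
```

The O(1/sqrt(n)) decay visible in the output is the basic shape of many generalization guarantees; richer bounds replace the ln(2/delta) term with a complexity measure of the hypothesis class.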
Papers
A Dynamic Model of Performative Human-ML Collaboration: Theory and Empirical Evidence
Tom Sühr, Samira Samadi, Chiara Farronato
Continual Learning in Medical Imaging: A Survey and Practical Analysis
Mohammad Areeb Qazi, Anees Ur Rehman Hashmi, Santosh Sanjeev, Ibrahim Almakky, Numan Saeed, Camila Gonzalez, Mohammad Yaqub
A theory of neural emulators
Catalin C. Mitelut
Fault Detection and Monitoring using an Information-Driven Strategy: Method, Theory, and Application
Camilo Ramírez, Jorge F. Silva, Ferhat Tamssaouet, Tomás Rojas, Marcos E. Orchard
Direct Training High-Performance Deep Spiking Neural Networks: A Review of Theories and Methods
Chenlin Zhou, Han Zhang, Liutao Yu, Yumin Ye, Zhaokun Zhou, Liwei Huang, Zhengyu Ma, Xiaopeng Fan, Huihui Zhou, Yonghong Tian