Theoretical Foundation

Theoretical foundations in machine learning and related fields currently focus on rigorously establishing the properties and limitations of existing algorithms and models. Active research examines the convergence rates and consistency of optimization methods such as stochastic gradient descent when applied to complex models like Cox neural networks and spiking neural networks, the theoretical underpinnings of explainable AI, and the surrogate gradients used to train spiking networks despite their non-differentiable spike activations. These investigations aim to improve model performance, reliability, and interpretability, with impact on healthcare (through improved survival analysis), robotics (via LLM-driven autonomous systems), and natural language processing (via improved automatic speech recognition and machine translation). Ultimately, a stronger theoretical understanding improves the design, deployment, and trustworthiness of AI systems.
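A typical result of this kind: for a convex, L-smooth loss with appropriately decaying step sizes, the expected suboptimality of SGD after T iterations is O(1/sqrt(T)); part of the line of work surveyed here asks whether comparable guarantees survive the non-convexity and non-differentiability of the models above. To make the surrogate-gradient idea concrete, the following is a minimal PyTorch sketch, illustrative rather than taken from any paper in this collection: the forward pass keeps the hard, non-differentiable spike, while the backward pass substitutes the derivative of a fast sigmoid, one common surrogate choice.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Hard threshold in the forward pass, smooth surrogate gradient backward."""

    @staticmethod
    def forward(ctx, u):
        # u: membrane potential minus the firing threshold.
        ctx.save_for_backward(u)
        # Forward: non-differentiable Heaviside step (spike if u > 0).
        return (u > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # Backward: the true derivative is a Dirac delta, so we substitute
        # the fast-sigmoid surrogate 1 / (1 + |u|)^2 in its place.
        return grad_output / (1.0 + u.abs()) ** 2

spike = SurrogateSpike.apply

# Gradients now flow through the otherwise non-differentiable spike:
u = torch.randn(8, requires_grad=True)
spike(u).sum().backward()
print(u.grad)  # nonzero, shaped by the surrogate derivative
```

The theoretical question studied in this setting is what such a substitution does to convergence: training descends the surrogate's gradient field rather than that of the true (almost-everywhere-zero) loss landscape, so standard SGD guarantees do not transfer automatically.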

Papers