Learning Theory

Learning theory seeks to understand how and why machine learning algorithms generalize from training data to unseen examples. Current research focuses on refining generalization bounds in various settings, including dependent data and distribution regression, often employing techniques such as Rademacher complexity and algorithmic stability to analyze model performance. This work is crucial for improving the reliability and efficiency of machine learning algorithms across diverse applications, from predicting dynamical systems to understanding the implicit biases of deep learning architectures such as graph convolutional networks and neural networks designed for distribution inputs. Ultimately, these advancements aim to provide a more rigorous theoretical foundation for the design and application of machine learning models.
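To make one of these tools concrete: the empirical Rademacher complexity of a hypothesis class measures how well the class can fit random sign patterns on a given sample, and it appears directly in standard generalization bounds. The sketch below (an illustration, not any specific paper's method) estimates it by Monte Carlo for the class of norm-bounded linear predictors, where the supremum over the class has the closed form (B/n)·‖Σᵢ σᵢ xᵢ‖ by Cauchy-Schwarz; the sample data and the bound B are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, B = 200, 5, 1.0          # sample size, input dimension, norm bound (assumed)
X = rng.normal(size=(n, d))    # synthetic sample S = {x_1, ..., x_n}

def empirical_rademacher(X, B, n_trials=2000, rng=rng):
    """Monte Carlo estimate of hat{R}_S(F) for F = {x -> <w, x> : ||w||_2 <= B}.

    hat{R}_S(F) = E_sigma [ sup_{||w|| <= B} (1/n) sum_i sigma_i <w, x_i> ]
                = (B/n) * E_sigma || sum_i sigma_i x_i ||_2   (by Cauchy-Schwarz)
    """
    n = X.shape[0]
    sigmas = rng.choice([-1.0, 1.0], size=(n_trials, n))  # Rademacher signs
    sums = sigmas @ X                                     # each row: sum_i sigma_i x_i
    return B / n * np.linalg.norm(sums, axis=1).mean()

est = empirical_rademacher(X, B)
# Classical upper bound via Jensen: hat{R}_S(F) <= (B/n) * sqrt(sum_i ||x_i||^2)
bound = B / n * np.sqrt((X ** 2).sum())
print(f"Monte Carlo estimate: {est:.4f}, Jensen upper bound: {bound:.4f}")
```

The estimate should sit just below the Jensen bound, and both shrink at the familiar 1/√n rate as the sample grows, which is exactly the decay that drives the generalization bounds discussed above.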

Papers