High-Dimensional Regime
High-dimensional regimes, in which the number of variables exceeds the number of observations, pose significant challenges across many fields and demand new theoretical frameworks and algorithms for extracting meaningful information. Current research focuses on the behavior of optimization algorithms (such as stochastic gradient descent) and model architectures (including neural networks and kernel methods) in these settings, often using random matrix theory and mean-field analysis to characterize their performance. These investigations advance our understanding of fundamental limits in machine learning, signal processing, and materials science, and they inform improved algorithms and more efficient data-analysis techniques for high-dimensional data.
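To make the setting concrete, here is a minimal sketch (an illustration only, not code from any of the listed papers): data are generated from a hypothetical multi-index teacher y = g(Wx) with Gaussian inputs whose dimension exceeds the number of observations, and a small two-layer network is trained by plain online SGD with repeated passes over the data. The dimensions, teacher nonlinearity, student width, and learning rate are all illustrative assumptions.

```python
# Toy high-dimensional multi-index setup (assumed parameters throughout).
import numpy as np

rng = np.random.default_rng(0)

d, n, k = 500, 400, 2                     # d variables > n observations; k hidden directions
W_teacher = rng.standard_normal((k, d)) / np.sqrt(d)

def teacher(X):
    # Multi-index target: depends on x only through the k projections W_teacher @ x
    Z = X @ W_teacher.T
    return np.tanh(Z[:, 0]) * Z[:, 1]

X = rng.standard_normal((n, d))
y = teacher(X)

# Two-layer student f(x) = a . tanh(W x), trained with online SGD on the squared loss
m, lr = 32, 0.01                          # student width and step size (arbitrary choices)
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.standard_normal(m) / np.sqrt(m)

for epoch in range(100):                  # repeated passes over the same n samples
    for i in rng.permutation(n):
        h = np.tanh(W @ X[i])             # hidden activations, shape (m,)
        err = a @ h - y[i]                # residual of the squared loss
        grad_a = err * h
        grad_W = err * np.outer(a * (1.0 - h**2), X[i])
        a -= lr * grad_a
        W -= lr * grad_W

print("train MSE:", np.mean((np.tanh(X @ W.T) @ a - y) ** 2))
```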
Papers
Fundamental computational limits of weak learnability in high-dimensional multi-index models
Emanuele Troiani, Yatin Dandi, Leonardo Defilippis, Lenka Zdeborová, Bruno Loureiro, Florent Krzakala
Repetita Iuvant: Data Repetition Allows SGD to Learn High-Dimensional Multi-Index Functions
Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Luca Pesce, Ludovic Stephan