Core Stability
Core stability, broadly defined as the robustness and reliability of a system's or model's performance under varying conditions, is a central theme across diverse scientific fields. Current research focuses on improving stability in machine learning models (e.g., through regularization techniques, modified optimization algorithms such as AdamG, and analyses of Jacobian alignment), in inverse problems (via methods such as Wasserstein gradient flows and data-dependent regularization), and in dynamical systems (by mitigating data-induced instability). Understanding and enhancing core stability is essential for building reliable, trustworthy systems in applications ranging from medical imaging and AI to robotics and control.
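As a concrete illustration of the regularization theme mentioned above, the sketch below (NumPy only) shows how ridge (L2) regularization stabilizes an ill-conditioned least-squares inverse problem: the same small data perturbation moves the unregularized solution far more than the regularized one. The matrix, noise level, and regularization strength are illustrative assumptions, not taken from any of the listed papers.

```python
# Minimal sketch: ridge (L2) regularization as a stability-promoting technique
# for an ill-conditioned inverse problem. All numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned forward operator: nearly collinear columns.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [1.0, 0.9999]])
x_true = np.array([1.0, 2.0])
y_clean = A @ x_true

def solve(y, lam):
    """Return argmin_x ||A x - y||^2 + lam * ||x||^2 (closed form)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Perturb the data slightly and compare the sensitivity of the two solutions.
y_noisy = y_clean + 1e-3 * rng.standard_normal(y_clean.shape)

for lam in (0.0, 1e-2):
    dx = np.linalg.norm(solve(y_noisy, lam) - solve(y_clean, lam))
    print(f"lambda={lam:g}: solution shift under tiny data noise = {dx:.4f}")

# The unregularized (lambda=0) reconstruction shifts far more under the same
# tiny perturbation than the regularized one: less stable, less trustworthy.
```

The design choice is the classical one: adding lam * I to A.T @ A bounds the smallest eigenvalue away from zero, so the solution's sensitivity to data noise is controlled at the cost of a small bias.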
Papers
(S)GD over Diagonal Linear Networks: Implicit Regularisation, Large Stepsizes and Edge of Stability
Mathieu Even, Scott Pesme, Suriya Gunasekar, Nicolas Flammarion
More Data Types More Problems: A Temporal Analysis of Complexity, Stability, and Sensitivity in Privacy Policies
Juniper Lovato, Philip Mueller, Parisa Suchdev, Peter S. Dodds
Transformers as Algorithms: Generalization and Stability in In-context Learning
Yingcong Li, M. Emrullah Ildiz, Dimitris Papailiopoulos, Samet Oymak
Temporal Dynamics of Coordinated Online Behavior: Stability, Archetypes, and Influence
Serena Tardelli, Leonardo Nizzoli, Maurizio Tesconi, Mauro Conti, Preslav Nakov, Giovanni Da San Martino, Stefano Cresci