Core Stability
Core stability, broadly defined as the robustness and reliability of a system's or model's performance under varying conditions, is a central theme across diverse scientific fields. Current research focuses on improving stability in machine learning models (e.g., through regularization techniques, modified optimization algorithms such as AdamG, and analyses of Jacobian alignment), in inverse problems (via methods such as Wasserstein gradient flows and data-dependent regularization), and in dynamical systems (by mitigating data-induced instability). Understanding and enhancing core stability is crucial for building reliable, trustworthy systems in applications ranging from medical imaging and AI to robotics and control.
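Because each subfield operationalizes "stability" differently, a concrete measurement can help fix intuition. The minimal NumPy sketch below estimates the empirical prediction stability of an arbitrary model under small input perturbations; the function name `stability_score`, the Gaussian noise model, and all parameter choices are illustrative assumptions for this digest, not methods taken from the papers listed below.

```python
import numpy as np

def stability_score(predict, X, noise_scale=0.01, n_trials=20, seed=0):
    """Mean absolute prediction drift under small Gaussian input noise.

    Lower values indicate a model whose outputs are more stable
    (robust) with respect to perturbations of its inputs.
    """
    rng = np.random.default_rng(seed)
    base = predict(X)  # reference predictions on clean inputs
    drifts = []
    for _ in range(n_trials):
        X_noisy = X + rng.normal(0.0, noise_scale, size=X.shape)
        drifts.append(np.abs(predict(X_noisy) - base).mean())
    return float(np.mean(drifts))

# Toy linear "models": drift scales with weight magnitude, which is one
# intuition for why regularization (shrinking weights) aids stability.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
w_small = rng.normal(size=5) * 0.1
w_large = rng.normal(size=5) * 10.0
print(stability_score(lambda A: A @ w_small, X))  # small drift
print(stability_score(lambda A: A @ w_large, X))  # much larger drift
```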
Papers
Shaping Up SHAP: Enhancing Stability through Layer-Wise Neighbor Selection
Gwladys Kelodjou, Laurence Rozé, Véronique Masson, Luis Galárraga, Romaric Gaudel, Maurice Tchuente, Alexandre Termier
Stability of Multi-Agent Learning in Competitive Networks: Delaying the Onset of Chaos
Aamal Hussain, Francesco Belardinelli