Stochastic Optimisation
Stochastic optimisation tackles the challenge of finding optimal solutions when objectives or gradients can only be observed through noisy or incomplete data, a setting common in machine learning and inverse problems. Current research emphasises efficient algorithms such as stochastic gradient descent and its variants (e.g., those incorporating variance reduction and momentum), often tailored to specific problem structures such as composite objectives or high-dimensional spaces. These advances improve the speed and scalability of optimisation, benefiting fields such as image reconstruction, deep learning, and personalised federated learning by enabling complex models to be trained and deployed efficiently on massive datasets. The development of adaptive methods that automatically tune hyperparameters, such as the step size, further enhances the practicality and robustness of these techniques.
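As a minimal sketch of the core idea, the following toy example runs minibatch stochastic gradient descent with momentum on a noisy least-squares problem. All specifics here (the synthetic data, the parameter names `w_true`, `lr`, `beta`, and the chosen constants) are illustrative assumptions, not taken from any particular method described above.

```python
import random

# Toy problem: recover w_true from noisy observations y ≈ w_true * x.
# (w_true, lr, beta, and all constants are illustrative choices.)
random.seed(0)
w_true = 3.0
xs = [random.uniform(-1, 1) for _ in range(200)]
data = [(x, w_true * x + random.gauss(0, 0.1)) for x in xs]

w, velocity = 0.0, 0.0
lr, beta = 0.1, 0.9  # step size and momentum coefficient

for epoch in range(50):
    random.shuffle(data)                   # fresh stochastic ordering
    for i in range(0, len(data), 20):      # minibatches of 20 samples
        batch = data[i:i + 20]
        # Stochastic gradient of 0.5*(w*x - y)^2, averaged over the batch:
        # a noisy but unbiased estimate of the full-data gradient.
        grad = sum((w * x - y) * x for x, y in batch) / len(batch)
        velocity = beta * velocity - lr * grad
        w += velocity

print(w)  # close to w_true despite never seeing a noise-free gradient
```

Each update uses only a small random batch, so individual gradients are noisy; the momentum term `velocity` averages over recent steps, which damps that noise and speeds convergence along consistent descent directions.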