Stochastic Way
Stochastic methods are increasingly used to address challenges in optimization and uncertainty quantification across diverse fields, from machine learning to process mining and control systems. Current research focuses on efficient algorithms, such as variance-reduced gradient methods and Bayesian approaches, often combined with neural networks or other advanced model architectures, to improve accuracy and computational efficiency under stochasticity. These advances matter because they yield more robust and reliable solutions to problems involving noisy data, uncertain parameters, and high-dimensional spaces, with applications ranging from deep learning and traffic prediction to disease modeling and auction theory. Developing stochastic methods that are both theoretically sound and practically efficient remains a major driver of progress across scientific and engineering domains.
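As one concrete illustration of the variance-reduced gradient methods mentioned above, the following is a minimal SVRG-style sketch for a generic finite-sum objective. It is not taken from any of the listed papers; the function name svrg, the grad_i callback, the step size, and the small least-squares test problem at the end are illustrative assumptions.

import numpy as np

def svrg(grad_i, w0, n, step=0.1, epochs=10, inner_steps=None, rng=None):
    # SVRG sketch: grad_i(w, i) returns the gradient of sample i's loss at w,
    # and n is the number of samples in the finite sum.
    rng = np.random.default_rng() if rng is None else rng
    inner_steps = n if inner_steps is None else inner_steps
    w = np.array(w0, dtype=float)
    for _ in range(epochs):
        w_snap = w.copy()
        # Full gradient at the snapshot, recomputed once per epoch.
        full_grad = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(inner_steps):
            i = rng.integers(n)
            # Variance-reduced estimate: the noisy per-sample gradient is
            # corrected by the same sample's gradient at the snapshot.
            g = grad_i(w, i) - grad_i(w_snap, i) + full_grad
            w -= step * g
    return w

# Illustrative use on a small synthetic least-squares problem (hypothetical data):
X = np.random.default_rng(0).normal(size=(100, 5))
y = X @ np.ones(5)
w_hat = svrg(lambda w, i: (X[i] @ w - y[i]) * X[i], np.zeros(5), n=100, step=0.05)

The snapshot correction keeps each per-step gradient estimate unbiased while shrinking its variance as the iterate approaches the snapshot point, which is the core idea behind the variance-reduction techniques referenced in the summary.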
Papers
Invariant Representations with Stochastically Quantized Neural Networks
Mattia Cerrato, Marius Köppel, Roberto Esposito, Stefan Kramer
Adaptive Stochastic Gradient Descent for Fast and Communication-Efficient Distributed Learning
Serge Kas Hanna, Rawad Bitar, Parimal Parag, Venkat Dasari, Salim El Rouayheb