Optimization Algorithms
Optimization algorithms aim to find the best solution within a given search space, a fundamental problem across numerous scientific and engineering disciplines. Current research emphasizes improving convergence rates and efficiency, particularly in distributed settings and for high-dimensional problems, with a focus on algorithms like gradient descent and its variants (including second-order methods and adaptive optimizers), as well as meta-learning approaches and derivative-free methods. These advancements are crucial for tackling increasingly complex problems in machine learning, wireless systems, and other fields where efficient and robust optimization is paramount.
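As a concrete illustration of the gradient-descent family mentioned above, here is a minimal sketch (not drawn from any of the listed papers) of plain gradient descent minimizing a simple quadratic, where the function, step size, and iteration count are all illustrative choices:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimal gradient descent: repeatedly step against the gradient.

    grad  -- function returning the gradient at a point
    lr    -- step size (learning rate); illustrative value
    steps -- fixed iteration budget; illustrative value
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)  # move opposite the gradient direction
    return x

# Example objective: f(x) = ||x||^2, whose gradient is 2x.
# The unique minimizer is the origin, so iterates should shrink toward 0.
x_star = gradient_descent(lambda x: 2 * x, x0=[3.0, -4.0])
```

Variants in current research, such as adaptive optimizers and second-order methods, replace the fixed step `lr * grad(x)` with per-coordinate or curvature-aware updates, while derivative-free methods (like the firefly algorithm studied below) avoid computing `grad` at all.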
Papers
Empirical Tests of Optimization Assumptions in Deep Learning
Hoang Tran, Qinzi Zhang, Ashok Cutkosky
Meta-Learning Based Optimization for Large Scale Wireless Systems
Rafael Cerna Loli, Bruno Clerckx
Parameter Tuning of the Firefly Algorithm by Standard Monte Carlo and Quasi-Monte Carlo Methods
Geethu Joy, Christian Huyck, Xin-She Yang