Optimization Algorithms
Optimization algorithms aim to find the best solution within a given search space, a fundamental problem across numerous scientific and engineering disciplines. Current research emphasizes improving convergence rates and efficiency, particularly in distributed settings and for high-dimensional problems, with a focus on algorithms like gradient descent and its variants (including second-order methods and adaptive optimizers), as well as meta-learning approaches and derivative-free methods. These advancements are crucial for tackling increasingly complex problems in machine learning, wireless systems, and other fields where efficient and robust optimization is paramount.
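As a minimal sketch of the gradient-descent family mentioned above, the following toy example (not drawn from any particular paper on this page; the objective and step size are illustrative assumptions) shows the core update rule x ← x − lr·∇f(x) on a one-dimensional quadratic:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Core update rule: repeatedly step against the gradient.
    # grad: callable returning the gradient of the objective at x.
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Toy objective f(x) = (x - 3)^2 with gradient 2*(x - 3); minimum at x = 3.
x_star = gradient_descent(lambda x: 2 * (x - 3.0), x0=0.0)
```

The variants the summary refers to (second-order methods, adaptive optimizers such as Adam) replace the fixed learning rate `lr` with curvature- or history-dependent step sizes, but share this same iterative structure.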