Optimization Algorithm
Optimization algorithms aim to find the best solution within a given search space, a fundamental problem across numerous scientific and engineering disciplines. Current research emphasizes improving convergence rates and efficiency, particularly in distributed settings and for high-dimensional problems, with a focus on gradient descent and its variants, second-order methods, adaptive optimizers, meta-learning approaches, and derivative-free methods. These advances are crucial for tackling increasingly complex problems in machine learning, wireless systems, and other fields where efficient and robust optimization is paramount.
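To make the distinction between plain gradient descent and an adaptive optimizer concrete, here is a minimal sketch in Python. It is not taken from any of the papers in this collection; the quadratic test objective, step sizes, and iteration counts are illustrative assumptions, and the Adam-style update follows the standard moment-estimation recipe.

```python
import numpy as np

def grad_quadratic(x, A, b):
    # Gradient of f(x) = 0.5 * x^T A x - b^T x, a simple convex test objective.
    return A @ x - b

def gradient_descent(A, b, lr=0.1, steps=200):
    # Plain first-order gradient descent with a fixed step size.
    x = np.zeros_like(b)
    for _ in range(steps):
        x -= lr * grad_quadratic(x, A, b)
    return x

def adam(A, b, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=200):
    # Adam-style adaptive optimizer: per-coordinate step sizes derived from
    # running estimates of the first and second moments of the gradient.
    x = np.zeros_like(b)
    m = np.zeros_like(b)
    v = np.zeros_like(b)
    for t in range(1, steps + 1):
        g = grad_quadratic(x, A, b)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)  # bias-corrected first moment
        v_hat = v / (1 - beta2 ** t)  # bias-corrected second moment
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.diag(rng.uniform(0.5, 5.0, size=10))  # diagonal quadratic problem
    b = rng.normal(size=10)
    x_star = np.linalg.solve(A, b)               # closed-form minimizer for reference
    for name, solver in [("gradient descent", gradient_descent), ("adam", adam)]:
        x = solver(A, b)
        print(f"{name}: distance to optimum = {np.linalg.norm(x - x_star):.2e}")
```

On this toy problem both methods converge; the practical difference shows up on poorly scaled or high-dimensional objectives, where the adaptive per-coordinate step sizes reduce the need for manual learning-rate tuning.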