Newton-Raphson
The Newton-Raphson method is an iterative root-finding algorithm that refines an estimate x_k via the update x_{k+1} = x_k - f(x_k)/f'(x_k), i.e., by repeatedly following the tangent-line (linear) approximation of f. It remains a cornerstone of numerical optimization and continues to find renewed application in diverse fields. Current research emphasizes extending it to broader contexts, including integration with gradient-based optimizers such as Adam and SGD for accelerated convergence in machine learning, and adaptation for distributed optimization in multi-agent systems. This versatility is demonstrated in applications ranging from quadrotor control and hyperparameter tuning to robotics and reinforcement learning, underscoring the method's enduring significance in scientific computing and engineering.
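To make the update rule concrete, here is a minimal sketch of the classic scalar Newton-Raphson iteration in Python; the function name, tolerance, and iteration cap are illustrative choices, not taken from any particular paper above.

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Find a root of f using the update x_{k+1} = x_k - f(x_k) / f'(x_k).

    f: the function whose root is sought
    df: its derivative
    x0: initial guess (convergence is local, so x0 matters)
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:  # residual small enough: accept x as the root
            return x
        x = x - fx / df(x)  # tangent-line step
    raise RuntimeError("Newton-Raphson did not converge within max_iter steps")

# Example: solve x^2 - 2 = 0, i.e. compute sqrt(2), starting from x0 = 1
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Near a simple root with a good initial guess, convergence is quadratic: the number of correct digits roughly doubles per iteration, which is the property the accelerated-convergence work above exploits.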
Papers
August 20, 2024
July 3, 2024
February 14, 2024
January 7, 2024
October 11, 2023
May 13, 2023
November 8, 2022