Gradient-Based Parameter Optimization
Gradient-based parameter optimization is a core technique across numerous scientific fields, aiming to efficiently find optimal parameter values for models by leveraging the gradient of an objective function. Current research focuses on improving the efficiency and robustness of these methods, particularly in challenging settings such as high-dimensional spaces, discontinuous functions, and complex models like neural networks and ordinary differential equations. This involves developing novel architectures (e.g., gradient networks) and algorithms (e.g., diffusion tempering, dual block coordinate descent) to address issues such as local minima, computational cost, and the need to operate with only limited gradient information. The impact spans diverse applications, from accelerating material design through quantum transport simulations to improving machine learning model training and strengthening the security of machine unlearning.
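To make the core idea concrete, below is a minimal sketch of gradient-based parameter optimization, assuming a simple least-squares objective with an analytic gradient; the function names and data are illustrative placeholders, not drawn from any of the specific methods mentioned above.

```python
def objective(w, xs, ys):
    """Mean squared error of a one-parameter linear model y ~ w * x."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def gradient(w, xs, ys):
    """Analytic gradient of the objective with respect to w."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def gradient_descent(xs, ys, w0=0.0, lr=0.1, steps=100):
    """Repeatedly step the parameter opposite the gradient of the objective."""
    w = w0
    for _ in range(steps):
        w -= lr * gradient(w, xs, ys)
    return w

# Toy data, roughly y = 2x; gradient descent recovers w close to 2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
w_opt = gradient_descent(xs, ys)
print(w_opt, objective(w_opt, xs, ys))
```

The research directions summarized above build on this basic loop, for example by reshaping the objective to escape local minima or by reducing how much gradient information each update requires.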