Neural Network Policy
Neural network policies are learned control strategies, typically trained via reinforcement learning, that map an autonomous system's observations to actions. Current research emphasizes making these policies safer, more interpretable, and more efficient through techniques such as decision trees for explainability, latent-space manipulation for behavioral control, and the integration of large language models for high-level planning in multi-agent systems. These advances are crucial for deploying reliable, trustworthy neural network controllers in safety-critical applications such as robotics and control systems, particularly in scenarios requiring robustness, adaptability, and human-understandable decision-making.
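To make the core idea concrete, the sketch below shows the input/output structure shared by most neural network policies: a small network that maps an observation vector to a probability distribution over discrete actions, from which an action is sampled. This is a minimal, hypothetical example with random (untrained) weights and an assumed two-layer architecture; in practice the weights would be learned with a reinforcement-learning algorithm such as PPO.

```python
import numpy as np

class MLPPolicy:
    """Minimal stochastic policy: observation -> distribution over actions.

    Hypothetical two-layer network for illustration; a real RL-trained
    policy has the same interface but learned weights.
    """

    def __init__(self, obs_dim, n_actions, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (obs_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.b2 = np.zeros(n_actions)

    def action_probs(self, obs):
        h = np.tanh(obs @ self.W1 + self.b1)   # hidden representation
        logits = h @ self.W2 + self.b2
        e = np.exp(logits - logits.max())      # numerically stable softmax
        return e / e.sum()

    def act(self, obs, rng):
        p = self.action_probs(obs)
        return int(rng.choice(len(p), p=p))    # sample an action

policy = MLPPolicy(obs_dim=4, n_actions=2)
rng = np.random.default_rng(1)
obs = np.array([0.1, -0.2, 0.05, 0.3])
probs = policy.action_probs(obs)
action = policy.act(obs, rng)
```

Techniques mentioned above, such as distilling a trained policy into a decision tree, operate on exactly this observation-to-action mapping: the tree is fit to imitate `action_probs` so the resulting controller is human-readable.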