Soft Actor-Critic (SAC)
Soft Actor-Critic (SAC) is a deep reinforcement learning algorithm that learns robust and sample-efficient policies by maximizing both expected reward and policy entropy. Current research focuses on improving SAC's sample efficiency, addressing safety constraints through methods such as Lagrangian formulations and meta-gradient optimization, and extending its applicability to domains including robotics, autonomous driving, and multi-agent systems. These advances matter because they make reinforcement learning more practical and reliable for real-world applications that require safe, efficient decision-making in complex environments.
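To make the entropy-maximization idea concrete, here is a minimal, hypothetical sketch (not SAC's full actor-critic machinery) of the entropy-regularized objective on a toy discrete action set: the policy is scored by expected Q-value plus an entropy bonus weighted by a temperature `alpha`. All names and values here are illustrative assumptions.

```python
import numpy as np

def policy_entropy(probs):
    """Shannon entropy H(pi) = -sum_a pi(a) * log pi(a)."""
    probs = np.asarray(probs, dtype=float)
    return float(-np.sum(probs * np.log(probs + 1e-12)))

def soft_objective(probs, q_values, alpha=0.2):
    """Entropy-regularized objective: E_{a~pi}[Q(s, a)] + alpha * H(pi).

    alpha is the temperature trading off reward against entropy;
    its value here (0.2) is an illustrative assumption.
    """
    probs = np.asarray(probs, dtype=float)
    q_values = np.asarray(q_values, dtype=float)
    return float(np.dot(probs, q_values) + alpha * policy_entropy(probs))

# Two nearly equally good actions: a greedy policy earns slightly more
# reward, but a stochastic policy wins once the entropy bonus is added,
# which is the mechanism behind SAC's exploration and robustness.
q = [1.0, 0.9]
greedy = [1.0, 0.0]
stochastic = [0.6, 0.4]

print(soft_objective(greedy, q))      # expected Q only; entropy is ~0
print(soft_objective(stochastic, q))  # lower expected Q, larger entropy bonus
```

With these numbers the stochastic policy scores higher overall, illustrating why the entropy term keeps SAC from collapsing prematurely onto a single action.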