Soft Actor-Critic
Soft Actor-Critic (SAC) is an off-policy deep reinforcement learning algorithm that learns robust, sample-efficient policies by maximizing a weighted combination of expected return and policy entropy. Current research focuses on improving SAC's sample efficiency, addressing safety constraints through methods such as Lagrangian formulations and meta-gradient optimization, and extending its applicability to domains including robotics, autonomous driving, and multi-agent systems. These advances matter because they make reinforcement learning more practical and reliable for real-world applications that require safe, efficient decision-making in complex environments.
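To make the maximum-entropy objective described above concrete, the following is a minimal PyTorch sketch of the two quantities SAC optimizes: the soft Bellman target used to train the twin critics and the entropy-regularized actor loss. The function and tensor names (soft_q_target, actor_loss, next_log_prob, and so on) are illustrative assumptions rather than the API of any particular implementation.

```python
# Minimal sketch of SAC's entropy-regularized updates (hypothetical names,
# not tied to a specific codebase).
import torch

def soft_q_target(reward, done, next_q1, next_q2, next_log_prob,
                  alpha=0.2, gamma=0.99):
    # Soft Bellman backup: the next-state value is the minimum of the two
    # critic estimates minus the entropy penalty alpha * log pi(a'|s').
    next_v = torch.min(next_q1, next_q2) - alpha * next_log_prob
    return reward + gamma * (1.0 - done) * next_v

def actor_loss(q1_new, q2_new, log_prob, alpha=0.2):
    # The policy maximizes the expected soft Q-value, Q(s, a) - alpha * log pi(a|s),
    # which is equivalent to minimizing alpha * log pi(a|s) - min(Q1, Q2).
    return (alpha * log_prob - torch.min(q1_new, q2_new)).mean()
```

The temperature alpha weights the entropy bonus against the return; in many SAC variants it is tuned automatically toward a target entropy rather than fixed by hand.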