Action Space
In reinforcement learning, the action space is the set of all possible actions an agent can take within an environment. Current research focuses on efficiently handling large or complex action spaces, particularly in multi-agent systems and continuous control problems, using techniques such as action discretization, factorization, and guidance from large language models. These advances are crucial for scaling reinforcement learning to real-world applications, such as robotics and resource management, where high-dimensional and nuanced action choices are common. Improved methods for handling action spaces directly affect the sample efficiency and overall performance of reinforcement learning algorithms.
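To make the distinction between discrete and continuous action spaces concrete, the minimal sketch below uses the Gymnasium `spaces` API to define one of each and to uniformly discretize the continuous one; the bin count and the `discretize`/`index_to_action` helpers are illustrative assumptions for this page, not methods taken from the listed papers.

```python
import numpy as np
from gymnasium import spaces

# Discrete action space: the agent picks one of 4 actions (e.g. up/down/left/right).
discrete_actions = spaces.Discrete(4)

# Continuous action space: a 2-D torque command, each component in [-1, 1].
continuous_actions = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)

def discretize(box: spaces.Box, bins_per_dim: int = 5) -> spaces.Discrete:
    # Uniform discretization: the joint space has bins_per_dim ** n_dims actions,
    # which is why naive discretization scales poorly in high dimensions.
    return spaces.Discrete(bins_per_dim ** box.shape[0])

def index_to_action(index: int, box: spaces.Box, bins_per_dim: int = 5) -> np.ndarray:
    # Map a flat discrete index back to a continuous action vector on the grid.
    grids = [np.linspace(lo, hi, bins_per_dim) for lo, hi in zip(box.low, box.high)]
    coords = np.unravel_index(index, (bins_per_dim,) * box.shape[0])
    return np.array([grid[c] for grid, c in zip(grids, coords)], dtype=box.dtype)

print(discrete_actions.sample())              # e.g. 2
print(continuous_actions.sample())            # e.g. [ 0.31 -0.87]
flat = discretize(continuous_actions)         # 5**2 = 25 joint actions
print(index_to_action(flat.sample(), continuous_actions))
```

The exponential growth of the discretized joint space (25 actions here, but millions in higher dimensions) is the scaling problem that factorization and structured neighborhood methods in the papers below aim to address.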
Papers
Kernelized Reinforcement Learning with Order Optimal Regret Bounds
Sattar Vakili, Julia Olkhovskaya
Stepsize Learning for Policy Gradient Methods in Contextual Markov Decision Processes
Luca Sabbioni, Francesco Corda, Marcello Restelli
Dynamic Interval Restrictions on Action Spaces in Deep Reinforcement Learning for Obstacle Avoidance
Tim Grams
From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces
Peter Shaw, Mandar Joshi, James Cohan, Jonathan Berant, Panupong Pasupat, Hexiang Hu, Urvashi Khandelwal, Kenton Lee, Kristina Toutanova
Dynamic Neighborhood Construction for Structured Large Discrete Action Spaces
Fabian Akkerman, Julius Luy, Wouter van Heeswijk, Maximilian Schiffer
A Minimal Approach for Natural Language Action Space in Text-based Games
Dongwon Kelvin Ryu, Meng Fang, Shirui Pan, Gholamreza Haffari, Ehsan Shareghi
Explaining RL Decisions with Trajectories
Shripad Vilasrao Deshmukh, Arpan Dasgupta, Balaji Krishnamurthy, Nan Jiang, Chirag Agarwal, Georgios Theocharous, Jayakumar Subramanian