Active Exploration
Active exploration in reinforcement learning concerns efficiently gathering information to improve decision-making in uncertain environments, balancing exploration of unknown states against exploitation of current knowledge. Current research focuses on algorithms that improve exploration efficiency, often using Bayesian optimization, Thompson sampling, and deep reinforcement learning architectures (e.g., Proximal Policy Optimization, Twin Delayed Deep Deterministic Policy Gradient) to guide the exploration process. This work is significant because it improves the sample efficiency of reinforcement learning agents across diverse applications, from robotics and autonomous navigation to scientific experimentation and personalized recommendation, ultimately yielding more robust and adaptable intelligent systems.
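To make the exploration/exploitation trade-off mentioned above concrete, here is a minimal sketch of Thompson sampling on a Bernoulli multi-armed bandit. It is an illustrative example only, not the method of any listed paper; the arm reward probabilities, horizon, and Beta(1, 1) priors are hypothetical values chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_probs = np.array([0.2, 0.5, 0.7])  # hypothetical per-arm reward probabilities
n_arms = len(true_probs)

# Beta(1, 1) prior over each arm's unknown success probability.
successes = np.ones(n_arms)
failures = np.ones(n_arms)

total_reward = 0
for t in range(2000):
    # Exploration: draw a plausible success probability for each arm from its
    # posterior, then act greedily with respect to the sampled values.
    theta = rng.beta(successes, failures)
    arm = int(np.argmax(theta))

    # Exploitation emerges naturally: well-observed high-value arms are sampled
    # near their posterior means, while uncertain arms still get tried occasionally.
    reward = int(rng.random() < true_probs[arm])
    successes[arm] += reward
    failures[arm] += 1 - reward
    total_reward += reward

print("posterior mean per arm:", successes / (successes + failures))
print("total reward:", total_reward)
```

As the posteriors concentrate, the sampled values increasingly favor the best arm, so exploration decays automatically without a hand-tuned schedule, which is the core idea behind many of the posterior-sampling approaches surveyed here.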
Papers
Learning to Race in Extreme Turning Scene with Active Exploration and Gaussian Process Regression-based MPC
Guoqiang Wu, Cheng Hu, Wangjia Weng, Zhouheng Li, Yonghao Fu, Lei Xie, Hongye Su
Diminishing Exploration: A Minimalist Approach to Piecewise Stationary Multi-Armed Bandits
Kuan-Ta Li, Ping-Chun Hsieh, Yu-Chih Huang
Knowing What Not to Do: Leverage Language Model Insights for Action Space Pruning in Multi-agent Reinforcement Learning
Zhihao Liu, Xianliang Yang, Zichuan Liu, Yifan Xia, Wei Jiang, Yuanyu Zhang, Lijuan Li, Guoliang Fan, Lei Song, Bian Jiang
Latent Energy-Based Odyssey: Black-Box Optimization via Expanded Exploration in the Energy-Based Latent Space
Peiyu Yu, Dinghuai Zhang, Hengzhi He, Xiaojian Ma, Ruiyao Miao, Yifan Lu, Yasi Zhang, Deqian Kong, Ruiqi Gao, Jianwen Xie, Guang Cheng, Ying Nian Wu