Reward Function
Reward functions guide reinforcement learning agents toward desired behaviors, and designing them well is the focus of intense research. Current efforts center on learning reward functions automatically from diverse sources such as human preferences, demonstrations (including imperfect ones), and natural-language descriptions, using techniques like inverse reinforcement learning, large language models, and Bayesian optimization within architectures including transformers and generative models. This research is vital for improving the efficiency and robustness of reinforcement learning and for extending it to complex real-world problems where manually designing a reward function is impractical or impossible. The ultimate goal is more adaptable, human-aligned AI systems.
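To make one of the techniques above concrete, below is a minimal, self-contained sketch of learning a reward function from pairwise human preferences under a Bradley-Terry model: the probability that segment A is preferred over segment B is a sigmoid of the difference in their predicted returns. The linear reward model, the synthetic preference data, and all parameter choices here are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

# Sketch of preference-based reward learning (Bradley-Terry model).
# A linear reward r(s) = w . s is fit so that trajectory segments that
# are "preferred" receive higher total predicted reward. All data below
# is synthetic: preferences are generated from a hidden true reward.

rng = np.random.default_rng(0)

def preference_loss_grad(w, seg_a, seg_b, prefer_a):
    """Cross-entropy loss and gradient under P(a > b) = sigmoid(R(a) - R(b)),
    where R(seg) is the summed linear reward over the segment's states."""
    fa, fb = seg_a.sum(axis=0), seg_b.sum(axis=0)   # summed state features
    p_a = 1.0 / (1.0 + np.exp(fb @ w - fa @ w))     # model's P(prefer a)
    label = 1.0 if prefer_a else 0.0
    loss = -(label * np.log(p_a + 1e-12) + (1 - label) * np.log(1 - p_a + 1e-12))
    grad = (p_a - label) * (fa - fb)                # d(loss)/dw
    return loss, grad

# Synthetic dataset: preferences determined by a hidden "true" reward.
d, T = 4, 10                                        # feature dim, segment length
w_true = rng.normal(size=d)
pairs = []
for _ in range(200):
    a, b = rng.normal(size=(T, d)), rng.normal(size=(T, d))
    pairs.append((a, b, bool(a.sum(axis=0) @ w_true > b.sum(axis=0) @ w_true)))

# Fit the reward weights by gradient descent on the preference loss.
w = np.zeros(d)
for _ in range(100):
    g = np.zeros(d)
    for a, b, pref in pairs:
        _, gi = preference_loss_grad(w, a, b, pref)
        g += gi
    w -= 0.05 * g / len(pairs)

# The learned reward should rank segments like the hidden true reward.
agree = np.mean([(a.sum(axis=0) @ w > b.sum(axis=0) @ w) == pref
                 for a, b, pref in pairs])
print(f"preference agreement: {agree:.2f}")
```

The same objective is what deep preference-based methods optimize; they replace the linear model with a neural network and the synthetic labels with human comparisons.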
Papers
A Decision-Language Model (DLM) for Dynamic Restless Multi-Armed Bandit Tasks in Public Health
Nikhil Behari, Edwin Zhang, Yunfan Zhao, Aparna Taneja, Dheeraj Nagaraj, Milind Tambe
Generalizing Reward Modeling for Out-of-Distribution Preference Learning
Chen Jia
Transformable Gaussian Reward Function for Socially-Aware Navigation with Deep Reinforcement Learning
Jinyeob Kim, Sumin Kang, Sungwoo Yang, Beomjoon Kim, Jargalbaatar Yura, Donghan Kim