Partial Observability
Partial observability, where an agent lacks complete information about its environment's state, is a central challenge in many areas of artificial intelligence, particularly reinforcement learning. Current research focuses on methods that mitigate this limitation, employing architectures such as recurrent neural networks, Kalman filters, and transformers, alongside algorithms that represent uncertainty explicitly (for example, as a belief state over hidden states) and plan efficiently under partial observation. These advances are crucial for improving the performance and robustness of AI systems in complex real-world applications, such as autonomous driving, robotics, and traffic control, where complete information is rarely available. The ultimate goal is to enable agents to make optimal decisions despite incomplete knowledge of their surroundings. A short illustrative sketch of belief maintenance follows.
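The standard formal treatment of this setting is the POMDP, in which the agent maintains a belief, a probability distribution over hidden states, updated by Bayesian filtering after each action and observation. The following is a minimal NumPy sketch of that update for a discrete POMDP; the tensor layout, the toy sensor numbers, and the name belief_update are illustrative assumptions, not drawn from the papers listed below.

```python
import numpy as np

def belief_update(belief, action, observation, T, O):
    """One Bayes-filter step for a discrete POMDP (illustrative sketch).

    belief : (S,)       prior probability over hidden states
    T      : (A, S, S)  transition probabilities T[a, s, s']
    O      : (A, S, Z)  observation probabilities O[a, s', z]
    Returns the posterior belief after taking `action` and seeing `observation`.
    """
    # Predict: push the prior belief through the transition model.
    predicted = belief @ T[action]                    # shape (S,)
    # Correct: weight each predicted state by the observation likelihood.
    posterior = predicted * O[action][:, observation]
    norm = posterior.sum()
    if norm == 0.0:
        raise ValueError("Observation has zero likelihood under the model.")
    return posterior / norm

# Toy two-state example (hypothetical numbers): the state does not change,
# and a noisy sensor reports the true state correctly 85% of the time.
T = np.array([[[1.0, 0.0],
               [0.0, 1.0]]])                          # single "listen" action
O = np.array([[[0.85, 0.15],
               [0.15, 0.85]]])
b = np.array([0.5, 0.5])                              # uniform prior
b = belief_update(b, action=0, observation=0, T=T, O=O)
print(b)  # belief shifts toward state 0: [0.85 0.15]
```

In practice the belief is rarely tractable in closed form, which is why the recurrent networks, Kalman filters, and transformers mentioned above are used to learn or approximate this kind of state estimate from observation histories.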
Papers
Belief-State Query Policies for Planning With Preferences Under Partial Observability
Daniel Bramblett, Siddharth Srivastava
Model-free reinforcement learning with noisy actions for automated experimental control in optics
Lea Richtmann, Viktoria-S. Schmiesing, Dennis Wilken, Jan Heine, Aaron Tranter, Avishek Anand, Tobias J. Osborne, Michèle Heurs