Paper ID: 2312.11138

Rapid Open-World Adaptation by Adaptation Principles Learning

Cheng Xue, Ekaterina Nikonova, Peng Zhang, Jochen Renz

Novelty adaptation is the ability of an intelligent agent to adjust its behavior in response to changes in its environment. It is an important characteristic of intelligent agents, as it allows them to keep functioning effectively in novel or unexpected situations, yet it remains a critical challenge for deep reinforcement learning (DRL). To tackle this challenge, we propose a simple yet effective method, NAPPING (Novelty Adaptation Principles Learning), that allows trained DRL agents to respond rapidly to different classes of novelties in open worlds. With NAPPING, DRL agents learn to adjust the trained policy only where necessary, and can quickly generalize to similar novel situations without affecting the parts of the trained policy that still work. To demonstrate the efficiency and efficacy of NAPPING, we evaluate our method on four action domains that differ in reward structure and task type: CartPole and MountainCar (classic control), CrossRoad (path-finding), and AngryBirds (physical reasoning). We compare NAPPING with standard online and fine-tuning DRL methods in CartPole, MountainCar, and CrossRoad, and with state-of-the-art methods in the more complicated AngryBirds domain. Our results demonstrate that with the proposed method, DRL agents can rapidly and effectively adjust to a wide range of novel situations across all tested domains.

Submitted: Dec 18, 2023
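
The abstract's core idea, patching a trained policy only in the regions where a novelty breaks it while leaving the still-working parts untouched, can be illustrated with a minimal sketch. This is not the paper's algorithm: the wrapper class, the distance-based `radius` generalization rule, and the `add_principle` helper are all hypothetical choices made here for illustration; the paper does not specify these details in the abstract.

```python
import numpy as np


class NoveltyPatchedPolicy:
    """Hypothetical sketch of local policy patching after a novelty.

    Wraps a frozen `base_policy` (state -> action). When the base policy
    is observed to fail at some state in the changed environment, an
    override action is recorded for a neighborhood around that state.
    Everywhere else, the original trained policy keeps acting.
    """

    def __init__(self, base_policy, radius=0.5):
        self.base_policy = base_policy
        self.radius = radius          # how far one override generalizes (assumed)
        self.patches = []             # list of (anchor_state, override_action)

    def act(self, state):
        state = np.asarray(state, dtype=float)
        # If the state lies near a known failure region, apply the
        # learned override instead of the original policy.
        for anchor, action in self.patches:
            if np.linalg.norm(state - anchor) <= self.radius:
                return action
        return self.base_policy(state)

    def add_principle(self, failed_state, better_action):
        # Record an override discovered after the base policy failed here,
        # e.g. via local search over actions in the novel environment.
        self.patches.append((np.asarray(failed_state, dtype=float),
                             better_action))


# Usage sketch: patch a toy policy at one failure state and observe that
# nearby states reuse the override while distant states keep the base policy.
policy = NoveltyPatchedPolicy(base_policy=lambda s: 0, radius=0.5)
policy.add_principle(failed_state=[1.0, 0.0], better_action=1)
print(policy.act([1.1, 0.1]))   # near the failure -> override action 1
print(policy.act([-2.0, 0.0]))  # far away -> unchanged base action 0
```

The intended takeaway matches the abstract's claim: adaptation is local and fast (a few recorded principles) rather than a full retraining of the policy, so behavior in unaffected parts of the state space is preserved by construction.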