Paper ID: 2205.00399

Learning user-defined sub-goals using memory editing in reinforcement learning

GyeongTaek Lee

The aim of reinforcement learning (RL) is to enable the agent to achieve a final goal. Most RL studies have focused on improving the efficiency of learning so that the final goal is reached faster. However, it is very difficult to modify the intermediate route an RL agent takes on its way to the final goal. That is, in existing studies the agent cannot be controlled to achieve other sub-goals. If the agent could pass through user-specified sub-goals on the way to its destination, RL could be applied and studied in a wider range of fields. In this study, I propose a methodology that uses memory editing to achieve user-defined sub-goals as well as the final goal. Memory editing is performed to generate various sub-goals and to give the agent an additional reward for reaching them. In addition, the sub-goals are learned separately from the final goal. I evaluated the method in two simple environments under various scenarios. As a result, the agent passed through the sub-goals as well as the final goal under control in nearly all cases. Moreover, the agent could be indirectly induced to visit novel states in the environments. I expect that this methodology can be used in fields that need to control the agent across a variety of scenarios.

Submitted: May 1, 2022
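
Below is a minimal, illustrative sketch of the memory-editing idea described in the abstract: transitions from a collected trajectory are relabelled with a user-defined sub-goal and given an additional reward, and the edited copies are kept in a separate buffer so the sub-goal behaviour can be learned apart from the final-goal behaviour. All names here (edit_trajectory, SUBGOAL_BONUS, the transition layout, the sampling of three sub-goals per episode) are assumptions made for illustration, not the paper's actual implementation.

import random
from collections import namedtuple

Transition = namedtuple("Transition", "state action reward next_state goal done")

SUBGOAL_BONUS = 1.0  # assumed additional reward granted when a sub-goal is reached


def edit_trajectory(trajectory, subgoal, reached):
    """Return a memory-edited copy of a trajectory.

    trajectory: list of Transition collected while pursuing the final goal
    subgoal:    user-defined sub-goal (e.g. a grid cell the agent should pass)
    reached:    predicate reached(state, subgoal) -> bool
    """
    edited = []
    for t in trajectory:
        hit = reached(t.next_state, subgoal)
        edited.append(Transition(
            state=t.state,
            action=t.action,
            # extra reward is added only when the edited sub-goal is reached
            reward=t.reward + (SUBGOAL_BONUS if hit else 0.0),
            next_state=t.next_state,
            goal=subgoal,          # relabel the goal this transition is conditioned on
            done=t.done or hit,    # treat reaching the sub-goal as a terminal event
        ))
    return edited


# Separate buffers: the final-goal learner and the sub-goal learner are trained
# on different experience, mirroring the separate learning described in the abstract.
final_goal_buffer, subgoal_buffer = [], []


def store_episode(trajectory, candidate_subgoals, reached):
    final_goal_buffer.extend(trajectory)
    # generate several edited versions, one per sampled user-defined sub-goal
    for sg in random.sample(candidate_subgoals, k=min(3, len(candidate_subgoals))):
        subgoal_buffer.extend(edit_trajectory(trajectory, sg, reached))

In this sketch, a goal-conditioned policy trained on subgoal_buffer can then be asked at test time to pass through a chosen sub-goal before handing control back to the final-goal policy; the exact conditioning and hand-off scheme used in the paper may differ.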