Paper ID: 2311.12854
Enhancing Robotic Manipulation: Harnessing the Power of Multi-Task Reinforcement Learning and Single Life Reinforcement Learning in Meta-World
Ghadi Nehme, Ishan Sabane, Tejas Y. Deo
At present, robots typically require extensive training to accomplish even a single task. To be truly useful in real-world scenarios, however, robots should be able to perform multiple tasks effectively. To address this need, various multi-task reinforcement learning (RL) algorithms have been developed, including multi-task proximal policy optimization (PPO), multi-task trust region policy optimization (TRPO), and multi-task soft actor-critic (SAC). These algorithms perform well only when the environment or observation space at deployment matches the distribution seen during training. In practice this is often not the case, as robots may encounter scenarios or observations that differ from those they were trained on. Single-life RL algorithms such as Q-Weighted Adversarial Learning (QWALE) address this distribution shift, but they train the base algorithm (which generates the prior data) on only a single task, making them unsuitable for generalization across tasks. The aim of this work is therefore to enable a robotic arm to execute seven distinct tasks in the Meta-World environment. To achieve this, a multi-task soft actor-critic (MT-SAC) is used to train the robotic arm, and the trained model then serves as the source of prior data for the single-life RL algorithm, yielding MT-QWALE. The effectiveness of MT-QWALE is assessed by testing on novel target positions. Finally, the trained MT-SAC and MT-QWALE are compared, with MT-QWALE performing better. An ablation study shows that even when the final goal position is hidden, MT-QWALE still completes the tasks, requiring only a slightly larger number of steps.
Submitted: Oct 23, 2023
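The core of MT-QWALE is QWALE's distribution-matching reward, computed with prior data from the pretrained MT-SAC agent: a discriminator is trained to separate prior-data states (weighted by their values under the pretrained critic) from states visited during the single life, and its output is turned into a shaped reward that pulls the agent back toward states the prior policy handles well. Below is a minimal PyTorch sketch of that idea; the network sizes, the softmax Q-weighting, the 39-dimensional observation size, and the random placeholder batches are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a QWALE-style shaped reward for single-life RL.
# Assumptions: state_dim, the discriminator architecture, and the use of
# softmax(Q) weights on prior states are illustrative choices only.
import torch
import torch.nn as nn

state_dim = 39  # Meta-World v2 observation size (assumed)

# Discriminator D(s): probability that state s comes from the prior data.
disc = nn.Sequential(
    nn.Linear(state_dim, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid(),
)
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)

def discriminator_step(prior_states, prior_q, online_states):
    """One training step: prior states are positives, weighted by their
    Q-values under the pretrained MT-SAC critic; online states are negatives."""
    w = torch.softmax(prior_q, dim=0)  # Q-weighting of prior states (illustrative)
    d_prior = disc(prior_states).squeeze(-1)
    d_online = disc(online_states).squeeze(-1)
    loss = -(w * torch.log(d_prior + 1e-8)).sum() \
           - torch.log(1.0 - d_online + 1e-8).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def shaped_reward(state):
    """Reward that encourages the single-life agent toward prior-data states."""
    with torch.no_grad():
        d = disc(state).squeeze(-1)
        return torch.log(d + 1e-8) - torch.log(1.0 - d + 1e-8)

# Toy usage with random stand-ins for real batches:
prior_s = torch.randn(256, state_dim)   # states from the MT-SAC prior data
prior_q = torch.randn(256)              # pretrained critic values for those states
online_s = torch.randn(256, state_dim)  # states visited during the single life
discriminator_step(prior_s, prior_q, online_s)
print(shaped_reward(online_s[:4]))
```

In a full system, prior_s and prior_q would come from the pretrained MT-SAC agent's replay data and critic, and shaped_reward would replace or augment the environment reward during the single-life episode.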