Paper ID: 2408.14855

Enhancing Analogical Reasoning in the Abstraction and Reasoning Corpus via Model-Based RL

Jihwan Lee, Woochang Sim, Sejin Kim, Sundong Kim

This paper demonstrates that model-based reinforcement learning (model-based RL) is a suitable approach for analogical reasoning tasks. We hypothesize that model-based RL can solve such tasks more efficiently by building internal models of the environment. To test this, we compared DreamerV3, a model-based RL method, with Proximal Policy Optimization (PPO), a model-free RL method, on tasks from the Abstraction and Reasoning Corpus (ARC). Our results indicate that model-based RL not only outperforms model-free RL in learning and generalizing from single tasks but also shows significant advantages in reasoning across similar tasks.

Submitted: Aug 27, 2024
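To make the experimental setup concrete, the sketch below shows one plausible way to frame the model-free side of such a comparison: a toy ARC-like grid-recoloring task wrapped as a Gymnasium environment and trained with Stable-Baselines3's PPO. The environment, its "shift every color by one" rule, the reward shaping, and all hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class ToyARCEnv(gym.Env):
    """Hypothetical stand-in for an ARC task: repaint a grid until it
    matches a target produced by a fixed recoloring rule."""

    def __init__(self, size=3, n_colors=4, max_steps=50):
        super().__init__()
        self.size, self.n_colors, self.max_steps = size, n_colors, max_steps
        # Observation: current grid and target grid, flattened side by side.
        self.observation_space = spaces.Box(
            low=0, high=n_colors - 1, shape=(2 * size * size,), dtype=np.float32
        )
        # Action: pick a cell and a color to paint it with.
        self.action_space = spaces.MultiDiscrete([size * size, n_colors])

    def _obs(self):
        return np.concatenate([self.grid.ravel(), self.target.ravel()]).astype(np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.grid = self.np_random.integers(0, self.n_colors, size=(self.size, self.size))
        self.target = (self.grid + 1) % self.n_colors  # illustrative recoloring rule
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        idx, color = int(action[0]), int(action[1])
        r, c = divmod(idx, self.size)
        self.grid[r, c] = color
        self.steps += 1
        solved = bool(np.array_equal(self.grid, self.target))
        # Sparse success reward with a small per-step penalty.
        reward = 1.0 if solved else -0.01
        return self._obs(), reward, solved, self.steps >= self.max_steps, {}


# Train the model-free baseline; a DreamerV3 agent would be trained on the
# same environment for the model-based side of the comparison.
model = PPO("MlpPolicy", ToyARCEnv(), verbose=0)
model.learn(total_timesteps=20_000)
```

Pairing the current grid with the target grid in the observation lets the policy, in principle, infer the underlying transformation rather than memorize a single answer, loosely mirroring how ARC presents input-output demonstration pairs.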