Paper ID: 2112.07752
Representation and Invariance in Reinforcement Learning
Samuel Alexander, Arthur Paul Pedersen
Researchers have formalized reinforcement learning (RL) in different ways. If an agent in one RL framework is to run within another RL framework's environments, the agent must first be converted, or mapped, into that other framework. Whether or not this is possible depends not only on the RL frameworks in question but also on how intelligence itself is measured. In this paper, we lay foundations for studying relative-intelligence-preserving mappability between RL frameworks. We define two types of mappings between RL frameworks, called weak and strong translations, and prove that the existence of these mappings enables two corresponding types of intelligence comparison, in the sense that the mappings preserve relative intelligence. We investigate whether such mappings exist between: (i) RL frameworks where agents go first and RL frameworks where environments go first; and (ii) twelve different RL frameworks that differ in whether or not agents or environments are required to be deterministic. In the former case, we consider various natural mappings from agent-first to environment-first RL and vice versa; we show some positive results (some such mappings are strong or weak translations) and some negative results (some such mappings are not). In the latter case, we completely characterize which of the twelve RL-framework pairs admit weak translations, under the assumption of integer-valued rewards and some additional mild assumptions.
Submitted: Dec 14, 2021