Paper ID: 2409.13299

OMG-RL: Offline Model-based Guided Reward Learning for Heparin Treatment

Yooseok Lim, Sujee Lee

Accurate diagnosis of individual patient conditions and appropriate medication dosing strategies are core elements of personalized medical decision-making. This therapeutic procedure, which entails recursively assessing the patient's condition and administering suitable medications, can be naturally modeled as a reinforcement learning (RL) problem. Crucially, the success of RL in this context depends on establishing a well-defined reward function that accurately represents the optimal treatment strategy. However, specifying the learning objective through only a limited set of explicit indicators is difficult because of the complexity of the required domain knowledge. Such a hand-specified reward also increases the risk that the RL policy fails to reflect the clinician's treatment intentions, which are formed by weighing diverse patient conditions and indicators. In this study, we focus on developing a reward function that reflects the clinician's intentions and introduce Offline Model-based Guided Reward Learning (OMG-RL), which performs offline inverse reinforcement learning (IRL) aligned with the offline RL setting. Through OMG-RL, we learn a parameterized reward function that captures the expert's intentions from limited data and use it to enhance the agent's policy. We validate the proposed approach on the heparin dosing task. The results demonstrate that policy learning through OMG-RL is meaningful and confirm that the learned policy is reinforced in a positive direction with respect to activated partial thromboplastin time (aPTT), a key indicator for monitoring the effects of heparin. The approach is broadly applicable not only to heparin dosing but to RL-based medication dosing tasks in general.
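The abstract does not spell out the reward-learning objective, so the following is only a minimal sketch of what "learning a parameterized reward function from limited data" could look like, assuming a MaxEnt-IRL-style contrastive loss between clinician (expert) transitions and model-based policy rollouts. The network shape, the loss form, and all names (RewardNet, irl_update) are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of guided reward learning; not the paper's code.
import math
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Parameterized reward r_phi(s, a) over patient state and heparin dose."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

def irl_update(reward_net, opt, expert_s, expert_a, model_s, model_a):
    """One reward-learning step: raise r_phi on clinician (expert) transitions
    and lower it on transitions rolled out from a learned dynamics model under
    the current policy (MaxEnt-IRL-style contrast; the paper's loss may differ)."""
    r_exp = reward_net(expert_s, expert_a)   # rewards on expert data
    r_mod = reward_net(model_s, model_a)     # rewards on model rollouts
    # Negative log-likelihood: -E_expert[r] + log Z, with log Z estimated
    # from the model-based samples (importance weights omitted for brevity).
    log_z = torch.logsumexp(r_mod, dim=0) - math.log(len(r_mod))
    loss = -(r_exp.mean() - log_z)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: random tensors stand in for patient features and doses.
torch.manual_seed(0)
S, A = 8, 1  # e.g., vitals/lab features and a scalar heparin dose
net = RewardNet(S, A)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(5):
    irl_update(net, opt,
               torch.randn(32, S), torch.randn(32, A),
               torch.randn(32, S), torch.randn(32, A))
```

In an offline model-based setup, the model_s/model_a batch would presumably come from rolling the current policy through a dynamics model fit to the same logged clinical data, rather than from interacting with patients, which is what keeps the IRL loop fully offline.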

Submitted: Sep 20, 2024