Paper ID: 2411.14783

Segmenting Action-Value Functions Over Time-Scales in SARSA via TD($\Delta$)

Mahammad Humayoo

In numerous episodic reinforcement learning (RL) settings, SARSA-based methods are employed to improve policies aimed at maximizing returns over long horizons. Conventional SARSA algorithms, however, struggle to balance bias and variance because they rely on a single, fixed discount factor. This study extends the temporal-difference decomposition approach TD($\Delta$) to the SARSA algorithm, yielding a method we designate SARSA($\Delta$). SARSA, a widely used on-policy RL method, improves action-value estimates via temporal-difference updates. TD($\Delta$) facilitates learning over several time-scales by decomposing the action-value function into components associated with distinct discount factors. This decomposition improves learning efficiency and stability, particularly in problems requiring long-horizon optimization. We show that our approach mitigates bias in SARSA's updates while enabling faster convergence in both deterministic and stochastic environments. Experimental results across several benchmark tasks indicate that the proposed SARSA($\Delta$) outperforms conventional TD learning methods in both tabular and deep RL settings.
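To make the decomposition concrete, the following is a minimal tabular sketch of the SARSA($\Delta$) idea as described in the abstract, assuming the delta-component updates follow the TD($\Delta$) form (Romoff et al., 2019) adapted to on-policy SARSA targets. Names such as `gammas`, `W`, `alpha`, and `epsilon_greedy` are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

n_states, n_actions = 16, 4
gammas = [0.90, 0.99, 0.999]   # increasing discount factors gamma_0 < ... < gamma_Z
alpha, epsilon = 0.1, 0.1

# W[z] approximates the delta component:
#   W_0 = Q_{gamma_0},  W_z = Q_{gamma_z} - Q_{gamma_{z-1}} for z >= 1,
# so the full-horizon action value is Q_{gamma_Z} = sum_z W[z].
W = [np.zeros((n_states, n_actions)) for _ in gammas]

def q_values(s):
    """Full-horizon action values: sum of all delta components."""
    return sum(w[s] for w in W)

def epsilon_greedy(s, rng):
    """On-policy behaviour: epsilon-greedy over the summed action values."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values(s)))

def sarsa_delta_update(s, a, r, s_next, a_next):
    """One on-policy update of every delta component W_z for a single transition."""
    w_next = [w[s_next, a_next] for w in W]   # snapshot bootstrap values
    q_prev_next = 0.0                         # running sum = Q_{gamma_{z-1}}(s', a')
    for z, gamma in enumerate(gammas):
        if z == 0:
            # W_0 is ordinary SARSA with the smallest discount factor.
            target = r + gamma * w_next[0]
        else:
            # Higher components bootstrap on the shorter-horizon value
            # and on their own next-state delta estimate.
            target = (gamma - gammas[z - 1]) * q_prev_next + gamma * w_next[z]
        W[z][s, a] += alpha * (target - W[z][s, a])
        q_prev_next += w_next[z]
```

In this sketch, each component is trained on a short-horizon, lower-variance target while the behaviour policy acts on the summed estimate, which is how the decomposition trades bias against variance across time-scales.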

Submitted: Nov 22, 2024