Paper ID: 2403.12428
Transfer in Sequential Multi-armed Bandits via Reward Samples
Rahul N R, Vaibhav Katewa
We consider a sequential stochastic multi-armed bandit problem in which the agent interacts with the bandit over multiple episodes. The reward distributions of the arms remain constant within an episode but can change across episodes. We propose a UCB-based algorithm that transfers reward samples from previous episodes to improve the cumulative regret over all episodes. We provide a regret analysis and empirical results for our algorithm, which show a significant improvement over the standard UCB algorithm without transfer.
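The abstract only outlines the approach. As a rough illustration of the general idea (warm-starting UCB in a new episode with reward samples carried over from earlier episodes), a minimal sketch might look like the following; the function names, the Gaussian noise model, and the unweighted reuse of all prior samples are assumptions for illustration, not the paper's actual transfer algorithm:

```python
import math
import random

def ucb_episode(arm_means, horizon, prior_counts=None, prior_sums=None):
    """Run UCB1 for one episode.

    If prior_counts/prior_sums are given, the empirical means are
    warm-started with reward samples transferred from earlier episodes.
    Illustrative sketch only, not the paper's algorithm.
    """
    k = len(arm_means)
    counts = list(prior_counts) if prior_counts else [0] * k
    sums = list(prior_sums) if prior_sums else [0.0] * k
    total_reward = 0.0
    for _ in range(horizon):
        untried = [a for a in range(k) if counts[a] == 0]
        if untried:
            # Ensure every arm has at least one sample.
            arm = untried[0]
        else:
            n = sum(counts)
            # Standard UCB1 index: empirical mean + exploration bonus.
            arm = max(range(k),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(n) / counts[a]))
        reward = arm_means[arm] + random.gauss(0.0, 0.1)
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return total_reward, counts, sums

random.seed(0)
means = [0.2, 0.5, 0.8]
# Episode 1: no transfer. Episode 2: reuse episode 1's samples.
r1, c1, s1 = ucb_episode(means, 500)
r2, _, _ = ucb_episode(means, 500, prior_counts=c1, prior_sums=s1)
```

In this sketch the second episode starts with non-trivial estimates for every arm, so it spends fewer pulls on exploration; the paper's contribution is deciding how to transfer such samples when the reward distributions may have changed between episodes.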
Submitted: Mar 19, 2024