Linear Bandit

Linear bandits are a class of online learning problems in which an agent sequentially selects actions (arms) from a set described by feature vectors and receives stochastic rewards whose expected value is an unknown linear function of those features. Current research focuses on improving algorithm efficiency and robustness, exploring variations such as contextual bandits, incorporating human response times for preference learning, and handling misspecified models or non-stationary environments. By yielding more accurate and adaptable algorithms, these advances matter for applications that require efficient sequential decision-making under uncertainty, including personalized recommendations, clinical trials, and resource allocation.
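
To make the setting concrete, the sketch below shows a standard LinUCB-style learner: it maintains a ridge-regression estimate of the unknown parameter and plays the arm with the largest upper confidence bound. This is a minimal illustration under assumed conditions (a fixed finite arm set, Gaussian reward noise, illustrative parameter names such as `alpha` and `reg`); it is not drawn from any specific paper listed here.

```python
import numpy as np


class LinUCB:
    """Minimal LinUCB-style learner for the stochastic linear bandit setting.

    Each arm is a feature vector x in R^d; the expected reward of playing x is
    <theta*, x> for an unknown parameter theta*. The agent keeps a
    ridge-regression estimate of theta* and acts optimistically.
    """

    def __init__(self, dim, alpha=1.0, reg=1.0):
        self.alpha = alpha              # width of the confidence bonus (assumed constant here)
        self.A = reg * np.eye(dim)      # regularized Gram matrix of played features
        self.b = np.zeros(dim)          # running sum of reward-weighted features

    def select(self, arms):
        """arms: array of shape (K, d); returns the index of the chosen arm."""
        A_inv = np.linalg.inv(self.A)
        theta_hat = A_inv @ self.b
        # UCB score: estimated reward plus an exploration bonus sqrt(x^T A^{-1} x).
        bonus = np.sqrt(np.sum((arms @ A_inv) * arms, axis=1))
        return int(np.argmax(arms @ theta_hat + self.alpha * bonus))

    def update(self, x, reward):
        """Incorporate the observed reward for the played feature vector x."""
        self.A += np.outer(x, x)
        self.b += reward * x


# Toy simulation: K fixed arms in R^d, rewards are linear plus Gaussian noise.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, K, T = 5, 20, 2000
    theta_star = rng.normal(size=d)          # hidden parameter the learner must recover
    arms = rng.normal(size=(K, d))           # fixed arm set for this toy run

    agent = LinUCB(dim=d, alpha=1.0)
    best = arms @ theta_star
    regret = 0.0
    for _ in range(T):
        k = agent.select(arms)
        reward = arms[k] @ theta_star + rng.normal(scale=0.1)
        agent.update(arms[k], reward)
        regret += best.max() - best[k]
    print(f"cumulative regret after {T} rounds: {regret:.2f}")
```

In this sketch the exploration width `alpha` is held fixed for simplicity; the analyses in the literature typically grow the confidence radius with the observed data, and contextual variants pass a fresh arm set to `select` at every round.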

Papers