Online Learning
Online learning focuses on developing algorithms that adapt and improve their performance over time using sequentially arriving data, aiming to minimize cumulative error or regret. Current research emphasizes robust methods for handling noisy, incomplete, or adversarial data streams, exploring approaches such as neural networks, quasi-Newton methods, and multi-armed bandits, often building on techniques from online convex optimization. These advances have significant implications for fields such as robotics, network management, and personalized education, enabling systems to learn and adapt efficiently in dynamic and unpredictable environments.
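To make the regret-minimization setup concrete, the sketch below shows online gradient descent, a standard baseline from online convex optimization: on each round the learner commits to a point, observes a loss (here through its gradient), takes a decaying-step-size gradient step, and projects back onto the feasible set. This is a minimal illustrative example, not the method of either paper listed under Papers; the function names, the squared-loss stream, and the fixed comparator point are assumptions made for the illustration.

```python
import numpy as np


def online_gradient_descent(grad_fn, dim, horizon, eta=1.0, radius=1.0):
    """Online gradient descent over a Euclidean ball of the given radius.

    grad_fn(t, x) returns the gradient of the round-t loss at the point x.
    With a step size decaying as eta / sqrt(t + 1), OGD attains O(sqrt(T))
    regret for convex losses with bounded gradients.
    """
    x = np.zeros(dim)
    played = []
    for t in range(horizon):
        played.append(x.copy())              # commit to a prediction for this round
        g = grad_fn(t, x)                    # observe the round-t loss via its gradient
        x = x - (eta / np.sqrt(t + 1)) * g   # decaying-step-size gradient step
        norm = np.linalg.norm(x)             # project back onto the feasible ball
        if norm > radius:
            x *= radius / norm
    return played


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, horizon = 5, 1000
    # Hypothetical data stream: squared-loss rounds centered on an unknown target.
    target = rng.normal(size=dim)
    target /= np.linalg.norm(target)         # keep the comparator inside the unit ball
    samples = rng.normal(size=(horizon, dim)) * 0.1 + target

    def grad_fn(t, x):
        return 2.0 * (x - samples[t])        # gradient of ||x - y_t||^2

    played = online_gradient_descent(grad_fn, dim, horizon)
    learner_loss = sum(np.sum((x - samples[t]) ** 2) for t, x in enumerate(played))
    comparator_loss = sum(np.sum((target - y) ** 2) for y in samples)
    print(f"cumulative loss vs. fixed comparator: {learner_loss - comparator_loss:.2f}")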
Papers
Online Learning via Memory: Retrieval-Augmented Detector Adaptation
Yanan Jian, Fuxun Yu, Qi Zhang, William Levine, Brandon Dubbs, Nikolaos Karianakis
Hedging Is Not All You Need: A Simple Baseline for Online Learning Under Haphazard Inputs
Himanshu Buckchash, Momojit Biswas, Rohit Agarwal, Dilip K. Prasad