Federated Active Learning
Federated active learning (FAL) combines federated learning with active learning: machine learning models are trained collaboratively across multiple decentralized devices while the need for expensive, time-consuming data annotation is kept to a minimum. Current research focuses on algorithms that select the most informative data points for labeling, addressing challenges such as non-independent and identically distributed (non-IID) data and domain shifts across devices, often through ensemble methods and uncertainty quantification. This approach is particularly significant for applications such as medical image analysis, where data is scarce, privacy is paramount, and efficient annotation strategies are crucial for model performance.
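As a concrete illustration of the selection loop described above, the sketch below simulates one FAL round under simplifying assumptions: each client scores its unlabeled pool with predictive entropy (one common uncertainty-quantification choice), labels its top-scoring samples, trains locally, and the server aggregates parameters with an unweighted FedAvg-style average. The model, the client dictionaries, and helper names such as `fal_round` and `entropy_scores` are hypothetical and chosen for readability; this is not a reproduction of any particular paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class LogisticModel:
    """Tiny multinomial logistic regression standing in for the shared model."""
    def __init__(self, n_features, n_classes):
        self.W = np.zeros((n_features, n_classes))

    def predict_proba(self, X):
        return softmax(X @ self.W)

    def fit(self, X, y, lr=0.1, epochs=50):
        Y = np.eye(self.W.shape[1])[y]                      # one-hot targets
        for _ in range(epochs):
            self.W -= lr * X.T @ (self.predict_proba(X) - Y) / len(X)

def entropy_scores(model, X_pool):
    """Predictive entropy as the uncertainty measure: higher = less confident."""
    p = model.predict_proba(X_pool)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def fal_round(global_model, clients, budget_per_client=5):
    """One illustrative FAL round: each client queries labels for its most
    uncertain pool samples, trains locally, and the server averages weights."""
    new_weights = []
    for c in clients:
        local = LogisticModel(*global_model.W.shape)
        local.W = global_model.W.copy()
        if len(c["pool_X"]):
            # Active-learning query: pick the highest-entropy unlabeled samples
            # and "annotate" them (here the pool already carries ground truth).
            pick = np.argsort(entropy_scores(local, c["pool_X"]))[-budget_per_client:]
            c["labeled_X"] = np.vstack([c["labeled_X"], c["pool_X"][pick]])
            c["labeled_y"] = np.concatenate([c["labeled_y"], c["pool_y"][pick]])
            keep = np.setdiff1d(np.arange(len(c["pool_X"])), pick)
            c["pool_X"], c["pool_y"] = c["pool_X"][keep], c["pool_y"][keep]
        local.fit(c["labeled_X"], c["labeled_y"])
        new_weights.append(local.W)
    # FedAvg-style aggregation (unweighted average, for simplicity).
    global_model.W = np.mean(new_weights, axis=0)
    return global_model

# Toy usage: three clients with skewed (non-IID) class distributions.
def make_client(class_weights, n_labeled=10, n_pool=200, n_features=8, n_classes=3):
    def sample(n):
        y = rng.choice(n_classes, size=n, p=class_weights)
        X = rng.normal(size=(n, n_features)) + y[:, None]   # class-dependent shift
        return X, y
    lX, ly = sample(n_labeled)
    pX, py = sample(n_pool)
    return {"labeled_X": lX, "labeled_y": ly, "pool_X": pX, "pool_y": py}

clients = [make_client(w) for w in ([0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8])]
server = LogisticModel(8, 3)
for _ in range(5):
    server = fal_round(server, clients)
```

Predictive entropy is only one possible acquisition score; much of the FAL literature instead relies on ensemble disagreement or other calibrated uncertainty estimates, and typically weights the server-side average by each client's dataset size.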