Weak Supervision Source
Weak supervision aims to train machine learning models using noisy or incomplete labels, significantly reducing the need for expensive manual annotation. Current research focuses on developing robust frameworks that integrate diverse weak supervision sources—ranging from heuristic rules and crowdsourced labels to outputs from pre-trained models—and efficiently aggregate these sources to generate high-quality probabilistic labels. This approach is crucial for scaling machine learning to data-rich domains where full annotation is impractical, impacting fields like healthcare (ECG analysis) and behavioral science, and improving the efficiency of model training across various applications.
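The aggregation step described above can be illustrated with a minimal sketch. The labeling functions below (`lf_keyword`, `lf_length`, `lf_caps`) and the simple vote-averaging scheme are hypothetical examples for a spam-detection task; production frameworks such as Snorkel instead fit a label model that estimates each source's accuracy and correlations before producing probabilistic labels.

```python
# Minimal weak-supervision sketch: several noisy heuristic sources
# vote on each example, and their non-abstaining votes are averaged
# into a probabilistic label P(spam). All functions are illustrative.

ABSTAIN = None  # a source may decline to label an example

def lf_keyword(text):
    # Heuristic rule: a known spammy phrase suggests spam (label 1).
    return 1 if "free money" in text.lower() else ABSTAIN

def lf_length(text):
    # Heuristic rule: long messages are assumed legitimate (label 0).
    return 0 if len(text) > 100 else ABSTAIN

def lf_caps(text):
    # Heuristic rule: all-caps messages suggest spam (label 1).
    return 1 if text.isupper() else ABSTAIN

def probabilistic_label(text, lfs):
    """Average the non-abstaining votes into P(spam); None if all abstain."""
    votes = [lf(text) for lf in lfs if lf(text) is not ABSTAIN]
    if not votes:
        return None  # no source fired; the example stays unlabeled
    return sum(votes) / len(votes)

lfs = [lf_keyword, lf_length, lf_caps]
print(probabilistic_label("FREE MONEY NOW", lfs))  # both spam rules fire
```

The probabilistic labels produced this way (rather than hard votes) can then be used as soft training targets for a downstream model, which is the pattern the summary above refers to.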