Hard Sample
"Hard samples," in machine learning, refer to data points that are difficult for models to classify correctly, often due to inherent ambiguity or noise. Current research focuses on identifying and utilizing these samples to improve model robustness and generalization, employing techniques like learned reweighting, meta-learning, and adversarial training across various model architectures (e.g., deep neural networks, graph neural networks). Understanding and addressing the challenges posed by hard samples is crucial for enhancing the reliability and performance of machine learning models across diverse applications, from medical image analysis to natural language processing.