Imbalanced Learning
Imbalanced learning addresses the challenge of building accurate machine learning models when the training data's class distribution is heavily skewed. Current research focuses on improved algorithms and model architectures, including cost-sensitive learning, ensemble methods such as balanced random forests, and novel data augmentation techniques (e.g., variants of SMOTE and Mixup), to mitigate bias toward majority classes and improve minority-class prediction. The field is crucial for numerous real-world applications where imbalanced data is prevalent, including fraud detection, medical diagnosis, and rare-event prediction, where it directly affects the reliability and fairness of machine learning systems.
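To make the data-augmentation idea concrete, here is a minimal sketch of the core SMOTE mechanism: synthesizing new minority-class samples by interpolating between a minority point and one of its nearest minority neighbors. This is a simplified illustration (the function name and parameters are ours, not from any paper listed here), not the full SMOTE algorithm or the `imbalanced-learn` implementation.

```python
import numpy as np

def smote_oversample(X_min, n_synthetic, k=5, rng=None):
    """Generate synthetic minority samples by interpolating between each
    minority point and one of its k nearest minority neighbors
    (the core idea behind SMOTE; a simplified sketch)."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    n = len(X_min)
    # Pairwise Euclidean distances among minority samples.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude each point from its own neighbors
    # Indices of the k nearest minority neighbors of each point.
    nn = np.argsort(d, axis=1)[:, :min(k, n - 1)]
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(n)                   # pick a random minority sample
        j = nn[i, rng.integers(nn.shape[1])]  # pick one of its neighbors
        gap = rng.random()                    # interpolation factor in [0, 1]
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.stack(synthetic)
```

In practice the synthetic samples are appended to the training set so the minority class is rebalanced before fitting a classifier; production code would typically use `imblearn.over_sampling.SMOTE` rather than a hand-rolled version.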