Imbalanced Learning
Imbalanced learning addresses the challenge of building accurate machine learning models when the training data contains far more samples of some classes than of others. Current research focuses on algorithms and model architectures that mitigate the bias toward majority classes and improve minority-class prediction, including cost-sensitive learning, ensemble methods such as balanced random forests, and data-level resampling and augmentation techniques (e.g., variants of SMOTE and Mixup). The field matters for many real-world applications where imbalanced data is the norm, including fraud detection, medical diagnosis, and rare-event prediction, where it directly affects the reliability and fairness of deployed machine learning systems.
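The core idea behind SMOTE-style oversampling mentioned above is simple: synthesize new minority-class samples by interpolating between an existing minority sample and one of its nearest minority-class neighbours. The following is a minimal sketch of that interpolation step, assuming small 2-D points stored as tuples; the function name `smote_sketch` and parameters are illustrative, not from any particular library.

```python
import random

def smote_sketch(minority, n_synthetic, k=2, seed=0):
    """SMOTE-style oversampling sketch: interpolate between a random
    minority sample and one of its k nearest minority neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (excluding x itself),
        # by squared Euclidean distance
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        u = rng.random()  # interpolation factor in [0, 1)
        # synthetic point lies on the segment between x and its neighbour
        synthetic.append(tuple(a + u * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_points = smote_sketch(minority, n_synthetic=5)
```

Because each synthetic point is a convex combination of two minority samples, it stays inside the region spanned by the minority class rather than duplicating existing points, which is what distinguishes SMOTE from naive random oversampling.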