Minority Sample
Minority sample problems arise in machine learning when a dataset contains disproportionately few instances of a particular class, which hinders accurate model training and prediction for that class. Current research focuses on techniques to address this imbalance, including oversampling methods (e.g., SMOTE and its variants, and generative models such as VAEs and diffusion models) and cost-sensitive learning approaches that increase the penalty for misclassifying minority samples. These advances aim to improve the fairness and accuracy of machine learning models across all classes, benefiting applications such as medical diagnosis, fraud detection, and natural language processing, where minority-class representation is crucial for reliable and equitable outcomes.
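The oversampling idea can be illustrated with a minimal SMOTE-style sketch: each synthetic minority sample is an interpolation between a real minority sample and one of its k nearest minority-class neighbours. The function name and API below are illustrative, not from any particular library.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: create n_new synthetic samples by
    interpolating minority samples toward their nearest minority
    neighbours. X_min holds only the minority-class rows."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    n = len(X_min)
    k = min(k, n - 1)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)       # exclude each sample as its own neighbour
    nn = np.argsort(d, axis=1)[:, :k] # indices of the k nearest neighbours
    synthetic = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(n)                   # pick a random minority sample
        neighbour = X_min[rng.choice(nn[j])]  # and one of its k neighbours
        gap = rng.random()                    # interpolation coefficient in [0, 1)
        synthetic[i] = X_min[j] + gap * (neighbour - X_min[j])
    return synthetic
```

Because each synthetic point lies on a segment between two real minority samples, the new data stays inside the minority class's local geometry rather than duplicating existing rows; cost-sensitive learning achieves a similar effect without new data, by reweighting the loss instead.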