Biased Classifier

Biased classifiers are a prevalent issue in machine learning: models unfairly favor certain classes or attributes, often because of imbalances in the training data or biases inherent in the feature representations. Current research mitigates this bias through a range of techniques, including data augmentation, algorithmic adjustments such as adaptive margin methods and fairness-aware pruning, and novel training strategies inspired by the neural collapse phenomenon. Addressing biased classifiers is crucial for fairness and reliability in machine learning applications, from computer vision and natural language processing to healthcare and finance.
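To make the "algorithmic adjustment" idea concrete, here is a minimal sketch of one common margin-style correction for class-imbalance bias: logit adjustment, which shifts each class's logit by the log of its training prior so that frequent classes no longer dominate the prediction. This is an illustrative example of the general technique, not the method of any particular paper surveyed here; the function name and the `tau` temperature parameter are our own choices.

```python
import numpy as np

def logit_adjusted_softmax(logits, class_priors, tau=1.0):
    """Counteract class-frequency bias at prediction time.

    logits:       array of shape (batch, num_classes)
    class_priors: array of shape (num_classes,), training-set class frequencies
    tau:          strength of the adjustment (tau=0 recovers plain softmax)
    """
    # Subtracting tau * log(prior) penalizes classes the model favors
    # merely because they were frequent in the training data.
    adjusted = logits - tau * np.log(class_priors)
    # Numerically stable softmax over the adjusted logits.
    exp = np.exp(adjusted - adjusted.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

# A biased model gives equal logits to a frequent class (prior 0.9)
# and a rare class (prior 0.1); the adjustment breaks the tie in
# favor of the rare class.
probs = logit_adjusted_softmax(np.array([[2.0, 2.0]]),
                               np.array([0.9, 0.1]))
```

With equal logits, the adjusted probabilities favor the rare class, which is exactly the behavior a frequency-debiased classifier should exhibit at the decision boundary.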

Papers