Bias Calibration

Bias calibration in machine learning aims to correct systematic errors in model predictions that disproportionately affect particular subgroups, improving both fairness and reliability. Current research focuses on detecting and mitigating these biases with techniques such as conformal prediction for uncertainty quantification, prompt engineering for bias adjustment in language models, and disentangled network architectures that isolate task-relevant features in settings like speech emotion recognition. These advances matter most in sensitive domains such as healthcare and legal decision-making, where unbiased predictions are paramount. The ultimate goal is models that are not only accurate on average but also consistently reliable across all relevant subpopulations.
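To make one of these techniques concrete, the sketch below shows how split conformal prediction can surface subgroup miscalibration: it computes prediction sets from a held-out calibration split and then checks whether empirical coverage is even across groups. This is a minimal illustration under assumed inputs (the array names, the 1 − p(true class) nonconformity score, and the `groups` labels are illustrative choices, not taken from any specific paper surveyed here).

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Return the finite-sample (1 - alpha) quantile of nonconformity scores.

    cal_probs:  (n, k) predicted class probabilities on a held-out calibration set.
    cal_labels: (n,) integer true labels.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level for valid marginal coverage.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def prediction_sets(test_probs, threshold):
    """Boolean (m, k) mask: include every class whose score is under the threshold."""
    return (1.0 - test_probs) <= threshold

def coverage_by_group(pred_sets, labels, groups):
    """Per-subgroup fraction of examples whose true label lands in the
    prediction set; uneven coverage across groups signals subgroup bias."""
    hits = pred_sets[np.arange(len(labels)), labels]
    return {g: hits[groups == g].mean() for g in np.unique(groups)}
```

Split conformal prediction guarantees only marginal coverage, so `coverage_by_group` can reveal gaps between subpopulations; a common remedy, assumed here rather than drawn from the cited work, is group-conditional (Mondrian) calibration, which computes a separate threshold per subgroup.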

Papers