Model Bias
Model bias, the tendency of machine learning models to produce unfair or inaccurate predictions for certain subgroups, is a critical area of research aimed at improving model fairness and reliability. Current efforts focus on identifying and mitigating bias through techniques such as data augmentation, causal inference, and algorithmic adjustments to architectures like transformers, often employing in-processing or post-processing approaches. Understanding and addressing model bias is crucial for the responsible development and deployment of AI systems, with impact on fields ranging from healthcare and finance to climate modeling and natural language processing.
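As a concrete illustration of the post-processing approach mentioned above, the sketch below picks a separate decision threshold for each subgroup so that all groups receive positive predictions at the same rate (demographic parity). The scores, group labels, and target rate are made-up values for illustration; this is a minimal sketch of one common technique, not a definitive or complete mitigation method.

```python
# Hypothetical post-processing sketch: equalize positive-prediction rates
# across groups by tuning a per-group score threshold (demographic parity).
# All data below is synthetic, chosen only to illustrate the mechanics.

def group_thresholds(scores, groups, target_rate):
    """For each group, choose the score threshold whose resulting
    positive-prediction rate is closest to target_rate."""
    thresholds = {}
    for g in set(groups):
        g_scores = sorted(s for s, grp in zip(scores, groups) if grp == g)
        n = len(g_scores)
        # Candidate thresholds: extremes plus midpoints between sorted scores.
        candidates = [0.0] + [(a + b) / 2 for a, b in zip(g_scores, g_scores[1:])] + [1.0]
        best_t, best_gap = 0.5, float("inf")
        for t in candidates:
            rate = sum(s >= t for s in g_scores) / n
            gap = abs(rate - target_rate)
            if gap < best_gap:
                best_t, best_gap = t, gap
        thresholds[g] = best_t
    return thresholds

# Synthetic classifier scores and sensitive-group labels.
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.5, 0.3, 0.2]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

th = group_thresholds(scores, groups, target_rate=0.5)
preds = [s >= th[g] for s, g in zip(scores, groups)]
```

Because the threshold is fit per group, both groups end up with the same fraction of positive predictions, even though group "a" received systematically higher scores. In practice this gain in parity trades off against per-group accuracy, which is why in-processing alternatives (e.g., adding a fairness penalty to the training loss) are also studied.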
Papers
Contrastive Learning for Climate Model Bias Correction and Super-Resolution
Tristan Ballard, Gopal Erinjippurath
Debiasing Methods for Fairer Neural Models in Vision and Language Research: A Survey
Otávio Parraga, Martin D. More, Christian M. Oliveira, Nathan S. Gavenski, Lucas S. Kupssinskü, Adilson Medronha, Luis V. Moura, Gabriel S. Simões, Rodrigo C. Barros