Model Bias
Model bias, the tendency of machine learning models to produce unfair or inaccurate predictions for certain subgroups, is a critical area of research that aims to improve model fairness and reliability. Current efforts focus on identifying and mitigating bias through techniques such as data augmentation, causal inference methods, and algorithmic adjustments to model architectures like transformers, typically applied as in-processing or post-processing interventions. Understanding and addressing model bias is crucial for the responsible development and deployment of AI systems, with impact on fields ranging from healthcare and finance to climate modeling and natural language processing.
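As a minimal illustration of an in-processing adjustment, and a generic cousin of the logit-adjusted softmax in the second paper below, the sketch shifts a classifier's logits by the log of the class priors before computing the loss, which counteracts the model's tendency to favor frequent classes. The function name, the tau scaling factor, and the toy priors are illustrative assumptions, not the specific method of any listed paper.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_cross_entropy(logits, targets, class_priors, tau=1.0):
    # Shift each logit by tau * log(prior) so the loss no longer
    # rewards the model for simply predicting frequent classes;
    # a common in-processing adjustment for class-skew bias.
    # (Name, tau, and priors here are hypothetical illustrations.)
    adjusted = logits + tau * torch.log(class_priors)
    return F.cross_entropy(adjusted, targets)

# Toy usage with a skewed 3-class prior (values are hypothetical).
class_priors = torch.tensor([0.7, 0.2, 0.1])
logits = torch.randn(8, 3)              # batch of 8 examples
targets = torch.randint(0, 3, (8,))
loss = logit_adjusted_cross_entropy(logits, targets, class_priors)
print(loss.item())
```

The same shift can instead be subtracted from the logits at inference time, which turns the idea into a post-processing correction rather than a training-time one.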
Papers
Step by Step to Fairness: Attributing Societal Bias in Task-oriented Dialogue Systems
Hsuan Su, Rebecca Qian, Chinnadhurai Sankar, Shahin Shayandeh, Shang-Tse Chen, Hung-yi Lee, Daniel M. Bikel
Online Continual Learning via Logit Adjusted Softmax
Zhehao Huang, Tao Li, Chenhe Yuan, Yingwen Wu, Xiaolin Huang