Faulty Negative Mitigation
Faulty negative mitigation covers the detection and correction of errors and biases in machine learning models, with the goal of improving model reliability and trustworthiness. Current efforts focus on mitigating hallucinations in large language models (LLMs), addressing implicit biases in multi-agent systems and federated learning, and handling concept drift in dynamic environments, often through techniques such as causal inference, adversarial training, and data augmentation. These advances are vital for the responsible deployment of AI across diverse applications, from code generation and medical image analysis to social simulations and speech recognition, ultimately improving the safety and fairness of AI-driven technologies.
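As a concrete illustration of the data-augmentation angle mentioned above, the sketch below rebalances a dataset over (label, group) pairs so that a label is no longer predictable from a spurious group attribute. This is a minimal baseline, not the method of any paper listed here; the `rebalance` function, the `label`/`group` field names, and the oversampling strategy are all assumptions for the example.

```python
import random
from collections import Counter

def rebalance(samples, key=lambda s: (s["label"], s["group"])):
    """Oversample under-represented (label, group) buckets until all buckets
    are the same size, breaking the spurious label-group correlation.

    A simple augmentation baseline, not a faithful reimplementation of any
    specific mitigation method.
    """
    buckets = {}
    for s in samples:
        buckets.setdefault(key(s), []).append(s)
    target = max(len(b) for b in buckets.values())
    balanced = []
    for b in buckets.values():
        balanced.extend(b)
        # Duplicate random members of small buckets up to the target size.
        balanced.extend(random.choices(b, k=target - len(b)))
    return balanced

if __name__ == "__main__":
    random.seed(0)
    # Toy dataset where label 1 is spuriously correlated with group "A".
    data = ([{"label": 1, "group": "A"}] * 8 + [{"label": 1, "group": "B"}] * 2
            + [{"label": 0, "group": "A"}] * 2 + [{"label": 0, "group": "B"}] * 8)
    counts = Counter((s["label"], s["group"]) for s in rebalance(data))
    print(counts)
```

After rebalancing, each of the four (label, group) buckets contains the same number of samples, so a classifier can no longer exploit group membership as a shortcut for the label. Real mitigation pipelines typically generate new examples (e.g. via inpainting or paraphrasing) rather than duplicating existing ones, but the balancing objective is the same.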
Papers
Efficient Unsupervised Shortcut Learning Detection and Mitigation in Transformers
Lukas Kuhn, Sari Sadiya, Jorg Schlotterer, Christin Seifert, Gemma Roig
AttriReBoost: A Gradient-Free Propagation Optimization Method for Cold Start Mitigation in Attribute Missing Graphs
Mengran Li, Chaojun Ding, Junzhou Chen, Wenbin Xing, Cong Ye, Ronghui Zhang, Songlin Zhuang, Jia Hu, Tony Z. Qiu, Huijun Gao
Bias in Large Language Models: Origin, Evaluation, and Mitigation
Yufei Guo, Muzhe Guo, Juntao Su, Zhou Yang, Mengqiu Zhu, Hongfei Li, Mengyang Qiu, Shuo Shuo Liu
MaskMedPaint: Masked Medical Image Inpainting with Diffusion Models for Mitigation of Spurious Correlations
Qixuan Jin, Walter Gerych, Marzyeh Ghassemi
FG-PRM: Fine-grained Hallucination Detection and Mitigation in Language Model Mathematical Reasoning
Ruosen Li, Ziming Luo, Xinya Du
Listening to Patients: A Framework of Detecting and Mitigating Patient Misreport for Medical Dialogue Generation
Lang Qin, Yao Zhang, Hongru Liang, Adam Jatowt, Zhenglu Yang