Faulty Negative Mitigation
Faulty negative mitigation covers the detection and correction of errors and biases in machine learning models, with the goal of improving model reliability and trustworthiness. Current efforts focus on mitigating hallucinations in large language models (LLMs), addressing implicit biases in multi-agent systems and federated learning, and tackling concept drift in dynamic environments, often through techniques such as causal inference, adversarial training, and data augmentation. These advances are vital for the responsible deployment of AI across applications ranging from code generation and medical image analysis to social simulations and speech recognition, ultimately improving the safety and fairness of AI-driven technologies.
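To make one of the mentioned techniques concrete, the sketch below illustrates counterfactual data augmentation, a simple form of the data augmentation used for bias mitigation. It is a generic illustration, not taken from any of the listed papers: the word pairs and function name are hypothetical. Swapping group-identifying terms produces a counterfactual twin of each example, so a model trained on both cannot rely on the term itself as a spurious feature.

```python
# Illustrative sketch only: counterfactual augmentation for bias mitigation.
# The swap list is a hypothetical example; real systems use curated lexicons.
SWAP_PAIRS = [("he", "she"), ("him", "her")]

def counterfactual_augment(sentence: str) -> list[str]:
    """Return the original sentence plus its attribute-swapped counterfactual."""
    # Build a symmetric token mapping from the swap pairs.
    mapping = {}
    for a, b in SWAP_PAIRS:
        mapping[a] = b
        mapping[b] = a
    # Replace each token that appears in the mapping; leave the rest untouched.
    swapped = [mapping.get(tok.lower(), tok) for tok in sentence.split()]
    return [sentence, " ".join(swapped)]

augmented = counterfactual_augment("she sent him a note")
# Both the original and the swapped sentence go into the training set.
```

Training on both sentences of each pair discourages the model from associating the outcome with the swapped attribute; production pipelines additionally handle casing, morphology, and multi-word terms.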
Papers
Bias in Large Language Models: Origin, Evaluation, and Mitigation
Yufei Guo, Muzhe Guo, Juntao Su, Zhou Yang, Mengqiu Zhu, Hongfei Li, Mengyang Qiu, Shuo Shuo Liu
MaskMedPaint: Masked Medical Image Inpainting with Diffusion Models for Mitigation of Spurious Correlations
Qixuan Jin, Walter Gerych, Marzyeh Ghassemi
FG-PRM: Fine-grained Hallucination Detection and Mitigation in Language Model Mathematical Reasoning
Ruosen Li, Ziming Luo, Xinya Du
Listen to the Patient: Enhancing Medical Dialogue Generation with Patient Hallucination Detection and Mitigation
Lang Qin, Yao Zhang, Hongru Liang, Adam Jatowt, Zhenglu Yang
Modeling Electromagnetic Signal Injection Attacks on Camera-based Smart Systems: Applications and Mitigation
Youqian Zhang, Michael Cheung, Chunxi Yang, Xinwei Zhai, Zitong Shen, Xinyu Ji, Eugene Y. Fu, Sze-Yiu Chau, Xiapu Luo
GlitchProber: Advancing Effective Detection and Mitigation of Glitch Tokens in Large Language Models
Zhibo Zhang, Wuxia Bai, Yuxi Li, Mark Huasong Meng, Kailong Wang, Ling Shi, Li Li, Jun Wang, Haoyu Wang