Bias-Related Tasks
Bias in artificial intelligence, particularly in language and vision models, is an active research area focused on identifying and mitigating biases that stem from training data and model architectures. Current efforts include multi-task learning approaches built on pre-trained models such as RoBERTa, aimed at improving bias detection and generalization across diverse datasets and languages. Research highlights how pervasive bias is: it surfaces in media bias detection, in conflict reporting, and even in seemingly objective tasks, undermining the reliability and fairness of AI systems. Addressing these biases is essential for responsible AI development and deployment across applications.
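To illustrate the multi-task learning pattern mentioned above, the sketch below pairs a shared RoBERTa encoder with separate per-task classification heads. The task names, label counts, and the `roberta-base` checkpoint are assumptions made for illustration, not details taken from any of the listed papers.

```python
# Minimal sketch of a multi-task bias-detection model: one shared RoBERTa
# encoder with a separate classification head per task. Task names and
# label counts below are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer


class MultiTaskBiasClassifier(nn.Module):
    def __init__(self, task_num_labels: dict, model_name: str = "roberta-base"):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained(model_name)  # shared encoder
        hidden = self.encoder.config.hidden_size
        # One linear head per task; all heads train jointly on the shared representation.
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n_labels) for task, n_labels in task_num_labels.items()}
        )

    def forward(self, input_ids, attention_mask, task: str):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # representation of the <s> token
        return self.heads[task](cls)       # logits for the requested task


if __name__ == "__main__":
    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
    # Hypothetical tasks: binary media-bias detection and 3-way subjectivity labeling.
    model = MultiTaskBiasClassifier({"media_bias": 2, "subjectivity": 3})
    batch = tokenizer(
        ["The report framed the protest as a riot."],
        return_tensors="pt", padding=True, truncation=True,
    )
    logits = model(batch["input_ids"], batch["attention_mask"], task="media_bias")
    print(logits.shape)  # torch.Size([1, 2])
```

Sharing the encoder while keeping task-specific heads is one common way such approaches try to transfer signal across bias-related datasets; the exact head design and training schedule vary by paper.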
Papers
July 25, 2024
February 27, 2024
July 26, 2023
July 19, 2023
May 7, 2023