Biased Association
Biased association in artificial intelligence concerns identifying and mitigating unintended biases embedded in machine learning models, which stem from biases present in the training data. Current research investigates these biases across a range of model architectures, including vision-language models (such as CLIP), self-supervised speech models, and large language models (LLMs), employing techniques such as counterfactual generation, model compression, and novel bias metrics to quantify and reduce discriminatory outputs. Understanding and addressing these biases is crucial for ensuring fairness, accountability, and trustworthiness in AI systems, with impact on applications ranging from image recognition and natural language processing to decision-making across many sectors.
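To make the idea of a quantitative bias metric concrete, below is a minimal sketch of a WEAT-style association score (in the spirit of Caliskan et al.'s Word Embedding Association Test), which measures whether one set of target embeddings sits closer to one attribute set than another. The synthetic embeddings, set names, and the specific effect-size formulation here are illustrative assumptions, not taken from the text; real applications would use embeddings from a model such as CLIP or an LLM.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # s(w, A, B): how much closer w is, on average, to attribute set A than to B
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Normalized difference in mean association between target sets X and Y;
    # a large positive value means X associates with A and Y with B
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)

# Toy demo with synthetic embeddings standing in for real model outputs
rng = np.random.default_rng(0)
dim = 8
A = rng.normal(size=(4, dim))  # attribute set A (e.g. "pleasant" terms)
B = rng.normal(size=(4, dim))  # attribute set B (e.g. "unpleasant" terms)
X = A.mean(axis=0) + 0.1 * rng.normal(size=(5, dim))  # targets constructed near A
Y = B.mean(axis=0) + 0.1 * rng.normal(size=(5, dim))  # targets constructed near B
d = weat_effect_size(X, Y, A, B)
print(f"effect size d = {d:.3f}")  # positive for this construction
```

Because the toy targets are built near the attribute centroids, the effect size comes out positive; for an unbiased model one would expect it near zero, and debiasing techniques such as counterfactual generation aim to push measured scores toward that regime.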