Red Teaming
Red teaming, in the context of artificial intelligence, involves adversarial testing of AI models, particularly large language models (LLMs) and increasingly multimodal models, to identify vulnerabilities and biases. Current research focuses on automating this process using techniques like reinforcement learning, generative adversarial networks, and novel scoring functions to create diverse and effective adversarial prompts or inputs that expose model weaknesses. This rigorous evaluation is crucial for improving the safety, robustness, and ethical implications of AI systems, informing both model development and deployment strategies across various applications.
Papers
STAR: SocioTechnical Approach to Red Teaming Language Models
Laura Weidinger, John Mellor, Bernat Guillen Pegueroles, Nahema Marchal, Ravin Kumar, Kristian Lum, Canfer Akbulut, Mark Diaz, Stevie Bergman, Mikel Rodriguez, Verena Rieser, William Isaac
"Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jailbreak
Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Jiayi Mao, Xueqi Cheng
Ruby Teaming: Improving Quality Diversity Search with Memory for Automated Red Teaming
Vernon Toh Yan Han, Rishabh Bhardwaj, Soujanya Poria
CSRT: Evaluation and Analysis of LLMs using Code-Switching Red-Teaming Dataset
Haneul Yoo, Yongjin Yang, Hwaran Lee