Paper ID: 2409.15623

Safe Guard: an LLM-agent for Real-time Voice-based Hate Speech Detection in Social Virtual Reality

Yiwen Xu, Qinyang Hou, Hongyu Wan, Mirjana Prpa

In this paper, we present Safe Guard, an LLM-agent for the detection of hate speech in voice-based interactions in social VR (VRChat). Our system leverages OpenAI GPT and audio feature extraction to handle real-time voice interactions. We contribute the system design and an evaluation demonstrating that our approach detects hate speech while reducing false positives compared to currently available approaches. Our results indicate the potential of LLM-based agents for creating safer virtual environments and lay the groundwork for further advances in LLM-driven moderation approaches.
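To make the pipeline concrete, the sketch below shows one way an LLM call could be combined with basic audio feature extraction to label a transcribed utterance; this is a minimal illustration under assumptions, not the authors' implementation. The model name, prompt, label set, and the specific librosa features are placeholders chosen for the example.

```python
# Minimal illustrative sketch (assumptions, not the paper's actual system):
# pair simple prosodic features with an OpenAI chat completion that labels
# a transcribed utterance as hate speech or safe.
import librosa
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def extract_audio_features(wav_path: str) -> dict:
    """Compute a few basic audio features from a voice clip."""
    y, sr = librosa.load(wav_path, sr=16000)
    return {
        "rms_energy": float(librosa.feature.rms(y=y).mean()),
        "zero_crossing_rate": float(librosa.feature.zero_crossing_rate(y).mean()),
    }


def classify_utterance(transcript: str, features: dict) -> str:
    """Ask the LLM whether a transcript (plus audio cues) constitutes hate speech."""
    prompt = (
        "You are a moderation agent for a social VR platform. "
        f"Audio features: {features}. "
        f'Transcript: "{transcript}". '
        "Answer with exactly one label: HATE_SPEECH or SAFE."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; the paper only states "OpenAI GPT"
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()
```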

Submitted: Sep 23, 2024