Sensitive Topic

Research on sensitive topic detection and mitigation focuses on developing automated systems, primarily leveraging large language models (LLMs), to identify and manage sensitive information within text and images. Current efforts concentrate on improving the accuracy and reliability of these models, addressing ethical concerns surrounding their use, and developing privacy-preserving techniques like text rewriting and redaction. This work is crucial for creating safer online environments, particularly in areas like mental health support and content moderation, while also raising important questions about algorithmic bias and the responsible deployment of AI.
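As a minimal sketch of the redaction idea mentioned above, the Python snippet below replaces detected sensitive spans with category placeholders. It uses hand-written regular-expression rules purely for illustration; the category names, patterns, and function are assumptions of this sketch, and a real system of the kind surveyed here would rely on an LLM or trained classifier rather than fixed rules.

import re

# Hypothetical patterns for a few sensitive categories; a deployed system
# would detect these spans with an LLM or a trained classifier instead.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SELF_HARM": re.compile(r"\b(suicide|self-harm)\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace each detected sensitive span with its category placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact me at jane.doe@example.com or 555-123-4567."))
# -> "Contact me at [EMAIL] or [PHONE]."

The same interface generalizes to the rewriting approaches discussed in this line of work: instead of substituting a placeholder, the detected span is passed to a model that produces a privacy-preserving paraphrase.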

Papers