Malicious Content

Malicious content generation and detection form a rapidly evolving research area focused on how large language models (LLMs) and other AI systems can be used to create and spread harmful material, including phishing scams, hate speech, and malware. Current research emphasizes robust detection methods, often built on transformer architectures such as DeBERTa and BERT, and multimodal approaches that combine text, audio, and visual cues to identify malicious content across formats (text, images, video). This work is crucial for mitigating the growing threat of AI-generated malicious content and for informing effective safeguards for online platforms and their users.
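To make the detection setup concrete, here is a minimal sketch of the underlying task: binary text classification of messages as malicious or benign. Production systems use fine-tuned transformers such as DeBERTa or BERT; this toy version substitutes a smoothed bag-of-words scorer so it runs with only the standard library. The training examples and labels are hypothetical.

```python
from collections import Counter
import math

# Hypothetical labeled examples; real detectors train on large corpora.
TRAIN = [
    ("verify your account now or it will be suspended", "malicious"),
    ("click this link to claim your free prize", "malicious"),
    ("urgent: confirm your password immediately", "malicious"),
    ("meeting moved to 3pm, see updated agenda", "benign"),
    ("thanks for the feedback on the draft report", "benign"),
    ("lunch on friday? the usual place works", "benign"),
]

def train(examples):
    """Count word frequencies and total word counts per class."""
    counts = {"malicious": Counter(), "benign": Counter()}
    totals = {"malicious": 0, "benign": 0}
    for text, label in examples:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def classify(text, counts, totals, alpha=1.0):
    """Return the class with the higher add-alpha-smoothed log-likelihood."""
    vocab = len(set(counts["malicious"]) | set(counts["benign"]))
    scores = {}
    for label in counts:
        score = 0.0
        for word in text.lower().split():
            p = (counts[label][word] + alpha) / (totals[label] + alpha * vocab)
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

counts, totals = train(TRAIN)
print(classify("urgent: verify your password now", counts, totals))  # malicious
```

A transformer-based detector keeps the same interface (text in, label out) but replaces the word-count scores with contextual embeddings learned during fine-tuning, which is what lets it catch paraphrased or obfuscated attacks that keyword statistics miss.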

Papers