Malicious Content
The generation and detection of malicious content is a rapidly evolving research area focused on how large language models (LLMs) and other AI systems can be used to create and spread harmful material, including phishing scams, hate speech, and malware. Current research emphasizes robust detection methods, often built on transformer architectures such as DeBERTa and BERT, and explores multimodal approaches that combine audio and visual cues to identify malicious content more accurately across formats (text, image, video). This work is crucial for mitigating the growing threat of AI-generated malicious content and for informing effective safeguards for online platforms and their users.
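The detection systems described above typically fine-tune large pretrained transformers (e.g., DeBERTa or BERT) on labelled corpora. As a dependency-free illustration of the underlying text-classification step only, here is a minimal multinomial Naive Bayes sketch; the tiny training corpus and the two labels are hypothetical, and a real detector would use far more data and a transformer encoder instead.

```python
import math
from collections import Counter

# Hypothetical toy corpus: a real system would fine-tune DeBERTa/BERT on a
# large labelled dataset. This only illustrates the classification step.
TRAIN = [
    ("verify your account password immediately click here", "malicious"),
    ("urgent your bank account is suspended send details", "malicious"),
    ("meeting moved to three pm see agenda attached", "benign"),
    ("thanks for the report great work on the release", "benign"),
]

def train_nb(examples):
    """Collect per-class word counts, class counts, and the vocabulary."""
    word_counts = {"malicious": Counter(), "benign": Counter()}
    class_counts = Counter()
    vocab = set()
    for text, label in examples:
        words = text.split()
        word_counts[label].update(words)
        class_counts[label] += 1
        vocab.update(words)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""
    total_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        # log prior + sum of smoothed log likelihoods per word
        score = math.log(class_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.split():
            count = word_counts[label][word]
            score += math.log((count + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train_nb(TRAIN)
print(classify("urgent click here to verify your password", *model))  # → malicious
```

Transformer-based detectors replace the bag-of-words likelihoods with contextual embeddings, which is what lets them catch paraphrased or obfuscated phishing text that keyword statistics miss.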