Unsafe Image
Unsafe image research focuses on mitigating the generation and spread of harmful or inappropriate visual content produced by AI models, particularly text-to-image generators. Current research emphasizes detecting and removing unsafe content through techniques such as prompt purification, feature suppression, and defense against adversarial attacks, often leveraging large language models and diffusion models. This work is crucial for responsible AI development, aiming to improve the safety and address the ethical implications of AI-generated imagery across applications ranging from social media moderation to preventing the creation and dissemination of harmful propaganda.
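To make the idea of prompt purification concrete, the following is a minimal sketch of a blocklist-based filter that strips flagged terms from a text-to-image prompt before it reaches the generator. The blocklist contents, function name, and return convention are illustrative assumptions, not any specific published method; real systems typically combine such filters with learned safety classifiers.

```python
import re

# Hypothetical blocklist for illustration only; production systems use
# far larger curated lists plus learned classifiers.
UNSAFE_TERMS = {"gore", "graphic violence", "explicit"}

def purify_prompt(prompt: str, replacement: str = "") -> tuple[str, bool]:
    """Remove blocklisted terms from a text-to-image prompt.

    Returns the purified prompt and a flag indicating whether
    anything was removed.
    """
    flagged = False
    purified = prompt
    for term in UNSAFE_TERMS:
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        if pattern.search(purified):
            flagged = True
            purified = pattern.sub(replacement, purified)
    # Collapse whitespace left behind by removed terms
    purified = re.sub(r"\s+", " ", purified).strip()
    return purified, flagged
```

For example, `purify_prompt("a gore scene")` would return `("a scene", True)`, letting a moderation pipeline either pass the cleaned prompt onward or reject the request entirely.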