Model Attribution
Model attribution aims to identify the generating model behind a given output, a task that is crucial for tracing the origins of AI-generated content such as text and images. Current research focuses on developing robust attribution methods across domains, using techniques such as supervised contrastive learning, evolutionary strategies, and final-layer inversion, within both single-model and ensemble frameworks. The field is vital for combating misinformation, protecting intellectual property, and ensuring accountability as AI-generated content proliferates, with implications for both the security and the ethics of artificial intelligence.
Papers
August 27, 2024
August 6, 2024
July 31, 2024
July 23, 2024
May 10, 2024
May 6, 2024
December 16, 2023
November 7, 2023
September 27, 2023
September 23, 2023
September 14, 2023
May 26, 2023
March 13, 2023
March 1, 2023
February 13, 2023
November 20, 2022
November 8, 2022
May 31, 2022
May 15, 2022