New Threat
Emerging threats to AI systems are a growing area of concern, centered on vulnerabilities to adversarial attacks and privacy breaches. Current research investigates the robustness of a wide range of models, from large language models (LLMs) to underwater image enhancement models, and employs defenses such as adversarial training and differential privacy to mitigate these risks. Such findings are crucial for building more secure and reliable AI systems, affecting both the trustworthiness of AI applications and the safety of critical infrastructure such as power grids.
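To make the adversarial-attack side of this concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the standard attacks that adversarial training defends against. The toy linear classifier, its weights, and the example input are all illustrative assumptions, not taken from any specific paper above; the point is only the core mechanic of perturbing an input along the sign of the loss gradient.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM: shift x in the direction of the sign of the loss
    gradient, with each coordinate perturbed by at most eps."""
    return x + eps * np.sign(grad)

def logistic_loss_grad(w, x, y):
    """Gradient of binary cross-entropy w.r.t. the INPUT x for a
    linear classifier with weights w and label y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # sigmoid probability
    return (p - y) * w                        # dL/dx

# Hypothetical toy classifier and a clean input
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.8])
y = 1

grad = logistic_loss_grad(w, x, y)
x_adv = fgsm_perturb(x, grad, eps=0.1)

def loss(w, x, y):
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# The small, bounded perturbation increases the model's loss
print(loss(w, x, y), loss(w, x_adv, y))
```

Adversarial training then simply mixes such perturbed inputs back into the training set, so the model learns to classify them correctly; the differential-privacy defenses mentioned above address a different threat (leakage of training data) and are typically applied via noised gradients during training rather than at the input.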