New Threats

Emerging threats to AI systems and their applications are a growing concern, centered on vulnerabilities to adversarial attacks and privacy breaches. Current research probes the robustness of a range of models, from large language models (LLMs) to underwater image enhancement models, and applies defenses such as adversarial training and differential privacy to mitigate these risks. These findings are crucial for building more secure and reliable AI systems, affecting both the trustworthiness of AI applications and the safety of critical infrastructure such as power grids.
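To make the adversarial-attack threat concrete, the sketch below generates an adversarial input with the fast gradient sign method (FGSM) against a toy logistic-regression model. This is a minimal illustration of the general technique, not an implementation from any paper in this digest; the model weights and input values are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM: nudge x in the direction that increases the log-loss.

    For logistic regression, dLoss/dx = (p - y) * w, so the attack adds
    eps * sign((p - y) * w) to the input.
    """
    p = sigmoid(w @ x + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w        # gradient of the log-loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy model and input (hypothetical values for demonstration).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.5])
y = 1.0                         # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
clean_p = sigmoid(w @ x + b)    # confidence on the clean input
adv_p = sigmoid(w @ x_adv + b)  # confidence after the small perturbation
print(adv_p < clean_p)          # → True: the attack lowers confidence in the true class
```

Adversarial training, one of the defenses mentioned above, works by generating perturbed inputs like `x_adv` during training and including them in the loss, so the model learns to classify them correctly.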

Papers