Security Vulnerability
Security vulnerabilities in software and AI systems are a major research focus, aimed at identifying and mitigating weaknesses that malicious actors can exploit. Current research emphasizes deep learning models, large language models (LLMs), and topological data analysis for detecting vulnerabilities in code, assessing the robustness of AI models against adversarial attacks, and evaluating the security of AI agents and retrieval-augmented generation (RAG) systems. This work is crucial for improving the security and trustworthiness of software and AI systems across domains, informing both the development of more robust security tools and the responsible deployment of AI technologies.
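To make the code-level side of this concrete, the following is a minimal, hypothetical sketch of the kind of static check that vulnerability-detection tools build on: walking a program's syntax tree and flagging calls that commonly enable injection when they receive non-constant (potentially attacker-controlled) arguments. The function name `find_risky_calls` and the `RISKY_CALLS` set are illustrative choices, not taken from any specific paper or tool; real detectors use far richer features or learned models.

```python
import ast

# Illustrative set of call names often associated with injection flaws.
RISKY_CALLS = {"eval", "exec", "system"}

def find_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line_number, function_name) for each risky call site
    whose arguments are not all compile-time constants."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handle both bare names (eval) and attributes (os.system).
            name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", None)
            if name in RISKY_CALLS:
                # A purely constant argument list is likely benign;
                # anything else may carry untrusted input.
                if not all(isinstance(a, ast.Constant) for a in node.args):
                    findings.append((node.lineno, name))
    return findings

sample = """
import os
cmd = input()
os.system(cmd)
eval("1 + 1")
"""
print(find_risky_calls(sample))  # flags os.system(cmd); constant eval is skipped
```

Production tools in this space (e.g., pattern-based linters or LLM-assisted scanners) layer data-flow tracking and learned ranking on top of this kind of syntactic signal to reduce false positives.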
Papers
(20 paper entries, dated October 21, 2024 through January 13, 2025; titles and links were not preserved.)