Security Vulnerability
Security vulnerabilities in software and AI systems are a major research focus, with work aiming to identify and mitigate weaknesses that malicious actors can exploit. Current research emphasizes deep learning models, large language models (LLMs), and topological data analysis to detect vulnerabilities in code, assess the robustness of AI models against adversarial attacks, and evaluate the security of AI agents and retrieval-augmented generation (RAG) systems. These efforts are crucial for improving the security and trustworthiness of software and AI systems across domains, shaping both the development of more robust security tools and the responsible deployment of AI technologies.
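As an illustrative sketch (not drawn from any of the papers below), the simplest form of code vulnerability detection is a static pattern check; learned detectors aim to generalize beyond such hand-written rules. This hypothetical example walks a Python AST and flags calls to a small, assumed set of dangerous functions:

```python
import ast

# Assumed, illustrative deny-list of dangerous calls; real tools use
# far richer rule sets or learned models.
DANGEROUS_CALLS = {"eval", "exec", "pickle.loads", "os.system"}

def _call_name(func: ast.expr) -> str:
    """Best-effort dotted name of a call target (e.g. 'os.system')."""
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
        return f"{func.value.id}.{func.attr}"
    return ""

def find_dangerous_calls(source: str) -> list[tuple[int, str]]:
    """Return (line number, call name) pairs for flagged calls."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = _call_name(node.func)
            if name in DANGEROUS_CALLS:
                findings.append((node.lineno, name))
    return findings

sample = "import os\nos.system(user_input)\nresult = eval(expr)\n"
print(find_dangerous_calls(sample))  # [(2, 'os.system'), (3, 'eval')]
```

Rule-based checks like this produce false positives and miss semantic flaws, which is one motivation for the deep-learning and LLM-based detectors studied in this area.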
Papers
April 3, 2023
March 20, 2023
March 16, 2023
March 13, 2023
February 21, 2023
February 4, 2023
January 18, 2023
December 23, 2022
December 21, 2022
December 15, 2022
November 28, 2022
November 25, 2022
November 15, 2022
October 5, 2022
July 24, 2022
July 16, 2022
June 22, 2022
June 20, 2022