Security Vulnerability
Security vulnerabilities in software and AI systems are a major research focus, aiming to identify and mitigate weaknesses that can be exploited by malicious actors. Current research emphasizes the use of deep learning models, large language models (LLMs), and topological data analysis to detect vulnerabilities in code, assess the robustness of AI models against adversarial attacks, and evaluate the security of AI agents and retrieval-augmented generation (RAG) systems. These efforts are crucial for improving the security and trustworthiness of software and AI systems across various domains, impacting both the development of more robust security tools and the responsible deployment of AI technologies.
Papers
Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis
Jonathan Brokman, Omer Hofman, Oren Rachmil, Inderjeet Singh, Vikas Pahuja, Rathina Sabapathy Aishvariya Priya, Amit Giloni, Roman Vainshtein, Hisashi Kojima
Vulnerabilities in Machine Learning-Based Voice Disorder Detection Systems
Gianpaolo Perelli, Andrea Panzino, Roberto Casula, Marco Micheletto, Giulia Orrù, Gian Luca Marcialis