Security Risk
Security risks associated with increasingly prevalent AI systems, particularly large language models (LLMs) and their applications in sectors such as healthcare and IoT, are a major area of concern. Current research focuses on identifying vulnerabilities introduced by model fine-tuning, adversarial attacks on data inputs, and weaknesses in the communication channels connecting AI systems to physical devices. This work aims to develop robust security protocols and risk assessment methods that mitigate these threats, thereby improving the safety and trustworthiness of AI-driven applications across domains. The ultimate goal is responsible AI development and deployment that balances innovation with stringent security measures.
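To make the adversarial-attack threat concrete, the sketch below shows the fast gradient sign method (FGSM), one standard way to perturb a model's inputs so that its predictions degrade. It is a minimal illustration assuming a PyTorch classifier; the model, loss choice, and epsilon value are illustrative assumptions, not details drawn from any specific paper.

```python
# Minimal FGSM sketch: perturb an input batch in the direction that
# increases the model's loss. Assumes a PyTorch classifier whose inputs
# are normalized to [0, 1]; epsilon is an illustrative budget.
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module,
                 x: torch.Tensor,
                 y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of input batch x adversarially perturbed against model."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step raises the loss; clamping keeps inputs valid.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

Defenses in this line of work are typically evaluated by how much model accuracy they preserve on such perturbed inputs relative to clean ones.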