Security Risk

Security risks associated with increasingly prevalent AI systems, particularly large language models (LLMs) and their applications in sectors such as healthcare and IoT, are a major area of concern. Current research focuses on identifying vulnerabilities introduced by model fine-tuning, adversarial attacks on data inputs, and weaknesses in the communication channels that connect AI systems to physical devices. This research aims to develop robust security protocols and risk-assessment methods that mitigate these threats, improving the safety and trustworthiness of AI-driven applications across domains. The ultimate goal is responsible AI development and deployment that balances innovation with stringent security measures.

Papers