Security Context
Research in this area centers on leveraging large language models (LLMs) to enhance cybersecurity practice, encompassing tasks such as penetration testing, vulnerability analysis, and threat detection. This work develops and evaluates LLMs in realistic security scenarios, often introducing novel benchmarks and datasets to assess their accuracy and reliability on complex security-related information. The overarching goal is to make cybersecurity efforts more efficient and effective, bridging the gap between human expertise and automated tools while mitigating risks from LLM limitations such as hallucination and adversarial attacks. This research has significant implications both for the security posture of deployed systems and for the productivity of security professionals.
Papers
Fortify Your Defenses: Strategic Budget Allocation to Enhance Power Grid Cybersecurity
Rounak Meyur, Sumit Purohit, Braden K. Webb
Graphene: Infrastructure Security Posture Analysis with AI-generated Attack Graphs
Xin Jin, Charalampos Katsis, Fan Sang, Jiahao Sun, Elisa Bertino, Ramana Rao Kompella, Ashish Kundu