Paper ID: 2503.15552 • Published Mar 18, 2025
Personalized Attacks of Social Engineering in Multi-turn Conversations -- LLM Agents for Simulation and Detection
Tharindu Kumarage, Cameron Johnson, Jadie Adams, Lin Ai, Matthias Kirchner, Anthony Hoogs, Joshua Garland, Julia...
Arizona State University•Kitware, Inc•Columbia University
The rapid advancement of conversational agents, particularly chatbots powered
by Large Language Models (LLMs), poses a significant risk of social engineering
(SE) attacks on social media platforms. SE detection in multi-turn, chat-based
interactions is considerably more complex than single-instance detection due to
the dynamic nature of these conversations. A critical factor in mitigating this
threat is understanding the mechanisms through which SE attacks operate,
specifically how attackers exploit vulnerabilities and how victims' personality
traits contribute to their susceptibility. In this work, we propose an
LLM-agentic framework, SE-VSim, to simulate SE attack mechanisms by generating
multi-turn conversations. We model victim agents with varying personality
traits to assess how psychological profiles influence susceptibility to
manipulation. Using a dataset of over 1000 simulated conversations, we examine
attack scenarios in which adversaries, posing as recruiters, funding agencies,
and journalists, attempt to extract sensitive information. Based on this
analysis, we present a proof of concept, SE-OmniGuard, to offer personalized
protection to users by leveraging prior knowledge of the victim's personality,
evaluating attack strategies, and monitoring information exchanges in
conversations to identify potential SE attempts.
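The simulation-and-detection pipeline described above can be sketched as follows. This is an illustrative toy only: the paper's SE-VSim agents are LLM-backed and SE-OmniGuard is a full detection framework, whereas here the agent replies and the leak check are stubbed with hand-written rules, and all class names, trait names, and thresholds are hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class AttackerAgent:
    """Toy attacker with a pretext (e.g. recruiter, funding agency, journalist)."""
    pretext: str

    def next_message(self, turn: int) -> str:
        # Hypothetical escalation: build rapport first, then probe for
        # sensitive information.
        if turn == 0:
            return f"Hello! I'm a {self.pretext} reaching out about an opportunity."
        return "Could you share your employee ID so I can process this?"

@dataclass
class VictimAgent:
    """Toy victim whose susceptibility depends on a personality profile."""
    personality: dict  # e.g. Big Five traits mapped to scores in [0, 1]

    def respond(self, message: str) -> str:
        # Hypothetical susceptibility rule: a highly agreeable victim
        # complies with the request for sensitive information.
        if "employee ID" in message and self.personality.get("agreeableness", 0.0) > 0.7:
            return "Sure, it's EMP-1234."  # sensitive disclosure
        return "Thanks for reaching out. Can you tell me more first?"

def simulate(attacker: AttackerAgent, victim: VictimAgent, max_turns: int = 3):
    """Run a multi-turn conversation; flag any sensitive disclosure.

    The string check stands in for a detector in the spirit of SE-OmniGuard,
    which in the paper also evaluates attack strategies and personality priors.
    """
    transcript, leaked = [], False
    for turn in range(max_turns):
        msg = attacker.next_message(turn)
        reply = victim.respond(msg)
        transcript.append((msg, reply))
        if "EMP-" in reply:
            leaked = True
    return transcript, leaked
```

Running the loop with victims at opposite ends of the (assumed) agreeableness scale shows how personality profiles can change the outcome of the same attack script, which is the kind of contrast the simulated dataset is built to surface.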