Social Engineering Attack
Social engineering attacks exploit human psychology to trick individuals into divulging sensitive information or performing actions that compromise security. Current research focuses heavily on the impact of large language models (LLMs) on both the creation and the detection of these attacks, particularly phishing emails and voice-based scams (vishing), examining the effectiveness of LLM-generated attacks as well as the development of LLM-based defenses. These studies apply a range of machine learning models, including neural networks and transformer architectures, to analyze attack patterns and improve detection capabilities. The findings underscore the urgent need for robust countermeasures, as AI-powered social engineering poses a significant and evolving threat to individuals and organizations.
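To make the detection side concrete, the sketch below shows one of the simplest text-classification approaches sometimes used as a phishing-detection baseline: a multinomial naive Bayes classifier over bag-of-words counts. The training examples and labels here are hypothetical illustrations, not data from any of the cited studies, and real systems (including the LLM-based defenses discussed above) are far more sophisticated.

```python
from collections import Counter
import math

# Toy labeled messages -- hypothetical examples for illustration only.
TRAIN = [
    ("urgent verify your account password immediately", "phish"),
    ("your bank account has been suspended click here", "phish"),
    ("confirm your login credentials to avoid suspension", "phish"),
    ("meeting notes attached for tomorrow's review", "ham"),
    ("lunch on friday to discuss the quarterly report", "ham"),
    ("the build passed and the release is scheduled", "ham"),
]

def train(examples):
    """Fit per-class word counts for a multinomial naive Bayes model."""
    counts = {"phish": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        words = text.split()
        counts[label].update(words)
        totals[label] += len(words)
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, vocab

def classify(text, counts, totals, vocab):
    """Return the higher log-likelihood class, with Laplace smoothing
    and a uniform prior over the two classes."""
    best_label, best_score = None, float("-inf")
    for label in counts:
        score = 0.0
        for w in text.split():
            score += math.log(
                (counts[label][w] + 1) / (totals[label] + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

counts, totals, vocab = train(TRAIN)
print(classify("verify your password immediately", counts, totals, vocab))
```

A bag-of-words baseline like this is exactly what LLM-generated attacks strain: an attacker model can paraphrase away the telltale keywords, which is why the research above explores transformer-based and LLM-based detectors instead.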
Papers
Assessing AI vs Human-Authored Spear Phishing SMS Attacks: An Empirical Study Using the TRAPD Method
Jerson Francia, Derek Hansen, Ben Schooley, Matthew Taylor, Shydra Murray, Greg Snow
Defending Against Social Engineering Attacks in the Age of LLMs
Lin Ai, Tharindu Kumarage, Amrita Bhattacharjee, Zizhou Liu, Zheng Hui, Michael Davinroy, James Cook, Laura Cassani, Kirill Trapeznikov, Matthias Kirchner, Arslan Basharat, Anthony Hoogs, Joshua Garland, Huan Liu, Julia Hirschberg