Shilling Attack
Shilling attacks manipulate recommender systems by injecting fake user profiles that artificially promote (push) or demote (nuke) specific items. Current research focuses on increasingly sophisticated attack methods, including generative models such as GANs and reinforcement-learning approaches that produce realistic fake reviews and user profiles to evade detection. Because these attacks threaten the integrity and trustworthiness of recommender systems across many domains, progress on both more effective attacks and more resilient detection and defense mechanisms remains central to keeping recommendations reliable.
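To make the profile-injection mechanics concrete, the sketch below illustrates one classic strategy, the average (push) attack: each fake user rates the target item with the maximum score and rates a random set of filler items near their observed means so the profile statistically resembles genuine behaviour. This is a minimal sketch under assumed conventions; the function name, parameters, and the NumPy ratings-matrix representation are illustrative and not drawn from any specific paper listed here.

```python
import numpy as np

def generate_average_attack_profiles(ratings, target_item, n_profiles=50,
                                     filler_size=30, max_rating=5.0, seed=0):
    """Build fake user profiles for an average-attack style push attack (sketch).

    ratings     : (n_users, n_items) array with np.nan for unrated entries
    target_item : column index of the item to promote
    n_profiles  : number of injected fake users
    filler_size : number of filler items rated per fake profile
    Returns an (n_profiles, n_items) array of injected profiles (np.nan = unrated).
    """
    rng = np.random.default_rng(seed)
    n_items = ratings.shape[1]

    # Per-item mean and std over observed ratings; fillers mimic these statistics
    item_mean = np.nanmean(ratings, axis=0)
    item_std = np.nanstd(ratings, axis=0)

    profiles = np.full((n_profiles, n_items), np.nan)
    candidates = np.setdiff1d(np.arange(n_items), [target_item])

    for p in range(n_profiles):
        fillers = rng.choice(candidates, size=filler_size, replace=False)
        # Filler ratings drawn around each item's observed mean to look "normal"
        noise = rng.normal(item_mean[fillers], item_std[fillers] + 1e-6)
        profiles[p, fillers] = np.clip(np.round(noise), 1.0, max_rating)
        # Target item always receives the maximum rating (push attack)
        profiles[p, target_item] = max_rating

    return profiles

# Injected profiles would be appended to the original matrix before (re)training:
# augmented = np.vstack([ratings, generate_average_attack_profiles(ratings, target_item=42)])
```

GAN- and reinforcement-learning-based attacks studied in recent work replace the hand-crafted filler statistics above with learned generators or policies, which makes the injected profiles harder for statistical detectors to separate from genuine users.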