Paper ID: 2203.10629
Explicit User Manipulation in Reinforcement Learning Based Recommender Systems
Matthew Sparr
Recommender systems are highly prevalent in the modern world due to their value to both users and the platforms and services that employ them. Generally, they can improve the user experience and help to increase satisfaction, but they do not come without risks. One such risk is their effect on users: their ability to play an active role in shaping user preferences. This risk is more significant for reinforcement learning based recommender systems, which are capable of learning, for instance, how recommended content shown to a user today may shape that user's preference for other content recommended in the future. Reinforcement learning based recommender systems can thus implicitly learn to influence users if doing so maximizes clicks, engagement, or consumption. On social news and media platforms in particular, this type of behavior is cause for alarm. Social media undoubtedly plays a role in shaping public opinion and has been shown to be a contributing factor to increased political polarization. Recommender systems on such platforms therefore have great potential to influence users in undesirable ways. However, it may also be possible for this form of manipulation to be employed intentionally. With advancements in political opinion dynamics modeling and ever-larger collections of user data, explicit user manipulation, in which the beliefs and opinions of users are steered toward a certain end, emerges as a significant concern for reinforcement learning based recommender systems.
Submitted: Mar 20, 2022