Explainable Recommender System

Explainable recommender systems aim to improve user trust and satisfaction by providing understandable reasons for their recommendations, addressing a key limitation of traditional "black box" methods. Current research focuses on integrating large language models and graph-based approaches to generate more coherent and accurate explanations, often incorporating techniques such as counterfactual reasoning and prototype-based matrix factorization. The field matters because transparent, reliable explanations enhance user experience, ease system debugging, and help mitigate bias or manipulation, ultimately yielding more effective and trustworthy recommender systems.
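
Techniques like prototype-based matrix factorization lend themselves to a concrete illustration: a user's latent factors are expressed as a weighted mixture of a few shared prototype vectors, so each predicted score can be decomposed into per-prototype contributions that act as the explanation. The sketch below is a minimal NumPy toy under that assumption; the data, variable names, and hyperparameters are illustrative and do not reproduce any particular paper's method.

```python
# Minimal sketch of prototype-based matrix factorization for explanation.
# Illustrative toy only (data and hyperparameters are assumptions): user
# representations are softmax-weighted mixtures of shared "prototype"
# vectors, so each prediction can be attributed to a few prototypes.
import numpy as np

rng = np.random.default_rng(0)

# Toy user-item rating matrix (0 = unobserved).
R = np.array([
    [5, 4, 0, 1, 0],
    [4, 0, 4, 1, 1],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
], dtype=float)
n_users, n_items = R.shape
mask = R > 0

K, D = 2, 3                 # number of prototypes, latent dimension
lr, epochs = 0.05, 2000

P = rng.normal(0, 0.1, (K, D))          # prototype vectors
A = rng.normal(0, 0.1, (n_users, K))    # user-to-prototype logits
Q = rng.normal(0, 0.1, (n_items, D))    # item factors

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

for _ in range(epochs):
    W = softmax(A)                      # (n_users, K) prototype weights
    U = W @ P                           # user factors as prototype mixtures
    E = mask * (U @ Q.T - R)            # error on observed entries only
    # Gradients of the squared error (chain rule through W = softmax(A)).
    grad_U = E @ Q
    grad_Q = E.T @ U
    grad_P = W.T @ grad_U
    GW = grad_U @ P.T
    grad_A = W * (GW - (GW * W).sum(axis=1, keepdims=True))
    A -= lr * grad_A
    P -= lr * grad_P
    Q -= lr * grad_Q

# Explain a prediction: decompose the score into per-prototype contributions.
user, item = 0, 2
W = softmax(A)
contrib = W[user] * (P @ Q[item])       # contribution of each prototype
print(f"predicted rating: {contrib.sum():.2f}")
for k in np.argsort(-np.abs(contrib)):
    print(f"  prototype {k}: weight={W[user, k]:.2f}, contribution={contrib[k]:+.2f}")
```

The design choice here is that interpretability comes from the representation itself: because every user is a mixture over a small, shared set of prototypes, the dot-product score splits exactly into prototype-level terms, which can then be surfaced as "recommended because you resemble prototype k" style explanations.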

Papers