Explainable Recommendation

Explainable recommendation aims to make recommender systems more transparent and trustworthy by providing understandable justifications for the items they suggest. Current research focuses on generating natural language explanations with large language models (LLMs), often using techniques such as aspect-based planning, prompt engineering, and reinforcement learning to improve explanation quality, reduce bias, and strengthen robustness. Clear explanations matter because they can increase users' trust in, satisfaction with, and understanding of the recommendation process, leading to a better user experience and more informed decisions.
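As a rough illustration of the aspect-based prompting idea mentioned above, the sketch below composes an explanation prompt from a recommended item and a few user-relevant aspects, then hands it to a pluggable text-generation backend. The `Recommendation` dataclass, `build_explanation_prompt` helper, and `llm_generate` callable are hypothetical names used only for this sketch, not an API from any particular paper or library.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Recommendation:
    user_id: str
    item_name: str
    aspects: List[str]      # aspects assumed to be mined from the user's history/reviews
    rating_pred: float      # the recommender's predicted rating


def build_explanation_prompt(rec: Recommendation) -> str:
    """Compose an aspect-based prompt asking an LLM to justify the recommendation."""
    aspect_list = ", ".join(rec.aspects)
    return (
        f"The user {rec.user_id} was recommended '{rec.item_name}' "
        f"(predicted rating {rec.rating_pred:.1f}/5). "
        f"Write a short, faithful explanation grounded in the aspects this user "
        f"cares about: {aspect_list}. Do not mention aspects that are not listed."
    )


def explain(rec: Recommendation, llm_generate: Callable[[str], str]) -> str:
    """Generate a natural language explanation via any text-generation backend."""
    return llm_generate(build_explanation_prompt(rec))


if __name__ == "__main__":
    rec = Recommendation(
        user_id="u123",
        item_name="Noise-Cancelling Headphones X",
        aspects=["battery life", "comfort", "sound quality"],
        rating_pred=4.6,
    )
    # Stub backend for illustration; a real system would call an LLM API here.
    echo_backend = lambda prompt: f"[LLM output for: {prompt[:60]}...]"
    print(explain(rec, echo_backend))
```

The aspect list constrains the generation so the explanation stays grounded in signals the recommender actually used, which is one common way the literature tries to keep LLM-generated explanations faithful rather than merely plausible.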

Papers