Biased Feedback

Biased feedback is a pervasive issue in machine learning applications such as recommendation systems: because users can only interact with the items they are shown, logged data over-represents exposed items and skews training, hindering the development of accurate and fair models. Current research mitigates this bias through techniques such as inverse propensity weighting, contextual bandit algorithms (including Thompson Sampling variants), and dual learning approaches that model both user-side and item-side biases, often incorporating self-attention mechanisms or contrastive learning for improved performance. Addressing biased feedback is crucial for building reliable and equitable AI systems, with impact ranging from personalized recommendation to online advertising and beyond.
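A minimal sketch of the inverse propensity weighting idea mentioned above, on hypothetical toy data (the item count, click-through rates, and logging propensities are all invented for illustration): a logging policy over-exposes popular items, so the naive average click rate is biased; reweighting each logged click by the ratio of the target policy's exposure probability to the logging propensity recovers an unbiased estimate of the click rate under uniform exposure.

```python
import random

random.seed(0)

# Hypothetical ground truth: per-item click-through rates.
true_ctr = [0.10, 0.20, 0.30, 0.40, 0.50]

# Logging policy exposure probabilities (popular items shown more often).
propensity = [0.05, 0.10, 0.15, 0.30, 0.40]

target_prob = 1.0 / 5  # target policy: uniform exposure over the 5 items
n = 200_000

naive_sum = 0.0
ipw_sum = 0.0
for _ in range(n):
    # Item is shown according to the (biased) logging policy.
    item = random.choices(range(5), weights=propensity)[0]
    click = 1 if random.random() < true_ctr[item] else 0
    naive_sum += click
    # IPW: reweight by target probability / logging propensity.
    ipw_sum += (target_prob / propensity[item]) * click

naive_value = naive_sum / n  # biased toward over-exposed items (~0.39 here)
ipw_value = ipw_sum / n      # debiased estimate; true uniform-policy CTR is 0.30
```

The naive estimate drifts toward the click rate of the heavily exposed items, while the IPW estimate concentrates near the true uniform-exposure average; in practice, propensities are estimated rather than known, and clipping or doubly robust corrections are used to control the variance that large weights introduce.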

Papers