Biased Feedback
Biased feedback is a pervasive issue in machine learning applications such as recommendation systems: it skews training data and thereby hinders the development of accurate and fair models. Current research mitigates this bias through techniques such as inverse propensity weighting, contextual bandit algorithms (including Thompson Sampling variants), and dual learning approaches that model both user and item biases, often incorporating self-attention mechanisms or contrastive learning for improved performance. Addressing biased feedback is crucial for building reliable and equitable AI systems, with impact across fields from personalized recommendation to online advertising and beyond.
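As a minimal sketch of the first technique mentioned above, inverse propensity weighting reweights each logged interaction by the inverse of the probability that the logging policy showed that item, so that over- and under-exposed items contribute in proportion to how a target policy would have acted. The function below is an illustrative inverse propensity scoring (IPS) estimator for off-policy evaluation of logged bandit feedback; the function name and inputs are hypothetical, not from any specific paper on this page.

```python
import numpy as np

def ips_estimate(rewards, target_probs, logging_probs):
    """Inverse propensity scoring estimate of a target policy's value.

    rewards       -- observed rewards for the logged actions
    target_probs  -- probability the target policy takes each logged action
    logging_probs -- propensity: probability the logging policy took it
    """
    rewards = np.asarray(rewards, dtype=float)
    # Reweight each reward by the target/logging probability ratio,
    # correcting the exposure bias introduced by the logging policy.
    weights = np.asarray(target_probs, dtype=float) / np.asarray(logging_probs, dtype=float)
    return float(np.mean(rewards * weights))

# Example: three logged interactions with differing propensities.
value = ips_estimate(rewards=[1.0, 0.0, 1.0],
                     target_probs=[0.5, 0.5, 0.5],
                     logging_probs=[0.25, 0.5, 1.0])
```

In practice, small propensities make this estimator high-variance, which is why clipped or self-normalized variants are commonly used.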