Feature Preference
Feature preference, the tendency of machine learning models to rely disproportionately on certain input features, is a significant challenge across applications ranging from image classification to personalized education. Current research focuses on understanding and mitigating this bias, exploring techniques such as feature balancing and the incorporation of user feedback to guide model learning, often with algorithms such as multi-armed bandits and conditional variational autoencoders. Addressing feature preference is crucial for improving model accuracy, fairness, and efficiency, ultimately leading to more robust and reliable AI systems across diverse fields.
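One common way to make a model's feature preference visible is permutation importance: shuffle one feature at a time and measure how much accuracy drops. The sketch below is a minimal, self-contained illustration (synthetic data and a hand-rolled logistic regression, not any method from the papers summarized here) in which the label depends strongly on one feature and weakly on another; the trained model's accuracy collapses only when the dominant feature is shuffled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the label depends strongly on feature 0, weakly on feature 1.
n = 2000
X = rng.normal(size=(n, 2))
logits = 3.0 * X[:, 0] + 0.3 * X[:, 1]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Plain logistic regression fit by gradient descent (no external dependencies).
w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

def accuracy(X, y, w):
    return np.mean(((X @ w) > 0) == (y > 0.5))

base = accuracy(X, y, w)

# Permutation importance: accuracy drop when a single feature is shuffled.
drops = []
for j in range(2):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    drops.append(base - accuracy(Xp, y, w))

print(drops)  # large drop for feature 0, near zero for feature 1
```

A large gap between the two drops is the signature of feature preference: the model has effectively ignored the weaker signal, which is exactly the behavior that balancing and feedback-driven techniques aim to correct.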