Responsible Machine Learning
Responsible Machine Learning (RML) focuses on developing and deploying machine learning models that are fair, transparent, and aligned with ethical and societal values. Current research emphasizes mitigating biases in datasets and algorithms, improving model interpretability and explainability, and ensuring privacy protection, often drawing on causal inference techniques and systems safety engineering frameworks to analyze and manage risks. The field is central to building trustworthy AI systems and preventing unintended harms in high-stakes applications such as credit scoring and news recommendation, shaping both the scientific understanding of AI and its ethical deployment in society.
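As a concrete illustration of the fairness-evaluation side of this work, the sketch below computes the demographic parity difference of a toy credit-scoring classifier: the gap in approval rates between two demographic groups. The decisions, group labels, and function name are hypothetical examples for illustration only, not drawn from any of the papers listed here.

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-outcome (approval) rates between two groups.

    y_pred : array of 0/1 model decisions (1 = credit approved)
    group  : array of 0/1 protected-group membership indicators
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # approval rate in group 0
    rate_b = y_pred[group == 1].mean()  # approval rate in group 1
    return abs(rate_a - rate_b)

# Hypothetical decisions from a credit-scoring model
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

print(demographic_parity_difference(y_pred, group))  # ~0.2; 0.0 would indicate parity

A value near zero indicates that approval rates are similar across groups; larger values flag a disparity that bias-mitigation methods aim to reduce while preserving predictive accuracy.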
Papers
Best Practices for Responsible Machine Learning in Credit Scoring
Giovani Valdrighi, Athyrson M. Ribeiro, Jansen S. B. Pereira, Vitoria Guardieiro, Arthur Hendricks, Décio Miranda Filho, Juan David Nieto Garcia, Felipe F. Bocca, Thalita B. Veronese, Lucas Wanner, Marcos Medeiros Raimundo
RecSys Challenge 2024: Balancing Accuracy and Editorial Values in News Recommendations
Johannes Kruse, Kasper Lindskow, Saikishore Kalloori, Marco Polignano, Claudio Pomo, Abhishek Srivastava, Anshuk Uppal, Michael Riis Andersen, Jes Frellsen