Responsible Machine Learning

Responsible Machine Learning (RML) focuses on developing and deploying machine learning models that are fair, transparent, and aligned with ethical and societal values. Current research emphasizes mitigating biases in datasets and algorithms, improving model interpretability and explainability, and ensuring privacy protection, often drawing on causal inference techniques and systems safety engineering frameworks to analyze and manage risks. The field is crucial for building trustworthy AI systems and preventing unintended harms in high-stakes applications such as credit scoring and news recommendation, shaping both the scientific understanding of AI and its ethical deployment in society.
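
As a concrete illustration of the bias-measurement step mentioned above, the sketch below computes a simple group fairness metric, the demographic parity difference, for a binary classifier. The data, function name, and the choice of NumPy are illustrative assumptions, not a method taken from any specific paper in this collection.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred : array-like of 0/1 model predictions
    group  : array-like of 0/1 protected-attribute membership
    A value near 0 means the model selects both groups at similar rates.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # selection rate for group 0
    rate_b = y_pred[group == 1].mean()  # selection rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical predictions from, e.g., a credit-scoring model
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.5 -> large disparity
```

In practice this kind of metric is only a first diagnostic; mitigation then intervenes on the data, the training objective, or the decision threshold, and causal analyses are used to distinguish disparities driven by the model from those inherited from the data-generating process.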

Papers