Paper ID: 2401.05240
Decoupling Decision-Making in Fraud Prevention through Classifier Calibration for Business Logic Action
Emanuele Luzio, Moacir Antonelli Ponti, Christian Ramirez Arevalo, Luis Argerich
Machine learning models are typically built for specific targets, such as classifiers, based on the known feature distribution of a population in a business context. However, the models that compute individual features adapt over time to improve precision, which motivates the concept of decoupling: shifting from point evaluation to evaluation over the data distribution. We use calibration strategies as a means of decoupling machine learning (ML) classifiers from score-based actions within business logic frameworks. To evaluate these strategies, we perform a comparative analysis in a real-world business scenario using multiple ML models. Our findings highlight the trade-offs and performance implications of the approach, offering valuable insights for practitioners seeking to optimize their decoupling efforts. In particular, Isotonic and Beta calibration stand out in scenarios where there is a shift between training and testing data.
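A minimal sketch of the idea described in the abstract: calibrate a classifier's raw scores on a held-out set so that downstream business rules act on calibrated probabilities rather than model-specific scores. The use of scikit-learn, the synthetic data, and the 0.9 action threshold are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced "fraud-like" data split into train / calibration / test.
X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# 1) Fit the classifier; its raw scores may be poorly calibrated.
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
raw_cal = clf.predict_proba(X_cal)[:, 1]
raw_test = clf.predict_proba(X_test)[:, 1]

# 2) Learn an isotonic mapping from raw scores to calibrated probabilities
#    on the held-out calibration set.
iso = IsotonicRegression(out_of_bounds="clip").fit(raw_cal, y_cal)
calibrated_test = iso.predict(raw_test)

# 3) Business logic acts on calibrated probabilities, so the action threshold
#    keeps its meaning even if the underlying model is retrained or replaced.
ACTION_THRESHOLD = 0.9  # hypothetical "block transaction" rule
blocked = calibrated_test >= ACTION_THRESHOLD
print(f"Transactions flagged for review: {blocked.sum()} / {len(blocked)}")
```

Beta calibration, also evaluated in the paper, would replace the isotonic mapping in step 2 with a parametric fit; the surrounding decoupling logic stays the same.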
Submitted: Jan 10, 2024