Paper ID: 2207.09560
Holistic Robust Data-Driven Decisions
Amine Bennouna, Bart Van Parys
The design of data-driven formulations for machine learning and decision-making with good out-of-sample performance is a key challenge. The observation that good in-sample performance does not guarantee good out-of-sample performance is generally known as overfitting. In practice, overfitting typically cannot be attributed to a single cause but rather arises from several factors at once. We consider here three sources of overfitting: (i) statistical error, which results from working with finite sample data; (ii) data noise, which occurs when the data points are measured only with finite precision; and (iii) data misspecification, in which a small fraction of all data may be wholly corrupted. We argue that although existing data-driven formulations may be robust against one of these three sources in isolation, they do not provide holistic protection against all overfitting sources simultaneously. We design a novel data-driven formulation that does guarantee such holistic protection and is furthermore computationally viable. Our distributionally robust optimization formulation can be interpreted as a combination of Kullback-Leibler and Lévy-Prokhorov robust optimization formulations, which is novel in its own right. We show that, in the context of classification and regression problems, several popular regularized and robust formulations reduce to particular cases of our proposed formulation. Finally, we apply the proposed holistically robust (HR) formulation to a portfolio selection problem with real stock data and analyze its risk/return trade-off against several benchmark formulations. Our experiments show that our novel ambiguity set provides a significantly better risk/return trade-off.
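One way to read the abstract's description is as a distributionally robust problem over a composed ambiguity set, with each layer addressing one overfitting source. The sketch below is illustrative only: the notation (loss $\ell$, empirical distribution $\hat{P}_n$, corruption level $\alpha$, noise radius $\varepsilon$, statistical radius $r$) and the particular ordering of the composition are assumptions, not taken verbatim from the paper:

```latex
\min_{\theta \in \Theta} \;
\sup_{Q \in \mathcal{U}_{\alpha,\varepsilon,r}(\hat{P}_n)}
\mathbb{E}_{Q}\!\left[\ell(\theta,\xi)\right],
\qquad
\mathcal{U}_{\alpha,\varepsilon,r}(\hat{P}_n)
= \Bigl\{\, Q \;:\; \exists\, Q',\,Q'' \ \text{s.t.}\
\underbrace{\mathrm{TV}(\hat{P}_n, Q') \le \alpha}_{\text{misspecification}},\;
\underbrace{\mathrm{LP}(Q', Q'') \le \varepsilon}_{\text{data noise}},\;
\underbrace{\mathrm{KL}(Q'' \,\|\, Q) \le r}_{\text{statistical error}}
\,\Bigr\}.
```

Here a total-variation-style budget $\alpha$ stands in for the corrupted fraction, a Lévy-Prokhorov ball of radius $\varepsilon$ for measurement noise, and a Kullback-Leibler ball of radius $r$ for finite-sample error; the paper's exact definitions may differ.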
Submitted: Jul 19, 2022