Paper ID: 2409.12217

Effects of Common Regularization Techniques on Open-Set Recognition

Zachary Rabin, Jim Davis, Benjamin Lewis, Matthew Scherreik

In recent years there has been increasing interest in the field of Open-Set Recognition, which allows a classification model to identify an input as "unknown" when it belongs to an object or class not present in the training set. This ability to flag unknown inputs is of vital importance to many real-world classification applications. Because almost all modern neural network training methods rely on extensive regularization for generalization, it is important to examine how regularization techniques impact a model's ability to perform Open-Set Recognition. In this work, we examine the relationship between common regularization techniques and Open-Set Recognition performance. Our experiments are agnostic to the specific open-set detection algorithm and examine the effects across a wide range of datasets. We show empirically that regularization methods can provide significant improvements to Open-Set Recognition performance, and we provide new insights into the relationship between closed-set accuracy and Open-Set Recognition performance.
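
To make the flagging of unknown inputs concrete, the following is a minimal sketch (not the paper's method) of one common detection-algorithm-agnostic baseline: maximum softmax probability thresholding, where an input whose top softmax score falls below a tuned threshold is rejected as "unknown." The function name, the threshold value, and the unknown_label sentinel are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumption: a trained PyTorch classifier over the known classes).
# An input is rejected as "unknown" when its maximum softmax probability is below
# a threshold tuned on known-class validation data.
import torch
import torch.nn.functional as F

def predict_open_set(model, x, threshold=0.5, unknown_label=-1):
    """Return predicted class indices, with `unknown_label` for rejected inputs."""
    model.eval()
    with torch.no_grad():
        logits = model(x)                        # shape: (batch, num_known_classes)
        probs = F.softmax(logits, dim=1)
        conf, preds = probs.max(dim=1)           # top softmax score per input
        preds[conf < threshold] = unknown_label  # flag low-confidence inputs as unknown
    return preds
```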

Submitted: Sep 3, 2024