Paper ID: 2201.01415

Problem-dependent attention and effort in neural networks with applications to image resolution and model selection

Chris Rohlfs

This paper introduces two new ensemble-based methods for reducing the data and computation costs of image classification. They can be used with any set of classifiers and require no additional training. In the first approach, data usage is reduced by analyzing a full-sized image only if the model has low confidence in its classification of a low-resolution pixelated version. When applied to the best-performing classifiers considered here, data usage is reduced by 61.2% on MNIST, 69.6% on KMNIST, 56.3% on FashionMNIST, 84.6% on SVHN, 40.6% on ImageNet, and 27.6% on ImageNet-V2, in each case with less than a 5% reduction in accuracy. For CIFAR-10, however, the pixelated data are not particularly informative, and the ensemble approach increases data usage while reducing accuracy. In the second approach, computation cost is reduced by using a complex model only if a simpler model has low confidence in its classification. Computation cost is reduced by 82.1% on MNIST, 47.6% on KMNIST, 72.3% on FashionMNIST, 86.9% on SVHN, 89.2% on ImageNet, and 81.5% on ImageNet-V2, again with less than a 5% reduction in accuracy; for CIFAR-10 the corresponding improvement is smaller, at 13.5%. When cost is no object, choosing the projection from the most confident model for each observation increases validation accuracy to 81.0% from 79.3% on ImageNet and to 69.4% from 67.5% on ImageNet-V2.
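Both approaches share the same confidence-gated cascade structure: run a cheap stage first and escalate to an expensive stage only when confidence falls below a cutoff. The sketch below illustrates that pattern, along with the cost-insensitive max-confidence variant, in Python; the function names, the use of maximum softmax probability as the confidence measure, and the threshold value are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def confidence(probs):
    # Confidence taken as the maximum softmax probability; the paper may
    # use a different confidence measure -- this choice is an assumption.
    return float(np.max(probs))

def cascade_predict(cheap_model, expensive_model, cheap_input, full_input,
                    threshold=0.9):
    """Two-stage cascade: run the cheap stage (a low-resolution image or a
    simple model) and fall back to the expensive stage (the full-sized
    image or a complex model) only when the cheap stage's confidence is
    below `threshold`. The threshold is a hypothetical tuning parameter."""
    probs = cheap_model(cheap_input)       # 1-D array of class probabilities
    if confidence(probs) >= threshold:
        return int(np.argmax(probs))       # accept the cheap prediction
    probs = expensive_model(full_input)    # escalate to the expensive stage
    return int(np.argmax(probs))

def max_confidence_predict(models, inputs):
    """Cost-insensitive variant: query every model and keep the prediction
    from whichever model is most confident on this observation."""
    all_probs = [m(x) for m, x in zip(models, inputs)]
    best = max(all_probs, key=confidence)
    return int(np.argmax(best))
```

In practice the cutoff would be tuned on validation data to trade the cost savings against the tolerated accuracy loss (here, under 5%).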

Submitted: Jan 5, 2022