Paper ID: 2205.00224
Loss Function Entropy Regularization for Diverse Decision Boundaries
Sue Sin Chong
Is it possible to train several classifiers to perform meaningful crowd-sourcing and produce a better prediction label set without ground-truth annotation? This paper modifies contrastive learning objectives to automatically train a self-complementing ensemble that produces state-of-the-art predictions on the CIFAR10 and CIFAR100-20 tasks. We present a straightforward method for modifying a single unsupervised classification pipeline so that it automatically generates an ensemble of neural networks with varied decision boundaries, which together learn a more extensive feature set of the classes. Loss Function Entropy Regularization (LFER) consists of regularization terms added to the pre-training and contrastive-learning loss functions. LFER serves as a knob for adjusting the entropy of the output space of unsupervised learning, thereby diversifying the latent decision boundaries learned by the networks. An ensemble trained with LFER achieves higher prediction accuracy on samples near decision boundaries. LFER is an effective means of perturbing decision boundaries, and has produced classifiers that surpass the state of the art at the contrastive-learning stage. Experiments show that LFER yields an ensemble whose members match state-of-the-art accuracy while exhibiting varied latent decision boundaries. This allows meaningful cross-verification of samples near decision boundaries, encouraging their correct classification. By compounding the probability of correct prediction for a single sample across the trained ensemble, our method improves upon a single classifier by denoising and affirming correct feature mappings.
Submitted: Apr 30, 2022
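To make the idea concrete, here is a minimal sketch (not the authors' released code) of how an entropy regularization term could be attached to a pre-training or contrastive loss, with a per-member coefficient `lfer_weight` that perturbs each ensemble member's output-space entropy; the base loss and the coefficient values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def output_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean entropy of the predicted class distribution over a batch."""
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    return -(probs * log_probs).sum(dim=1).mean()


def lfer_loss(base_loss: torch.Tensor,
              logits: torch.Tensor,
              lfer_weight: float) -> torch.Tensor:
    """Base (pre-training or contrastive) loss plus an entropy term.

    Varying `lfer_weight` across ensemble members nudges each member's
    output-space entropy differently, yielding varied decision boundaries
    while leaving the underlying objective intact.
    """
    return base_loss + lfer_weight * output_entropy(logits)


# Hypothetical usage: train each ensemble member with its own coefficient,
# then combine their predictions (e.g. by majority vote) at inference time.
# for lfer_weight, model in zip([0.0, 0.1, 0.2], ensemble):
#     logits = model(batch)
#     loss = lfer_loss(contrastive_loss(logits, batch), logits, lfer_weight)
#     loss.backward()
```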