Paper ID: 2304.04137

RD-DPP: Rate-Distortion Theory Meets Determinantal Point Process to Diversify Learning Data Samples

Xiwen Chen, Huayu Li, Rahul Amin, Abolfazl Razi

In some practical learning tasks, such as traffic video analysis, the number of available training samples is restricted by factors such as limited communication bandwidth and computation power. The Determinantal Point Process (DPP) is a common method for selecting the most diverse samples to enhance learning quality. However, it has two limitations: first, the number of selected samples is bounded by the rank of the kernel matrix, which is in turn determined by the dimensionality of the data samples; second, it is not easily customizable to different learning tasks. In this paper, we propose a new way of measuring task-oriented diversity based on Rate-Distortion (RD) theory, appropriate for multi-level classification. To this end, we establish a fundamental relationship between DPP and RD theory. We observe that the upper bound on the diversity of data selected by DPP exhibits a universal $\textit{phase transition}$, which suggests that DPP is beneficial only at the beginning of sample accumulation. This observation leads to a bi-modal method: in the first mode, RD-DPP selects the initial data samples; in the second mode, classification inconsistency (as an uncertainty measure) selects the subsequent samples. The bi-modal design also removes the restriction imposed by the rank of the similarity matrix. Experiments on six datasets and five benchmark models show that our method consistently outperforms random selection, DPP-based methods, and alternatives such as uncertainty-based and coreset methods under all sampling budgets, while exhibiting high generalizability to different learning tasks.
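To make the rank limitation and the bi-modal schedule concrete, the following is a minimal sketch (not the authors' implementation): it assumes a linear similarity kernel, uses the standard greedy log-determinant heuristic for DPP MAP inference, and takes a generic `uncertainty` array as a stand-in for the paper's classification-inconsistency score; all function names are illustrative.

```python
import numpy as np


def greedy_dpp_select(features, budget, eps=1e-10):
    """Greedily pick samples maximizing the log-determinant of the selected
    kernel sub-matrix (a standard greedy heuristic for DPP MAP inference).

    With a linear kernel, every candidate's marginal gain collapses to ~0
    once the number of selections reaches the kernel rank (at most the
    feature dimension), illustrating the rank limitation discussed above.
    """
    K = features @ features.T                    # linear similarity kernel
    n = K.shape[0]
    d = np.array(np.diag(K), dtype=float)        # current marginal gains
    C = np.zeros((budget, n))                    # incremental Cholesky rows
    selected = []
    for t in range(budget):
        j = int(np.argmax(d))
        if d[j] < eps:                           # kernel rank reached
            break
        selected.append(j)
        # Orthogonalize candidate similarities against already-picked items.
        e = (K[j] - C[:t].T @ C[:t, j]) / np.sqrt(d[j])
        C[t] = e
        d = d - e ** 2                           # updated conditional gains
        d[selected] = -np.inf                    # never re-pick an item
    return selected


def bimodal_select(features, budget, uncertainty):
    """Toy bi-modal schedule: diversity-driven picks first, then fall back to
    the most uncertain remaining samples once diversity gains vanish."""
    picked = greedy_dpp_select(features, budget)
    if len(picked) < budget:
        chosen = set(picked)
        by_uncertainty = [int(i) for i in np.argsort(-uncertainty) if i not in chosen]
        picked += by_uncertainty[: budget - len(picked)]
    return picked


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))               # 200 samples, 16-dim features
    u = rng.random(200)                           # placeholder uncertainty scores
    idx = bimodal_select(X, budget=40, uncertainty=u)
    print(f"{len(idx)} samples selected; DPP picks cap near the kernel rank (16).")
```

In this toy setup, the greedy DPP stage stops near 16 selections (the rank of the 16-dimensional linear kernel), and the remaining budget is filled by uncertainty, mirroring the phase-transition argument; the paper's actual first-mode criterion is the RD-based diversity measure rather than a plain log-determinant.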

Submitted: Apr 9, 2023