Paper ID: 2405.21042
Comparing the information content of probabilistic representation spaces
Kieran A. Murphy, Sam Dillavou, Dani S. Bassett
Probabilistic representation spaces convey information about a dataset, and to understand the effects of factors such as training loss and network architecture, we seek to compare the information content of such spaces. However, most existing methods for comparing representation spaces assume that representations are points, neglecting the distributional nature of probabilistic representations. Here, instead of building upon point-based measures of comparison, we build upon classic methods from the literature on hard clustering. We generalize two information-theoretic methods of comparing hard clustering assignments so that they apply to general probabilistic representation spaces. We then propose a practical estimation method based on fingerprinting a representation space with a sample of the dataset, applicable when the communicated information is only a handful of bits. With unsupervised disentanglement as a motivating problem, we find fragments of information that are repeatedly captured by individual latent dimensions in VAE and InfoGAN ensembles. Then, by comparing the full latent spaces of models, we find highly consistent information content across datasets, methods, and hyperparameters, even though there is often a point during training at which repeat runs exhibit substantial variety. Finally, we leverage the differentiability of the proposed method to perform model fusion, synthesizing the information content of multiple weak learners, each incapable of representing the global structure of the dataset. Across these case studies, the direct comparison of information content provides a natural basis for understanding the processing of information.
Submitted: May 31, 2024
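
The abstract describes the estimation procedure only at a high level. Below is a minimal sketch of one plausible reading, assuming diagonal-Gaussian posteriors, using the Bhattacharyya coefficient as the pairwise posterior overlap, and treating each space's row-normalized overlap matrix over a sample as its "fingerprint"; two fingerprints are then compared via the mutual information between the induced soft assignments. The function names and the choice of overlap measure are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (hypothetical names; not the paper's exact estimator):
# fingerprint a probabilistic representation space with a sample of the
# dataset, then compare two spaces via mutual information between the
# induced soft clusterings.
import numpy as np

def bhattacharyya_gaussian(mu, var):
    """Pairwise Bhattacharyya coefficients between diagonal Gaussians.

    mu, var: arrays of shape (n, d) holding posterior means and variances
    for n sample points. Returns an (n, n) matrix of overlaps in (0, 1].
    """
    mu_i, mu_j = mu[:, None, :], mu[None, :, :]
    v_i, v_j = var[:, None, :], var[None, :, :]
    v_avg = 0.5 * (v_i + v_j)
    # Bhattacharyya distance between Gaussians, summed over dimensions.
    dist = 0.125 * np.sum((mu_i - mu_j) ** 2 / v_avg, axis=-1) \
         + 0.5 * np.sum(np.log(v_avg / np.sqrt(v_i * v_j)), axis=-1)
    return np.exp(-dist)

def soft_fingerprint(mu, var):
    """Row-normalize pairwise overlaps into soft assignments p(cluster | x)."""
    overlap = bhattacharyya_gaussian(mu, var)
    return overlap / overlap.sum(axis=1, keepdims=True)

def soft_mutual_information(p1, p2, eps=1e-12):
    """Mutual information (in nats) between two soft clusterings.

    p1, p2: (n, k1) and (n, k2) row-stochastic soft assignments of the
    same n sample points, assumed conditionally independent given x.
    """
    n = p1.shape[0]
    joint = p1.T @ p2 / n                  # p(c1, c2)
    m1 = joint.sum(axis=1, keepdims=True)  # marginal p(c1)
    m2 = joint.sum(axis=0, keepdims=True)  # marginal p(c2)
    return float(np.sum(joint * np.log((joint + eps) / (m1 @ m2 + eps))))
```

Given posterior parameters (mu_a, var_a) and (mu_b, var_b) produced by two encoders on the same sample, soft_mutual_information(soft_fingerprint(mu_a, var_a), soft_fingerprint(mu_b, var_b)) compares the two spaces; rewriting the same computation in an autodiff framework would expose the differentiability the abstract exploits for model fusion.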