Probabilistic Embeddings

Probabilistic embeddings represent data points not as single vectors but as probability distributions, capturing inherent uncertainty and variability in the data. Current research focuses on learning these distributions effectively, often employing techniques such as maximum kernel entropy, variational inference, and probabilistic versions of existing algorithms like multidimensional scaling and contrastive learning. By better handling the ambiguity and noise inherent in real-world data, this approach improves performance in applications including cross-modal retrieval, domain generalization with limited data, and self-supervised learning. The resulting representations are more robust and informative, and are proving valuable across diverse fields, from soundscape mapping to medical image analysis.
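
The core idea can be made concrete with a minimal sketch: each item is embedded as a diagonal Gaussian (a mean vector plus a per-dimension scale) rather than a point, so two items with identical means but different uncertainty remain distinguishable. The Gaussian parameterization, the function names, and the closed-form 2-Wasserstein distance used here are illustrative choices, not the method of any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_embedding(mu, sigma):
    """A probabilistic embedding: a diagonal Gaussian N(mu, diag(sigma^2))
    instead of a single point vector. (Illustrative parameterization.)"""
    return np.asarray(mu, dtype=float), np.asarray(sigma, dtype=float)

def sample(emb, n=1000):
    """Draw samples via the reparameterization trick: mu + sigma * eps."""
    mu, sigma = emb
    eps = rng.standard_normal((n, mu.size))
    return mu + sigma * eps

def w2_distance(a, b):
    """Closed-form 2-Wasserstein distance between diagonal Gaussians:
    W2^2 = ||mu_a - mu_b||^2 + ||sigma_a - sigma_b||^2."""
    (mu_a, s_a), (mu_b, s_b) = a, b
    return np.sqrt(np.sum((mu_a - mu_b) ** 2) + np.sum((s_a - s_b) ** 2))

# Two items with the same mean but different uncertainty: a point
# embedding would treat them as identical, while the probabilistic
# distance keeps the difference in spread visible.
confident = make_embedding([1.0, 0.0], [0.1, 0.1])
ambiguous = make_embedding([1.0, 0.0], [1.0, 1.0])
print(w2_distance(confident, ambiguous))  # nonzero despite equal means
```

Distribution-aware distances like this (or expected likelihoods under the two Gaussians) are what let downstream tasks such as retrieval rank ambiguous items differently from confident ones.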

Papers