Similarity Measure

Similarity measures quantify the resemblance between data points, objects, or systems, serving as fundamental tools across diverse scientific fields and applications. Current research emphasizes developing more robust and nuanced measures, particularly ones that address the limitations of existing methods in handling complex data structures (e.g., images, text, time series, graphs) and that account for local variations, noise, and high dimensionality. This includes novel approaches based on copulas, optimal transport, contrastive learning, and graph embeddings, often integrated within machine learning models such as sparse mixture-of-experts architectures or graph neural networks. Improved similarity measures are crucial for advancing fields like computer vision, natural language processing, bioinformatics, and personalized medicine, enabling more accurate analysis and better decision-making.
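As a concrete illustration (not drawn from any specific paper above), one of the simplest and most widely used similarity measures over feature vectors is cosine similarity, which compares the directions of two vectors regardless of their magnitudes. A minimal sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors; result lies in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0.0 or norm_b == 0.0:
        # Convention chosen here: similarity with a zero vector is 0.0.
        return 0.0
    return dot / (norm_a * norm_b)

# Vectors pointing in the same direction score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1, 2, 3], [2, 4, 6]))  # ~1.0
print(cosine_similarity([1, 0], [0, 1]))        # 0.0
```

Measures like this are a baseline; much of the research summarized above aims to go beyond such fixed geometric formulas toward learned or structure-aware notions of similarity.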

Papers