Mixed Space
Mixed space research focuses on developing methods to represent and analyze data residing in multiple, potentially disparate spaces, aiming to bridge the gap between different data modalities and improve performance on downstream tasks. Current research emphasizes the development of novel embedding techniques, often leveraging graph neural networks, variational autoencoders, and optimal transport methods, to create unified representations that capture relationships across diverse data types. This work has significant implications for fields such as machine learning, natural language processing, and cultural heritage studies, by enabling more robust and interpretable analyses of complex, multi-modal datasets.
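As a concrete illustration of one of the techniques mentioned above, the sketch below uses entropy-regularized optimal transport (the standard Sinkhorn iterations) to compute a soft correspondence between points in two different embedding spaces. This is a generic, minimal example with made-up toy data, not the method of any paper listed below; the point sizes, regularization strength, and uniform weights are all illustrative assumptions.

```python
import numpy as np

def sinkhorn(x, y, eps=0.1, n_iters=200):
    """Entropy-regularized optimal transport (Sinkhorn iterations)
    between two point clouds x (n, d) and y (m, d) with uniform
    weights. Returns the (n, m) transport plan coupling the spaces."""
    n, m = len(x), len(y)
    a = np.full(n, 1.0 / n)  # uniform mass on source points
    b = np.full(m, 1.0 / m)  # uniform mass on target points
    # squared-Euclidean cost between every cross-space pair of points
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)     # Gibbs kernel from the cost matrix
    u = np.ones(n)
    v = np.ones(m)
    for _ in range(n_iters):  # alternating diagonal scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))   # toy "embeddings" from one space
y = rng.normal(size=(4, 3))   # toy "embeddings" from another space
P = sinkhorn(x, y)
# Each row of P distributes a source point's mass (1/5) over the
# target points: a soft matching across the two embedding spaces.
```

The resulting plan `P` can be read as a probabilistic alignment between the two spaces; its row sums match the source weights and its column sums match the target weights, up to the tolerance of the fixed number of iterations.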
Papers
mCLARI: a shape-morphing insect-scale robot capable of omnidirectional terrain-adaptive locomotion in laterally confined spaces
Heiko Kabutz, Alexander Hedrick, Parker McDonnell, Kaushik Jayaram
Demystifying Embedding Spaces using Large Language Models
Guy Tennenholtz, Yinlam Chow, Chih-Wei Hsu, Jihwan Jeong, Lior Shani, Azamat Tulepbergenov, Deepak Ramachandran, Martin Mladenov, Craig Boutilier
$\omega$PAP Spaces: Reasoning Denotationally About Higher-Order, Recursive Probabilistic and Differentiable Programs
Mathieu Huot, Alexander K. Lew, Vikash K. Mansinghka, Sam Staton
The Gaussian kernel on the circle and spaces that admit isometric embeddings of the circle
Nathaël Da Costa, Cyrus Mostajeran, Juan-Pablo Ortega