Mixed Space
Mixed space research develops methods to represent and analyze data residing in multiple, potentially disparate spaces, with the goal of bridging different data modalities and improving downstream tasks. Current work emphasizes novel embedding techniques, often leveraging graph neural networks, variational autoencoders, and optimal transport, to build unified representations that capture relationships across diverse data types. These methods have significant implications for fields such as machine learning, natural language processing, and cultural heritage studies, enabling more robust and interpretable analysis of complex, multi-modal datasets.
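As a concrete illustration of one technique mentioned above, the sketch below aligns two sets of embeddings from different spaces with entropy-regularized optimal transport (the Sinkhorn-Knopp iteration). This is a minimal, self-contained example, not drawn from any of the listed papers; the toy data, marginal weights, and regularization value are illustrative assumptions.

```python
import numpy as np

def sinkhorn(a, b, cost, reg=0.1, n_iter=200):
    """Entropy-regularized optimal transport via Sinkhorn-Knopp.

    a, b : marginal weights of the two point sets (each sums to 1)
    cost : pairwise cost matrix between embeddings in the two spaces
    reg  : entropic regularization strength (illustrative choice)
    Returns the transport plan coupling the two spaces.
    """
    K = np.exp(-cost / reg)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # rescale columns toward b
        u = a / (K @ v)                  # rescale rows toward a
    return u[:, None] * K * v[None, :]   # transport plan

# Toy example: two "modalities" that are permuted, noisy copies (synthetic data)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                   # embeddings in space A
Y = X[::-1] + 0.01 * rng.normal(size=(5, 3))  # reversed copies in space B

cost = np.linalg.norm(X[:, None] - Y[None, :], axis=-1) ** 2
a = np.full(5, 1 / 5)
b = np.full(5, 1 / 5)
plan = sinkhorn(a, b, cost, reg=0.05)

# Each row's largest mass indicates the matched point across spaces;
# the row and column sums of the plan recover the marginals a and b.
print(plan.argmax(axis=1))
```

In practice, libraries such as POT (Python Optimal Transport) provide tuned implementations of this iteration; the explicit loop here is only meant to show how a coupling between two embedding spaces is computed.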
Papers
AI-EDI-SPACE: A Co-designed Dataset for Evaluating the Quality of Public Spaces
Shreeyash Gowaikar, Hugo Berard, Rashid Mushkani, Emmanuel Beaudry Marchand, Toumadher Ammar, Shin Koseki
From Fake Perfects to Conversational Imperfects: Exploring Image-Generative AI as a Boundary Object for Participatory Design of Public Spaces
Jose A. Guridi, Angel Hsing-Chi Hwang, Duarte Santo, Maria Goula, Cristobal Cheyre, Lee Humphreys, Marco Rangel
Using Contrastive Learning with Generative Similarity to Learn Spaces that Capture Human Inductive Biases
Raja Marjieh, Sreejan Kumar, Declan Campbell, Liyi Zhang, Gianluca Bencomo, Jake Snell, Thomas L. Griffiths
Topological Perspectives on Optimal Multimodal Embedding Spaces
Abdul Aziz A. B, A. B Abdul Rahim