Compositional Embeddings
Compositional embeddings aim to represent complex concepts by combining simpler, learned representations, enabling more nuanced and flexible modeling of relationships in data. Current research focuses on improving the robustness and efficiency of these methods, addressing challenges such as overfitting in image generation and answering compositional queries in information retrieval, often by employing probabilistic models or region-based (box) embeddings. This work has significant implications for applications including recommendation systems, multimodal image retrieval, and zero-shot learning, where it enables more accurate and scalable handling of complex, high-dimensional data.
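To make the region-based (box) embedding idea concrete, the following is a minimal sketch in PyTorch: each concept is an axis-aligned box, a compositional query (e.g., "red" AND "car") is formed by intersecting boxes, and a smoothed log-volume serves as a probabilistic score. The names (BoxEmbedding, intersect, soft_volume) and the exact scoring rule are illustrative assumptions, not the method of any particular paper surveyed here.

```python
import torch
import torch.nn as nn

class BoxEmbedding(nn.Module):
    """Minimal box-embedding table: each concept is an axis-aligned box
    parameterised by a minimum corner and a positive extent."""
    def __init__(self, num_concepts: int, dim: int):
        super().__init__()
        self.min_corner = nn.Embedding(num_concepts, dim)
        self.log_size = nn.Embedding(num_concepts, dim)  # softplus -> positive extent

    def forward(self, idx: torch.Tensor):
        lo = self.min_corner(idx)
        hi = lo + nn.functional.softplus(self.log_size(idx))
        return lo, hi

def intersect(box_a, box_b):
    """Compose two concepts as the intersection of their boxes
    (e.g. the compositional query 'red' AND 'car')."""
    lo = torch.maximum(box_a[0], box_b[0])
    hi = torch.minimum(box_a[1], box_b[1])
    return lo, hi

def soft_volume(box, temperature: float = 1.0):
    """Smoothed log-volume used as a probabilistic score; softplus keeps
    gradients alive even when boxes barely overlap."""
    side = nn.functional.softplus(box[1] - box[0], beta=1.0 / temperature)
    return side.clamp_min(1e-10).log().sum(dim=-1)

# Usage: score a candidate item against a composed query box.
emb = BoxEmbedding(num_concepts=100, dim=16)
red, car = emb(torch.tensor([3])), emb(torch.tensor([17]))
query = intersect(red, car)                       # composed concept "red car"
item = emb(torch.tensor([42]))                    # candidate item
score = soft_volume(intersect(query, item)) - soft_volume(item)  # ~ log P(query | item)
print(score)
```

The intersection-as-composition step is what makes the representation compositional: the query box is built from its parts rather than learned as a separate atomic embedding, which is what allows zero-shot handling of unseen concept combinations.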