Continuous Embeddings

Continuous embeddings represent discrete data, such as words or logical formulas, as continuous vectors in a high-dimensional space, enabling the application of continuous mathematical tools and machine learning algorithms. Current research focuses on improving the quality and interpretability of these embeddings, particularly within large language models and multi-modal learning frameworks, often employing transformer-based architectures and variational flow methods. This approach bridges the gap between symbolic reasoning and data-driven learning, with applications ranging from enhancing the security and robustness of LLMs to improving knowledge graph completion and generative modeling in speech and vision.
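The core idea above can be sketched in a few lines: a minimal, hypothetical embedding table maps discrete symbols to vectors, after which continuous operations such as cosine similarity apply directly. The vocabulary, dimension, and random initialization below are illustrative assumptions, not any particular model's learned embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary of discrete symbols (words here; logical formulas or
# knowledge-graph entities would be embedded the same way).
vocab = ["cat", "dog", "car", "truck"]
dim = 8  # illustrative embedding dimension; real models use hundreds or more

# The embedding table maps each discrete symbol to a continuous vector.
# In practice these vectors are learned; here they are random for illustration.
embeddings = {w: rng.normal(size=dim) for w in vocab}

def cosine(u, v):
    """Cosine similarity, a standard continuous-space comparison."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def nearest(word):
    """Nearest neighbour by cosine similarity -- a continuous operation
    that has no direct analogue over the raw discrete symbols."""
    q = embeddings[word]
    others = [w for w in vocab if w != word]
    return max(others, key=lambda w: cosine(q, embeddings[w]))

print(nearest("cat"))
```

With learned (rather than random) vectors, semantically related symbols end up close together, which is what lets gradient-based models and similarity search operate on originally discrete data.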

Papers