Relative Representations
Relative representations are a technique for aligning and integrating the latent spaces of independently trained neural networks: instead of describing a sample by its absolute coordinates in latent space, each sample is encoded by its similarities (typically cosine) to a shared set of anchor samples, which makes the representation invariant to rotations and other angle-preserving transformations of the space. This invariance enables zero-shot model stitching and improves interoperability across diverse models and modalities; a minimal sketch of the construction is given below. Current research focuses on improving the robustness and efficiency of these representations, often through techniques such as topological regularization, inverse relative projections, and dynamic optimization strategies, applied to architectures including CNNs, transformers, and VAEs. The approach holds significant promise for enhancing the reusability and composability of pre-trained models, facilitating advances in fields such as reinforcement learning, semantic communication, and weakly supervised learning.
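To make the core construction concrete, here is a minimal NumPy sketch of the standard formulation (each sample encoded by its cosine similarities to shared anchors). The encoders are simulated: the second latent space is a random rotation of the first, a hypothetical stand-in for two independently trained networks whose spaces differ by an isometry. All names, dimensions, and data in the sketch are illustrative assumptions, not a specific published implementation.

```python
import numpy as np

def relative_representation(embeddings, anchor_embeddings):
    """Map absolute embeddings to cosine similarities with anchors.

    embeddings:        (n, d) array of encoder outputs for n samples
    anchor_embeddings: (k, d) array of encoder outputs for k anchors
    Returns an (n, k) array: row i holds cos(e_i, a_1), ..., cos(e_i, a_k).
    """
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    a = anchor_embeddings / np.linalg.norm(anchor_embeddings, axis=1, keepdims=True)
    return e @ a.T

# Demo with hypothetical encoders: samples and anchors in encoder A's space.
rng = np.random.default_rng(0)
d, k, n = 64, 10, 5
samples_a = rng.normal(size=(n, d))
anchors_a = rng.normal(size=(k, d))

# A random orthogonal matrix simulates encoder B: same latent geometry,
# different (rotated) axes, as happens across independent training runs.
q, _ = np.linalg.qr(rng.normal(size=(d, d)))
samples_b = samples_a @ q
anchors_b = anchors_a @ q

rel_a = relative_representation(samples_a, anchors_a)
rel_b = relative_representation(samples_b, anchors_b)

# Cosine similarity is invariant to the rotation, so the two relative
# representations coincide even though the absolute embeddings differ.
assert np.allclose(rel_a, rel_b)
print("max difference:", np.abs(rel_a - rel_b).max())
```

This invariance is what underlies zero-shot stitching: because both encoders project onto the same anchor set, a downstream head trained on relative coordinates from one encoder can, in principle, consume relative coordinates from the other without retraining.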