Positional Encoding
Positional encoding methods incorporate information about the order and relative positions of elements in a sequence into neural network architectures, particularly transformers, which are otherwise order-agnostic. Current research focuses on designing more effective positional encodings for diverse data types, including sequences, graphs, and higher-dimensional structures such as cell complexes, often tailoring the encoding scheme to a specific task (e.g., arithmetic, visual grounding, or time series forecasting) or model architecture (e.g., graph transformers, diffusion models). These advances are crucial for improving the performance and generalization of deep learning models across applications ranging from natural language processing and computer vision to scientific simulation and process monitoring.
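As a concrete reference point, the sketch below implements the standard sinusoidal positional encoding from Vaswani et al. (2017), the common baseline that newer schemes like those in the papers listed here extend or replace: PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)). The function name and NumPy implementation are illustrative only and are not drawn from any of the papers below.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of sinusoidal position encodings.

    Assumes d_model is even, as in the original transformer formulation.
    """
    positions = np.arange(seq_len)[:, np.newaxis]           # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]          # (1, d_model / 2)
    angle_rates = 1.0 / np.power(10000.0, dims / d_model)   # geometric frequency schedule
    angles = positions * angle_rates                        # (seq_len, d_model / 2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions use sine
    pe[:, 1::2] = np.cos(angles)   # odd dimensions use cosine
    return pe

# The encoding matrix is added to token embeddings before the first attention layer.
pe = sinusoidal_positional_encoding(seq_len=128, d_model=512)
print(pe.shape)  # (128, 512)
```

Because each dimension pair oscillates at a different fixed frequency, relative offsets between positions correspond to linear transformations of the encoding, which is one reason this scheme generalizes to sequence lengths beyond those seen in training.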
Papers
2D-TPE: Two-Dimensional Positional Encoding Enhances Table Understanding for Large Language Models
Jia-Nan Li, Jian Guan, Wei Wu, Zhengtao Yu, Rui Yan
OrientedFormer: An End-to-End Transformer-Based Oriented Object Detector in Remote Sensing Images
Jiaqi Zhao, Zeyu Ding, Yong Zhou, Hancheng Zhu, Wen-Liang Du, Rui Yao, Abdulmotaleb El Saddik