Long Range
Long-range interactions and perception pose a significant challenge across diverse scientific fields, where the goal is to accurately model and predict phenomena that extend beyond immediate spatial or temporal proximity. Current research focuses on developing novel architectures and algorithms, such as transformers, spiking neural networks, and various message-passing methods, to capture long-range dependencies in data, often incorporating attention mechanisms and memory modules to improve performance. These advances enable more accurate and efficient modeling of long-range effects in applications such as autonomous driving, remote sensing, human-robot interaction, and the analysis of complex systems. Developing robust and efficient long-range models is therefore crucial to advancing these fields.
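To make the attention idea above concrete, here is a minimal, self-contained sketch (not taken from any of the papers listed below) of scaled dot-product self-attention, the mechanism commonly used to capture long-range dependencies: every position attends to every other position in a single step, regardless of distance. The identity Q/K/V projections are a simplifying assumption to keep the example short; real models learn these projections.

```python
import numpy as np

def self_attention(x):
    """x: (seq_len, d_model) token features; returns contextualized features."""
    d = x.shape[-1]
    # Simplifying assumption: identity projections instead of learned Q/K/V weights.
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(d)                    # (seq_len, seq_len) pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over all positions
    return weights @ v                               # each output mixes information from all positions

# Positions 0 and 99 interact directly in one layer, independent of their distance.
x = np.random.randn(100, 16)
out = self_attention(x)
print(out.shape)  # (100, 16)
```

This all-pairs interaction is what distinguishes attention from purely local operators such as convolutions, at the cost of quadratic scaling in sequence length, which motivates the alternative long-range architectures (state space models, message passing, memory modules) mentioned above.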
Papers
MambaOcc: Visual State Space Model for BEV-based Occupancy Prediction with Local Adaptive Reordering
Yonglin Tian, Songlin Bai, Zhiyao Luo, Yutong Wang, Yisheng Lv, Fei-Yue Wang
Long-Range Vision-Based UAV-assisted Localization for Unmanned Surface Vehicles
Waseem Akram, Siyuan Yang, Hailiang Kuang, Xiaoyu He, Muhayy Ud Din, Yihao Dong, Defu Lin, Lakmal Seneviratne, Shaoming He, Irfan Hussain