Cooperative Perception
Cooperative perception leverages communication between multiple agents (e.g., autonomous vehicles, UAVs) to extend environmental awareness beyond the limits of any single agent's sensors, for instance by seeing around occlusions. Current research focuses on robust fusion algorithms, often built on deep learning architectures such as transformers and autoencoders, that integrate heterogeneous sensor data while coping with communication bandwidth constraints, localization errors, and data heterogeneity. The field is central to autonomous driving and other applications that require reliable, long-range perception in complex environments, with direct consequences for safety and traffic efficiency.
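To make feature-level (intermediate) fusion concrete, the sketch below shows attention-weighted fusion of shared bird's-eye-view (BEV) feature maps, a common pattern in this literature. It assumes each cooperating agent has already warped its features into the ego frame; the class name AttentionFusion, the tensor shapes, and the single 1x1-convolution scoring head are illustrative assumptions, not the method of any paper listed here.

```python
# Minimal sketch of feature-level cooperative fusion: each agent shares a
# BEV feature map, and the ego vehicle fuses them with per-cell attention.
# All names, shapes, and the scoring design are illustrative assumptions.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Fuse per-agent BEV features with per-cell attention weights."""

    def __init__(self, channels: int):
        super().__init__()
        # Scores each agent's contribution at every BEV cell.
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (num_agents, C, H, W), already warped into the ego frame.
        logits = self.score(feats)              # (num_agents, 1, H, W)
        weights = torch.softmax(logits, dim=0)  # normalize across agents
        return (weights * feats).sum(dim=0)     # (C, H, W) fused map


if __name__ == "__main__":
    fusion = AttentionFusion(channels=64)
    # Ego vehicle plus two cooperating agents, each sharing a 64x128x128 map.
    shared = torch.randn(3, 64, 128, 128)
    fused = fusion(shared)
    print(fused.shape)  # torch.Size([64, 128, 128])
```

Normalizing the scores across agents at every BEV cell lets the ego vehicle down-weight contributions from poorly localized or occluded viewpoints, while exchanging only compact feature maps rather than raw sensor data keeps communication bandwidth manageable.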
Papers
A Spatial Calibration Method for Robust Cooperative Perception
Zhiying Song, Tenghui Xie, Hailiang Zhang, Jiaxin Liu, Fuxi Wen, Jun Li
Interruption-Aware Cooperative Perception for V2X Communication-Aided Autonomous Driving
Shunli Ren, Zixing Lei, Zi Wang, Mehrdad Dianati, Yafei Wang, Siheng Chen, Wenjun Zhang
Cooperverse: A Mobile-Edge-Cloud Framework for Universal Cooperative Perception with Mixed Connectivity and Automation
Zhengwei Bai, Guoyuan Wu, Matthew J. Barth, Yongkang Liu, Emrah Akin Sisbot, Kentaro Oguchi
Generating Evidential BEV Maps in Continuous Driving Space
Yunshuang Yuan, Hao Cheng, Michael Ying Yang, Monika Sester