3D Perception
3D perception aims to build comprehensive, accurate, and robust representations of the three-dimensional world from sensor data, primarily for applications such as autonomous driving and robotics. Current research emphasizes efficient and robust models, often built on deep learning architectures such as transformers and convolutional neural networks, that handle diverse sensing modalities (camera, LiDAR, radar) and challenging conditions (occlusion, adverse weather). These advances are crucial for improving the safety and reliability of autonomous systems and for enabling richer human-computer interaction across domains.
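To make the multi-modal fusion idea concrete, below is a minimal late-fusion sketch in PyTorch: a small CNN encodes the camera image, a PointNet-style encoder pools per-point LiDAR features, and a shared head regresses a 3D bounding box. All module and parameter names are illustrative assumptions; this is a generic baseline, not the architecture of any paper listed below.

```python
# Hypothetical late-fusion baseline for camera + LiDAR 3D perception.
# Illustrative only; not taken from any of the papers in this section.
import torch
import torch.nn as nn


class ImageEncoder(nn.Module):
    """Small CNN that maps an RGB image to a global feature vector."""

    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # (B, 64, 1, 1)
        )
        self.proj = nn.Linear(64, out_dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feat = self.conv(image).flatten(1)  # (B, 64)
        return self.proj(feat)              # (B, out_dim)


class PointEncoder(nn.Module):
    """PointNet-style encoder: per-point MLP followed by max pooling."""

    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(inplace=True),
            nn.Linear(64, out_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) -> per-point features (B, N, out_dim)
        feat = self.mlp(points)
        # Max pooling over points gives an order-invariant global feature.
        return feat.max(dim=1).values  # (B, out_dim)


class LateFusion3DDetector(nn.Module):
    """Concatenates modality features and regresses one 3D box per sample."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.image_enc = ImageEncoder(feat_dim)
        self.point_enc = PointEncoder(feat_dim)
        # 7 box parameters: center (x, y, z), size (w, l, h), yaw angle.
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 7),
        )

    def forward(self, image: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_enc(image), self.point_enc(points)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    model = LateFusion3DDetector()
    image = torch.randn(2, 3, 224, 224)  # batch of RGB images
    points = torch.randn(2, 1024, 3)     # batch of LiDAR point clouds
    boxes = model(image, points)
    print(boxes.shape)  # torch.Size([2, 7])
```

Real systems typically fuse earlier (e.g., projecting image features onto points or into a shared bird's-eye-view grid) and predict many boxes per scene, but the late-fusion pattern above captures the core idea of combining complementary sensors.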
Papers
Multi-Modal Dataset Acquisition for Photometrically Challenging Objects
HyunJun Jung, Patrick Ruhkamp, Nassir Navab, Benjamin Busam
UniM²AE: Multi-modal Masked Autoencoders with Unified 3D Representation for 3D Perception in Autonomous Driving
Jian Zou, Tianyu Huang, Guanglei Yang, Zhenhua Guo, Tao Luo, Chun-Mei Feng, Wangmeng Zuo
Robo3D: Towards Robust and Reliable 3D Perception against Corruptions
Lingdong Kong, Youquan Liu, Xin Li, Runnan Chen, Wenwei Zhang, Jiawei Ren, Liang Pan, Kai Chen, Ziwei Liu
SynBody: Synthetic Dataset with Layered Human Models for 3D Human Perception and Modeling
Zhitao Yang, Zhongang Cai, Haiyi Mei, Shuai Liu, Zhaoxi Chen, Weiye Xiao, Yukun Wei, Zhongfei Qing, Chen Wei, Bo Dai, Wayne Wu, Chen Qian, Dahua Lin, Ziwei Liu, Lei Yang