3D Perception
3D perception aims to build comprehensive, accurate, and robust representations of the three-dimensional world from sensor data, primarily for applications such as autonomous driving and robotics. Current research emphasizes efficient and reliable models, often built on deep learning architectures such as transformers and convolutional neural networks, that can handle diverse sensor modalities (cameras, LiDAR, radar) and challenging conditions (occlusion, adverse weather). These advances are crucial for improving the safety and dependability of autonomous systems and for enabling more sophisticated human-computer interaction across domains.
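As a rough illustration of the kind of model described above (not the method of any paper listed below), the sketch assumes PyTorch and shows a toy camera-LiDAR fusion detector: each sensor's 2D grid is encoded by a small CNN into feature tokens, the tokens are fused with a transformer encoder, and a head regresses 3D box parameters. All class names, input shapes, and hyperparameters (SensorBranch, FusionDetector, the 7-parameter box encoding) are illustrative assumptions.

```python
# Minimal sketch of camera-LiDAR fusion for 3D detection (illustrative only;
# not the architecture of any specific paper). Assumes PyTorch is installed.
import torch
import torch.nn as nn


class SensorBranch(nn.Module):
    """Small CNN that turns one sensor's 2D grid (RGB image or LiDAR
    bird's-eye-view map) into a sequence of feature tokens."""
    def __init__(self, in_channels: int, dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(x)                  # (B, dim, H/4, W/4)
        return feat.flatten(2).transpose(1, 2)  # (B, H*W/16, dim) tokens


class FusionDetector(nn.Module):
    """Concatenates camera and LiDAR tokens, fuses them with a transformer
    encoder, and regresses per-token 3D box parameters and scores."""
    def __init__(self, dim: int = 128, num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        self.camera_branch = SensorBranch(in_channels=3, dim=dim)  # RGB image
        self.lidar_branch = SensorBranch(in_channels=1, dim=dim)   # BEV occupancy grid
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )
        self.fusion = nn.TransformerEncoder(layer, num_layers=num_layers)
        # 7 box parameters: x, y, z, length, width, height, yaw (one common encoding)
        self.box_head = nn.Linear(dim, 7)
        self.score_head = nn.Linear(dim, 1)

    def forward(self, image: torch.Tensor, lidar_bev: torch.Tensor):
        tokens = torch.cat(
            [self.camera_branch(image), self.lidar_branch(lidar_bev)], dim=1
        )
        fused = self.fusion(tokens)
        return self.box_head(fused), self.score_head(fused)


if __name__ == "__main__":
    model = FusionDetector()
    image = torch.randn(2, 3, 64, 64)      # toy camera input
    lidar_bev = torch.randn(2, 1, 64, 64)  # toy LiDAR bird's-eye-view grid
    boxes, scores = model(image, lidar_bev)
    print(boxes.shape, scores.shape)       # torch.Size([2, 512, 7]) torch.Size([2, 512, 1])
```

In practice, token fusion like this would be paired with positional encodings, sensor calibration, and a detection-specific loss; the sketch only conveys the overall multi-sensor, transformer-based structure mentioned in the summary.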
Papers
Individuation of 3D perceptual units from neurogeometry of binocular cells
Maria Virginia Bolelli, Giovanna Citti, Alessandro Sarti, Steven W. Zucker
Learning 3D Perception from Others' Predictions
Jinsu Yoo, Zhenyang Feng, Tai-Yu Pan, Yihong Sun, Cheng Perng Phoo, Xiangyu Chen, Mark Campbell, Kilian Q. Weinberger, Bharath Hariharan, Wei-Lun Chao