Autonomous Perception

Autonomous perception focuses on enabling machines to understand their environment through sensors, aiming for robust and efficient scene interpretation in tasks such as navigation and manipulation. Current research emphasizes multi-sensor fusion (e.g., of LiDAR and camera data) and leverages deep neural networks, including transformers and U-Nets, for object detection, semantic segmentation, and 3D scene reconstruction, often incorporating uncertainty modeling to improve reliability. The field is central to autonomous vehicles, robotics, and other applications requiring real-time environmental understanding, with ongoing efforts to improve accuracy, efficiency, and robustness under challenging conditions such as adverse weather.
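A basic building block of the LiDAR–camera fusion mentioned above is projecting 3D LiDAR points into the camera image so that the two modalities can be associated pixel-by-pixel. The sketch below is a minimal, hypothetical illustration (the function name, the identity extrinsics, and the pinhole intrinsics are assumptions for the example, not taken from any particular system):

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3D LiDAR points into a camera image.

    points_lidar: (N, 3) points in the LiDAR frame.
    T_cam_lidar:  (4, 4) rigid transform from LiDAR to camera frame.
    K:            (3, 3) camera intrinsic matrix.
    Returns (M, 2) pixel coordinates for points in front of the camera.
    """
    n = points_lidar.shape[0]
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera (positive depth).
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    # Perspective projection: apply intrinsics, then divide by depth.
    uv_h = (K @ pts_cam.T).T
    return uv_h[:, :2] / uv_h[:, 2:3]

# Toy example: identity extrinsics and a simple pinhole camera.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 10.0],   # straight ahead -> principal point
                [1.0, 0.0, 10.0],
                [0.0, 0.0, -5.0]])  # behind the camera -> dropped
uv = project_lidar_to_image(pts, T, K)
# uv -> [[320., 240.], [370., 240.]]
```

Once points carry pixel coordinates, each can be tagged with image features (e.g., semantic labels from a segmentation network), which is the typical entry point for the fusion pipelines described above.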

Papers