Bird's-Eye View
Bird's-Eye-View (BEV) perception aims to create a top-down representation of a scene from multiple camera images, as if viewed from directly overhead, which is crucial for autonomous driving and robotics. Current research focuses on improving the accuracy and robustness of BEV generation using deep learning architectures such as transformers and attention mechanisms, often incorporating sensor fusion (e.g., lidar and camera) and addressing challenges like occlusion and varying camera viewpoints. This work is significant because accurate and reliable BEV representations are essential for safe and efficient navigation in autonomous systems, impacting the development of self-driving cars and other robotic applications.
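To make the transformer/attention idea concrete, below is a minimal sketch (not taken from the papers listed here) of a query-based BEV module: a grid of learnable BEV queries cross-attends to flattened multi-camera image features to produce a top-down feature map. All class names, shapes, and hyperparameters are illustrative assumptions, not a specific published architecture.

```python
# Illustrative sketch: learnable BEV queries attend to surround-camera features
# via cross-attention. Shapes and hyperparameters are assumptions for demo only.
import torch
import torch.nn as nn


class BEVCrossAttention(nn.Module):
    def __init__(self, embed_dim=256, num_heads=8, bev_h=50, bev_w=50):
        super().__init__()
        # One learnable query per cell of the top-down BEV grid (bev_h x bev_w).
        self.bev_queries = nn.Parameter(torch.randn(bev_h * bev_w, embed_dim))
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.bev_h, self.bev_w = bev_h, bev_w

    def forward(self, cam_feats):
        # cam_feats: (B, num_cams, C, H, W) features from a shared image backbone.
        b, n, c, h, w = cam_feats.shape
        # Flatten all cameras' spatial locations into one key/value sequence.
        kv = cam_feats.permute(0, 1, 3, 4, 2).reshape(b, n * h * w, c)
        q = self.bev_queries.unsqueeze(0).expand(b, -1, -1)
        bev, _ = self.attn(q, kv, kv)                      # (B, bev_h*bev_w, C)
        return bev.transpose(1, 2).reshape(b, c, self.bev_h, self.bev_w)


if __name__ == "__main__":
    # Six surround cameras with 256-channel feature maps of size 16x44 (made up).
    feats = torch.randn(2, 6, 256, 16, 44)
    bev = BEVCrossAttention()(feats)
    print(bev.shape)  # torch.Size([2, 256, 50, 50])
```

In practice, published methods typically add camera-geometry priors (intrinsics/extrinsics for deformable or geometry-guided attention), temporal aggregation, and task heads (segmentation, detection, occupancy) on top of such a BEV feature grid; the sketch above only shows the core view-transformation step.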
Papers
Estimation of Appearance and Occupancy Information in Birds Eye View from Surround Monocular Images
Sarthak Sharma, Unnikrishnan R. Nair, Udit Singh Parihar, Midhun Menon S, Srikanth Vidapanakal
Calibrated Perception Uncertainty Across Objects and Regions in Bird's-Eye-View
Markus Kängsepp, Meelis Kull