Bird's Eye View
Bird's-Eye-View (BEV) perception aims to construct a top-down representation of a scene from multiple camera images, as if viewed from directly above; such representations are central to autonomous driving and robotics. Current research focuses on improving the accuracy and robustness of BEV generation with deep learning architectures such as transformers and attention mechanisms, often incorporating sensor fusion (e.g., lidar and camera) and addressing challenges like occlusion and varying camera viewpoints. This work matters because accurate, reliable BEV representations are essential for safe and efficient navigation, directly affecting the development of self-driving cars and other robotic systems.
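As a minimal illustration of the camera-to-BEV mapping these papers build on, the classical baseline is inverse perspective mapping: assume a flat ground plane, project each BEV grid cell into the image with the pinhole model, and sample the pixel there. The sketch below is a simplified single-camera version with assumed intrinsics `K` and extrinsics `R`, `t` (all names are illustrative, not from any of the listed papers); the learned methods above replace this flat-ground assumption with depth estimation or attention.

```python
import numpy as np

def bev_from_camera(image, K, R, t, bev_size=(200, 200), meters=(20.0, 20.0)):
    """Inverse perspective mapping: fill a flat-ground BEV grid from one camera.

    Each BEV cell (x forward, y left, z=0 in world coordinates) is projected
    into the image via p ~ K (R X + t); cells that land inside the image copy
    that pixel's value. Hypothetical interface for illustration only.
    """
    H, W = image.shape[:2]
    bh, bw = bev_size
    # Ground-plane coordinates of every BEV cell (z = 0).
    xs = np.linspace(0.1, meters[0], bh)                  # forward distance (m)
    ys = np.linspace(-meters[1] / 2, meters[1] / 2, bw)   # lateral offset (m)
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    pts = np.stack([gx, gy, np.zeros_like(gx)], axis=-1).reshape(-1, 3)
    # World -> camera frame, then camera -> homogeneous pixel coordinates.
    cam = pts @ R.T + t
    uvw = cam @ K.T
    valid = uvw[:, 2] > 1e-6                              # in front of camera
    u = np.zeros(len(pts)); v = np.zeros(len(pts))
    u[valid] = uvw[valid, 0] / uvw[valid, 2]
    v[valid] = uvw[valid, 1] / uvw[valid, 2]
    inside = valid & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    bev = np.zeros((bh * bw,) + image.shape[2:], dtype=image.dtype)
    bev[inside] = image[v[inside].astype(int), u[inside].astype(int)]
    return bev.reshape((bh, bw) + image.shape[2:])

# Example setup: forward-facing camera 1.5 m above the ground.
K = np.array([[100.0, 0.0, 160.0],
              [0.0, 100.0, 120.0],
              [0.0, 0.0, 1.0]])
# World (x fwd, y left, z up) -> camera (x right, y down, z fwd).
R = np.array([[0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0],
              [1.0, 0.0, 0.0]])
t = np.array([0.0, 1.5, 0.0])
bev = bev_from_camera(np.ones((240, 320)), K, R, t)
```

Multi-camera BEV systems run this projection (or a learned variant of it) per camera and fuse the results; the flat-ground assumption is exactly what breaks under occlusion and non-planar scenes, motivating the learned approaches listed below.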
Papers
UAP-BEV: Uncertainty Aware Planning using Bird's Eye View generated from Surround Monocular Images
Vikrant Dewangan, Basant Sharma, Tushar Choudhary, Sarthak Sharma, Aakash Aanegola, Arun K. Singh, K. Madhava Krishna
An Efficient Transformer for Simultaneous Learning of BEV and Lane Representations in 3D Lane Detection
Ziye Chen, Kate Smith-Miles, Bo Du, Guoqi Qian, Mingming Gong
TiG-BEV: Multi-view BEV 3D Object Detection via Target Inner-Geometry Learning
Peixiang Huang, Li Liu, Renrui Zhang, Song Zhang, Xinli Xu, Baichao Wang, Guoyi Liu