Adverse Weather
Adverse weather significantly degrades the performance of computer vision systems used in autonomous driving and related applications. Current research focuses on building robust models, often based on transformer networks, diffusion models, and contrastive learning, to improve image and LiDAR point cloud processing under challenging conditions such as rain, fog, snow, and low light. A recurring strategy, reflected in the papers below, is weather-oriented data augmentation and representation learning that mitigates the impact of adverse weather on object detection, semantic segmentation, crowd counting, and other perception tasks. These advances directly affect the safety and reliability of autonomous vehicles and of any system that depends on robust environmental perception.
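Several of the papers listed below (e.g., A-BDD and the LiDAR augmentation work) center on weather-style data augmentation. As a concrete illustration of the general idea, the following minimal sketch synthesizes fog on a clean training image using the standard atmospheric scattering model I_fog = I * t + A * (1 - t) with transmission t = exp(-beta * depth). The function name, the depth-gradient proxy, and the parameter ranges are illustrative assumptions, not code from any of the listed papers.

import numpy as np
from typing import Optional

def add_synthetic_fog(image: np.ndarray,
                      beta: float = 1.5,
                      airlight: float = 0.9,
                      depth: Optional[np.ndarray] = None) -> np.ndarray:
    """Blend an RGB image toward a bright 'airlight' color using the
    atmospheric scattering model I_fog = I * t + A * (1 - t), where the
    transmission t = exp(-beta * depth) decays with scene depth.

    `image` is expected as float32 in [0, 1] with shape (H, W, 3).
    If no depth map is available, a vertical gradient serves as a crude
    proxy (scenes are typically farther away toward the top of a road image).
    """
    h, w = image.shape[:2]
    if depth is None:
        # Hypothetical depth proxy: 1.0 at the top row, 0.0 at the bottom.
        depth = np.linspace(1.0, 0.0, h, dtype=np.float32)[:, None]
        depth = np.broadcast_to(depth, (h, w))
    t = np.exp(-beta * depth)[..., None]       # per-pixel transmission map
    fogged = image * t + airlight * (1.0 - t)  # scattering model blend
    return np.clip(fogged, 0.0, 1.0)

# Usage: fog a (stand-in) frame with a randomly sampled fog density,
# as one might do inside a training-time augmentation pipeline.
rng = np.random.default_rng(0)
clean = rng.random((256, 512, 3), dtype=np.float32)  # placeholder for a real frame
foggy = add_synthetic_fog(clean, beta=rng.uniform(0.5, 2.5))

Sampling beta per image, as in the usage example, exposes the model to a range of fog densities rather than a single fixed corruption, which is the usual motivation for augmentation-based robustness training.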
Papers
A-BDD: Leveraging Data Augmentations for Safe Autonomous Driving in Adverse Weather and Lighting
Felix Assion, Florens Gressner, Nitin Augustine, Jona Klemenc, Ahmed Hammam, Alexandre Krattinger, Holger Trittenbach, Sascha Riemer
Boosting Adverse Weather Crowd Counting via Multi-queue Contrastive Learning
Tianhang Pan, Xiuyi Jia
Robust ADAS: Enhancing Robustness of Machine Learning-based Advanced Driver Assistance Systems for Adverse Weather
Muhammad Zaeem Shahzad, Muhammad Abdullah Hanif, Muhammad Shafique
Rethinking Data Augmentation for Robust LiDAR Semantic Segmentation in Adverse Weather
Junsung Park, Kyungmin Kim, Hyunjung Shim