Perception Module
Perception modules are core components of autonomous systems: they interpret raw sensor data (visual, audio, depth, etc.) and supply reliable information for downstream decision-making. Current research focuses on improving the robustness and generalization of these modules, typically with deep learning architectures such as transformers and convolutional neural networks, and with techniques such as self-supervised learning and multi-modal fusion. This work is central to autonomous driving, robotics, and other applications that require real-time environmental understanding, particularly in addressing challenges such as perception errors and sim-to-real transfer.
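To make the multi-modal fusion mentioned above concrete, the sketch below fuses RGB and depth inputs with two small convolutional encoders whose pooled features are concatenated before a shared head (so-called late fusion). This is a minimal illustrative example, not the method of any paper listed here; the class name, layer sizes, and class count are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class LateFusionPerception(nn.Module):
    """Minimal sketch of late multi-modal fusion: separate CNN encoders
    for RGB and depth, features concatenated before a shared head.
    All layer sizes and the class count are illustrative placeholders."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Small convolutional encoder for 3-channel RGB input.
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 64, 1, 1)
        )
        # Parallel encoder for 1-channel depth input.
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # The fused head operates on the concatenated modality features.
        self.head = nn.Linear(64 + 64, num_classes)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        f_rgb = self.rgb_encoder(rgb).flatten(1)        # (B, 64)
        f_depth = self.depth_encoder(depth).flatten(1)  # (B, 64)
        fused = torch.cat([f_rgb, f_depth], dim=1)      # (B, 128)
        return self.head(fused)

# Example: a batch of 4 RGB-D frames at 64x64 resolution.
model = LateFusionPerception(num_classes=10)
rgb = torch.randn(4, 3, 64, 64)
depth = torch.randn(4, 1, 64, 64)
logits = model(rgb, depth)
print(logits.shape)  # torch.Size([4, 10])
```

Late fusion keeps each sensor's encoder independent, which makes it easy to swap modalities or handle missing ones; early fusion (stacking RGB and depth as a 4-channel input) is the common alternative when the modalities are spatially aligned.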
Papers
Learning A Simulation-based Visual Policy for Real-world Peg In Unseen Holes
Liang Xie, Hongxiang Yu, Kechun Xu, Tong Yang, Minhang Wang, Haojian Lu, Rong Xiong, Yue Wang
Approaches and Challenges in Robotic Perception for Table-top Rearrangement and Planning
Aditya Agarwal, Bipasha Sen, Shankara Narayanan, Vishal Reddy Mandadi, Brojeshwar Bhowmick, K Madhava Krishna