Robot Perception
Robot perception research aims to equip robots with robust and efficient ways to understand their environment through various sensor modalities, enabling safe and effective interaction. Current efforts focus on improving the accuracy and speed of perception using techniques such as Bayesian inference frameworks, deep learning models (including transformer-based architectures), and multimodal data fusion that combines visual, acoustic, tactile, and inertial data. These advances are crucial for more sophisticated robotic applications in fields such as manufacturing, healthcare, and agriculture, enhancing robots' ability to navigate, manipulate objects, and collaborate with humans.
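To make the multimodal-fusion idea concrete, below is a minimal sketch of late fusion of visual and tactile features in PyTorch. It is not drawn from any of the papers listed here; the module name `VisuoTactileFusion`, the feature dimensions, and the grasp-stability prediction task are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VisuoTactileFusion(nn.Module):
    """Hypothetical late-fusion module: encodes visual and tactile
    inputs separately, then fuses the embeddings for a downstream
    prediction (here, a grasp-stability logit). The architecture and
    all dimensions are illustrative, not from any cited paper."""

    def __init__(self, image_dim=2048, tactile_dim=64, hidden_dim=256):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.visual_proj = nn.Linear(image_dim, hidden_dim)
        self.tactile_proj = nn.Linear(tactile_dim, hidden_dim)
        # Fuse by concatenation, then apply a small MLP head.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # grasp-stability logit (assumed task)
        )

    def forward(self, visual_feat, tactile_feat):
        v = torch.relu(self.visual_proj(visual_feat))
        t = torch.relu(self.tactile_proj(tactile_feat))
        return self.head(torch.cat([v, t], dim=-1))

# Usage with random stand-in features (batch of 4):
model = VisuoTactileFusion()
visual = torch.randn(4, 2048)   # e.g. pooled CNN features of a camera image
tactile = torch.randn(4, 64)    # e.g. flattened tactile-sensor reading
logits = model(visual, tactile)
print(logits.shape)  # torch.Size([4, 1])
```

Late fusion like this keeps each modality's encoder independent, so one sensor stream can be retrained or dropped without touching the others; attention-based or early-fusion designs trade that modularity for richer cross-modal interaction.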
Papers
Flat'n'Fold: A Diverse Multi-Modal Dataset for Garment Perception and Manipulation
Lipeng Zhuang, Shiyu Fan, Yingdong Ru, Florent Audonnet, Paul Henderson, Gerardo Aragon-Camarasa
Robotic-CLIP: Fine-tuning CLIP on Action Data for Robotic Applications
Nghia Nguyen, Minh Nhat Vu, Tung D. Ta, Baoru Huang, Thieu Vo, Ngan Le, Anh Nguyen
A Bayesian framework for active object recognition, pose estimation and shape transfer learning through touch
Haodong Zheng, Andrei Jalba, Raymond H. Cuijpers, Wijnand IJsselsteijn, Sanne Schoenmakers
Ultrafast vision perception by neuromorphic optical flow
Shengbo Wang, Shuo Gao, Tongming Pu, Liangbing Zhao, Arokia Nathan