Learning Visuotactile Skills
Research on learning visuotactile skills aims to enable robots to perform dexterous manipulation by integrating visual and tactile sensing. Current work emphasizes robust fusion algorithms, often built on transformer networks or differentiable filters, that combine these multimodal inputs for tasks such as object pose estimation, in-hand manipulation, and object property inference (see the sketches below). Such methods are central to advancing robotic dexterity, allowing robots to operate effectively in complex, unstructured environments, with applications in manufacturing, surgery, and assistive robotics. Developing affordable, accessible hardware platforms for collecting visuotactile data and training policies is another significant focus.
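To make the transformer-based fusion concrete, below is a minimal sketch of how visual and tactile features can be projected into a shared token space and fused with self-attention before regressing an object pose. The module names, feature dimensions, and the 6-DoF pose head are illustrative assumptions, not a specific published architecture.

```python
# Sketch: transformer fusion of visual and tactile features for pose
# estimation. Dimensions (512-d vision, 64-d tactile) are assumptions.
import torch
import torch.nn as nn


class VisuotactileFusion(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # Project each modality's features into a shared token space.
        self.vision_proj = nn.Linear(512, d_model)   # e.g. pooled CNN features
        self.tactile_proj = nn.Linear(64, d_model)   # e.g. taxel pressure vector
        # Learned embeddings let attention distinguish the two modalities.
        self.modality_emb = nn.Parameter(torch.zeros(2, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Regress a 6-DoF pose: 3D translation + axis-angle rotation.
        self.pose_head = nn.Linear(d_model, 6)

    def forward(self, vision_feat: torch.Tensor, tactile_feat: torch.Tensor):
        v = self.vision_proj(vision_feat) + self.modality_emb[0]
        t = self.tactile_proj(tactile_feat) + self.modality_emb[1]
        tokens = torch.stack([v, t], dim=1)       # (B, 2, d_model)
        fused = self.encoder(tokens).mean(dim=1)  # pool across modalities
        return self.pose_head(fused)


model = VisuotactileFusion()
pose = model(torch.randn(8, 512), torch.randn(8, 64))
print(pose.shape)  # torch.Size([8, 6])
```

Treating each modality as a token lets the attention layers weight vision against touch per example, which matters when one sensor is occluded or uninformative.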
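The differentiable-filtering line of work can be sketched similarly: a Kalman-style predict/update step written in a differentiable framework, so that learned observation and noise models can be trained end to end from visuotactile features. The linear dynamics, the identity observation model, and both small networks below are illustrative assumptions rather than any particular published filter.

```python
# Sketch: one differentiable Kalman filter step (H = I) with a learned
# pseudo-measurement and learned diagonal measurement noise.
import torch
import torch.nn as nn


class DifferentiableKalmanFilter(nn.Module):
    def __init__(self, state_dim: int = 6, obs_feat_dim: int = 128):
        super().__init__()
        # Learned observation model: fused visuotactile features -> state-space
        # pseudo-measurement plus a positive per-dimension noise estimate.
        self.obs_net = nn.Linear(obs_feat_dim, state_dim)
        self.noise_net = nn.Sequential(
            nn.Linear(obs_feat_dim, state_dim), nn.Softplus()
        )
        self.A = nn.Parameter(torch.eye(state_dim))             # learned dynamics
        self.register_buffer("Q", 1e-3 * torch.eye(state_dim))  # process noise

    def forward(self, mu, Sigma, obs_feat):
        # Predict: propagate the belief through the (learned) linear dynamics.
        mu_pred = self.A @ mu
        Sigma_pred = self.A @ Sigma @ self.A.T + self.Q
        # Update: standard Kalman gain, but z and R come from networks, so
        # gradients flow through the whole filter during training.
        z = self.obs_net(obs_feat)
        R = torch.diag(self.noise_net(obs_feat))
        K = Sigma_pred @ torch.linalg.inv(Sigma_pred + R)
        mu_new = mu_pred + K @ (z - mu_pred)
        Sigma_new = (torch.eye(len(mu)) - K) @ Sigma_pred
        return mu_new, Sigma_new


f = DifferentiableKalmanFilter()
mu, Sigma = torch.zeros(6), torch.eye(6)
mu, Sigma = f(mu, Sigma, torch.randn(128))
```

Because every operation is differentiable, a pose-tracking loss on the filtered state trains the observation and noise networks jointly, letting the filter learn when to trust tactile readings over vision and vice versa.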