Tactile Input
Tactile input research aims to give robots a sense of touch so they can interact with their environment more effectively, particularly in manipulation tasks where vision alone is insufficient. Current work focuses on integrating tactile data with other modalities such as vision and language, applying reinforcement learning, graph convolutional networks, and deep learning models (e.g., DeepSDF) to process sensor data and predict object properties or actions. This work is significant because it lets robots perform contact-rich tasks and manipulate objects with varying properties, improving dexterity and robustness in areas such as manufacturing, healthcare, and domestic robotics.
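The TouchSDF entry below builds on DeepSDF, which represents a shape as a learned function mapping a per-object latent code and a 3D query point to a signed distance. As a rough illustration of that idea (not the paper's actual architecture), here is a minimal sketch of such a decoder, assuming PyTorch; the layer sizes and latent dimension are placeholder values.

```python
import torch
import torch.nn as nn

class DeepSDFDecoder(nn.Module):
    """DeepSDF-style MLP: (latent code, 3D point) -> signed distance."""

    def __init__(self, latent_dim: int = 256, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),  # signed distance clamped to (-1, 1)
        )

    def forward(self, latent: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # latent: (N, latent_dim) object code repeated per query,
        # points: (N, 3) query coordinates in the normalized object frame
        return self.net(torch.cat([latent, points], dim=-1))

# Example usage: query predicted signed distances at a few (hypothetical) contact points.
decoder = DeepSDFDecoder()
code = torch.randn(8, 256)          # one object's latent code, tiled over 8 queries
queries = torch.rand(8, 3) * 2 - 1  # 8 query points in [-1, 1]^3
sdf = decoder(code, queries)        # (8, 1) predicted signed distances
```

In a tactile setting such as TouchSDF, the query points and supervision would come from touch-derived surface estimates rather than random samples; the sketch only shows the decoder's input/output structure.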
Papers
A Study of Human-Robot Handover through Human-Human Object Transfer
Charlotte Morissette, Bobak H. Baghi, Francois R. Hogan, Gregory Dudek
TouchSDF: A DeepSDF Approach for 3D Shape Reconstruction using Vision-Based Tactile Sensing
Mauro Comi, Yijiong Lin, Alex Church, Alessio Tonioni, Laurence Aitchison, Nathan F. Lepora