Vision-Based Tactile Sensing
Vision-based tactile sensing integrates visual and tactile data to improve robotic manipulation and object understanding. Current research focuses on developing deep learning models, including convolutional neural networks and graph neural networks, to reconstruct 3D shapes from tactile sensor images, track contact points, and estimate properties like liquid volume. This interdisciplinary field is advancing robotic dexterity and perception, with applications ranging from precise object manipulation to medical diagnostics, by providing richer, more robust sensory information than vision alone.
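As a rough illustration of the kind of processing described above, the sketch below extracts a compact feature vector from a synthetic tactile image using hand-crafted edge filters as a stand-in for learned CNN kernels. All names (`conv2d`, `tactile_features`) and the Gaussian "contact bump" are hypothetical; real systems use trained deep networks on images from actual vision-based tactile sensors.

```python
import numpy as np

def conv2d(img, kernel):
    # Valid-mode 2D cross-correlation over a single-channel image.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def tactile_features(img):
    # Edge-like filters approximating the first layer of a CNN.
    kx = np.array([[-1.0, 0.0, 1.0]] * 3)
    ky = kx.T
    gx, gy = conv2d(img, kx), conv2d(img, ky)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    # Global pooling collapses the response maps into a feature vector
    # that a downstream head could map to contact depth or shape.
    return np.array([mag.mean(), mag.max(), img.mean()])

# Synthetic tactile image: a Gaussian bump simulating a single contact.
yy, xx = np.mgrid[0:32, 0:32]
press = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 40.0)
feats = tactile_features(press)
print(feats.shape)  # (3,)
```

In a trained model the fixed kernels would be replaced by learned convolutional layers, and the pooled features would feed a regression or graph-network head for 3D reconstruction or contact tracking.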