Vision-Based Tactile Sensing

Vision-based tactile sensing integrates visual and tactile data to improve robotic manipulation and object understanding. Current research focuses on developing deep learning models, including convolutional neural networks and graph neural networks, to reconstruct 3D shapes from tactile sensor images, track contact points, and estimate properties such as liquid volume. By providing richer, more robust sensory information than vision alone, this interdisciplinary field is advancing robotic dexterity and perception, with applications ranging from precise object manipulation to medical diagnostics.
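To make the shape-reconstruction idea concrete: a common pipeline for camera-based tactile sensors (e.g. GelSight-style sensors) first maps pixel intensities to surface gradients via a calibrated or learned per-pixel lookup, then integrates those gradients into a depth map of the contact geometry. The sketch below shows only the integration step, using the classic Frankot-Chellappa frequency-domain least-squares solution; it assumes periodic boundary conditions, and the sensor-specific intensity-to-gradient mapping is omitted.

```python
import numpy as np

def integrate_gradients(gx, gy):
    """Recover a depth map from surface-gradient fields via the
    Frankot-Chellappa frequency-domain least-squares solution.
    Assumes periodic boundaries (implied by the FFT)."""
    h, w = gx.shape
    # Angular frequencies (radians per sample) along each axis.
    wx = 2 * np.pi * np.fft.fftfreq(w).reshape(1, w)
    wy = 2 * np.pi * np.fft.fftfreq(h).reshape(h, 1)
    Gx, Gy = np.fft.fft2(gx), np.fft.fft2(gy)
    denom = wx**2 + wy**2
    denom[0, 0] = 1.0  # avoid division by zero at the DC term
    Z = -1j * (wx * Gx + wy * Gy) / denom
    Z[0, 0] = 0.0      # absolute depth offset is unobservable
    return np.real(np.fft.ifft2(Z))
```

In practice the gradient fields fed to this solver come from the sensor's photometric calibration or from a learned model (e.g. a CNN predicting per-pixel gradients from the tactile image), which is where the deep learning work described above enters the pipeline.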

Papers