Paper ID: 2408.15480

Feelit: Combining Compliant Shape Displays with Vision-Based Tactile Sensors for Real-Time Teletaction

Oscar Yu, Yu She

Teletaction, the transmission of tactile feedback or touch, is a crucial aspect of teleoperation. High-quality teletaction feedback allows users to remotely manipulate objects and increases the quality of the human-machine interface between the operator and the robot, making complex manipulation tasks possible. Advances in teletaction for teleoperation, however, have yet to make full use of the high-resolution 3D data provided by modern vision-based tactile sensors. Existing teletaction solutions fall short in one or more areas of form or function, such as fidelity or hardware footprint. In this paper, we showcase our design for a low-cost teletaction device that uses real-time, high-resolution tactile information from vision-based tactile sensors through both physical 3D surface reconstruction and shear displacement. We present our device, the Feelit, which combines a pin-based shape display with compliant mechanisms to accomplish this task. The pin-based shape display uses an array of 24 servomotors with miniature Bowden cables, giving the device a resolution of 6x4 pins in a 15x10 mm display footprint. Each pin can actuate up to 3 mm in 200 ms while providing 80 N of force and 1.5 µm of depth resolution. Shear displacement and rotation are achieved with a compliant mechanism design, allowing a minimum of 1 mm of lateral displacement and 10 degrees of rotation. Real-time 3D tactile reconstruction is achieved with a vision-based tactile sensor, the GelSight [1], together with an algorithm that samples the depth data and tracks markers to generate actuator commands. Through a series of experiments including shape recognition and relative weight identification, we show that our device has the potential to expand teletaction capabilities in the teleoperation space.
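The abstract describes an algorithm that samples the sensor's depth map and tracks markers to generate actuator commands. Below is a minimal sketch of how such a pipeline could look, assuming a dense depth image (in mm of indentation) and 2D marker positions as inputs; the function names, image shape, and calibration constant MM_PER_PIXEL are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): downsample a GelSight-style depth
# image to a 6x4 pin display and estimate shear/rotation from tracked markers.
import numpy as np

PIN_ROWS, PIN_COLS = 4, 6        # 6x4 pin array in a 15x10 mm footprint
MAX_PIN_TRAVEL_MM = 3.0          # each pin actuates up to 3 mm

def depth_to_pin_commands(depth_mm: np.ndarray) -> np.ndarray:
    """Average-pool a dense depth map (mm of indentation) into per-pin extensions."""
    h, w = depth_mm.shape
    # Crop so the image tiles evenly, then pool one cell per pin.
    cells = depth_mm[: h - h % PIN_ROWS, : w - w % PIN_COLS]
    cells = cells.reshape(PIN_ROWS, cells.shape[0] // PIN_ROWS,
                          PIN_COLS, cells.shape[1] // PIN_COLS)
    pin_depths = cells.mean(axis=(1, 3))
    # Clip to the pins' mechanical range before sending to the servomotors.
    return np.clip(pin_depths, 0.0, MAX_PIN_TRAVEL_MM)

def marker_flow_to_shear(markers_prev: np.ndarray, markers_curr: np.ndarray):
    """Estimate lateral shear (mm) and rotation (deg) from tracked marker positions."""
    disp = markers_curr - markers_prev                 # (N, 2) pixel displacements
    shear_px = disp.mean(axis=0)                       # mean lateral translation
    # Rotation about the marker centroid (2D Procrustes-style estimate).
    c_prev = markers_prev - markers_prev.mean(axis=0)
    c_curr = markers_curr - markers_curr.mean(axis=0)
    angle_deg = np.degrees(np.arctan2(
        np.sum(c_prev[:, 0] * c_curr[:, 1] - c_prev[:, 1] * c_curr[:, 0]),
        np.sum((c_prev * c_curr).sum(axis=1))))
    MM_PER_PIXEL = 0.05                                # assumed sensor calibration
    return shear_px * MM_PER_PIXEL, angle_deg

if __name__ == "__main__":
    depth = np.random.rand(240, 320) * 3.0             # placeholder indentation map
    print(depth_to_pin_commands(depth))                # 4x6 array of pin extensions
```

In a real-time loop, the pin commands would be streamed to the 24 servomotors each frame, while the shear and rotation estimates would drive the compliant lateral/rotational stage.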

Submitted: Aug 28, 2024