Paper ID: 2409.09870 • Published Sep 15, 2024
TransForce: Transferable Force Prediction for Vision-based Tactile Sensors with Sequential Image Translation
Zhuo Chen, Ni Ou, Xuyang Zhang, Shan Luo
Vision-based tactile sensors (VBTSs) provide high-resolution tactile images
crucial for robot in-hand manipulation. However, force sensing in VBTSs
remains underutilized because acquiring paired tactile images and force labels
is costly and time-intensive. In this study, we introduce TransForce, a
transferable force prediction model that leverages image-force paired data
collected on existing sensors to predict forces for new sensors with different
illumination colors and marker patterns, with improved accuracy especially in
the shear direction. Our model translates tactile images from the source
domain to the target domain, ensuring that the generated tactile images
reflect the illumination colors and marker patterns of the new sensors while
faithfully preserving the elastomer deformation observed in the existing
sensors, which benefits force prediction for the new sensors. A recurrent
force prediction model trained on the generated sequential tactile images and
the existing force labels then estimates forces for the new sensors with
higher accuracy, achieving the lowest average errors of 0.69 N (5.8% of the
full working range) along the x-axis, 0.70 N (5.8%) along the y-axis, and
1.11 N (6.9%) along the z-axis, compared with models trained on single images.
The experimental results also reveal that the pure-marker modality is more
helpful than the RGB modality for improving force accuracy in the shear
direction, while the RGB modality performs better in the normal direction.
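The recurrent force predictor described above can be sketched as a per-frame convolutional encoder feeding an LSTM, which regresses a 3-axis force from a sequence of tactile images. The layer sizes and the specific CNN+LSTM layout below are illustrative assumptions, not the paper's reported architecture (a minimal PyTorch sketch):

```python
import torch
import torch.nn as nn

class RecurrentForcePredictor(nn.Module):
    """Hypothetical CNN+LSTM regressor: maps a sequence of tactile
    images to a 3-axis force estimate (Fx, Fy, Fz). The encoder and
    hidden sizes are placeholder choices for illustration."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        # Small per-frame conv encoder (assumed, not the paper's network).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 32 * 4 * 4 = 512
        )
        # LSTM aggregates deformation over the image sequence.
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # Fx, Fy, Fz

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time, 3, H, W) sequential tactile images
        b, t = seq.shape[:2]
        feats = self.encoder(seq.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])  # force from the last time step

model = RecurrentForcePredictor()
forces = model(torch.randn(2, 5, 3, 64, 64))  # 2 sequences of 5 frames
print(tuple(forces.shape))  # (2, 3)
```

Sequential input is the key design point: shear forces manifest as marker displacement over time, which a single-image model cannot observe, consistent with the reported accuracy gain over single-image training.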