Paper ID: 2411.07342
Learning Dynamic Tasks on a Large-scale Soft Robot in a Handful of Trials
Sicelukwanda Zwane (1), Daniel Cheney (2), Curtis C. Johnson (2), Yicheng Luo (1), Yasemin Bekiroglu (1 and 3), Marc D. Killpack (2), Marc Peter Deisenroth (1) ((1) UCL Centre for Artificial Intelligence, University College London, UK, (2) Department of Mechanical Engineering, Brigham Young University, USA, (3) Department of Electrical Engineering, Chalmers University of Technology, Sweden)
Soft robots offer more flexibility, compliance, and adaptability than traditional rigid robots. They are also typically lighter and cheaper to manufacture. However, their use in real-world applications is limited by modeling challenges and difficulties in integrating effective proprioceptive sensors. Large-scale soft robots ($\approx$ two meters in length) are even harder to model because of their increased inertia and the related effects of gravity. Common efforts to ease these modeling difficulties, such as assuming simplified kinematic or dynamics models, also limit the general capabilities of soft robots and are not suitable for tasks requiring fast, dynamic motion such as throwing and hammering. To overcome these challenges, we propose a data-efficient Bayesian optimization-based approach for learning control policies for dynamic tasks on a large-scale soft robot. Our approach optimizes the task objective function directly from commanded pressures, without requiring approximate kinematics or dynamics as an intermediate step. We demonstrate the effectiveness of our approach through both simulated and real-world experiments.
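The abstract describes optimizing a task objective directly over commanded pressures with Bayesian optimization. The following is a minimal sketch of that idea, not the authors' implementation: it assumes a hypothetical `run_throw_trial` rollout function, an assumed number of pressure chambers, and an assumed pressure range, and uses scikit-optimize's GP-based optimizer as a stand-in for whatever surrogate and acquisition function the paper actually employs.

```python
# Minimal sketch (assumptions labeled below), not the paper's code: GP-based
# Bayesian optimization over commanded chamber pressures for a dynamic task.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

N_CHAMBERS = 4      # assumed number of pressure-controlled chambers
P_MAX_KPA = 200.0   # assumed commanded-pressure limit

# Policy parameters: one commanded pressure per chamber. A richer policy could
# parameterize an entire pressure trajectory instead of a single set-point.
search_space = [Real(0.0, P_MAX_KPA, name=f"p{i}") for i in range(N_CHAMBERS)]

def run_throw_trial(pressures):
    """Hypothetical stand-in for one robot or simulator trial.

    Executes the commanded pressures and returns a scalar task cost
    (lower is better), e.g. negative throw distance.
    """
    pressures = np.asarray(pressures)
    # Placeholder cost; replace with a real rollout on hardware or in simulation.
    return -float(np.sum(np.sin(pressures / P_MAX_KPA * np.pi)))

# Bayesian optimization loop: only a handful of trials are evaluated, which is
# the data-efficiency argument the abstract makes.
result = gp_minimize(
    run_throw_trial,
    search_space,
    n_calls=15,          # total number of trials
    n_initial_points=5,  # random trials before the GP surrogate takes over
    random_state=0,
)
print("Best commanded pressures:", result.x)
print("Best cost:", result.fun)
```

Because the objective is evaluated directly on trial outcomes, no intermediate kinematic or dynamics model of the soft robot is needed, matching the approach stated in the abstract.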
Submitted: Nov 11, 2024