Robot Action
Robot action research focuses on enabling robots to perform complex tasks in human environments, prioritizing behaviors that are safe, socially appropriate, and generalizable. Current efforts center on developing robust control policies with techniques such as reinforcement learning, imitation learning, and large language models (LLMs) that interpret natural language instructions and visual data; many approaches incorporate hierarchical architectures and modular designs for improved efficiency and adaptability. This research is crucial for advancing human-robot collaboration, improving robot autonomy in unstructured settings, and building more reliable and versatile robotic systems for a wide range of applications.
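To make the overview concrete, the sketch below illustrates one common pattern it describes: a language-conditioned visuomotor policy that fuses an instruction encoding with a visual observation and outputs a continuous robot action, trainable by imitation learning (behavior cloning). It is a minimal illustration, not the method of any listed paper; the toy tokenizer, module sizes, and the 7-dimensional action space are assumptions chosen for brevity.

```python
# Minimal sketch (assumed architecture, not any specific paper's method):
# a vision encoder and an instruction encoder feed an MLP action head.
import torch
import torch.nn as nn


class LanguageConditionedPolicy(nn.Module):
    def __init__(self, vocab_size=1000, text_dim=64, img_dim=128, action_dim=7):
        super().__init__()
        # Toy instruction encoder: token embeddings, mean-pooled into one vector.
        self.token_embed = nn.Embedding(vocab_size, text_dim)
        # Small CNN mapping an RGB observation to a feature vector.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, img_dim), nn.ReLU(),
        )
        # Action head: fuse language and vision features, predict a
        # continuous action (e.g. end-effector delta pose + gripper).
        self.head = nn.Sequential(
            nn.Linear(text_dim + img_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, image, instruction_tokens):
        txt = self.token_embed(instruction_tokens).mean(dim=1)  # (B, text_dim)
        img = self.vision(image)                                # (B, img_dim)
        return self.head(torch.cat([txt, img], dim=-1))         # (B, action_dim)


if __name__ == "__main__":
    policy = LanguageConditionedPolicy()
    image = torch.randn(2, 3, 96, 96)        # batch of camera observations
    tokens = torch.randint(0, 1000, (2, 8))  # batch of tokenized instructions
    actions = policy(image, tokens)
    print(actions.shape)  # torch.Size([2, 7])
    # Behavior cloning would regress these outputs against demonstrated
    # actions (e.g. with an MSE loss) over a dataset of teleoperated episodes.
```

In practice the toy encoders would be replaced by pretrained vision and language backbones, and hierarchical or modular designs would split instruction interpretation from low-level control, as the overview notes.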
Papers
An Environment-Adaptive Position/Force Control Based on Physical Property Estimation
Tomoya Kitamura, Yuki Saito, Hiroshi Asai, Kouhei Ohnishi
Dream to Manipulate: Compositional World Models Empowering Robot Imitation Learning with Imagination
Leonardo Barcellona, Andrii Zadaianchuk, Davide Allegro, Samuele Papa, Stefano Ghidoni, Efstratios Gavves