Eye-Hand Coordination

Eye-hand coordination research focuses on understanding and replicating the complex interplay between visual perception and motor control needed for precise action. Current work emphasizes developing robust models, often built on convolutional and graph neural networks, that predict gaze direction from full-body pose and enable robots to perform intricate tasks such as object manipulation and parkour-like locomotion using only low-cost visual input. This work is significant both for advancing our understanding of human motor control and for creating more adaptable, efficient robots capable of operating in complex, unstructured environments. The resulting economical and adaptable robotic systems have implications for fields such as assistive robotics and manufacturing.
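As a rough illustration of the pose-to-gaze models described above, the sketch below maps full-body pose keypoints to a unit gaze-direction vector with a small graph neural network in PyTorch. The joint count, skeleton edges, and layer sizes are illustrative assumptions, not values taken from any particular paper.

```python
import torch
import torch.nn as nn

NUM_JOINTS = 17          # assumed COCO-style skeleton
FEAT_DIM = 3             # (x, y, z) per joint

# Assumed skeleton connectivity; indices are illustrative only.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7),
         (1, 8), (8, 9), (9, 10), (10, 11), (8, 12), (12, 13), (13, 14)]

def build_adjacency(num_joints, edges):
    """Symmetric adjacency with self-loops, row-normalized."""
    a = torch.eye(num_joints)
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    return a / a.sum(dim=1, keepdim=True)

class GraphConv(nn.Module):
    """One graph-convolution layer: aggregate neighbors, then project."""
    def __init__(self, in_dim, out_dim, adjacency):
        super().__init__()
        self.register_buffer("adj", adjacency)
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):                 # x: (batch, joints, in_dim)
        return torch.relu(self.proj(self.adj @ x))

class PoseToGaze(nn.Module):
    """Map a full-body pose to a unit gaze-direction vector."""
    def __init__(self, adjacency):
        super().__init__()
        self.gc1 = GraphConv(FEAT_DIM, 32, adjacency)
        self.gc2 = GraphConv(32, 64, adjacency)
        self.head = nn.Linear(64, 3)

    def forward(self, pose):              # pose: (batch, joints, 3)
        h = self.gc2(self.gc1(pose))
        g = self.head(h.mean(dim=1))      # pool over joints
        return g / g.norm(dim=-1, keepdim=True)  # unit direction

if __name__ == "__main__":
    adj = build_adjacency(NUM_JOINTS, EDGES)
    model = PoseToGaze(adj)
    gaze = model(torch.randn(8, NUM_JOINTS, FEAT_DIM))
    print(gaze.shape)  # torch.Size([8, 3])
```

In practice such a model would be trained on synchronized pose and eye-tracking data, e.g. with a cosine-similarity loss between predicted and measured gaze directions; the random input here only demonstrates the tensor shapes.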

Papers