Motion Imitation
Motion imitation research focuses on enabling robots and virtual characters to replicate human movements with both visual fidelity and physical plausibility. Current efforts concentrate on developing robust and efficient algorithms, often employing deep learning techniques such as generative adversarial networks (GANs), reinforcement learning, and diffusion models, to address challenges like data scarcity, physical constraints, and generalization to unseen scenarios. These advances matter for robotics, animation, and healthcare, offering potential improvements in robot control, realistic character animation, and gait analysis. The field is also actively exploring ways to improve the efficiency and robustness of motion imitation when learning from imperfect data sources such as casually captured videos.
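One common formulation in reinforcement-learning-based motion imitation (popularized by DeepMimic-style trackers) rewards the policy for keeping the simulated character's pose close to a reference motion. The sketch below is illustrative only: the function name and the gain value are assumptions, not taken from any of the listed papers.

```python
import numpy as np

def pose_tracking_reward(q, q_ref, scale=2.0):
    """Illustrative imitation reward: exponentially penalize deviation
    of the character's joint angles q from the reference motion's joint
    angles q_ref. `scale` is an assumed, hand-tuned gain."""
    err = np.sum((np.asarray(q, dtype=float) - np.asarray(q_ref, dtype=float)) ** 2)
    return float(np.exp(-scale * err))

# Perfect tracking yields the maximum reward of 1.0;
# the reward decays smoothly as the pose drifts from the reference.
print(pose_tracking_reward([0.1, -0.3], [0.1, -0.3]))  # -> 1.0
print(pose_tracking_reward([0.5, -0.3], [0.1, -0.3]) < 1.0)  # -> True
```

The exponential shaping keeps the reward bounded in (0, 1], which tends to stabilize policy-gradient training compared with unbounded negative-error rewards.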
Papers
HOVER: Versatile Neural Whole-Body Controller for Humanoid Robots
Tairan He, Wenli Xiao, Toru Lin, Zhengyi Luo, Zhenjia Xu, Zhenyu Jiang, Jan Kautz, Changliu Liu, Guanya Shi, Xiaolong Wang, Linxi Fan, Yuke Zhu
MovieCharacter: A Tuning-Free Framework for Controllable Character Video Synthesis
Di Qiu, Zheng Chen, Rui Wang, Mingyuan Fan, Changqian Yu, Junshi Huang, Xiang Wen