Motion Information
Motion information research focuses on accurately estimating and exploiting motion data from diverse sources, including video, sensor streams, and medical images. Current work emphasizes robust and efficient algorithms, often built on deep learning models such as diffusion models and Siamese networks, to address challenges like motion blur, occlusion, and limited training data. These advances are shaping computer vision, robotics, and medical imaging, enabling improved 3D reconstruction, autonomous navigation, and image analysis. Developing more accurate and generalizable motion models remains a central goal.
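As a concrete illustration of the Siamese-network approach the summary mentions, the sketch below pairs a shared motion encoder with a contrastive loss to score how similar two motion clips are. It is a minimal, hypothetical PyTorch example: the module names, feature sizes, and loss margin are assumptions made for illustration and are not drawn from any of the papers listed below.

# Hypothetical Siamese motion-similarity sketch (not from any specific paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionEncoder(nn.Module):
    """Encodes a motion clip (frames x joint features) into a unit-norm embedding."""
    def __init__(self, feat_dim=66, hidden=128, embed=64):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, embed)

    def forward(self, x):                  # x: (batch, frames, feat_dim)
        _, h = self.gru(x)                 # h: (1, batch, hidden)
        return F.normalize(self.head(h.squeeze(0)), dim=-1)

def contrastive_loss(z1, z2, same, margin=0.5):
    """Pull embeddings of matching motions together, push mismatched ones apart."""
    d = 1.0 - F.cosine_similarity(z1, z2)            # cosine distance in [0, 2]
    return (same * d**2 + (1 - same) * F.relu(margin - d)**2).mean()

if __name__ == "__main__":
    enc = MotionEncoder()
    clip_a = torch.randn(8, 60, 66)        # 8 clips, 60 frames, 22 joints * xyz
    clip_b = torch.randn(8, 60, 66)
    labels = torch.randint(0, 2, (8,)).float()   # 1 = same motion, 0 = different
    loss = contrastive_loss(enc(clip_a), enc(clip_b), labels)
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")

Normalizing the embeddings and using cosine distance keeps the similarity score bounded, which makes the contrastive margin easier to tune; any real motion-correspondence system would replace the random tensors with pose sequences from a motion dataset.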
Papers
The Language of Motion: Unifying Verbal and Non-verbal Language of 3D Human Motion
Changan Chen, Juze Zhang, Shrinidhi K. Lakshmikanth, Yusu Fang, Ruizhi Shao, Gordon Wetzstein, Li Fei-Fei, Ehsan Adeli
TIV-Diffusion: Towards Object-Centric Movement for Text-driven Image to Video Generation
Xingrui Wang, Xin Li, Yaosi Hu, Hanxin Zhu, Chen Hou, Cuiling Lan, Zhibo Chen
Text to Blind Motion
Hee Jae Kim, Kathakoli Sengupta, Masaki Kuribayashi, Hernisa Kacorri, Eshed Ohn-Bar
SoPo: Text-to-Motion Generation Using Semi-Online Preference Optimization
Xiaofeng Tan, Hongsong Wang, Xin Geng, Pan Zhou
Assessing Similarity Measures for the Evaluation of Human-Robot Motion Correspondence
Charles Dietzel, Patrick J. Martin
MegaSaM: Accurate, Fast, and Robust Structure and Motion from Casual Dynamic Videos
Zhengqi Li, Richard Tucker, Forrester Cole, Qianqian Wang, Linyi Jin, Vickie Ye, Angjoo Kanazawa, Aleksander Holynski, Noah Snavely
Monocular Dynamic Gaussian Splatting is Fast and Brittle but Smooth Motion Helps
Yiqing Liang, Mikhail Okunev, Mikaela Angelina Uy, Runfeng Li, Leonidas Guibas, James Tompkin, Adam W. Harley
RMD: A Simple Baseline for More General Human Motion Generation via Training-free Retrieval-Augmented Motion Diffuse
Zhouyingcheng Liao, Mingyuan Zhang, Wenjia Wang, Lei Yang, Taku Komura