Action Feature
Action feature research focuses on understanding and representing actions across various data modalities, with the goal of improving automated action recognition, generation, and interpretation. Current research emphasizes deep learning models, particularly transformers and variational autoencoders, often incorporating multimodal inputs (vision, language, audio) to achieve robust and context-aware action representations. This work has significant implications for diverse fields, including sports analytics, human-computer interaction, robotics, and healthcare, by enabling more accurate and efficient analysis of human and machine actions. Furthermore, reliable action feature representations are crucial for advancing explainability and safety in AI systems.
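To make the transformer-based approach mentioned above concrete, the sketch below shows one common pattern: a transformer encoder that pools per-frame visual features into a single action representation. This is a minimal illustration, not the method of any paper listed here; the module names, dimensions, and the classification head are all assumptions.

```python
# Minimal sketch of a transformer-based action feature encoder.
# Assumes per-frame features from a pretrained image backbone; all
# names and hyperparameters below are illustrative, not from a paper.
import torch
import torch.nn as nn

class ActionFeatureEncoder(nn.Module):
    def __init__(self, feat_dim=512, n_heads=8, n_layers=4, n_classes=100):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Learnable [CLS]-style token whose output serves as the
        # pooled action representation.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, feat_dim))
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, frame_feats):
        # frame_feats: (batch, n_frames, feat_dim) per-frame features.
        cls = self.cls_token.expand(frame_feats.size(0), -1, -1)
        x = torch.cat([cls, frame_feats], dim=1)
        x = self.encoder(x)
        action_feat = x[:, 0]  # pooled action representation
        return action_feat, self.head(action_feat)

# Usage: a batch of 8 clips, each with 16 frames of 512-d features.
feats = torch.randn(8, 16, 512)
action_feat, logits = ActionFeatureEncoder()(feats)
print(action_feat.shape, logits.shape)  # (8, 512), (8, 100)
```

The same pooled representation could feed downstream tasks beyond classification, such as retrieval or generation; multimodal variants typically concatenate or cross-attend over language and audio tokens alongside the frame features.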
Papers
REACT: Recognize Every Action Everywhere All At Once
Naga VS Raviteja Chappa, Pha Nguyen, Page Daniel Dobbs, Khoa Luu
An HCAI Methodological Framework: Putting It Into Action to Enable Human-Centered AI
Wei Xu, Zaifeng Gao, Marvin Dainoff
From Prediction to Action: Critical Role of Performance Estimation for Machine-Learning-Driven Materials Discovery
Mario Boley, Felix Luong, Simon Teshuva, Daniel F Schmidt, Lucas Foppa, Matthias Scheffler
From Explanation to Action: An End-to-End Human-in-the-loop Framework for Anomaly Reasoning and Management
Xueying Ding, Nikita Seleznev, Senthil Kumar, C. Bayan Bruss, Leman Akoglu
Therbligs in Action: Video Understanding through Motion Primitives
Eadom Dessalene, Michael Maynord, Cornelia Fermuller, Yiannis Aloimonos
Approach Intelligent Writing Assistants Usability with Seven Stages of Action
Avinash Bhat, Disha Shrivastava, Jin L. C. Guo