Action Feature
Action feature research focuses on understanding and representing actions across data modalities, with the goal of improving automated action recognition, generation, and interpretation. Current work emphasizes deep learning models, particularly transformers and variational autoencoders, often incorporating multimodal inputs (vision, language, audio) to achieve robust, context-aware action representations. These advances have significant implications for diverse fields, including sports analytics, human-computer interaction, robotics, and healthcare, by enabling more accurate and efficient analysis of human and machine actions. Robust action feature representations are also crucial for advancing explainability and safety in AI systems.
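The multimodal fusion idea described above can be illustrated with a minimal sketch: per-modality token features (e.g. from vision and language encoders) are concatenated and jointly attended over, then pooled into a single action feature vector. All shapes, names, and weights here are hypothetical illustrations, not any specific paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a token sequence."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores, axis=-1) @ v

d = 16
# Hypothetical per-modality features: 4 video-frame tokens, 3 text tokens.
vision_tokens = rng.normal(size=(4, d))
language_tokens = rng.normal(size=(3, d))

# Fuse modalities by concatenating token sequences, then attending jointly.
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
fused = self_attention(np.concatenate([vision_tokens, language_tokens]), Wq, Wk, Wv)

# Mean-pool into one context-aware action feature and score 5 action classes.
action_feature = fused.mean(axis=0)
W_cls = rng.normal(size=(d, 5)) * 0.1
action_logits = action_feature @ W_cls
print(action_logits.shape)  # (5,)
```

Pooling after joint attention, rather than fusing pooled per-modality vectors, lets each modality's tokens condition on the other's, which is the usual motivation for transformer-based multimodal action models.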
Papers
Interpretability in Action: Exploratory Analysis of VPT, a Minecraft Agent
Karolis Jucys, George Adamopoulos, Mehrab Hamidi, Stephanie Milani, Mohammad Reza Samsami, Artem Zholus, Sonia Joseph, Blake Richards, Irina Rish, Özgür Şimşek
Exciting Action: Investigating Efficient Exploration for Learning Musculoskeletal Humanoid Locomotion
Henri-Jacques Geiß, Firas Al-Hafez, Andre Seyfarth, Jan Peters, Davide Tateo