Action Mapping
Action mapping focuses on learning and representing the relationship between states and actions, primarily in reinforcement learning and robotics. Current research emphasizes robust, generalizable mapping methods, often built on neural networks and combined with techniques such as continual learning and data-free knowledge transfer, to improve efficiency and adaptability in complex, dynamic environments. This work is significant for advancing autonomous systems, particularly in robotics and control, because it enables more flexible, efficient, and explainable decision-making across diverse scenarios.
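To make the idea concrete, below is a minimal sketch of a state-conditioned action map in the spirit of the papers listed here: a small network predicts, from the current state, a linear map that projects a low-dimensional input (for example a latent action or a teleoperation command) to a full joint-space command. The class name, dimensions, and network architecture are illustrative assumptions, not an implementation from any of the listed papers.

```python
import torch
import torch.nn as nn


class StateConditionedLinearMap(nn.Module):
    """Maps a low-dimensional action z to a full command a = A(s) z + b(s),
    where the matrix A(s) and offset b(s) are predicted from the state s."""

    def __init__(self, state_dim: int, latent_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.latent_dim = latent_dim
        self.action_dim = action_dim
        # Small MLP that outputs the entries of A(s) and b(s).
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, action_dim * latent_dim + action_dim),
        )

    def forward(self, state: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        params = self.net(state)
        A = params[..., : self.action_dim * self.latent_dim]
        b = params[..., self.action_dim * self.latent_dim :]
        A = A.view(*state.shape[:-1], self.action_dim, self.latent_dim)
        # State-conditioned linear action map: a = A(s) z + b(s).
        return torch.einsum("...ij,...j->...i", A, z) + b


# Hypothetical usage: a 2-DoF input mapped to a 7-DoF manipulator command.
mapper = StateConditionedLinearMap(state_dim=14, latent_dim=2, action_dim=7)
state = torch.randn(1, 14)   # e.g. joint positions and velocities
z = torch.randn(1, 2)        # low-dimensional user or latent action
command = mapper(state, z)   # full joint-space action, shape (1, 7)
print(command.shape)
```

A nonlinear action map, as studied in the teleoperation paper below, would replace the state-conditioned linear projection with a network that consumes the state and the low-dimensional input jointly.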
Papers
Learning State Conditioned Linear Mappings for Low-Dimensional Control of Robotic Manipulators
Michael Przystupa, Kerrick Johnstonbaugh, Zichen Zhang, Laura Petrich, Masood Dehghan, Faezeh Haghverd, Martin Jagersand
Investigating the Benefits of Nonlinear Action Maps in Data-Driven Teleoperation
Michael Przystupa, Gauthier Gidel, Matthew E. Taylor, Martin Jagersand, Justus Piater, Samuele Tosatto