Strategic Manipulation
Strategic manipulation studies how agents, whether human or artificial, influence systems or other agents to achieve desired outcomes. Current research focuses on detecting and mitigating manipulation in contexts such as language models, robotic control, and multi-agent systems, often employing techniques like hierarchical planning, diffusion models, and transformer-based architectures. The field is central to building trustworthy AI systems and to understanding human-computer interaction, with implications for improving the safety and robustness of robots and for mitigating harmful biases in AI.
Papers
Manipulation via Membranes: High-Resolution and Highly Deformable Tactile Sensing and Control
Miquel Oller, Mireia Planas, Dmitry Berenson, Nima Fazeli
Safe Reinforcement Learning of Dynamic High-Dimensional Robotic Tasks: Navigation, Manipulation, Interaction
Puze Liu, Kuo Zhang, Davide Tateo, Snehal Jauhri, Zhiyuan Hu, Jan Peters, Georgia Chalvatzaki