Targeted Perturbation
Targeted perturbation involves strategically altering a system (e.g., a biological system, a machine learning model, or a reward function in reinforcement learning) to understand its behavior or improve its performance. Current research focuses on methods that identify the specific components affected by a perturbation, often using causal inference techniques or by leveraging data characteristics such as sample density in machine learning. These advances have implications for diverse fields, including disease modeling, the robustness of AI systems against adversarial attacks, and the resilience of reinforcement learning algorithms to reward noise.
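As a minimal illustration of the core idea, the sketch below perturbs a single, chosen input feature of a toy model and measures the resulting shift in the output. The model, feature index, and perturbation size are hypothetical choices for demonstration only; they are not taken from any of the papers listed on this page.

```python
# Illustrative sketch only: targeted perturbation of one input feature of a
# toy model, measuring how strongly the output responds. The model weights,
# the chosen feature index, and the perturbation size are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def model(x: np.ndarray) -> np.ndarray:
    """A toy linear model standing in for the system under study."""
    weights = np.array([0.5, -1.2, 2.0, 0.1])
    return x @ weights

# Baseline inputs: 100 samples with 4 features.
X = rng.normal(size=(100, 4))
baseline = model(X)

# Targeted perturbation: nudge only feature index 2 by a small delta
# and record the shift in the model's output.
target_feature = 2
delta = 0.1
X_perturbed = X.copy()
X_perturbed[:, target_feature] += delta

effect = model(X_perturbed) - baseline
print(f"Mean output shift from perturbing feature {target_feature}: {effect.mean():.3f}")
```

Because the toy model is linear, the mean output shift equals the perturbation size times the targeted feature's weight (0.1 × 2.0 = 0.2); in the settings surveyed here, the analogous effect must instead be estimated, for example with causal inference techniques.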