Power Regularization
Power regularization is a technique for mitigating undesirable power imbalances in machine learning systems, with the aim of improving robustness and fairness. Current research focuses on multi-agent reinforcement learning, where limiting the influence agents exert on one another addresses problems such as single-agent failure and adversarial communication. Typical methods modify the training objective to explicitly trade off task performance against the distribution of power, sometimes incorporating intrinsic motivation terms or adversarial training; a minimal sketch of such an objective follows. This work has implications for developing more reliable and ethical AI systems, particularly in collaborative settings and applications sensitive to power dynamics.
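To make the objective concrete, here is a minimal sketch, not taken from any of the cited papers, of how a power penalty might be folded into a training objective. It assumes a simple two-agent setting where `q_values[a_i][a_j]` holds agent i's estimated return for each joint action, and it approximates the other agent's power as how much that agent's action choice can swing agent i's outcome; the names `power_penalty` and `lambda_power` are illustrative, not from any library.

```python
import numpy as np

def power_penalty(q_values: np.ndarray) -> float:
    """Approximate agent j's power over agent i as the spread in agent i's
    expected return induced by agent j's action choice (columns)."""
    # Average over agent i's own actions (rows), then measure how much the
    # result varies with agent j's action (columns).
    value_given_j = q_values.mean(axis=0)  # shape: (n_actions_j,)
    return float(value_given_j.max() - value_given_j.min())

def regularized_objective(task_return: float, q_values: np.ndarray,
                          lambda_power: float = 0.1) -> float:
    """Trade off task performance against the power another agent holds."""
    return task_return - lambda_power * power_penalty(q_values)

# Example: agent i's estimated returns over joint actions
# (rows: agent i's actions, columns: the other agent's actions).
q = np.array([[1.0, 0.2],
              [0.8, 0.1]])
print(regularized_objective(task_return=1.0, q_values=q))
```

The coefficient `lambda_power` plays the usual regularization role: at zero the agents optimize task return alone, while larger values sacrifice task performance to keep any one agent's influence over others small.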