Non-Robustness
Research on non-robustness in machine learning and robotics focuses on understanding and mitigating the vulnerability of systems to unexpected inputs or perturbations. Current work investigates this issue across a range of models, including deep neural networks and robotic control systems, examining factors such as noise sensitivity, adversarial examples, and the influence of model parameters and training regimes. Addressing non-robustness is crucial for the reliability and safety of AI in critical applications such as autonomous vehicles and medical diagnosis, and ultimately for building trustworthy, dependable systems.
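As a minimal sketch of the adversarial-example phenomenon mentioned above: for a linear classifier, stepping the input against the sign of the weight vector is the fastest way to change the score under an L-infinity budget, so even a small perturbation can flip the prediction. The weights and input below are made-up toy values, not from any paper discussed here.

```python
import numpy as np

# Toy linear classifier (hypothetical weights, for illustration only).
w = np.array([0.5, -0.3, 0.8])
b = 0.1

def predict(x):
    """Binary decision: class 1 if the linear score w @ x + b is positive."""
    return int(w @ x + b > 0)

x_clean = np.array([0.2, 0.1, 0.1])
print(predict(x_clean))  # class 1: score = 0.25

# FGSM-style perturbation for a linear model: move each coordinate
# against the sign of its weight, which decreases the score by
# eps * sum(|w|) while changing no coordinate by more than eps.
eps = 0.2
x_adv = x_clean - eps * np.sign(w)
print(predict(x_adv))    # class 0: score = 0.25 - 0.2 * 1.6 = -0.07
```

The same mechanism drives non-robustness in deep networks, where the perturbation direction is taken from the gradient of the loss with respect to the input rather than from the weights directly.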
Papers
October 14, 2024
January 31, 2024
August 18, 2023
July 5, 2023
June 9, 2023
May 18, 2023
March 15, 2023
June 30, 2022
March 22, 2022