Safe Deep
"Safe Deep" research focuses on developing and deploying deep learning models while mitigating risks and ensuring safety, reliability, and trustworthiness. Current efforts concentrate on improving model robustness through techniques like Lyapunov function-based reinforcement learning and adversarial training for imitation learning, as well as optimizing resource usage and latency in edge computing environments using deep reinforcement learning frameworks. This work is crucial for responsible AI development, enabling the safe application of deep learning in high-stakes domains such as autonomous systems, finance, and healthcare, while addressing concerns about bias, instability, and inappropriate content generation.