Current State
Current research focuses on understanding and mitigating the limitations of AI models, particularly large language models (LLMs). Key areas include addressing biases in LLMs and developing effective "guardrails" for safe, responsible deployment. Researchers are also exploring synthetic data as a way to overcome training and evaluation data limitations in applications such as medical imaging and autonomous vehicles. This work is crucial for improving AI's reliability and trustworthiness, with impact on fields ranging from healthcare and transportation to environmental sustainability and online safety.