Current State
Current research focuses on understanding and mitigating the limitations of AI models, particularly large language models (LLMs). Key directions include addressing bias in LLMs, developing effective "guardrails" for safe and responsible deployment, and using synthetic data to overcome data scarcity when training and evaluating models in domains such as medical imaging and autonomous vehicles. This work is central to improving the reliability and trustworthiness of AI, with impact across healthcare, transportation, environmental sustainability, and online safety.