Black-Box Language Models
Black-box large language models (LLMs) are models whose internal workings are opaque to users, which makes their behavior hard to understand and improve. Current research focuses on adapting, analyzing, and explaining these models without direct access to their internal parameters, employing techniques such as prompt engineering, watermarking, and adversarial attacks to probe their capabilities and limitations. This work is crucial for mitigating the risks of deploying powerful yet inscrutable AI systems and for building more trustworthy and reliable language technologies.
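One common black-box analysis pattern is to probe a model purely through its input-output interface: perturb a prompt, re-query the model, and measure how much the output shifts. The sketch below illustrates this under assumptions: `query_black_box` is a hypothetical stand-in for an opaque model API (a real setup would call a hosted endpoint instead), and the perturbation and stability metric are deliberately minimal.

```python
import random

def query_black_box(prompt: str) -> str:
    # Hypothetical stand-in for an opaque model endpoint; in practice this
    # would be a network call to a hosted LLM whose parameters are hidden.
    # Here it "classifies" sentiment with a trivial keyword rule so the
    # sketch runs offline.
    return "positive" if "good" in prompt.lower() else "negative"

def perturb(prompt: str, rng: random.Random) -> str:
    # Character-level perturbation: drop one randomly chosen character.
    if len(prompt) < 2:
        return prompt
    i = rng.randrange(len(prompt))
    return prompt[:i] + prompt[i + 1:]

def probe_stability(prompt: str, n_trials: int = 20, seed: int = 0) -> float:
    # Fraction of perturbed prompts whose output matches the unperturbed
    # output -- a crude robustness signal that needs no parameter access.
    rng = random.Random(seed)
    baseline = query_black_box(prompt)
    matches = sum(
        query_black_box(perturb(prompt, rng)) == baseline
        for _ in range(n_trials)
    )
    return matches / n_trials

stability = probe_stability("The movie was good")
print(f"output stability under perturbation: {stability:.2f}")
```

The same loop structure underlies many black-box adversarial-probing setups: only the query function changes, while the analysis never touches model weights.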