Linguistic Competence
Linguistic competence in large language models (LLMs) concerns how well models grasp linguistic structure and use language functionally rather than merely statistically. Current research assesses this competence with benchmarks such as Holmes, covering syntax, morphology, and semantics, and often applies probing techniques to architectures such as BERT and GPT-style models to analyze their internal representations. This work matters for improving LLMs' robustness and reliability, for informing our understanding of both human and artificial language processing, and ultimately for building more effective and ethically sound applications of these technologies.
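To illustrate the probing techniques mentioned above, the sketch below trains a simple linear probe on frozen BERT representations to decode a basic agreement property from sentence embeddings. The sentences, labels, and probed property are illustrative placeholders, not drawn from Holmes or any particular benchmark.

```python
# A minimal probing sketch, assuming bert-base-uncased and a toy dataset:
# train a linear classifier on frozen hidden states to test whether a
# linguistic property is linearly decodable from the representations.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Toy probing data: illustrative sentences with a binary label
# (1 = subject and verb agree in number, 0 = they do not).
sentences = [
    "The dogs bark loudly.",
    "The dog bark loudly.",
    "She writes every day.",
    "She write every day.",
]
labels = [1, 0, 1, 0]

def embed(sentence):
    """Return the final-layer [CLS] vector as a sentence embedding."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[0, 0].numpy()  # [CLS] position

features = [embed(s) for s in sentences]

# The probe itself: a logistic-regression classifier over frozen features.
# High accuracy suggests the property is encoded in the representations;
# the model's own parameters are never updated.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("probe accuracy:", probe.score(features, labels))
```

In practice, probing studies use much larger labeled datasets, compare probes across layers, and control for probe capacity; this sketch only shows the basic train-a-classifier-on-frozen-features pattern.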
Papers