Linguistic Acceptability
Linguistic acceptability research focuses on understanding and modeling how humans judge the grammaticality and naturalness of sentences, spanning modalities such as spoken and written language, code-mixing, and even robotic interaction. Current work employs machine learning models, including large language models (LLMs) and multilayer perceptrons (MLPs), often trained on large, newly created datasets of human acceptability judgments across multiple languages. This research is crucial for improving natural language processing (NLP) systems, enabling more accurate language generation and analysis, and informing the design of human-computer interfaces that are more socially acceptable and effective.
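As a concrete illustration of this modeling setup, the sketch below fine-tunes a small pretrained encoder as a binary acceptability classifier on CoLA (the Corpus of Linguistic Acceptability), a standard English benchmark of human acceptability judgments. The checkpoint, hyperparameters, and dataset choice are illustrative assumptions for exposition, not the configuration of any particular paper listed here.

```python
# Minimal sketch: train a binary acceptability classifier on CoLA-style
# human judgments (label 1 = acceptable, 0 = unacceptable).
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL = "distilbert-base-uncased"  # assumed small encoder; any checkpoint works

# CoLA pairs single sentences with expert acceptability judgments.
dataset = load_dataset("glue", "cola")
tokenizer = AutoTokenizer.from_pretrained(MODEL)

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cola-acceptability",
                           num_train_epochs=3,
                           per_device_train_batch_size=32),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # enables dynamic padding of batches
)
trainer.train()

# Score a new sentence: a higher probability on index 1 means "acceptable".
inputs = tokenizer("The cat sat on the mat.", return_tensors="pt").to(model.device)
print(model(**inputs).logits.softmax(-1))
```

Papers in this area vary the model family (e.g., scoring sentences with an LLM's probabilities instead of fine-tuning a classifier) and the language or modality of the judgment data, but the basic loop of training on human acceptability judgments and predicting acceptability for new inputs is the same.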