Linguistic Acceptability

Linguistic acceptability research focuses on understanding and modeling how humans judge the grammaticality and naturalness of sentences, spanning spoken and written language, code-mixed text, and even human-robot interaction. Current work employs machine learning models, from multilayer perceptrons (MLPs) to large language models (LLMs), often trained on large, newly created datasets of human acceptability judgments across multiple languages. These efforts are crucial for improving natural language processing (NLP) systems, enabling more accurate language generation and analysis, and informing the design of human-computer interfaces that are more socially acceptable and effective.
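To make the modeling idea concrete, here is a minimal, self-contained sketch of one common baseline: scoring a sentence's acceptability by its mean log-probability under a language model. This toy version uses an add-one-smoothed bigram model over a tiny hand-written corpus (a stand-in for real acceptability datasets such as CoLA); the corpus, sentences, and function name are illustrative assumptions, not drawn from any specific paper above.

```python
import math
from collections import Counter

# Toy "acceptable" training sentences; real work would train on a
# large corpus or use a pretrained LLM's token probabilities instead.
corpus = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "a cat chased a dog",
]

# Collect bigram and unigram counts, with sentence-boundary markers.
bigrams = Counter()
unigrams = Counter()
for sent in corpus:
    toks = ["<s>"] + sent.split() + ["</s>"]
    unigrams.update(toks[:-1])
    bigrams.update(zip(toks[:-1], toks[1:]))

vocab_size = len(unigrams) + 1  # +1 slot for unseen words


def acceptability_score(sentence: str) -> float:
    """Mean log-probability per token under the add-one-smoothed
    bigram model; higher values mean more familiar word order."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    logp = 0.0
    for prev, cur in zip(toks[:-1], toks[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        logp += math.log(p)
    return logp / (len(toks) - 1)


# A grammatical word order scores higher than a scrambled one.
good = acceptability_score("the cat chased the dog")
bad = acceptability_score("cat the dog chased the")
```

In practice the same idea scales up by replacing the bigram probabilities with (pseudo-)log-likelihoods from a pretrained LLM, or by training a classifier directly on binary human judgments.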

Papers