Linguistic Properties
Research on linguistic properties in language models focuses on understanding how these models acquire and use grammatical and semantic information, often investigating the impact of different pre-training objectives and architectural choices (e.g., transformer-based models such as BERT and T5). Current work explores how linguistic features such as tense, aspect, and grammatical gender influence model performance on tasks including truthfulness detection and cross-lingual transfer, and examines how well model representations align with human brain activity during language processing. These investigations are crucial for improving model robustness, mitigating biases, and furthering our understanding of how language is represented and processed both computationally and cognitively.