Linguistic Representation
Linguistic representation research studies how language is encoded and processed, both computationally and neurally, with the goal of building more accurate and robust models of human language understanding. Current work examines a range of model architectures, including large language models (LLMs), vision-language models (VLMs), and smaller phoneme-based models, asking how well each captures syntactic and semantic information across languages and modalities. This research advances our understanding of language acquisition and processing, improves machine translation and other NLP tasks, and may eventually enable more effective brain-computer interfaces. Ongoing efforts also address biases in existing models and aim for greater explainability and robustness in linguistic representations.
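A common way to test whether a model's representations capture a syntactic property is a linear probe: a simple classifier trained on the model's hidden states, whose accuracy indicates whether the property is linearly decodable from them. The sketch below illustrates the idea for part-of-speech tagging; the model name, word indices, and the toy labeled examples are assumptions chosen for illustration, not drawn from any particular study.

```python
# Minimal probing-classifier sketch: does a pretrained encoder's hidden
# state encode part-of-speech? Assumes bert-base-uncased and a toy dataset.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical labeled data: (sentence, word index, POS tag) triples.
examples = [
    ("the cat sleeps", 1, "NOUN"),
    ("the cat sleeps", 2, "VERB"),
    ("a dog runs", 1, "NOUN"),
    ("a dog runs", 2, "VERB"),
    ("the red ball", 1, "ADJ"),
    ("a big house", 1, "ADJ"),
]

features, labels = [], []
with torch.no_grad():
    for sentence, word_idx, tag in examples:
        enc = tokenizer(sentence, return_tensors="pt")
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
        # Map the word to its first sub-token ([CLS] occupies position 0).
        span = enc.word_to_tokens(word_idx)
        features.append(hidden[span.start].numpy())
        labels.append(tag)

# High probe accuracy suggests the property is linearly decodable
# from the frozen representations.
probe = LogisticRegression(max_iter=1000).fit(np.array(features), labels)
print("probe training accuracy:", probe.score(np.array(features), labels))
```

In practice such probes are trained and evaluated on large held-out treebanks, and probe capacity is controlled so that high accuracy reflects information present in the representation rather than the probe learning the task itself.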