Natural Language
Natural language processing (NLP) focuses on enabling computers to understand, interpret, and generate human language. Current research leans heavily on large pretrained language models such as BERT to tackle diverse tasks, including text-to-SQL translation, semantic analysis of images, and controlling robots through natural language commands. The field's impact spans many sectors, from search engines and e-commerce platforms to healthcare diagnostics and more efficient scientific research via automated literature analysis and data extraction.
Papers
Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines
Sreejan Kumar, Carlos G. Correa, Ishita Dasgupta, Raja Marjieh, Michael Y. Hu, Robert D. Hawkins, Nathaniel D. Daw, Jonathan D. Cohen, Karthik Narasimhan, Thomas L. Griffiths
On the Paradox of Learning to Reason from Data
Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, Guy Van den Broeck
On the Limits of Evaluating Embodied Agent Model Generalization Using Validation Sets
Hyounghun Kim, Aishwarya Padmakumar, Di Jin, Mohit Bansal, Dilek Hakkani-Tur
Persian Natural Language Inference: A Meta-learning approach
Heydar Soudani, Mohammad Hassan Mojab, Hamid Beigy
Addressing Resource and Privacy Constraints in Semantic Parsing Through Data Augmentation
Kevin Yang, Olivia Deng, Charles Chen, Richard Shin, Subhro Roy, Benjamin Van Durme
Natural Language Specifications in Proof Assistants
Colin S. Gordon, Sergey Matskevich
A Precis of Language Models are not Models of Language
Csaba Veres
Assessing the Limits of the Distributional Hypothesis in Semantic Spaces: Trait-based Relational Knowledge and the Impact of Co-occurrences
Mark Anderson, Jose Camacho-Collados
Reasoning about Procedures with Natural Language Processing: A Tutorial
Li Zhang