Natural Language
Natural language processing (NLP) focuses on enabling computers to understand, interpret, and generate human language. Current research relies heavily on large language models (LLMs) such as GPT-4, as well as earlier pretrained models like BERT, to tackle diverse tasks including text-to-SQL translation, joint understanding of language and images or motion data, and controlling robots through natural-language commands. The field's impact spans many sectors, from improving search engines and e-commerce platforms to advancing healthcare diagnostics and accelerating scientific research through automated literature analysis and data extraction.
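To make one of the tasks above concrete, the sketch below shows how a text-to-SQL request is typically framed for an LLM: the table schema and the user's question are assembled into a single prompt, and the model is asked to complete the SQL. The schema, question, and prompt wording here are illustrative assumptions, and the actual model call is deliberately omitted, since it depends on whichever LLM API is used.

```python
# Minimal text-to-SQL prompt construction for an LLM.
# The schema and question are hypothetical examples; sending the
# prompt to a model is left out as it is provider-specific.

def build_text_to_sql_prompt(schema: str, question: str) -> str:
    """Assemble a zero-shot text-to-SQL prompt from a table schema
    and a natural-language question."""
    return (
        "Translate the question into a SQL query.\n"
        f"Schema: {schema}\n"
        f"Question: {question}\n"
        "SQL:"
    )

prompt = build_text_to_sql_prompt(
    "employees(id, name, salary, dept)",
    "Which employees earn more than 50000?",
)
print(prompt)
```

In practice this prompt would be sent to an LLM, and the text it generates after "SQL:" would be parsed and executed against the database; few-shot variants prepend worked question/SQL pairs to improve accuracy.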
Papers
NLPositionality: Characterizing Design Biases of Datasets and Models
Sebastin Santy, Jenny T. Liang, Ronan Le Bras, Katharina Reinecke, Maarten Sap
GAIA Search: Hugging Face and Pyserini Interoperability for NLP Training Data Exploration
Aleksandra Piktus, Odunayo Ogundepo, Christopher Akiki, Akintunde Oladipo, Xinyu Zhang, Hailey Schoelkopf, Stella Biderman, Martin Potthast, Jimmy Lin
An Empirical Study on Challenging Math Problem Solving with GPT-4
Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, Chi Wang
Text-to-Motion Retrieval: Towards Joint Understanding of Human Motion Data and Natural Language
Nicola Messina, Jan Sedmidubsky, Fabrizio Falchi, Tomáš Rebok
Asking Before Acting: Gather Information in Embodied Decision Making with Language Models
Xiaoyu Chen, Shenao Zhang, Pushi Zhang, Li Zhao, Jianyu Chen
Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation
Yuan Yang, Siheng Xiong, Ali Payani, Ehsan Shareghi, Faramarz Fekri
Linear-Time Modeling of Linguistic Structure: An Order-Theoretic Perspective
Tianyu Liu, Afra Amini, Mrinmaya Sachan, Ryan Cotterell
Transferring Visual Attributes from Natural Language to Verified Image Generation
Rodrigo Valerio, Joao Bordalo, Michal Yarom, Yonatan Bitton, Idan Szpektor, Joao Magalhaes
GlobalBench: A Benchmark for Global Progress in Natural Language Processing
Yueqi Song, Catherine Cui, Simran Khanuja, Pengfei Liu, Fahim Faisal, Alissa Ostapenko, Genta Indra Winata, Alham Fikri Aji, Samuel Cahyawijaya, Yulia Tsvetkov, Antonios Anastasopoulos, Graham Neubig
Modeling rapid language learning by distilling Bayesian priors into artificial neural networks
R. Thomas McCoy, Thomas L. Griffiths