Natural Language
Natural language processing (NLP) focuses on enabling computers to understand, interpret, and generate human language. Current research relies heavily on large language models (LLMs), such as BERT, to tackle diverse tasks, including text-to-SQL translation, semantic analysis of images, and controlling robots through natural language commands. The field's impact spans many sectors, from improving search engines and e-commerce platforms to advancing healthcare diagnostics and enabling more efficient scientific research through automated literature analysis and data extraction.
Papers
GlobalBench: A Benchmark for Global Progress in Natural Language Processing
Yueqi Song, Catherine Cui, Simran Khanuja, Pengfei Liu, Fahim Faisal, Alissa Ostapenko, Genta Indra Winata, Alham Fikri Aji, Samuel Cahyawijaya, Yulia Tsvetkov, Antonios Anastasopoulos, Graham Neubig
Modeling rapid language learning by distilling Bayesian priors into artificial neural networks
R. Thomas McCoy, Thomas L. Griffiths
Chain-of-Symbol Prompting Elicits Planning in Large Language Models
Hanxu Hu, Hongyuan Lu, Huajian Zhang, Yun-Ze Song, Wai Lam, Yue Zhang
Can Language Models Solve Graph Problems in Natural Language?
Heng Wang, Shangbin Feng, Tianxing He, Zhaoxuan Tan, Xiaochuang Han, Yulia Tsvetkov
Explaining black box text modules in natural language with language models
Chandan Singh, Aliyah R. Hsu, Richard Antonello, Shailee Jain, Alexander G. Huth, Bin Yu, Jianfeng Gao
NL2TL: Transforming Natural Languages to Temporal Logics using Large Language Models
Yongchao Chen, Rujul Gandhi, Yang Zhang, Chuchu Fan
Knowledge Authoring for Rules and Actions
Yuheng Wang, Paul Fodor, Michael Kifer
Interactive Text-to-SQL Generation via Editable Step-by-Step Explanations
Yuan Tian, Zheng Zhang, Zheng Ning, Toby Jia-Jun Li, Jonathan K. Kummerfeld, Tianyi Zhang