Natural Language
Natural language processing (NLP) focuses on enabling computers to understand, interpret, and generate human language. Current research relies heavily on large language models (LLMs) such as BERT to tackle diverse tasks, including text-to-SQL translation, semantic analysis of images, and even controlling robots through natural language commands. The field's impact spans many sectors, from improving search engines and e-commerce platforms to advancing healthcare diagnostics and enabling more efficient scientific research through automated literature analysis and data extraction.
Papers
A Material Lens on Coloniality in NLP
William Held, Camille Harris, Michael Best, Diyi Yang
How Well Do Large Language Models Understand Syntax? An Evaluation by Asking Natural Language Questions
Houquan Zhou, Yang Hou, Zhenghua Li, Xuebin Wang, Zhefeng Wang, Xinyu Duan, Min Zhang
Human-Centric Autonomous Systems With LLMs for User Command Reasoning
Yi Yang, Qingwen Zhang, Ci Li, Daniel Simões Marta, Nazre Batool, John Folkesson
Spot: A Natural Language Interface for Geospatial Searches in OSM
Lynn Khellaf, Ipek Baris Schlicht, Julia Bayer, Ruben Bouwmeester, Tilman Miraß, Tilman Wagner
Human-Centered Programming: The Design of a Robotic Process Automation Language
Piotr Gago, Anna Voitenkova, Daniel Jabłonski, Ihor Debelyi, Kinga Skorupska, Maciej Grzeszczuk, Wiesław Kopeć
Multilingual Mathematical Autoformalization
Albert Q. Jiang, Wenda Li, Mateja Jamnik
Are Words Enough? On the semantic conditioning of affective music generation
Jorge Forero, Gilberto Bernardes, Mónica Mendes