Natural Language
Natural language processing (NLP) focuses on enabling computers to understand, interpret, and generate human language. Current research relies heavily on large language models (LLMs), such as BERT, to tackle diverse tasks including text-to-SQL translation, semantic analysis of images, and controlling robots through natural language commands. The field's impact spans many sectors, from improving search engines and e-commerce platforms to advancing healthcare diagnostics and enabling more efficient scientific research through automated literature analysis and data extraction.
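To make the text-to-SQL example above concrete, the following is a minimal, illustrative sketch of how an LLM might be prompted for that task. It is not taken from any of the papers below: the build_text_to_sql_prompt helper, the call_llm placeholder, and the example schema are all assumptions for demonstration only.

    # Hypothetical sketch of prompting an LLM for text-to-SQL translation.
    # call_llm is a stand-in for an actual model client, not a real API.

    def build_text_to_sql_prompt(question: str, schema: str) -> str:
        """Compose a prompt asking the model to translate a question into SQL."""
        return (
            "Given the following table schema:\n"
            f"{schema}\n\n"
            "Translate this question into a SQL query:\n"
            f"{question}\n"
            "SQL:"
        )

    def call_llm(prompt: str) -> str:
        # Placeholder: in practice this would call an LLM endpoint.
        return "SELECT name FROM employees WHERE salary > 100000;"

    if __name__ == "__main__":
        schema = "employees(id, name, salary, department)"
        question = "Which employees earn more than 100,000?"
        print(call_llm(build_text_to_sql_prompt(question, schema)))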
Papers
Reranking for Natural Language Generation from Logical Forms: A Study based on Large Language Models
Levon Haroutunian, Zhuang Li, Lucian Galescu, Philip Cohen, Raj Tumuluri, Gholamreza Haffari
SCOB: Universal Text Understanding via Character-wise Supervised Contrastive Learning with Online Text Rendering for Bridging Domain Gap
Daehee Kim, Yoonsik Kim, DongHyun Kim, Yumin Lim, Geewook Kim, Taeho Kil
JobRecoGPT -- Explainable job recommendations using LLMs
Preetam Ghosh, Vaishali Sadaphal
Improve the efficiency of deep reinforcement learning through semantic exploration guided by natural language
Zhourui Guo, Meng Yao, Yang Yu, Qiyue Yin
LLM-based Medical Assistant Personalization with Short- and Long-Term Memory Coordination
Kai Zhang, Yangyang Kang, Fubang Zhao, Xiaozhong Liu
Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning
Tianhua Zhang, Jiaxin Ge, Hongyin Luo, Yung-Sung Chuang, Mingye Gao, Yuan Gong, Xixin Wu, Yoon Kim, Helen Meng, James Glass
Rigorously Assessing Natural Language Explanations of Neurons
Jing Huang, Atticus Geiger, Karel D'Oosterlinck, Zhengxuan Wu, Christopher Potts
Drive as You Speak: Enabling Human-Like Interaction with Large Language Models in Autonomous Vehicles
Can Cui, Yunsheng Ma, Xu Cao, Wenqian Ye, Ziran Wang