Semantic Parsing
Semantic parsing translates natural language into formal, structured representations (e.g., SQL queries or logical forms), enabling computers to understand and act on human instructions. Current research focuses on improving the accuracy and robustness of semantic parsers, particularly with large language models and sequence-to-sequence architectures, often augmented with techniques such as in-context learning and grammar-constrained decoding to handle ambiguity and improve generalization. The field is central to bridging human language and machine action, with applications ranging from question answering and database querying to controlling robots and other intelligent systems. Ongoing efforts address challenges such as complex queries, diverse data sources, and cross-lingual transfer.
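To make the task concrete, the sketch below shows semantic parsing in its simplest rule-based form: mapping a narrow family of English questions onto SQL. Modern systems replace these hand-written patterns with learned sequence-to-sequence models, but the input/output contract is the same. The table and column names (`employees`, `products`, `price`) are hypothetical, chosen purely for illustration.

```python
import re
from typing import Optional

# A toy rule-based semantic parser: each rule pairs an utterance pattern
# with a function that builds the corresponding SQL string.
PATTERNS = [
    (re.compile(r"how many (\w+) are there", re.I),
     lambda m: f"SELECT COUNT(*) FROM {m.group(1)}"),
    (re.compile(r"list all (\w+) where (\w+) is greater than (\d+)", re.I),
     lambda m: f"SELECT * FROM {m.group(1)} WHERE {m.group(2)} > {m.group(3)}"),
]

def parse(utterance: str) -> Optional[str]:
    """Return SQL for the first matching pattern, or None if no rule applies."""
    for pattern, build in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return build(match)
    return None

print(parse("How many employees are there?"))
# SELECT COUNT(*) FROM employees
print(parse("List all products where price is greater than 100"))
# SELECT * FROM products WHERE price > 100
```

Returning `None` for out-of-scope utterances illustrates the coverage problem that motivates learned parsers: hand-written grammars handle only the utterances their authors anticipated, whereas neural parsers generalize (imperfectly) beyond them.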
Papers
SPICE, A Dataset of Drug-like Molecules and Peptides for Training Machine Learning Potentials
Peter Eastman, Pavan Kumar Behara, David L. Dotson, Raimondas Galvelis, John E. Herr, Josh T. Horton, Yuezhi Mao, John D. Chodera, Benjamin P. Pritchard, Yuanqing Wang, Gianni De Fabritiis, Thomas E. Markland
T5QL: Taming language models for SQL generation
Samuel Arcadinho, David Aparício, Hugo Veiga, António Alegria
TAGPRIME: A Unified Framework for Relational Structure Extraction
I-Hung Hsu, Kuan-Hao Huang, Shuning Zhang, Wenxin Cheng, Premkumar Natarajan, Kai-Wei Chang, Nanyun Peng
Non-Programmers Can Label Programs Indirectly via Active Examples: A Case Study with Text-to-SQL
Ruiqi Zhong, Charlie Snell, Dan Klein, Jason Eisner
Evaluating the Impact of Model Scale for Compositional Generalization in Semantic Parsing
Linlu Qiu, Peter Shaw, Panupong Pasupat, Tianze Shi, Jonathan Herzig, Emily Pitler, Fei Sha, Kristina Toutanova
GraphQ IR: Unifying the Semantic Parsing of Graph Query Languages with One Intermediate Representation
Lunyiu Nie, Shulin Cao, Jiaxin Shi, Jiuding Sun, Qi Tian, Lei Hou, Juanzi Li, Jidong Zhai