Abstract Meaning Representation
Abstract Meaning Representation (AMR) is a semantic formalism that captures the core meaning of a sentence as a rooted, directed graph, abstracting away from surface syntax to support a range of natural language processing (NLP) tasks. Current research focuses on improving AMR parsing accuracy and efficiency with transformer-based models and graph neural networks, and on integrating AMR with large language models (LLMs) to improve performance and interpretability in tasks such as question answering and dialogue generation. Because it provides a robust, interpretable semantic representation, AMR holds significant promise for advancing NLP research and improving the performance and explainability of applications, particularly in multilingual and cross-domain settings.
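For illustration, here is the classic AMR for "The boy wants to go" (from the original AMR annotation guidelines), written in PENMAN notation and then as a set of triples. This is a minimal sketch in plain Python with no AMR library; the helper functions are illustrative, not part of any standard toolkit.

```python
from collections import Counter

# PENMAN form of "The boy wants to go":
#   (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))
# Each triple is (source, role, target). ":instance" links a variable
# to its concept; the other roles are labeled edges of the graph.
triples = [
    ("w", ":instance", "want-01"),
    ("b", ":instance", "boy"),
    ("g", ":instance", "go-02"),
    ("w", ":ARG0", "b"),  # the boy is the wanter
    ("w", ":ARG1", "g"),  # the going event is what is wanted
    ("g", ":ARG0", "b"),  # the boy is also the goer (reentrancy)
]

def concept(var, triples):
    """Return the concept a variable instantiates."""
    return next(t for s, r, t in triples if s == var and r == ":instance")

def reentrant_vars(triples):
    """Variables filling more than one non-instance role; these
    reentrancies are what make AMR a graph rather than a tree."""
    counts = Counter(t for _, r, t in triples if r != ":instance")
    return {v for v, n in counts.items() if n > 1}

print(concept("w", triples))            # want-01
print(sorted(reentrant_vars(triples)))  # ['b']
```

The reentrancy of `b` (an argument of both `want-01` and `go-02`) is the kind of structure that distinguishes AMR from tree-shaped syntactic representations.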
Papers
Analyzing the Role of Semantic Representations in the Era of Large Language Models
Zhijing Jin, Yuen Chen, Fernando Gonzalez, Jiarui Liu, Jiayi Zhang, Julian Michael, Bernhard Schölkopf, Mona Diab
Identification of Entailment and Contradiction Relations between Natural Language Sentences: A Neurosymbolic Approach
Xuyao Feng, Anthony Hunter
Learning Symbolic Rules over Abstract Meaning Representations for Textual Reinforcement Learning
Subhajit Chaudhury, Sarathkrishna Swaminathan, Daiki Kimura, Prithviraj Sen, Keerthiram Murugesan, Rosario Uceda-Sosa, Michiaki Tatsubori, Achille Fokoue, Pavan Kapanipathi, Asim Munawar, Alexander Gray
Leveraging Denoised Abstract Meaning Representation for Grammatical Error Correction
Hejing Cao, Dongyan Zhao