Semantic Grounding
Semantic grounding in AI is the problem of establishing a robust connection between symbolic representations (such as words and sentences) and the real-world meanings or concepts they refer to. Current research focuses on improving the grounding of large language models and vision-language models (LLMs and VLMs) through techniques such as incorporating extrinsic knowledge, applying feedback mechanisms, and refining training data, with the aim of improving accuracy and reducing misalignment in tasks like object recognition and scene understanding. This work is crucial for AI reliability and safety, particularly in applications that require interaction with the physical world, such as robotics and autonomous systems.
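As a concrete illustration of a feedback-style grounding check, the sketch below compares object mentions parsed from a vision-language model's caption against labels returned by an object detector and flags unsupported mentions as potential misalignment. The helper names, inputs, and threshold are hypothetical placeholders for illustration, not a specific method from the papers collected here.

```python
# Minimal sketch of a grounding-feedback check (hypothetical helpers, not a
# specific published method): flag caption object mentions that no detector
# output supports, so they can be revised or down-weighted during training.

def grounding_feedback(caption_objects, detected_labels, min_support=1):
    """Return (grounded, ungrounded) object mentions from a caption.

    caption_objects: object nouns extracted from the model's caption,
                     e.g. ["dog", "frisbee", "unicorn"]
    detected_labels: label -> detection count from an object detector,
                     e.g. {"dog": 2, "frisbee": 1}
    min_support:     detections required to count a mention as grounded
    """
    grounded, ungrounded = [], []
    for obj in caption_objects:
        if detected_labels.get(obj, 0) >= min_support:
            grounded.append(obj)
        else:
            ungrounded.append(obj)  # candidate hallucination / misalignment
    return grounded, ungrounded


if __name__ == "__main__":
    caption_objects = ["dog", "frisbee", "unicorn"]   # parsed from a caption
    detected_labels = {"dog": 2, "frisbee": 1}        # detector outputs
    ok, bad = grounding_feedback(caption_objects, detected_labels)
    print("grounded:", ok)     # ['dog', 'frisbee']
    print("ungrounded:", bad)  # ['unicorn'] -> feedback signal for the model
```

In a feedback loop, the ungrounded mentions would serve as a corrective signal, for example prompting the model to revise its caption or filtering such examples from training data.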