Commonsense Transformer
Commonsense Transformer models aim to integrate everyday knowledge into artificial intelligence systems, enabling them to reason beyond the literal information present in their training data. Current research focuses on enhancing vision-language transformers with commonsense knowledge, often by retrieving facts from knowledge bases or distilling them from pre-trained language models, to improve performance on tasks such as visual question answering and referring expression comprehension. This work is significant because it addresses a critical limitation of many AI systems, the lack of common sense, and holds potential for improving the robustness and real-world applicability of a wide range of AI applications.
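To make the general idea concrete, the sketch below shows one common pattern under illustrative assumptions: commonsense triples retrieved from a knowledge base (e.g., ConceptNet-style facts) are verbalised into text tokens and jointly encoded with the question and pre-extracted image-region features. The class name, dimensions, and toy inputs are hypothetical, not taken from any specific paper in this collection.

```python
# Minimal sketch of knowledge injection into a vision-language transformer.
# Retrieved commonsense facts are verbalised as extra text tokens and fused
# with question tokens and image-region features via joint self-attention.
# All names, sizes, and the toy vocabulary are illustrative assumptions.
import torch
import torch.nn as nn


class CommonsenseVLTransformer(nn.Module):
    def __init__(self, vocab_size=1000, d_model=256, n_heads=4, n_layers=2,
                 n_answers=100, image_feat_dim=2048):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Project pre-extracted image-region features (e.g., from a frozen
        # object detector) into the shared transformer embedding space.
        self.image_proj = nn.Linear(image_feat_dim, d_model)
        # Segment embeddings distinguish question, knowledge, and image tokens.
        self.segment_emb = nn.Embedding(3, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.answer_head = nn.Linear(d_model, n_answers)

    def forward(self, question_ids, knowledge_ids, image_feats):
        # question_ids:  (B, Lq) token ids of the question
        # knowledge_ids: (B, Lk) token ids of verbalised commonsense triples
        # image_feats:   (B, R, image_feat_dim) detector region features
        q = self.token_emb(question_ids) + self.segment_emb.weight[0]
        k = self.token_emb(knowledge_ids) + self.segment_emb.weight[1]
        v = self.image_proj(image_feats) + self.segment_emb.weight[2]
        # Joint self-attention over question, knowledge, and visual tokens
        # lets answers be grounded in both the image and the retrieved facts.
        fused = self.encoder(torch.cat([q, k, v], dim=1))
        # Pool and classify over a fixed answer vocabulary, as in VQA setups.
        return self.answer_head(fused.mean(dim=1))


if __name__ == "__main__":
    model = CommonsenseVLTransformer()
    question = torch.randint(0, 1000, (2, 12))   # toy question tokens
    knowledge = torch.randint(0, 1000, (2, 20))  # toy verbalised triples
    regions = torch.randn(2, 36, 2048)           # toy region features
    logits = model(question, knowledge, regions)
    print(logits.shape)  # torch.Size([2, 100])
```

In practice the knowledge tokens come from a retrieval step over a commonsense knowledge base keyed on entities detected in the image or mentioned in the question; the joint encoding shown here is only one of several fusion strategies explored in this line of work.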