Compositional Generalization
Compositional generalization, the ability of AI models to handle novel combinations of previously learned concepts, is a crucial area of research aimed at creating more robust and adaptable systems. Current efforts focus on understanding how different architectures, including transformers and modular neural networks, learn and generalize compositionally, often employing techniques such as meta-learning and compositional data augmentation to improve performance. This research is vital for advancing AI safety and building more human-like intelligence, with implications for applications such as natural language processing, robotics, and computer vision. Developing more effective compositional generalization methods is key to unlocking the full potential of AI systems in complex, real-world scenarios.
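To make the idea concrete, the sketch below illustrates compositional data augmentation in the style of SCAN-like benchmarks: a model is trained on primitives and modifiers separately, and augmentation generates every primitive-modifier pairing, including combinations never observed together. The vocabulary, action symbols, and function names here are hypothetical, not drawn from any of the papers listed.

```python
from itertools import product

# Hypothetical primitive commands and their action symbols (SCAN-style).
PRIMITIVES = {"jump": "JUMP", "walk": "WALK", "run": "RUN"}
# Modifiers that compose with any primitive by repetition.
MODIFIERS = {"twice": 2, "thrice": 3}

def interpret(command):
    """Compositionally map '<primitive> [modifier]' to an action sequence."""
    tokens = command.split()
    actions = [PRIMITIVES[tokens[0]]]
    if len(tokens) > 1:
        actions = actions * MODIFIERS[tokens[1]]
    return actions

def augment(primitives, modifiers):
    """Generate (command, actions) training pairs for every
    primitive-modifier combination, exercising compositions that
    may never co-occur in the original training data."""
    pairs = []
    for prim, mod in product(primitives, modifiers):
        cmd = f"{prim} {mod}"
        pairs.append((cmd, interpret(cmd)))
    return pairs

augmented = augment(["jump", "walk"], ["twice", "thrice"])
```

A model trained on such augmented pairs is then tested on held-out combinations (e.g. a primitive seen only in isolation paired with a familiar modifier) to probe whether it has learned the composition rule rather than memorized surface patterns.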
Papers
Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models
Jiaao Chen, Xiaoman Pan, Dian Yu, Kaiqiang Song, Xiaoyang Wang, Dong Yu, Jianshu Chen
Adapt and Decompose: Efficient Generalization of Text-to-SQL via Domain Adapted Least-To-Most Prompting
Aseem Arora, Shabbirhussain Bhaisaheb, Harshit Nigam, Manasi Patwardhan, Lovekesh Vig, Gautam Shroff
Learning Disentangled Prompts for Compositional Image Synthesis
Kihyuk Sohn, Albert Shaw, Yuan Hao, Han Zhang, Luisa Polania, Huiwen Chang, Lu Jiang, Irfan Essa
Differentiable Tree Operations Promote Compositional Generalization
Paul Soulos, Edward Hu, Kate McCurdy, Yunmo Chen, Roland Fernandez, Paul Smolensky, Jianfeng Gao