Compositional Generalization
Compositional generalization, the ability of AI models to handle novel combinations of previously learned concepts, is a crucial research area aimed at creating more robust and adaptable systems. Current efforts focus on understanding how different architectures, including transformers and modular neural networks, learn and generalize compositionally, often employing techniques such as meta-learning and data augmentation to improve performance. This research matters for AI safety and for building more human-like intelligence, with implications for applications such as natural language processing, robotics, and computer vision. Developing more effective compositional generalization methods is key to unlocking the full potential of AI systems in complex, real-world scenarios.
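To make the evaluation setup concrete, here is a minimal sketch of how a compositional-generalization split is typically constructed: training data covers the individual primitives and most of their combinations, while the test set contains only combinations never seen during training. The toy vocabulary, the "jump"-based held-out split, and the `make_example` semantics below are illustrative assumptions (loosely modeled on SCAN-style benchmarks), not taken from any of the papers listed.

```python
from itertools import product

# Hypothetical toy vocabulary: actions and modifiers that compose.
actions = ["jump", "walk", "run", "look"]
modifiers = ["twice", "thrice", "left", "right"]

# All possible action-modifier compositions.
all_pairs = list(product(actions, modifiers))

# Hold out every composition involving "jump": the model may see other
# action-modifier pairs at training time, but is tested only on novel
# combinations such as ("jump", "twice").
held_out = [(a, m) for a, m in all_pairs if a == "jump"]
train_pairs = [p for p in all_pairs if p not in held_out]

def make_example(action, modifier):
    """Map a composed command to its output token sequence (toy semantics)."""
    repeats = {"twice": 2, "thrice": 3}.get(modifier, 1)
    turn = {"left": ["TURN_LEFT"], "right": ["TURN_RIGHT"]}.get(modifier, [])
    return (f"{action} {modifier}", turn + [action.upper()] * repeats)

train_set = [make_example(a, m) for a, m in train_pairs]
test_set = [make_example(a, m) for a, m in held_out]

# A model that generalizes compositionally should solve test_set despite
# never having seen any "jump + modifier" composition during training.
```

Techniques like data augmentation attack this setting by synthesizing additional recombinations of known primitives for training, while meta-learning instead trains the model across many such splits so that it learns to recombine rather than memorize.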
Papers
Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models
Sourya Basu, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Vijil Chenthamarakshan, Kush R. Varshney, Lav R. Varshney, Payel Das
Categorizing Semantic Representations for Neural Machine Translation
Yongjing Yin, Yafu Li, Fandong Meng, Jie Zhou, Yue Zhang
ViLPAct: A Benchmark for Compositional Generalization on Multimodal Human Activities
Terry Yue Zhuo, Yaqing Liao, Yuecheng Lei, Lizhen Qu, Gerard de Melo, Xiaojun Chang, Yazhou Ren, Zenglin Xu
Robust and Controllable Object-Centric Learning through Energy-based Models
Ruixiang Zhang, Tong Che, Boris Ivanovic, Renhao Wang, Marco Pavone, Yoshua Bengio, Liam Paull