Fundamental Limitations
Research on fundamental limitations in artificial intelligence currently focuses on identifying and addressing bottlenecks in model capabilities and performance. Active areas include the reasoning limitations of large language models (LLMs), particularly their compositional abilities and handling of complex tasks; the quadratic time and memory complexity of transformer self-attention with respect to sequence length, and the challenge of developing subquadratic alternatives; and the impact of data quality and dataset size on model performance and safety. Understanding these limitations is crucial for improving the reliability, safety, and efficiency of AI systems and for developing more robust, generalizable models across applications.
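To make the quadratic-complexity point concrete, the following is a minimal sketch (not taken from any of the papers listed below) of single-head self-attention in plain NumPy. The score matrix Q Kᵀ has n × n entries for a length-n sequence, which is the source of the quadratic cost that subquadratic alternatives try to avoid; all variable names here are illustrative.

```python
import numpy as np

def naive_self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention over a length-n sequence.

    The (n, n) score matrix is what makes standard attention
    O(n^2) in both time and memory with respect to sequence length.
    """
    q = x @ w_q                                   # (n, d) queries
    k = x @ w_k                                   # (n, d) keys
    v = x @ w_v                                   # (n, d) values
    scores = q @ k.T / np.sqrt(q.shape[-1])       # (n, n): O(n^2) entries
    # Numerically stable row-wise softmax over the score matrix.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # (n, d) outputs

# Doubling n roughly quadruples the work spent building `scores`.
n, d = 512, 64
rng = np.random.default_rng(0)
x = rng.standard_normal((n, d))
w_q, w_k, w_v = (rng.standard_normal((d, d)) for _ in range(3))
out = naive_self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (512, 64)
```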
Papers
Data-Centric AI Governance: Addressing the Limitations of Model-Focused Policies
Ritwik Gupta, Leah Walker, Rodolfo Corona, Stephanie Fu, Suzanne Petryk, Janet Napolitano, Trevor Darrell, Andrew W. Reddie
Limitations of (Procrustes) Alignment in Assessing Multi-Person Human Pose and Shape Estimation
Drazic Martin, Pierre Perrault
Do Large Language Models Have Compositional Ability? An Investigation into Limitations and Scalability
Zhuoyan Xu, Zhenmei Shi, Yingyu Liang
Annealed Multiple Choice Learning: Overcoming limitations of Winner-takes-all with annealing
David Perera, Victor Letzelter, Théo Mariotte, Adrien Cortés, Mickael Chen, Slim Essid, Gaël Richard