Fundamental Limitations
Research on fundamental limitations in artificial intelligence currently focuses on identifying and addressing bottlenecks in model capabilities and performance. Active areas include the reasoning limitations of large language models (LLMs), particularly compositional generalization and complex multi-step tasks; the quadratic time and memory complexity of self-attention in transformer architectures, together with the challenge of developing subquadratic alternatives; and the impact of training-data quality and scale on model performance and safety. Understanding these limitations is crucial for improving the reliability, safety, and efficiency of AI systems and for building more robust, generalizable models across applications.
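To make the quadratic-complexity point concrete, here is a minimal NumPy sketch, purely illustrative and not drawn from any of the papers listed below: self-attention materializes an n-by-n score matrix for a sequence of length n, so both compute and memory grow quadratically with sequence length.

import numpy as np

# Illustrative sketch of scaled dot-product self-attention.
# Q, K, V: (n, d) arrays for a sequence of length n with head dimension d.
def self_attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # (n, n): the quadratic bottleneck
    # Numerically stable row-wise softmax over the (n, n) score matrix.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # (n, d) output

n, d = 512, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = self_attention(Q, K, V)
# Doubling n quadruples the size of the (n, n) score matrix, which is
# why subquadratic attention alternatives are an active research area.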
Papers
How toxic is antisemitism? Potentials and limitations of automated toxicity scoring for antisemitic online content
Helena Mihaljević, Elisabeth Steffen
Exploring DINO: Emergent Properties and Limitations for Synthetic Aperture Radar Imagery
Joseph A. Gallego-Mejia, Anna Jungbluth, Laura Martínez-Ferrer, Matt Allen, Francisco Dorr, Freddie Kalaitzis, Raúl Ramos-Pollán
Potential and limitations of random Fourier features for dequantizing quantum machine learning
Ryan Sweke, Erik Recio, Sofiene Jerbi, Elies Gil-Fuster, Bryce Fuller, Jens Eisert, Johannes Jakob Meyer
Examining the Limitations of Computational Rumor Detection Models Trained on Static Datasets
Yida Mu, Xingyi Song, Kalina Bontcheva, Nikolaos Aletras
Limitations in odour recognition and generalisation in a neuromorphic olfactory circuit
Nik Dennler, André van Schaik, Michael Schmuker