Continuum Limit
The continuum limit describes the behavior of a system as its discrete components become infinitely small, effectively transitioning from a discrete to a continuous representation. Current research focuses on understanding and quantifying the limits of this transition across domains such as neural networks, agent-based models, and dynamical systems, often employing techniques such as metric entropy analysis together with new algorithms for improved approximation and efficiency. These investigations are central to understanding complex systems and to improving the performance of machine learning models and other computational methods under limited resources or inherent constraints.
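As a minimal illustration of the discrete-to-continuous transition (not drawn from any of the papers below), the sketch below integrates the ODE dy/dt = -y with a forward-Euler update at shrinking step sizes h and compares the result against the exact continuum solution y(t) = exp(-t). The function name `euler_trajectory` and the chosen step sizes are purely illustrative; the point is only that the discrete dynamics converge to the continuous ones as the step size tends to zero.

```python
import numpy as np

def euler_trajectory(h: float, t_end: float = 1.0) -> float:
    """Forward-Euler approximation of y(t_end) for dy/dt = -y, y(0) = 1."""
    n_steps = int(round(t_end / h))
    y = 1.0
    for _ in range(n_steps):
        y += h * (-y)  # discrete update; the continuum limit recovers the ODE
    return y

exact = np.exp(-1.0)  # continuum solution at t = 1
for h in [0.1, 0.01, 0.001, 0.0001]:
    err = abs(euler_trajectory(h) - exact)
    print(f"h = {h:7.4f}  |y_h(1) - y(1)| = {err:.2e}")
```

Running this shows the error shrinking roughly in proportion to h, the elementary sense in which a discrete model approaches its continuum limit; the research surveyed here studies when and how fast analogous limits hold in far richer settings.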
Papers
Push the Limit of Multi-modal Emotion Recognition by Prompting LLMs with Receptive-Field-Aware Attention Weighting
Liyun Zhang, Dian Ding, Yu Lu, Yi-Chao Chen, Guangtao Xue
Pushing the Limits of Large Language Model Quantization via the Linearity Theorem
Vladimir Malinovskii, Andrei Panferov, Ivan Ilin, Han Guo, Peter Richtárik, Dan Alistarh
Inference Scaling $\scriptsize\mathtt{F}$Laws: The Limits of LLM Resampling with Imperfect Verifiers
Benedikt Stroebl, Sayash Kapoor, Arvind Narayanan
Understanding the Limits of Vision Language Models Through the Lens of the Binding Problem
Declan Campbell, Sunayana Rane, Tyler Giallanza, Nicolò De Sabbata, Kia Ghods, Amogh Joshi, Alexander Ku, Steven M. Frankland, Thomas L. Griffiths, Jonathan D. Cohen, Taylor W. Webb
First, Learn What You Don't Know: Active Information Gathering for Driving at the Limits of Handling
Alexander Davydov, Franck Djeumou, Marcus Greiff, Makoto Suminaka, Michael Thompson, John Subosits, Thomas Lew