Continuum Limit
The continuum limit describes the behavior of systems as their discrete components become infinitely small, transitioning from a discrete to a continuous representation. Current research focuses on understanding and quantifying the limitations of this transition across domains such as neural networks, agent-based models, and dynamical systems, often employing techniques like metric entropy analysis and novel algorithms for improved approximation and efficiency. These investigations advance our understanding of complex systems and improve the performance of machine learning models and other computational methods in scenarios with limited resources or inherent constraints.
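As a minimal illustration of the discrete-to-continuous transition described above (not drawn from any of the listed papers), consider the finite-difference approximation of a derivative: as the grid spacing h shrinks, the discrete quotient converges to the continuous derivative, which is the continuum limit in its simplest form.

```python
import math

def forward_difference(f, x, h):
    """Discrete approximation to f'(x) on a grid with spacing h."""
    return (f(x + h) - f(x)) / h

# As h -> 0, the discrete quotient approaches the continuous derivative.
exact = math.cos(1.0)  # d/dx sin(x) evaluated at x = 1
errors = [abs(forward_difference(math.sin, 1.0, 10.0 ** -k) - exact)
          for k in range(1, 6)]

# The approximation error shrinks roughly linearly with h,
# illustrating convergence toward the continuum limit.
print(errors)
```

The error here decays like O(h); quantifying such convergence rates, and the cases where they break down, is the kind of limitation the research above investigates.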
Papers
Understanding the Limits of Vision Language Models Through the Lens of the Binding Problem
Declan Campbell, Sunayana Rane, Tyler Giallanza, Nicolò De Sabbata, Kia Ghods, Amogh Joshi, Alexander Ku, Steven M. Frankland, Thomas L. Griffiths, Jonathan D. Cohen, Taylor W. Webb
First, Learn What You Don't Know: Active Information Gathering for Driving at the Limits of Handling
Alexander Davydov, Franck Djeumou, Marcus Greiff, Makoto Suminaka, Michael Thompson, John Subosits, Thomas Lew
FineZip: Pushing the Limits of Large Language Models for Practical Lossless Text Compression
Fazal Mittu, Yihuan Bu, Akshat Gupta, Ashok Devireddy, Alp Eren Ozdarendeli, Anant Singh, Gopala Anumanchipalli
Numerical Approximation Capacity of Neural Networks with Bounded Parameters: Do Limits Exist, and How Can They Be Measured?
Li Liu, Tengchao Yu, Heng Yong
Examining Independence in Ensemble Sentiment Analysis: A Study on the Limits of Large Language Models Using the Condorcet Jury Theorem
Baptiste Lefort, Eric Benhamou, Jean-Jacques Ohana, Beatrice Guez, David Saltiel, Thomas Jacquot
1-Bit FQT: Pushing the Limit of Fully Quantized Training to 1-bit
Chang Gao, Jianfei Chen, Kang Zhao, Jiaqi Wang, Liping Jing