Visual Analogue Scale
A Visual Analogue Scale (VAS) is a continuous rating instrument: a respondent marks a point on a line between two labeled anchors, and the position of the mark is read off as a numeric score. Although none of the abstracts collected here mention a VAS explicitly, the topic is implicitly relevant to many of the described projects. These papers develop and evaluate large-scale models across domains such as language, image processing, and robotics, often using transformer architectures together with techniques like federated learning and imitation learning to improve efficiency and performance. Their overarching goal is to build more robust, scalable, and generalizable models, with impact ranging from natural language processing and computer vision to medical diagnosis and industrial automation. Achieving that goal depends on evaluating model performance reliably across diverse and complex tasks, and continuous rating instruments such as the VAS are one candidate source of such evaluation signals, particularly where human judgment is part of the assessment.
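To make the connection concrete, the sketch below shows how VAS-style ratings are conventionally scored: each mark on a line (commonly 100 mm long) is measured as a distance from the left anchor, normalized to [0, 1], and aggregated across raters. This is a minimal illustration only; the function names and the use of such scores for rating model outputs are assumptions, not taken from any of the papers listed in this section.

```python
from statistics import mean, stdev

SCALE_LENGTH_MM = 100.0  # conventional VAS line length (assumed here)


def vas_score(mark_position_mm: float) -> float:
    """Convert a mark on a VAS line into a normalized score in [0, 1].

    The score is the distance of the respondent's mark from the left
    anchor, divided by the total line length.
    """
    if not 0.0 <= mark_position_mm <= SCALE_LENGTH_MM:
        raise ValueError("mark must lie on the scale line")
    return mark_position_mm / SCALE_LENGTH_MM


def summarize_ratings(marks_mm: list[float]) -> dict[str, float]:
    """Aggregate several VAS marks, e.g. human ratings of model outputs."""
    scores = [vas_score(m) for m in marks_mm]
    return {
        "mean": mean(scores),
        "stdev": stdev(scores) if len(scores) > 1 else 0.0,
        "n": float(len(scores)),
    }


if __name__ == "__main__":
    # Hypothetical ratings (in mm) of one model output on a 100 mm quality scale.
    print(summarize_ratings([72.0, 81.5, 64.0, 90.0]))
```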
Papers
The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale
Guilherme Penedo, Hynek Kydlíček, Loubna Ben allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, Thomas Wolf
MatText: Do Language Models Need More than Text & Scale for Materials Modeling?
Nawaf Alampara, Santiago Miret, Kevin Maik Jablonka
Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive?
Rylan Schaeffer, Hailey Schoelkopf, Brando Miranda, Gabriel Mukobi, Varun Madan, Adam Ibrahim, Herbie Bradley, Stella Biderman, Sanmi Koyejo
Differentiable Combinatorial Scheduling at Scale
Mingju Liu, Yingjie Li, Jiaqi Yin, Zhiru Zhang, Cunxi Yu