Visual Analogue Scale
The Visual Analogue Scale (VAS) — a measurement instrument in which a respondent marks a point along a continuous line to indicate the intensity of a subjective experience — is not explicitly mentioned in the abstracts collected here, but it is implicitly relevant to many of the described projects. These projects develop and evaluate large-scale models across domains including language, image processing, and robotics, often using architectures such as transformers and techniques such as federated learning and imitation learning to improve efficiency and performance. Their overarching goal is to build more robust, scalable, and generalizable models, with applications ranging from natural language processing and computer vision to medical diagnosis and industrial automation. The success of these efforts hinges on evaluating model performance across diverse and complex tasks, which in turn requires robust and reliable evaluation metrics — a role that instruments such as the VAS can play for subjective or human-rated outcomes.
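As context for how a VAS yields a numeric metric: the conventional scoring procedure measures the distance of the respondent's mark from the left anchor of the line and normalizes it to a 0–100 score. A minimal sketch, assuming the common 100 mm line format (the function name and default length are illustrative, not from the papers above):

```python
def vas_score(mark_mm: float, line_length_mm: float = 100.0) -> float:
    """Convert a respondent's mark on a visual analogue scale into a score.

    mark_mm: distance of the mark from the left anchor, in millimetres.
    line_length_mm: total length of the printed line (100 mm is typical).
    Returns a score on a 0-100 scale.
    """
    if not 0.0 <= mark_mm <= line_length_mm:
        raise ValueError("mark must lie on the line")
    return 100.0 * mark_mm / line_length_mm

# A mark 63 mm along a 100 mm line scores 63.0;
# the same absolute distance on a 200 mm line scores 31.5.
```

The normalization makes scores comparable across forms printed at different line lengths, which is one reason VAS data are usually reported on a 0–100 scale.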
Papers
TRAK: Attributing Model Behavior at Scale
Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, Aleksander Madry
ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale
William Won, Taekyung Heo, Saeed Rashidi, Srinivas Sridharan, Sudarshan Srinivasan, Tushar Krishna
Augmenting Rule-based DNS Censorship Detection at Scale with Machine Learning
Jacob Brown, Xi Jiang, Van Tran, Arjun Nitin Bhagoji, Nguyen Phong Hoang, Nick Feamster, Prateek Mittal, Vinod Yegneswaran
Blockwise Self-Supervised Learning at Scale
Shoaib Ahmed Siddiqui, David Krueger, Yann LeCun, Stéphane Deny