Visual Analogue Scale
Visual Analogue Scale (VAS) research is not addressed explicitly in the abstracts below, but it bears on many of the described projects. These projects develop and evaluate large-scale models across domains including language, image processing, and robotics, often building on transformer architectures and using techniques such as federated learning and imitation learning to improve efficiency and performance. The overarching goal is to create more robust, scalable, and generalizable models, with impact ranging from natural language processing and computer vision to medical diagnosis and industrial automation. The success of these efforts hinges on evaluating model performance across diverse and complex tasks, which in turn requires robust and reliable evaluation metrics, such as those a VAS can provide.
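For concreteness, the sketch below shows one way VAS-style human ratings could feed into model evaluation: raters mark a continuous 0–100 scale for each model output, and the marks are normalized and aggregated. The 0–100 range, the function names, and the aggregation choices are illustrative assumptions, not methods taken from the papers listed here.

```python
from statistics import mean, stdev

def normalize_vas(score: float, scale_max: float = 100.0) -> float:
    """Map a raw VAS mark (0..scale_max, e.g. millimetres on a 100 mm line) to [0, 1]."""
    return min(max(score, 0.0), scale_max) / scale_max

def summarize_vas(ratings: list[float]) -> dict:
    """Aggregate per-output VAS ratings into a mean score with a spread estimate."""
    normalized = [normalize_vas(r) for r in ratings]
    return {
        "mean": mean(normalized),
        "stdev": stdev(normalized) if len(normalized) > 1 else 0.0,
        "n": len(normalized),
    }

# Hypothetical ratings from several annotators for a single model output.
print(summarize_vas([72.0, 65.5, 80.0, 70.0]))
```

In practice the per-output summaries would be averaged over a test set and compared across models; the point is only that a continuous scale like a VAS yields scores that are straightforward to normalize and aggregate.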
Papers
What Matters for Model Merging at Scale?
Prateek Yadav, Tu Vu, Jonathan Lai, Alexandra Chronopoulou, Manaal Faruqui, Mohit Bansal, Tsendsuren Munkhdalai
Teaching Transformers Modular Arithmetic at Scale
Eshika Saxena, Alberto Alfarano, Emily Wenger, Kristin Lauter
X-ALMA: Plug & Play Modules and Adaptive Rejection for Quality Translation at Scale
Haoran Xu, Kenton Murray, Philipp Koehn, Hieu Hoang, Akiko Eriguchi, Huda Khayrallah