Visual Analogue Scale
A Visual Analogue Scale (VAS) is a continuous rating instrument: a respondent marks a position along a line between two labeled anchors, and the mark's distance from one anchor yields the score. Although none of the abstracts below mention a VAS explicitly, the topic bears on many of the listed projects. These papers develop and evaluate large-scale models across language, image processing, and robotics, often building on transformer architectures and applying techniques such as federated learning and imitation learning to improve efficiency and performance. Their shared goal is more robust, scalable, and generalizable models, with applications spanning natural language processing, computer vision, medical diagnosis, and industrial automation. Progress toward that goal depends on reliably evaluating model performance across diverse and complex tasks, which in turn calls for dependable measurement instruments of the kind a VAS can provide.
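The connection is easiest to see in concrete terms. Below is a minimal Python sketch of how a raw VAS response is scored, assuming the conventional paper form: a 100 mm horizontal line whose left and right ends serve as the 0 and 100 anchors. The function name vas_score and its parameters are illustrative, not drawn from any of the listed papers.

def vas_score(mark_position_mm: float, line_length_mm: float = 100.0) -> float:
    """Convert a respondent's mark position into a 0-100 VAS score.

    The score is the mark's distance from the left anchor, rescaled so
    that the full line length maps onto the conventional 0-100 range.
    """
    if line_length_mm <= 0:
        raise ValueError("line_length_mm must be positive")
    if not 0.0 <= mark_position_mm <= line_length_mm:
        raise ValueError("the mark must lie on the line")
    return 100.0 * mark_position_mm / line_length_mm

# Example: a mark 63.5 mm from the left anchor on a standard 100 mm line.
print(vas_score(63.5))  # 63.5

Because the result is a continuous value on a fixed range, VAS-style scores can be averaged and compared across raters and tasks, which is what makes them a candidate metric for the kind of large-scale evaluation these papers require.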
Papers
SpectralEarth: Training Hyperspectral Foundation Models at Scale
Nassim Ait Ali Braham, Conrad M Albrecht, Julien Mairal, Jocelyn Chanussot, Yi Wang, Xiao Xiang Zhu
P/D-Serve: Serving Disaggregated Large Language Model at Scale
Yibo Jin, Tao Wang, Huimin Lin, Mingyang Song, Peiyang Li, Yipeng Ma, Yicheng Shan, Zhengfan Yuan, Cailong Li, Yajing Sun, Tiandeng Wu, Xing Chu, Ruizhi Huan, Li Ma, Xiao You, Wenting Zhou, Yunpeng Ye, Wen Liu, Xiangkun Xu, Yongsheng Zhang, Tiantian Dong, Jiawei Zhu, Zhe Wang, Xijian Ju, Jianxun Song, Haoliang Cheng, Xiaojing Li, Jiandong Ding, Hefei Guo, Zhengyong Zhang
Mitigating Metropolitan Carbon Emissions with Dynamic Eco-driving at Scale
Vindula Jayawardana, Baptiste Freydt, Ao Qu, Cameron Hickert, Edgar Sanchez, Catherine Tang, Mark Taylor, Blaine Leonard, Cathy Wu
LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale
Jaehong Cho, Minsu Kim, Hyunmin Choi, Guseul Heo, Jongse Park
GRUtopia: Dream General Robots in a City at Scale
Hanqing Wang, Jiahe Chen, Wensi Huang, Qingwei Ben, Tai Wang, Boyu Mi, Tao Huang, Siheng Zhao, Yilun Chen, Sizhe Yang, Peizhou Cao, Wenye Yu, Zichao Ye, Jialun Li, Junfeng Long, Zirui Wang, Huiling Wang, Ying Zhao, Zhongying Tu, Yu Qiao, Dahua Lin, Jiangmiao Pang
LLM Circuit Analyses Are Consistent Across Training and Scale
Curt Tigges, Michael Hanna, Qinan Yu, Stella Biderman