Universal Model
Universal models aim to provide a single, adaptable machine learning system capable of handling diverse tasks and datasets without extensive retraining for each new application. Current research focuses on transformer-based architectures and on techniques such as prompt engineering, multi-stage decoding, and data augmentation to achieve this universality across domains including medical imaging, robotics, and natural language processing. Successful universal models promise significant advances: they reduce the need for task-specific models, improve efficiency, and facilitate knowledge transfer across applications, ultimately yielding more robust and adaptable AI systems.
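The prompt-conditioning idea behind several of the papers below can be illustrated with a minimal sketch, assuming a toy setup: one shared weight matrix serves all tasks, and each task contributes only a small learned prompt vector instead of a separately trained model. All names here (`UniversalModel`, `task_prompts`) are hypothetical and not drawn from any listed paper.

```python
# Minimal sketch (illustrative only): a single shared model adapted to
# different tasks via per-task prompt vectors rather than retraining.
import numpy as np

rng = np.random.default_rng(0)

class UniversalModel:
    """One shared projection; task identity enters only through a prompt."""

    def __init__(self, dim: int, n_out: int):
        self.W = rng.normal(size=(dim, n_out))  # weights shared across tasks
        self.task_prompts = {}                  # task name -> prompt vector

    def add_task(self, name: str, dim: int):
        # Registering a new task adds only a small prompt, not a new model.
        self.task_prompts[name] = rng.normal(size=dim)

    def forward(self, x: np.ndarray, task: str) -> np.ndarray:
        # Prompt conditioning: shift the input by the task prompt, then
        # pass it through the single shared projection.
        return (x + self.task_prompts[task]) @ self.W

model = UniversalModel(dim=8, n_out=3)
model.add_task("segmentation", dim=8)
model.add_task("detection", dim=8)

x = rng.normal(size=8)
seg_out = model.forward(x, "segmentation")
det_out = model.forward(x, "detection")
# Same weights produce task-specific outputs via the prompts alone.
```

In real systems the prompts are learned tokens prepended to a transformer's input sequence rather than additive vectors, but the economics are the same: adding a task costs a prompt, not a model.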
Papers
Uni$^2$Det: Unified and Universal Framework for Prompt-Guided Multi-dataset 3D Detection
Yubin Wang, Zhikang Zou, Xiaoqing Ye, Xiao Tan, Errui Ding, Cairong Zhao
Universal Medical Image Representation Learning with Compositional Decoders
Kaini Wang, Siping Zhou, Guangquan Zhou, Wentao Zhang, Bin Cui, Shuo Li
Fade-in Reverberation in Multi-room Environments Using the Common-Slope Model
Kyung Yun Lee, Nils Meyer-Kahlen, Georg Götz, U. Peter Svensson, Sebastian J. Schlecht, Vesa Välimäki
UCIP: A Universal Framework for Compressed Image Super-Resolution using Dynamic Prompt
Xin Li, Bingchen Li, Yeying Jin, Cuiling Lan, Hanxin Zhu, Yulin Ren, Zhibo Chen
CityLight: A Universal Model for Coordinated Traffic Signal Control in City-scale Heterogeneous Intersections
Jinwei Zeng, Chao Yu, Xinyi Yang, Wenxuan Ao, Qianyue Hao, Jian Yuan, Yong Li, Yu Wang, Huazhong Yang
Dealing with All-stage Missing Modality: Towards A Universal Model with Robust Reconstruction and Personalization
Yunpeng Zhao, Cheng Chen, Qing You Pang, Quanzheng Li, Carol Tang, Beng-Ti Ang, Yueming Jin