Unified Framework
Unified frameworks in machine learning consolidate diverse approaches to a problem into a single, coherent architecture, improving efficiency and enabling direct comparative analysis. Current research develops such frameworks for tasks including recommendation systems, video understanding, and natural language processing, often building on transformer models, diffusion models, and recurrent neural networks. By sharing components across tasks and methods, these unified approaches improve model performance, make comparisons between methods more robust, and offer greater interpretability and controllability, advancing both theoretical understanding and practical applications across numerous domains.
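To make the idea concrete, a common pattern underlying many unified frameworks is a shared backbone that produces one representation, with lightweight task-specific heads on top, so heterogeneous tasks run through a single architecture. The sketch below is illustrative only and not drawn from any listed paper; all names (SharedBackbone, UnifiedModel, the task labels) are hypothetical.

```python
# Minimal sketch of the shared-backbone / task-head pattern behind many
# unified frameworks. Hypothetical names; not any specific paper's method.
import numpy as np

class SharedBackbone:
    """Maps raw inputs to a common representation shared by all tasks."""
    def __init__(self, in_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((in_dim, hidden_dim)) * 0.1

    def encode(self, x):
        # One shared feature space reused by every task head.
        return np.tanh(x @ self.w)

class UnifiedModel:
    """One model, many tasks: each task adds only a small output head."""
    def __init__(self, backbone, task_dims, seed=1):
        rng = np.random.default_rng(seed)
        self.backbone = backbone
        hidden = backbone.w.shape[1]
        self.heads = {task: rng.standard_normal((hidden, dim)) * 0.1
                      for task, dim in task_dims.items()}

    def forward(self, task, x):
        # Shared encoding, then the head selected by the task name.
        h = self.backbone.encode(x)
        return h @ self.heads[task]

backbone = SharedBackbone(in_dim=16, hidden_dim=32)
model = UnifiedModel(backbone, {"caption": 10, "grounding": 4})
x = np.zeros((2, 16))
print(model.forward("caption", x).shape)    # (2, 10)
print(model.forward("grounding", x).shape)  # (2, 4)
```

Because every task reuses the same encoder, adding a task costs only a new head, and methods can be compared on identical shared features, which is the efficiency and comparability benefit described above.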
Papers
Revealing the Parametric Knowledge of Language Models: A Unified Framework for Attribution Methods
Haeun Yu, Pepa Atanasova, Isabelle Augenstein
MM-TTS: A Unified Framework for Multimodal, Prompt-Induced Emotional Text-to-Speech Synthesis
Xiang Li, Zhi-Qi Cheng, Jun-Yan He, Xiaojiang Peng, Alexander G. Hauptmann
UniRGB-IR: A Unified Framework for RGB-Infrared Semantic Tasks via Adapter Tuning
Maoxun Yuan, Bo Cui, Tianyi Zhao, Jiayi Wang, Shan Fu, Xingxing Wei
MetaSD: A Unified Framework for Scalable Downscaling of Meteorological Variables in Diverse Situations
Jing Hu, Honghu Zhang, Peng Zheng, Jialin Mu, Xiaomeng Huang, Xi Wu
Rethinking 3D Dense Caption and Visual Grounding in A Unified Framework through Prompt-based Localization
Yongdong Luo, Haojia Lin, Xiawu Zheng, Yigeng Jiang, Fei Chao, Jie Hu, Guannan Jiang, Songan Zhang, Rongrong Ji
Unified Examination of Entity Linking in Absence of Candidate Sets
Nicolas Ong, Hassan Shavarani, Anoop Sarkar