Unified Framework
Unified frameworks in machine learning consolidate diverse approaches to a given problem into a single, coherent architecture, improving efficiency and enabling fair comparison between methods. Current research develops such frameworks across areas including recommendation systems, video understanding, and natural language processing, often building on transformer models, diffusion models, and recurrent neural networks. By placing competing methods behind one shared interface, these approaches improve model performance, make comparisons more robust, and offer better interpretability and controllability, advancing both theoretical understanding and practical applications across many domains.
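The core idea above, many interchangeable methods behind one shared interface so they can be swapped and compared, can be illustrated with a minimal sketch. This is a toy example, not code from any of the listed papers; the class and function names (`Retriever`, `KeywordRetriever`, `compare`) are hypothetical, and the two retrieval strategies are deliberately naive stand-ins for real components.

```python
from abc import ABC, abstractmethod

class Retriever(ABC):
    """Common interface every retrieval method must implement."""
    @abstractmethod
    def retrieve(self, query: str, k: int) -> list[str]: ...

class KeywordRetriever(Retriever):
    """Toy method 1: rank documents by word overlap with the query."""
    def __init__(self, docs: list[str]):
        self.docs = docs
    def retrieve(self, query: str, k: int) -> list[str]:
        q = set(query.lower().split())
        ranked = sorted(self.docs,
                        key=lambda d: -len(q & set(d.lower().split())))
        return ranked[:k]

class LengthRetriever(Retriever):
    """Toy method 2: prefer shorter documents (a naive baseline)."""
    def __init__(self, docs: list[str]):
        self.docs = docs
    def retrieve(self, query: str, k: int) -> list[str]:
        return sorted(self.docs, key=len)[:k]

def compare(methods: dict[str, Retriever], query: str, k: int = 1):
    """Run every method through the same interface -- the 'unified' part."""
    return {name: m.retrieve(query, k) for name, m in methods.items()}

docs = ["cats sit on mats", "dogs run", "cats and dogs play together"]
results = compare(
    {"keyword": KeywordRetriever(docs), "length": LengthRetriever(docs)},
    "where do cats sit",
)
print(results)  # each method's top-1 document under identical conditions
```

Because both methods satisfy the same `Retriever` contract, the comparison loop never changes when a new method is added, which is what makes side-by-side evaluation cheap in a unified framework.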
Papers
RAGLAB: A Modular and Research-Oriented Unified Framework for Retrieval-Augmented Generation
Xuanwang Zhang, Yunze Song, Yidong Wang, Shuyun Tang, Xinfeng Li, Zhengran Zeng, Zhen Wu, Wei Ye, Wenyuan Xu, Yue Zhang, Xinyu Dai, Shikun Zhang, Qingsong Wen
A Unified Framework for Continual Learning and Unlearning
Romit Chatterjee, Vikram Chundawat, Ayush Tarun, Ankur Mali, Murari Mandal
ULLME: A Unified Framework for Large Language Model Embeddings with Generation-Augmented Learning
Hieu Man, Nghia Trung Ngo, Franck Dernoncourt, Thien Huu Nguyen
OpenFactCheck: A Unified Framework for Factuality Evaluation of LLMs
Hasan Iqbal, Yuxia Wang, Minghan Wang, Georgi Georgiev, Jiahui Geng, Iryna Gurevych, Preslav Nakov