AI Foundation Model
AI foundation models are large, general-purpose models trained on massive datasets to perform a wide range of tasks across modalities including text, images, and code. Current research emphasizes making these models more efficient and adaptable, typically building on transformer architectures and applying techniques like retrieval-augmented instruction tuning and parameter-efficient fine-tuning to improve performance while addressing concerns such as bias and dual-use potential. These models are proving impactful across diverse fields, from medical imaging and climate modeling to process engineering and document processing, by enabling more robust and versatile AI applications.
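As a concrete illustration of parameter-efficient fine-tuning, the sketch below adapts a single frozen linear layer with a LoRA-style low-rank update in PyTorch. It is a minimal sketch under assumed names and hyperparameters (LoRALinear, r=8, alpha=16, 768-dimensional features), not code from any of the listed papers; only the small A and B matrices are trained, while the foundation model's own weights stay frozen.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative (hypothetical) LoRA-style adapter around a frozen linear layer.

    Instead of fine-tuning the full weight W, we learn an additive
    low-rank update W + (alpha / r) * B @ A, where A is (r x in) and
    B is (out x r), so the trainable parameter count is tiny.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A starts small and random, B starts at zero, so the initial
        # update is zero and training begins from the pretrained model.
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the trainable low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # roughly 2% of the layer
```

Because only the adapter parameters (about 2% of the layer in this sketch) require gradients, approaches like this can make fine-tuning feasible on resource-constrained hardware, which is the mobile edge setting the papers below address.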
Papers
Orchestration of Emulator Assisted Mobile Edge Tuning for AI Foundation Models: A Multi-Agent Deep Reinforcement Learning Approach
Wenhan Yu, Terence Jie Chua, Jun Zhao
FedPEAT: Convergence of Federated Learning, Parameter-Efficient Fine Tuning, and Emulator Assisted Tuning for Artificial Intelligence Foundation Models with Mobile Edge Computing
Terence Jie Chua, Wenhan Yu, Jun Zhao, Kwok-Yan Lam