Language Foundation Model
Language foundation models are large, pre-trained AI systems that understand and generate text, and increasingly handle visual information as well, with the aim of robust performance across diverse tasks. Current research focuses on mitigating demographic biases, patching vulnerabilities such as backdoors, and adapting these models to specialized domains such as medical imaging and ancient text analysis, often through techniques like multi-modal prompting and visual-language alignment. These models are reshaping a range of fields, improving performance on tasks from anomaly detection to human mobility forecasting and pointing toward more equitable and reliable AI applications.
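As a concrete illustration of the visual-language alignment mentioned above, the sketch below shows a standard CLIP-style contrastive objective that pulls matched image-text embedding pairs together. This is a generic, minimal sketch, not the method of either paper listed below; the embedding dimensions, temperature, and random inputs standing in for encoder outputs are placeholder assumptions.

import torch
import torch.nn.functional as F

def contrastive_alignment_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched image-text pairs."""
    # L2-normalize so dot products become cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # (B, B) similarity matrix; the diagonal holds the matched pairs.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(image_emb.size(0), device=image_emb.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image direction
    return (loss_i2t + loss_t2i) / 2

# Toy usage: random tensors stand in for image/text encoder outputs.
batch, dim = 8, 512
img = torch.randn(batch, dim)
txt = torch.randn(batch, dim)
print(contrastive_alignment_loss(img, txt))

Minimizing this loss aligns the two modalities in a shared embedding space, which is what enables zero-shot transfer in models trained this way.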
Papers
Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong
Demographic Bias of Expert-Level Vision-Language Foundation Models in Medical Imaging
Yuzhe Yang, Yujia Liu, Xin Liu, Avanti Gulhane, Domenico Mastrodicasa, Wei Wu, Edward J Wang, Dushyant W Sahani, Shwetak Patel