Full Model
"Full Model" research encompasses the development and improvement of large-scale machine learning models across diverse applications, aiming to enhance performance, efficiency, and robustness. Current research focuses on addressing model vulnerabilities (e.g., adversarial attacks, hallucinations), improving efficiency for resource-constrained devices, and developing specialized models for specific domains (e.g., finance, astronomy, medical imaging). This work is significant for advancing AI capabilities in various fields and for mitigating potential risks associated with deploying complex models in real-world settings.
Papers
A Contemporary Overview: Trends and Applications of Large Language Models on Mobile Devices
Lianjun Liu, Hongli An, Pengxuan Chen, Longxiang Ye
PrefixKV: Adaptive Prefix KV Cache is What Vision Instruction-Following Models Need for Efficient Generation
Ao Wang, Hui Chen, Jianchao Tan, Kefeng Zhang, Xunliang Cai, Zijia Lin, Jungong Han, Guiguang Ding
RedStone: Curating General, Code, Math, and QA Data for Large Language Models
Yaoyao Chang, Lei Cui, Li Dong, Shaohan Huang, Yangyu Huang, Yupan Huang, Scarlett Li, Tengchao Lv, Shuming Ma, Qinzheng Sun, Wenhui Wang, Furu Wei, Ying Xin, Mao Yang, Qiufeng Yin, Xingxing Zhang
EchoONE: Segmenting Multiple echocardiography Planes in One Model
Jiongtong Hu, Wei Zhuo, Jun Cheng, Yingying Liu, Wufeng Xue, Dong Ni
RARE: Retrieval-Augmented Reasoning Enhancement for Large Language Models
Hieu Tran, Zonghai Yao, Junda Wang, Yifan Zhang, Zhichao Yang, Hong Yu
Gracefully Filtering Backdoor Samples for Generative Large Language Models without Retraining
Zongru Wu, Pengzhou Cheng, Lingyong Fang, Zhuosheng Zhang, Gongshen Liu
AH-OCDA: Amplitude-based Curriculum Learning and Hopfield Segmentation Model for Open Compound Domain Adaptation
Jaehyun Choi, Junwon Ko, Dong-Jae Lee, Junmo Kim
LayoutVLM: Differentiable Optimization of 3D Layout via Vision-Language Models
Fan-Yun Sun, Weiyu Liu, Siyi Gu, Dylan Lim, Goutam Bhat, Federico Tombari, Manling Li, Nick Haber, Jiajun Wu
Recurrent Neural Network on PICTURE Model
Weihan Xu
Mastering Board Games by External and Internal Planning with Language Models
John Schultz, Jakub Adamek, Matej Jusup, Marc Lanctot, Michael Kaisers, Sarah Perrin, Daniel Hennes, Jeremy Shar, Cannada Lewis, Anian Ruoss, Tom Zahavy, Petar Veličković, Laurel Prince, Satinder Singh, Eric Malmi, Nenad Tomašev
Noise Injection Reveals Hidden Capabilities of Sandbagging Language Models
Cameron Tice, Philipp Alexander Kreer, Nathan Helm-Burger, Prithviraj Singh Shahani, Fedor Ryzhenkov, Jacob Haimes, Felix Hofstätter, Teun van der Weij
DiffPatch: Generating Customizable Adversarial Patches using Diffusion Model
Zhixiang Wang, Guangnan Ye, Xiaosen Wang, Siheng Chen, Zhibo Wang, Xingjun Ma, Yu-Gang Jiang
FoundIR: Unleashing Million-scale Training Data to Advance Foundation Models for Image Restoration
Hao Li, Xiang Chen, Jiangxin Dong, Jinhui Tang, Jinshan Pan
MoTrans: Customized Motion Transfer with Text-driven Video Diffusion Models
Xiaomin Li, Xu Jia, Qinghe Wang, Haiwen Diao, Mengmeng Ge, Pengxiang Li, You He, Huchuan Lu
MFTF: Mask-free Training-free Object Level Layout Control Diffusion Model
Shan Yang
EventGPT: Event Stream Understanding with Multimodal Large Language Models
Shaoyu Liu, Jianing Li, Guanghui Zhao, Yunjian Zhang, Xin Meng, Fei Richard Yu, Xiangyang Ji, Ming Li
Enhancing the Generalization Capability of Skin Lesion Classification Models with Active Domain Adaptation Methods
Jun Ye
LVLM-COUNT: Enhancing the Counting Ability of Large Vision-Language Models
Muhammad Fetrat Qharabagh, Mohammadreza Ghofrani, Kimon Fountoulakis
Multi-Agent Collaboration in Incident Response with Large Language Models
Zefang Liu