Full Model
"Full Model" research encompasses the development and improvement of large-scale machine learning models across diverse applications, aiming to enhance performance, efficiency, and robustness. Current research focuses on addressing model vulnerabilities (e.g., adversarial attacks, hallucinations), improving efficiency for resource-constrained devices, and developing specialized models for specific domains (e.g., finance, astronomy, medical imaging). This work is significant for advancing AI capabilities in various fields and for mitigating potential risks associated with deploying complex models in real-world settings.
Papers
Understanding Generative AI Content with Embedding Models
Max Vargas, Reilly Cannon, Andrew Engel, Anand D. Sarwate, Tony Chiang
Towards a Knowledge Graph for Models and Algorithms in Applied Mathematics
Björn Schembera, Frank Wübbeling, Hendrik Kleikamp, Burkhard Schmidt, Aurela Shehu, Marco Reidelbach, Christine Biedinger, Jochen Fiedler, Thomas Koprucki, Dorothea Iglezakis, Dominik Göddeke
Approximating Discrimination Within Models When Faced With Several Non-Binary Sensitive Attributes
Yijun Bian, Yujie Luo, Ping Xu
Building Decision Making Models Through Language Model Regime
Yu Zhang, Haoxiang Liu, Feijun Jiang, Weihua Luo, Kaifu Zhang
HcNet: Image Modeling with Heat Conduction Equation
Zhemin Zhang, Xun Gong
Saliency Detection in Educational Videos: Analyzing the Performance of Current Models, Identifying Limitations and Advancement Directions
Evelyn Navarrete, Ralph Ewerth, Anett Hoppe
Pairwise Judgment Formulation for Semantic Embedding Model in Web Search
Mengze Hong, Wailing Ng, Zichang Guo, Chen Jason Zhang
ComKD-CLIP: Comprehensive Knowledge Distillation for Contrastive Language-Image Pre-training Model
Yifan Chen, Xiaozhen Qiao, Zhe Sun, Xuelong Li