Many-Parameter Models
Research on "many-parameter" models focuses on optimizing both how many parameters a machine learning architecture uses and how effectively it uses them, with the goal of improving efficiency and performance. Current efforts center on parameter-efficient fine-tuning techniques, on architectures such as transformers and graph convolutional networks, and on how parameter count affects model capability and generalization. This work matters because it tackles the computational cost and resource demands of large models, broadening their accessibility and applicability across fields such as medical imaging, robotics, and natural language processing.
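To make the idea of parameter-efficient fine-tuning concrete, the sketch below implements a LoRA-style low-rank adapter in PyTorch: the pretrained weight matrix is frozen and only a small rank-r update is trained. This is a minimal illustration of the general technique, not code from any of the papers listed below; the class name `LoRALinear` and the `rank`/`alpha` values are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update.

    The frozen weight W stays fixed; only the rank-r factors A and B are
    trained, reducing trainable parameters from d_out * d_in (plus bias)
    to r * (d_in + d_out).
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights

        d_out, d_in = base.weight.shape
        # A is initialized small and random, B at zero, so the adapter
        # starts as an exact no-op and fine-tuning moves it gradually.
        self.lora_a = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(d_out, rank))
        self.scale = alpha / rank  # illustrative scaling convention

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scale * x A^T B^T
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


# Hypothetical usage: adapt one 768x768 layer.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / total: {total}")  # ~2% of the layer
```

With rank 8 on a 768x768 layer, roughly 2% of the layer's parameters remain trainable, which is the kind of savings over full-parameter fine-tuning that the summary above refers to.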
Papers
Estimating the Distribution of Parameters in Differential Equations with Repeated Cross-Sectional Data
Hyeontae Jo, Sung Woong Cho, Hyung Ju Hwang
Med42 -- Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches
Clément Christophe, Praveen K Kanithi, Prateek Munjal, Tathagata Raha, Nasir Hayat, Ronnie Rajan, Ahmed Al-Mahrooqi, Avani Gupta, Muhammad Umar Salman, Gurpreet Gosal, Bhargav Kanakiya, Charles Chen, Natalia Vassilieva, Boulbaba Ben Amor, Marco AF Pimentel, Shadab Khan
SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models
Dongyang Liu, Renrui Zhang, Longtian Qiu, Siyuan Huang, Weifeng Lin, Shitian Zhao, Shijie Geng, Ziyi Lin, Peng Jin, Kaipeng Zhang, Wenqi Shao, Chao Xu, Conghui He, Junjun He, Hao Shao, Pan Lu, Hongsheng Li, Yu Qiao, Peng Gao
Determining the significance and relative importance of parameters of a simulated quenching algorithm using statistical tools
Pedro A. Castillo, Maribel García Arenas, Nuria Rico, Antonio Miguel Mora, Pablo García-Sánchez, Juan Luis Jiménez Laredo, Juan Julián Merelo Guervós