Loss Function
Loss functions are crucial components of machine learning models, guiding the learning process by quantifying the difference between predicted and actual values. Current research emphasizes developing loss functions tailored to specific challenges, such as class imbalance in classification (addressed through asymmetric losses and hyperparameter distributions) and robustness to noise and outliers (using bounded and smooth alternatives to standard functions like mean squared error). These advancements improve model accuracy, efficiency, and generalizability across diverse applications, including medical image analysis, time series prediction, and physics-informed neural networks. The ongoing exploration of loss function design directly impacts the performance and reliability of machine learning models in various scientific and engineering domains.
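The robustness point can be made concrete. Compared with mean squared error, whose quadratic growth lets a single outlier dominate the average, the Huber loss is a smooth alternative that grows only linearly for large residuals, and Tukey's biweight is a bounded alternative that saturates at a constant. A minimal NumPy sketch using the standard textbook forms of these losses (illustrative only, not taken from any of the papers listed below):

```python
import numpy as np

def mse_loss(y_true, y_pred):
    """Mean squared error: grows quadratically, so outliers dominate the mean."""
    return np.mean((y_true - y_pred) ** 2)

def huber_loss(y_true, y_pred, delta=1.0):
    """Smooth robust alternative: quadratic for |r| <= delta, linear beyond it."""
    r = y_true - y_pred
    small = np.abs(r) <= delta
    return np.mean(np.where(small, 0.5 * r**2,
                            delta * (np.abs(r) - 0.5 * delta)))

def tukey_biweight_loss(y_true, y_pred, c=4.685):
    """Bounded robust loss: saturates at c**2 / 6, so a gross outlier
    contributes only a fixed constant rather than an unbounded penalty."""
    r = (y_true - y_pred) / c
    inlier = np.abs(r) <= 1.0
    return np.mean(np.where(inlier,
                            (c**2 / 6) * (1 - (1 - r**2) ** 3),
                            c**2 / 6))
```

With one large outlier among otherwise perfect predictions, the three losses order exactly as the paragraph suggests: MSE is dominated by the outlier, Huber penalizes it only linearly, and the biweight caps its contribution.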
Papers
Sigmoid Loss for Language Image Pre-Training
Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer
On the Connection between $L_p$ and Risk Consistency and its Implications on Regularized Kernel Methods
Hannes Köhler
Intersection over Union with smoothing for bounding box regression
Petra Števuliáková, Petr Hurtik
Revisiting the Fragility of Influence Functions
Jacob R. Epifano, Ravi P. Ramachandran, Aaron J. Masino, Ghulam Rasool
Distribution-restrained Softmax Loss for the Model Robustness
Hao Wang, Chen Li, Jinzhe Jiang, Xin Zhang, Yaqian Zhao, Weifeng Gong
Error Analysis of Physics-Informed Neural Networks for Approximating Dynamic PDEs of Second Order in Time
Yanxia Qian, Yongchao Zhang, Yunqing Huang, Suchuan Dong