Margin Softmax
Margin softmax loss functions aim to improve the discriminative power of deep learning models by increasing the separation between different classes in the feature space. Current research focuses on refining these losses, addressing issues like class imbalance and computational cost through techniques such as adaptive margin scaling, partial updates of fully connected layers, and integration with other loss functions like focal loss and optimal transport. These advancements lead to improved performance in various applications, including image retrieval, face recognition, and speaker verification, by enhancing feature representation learning and generalization capabilities.
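To make the core idea concrete, below is a minimal sketch of an additive angular margin softmax head (ArcFace-style), written in PyTorch. It is not the exact formulation of either paper listed here; the class name, hyperparameters s and m, and default values are illustrative assumptions. The key step is adding a fixed margin to the angle between a normalized feature and its target class weight before scaling and applying cross-entropy, which pushes same-class features closer together and different classes farther apart.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MarginSoftmaxHead(nn.Module):
    """Hypothetical additive angular margin softmax head (ArcFace-style sketch).

    Normalizes both embeddings and class weights, adds an angular margin m
    to the target-class angle, scales the resulting cosine logits by s, and
    applies standard cross-entropy.
    """

    def __init__(self, embedding_dim: int, num_classes: int, s: float = 64.0, m: float = 0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, embedding_dim))
        self.s = s  # feature scale
        self.m = m  # additive angular margin (radians)

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between L2-normalized embeddings and class weights
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # Clamp for numerical stability before acos
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        # Add the margin only to the target-class angle
        target_mask = F.one_hot(labels, num_classes=self.weight.size(0)).bool()
        theta_m = torch.where(target_mask, theta + self.m, theta)
        logits = self.s * torch.cos(theta_m)
        return F.cross_entropy(logits, labels)

# Example usage with a batch of backbone embeddings (values are placeholders):
# head = MarginSoftmaxHead(embedding_dim=512, num_classes=1000)
# loss = head(backbone_features, class_labels)
```

Variants such as CosFace apply the margin directly to the cosine rather than the angle, and the adaptive-margin and Partial FC techniques mentioned above modify, respectively, how m is chosen per class and which columns of the class weight matrix are updated each step.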
Papers
Killing Two Birds with One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC
Xiang An, Jiankang Deng, Jia Guo, Ziyong Feng, Xuhan Zhu, Jing Yang, Tongliang Liu
OTFace: Hard Samples Guided Optimal Transport Loss for Deep Face Representation
Jianjun Qian, Shumin Zhu, Chaoyu Zhao, Jian Yang, Wai Keung Wong