Hard Label
Hard labels, which assign each training example to a single definitive class (typically encoded as a one-hot vector), are a cornerstone of supervised learning but often fail to capture inherent data uncertainty. Current research focuses on mitigating this limitation, exploring alternatives such as soft labels (probability distributions over classes) to improve model performance, particularly with limited data or noisy annotations. This involves investigating methods for incorporating soft labels into training, including ensemble techniques and novel loss functions, and analyzing their impact on model accuracy, calibration, and robustness against adversarial attacks. An improved understanding and utilization of label uncertainty hold significant implications for enhancing the reliability and generalizability of machine learning models across diverse applications.
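To make the hard-versus-soft distinction concrete, the sketch below contrasts cross-entropy loss computed against a one-hot (hard) label with the same loss computed against a soft label. The soft label here is produced by label smoothing, one common way to inject label uncertainty; the specific prediction values and the smoothing factor `alpha` are illustrative assumptions, not from any particular paper.

```python
import numpy as np

def cross_entropy(probs, target):
    """Cross-entropy between a target distribution and predicted probabilities.

    With a one-hot target this reduces to -log(p) of the true class;
    with a soft target every class contributes to the loss.
    """
    eps = 1e-12  # guard against log(0)
    return -np.sum(target * np.log(probs + eps))

# Hypothetical model prediction over 3 classes
probs = np.array([0.7, 0.2, 0.1])

# Hard label: the example definitively belongs to class 0
hard = np.array([1.0, 0.0, 0.0])

# Soft label via label smoothing: spread a small mass alpha uniformly
alpha = 0.1
soft = hard * (1.0 - alpha) + alpha / len(hard)

print(cross_entropy(probs, hard))  # loss against the one-hot target
print(cross_entropy(probs, soft))  # loss against the smoothed target
```

With a confident, correct prediction like the one above, the smoothed target yields a slightly higher loss, which discourages the model from pushing its predicted probabilities to extremes and tends to improve calibration.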