Confidence Minimization
Confidence minimization in machine learning aims to improve model reliability by reducing prediction confidence on uncertain or out-of-distribution inputs, thereby promoting safer and more robust AI systems. Current research focuses on techniques such as test-time adaptation of large language and vision-language models, often employing low-rank updates to attention weights or Bayesian methods with calibration regularization. These approaches improve both in-distribution accuracy and out-of-distribution detection, meeting a core requirement of safety-critical applications and increasing the trustworthiness of AI predictions. The ultimate goal is to develop AI systems that can accurately assess their own uncertainty and avoid making erroneous predictions in situations where they lack sufficient knowledge.
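To make the core idea concrete, the sketch below shows one common way to instantiate confidence minimization during training: a standard cross-entropy loss on in-distribution data combined with a penalty that pushes predictions on out-of-distribution inputs toward the uniform distribution (i.e., maximizes predictive entropy). This is a minimal illustration, not any specific method from the surveyed work; the model, data shapes, and the `ood_weight` trade-off parameter are assumptions.

```python
import torch
import torch.nn.functional as F


def confidence_minimization_loss(model, x_in, y_in, x_ood, ood_weight=0.5):
    """Supervised loss plus a confidence penalty on OOD inputs (illustrative sketch)."""
    # Standard cross-entropy on in-distribution data preserves accuracy.
    logits_in = model(x_in)
    ce_loss = F.cross_entropy(logits_in, y_in)

    # Confidence penalty on OOD data: cross-entropy against the uniform
    # distribution over classes, minimized when the model is maximally uncertain.
    logits_ood = model(x_ood)
    log_probs_ood = F.log_softmax(logits_ood, dim=-1)
    uniform_ce = -log_probs_ood.mean(dim=-1).mean()  # -(1/K) * sum_c log p_c, averaged over the batch

    return ce_loss + ood_weight * uniform_ce


# Toy usage with a hypothetical linear classifier and random data.
model = torch.nn.Linear(16, 4)
x_in, y_in = torch.randn(8, 16), torch.randint(0, 4, (8,))
x_ood = torch.randn(8, 16)
loss = confidence_minimization_loss(model, x_in, y_in, x_ood)
loss.backward()
```

In this formulation the OOD term only lowers confidence where the model has no supervision, so a well-chosen weight leaves in-distribution predictions sharp while making out-of-distribution inputs easier to flag by their low maximum softmax probability or high entropy.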