Paper ID: 2407.12626

Domain-specific or Uncertainty-aware models: Does it really make a difference for biomedical text classification?

Aman Sinha, Timothee Mickus, Marianne Clausel, Mathieu Constant, Xavier Coubez

The success of pretrained language models (PLMs) across a wide range of use cases has led to significant investment from the NLP community towards building domain-specific foundation models. On the other hand, in mission-critical settings such as biomedical applications, other aspects also factor in, chief among them a model's ability to produce reasonable estimates of its own uncertainty. In the present study, we discuss these two desiderata through the lens of how they shape the entropy of a model's output probability distribution. We find that domain specificity and uncertainty awareness can often be successfully combined, but that the exact task at hand weighs in much more strongly.
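As a rough illustration of the lens mentioned in the abstract (not code from the paper), the predictive entropy of a classifier's output distribution can be computed directly from its softmax probabilities; the model names in the commented usage below are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def predictive_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-example entropy H(p) = -sum_c p_c * log(p_c) of the output distribution."""
    probs = F.softmax(logits, dim=-1)
    return -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)

# Hypothetical usage: compare the output entropy of a general-domain and a
# domain-specific classifier on the same biomedical text classification batch.
# logits_general = general_model(**batch).logits      # e.g. a BERT-base classifier
# logits_domain  = biomedical_model(**batch).logits   # e.g. a biomedical PLM classifier
# print(predictive_entropy(logits_general).mean(), predictive_entropy(logits_domain).mean())
```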

Submitted: Jul 17, 2024