Paper ID: 2412.07781
Can LLMs faithfully generate their layperson-understandable 'self'?: A Case Study in High-Stakes Domains
Arion Das, Asutosh Mishra, Amitesh Patel, Soumilya De, V. Gurucharan, Kripabandhu Ghosh
Large Language Models (LLMs) have significantly impacted nearly every domain of human knowledge. However, the explainability of these models, especially to laypersons, which is crucial for instilling trust, has been examined through various skeptical lenses. In this paper, we introduce a novel notion of LLM explainability to laypersons, termed $\textit{ReQuesting}$, across three high-priority application domains -- law, health, and finance -- using multiple state-of-the-art LLMs. The proposed notion exhibits faithful generation of explainable, layperson-understandable algorithms on multiple tasks with a high degree of reproducibility. Furthermore, we observe notable alignment of the explainable algorithms with the intrinsic reasoning of the LLMs.
Submitted: Nov 25, 2024