Addressing uncertainty in LLM outputs for trust calibration through visualization and user interface design

dc.contributor.author: Armstrong, Helen
dc.contributor.author: Anderson, Ashley Lynne
dc.contributor.author: Planchart, Rebecca
dc.contributor.author: Baidoo, Kweku
dc.contributor.author: Peterson, Matthew
dc.date.accessioned: 2026-01-29T14:52:39Z
dc.date.available: 2026-01-29T14:52:39Z
dc.date.issued: 2025-08-15
dc.description.abstract: Large language models (LLMs) are becoming ubiquitous in knowledge work. However, the uncertainty inherent to LLM summary generation limits the efficacy of human-machine teaming, especially when users are unable to properly calibrate their trust in automation. Visual conventions for signifying uncertainty and interface design strategies for engaging users are needed to realize the full potential of LLMs. We report on an exploratory interdisciplinary project that resulted in four main contributions to explainable artificial intelligence in and beyond an intelligence analysis context. First, we provide and evaluate eight potential visual conventions for representing uncertainty in LLM summaries. Second, we describe a framework for uncertainty specific to LLM technology. Third, we specify 10 features for a proposed LLM validation system — the Multiple Agent Validation System (MAVS) — that utilizes the visual conventions, the framework, and three virtual agents to aid in language analysis. Fourth, we provide and describe four MAVS prototypes, one as an interactive simulation interface and the others as narrative interface videos. All four utilize a language analysis scenario to educate users on the potential of LLM technology in human-machine teams. To demonstrate applicability of the contributions beyond intelligence analysis, we also consider LLM-derived uncertainty in clinical decision-making in medicine and in climate forecasting. Ultimately, this investigation makes a case for the importance of visual and interface design in shaping the development of LLM technology.
dc.description.version: Published version
dc.format.extent: Pages 176-217
dc.format.extent: 41 page(s)
dc.format.mimetype: application/pdf
dc.identifier.issue: 2
dc.identifier.orcid: Anderson, Ashley [0000-0003-2361-7030]
dc.identifier.uri: https://hdl.handle.net/10919/141043
dc.identifier.volume: 59
dc.language.iso: en
dc.publisher: Visible Language Consortium
dc.rights: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: explainable AI
dc.subject: human-machine teaming
dc.subject: intelligence analysis
dc.subject: large language models
dc.subject: trust calibration
dc.subject: uncertainty
dc.subject: user interface design
dc.subject: visual representation
dc.title: Addressing uncertainty in LLM outputs for trust calibration through visualization and user interface design
dc.title.serial: Visible Language
dc.type: Article - Refereed
dc.type.dcmitype: Text
dc.type.other: Article
pubs.organisational-group: Virginia Tech
pubs.organisational-group: Virginia Tech/Architecture, Arts, and Design
pubs.organisational-group: Virginia Tech/Architecture, Arts, and Design/School of Visual Arts
pubs.organisational-group: Virginia Tech/All T&R Faculty

Files

Original bundle
Name: addressing-uncertainty-in-llm-outputs-for-trust-calibration-through-visualization-and-user-interface-design (2).pdf
Size: 8.63 MB
Format: Adobe Portable Document Format
Description: Published version
License bundle
Name: license.txt
Size: 1.5 KB
Format: Plain Text