Trusting Sources and Machines
Abstract
Modern intelligence workflows are overwhelmed by the scale and speed of incoming information, while analyst resources remain limited. This imbalance calls for tools that surface the most relevant insights for decision-makers without overburdening human attention. Retrieval-Augmented Generation (RAG) has emerged as a promising solution, combining large language models with information retrieval systems. However, current RAG pipelines often lack the ability to rigorously evaluate and transparently communicate the uncertainty inherent in both their source material and their generative outputs. In this project, we explore how RAG-based systems might better characterize and convey epistemic uncertainty, both about the provenance and reliability of retrieved documents and about the confidence of the language model's synthesis.