AI Risk Management

Date

2025-06-16

Publisher

Virginia Tech

Abstract

This case study addresses the need for risk management in AI development as these technologies increasingly shape society. It opens by setting the hype of AI, its promises of abundance and civilization-altering automation, against the reality of systems that too often inflict harm, from biased sentencing algorithms to dangerous user-facing chatbots. The case then introduces the National Institute of Standards and Technology (NIST) AI Risk Management Framework, a voluntary set of practices that helps organizations map, measure, manage, and govern AI risks. Through the example of the Richmond, Virginia city government, it illustrates how applying the framework can reveal gaps in oversight, stakeholder engagement, and bias testing, especially when an AI system is developed outside the organization that deploys it. The case also brings to the fore the political, economic, and cultural dilemmas of establishing good governance, as critics argue that regulation stifles innovation and intrudes on personal choice. It situates these tensions within the broader context of emerging technologies, describing how the Collingridge Dilemma makes early intervention difficult, because a technology's consequences are hard to foresee, and late intervention all but impossible, because by then the technology has become entrenched. Finally, it challenges students to ask whether a society driven by rapid AI innovation can balance that pace with safety, fairness, and human dignity, and to consider what responsible AI would look like in practice.

Keywords

AI Development, Risk Management, Governance & Accountability
