Automating Death: AI and Battlefield Weapons Decisions

Date

2025-06

Publisher

Virginia Tech

Abstract

This case study explores the ethical, legal, and technological implications of Lethal Autonomous Weapons Systems (LAWS) through the fictional but realistic development of AIM-R, a defense system driven by narrow AI and modeled on existing military technology. Designed to protect U.S. Forward Operating Bases from aerial threats, AIM-R operates independently, selecting and engaging targets without direct human intervention. While the system reduces soldier casualties, it raises urgent concerns about accountability, transparency, and the dehumanization of warfare. Who bears responsibility when a machine misfires or kills a civilian? Can code ever be held morally or legally accountable? The case also considers how increasing AI complexity limits human comprehension of battlefield decisions, making oversight and correction difficult. Despite Department of Defense (DoD) guidelines, the pressure to match adversarial capabilities accelerates the deployment of these systems. By confronting the line between defensive automation and moral agency, this case compels reflection on the future of warfare, the ethics of killing at a distance, and what it means to hand life-and-death decisions over to machines.

Keywords

Lethal autonomous weapons, Military AI ethics, Algorithmic accountability
