Using AI to Detect AI

Date

2025-07

Publisher

Virginia Tech

Abstract

This case study explores the ethical and practical challenges of using AI-based detection tools to identify student work potentially written with generative AI. Focusing on a case in which a student was falsely accused of academic misconduct by Turnitin’s AI detection software, the study questions the reliability, fairness, and long-term implications of automating academic integrity enforcement. While such tools are intended to curb cheating, their inaccuracies can irreparably damage students’ academic futures. As AI-generated content becomes more common, and even expected, in workplaces, universities face conflicting pressures to both regulate and integrate AI. The study proposes a shift away from punitive AI-detection regimes toward educational models that foster critical engagement with AI. Rather than fueling an AI arms race, the case argues for rethinking assignments and teaching students how to assess, interrogate, and ethically use AI tools. This approach equips students with valuable digital literacy skills for future employment, while mitigating fear-based overreliance on detection systems that may do more harm than good.

Keywords

Academic integrity, Generative AI in education, Critical AI literacy

Citation