SPHERE: Supporting Personalized Feedback at Scale in Programming Classrooms with Structured Review of Generative AI Outputs
Abstract
This paper introduces SPHERE, a system that enables instructors to effectively create and review personalized feedback for in-class coding activities. Comprehensive personalized feedback is crucial for learning programming, but providing such feedback in large programming classrooms poses significant challenges for instructors. While Large Language Models (LLMs) offer potential assistance, how to efficiently ensure the quality of LLM-generated feedback remains an open question. SPHERE directs instructors’ attention to students’ critical issues, gives instructors guided control over LLM-generated feedback, and provides visual scaffolding that facilitates verification of feedback quality. Our between-subjects study with 20 participants demonstrates that SPHERE helps instructors create more high-quality feedback without increasing the time spent on the overall review process compared to a baseline system. This work contributes a synergistic approach to scaling personalized feedback in programming education, addressing the challenges of real-time response, issue prioritization, and large-scale personalization.