Designing Answer-Aware LLM Hints to Scaffold Deeper Learning in K–12 Programming Education
Abstract
Studies have shown that many K–12 students develop misconceptions about programming concepts such as variables, conditionals, and loops, particularly when learning through block-based environments like Scratch, where visual abstractions can obscure the underlying computational logic. While tools powered by artificial intelligence (AI) can provide quick help, they often give direct answers that reduce students' opportunities to reason through problems themselves. This work explores how AI can support learning without encouraging overreliance. In a study of 105 students using CodeKids, an open-source, curriculum-aligned learning platform developed by Virginia Tech in collaboration with local schools, 31.4% showed misconceptions about variable assignment and data types, and only 20% correctly solved conditional problems, highlighting the need for better scaffolding to address these conceptual gaps. To tackle this challenge, we designed and implemented an LLM-powered hint generation system within CodeKids. When students ask for help, the system generates short, step-by-step hints that encourage reasoning rather than direct answer-seeking. Grounded in Vygotsky's Zone of Proximal Development, this approach balances guidance and autonomy through structured prompting that preserves productive struggle. The system was tested with students and evaluated through automated analysis and surveys, which found the hints to be clear, helpful, and easy to use. Students who used the hints reported higher confidence and improved problem-solving skills. These results demonstrate promising progress toward using AI to support K–12 programming education and lay the foundation for future tools that personalize hints, adapt to different learners, and make AI-driven learning more effective and engaging.
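
To make the "answer-aware" framing of the title concrete, the sketch below shows one way such a hint request could be assembled. The title suggests the model is shown the correct answer so its hints stay on target, while a system instruction forbids revealing it; the sketch assumes that reading. It is a minimal illustration, not the CodeKids implementation: the prompt wording, the exercise fields, and the build_hint_request helper are all hypothetical, and the call to an actual LLM endpoint is deliberately omitted.

```python
# Minimal sketch of an "answer-aware" hint prompt (illustrative, not the
# CodeKids implementation). The model sees the correct answer but is
# instructed to scaffold the student's reasoning instead of revealing it.

HINT_SYSTEM_PROMPT = """\
You are a tutor helping K-12 students learn programming.
You will receive the exercise, its correct answer, and the student's attempt.
Never reveal the correct answer and never write the solution for the student.
Reply with ONE short hint (at most two sentences) that points the student
toward the next reasoning step."""

def build_hint_request(exercise: str, correct_answer: str, attempt: str) -> list[dict]:
    """Assemble chat messages; the answer is visible only to the model."""
    user_content = (
        f"Exercise:\n{exercise}\n\n"
        f"Correct answer (do NOT reveal):\n{correct_answer}\n\n"
        f"Student's current attempt:\n{attempt}"
    )
    return [
        {"role": "system", "content": HINT_SYSTEM_PROMPT},
        {"role": "user", "content": user_content},
    ]

# Example: a conditional-logic question like those described in the study.
messages = build_hint_request(
    exercise="score = 10. What prints for `if score >= 10: print('Level up!')`?",
    correct_answer="'Level up!' prints, because >= is true when the values are equal.",
    attempt="Nothing prints, because 10 is not greater than 10.",
)
# `messages` would then be sent to whatever chat-completion endpoint the
# platform uses; that client code is omitted here.
```

Keeping the answer inside the request lets the model judge how close the student's attempt is, while the non-disclosure instruction is what preserves the productive struggle the abstract describes.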