Modeling Student Software Testing Processes: Attitudes, Behaviors, Interventions, and Their Effects
dc.contributor.author | Buffardi, Kevin John | en |
dc.contributor.committeechair | Edwards, Stephen H. | en |
dc.contributor.committeemember | Tilevich, Eli | en |
dc.contributor.committeemember | Fowler, Shelli B. | en |
dc.contributor.committeemember | Shaffer, Clifford A. | en |
dc.contributor.committeemember | Perez-Quinones, Manuel A. | en |
dc.contributor.department | Computer Science | en |
dc.date.accessioned | 2014-07-24T08:00:12Z | en |
dc.date.available | 2014-07-24T08:00:12Z | en |
dc.date.issued | 2014-07-23 | en |
dc.description.abstract | Effective software testing identifies potential bugs and helps correct them, producing more reliable and maintainable software. As software development processes have evolved, incremental testing techniques have grown in popularity, particularly with the introduction of test-driven development (TDD). However, many programmers struggle to adopt TDD's "test a little, code a little" approach, and conventional computer science classrooms neglect to evaluate software development as a process. In response, we explore influences on students' testing behaviors and the effects of incremental testing strategies, and we describe approaches to help computer science students adopt good testing practices. First, to understand students' perspectives and adoption of testing strategies, we investigated their attitudes toward different aspects of TDD. In addition, we observed trends in when and how thoroughly students tested their code and how these choices impacted the quality of their assignments. With this insight into why students struggle to adopt incremental testing, we identified a need to assess their behaviors during the software development process, a departure from traditional product-oriented evaluation. By building upon an existing automated grading system, we developed an adaptive feedback system that provides customized incentives to reinforce incremental testing behaviors while students solve programming assignments. We investigated how students react to concrete testing goals and hint reward mechanisms, and we found approaches for identifying testing behaviors and influencing short-term behavioral change. Moreover, we discovered how students incorporate automated feedback systems into their software development strategies. Finally, we compared the testing strategies students exhibited by analyzing thousands of snapshots of students' code collected over five years of development. Even when accounting for factors such as procrastination on assignments, we found that testing early and maintaining testing consistently throughout development helps produce better-quality code and tests. By applying our findings on student software development behaviors to effective testing strategies and teaching techniques, we developed a framework for adaptively scaffolding feedback that empowers students to reflect critically on their code and adopt incremental testing approaches. | en |
dc.description.degree | Ph. D. | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:3398 | en |
dc.identifier.uri | http://hdl.handle.net/10919/49668 | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Computer Science Education | en |
dc.subject | Software Testing | en |
dc.subject | Test-driven development | en |
dc.subject | eLearning | en |
dc.subject | Adaptive Feedback | en |
dc.title | Modeling Student Software Testing Processes: Attitudes, Behaviors, Interventions, and Their Effects | en |
dc.type | Dissertation | en |
thesis.degree.discipline | Computer Science and Applications | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | doctoral | en |
thesis.degree.name | Ph. D. | en |