Towards Improving Students' Software Testing Practices using Modified Mutation Testing
dc.contributor.author | Mansur, Rifat Sabbir | en |
dc.contributor.committeechair | Shaffer, Clifford A. | en |
dc.contributor.committeechair | Edwards, Stephen H. | en |
dc.contributor.committeemember | Price, Thomas W. | en |
dc.contributor.committeemember | Tilevich, Eli | en |
dc.contributor.committeemember | Servant Cortes, Francisco Javier | en |
dc.contributor.department | Computer Science & Applications | en |
dc.date.accessioned | 2025-04-03T08:00:19Z | en |
dc.date.available | 2025-04-03T08:00:19Z | en |
dc.date.issued | 2025-04-02 | en |
dc.description.abstract | Mutation testing (MT) is a powerful technique for evaluating the quality of software test suites by introducing small faults, called "mutations," into code to assess whether tests can detect them. While MT has been extensively applied in the software industry, its use in programming courses faces both computational and pedagogical barriers. My research investigates the integration of MT in a post-CS2 Data Structures and Algorithms (DSA) course with 3-4-week-long programming projects. Through a comprehensive study across multiple semesters, I investigated three key aspects: the computational demands of MT in an educational auto-grading system, the effect of MT on student test suite quality and coding practices, and the development of a framework for effectively integrating MT in programming courses. Initially, the implementation of standard MT showed mixed results due to inadequate stock feedback. This prompted me to develop a tailored approach that modified MT feedback while also incorporating additional documentation and training materials. I also observed a noticeable increase (30-50 seconds per submission) in the auto-grader's processing time and feedback turnaround time when using MT, raising concerns about potential server overload. At the same time, the collection of changes made to the environment and requirements as part of this intervention led to an overall reduction in the number of submissions per student needed to complete the projects. My findings suggest that students using modified MT, as a group, demonstrated higher-quality test suites and wrote better solution code than students whose test suites were graded on code coverage. This version of MT with modified feedback also showed positive results in student understanding and application of MT principles compared to MT with stock feedback. 
Analysis of IDE activity data, code submissions, and 38 semi-structured student interviews led me to develop a framework for introducing MT as an effective intervention. My research thus provides a framework for effectively integrating MT in programming courses, contributing to improved student test suite development and offering practical guidelines for instructors introducing MT in undergraduate Computer Science courses. | en |
dc.description.abstractgeneral | Software plays a crucial role in our daily lives, from the apps on our smartphones to the systems that control our cars and homes. Ensuring that software works correctly and reliably is essential to prevent potential harm or inconvenience caused by malfunctions. One way to ensure software quality is through thorough testing, which involves writing test cases that check whether the software behaves as expected under various conditions. One advanced testing technique used in the software industry is mutation testing (MT), which deliberately introduces small changes (or "mutations") into working code to see if existing tests can catch these artificial mistakes. While this technique is powerful, its adoption in programming courses has been challenging due to technical limitations and teaching difficulties. My research focused on successfully introducing MT in a college-level computer programming course. I studied how MT could be effectively taught to students while managing the additional computational demands it places on the course's automated grading system. Over multiple semesters, I developed and refined an approach that made MT more accessible and beneficial for students. Initially, using standard MT tools proved problematic because students found the stock feedback difficult to understand. In response, I developed a more student-friendly version with clear explanations, documentation, and training materials. While this improved version did require more processing time from the grading system, I also applied solutions to alleviate computational overload on the course's automated grading system, such as allowing students to run MT on their own computers. The results were encouraging: students who used modified MT wrote better tests and produced higher-quality code compared to students graded using the previous test suite quality measure (code coverage). 
Following interviews with students and analysis of their coding patterns, I developed a comprehensive framework for successfully introducing MT in programming courses. Finally, my research provides practical guidelines for instructors who want to incorporate this industry-standard testing technique into their teaching, ultimately helping students become better programmers. My research contributes to the field of Computer Science education by improving training in testing, helping students create more robust software tests, and providing instructors with insights into effectively teaching this influential yet complex topic. | en |
dc.description.degree | Doctor of Philosophy | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:42568 | en |
dc.identifier.uri | https://hdl.handle.net/10919/125125 | en |
dc.language.iso | en | en |
dc.publisher | Virginia Tech | en |
dc.rights | Creative Commons Attribution-NonCommercial 4.0 International | en |
dc.rights.uri | http://creativecommons.org/licenses/by-nc/4.0/ | en |
dc.subject | CS education | en |
dc.subject | post-CS2 | en |
dc.subject | data structures and algorithms | en |
dc.subject | software testing | en |
dc.subject | mutation testing | en |
dc.subject | software engineering education | en |
dc.subject | automated assessment tool | en |
dc.title | Towards Improving Students' Software Testing Practices using Modified Mutation Testing | en |
dc.type | Dissertation | en |
thesis.degree.discipline | Computer Science & Applications | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | doctoral | en |
thesis.degree.name | Doctor of Philosophy | en |