Automated Assessment of Students' Short Answers


The objective of this project was to create an automated web application for assessing and scoring short answers in computer science. The tool directly addresses the labor-intensive, time-consuming process of manually grading written responses, a challenge educators across academic disciplines frequently encounter. The application is also versatile: it can be applied to a wide range of subjects beyond computer science, provided appropriate teacher answer files are supplied.

At the heart of the application lies a user-friendly interface built with ReactJS. This frontend allows educators to upload 'teacher' and 'student' files in .tsv format. After upload, the backend, developed using Flask, takes over, comparing each student response against the predefined model answers. The scoring mechanism employs semantic analysis based on a pretrained deep learning model, RoBERTa-large, which is integral to the AutoGrader class responsible for the semantic evaluation of the text.

The grading logic embedded within the AutoGrader class assesses student responses by breaking them down into phrases and computing the semantic similarity between each phrase and the concepts outlined in the model answers. The process uses SentenceTransformer to generate text embeddings, and scores are derived from the cosine similarity between the resulting vector representations. This method ensures a grading system that goes beyond simple keyword matching, evaluating the semantic content of the student answers rather than their surface form.
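A minimal sketch of this phrase-versus-concept scoring follows. The bag-of-words `embed` function is a stand-in so the sketch is self-contained; the actual application embeds text with SentenceTransformer and a RoBERTa-large model. The aggregation rule (best-matching phrase per concept, averaged over concepts) is likewise an assumption, since the summary does not state how phrase similarities are combined into a grade.

```python
import numpy as np

def embed(text, vocab):
    # Stand-in bag-of-words embedding; the real app uses
    # SentenceTransformer (RoBERTa-large) here.
    vec = np.zeros(len(vocab))
    for word in text.lower().split():
        if word in vocab:
            vec[vocab[word]] += 1.0
    return vec

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def grade(student_answer, concepts, vocab):
    """Score a response: for each model-answer concept, take the
    best-matching student phrase, then average over concepts."""
    phrases = [p.strip() for p in student_answer.split('.') if p.strip()]
    if not phrases:
        return 0.0
    phrase_vecs = [embed(p, vocab) for p in phrases]
    scores = [max(cosine(embed(c, vocab), pv) for pv in phrase_vecs)
              for c in concepts]
    return sum(scores) / len(scores)
```

Swapping `embed` for a real sentence-embedding model leaves the cosine-similarity scoring logic unchanged, which is what lets the grader capture paraphrases that keyword matching would miss.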

The application offers several features that enhance the user experience and give educators insight into student performance: scores and grades are displayed directly in the web application; detailed Grade Reports, listing each question, the student's response, the grade awarded, and the model answer, can be downloaded; and previous submissions can be reviewed, with historical documents such as past versions of the 'teacher file', 'student file', and grade reports available for download.

In terms of future development, the project team has outlined several goals: implementing a dataset-driven strategy for training the deep learning models, thereby advancing the current framework; accepting a wider variety of file types for both teacher and student files to increase the system's accessibility and usability; and updating the functionality and appearance of the web application with features such as scrolling, standardized formatting, and improved design elements to enhance the overall user experience.

The project was developed with the invaluable guidance and support of Dr. Mohamed Farag, a research associate at the Center for Sustainable Mobility at Virginia Tech. Dr. Farag's expertise in computer science and his commitment to educational innovation have been instrumental in steering the project towards success.

In conclusion, this project marks a significant advancement in educational technology, particularly in academic grading. By leveraging artificial intelligence and modern web technologies, it provides an efficient, reliable, and versatile tool for educators, streamlining the grading process and offering a scalable solution adaptable to various academic contexts. The future developments outlined promise to further extend the tool's capabilities, pointing toward a new era in academic assessment.