A User-Centered Design Approach to Evaluating the Usability of Automated Essay Scoring Systems

Date

2023-09-21

Publisher

Virginia Tech

Abstract

In recent years, rapid advancements in computer science, including the increased capabilities of machine learning models such as Large Language Models (LLMs) and the accessibility of large datasets, have driven the widespread adoption of AI technologies like ChatGPT, underscoring the need to design and evaluate these technologies with ethical consideration for their impact on students and teachers. In particular, the rise of Automated Essay Scoring (AES) platforms has made it possible to provide real-time feedback and grades for student essays. Despite the increasing development and use of AES platforms, limited research has specifically examined AI explainability and algorithmic transparency and their influence on the usability of these platforms. To address this gap, we conducted a qualitative study of an AI-based essay writing and grading platform, focusing on the experiences of students and graders. The study examined usability aspects related to explainability and transparency and their implications for computer science education. Participants took part in surveys, semi-structured interviews, and a focus group. The findings reveal important considerations for evaluating AES systems, including the clarity of feedback and explanations, the impact and actionability of feedback and explanations, user understanding of the system, trust in AI, major issues and user concerns, system strengths, the user interface, and areas for improvement. These key considerations can help guide the development of effective essay feedback and grading tools that prioritize explainability and transparency to improve usability in computer science education.

Keywords

Usability, Algorithmic Transparency, Machine Learning, AI Explainability, Writing, Feedback
