Shakir, Umair
2023-12-07; 2023-12-07; 2023-12-06
vt_gsexam:38900
https://hdl.handle.net/10919/117103

My dissertation examines how engineering educators can use natural language processing (NLP) to implement open-ended assessments in undergraduate engineering degree programs. Engineering students need to develop an ability to exercise judgment about better and worse outcomes of their decisions. One important consideration for improving engineering students' judgment involves creating sound educational assessments. Currently, engineering educators face a trade-off in selecting between open- and closed-ended assessments. Closed-ended assessments are easy to administer and score but are limited in what they measure, given that students are often required to choose from an a priori list. Conversely, open-ended assessments allow students to write their answers in their own words, in any way they choose. However, open-ended assessments are likely to require more person-hours to grade and to lack both inter-grader and intra-grader consistency. One solution to this challenge is the use of NLP. The working principles of existing NLP models, however, are the tallying of words, keyword matching, or syntactic similarity of words, which have often proved too brittle to capture the diversity of language that students may write. Therefore, the problem that motivated the present study is how to assess student responses based on underlying concepts and meanings rather than morphological characteristics or grammatical structure. Part of this problem can be addressed by developing NLP-assisted grading tools based on transformer-based large language models (TLLMs) such as BERT, MPNet, and GPT-4. TLLMs are trained on billions of words and have billions of parameters, thereby providing the capacity to capture richer semantic representations of input text. Although TLLMs have become available over the last five years, there is a significant lack of research on integrating them into the assessment of open-ended engineering case studies. My dissertation study aims to fill this research gap.

I developed and evaluated four NLP approaches based on TLLMs for thematic analysis of student responses to eight question prompts from engineering ethics and systems thinking case scenarios. The study's research design comprised the following steps. First, I developed an example bank for each question prompt using two procedures: (a) human-in-the-loop natural language processing (HILNLP) and (b) traditional qualitative coding. Second, I used the example banks to assign labels to unlabeled student responses with two NLP techniques: (i) k-Nearest Neighbors (kNN) and (ii) Zero-Shot Classification (ZSC). Specifically, I utilized the following configurations of these techniques: (i) kNN with k=1, (ii) kNN with k=3, (iii) ZSC with multi-labels=false, and (iv) ZSC with multi-labels=true. The kNN approach took both sentences and their labels from the example banks as input, whereas the ZSC approach took only the labels. Third, I read each sentence or phrase along with the model's suggested label(s), evaluated whether the assigned label represented the idea described in the sentence, and assigned one of the following numerical ratings: accurate (1), neutral (0), or inaccurate (-1). Lastly, I used those numerical evaluation ratings to calculate the accuracy of the NLP approaches.
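To make the two labeling techniques concrete, the sketch below shows, in Python, how kNN labeling with TLLM sentence embeddings and zero-shot classification could be wired together. It is a minimal illustration, not the dissertation's released code: the example-bank sentences, the theme labels, the majority-vote rule for k=3, the 0.5 score threshold, and the specific model names (all-mpnet-base-v2 and facebook/bart-large-mnli) are assumptions made for this sketch rather than details taken from the study.

# Minimal sketch of the two labeling techniques described above (illustrative assumptions noted in comments).
from collections import Counter

from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

# Hypothetical example bank: sentences from earlier coding, each paired with a theme label.
example_bank = [
    ("The engineer should disclose the defect to the public.", "public welfare"),
    ("Replacing the part raises costs across the whole supply chain.", "system interdependence"),
    ("Management pressure made it hard to report the issue.", "organizational constraints"),
]
candidate_labels = sorted({label for _, label in example_bank})

# kNN uses both the sentences and their labels from the example bank.
embedder = SentenceTransformer("all-mpnet-base-v2")  # an MPNet-based sentence encoder
bank_embeddings = embedder.encode([text for text, _ in example_bank], convert_to_tensor=True)


def knn_label(student_sentence: str, k: int = 1) -> str:
    """Assign a label from the k most similar example-bank sentences (cosine similarity)."""
    query = embedder.encode(student_sentence, convert_to_tensor=True)
    scores = util.cos_sim(query, bank_embeddings)[0]
    top_k = scores.topk(k).indices.tolist()
    votes = [example_bank[i][1] for i in top_k]
    return Counter(votes).most_common(1)[0][0]  # majority vote for k=3 is an assumption


# ZSC uses only the label set, not the example-bank sentences.
zsc = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")


def zsc_labels(student_sentence: str, multi_label: bool = False) -> list[str]:
    """Assign one label (multi_label=False) or every label scoring above a threshold."""
    result = zsc(student_sentence, candidate_labels, multi_label=multi_label)
    if multi_label:
        return [lab for lab, score in zip(result["labels"], result["scores"]) if score >= 0.5]
    return [result["labels"][0]]


if __name__ == "__main__":
    sentence = "I would warn customers even if my manager disagrees."
    print("kNN (k=3):", knn_label(sentence, k=3))
    print("ZSC (multi-label):", zsc_labels(sentence, multi_label=True))
    # A human rater then scores each suggested label as accurate (1), neutral (0),
    # or inaccurate (-1), and accuracy is computed from those ratings.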
The results of my study showed moderate accuracy in thematically analyzing students' open-ended responses to two different engineering case scenarios. This is because no single one of the four NLP methods performed consistently better than the others across all question prompts. The highest accuracy rate varied between 53% and 92%, depending on the question prompt and NLP method. Despite these mixed results, this study accomplishes multiple goals. My dissertation demonstrates to community members that TLLMs have the potential to improve classroom practices in engineering education. In doing so, my dissertation study takes up one aspect of instructional design: assessment of students' learning outcomes in engineering ethics and systems thinking skills. Further, my study offers important implications for practice in engineering education. First, I provide lessons and guidelines for educators interested in incorporating NLP into their educational assessments. Second, the open-source code is available in a GitHub repository, making it accessible to a larger group of users. Third, I offer suggestions for qualitative researchers on conducting NLP-assisted qualitative analysis of textual data. Overall, my study introduces state-of-the-art TLLM-based NLP approaches to a research field where they hold potential yet remain underutilized. This study can encourage engineering education researchers to utilize these NLP methods to analyze the vast textual data generated in engineering education, thereby reducing missed opportunities to glean information for actors and agents in the field.

Title: A Novel Method for Thematically Analyzing Student Responses to Open-ended Case Scenarios
Type: ETD (Dissertation)
Language: en
License: Creative Commons Attribution-NonCommercial 4.0 International
Subjects: natural language processing; open-ended assessments; engineering case studies; automatic short answer grading; computerized qualitative data analysis