Identifying, Measuring, and Addressing Algorithmic Bias in AI Admission Systems for Graduate Education

dc.contributor.author: Prakash, Ananya
dc.contributor.committeechair: Seyam, Mohammed Saad Mohamed Elmahdy
dc.contributor.committeemember: Farghally, Mohammed Fawzi Seddik
dc.contributor.committeemember: Brown, Dwayne Christian
dc.contributor.department: Computer Science & Applications
dc.date.accessioned: 2025-05-14T08:00:54Z
dc.date.available: 2025-05-14T08:00:54Z
dc.date.issued: 2025-05-13
dc.description.abstract: The number of graduate students has been increasing rapidly to meet industry demands, with an increase of over 200% in competitive fields like computer science (CS) in the past decade. Several universities have adopted AI in their admissions processes for tasks such as evaluating transcripts, extracting important information from essays, and scoring applications. While AI can greatly increase the efficiency of processing a large volume of applications, it is prone to data and algorithmic bias, which can lead to unfair outcomes for underprivileged subgroups among applicants. Recent legal developments, such as the U.S. Supreme Court's ban on affirmative action, make it increasingly relevant to study the demographic composition of admitted students and to ensure that we develop fair machine learning systems. We present a comprehensive two-phase methodology for detecting and mitigating algorithmic bias. Through analysis of graduate admissions data from the Computer Science department of a large R1 university, we found significant post-ban demographic shifts, including decreased applications from underrepresented groups and a 66% increase in applicants declining to report race after the affirmative action ban. Our preemptive bias detection phase includes exploratory data analysis, clustering, and subgroup discovery to identify both independent and intersectional sources of bias, revealing significant disparities based on citizenship status, gender, and race. We then developed and evaluated a neural network model using fairness metrics, discovering substantial bias amplification for gender and citizenship status. Our fairness evaluation and bias correction phase demonstrated that preprocessing and postprocessing mitigation techniques can significantly improve fairness metrics, though with varying effectiveness across protected attributes. SHAP analysis confirmed that while academic metrics like GPA remained the strongest predictors, demographic features substantially influenced model decisions even after mitigation. This work provides a systematic framework for institutions seeking to implement fair AI admissions systems while navigating new legal constraints on affirmative action, emphasizing the importance of proactive bias detection and mitigation to maintain diversity in higher education.
dc.description.abstractgeneral: This study examines how artificial intelligence (AI) systems used in graduate computer science admissions might perpetuate bias, especially following the recent ban on affirmative action. We developed a two-step approach to detect and mitigate bias in these systems. Our analysis revealed concerning trends: fewer applications from underrepresented groups and many more applicants choosing not to report their race after the ban. We discovered that admission prediction models significantly favored certain groups, particularly US citizens, and sometimes amplified existing biases substantially. While academic factors like GPA were the strongest predictors of admission, demographic characteristics still heavily influenced outcomes. We tested different bias correction techniques and found they could improve fairness, though no single approach worked perfectly for all groups. This research provides universities with practical tools to identify and reduce bias in AI admissions systems, helping maintain diversity in higher education while complying with new legal restrictions on affirmative action.
dc.description.degree: Master of Science
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:43157
dc.identifier.uri: https://hdl.handle.net/10919/132451
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Algorithmic Bias
dc.subject: AI Admissions
dc.subject: Graduate Education
dc.subject: Computer Science
dc.subject: Machine Learning
dc.title: Identifying, Measuring, and Addressing Algorithmic Bias in AI Admission Systems for Graduate Education
dc.type: Thesis
thesis.degree.discipline: Computer Science & Applications
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science

Files

Original bundle
Name: Prakash_A_T_2025.pdf
Size: 2.44 MB
Format: Adobe Portable Document Format