Is AI Good for All? Ethical Concerns for the Training and Implementation of AI-Driven Decision-Making Systems

dc.contributor.author: Goncalves Reis, Ana Carolina
dc.contributor.committeechair: Seref, Onur
dc.contributor.committeemember: Thompson, Phillip Steven
dc.contributor.committeemember: Adjerid, Idris
dc.contributor.committeemember: Wang, Alan Gang
dc.contributor.department: Business, Business Information Technology
dc.date.accessioned: 2025-05-09T08:04:28Z
dc.date.available: 2025-05-09T08:04:28Z
dc.date.issued: 2025-05-08
dc.description.abstract: Artificial intelligence (AI) systems increasingly mediate social and economic interactions. They automate a wide range of decisions, such as selecting the best candidate for a job and determining which information should be allowed (or censored) online. To avoid a dystopian, algorithm-driven future, it is paramount to raise ethical questions about the training and implementation of AI-driven decision-making systems. In this dissertation, we raise some of these questions by examining (1) how the implementation of AI systems in the hiring process affects men's and women's behaviors in the labor market, and (2) how the process of human decision-making about digital toxicity generates inconsistent data, which is then used to train the AI systems responsible for determining what information can be shared online. To answer the first question, we investigate how disclosure of the use of hiring algorithms affects applicants' willingness to pursue job opportunities in the labor market. Inspired by the literature on differences in job-search persistence for higher-return, male-typed jobs, we conducted two longitudinal field experiments (n1 = 2,947; n2 = 592) to examine men's and women's behaviors when they know algorithms are making hiring decisions. We found that men (but not women) are more likely to apply for higher-return jobs after rejection when they know hiring algorithms are making the hiring decisions (compared to men who know humans are making the hiring decisions). This gender difference in behavior increased the representation of men in the applicant pool for higher-return jobs, thereby widening the gender gap, a phenomenon we coined the "overflowing pipeline problem." To answer the second question, we explore the content moderation process and study how decisions are made about what content is allowed online.
Generative AI and social media platforms rely on human workers for content moderation, the process through which toxic content is reviewed, labeled, and removed from online platforms. However, this process is often kept opaque: platforms veil how moderation decisions are actually made. Recent literature has shed light on the effectiveness of different content moderation practices, yet little is known about the details of the content moderation decision-making process. To fill this gap, we interviewed content moderators (n = 24) located across the globe who work for large online platforms. Using grounded theory techniques, we constructed a model depicting the content moderation decision-making process. This model highlights the role of raw content, predefined guidelines, human internal tensions, and organizational regimes of value in shaping the production of moderation outputs, including AI training data. This overarching work is of interest to both research and practice, as it advances our understanding of the influence of AI on human ecosystems and provides insights into the discussion around AI regulation.
dc.description.abstractgeneral: An array of artificial intelligence (AI) models automates a wide range of decisions. AI systems select the best candidates for jobs and determine which information should be allowed (or censored) online. Given their active role in organizations and society, it is important to understand whether AI generates beneficial outcomes for all. In this dissertation, we investigate ethical concerns related to the training and implementation of AI-driven decision-making systems. First, we investigate how regulations requiring organizations to disclose the use of hiring algorithms (such as New York City Local Law 144) affect applicants' willingness to pursue higher-return job opportunities. In our studies, we found that men (but not women) are more likely to apply for higher-return jobs after rejection when they know hiring algorithms are making the hiring decisions (compared to men who know humans are making the hiring decisions). This gender difference in behavior increases the representation of men in the applicant pool for higher-return jobs, thereby widening the gender gap, a phenomenon we coin the "overflowing pipeline problem." Second, we study how decisions are made about which content is allowed online. Generative AI and social media companies rely on human workers for content moderation, the process through which toxic content is reviewed, labeled, and removed from online platforms. The labeled content is used to train AI models that automate content moderation. Currently, little is known about the content moderation decision-making process because companies purposefully keep it opaque. To fill this gap, we interview human moderators located across the globe who work for large online platforms and develop a model that underscores the key factors influencing the decision-making process in content moderation.
In summary, this dissertation explores ethical concerns for AI-driven decision-making processes and underlines that additional, careful efforts are needed to ensure AI can benefit all.
dc.description.degree: Doctor of Philosophy
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:42999
dc.identifier.uri: https://hdl.handle.net/10919/130412
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: Creative Commons Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: artificial intelligence algorithms
dc.subject: ethics and information systems
dc.subject: hiring
dc.subject: recruitment
dc.subject: content moderation
dc.subject: human moderators
dc.subject: decision-making
dc.title: Is AI Good for All? Ethical Concerns for the Training and Implementation of AI-Driven Decision-Making Systems
dc.type: Dissertation
thesis.degree.discipline: Business, Business Information Technology
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: doctoral
thesis.degree.name: Doctor of Philosophy

Files

Original bundle
Name: Goncalves_Reis_A_D_2025.pdf
Size: 1.45 MB
Format: Adobe Portable Document Format