Is AI Good for All? Ethical Concerns for the Training and Implementation of AI-Driven Decision-Making Systems
Abstract
Artificial intelligence (AI) systems increasingly mediate social and economic interactions. They automate a wide range of decisions, such as selecting the best candidate for a job and determining what information should be allowed (or censored) online. To avoid a dystopian, algorithm-based future, it is paramount to raise ethical questions about the training and implementation of AI-driven decision-making systems. In this dissertation, we raise some of these questions by examining (1) how the implementation of AI systems in the hiring process impacts men's and women's behaviors in the labor market, and (2) how the process of human decision-making about digital toxicity generates inconsistent data, which is then used to train the AI systems responsible for determining what information can be shared online. To answer the first question, we investigate how disclosing the use of hiring algorithms impacts applicants' willingness to pursue job opportunities in the labor market. Inspired by the literature on gender differences in job-search persistence for higher-return, male-typed jobs, we conducted two longitudinal field experiments (n1 = 2,947, n2 = 592) to examine men's and women's behaviors when they know algorithms are making hiring decisions. We found that men (but not women) are more likely to apply for higher-return jobs after rejection when they know algorithms, rather than humans, are making the hiring decisions. This gender difference in behavior increased the representation of men in the applicant pool for higher-return jobs, thereby widening the gender gap, a phenomenon we term the "overflowing pipeline problem." To answer the second question, we explore the content moderation process and study how decisions are made about what content is allowed online. Generative AI and social media platforms rely on human workers for content moderation, the process through which toxic content is reviewed, labeled, and removed from online platforms. However, platforms often keep this process opaque, veiling how moderation decisions are actually made. Recent literature has shed light on the effectiveness of different content moderation practices, yet little is known about the details of the moderation decision-making process itself. To fill this gap, we interviewed content moderators (n = 24) located across the globe who work for large online platforms. Using grounded theory techniques, we constructed a model of the content moderation decision-making process. The model highlights how raw content, predefined guidelines, moderators' internal tensions, and organizational regimes of value shape the production of moderation outputs, including AI training data. This work is of interest to both research and practice, as it advances our understanding of the influence of AI on human ecosystems and provides insights for the ongoing discussion around AI regulation.