AI Bias in Streaming Platform Recommendation Systems: Exploring the Impact on Corporate Reputation
dc.contributor.author | Ehsan, Samia | en |
dc.contributor.committeechair | Logan, Nneka | en |
dc.contributor.committeemember | Tedesco, John C. | en |
dc.contributor.committeemember | Woods, Chelsea Lane | en |
dc.contributor.department | Communication | en |
dc.date.accessioned | 2025-05-24T08:00:57Z | en |
dc.date.available | 2025-05-24T08:00:57Z | en |
dc.date.issued | 2025-05-23 | en |
dc.description.abstract | Artificial intelligence (AI) systems, particularly recommendation algorithms, have transformed user engagement on digital platforms like Netflix and YouTube by delivering personalized experiences. However, biases in data, algorithms, and deployment environments raise significant social and professional concerns. This thesis uses Netflix and YouTube as case examples to study the relationship between AI bias and corporate reputation. The study examines how these companies utilize AI recommendation systems, whether biases are present, and the reputational risks those biases might pose. The thesis employs thematic analysis of news media coverage drawn from Google News and major technology news outlets published between January 2017 and March 2025. The qualitative data analysis software NVivo was used to systematically code the coverage and identify recurring themes. This thesis examines various forms of bias in AI recommendation systems—including racism, sexism, and cultural misrepresentation—alongside the severity of algorithmic discrimination and the corporate reputational risks these biases create. | en |
dc.description.abstractgeneral | Artificial intelligence (AI) helps platforms like Netflix and YouTube recommend shows and videos tailored to each user. While this makes viewing more convenient, the algorithms behind these suggestions can sometimes reflect biases, favoring certain content while ignoring other content, or reinforcing social stereotypes. This thesis looks at how those biases show up and how news outlets have covered them over the years. By analyzing articles published between 2017 and 2025, the study explores how issues like racial or gender bias in AI systems can affect how people view companies like Netflix and YouTube. When platforms are seen as unfair, it can damage their reputation. This thesis helps explain what happens when the tools people trust to guide their choices are seen as biased, and what companies can do to respond. | en |
dc.description.degree | MACOM | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:43849 | en |
dc.identifier.uri | https://hdl.handle.net/10919/134206 | en |
dc.language.iso | en | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Media narrative | en |
dc.subject | Platform governance | en |
dc.subject | Digital trust | en |
dc.subject | AI perception | en |
dc.subject | Corporate communication | en |
dc.subject | Algorithmic accountability | en |
dc.title | AI Bias in Streaming Platform Recommendation Systems: Exploring the Impact on Corporate Reputation | en |
dc.type | Thesis | en |
thesis.degree.discipline | Communication | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | MACOM | en |