Tech for Humanity
Recent Submissions
- Sephora Kids: What Is at Stake in Algorithm-Powered Consumption? Zhumadilova, Kulyash (Virginia Tech, 2025-06-27). This case examines the emergence of the "Sephora Kids" phenomenon—young children, particularly pre-teens, participating in beauty culture fueled by algorithm-driven consumption. The study analyzes how the beauty market, projected at $400 billion in 2024, uses new targeted-advertising technologies to build youth identities and shape behavior. Sephora, like Ulta, is deeply immersed in influencer culture and digital media, raising questions about how it influences Generation Alpha's values, health, and self-concept. The case critically examines how beauty standards are shaped by commercialization, neuromarketing, and changing social norms, placing beauty consumption in wider cultural, historical, and gendered contexts. It also addresses issues of informed consent, regulation (such as California's proposed ban on the sale of anti-aging products to minors), and parental authority. Through discussion questions on algorithmic influence, social responsibility, and digital ethics, the case asks students to weigh the fine line between empowerment and exploitation in today's consumer culture.
- A Case Study on the Future of Humanity: AI and Robotics Worker Replacement. Kretser, Michael (Virginia Tech, 2025-06-27). This case study explores the disruptive and potentially transformative impact of humanoid robots and artificial intelligence on the future of society and work. Grounded in Tesla's "Optimus" robot and Industry 4.0, the study prompts students to consider questions of technological ethics, worker displacement, and mass automation. Drawing on contemporary news and hypothetical scenarios, the case challenges participants to reflect critically on what happens when machines can learn, improve, and perform complex work more cheaply and effectively than human beings. Through discussion topics and media clips, including from "The Tesla Bot Will Break Reality," the case encourages consideration of Universal Basic Income (UBI), algorithmic rule, and questions of power, ownership, and control. Will robots usher in a utopia of creative liberty, or a dystopia of concentrated wealth and power? By examining these tensions, the case fosters debate on fairness, human agency, and co-created futures in the age of smart machines.
- Silicon Valley Ideologies. Giles, Kendall (Virginia Tech, 2025-06-27). This case study unpacks the powerful ideologies underlying Silicon Valley's technological aspirations and their implications for society. It traces how cyberlibertarianism—a fusion of radical individualism, free-market philosophy, and techno-utopianism—became embedded in engineering and corporate cultures that prioritize disruption over responsibility. The study discusses how technosolutionism and technological determinism frame technology as a solution to all social ills at the expense of human agency and moral complexity. Through the quest for Artificial General Intelligence (AGI), the case illustrates how intellectuals and institutions, from OpenAI to Google, sell ideologies branded as TESCREAL: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. These ideologies predict posthuman evolutionary destinies, mind uploading, and space colonization, with speculative existential risk frequently taking precedence over near-term social harms like bias and inequality. The study argues that these ideologies disproportionately benefit technocratic elites at the cost of democratic discourse and marginalized groups' interests. Lastly, it challenges students to think about whether technological development can be separated from its ideological roots and how the humanities can inform more inclusive and human-focused innovation.
- Black (Cyber)Feminism: Creating Counterpublics in Social Media. Roberts, Elizabeth B. (Virginia Tech, 2025-06-27). This case study explores how Black cyberfeminist activism creates spaces of radical intersectional resistance to systemic racial and gender oppression online. Focusing on grassroots movements and the cultural production of Black women online, the case reveals how Twitter, Instagram, and YouTube have been reframed as spaces of community-making, political education, and healing. Rather than treating technology as neutral, the case shows how Black women engage critically with and resist technocultures that devalue them. It explains how #SayHerName and #BlackGirlMagic function as both rhetorical tools and networked performances of protest, care, and joy. The case also discusses the limitations of mainstream digital feminism, which often centers whiteness and erases the lived experiences of women of color. Drawing on the work of thinkers like Safiya Noble and Moya Bailey, it questions algorithmic bias, digital surveillance, and the commodification of Black expression on the internet. Using storytelling, reflective prompts, and critical questions, the case encourages students to think about what equitable, just, and fair digital futures might look like. Ultimately, it asserts that Black cyberfeminism offers an alternative perspective for analyzing power, resistance, and imagination in the age of algorithmic life.
- Interoperability of Electronic Medical Records. Parrish, Roan (Virginia Tech, 2025-06-27). This case study explores the nuances of transitioning to interoperable electronic health record (EHR) systems in U.S. health care, with attention to a hypothetical hospital's move from a homegrown EHR to a commercial TEFCA-aligned system. With reference to the HITECH Act and the 21st Century Cures Act, it highlights policy-driven rationales for interoperability, the problems of data migration, and the technical and social labor surrounding these transitions. Through the experiences of Dr. George Foreman and programmer Sylvia Char, the case examines vendor lock-in, regulatory mandates, and tensions between market incentives and patient needs. It also looks at how interoperability influences not just clinical workflows but also patient access, cybersecurity, and medical research. The case encourages critical consideration of who stands to gain from data portability, how to put the patient at the forefront of digital transformation, and what a democratic model of health data governance could mean.
- AI and the Shaping of Collective Memory. Wood, Claire (Virginia Tech, 2025-06-27). This case study examines the role of artificial intelligence (AI) in shaping and fabricating collective memory. Building on foundational memory scholarship by Halbwachs, Zelizer, and Young, the case reveals how AI technologies, particularly generative AI, transform public conceptions of past events through the production of believable but potentially inaccurate content. Set in a near-future hypothetical, the case traces the viral spread of an AI-generated documentary about a fabricated civil rights protest. Though it drew on fragments of truth, the AI produced a coherent yet historically false narrative that had circulated far and wide by the time historians could intervene. The case provokes urgent questions regarding authenticity, responsibility, and the work of algorithms in fashioning memory and history. It also examines the power disparities created by AI technologies and insists on critical oversight, such as the hypothetical Verified Memory Project, to verify the authenticity of public discourse. The case asks us to ponder who controls history in the age of algorithms and what the broader stakes are for misinformation, algorithmic bias, and the democratic enterprise of digital memory-making.
- Artificial Intelligence as a Weapon of War. Roe, Elena (Virginia Tech, 2025-06-27). This case study discusses the rising use of artificial intelligence (AI) and Autonomous Weapons Systems (AWS) in modern warfare, with special focus on the humanitarian and moral implications of AI-driven military technology. Beginning with a discussion of the arms race among global powers for AWS, the case analyzes how AI systems—such as drones and facial recognition software—are used to identify and engage targets with minimal human interaction. It examines the increasing reliance on AWS in conflicts like the Russia-Ukraine war and critically assesses the Israeli military's application of AI in Gaza. The case centers on technologies like "Lavender," which assigns Palestinian individuals a score indicating their likelihood of being linked with Hamas based on opaque data patterns, and "The Gospel," which designates infrastructure for bombing. The case reveals how AI can produce prejudice, misidentification, and civilian casualties, and raises urgent concerns about algorithmic warfare, accountability, and international law. With over 50,000 Palestinians reported killed since October 2023, the case argues that AI use in targeting—when it makes the distinction between combatants and civilians opaque—is highly perilous and requires strong regulation. With this focus, students are invited to discuss the moral obligations of AI developers, the personal liberties at stake in pattern-of-life surveillance, and the necessity of global regulation of cyber war.
- Digital Humanitarianism: The Promise and Pitfalls of Technology in Humanitarian Response. Islam, Muhammad Awfa (Virginia Tech, 2025-06-24). This case study explores the transformative but fraught role of digital technology in humanitarian intervention. As events such as the Haitian earthquake and the Rohingya refugee crisis illustrate, technologies such as crowdsourced mapping software, biometric registration systems, and AI-driven chatbots can radically improve coordination, effectiveness, and information sharing. In Haiti, the Ushahidi platform integrated real-time SMS and social media data to guide relief responses, while biometric identification in Bangladesh facilitated the distribution of aid and granted refugees a sense of identity. Yet these same technologies can expose vulnerable communities to new threats: surveillance, data breaches, coerced consent, and widening digital divides. The study also presents a fictional scenario in which an AI chatbot for mental well-being wins teenagers' trust but fails to pick up on signs of self-harm, illustrating how technology-based interventions can inadvertently replace human care. Through such illustrations, the study argues that while digital humanitarianism may bring speed and scale, it must be accompanied by ethics, local community engagement, and critical analysis. It raises questions about whether technological efficacy and human dignity are compatible, and how organizations can enable informed decision-making and justice in environments marked by trauma and unequal power dynamics.
- The Race for Global AI Dominance: USA vs. China. How Do We Measure Who’s Ahead? Alwadi, Nada (Virginia Tech, 2025-06-18). This case study investigates the intensifying race between the United States and China for global dominance in artificial intelligence (AI), emphasizing that leadership in AI is not just about innovation but also about the ability to diffuse and adopt technologies across society. While both nations are leaders in AI development, the case highlights political scientist Jeffrey Ding’s argument that China suffers from a “diffusion deficit,” struggling to implement AI widely beyond elite urban centers. Using examples such as OpenAI’s ChatGPT and China’s DeepSeek, the study contrasts U.S. first-mover advantages with China’s ambition to define global AI standards, often underpinned by authoritarian governance models. The case examines how innovation and diffusion processes relate to shifts in global power, stages of economic development, and standards for ethical AI governance. It also investigates China's expanding export of digital authoritarianism, which raises concerns about surveillance tools being used to suppress human rights. Through the lens of innovation capacity versus diffusion capacity, the case encourages students to critically examine not only who is winning the AI race, but also what kind of global future each model of AI development might promote.
- The Shipping Container as an Agent of Globalization. Lutgens, Brian W. (Virginia Tech, 2025-06-20). This case study investigates how the standardized shipping container, a seemingly basic innovation, has radically altered global trade, labor markets, and economic structures. Containerized freight, invented in the 1950s and widely deployed by the 1980s, significantly reduced shipping costs, enabling the emergence of complex worldwide supply chains and propelling the third wave of globalization. The case follows the container's progression from a novel logistical experiment to a driver of global economic interdependence, focusing on how it displaced domestic manufacturing employment while making consumer goods more affordable and widely available. The study examines the container's dual influence, using instances such as worldwide Barbie doll manufacture and the rapid growth of international ports and transportation infrastructure. It promotes analytical thinking about the trade-offs among technical simplicity, geographic advantage, inequality, economic efficiency, and social upheaval. The study raises crucial questions about the human costs of globalization and whether low-priced consumer goods outweigh the major disruptions they bring to labor markets and economies worldwide.
- The Machine in the Loop – Patient Empowered AI Implementation. Brantly, Aaron F. (Virginia Tech, 2025-07). This case study examines how Type 1 diabetes (T1D) patients and caregivers have leveraged open-source technologies to build more affordable, transparent, and flexible alternatives to commercial insulin management systems. Faced with the high cost, proprietary limitations, and opacity of FDA-approved devices like the MiniMed 670G, a community of patients and developers created tools such as NightScout, OpenAPS, and LoopKit—systems that enable real-time glucose monitoring, algorithmic insulin dosing, and remote care. These innovations offer substantial health benefits, cost savings, and patient autonomy but raise important ethical, legal, and regulatory questions. Developers operate outside FDA frameworks, disclaiming liability and working voluntarily, often driven by personal ties to diabetes. Their systems challenge traditional healthcare delivery by blurring lines between user and developer, patient and engineer. This case invites discussion on transparency, equity, accountability, and legitimacy in biomedical innovation. It asks how regulation, safety, and accessibility can coexist with user-driven innovation, and what role governments, corporations, and open-source communities should play in shaping the future of algorithmic healthcare.
- Gender, Beauty, and Plastic Surgery. Frommer, Lyndon (Virginia Tech, 2025-06-20). This case study examines the ethical, cultural, and technical dimensions of plastic and gender-affirming surgical procedures. Modern medicine relies extensively on surgical technology, but its use in plastic and cosmetic procedures creates tensions among health benefits, aesthetic goals, and social standards. The case evaluates procedures ranging from reconstructive cleft palate repair to cosmetic treatments, raising questions about medical necessity, body standards, and societal expectations. It centers on gender-affirming surgeries, which provide essential benefits to transgender and cisgender people who want to align their bodies with their gender identity. Access, recognition, and validation vary across gender identities, with transgender individuals facing greater scrutiny and stigmatization. The study maintains that society values medical technologies unevenly: gender-affirming care continues to be viewed as optional or cosmetic despite its ability to enhance patient well-being. Through this lens, the case analyzes medical professionalism, ableism, and medical ethics to ask who receives care. It advocates for an expanded understanding of health, identity, and surgical legitimacy.
- Functional Magnetic Resonance Imaging. Cozort, Sarah (Virginia Tech, 2025-06-20). This case study examines how functional magnetic resonance imaging (fMRI) reveals differences in brain activity between handwriting and typing, and what those differences mean for learning and memory retention. Both writing methods engage overlapping neural networks, but fMRI demonstrates that handwriting activates broader regions involved in motor control, sensory integration, and higher-order cognition. These findings raise crucial questions for educational practice in writing-intensive fields: which writing approach should educators prioritize? The case presents Dr. Clary, a first-year writing instructor deciding how to apply neuroscience evidence about writing methods in the classroom without forcing students to use specific tools that might hinder accessibility or conflict with their learning preferences. The case examines handwriting's long history as a fine motor skill essential to language development and cognitive growth, comparing it with modern keyboard use. It also investigates how touchscreen devices with stylus input function as an intermediary between conventional handwriting and modern digital tools. Through pedagogical reflection and thematic questions about ethics, educational technology, and student agency, the case helps students critically evaluate the appropriate role of scientific evidence in designing instruction for digital classrooms.
- Indigenous Data Sovereignty in the United States. Heeren-Moon, Erika (Virginia Tech, 2025-06-20). This case study delves into the complicated landscape of Indigenous data sovereignty among American Indian and Alaska Native communities in the United States. It investigates how historical abuse, mistrust, and exclusion from decision-making processes have resulted in calls for tribal self-determination in data collection, governance, and use. The case demonstrates the insufficiency of national privacy rules in protecting tribal data rights, as well as how data obtained without sufficient tribal engagement can perpetuate harm or lead to erroneous policy. Drawing on the C.A.R.E. framework (Collective benefit, Authority to control, Responsibility, and Ethics), the study calls for Indigenous-led data systems that adhere to cultural traditions and promote self-governance. It also explores the ramifications of emerging technologies such as AI and asks whether existing data practices are compatible with Indigenous worldviews. Using instances such as the exploitation of Havasupai tribal DNA and the constraints of federal data collection, the case advocates for collaborative, culturally aware approaches that empower tribal nations to reclaim control of their digital destiny.
- Carbs, Fats, and Proteins: How Misappropriation of Biochemistry Distorts Our Relations with Food. Zhumadilova, Kulyash (Virginia Tech, 2025-06-18). This case study critiques how biochemistry has been elevated into the hegemonic paradigm through which modern societies interpret food. While dazzling advances like the discovery of vitamins transformed nutrition and public health, the study argues that reducing food to its molecular components—carbs, fats, and proteins—has distorted both scientific inquiry and our cultural relationship with eating. Using instances such as the rise of personalized diets supported by apps and glucose monitors, the study demonstrates how discourse about individual optimization promotes consumerism while ignoring larger ethical issues such as labor, sustainability, and food justice. The narrative emphasizes how the reduction of foods to abstract nutrients strips them of their cultural, ecological, and social settings, reducing meals to commodities bereft of place and human connection. From molecular gastronomy to internet diet fads, this molecular reductionism gives the impression of control while displacing more complex views of agriculture, tradition, and community. The study encourages rethinking how science, technology, and marketing construct our experience of eating and challenges students to imagine more inclusive, sustainable, and culturally sensitive ways of enjoying food alongside its biochemical definition.
- Digital Whitespaces. Roberts, Elizabeth B. (Virginia Tech, 2025-06-18). This case study explores how digital spaces like social media, video games, and podcasts function as "digital whitespaces"—spaces where whiteness is normalized, centered, and reproduced. Using sociological theories of racialized spaces, the research argues that seemingly neutral online spaces are biased toward white perspectives and marginalize people of color. On social media, platform moderation policies and algorithms reinforce narratives that depict white individuals as victims and minimize systemic racism, especially in response to movements like Black Lives Matter. In video games, development tools tend to default to white character features, and gaming culture itself tends to exclude or marginalize non-white gamers. Podcasts, despite being potential spaces for counter-narratives, remain dominated by white hosts and framings, further solidifying whose accounts are treated as "informational" or "authoritative." The case also illustrates how online whitespaces generate psychological and social consequences for marginalized users, from mental health impacts to economic disparities in monetizing content. Finally, it delves into strategies of resistance, such as creating alternative platforms and constructing counterpublics, and invites students to consider how algorithms, policy, and users' actions cumulatively reinforce racial hierarchies online—and what shared responsibility we all have to subvert them.
- Exoskeletons and Disability. Parrish, Roan (Virginia Tech, 2025-06-18). This case study addresses the innovation, adoption, and ethical issues of exoskeletons as assistive devices for individuals with disabilities. Exoskeletons promise enhanced mobility and independence, but as they transition out of the military market they raise the question of whose needs they are really meant to serve. The case contrasts the medical model—disability as an individual shortcoming—with the social model—disability as a matter of how infrastructure and norms create barriers. Exoskeletons consistently reinforce the notion that disabled bodies must conform to able-bodied norms, yet they remain too costly for the majority of users. Drawing on the writing of disabled authors and veterans, the research exposes the gap between high technology and the daily reality of disabled populations: devices priced at hundreds of thousands of dollars, requiring extensive training, and still possibly falling short of users' needs. The narrative also critiques the emphasis on "fixing" disabled bodies instead of building accessible spaces and low-tech solutions that are already within reach. Finally, the case examines the influence of military agendas on prosthetic and exoskeleton design and invites students to question who gets to define technological progress, how disabled individuals can be better included in the design process, and whether enhancement comes at the expense of justice and genuine accessibility.
- Overcoming Fear: Social Scientists’ Perceptions of AI. Wood, Claire (Virginia Tech, 2025-06-18). This case study centers on tensions within the social sciences over integrating artificial intelligence into research and education. While technologies like natural language processing and machine learning promise faster analysis and new knowledge, the majority of social scientists are wary of their impact on interpretive rigor, ethical values, and disciplinarity. Controversy ensued at Ridgewood University when junior researchers proposed AI-enabled coding to streamline qualitative data analysis. Senior academics, worried about losing human interpretive context, resisted the proposal, fearing algorithmic bias and the erosion of methodological integrity. Early-career academics countered that restricting AI in dissertations would stifle innovation and put them at a professional disadvantage. A university symposium showed that, with strong ethical frameworks and human guidance, AI can augment rather than replace conventional methods. The case speaks to broader concerns: balancing innovation and prudence, easing fears about automation, and ensuring that the adoption of AI advances rather than undermines the values of social inquiry. It calls on social scientists to engage meaningfully with new technologies and to collaborate on standards for the responsible, open, and context-sensitive use of AI in research.
- Biometric Tracking and Immigration. Roe, Elena (Virginia Tech, 2025-06-18). This case study considers the social, legal, and ethical issues of biometric monitoring and facial recognition technology at U.S. borders. Biometric tools like facial recognition raise serious concerns about civil rights, privacy, and systemic bias, even as they promise more efficient immigration enforcement and more robust surveillance of visa overstays. The case examines how federal laws, such as the Intelligence Reform and Terrorism Prevention Act and the USA PATRIOT Act, gave the government the power to gather biometric information from travelers, resulting in programs like the Traveler Verification Service and Simplified Arrival. Advocates contend that these technologies can enhance national security and stop visa fraud. Critics, however, point to steep financial costs, frequent data breaches, and the lack of explicit regulations governing data storage and informed consent. The case also shows how algorithmic bias disproportionately affects people of color, whose rights are already constrained by structural inequality in law enforcement and at borders. Pointing to more oppressive uses of biometric surveillance in countries like China, the case questions whether democratic countries can pursue security while protecting human rights. Most significantly, the case invites students to consider whether mass biometric surveillance is compatible with civil liberties and what steps must be taken to ensure transparency, accountability, and prudent use of personal information.
- Engineering Gaze Case Study. Giles, Kendall (Virginia Tech, 2025-06-18). This case study presents the concept of the "engineering gaze," a critical lens adapted from feminist analyses of the "male gaze." It discusses how engineers, in particular software and AI engineers, are socialized to perceive themselves as objective, rational problem-solvers who stand outside the worlds that they construct. Revisiting the history of artificial intelligence, the study traces how the disciplinary imaginations of mathematics, engineering, and psychology shaped underlying assumptions about what counts as a problem and how solutions are formulated, potentially reducing intricate social issues to technical ones at the cost of ethics, justice, and social impact. The case contrasts this ideology with calls by scholars and practitioners for more thoughtful, interdisciplinary approaches that incorporate critical insights from the humanities and social sciences. Through analysis of the legacy of early AI pioneers and of engineering education today, the study encourages students to question the myth of value-neutral technology. It invites reflection on whether engineering's narrow focus on abstraction and efficiency has contributed to today's crises of bias, discrimination, and accountability failures in AI. Lastly, the case argues that transforming engineering education and practice requires humility, interdisciplinarity, and ethical engagement with the social worlds that technologies mediate.