Title: Guarding the Border with Tech: Algorithmic Implementation of Immigration and Refugee Policies
Author: Suong, Clara H.
Publisher: Virginia Tech
Type: Report
Date issued: 2025-07
Date deposited: 2025-08-07
URI: https://hdl.handle.net/10919/137006

Abstract: This case study explores the growing use of AI-based tools in immigration and refugee vetting, focusing on Sweden's adoption of predictive algorithms under the "Extreme Vetting Initiative." While these technologies promise efficiency and security, they raise critical concerns about fairness, transparency, and bias. The experiences of two Afghan families, one accepted and supported, the other surveilled, profiled, and deported, illustrate the starkly unequal impacts of algorithmic decision-making. While the Ahmadi family benefited from data-driven resettlement programs and tailored support, the Abdullah family was targeted by opaque and biased systems, including religious profiling and flawed predictive assessments. This contrast reveals how machine learning systems, often trained on biased or incomplete data, can replicate and amplify systemic discrimination. The case prompts reflection on the ethical foundations of immigration technology: What counts as fairness? Can systems be both efficient and just? Who is accountable for algorithmic harm? In an era when immigration is increasingly mediated by automated tools, this case challenges us to think critically about how we balance national interest with human rights, and about the role of public oversight in shaping ethical technology for global migration.

Extent: 9 pages
Format: application/pdf
Language: English
Rights: In Copyright (InC). This item is protected by copyright and/or related rights. Some uses of this item may be deemed fair and permitted by law even without permission from the rights holder(s); for other uses, permission must be obtained from the rights holder(s).
Subjects: Algorithmic immigration; Refugee profiling; AI ethics