Guarding the Border with Tech: Algorithmic Implementation of Immigration and Refugee Policies

Date

2025-07

Publisher

Virginia Tech

Abstract

This case study explores the growing use of AI-based tools in immigration and refugee vetting, focusing on Sweden’s adoption of predictive algorithms under the “Extreme Vetting Initiative.” While these technologies promise efficiency and security, they raise critical concerns around fairness, transparency, and bias. The experiences of two Afghan families—one accepted and supported, the other surveilled, profiled, and deported—illustrate the starkly unequal impacts of algorithmic decision-making. While the Ahmadi family benefited from data-driven resettlement programs and tailored support, the Abdullah family was targeted by opaque and biased systems, including religious profiling and flawed predictive assessments. This contrast reveals how machine learning systems, often trained on biased or incomplete data, can replicate and amplify systemic discrimination. The case prompts reflection on the ethical foundations of immigration technology: What counts as fairness? Can systems be both efficient and just? Who is accountable for algorithmic harm? In an era where immigration is increasingly mediated by automated tools, this case challenges us to think critically about how we balance national interest with human rights, and about the role of public oversight in shaping ethical technology for global migration.

Keywords

Algorithmic immigration, Refugee profiling, AI ethics
