Title: Diverse Perspectives Can Mitigate Political Bias in Crowdsourced Content Moderation
Authors: Thebault-Spieker, Jacob; Venkatagiri, Sukrit; Mine, Naomi; Luther, Kurt
Type: Article - Refereed
Date accessioned: 2023-07-11
Date available: 2023-07-11
Date issued: 2023-06-12
Publication date: 2023-07-01
Handle: http://hdl.handle.net/10919/115725
DOI: https://doi.org/10.1145/3593013.3594080
Format: application/pdf
Language: en
Rights: In Copyright
Rights holder: The author(s)

Abstract: In recent years, social media companies have grappled with defining and enforcing content moderation policies surrounding political content on their platforms, due in part to concerns about political bias, disinformation, and polarization. These policies have taken many forms, including disallowing political advertising, limiting the reach of political topics, fact-checking political claims, and enabling users to hide political content altogether. However, implementing these policies requires human judgement to label political content, and it is unclear how well human labelers perform at this task, or whether biases affect this process. Therefore, in this study we experimentally evaluate the feasibility and practicality of using crowd workers to identify political content, and we uncover biases that make this content difficult to identify. Our results problematize crowds composed of seemingly interchangeable workers, and provide preliminary evidence that aggregating judgements from heterogeneous workers may help mitigate political biases. In light of these findings, we identify strategies for achieving fairer labeling outcomes, while also better supporting crowd workers at this task and potentially mitigating biases.