Browsing by Author "Goyal, Nitesh"
Now showing 1 - 3 of 3
- Combating Problematic Information Online with Dual Process Cognitive Affordances
  Bhuiyan, MD Momen (Virginia Tech, 2023-08-04)
  Dual process theories of mind, developed over the last decades, posit that humans use both heuristic, or mental-shortcut (automatic), reasoning and analytical (reflective) reasoning while consuming information. Can such theories support users' information consumption in the presence of problematic content in online spaces? To answer this, I merge these theories with the idea of affordances from HCI into the concept of dual process cognitive affordances, consisting of automatic affordances and reflective affordances. Using this concept, I built and tested a set of systems addressing two categories of online problematic content: misinformation and filter bubbles. In the first system, NudgeCred, I use cognitive heuristics from the MAIN model to design automatic affordances for better credibility assessment of news tweets from mainstream and misinformative sources. In TransparencyCue, I show the promise of value-centered automatic affordance design inside news articles that differentiates content quality. In NewsComp, I use comparative annotation to design reflective affordances that enable active engagement with stories from opposing-leaning sources, encouraging information consumption outside users' ideological filter bubbles. In OtherTube, I use parasocial interaction, that is, experiencing an information feed through the eyes of someone else, to design a reflective affordance that enables users to recognize filter bubbles in their YouTube recommendation feeds. Each system shows varying degrees of success and outlines considerations for the design of cognitive affordances. Overall, this thesis showcases the utility of design strategies centered on the dual process model of human information cognition to combat problematic information online.
- NewsComp: Facilitating Diverse News Reading through Comparative Annotation
  Bhuiyan, Md Momen; Lee, Sang Won; Goyal, Nitesh; Mitra, Tanushree (ACM, 2023-04-19)
  To support efficient, balanced news consumption, merging articles from diverse sources into one, potentially through crowdsourcing, could alleviate some hurdles. However, the merging process could also impact annotators' attitudes towards the content. To test this theory, we propose comparative news annotation; that is, annotating similarities and differences between a pair of articles. By developing and deploying NewsComp, a prototype system, we conducted a between-subjects experiment (N = 109) to examine how users' annotations compare to experts', and how comparative annotation affects users' perceptions of article credibility and quality. We found that comparative annotation can marginally impact users' credibility perceptions in certain cases; it did not impact perceptions of quality. While users' annotations were not on par with experts', they showed greater precision in finding similarities than in identifying disparate important statements. The comparison process also led users to notice differences in information placement and depth, degree of factuality/opinion, and use of empathetic/inflammatory language. We discuss implications for the design of future comparative annotation tasks.
- SHAI 2023: Workshop on Designing for Safety in Human-AI Interactions
  Goyal, Nitesh; Hong, Sungsoo Ray; Mandryk, Regan; Li, Toby; Luther, Kurt; Wang, Dakuo (ACM, 2023-03-27)
  Generative ML models present a novel opportunity for a wider group of societal members to engage with AI, imagine new use cases and applications, and disseminate the outcomes of such endeavors to larger audiences. However, owing to their novelty and despite the best intentions, inadvertent outcomes might accrue, leading to harms, especially to marginalized groups in society. As the field of Human-AI Interaction advances, academic and industry researchers and industry practitioners have an opportunity to brainstorm how best to utilize this new technology. Our workshop is aimed at such practitioners and researchers at the intersection of AI and HCI who are interested in collaboratively identifying challenges and solutions to create safer outcomes with generative ML models.