Browsing by Author "Rho, Eugenia"
Now showing 1 - 5 of 5
- Conversate: Supporting Reflective Learning in Interview Practice Through Interactive Simulation and Dialogic Feedback
  Daryanto, Taufiq; Ding, Xiaohan; Wilhelm, Lance; Stil, Sophia; Knutsen, Kirk; Rho, Eugenia (ACM, 2025-01-10)
  Job interviews play a critical role in shaping one's career, yet practicing interview skills can be challenging, especially without access to human coaches or peers for feedback. Recent advancements in large language models (LLMs) present an opportunity to enhance the interview practice experience, yet little research has explored the effectiveness and user perceptions of such systems or the benefits and challenges of using LLMs for interview practice. Furthermore, while prior work and recent commercial tools have demonstrated the potential of AI to assist with interview practice, they often deliver one-way feedback, where users only receive information about their performance. By contrast, dialogic feedback, a concept developed in the learning sciences, is a two-way feedback process that allows users to further engage with and learn from the provided feedback through interactive dialogue. This paper introduces Conversate, a web-based application that supports reflective learning in job interview practice by leveraging LLMs for interactive interview simulations and dialogic feedback. To start an interview session, the user provides the title of a job position (e.g., entry-level software engineer). The system then initializes an LLM agent that begins the interview simulation with an opening question and follows up with questions adapted to the user's subsequent responses. After the session, our back-end LLM framework analyzes the user's responses and highlights areas for improvement. Users can then annotate the transcript by selecting specific sections and writing self-reflections. Finally, the user can engage in dialogic feedback, conversing with the LLM agent to learn from and iteratively refine their answers based on the agent's guidance. (A minimal sketch of this interaction loop appears after this listing.) To evaluate Conversate, we conducted a user study with 19 participants to understand their perceptions of LLM-supported interview simulation and dialogic feedback. Our findings show that participants valued the adaptive follow-up questions from LLMs, as they enhanced the realism of interview simulations and encouraged deeper thinking. Participants also appreciated the AI-assisted annotation, as it reduced their cognitive burden and mitigated excessive self-criticism when they evaluated their own interview performance. Moreover, participants found the LLM-supported dialogic feedback beneficial, as it promoted personalized and continuous learning, reduced feelings of judgment, and allowed them to express disagreement.
- Exploring Large Language Models Through a Neurodivergent Lens: Use, Challenges, Community-Driven Workarounds, and Concerns
  Carik, Buse; Ping, Kaike; Ding, Xiaohan; Rho, Eugenia (ACM, 2025-01-10)
  Despite the increasing use of large language models (LLMs) in everyday life among neurodivergent individuals, our knowledge of how they engage with and perceive LLMs remains limited. In this study, we investigate how neurodivergent individuals interact with LLMs by qualitatively analyzing topically related discussions from 61 neurodivergent communities on Reddit. Our findings reveal 20 specific LLM use cases across five core thematic areas of use among neurodivergent users: emotional well-being, mental health support, interpersonal communication, learning, and professional development and productivity. We also identified key challenges, including overly neurotypical LLM responses and the limitations of text-based interactions. In response to such challenges, some users actively seek advice by sharing input prompts and the corresponding LLM responses. Others develop workarounds by experimenting with and modifying prompts to make them more neurodivergent-friendly. Despite these efforts, users have significant concerns around LLM use, including potential overreliance and the fear of replacing human connections. Our analysis highlights the need to make LLMs more inclusive for neurodivergent users and the implications of how LLM technologies can reinforce unintended consequences and behaviors.
- Leveraging Prompt-Based Large Language Models: Predicting Pandemic Health Decisions and Outcomes Through Social Media Language
  Ding, Xiaohan; Carik, Buse; Gunturi, Uma Sushmitha; Reyna, Valerie; Rho, Eugenia (ACM, 2024-05-11)
  We introduce a multi-step reasoning framework using prompt-based LLMs to examine the relationship between social media language patterns and trends in national health outcomes. Grounded in fuzzy-trace theory, which emphasizes the importance of “gists” of causal coherence in effective health communication, we propose Role-Based Incremental Coaching (RBIC), a prompt-based LLM framework, to identify gists at scale. (A sketch of a role-based, incremental prompting pipeline in this spirit appears after this listing.) Using RBIC, we systematically extract gists from subreddit discussions opposing COVID-19 health measures (Study 1). We then track how these gists evolve across key events (Study 2) and assess their influence on online engagement (Study 3). Finally, we investigate how the volume of gists is associated with national health trends like vaccine uptake and hospitalizations (Study 4). Our work is the first to empirically link social media linguistic patterns to real-world public health trends, highlighting the potential of prompt-based LLMs in identifying critical online discussion patterns that can form the basis of public health communication strategies.
- Linguistically Differentiating Acts and Recalls of Racial Microaggressions on Social Media
  Gunturi, Uma Sushmitha; Kumar, Anisha; Ding, Xiaohan; Rho, Eugenia (ACM, 2024-04-23)
  In this work, we examine the linguistic signature of online racial microaggressions (acts) and how it differs from that of personal narratives recalling experiences of such aggressions (recalls) by Black social media users. We manually curate and annotate a corpus of acts and recalls from in-the-wild social media discussions, and verify labels with Black workshop participants. We leverage Natural Language Processing (NLP) and qualitative analysis on this data to classify (RQ1), interpret (RQ2), and characterize (RQ3) the language underlying acts and recalls of racial microaggressions in the context of racism in the U.S. Our findings show that neural language models (LMs) can classify acts and recalls with high accuracy (RQ1), with contextual words revealing themes that associate Black people with objects that reify negative stereotypes (RQ2). (A sketch of such a classifier appears after this listing.) Furthermore, overlapping linguistic signatures between acts and recalls serve functionally different purposes (RQ3), providing broader implications for the current challenges in content moderation systems on social media.
- Understanding the Relationship Between Social Identity and Self-Expression Through Animated GIFs on Social Media
  Wang, Marx; Bhuiyan, Md Momen; Rho, Eugenia; Luther, Kurt; Lee, Sang Won (ACM, 2024-04-23)
  GIFs afford a high degree of personalization, as they are often created from popular movie and video clips with diverse and realistic characters, each expressing a nuanced emotional state through a combination of the character's own unique bodily gestures and a distinctive visual background. These properties of high personalization and embodiment provide a unique window for exploring how individuals represent and express themselves on social media through the lens of the GIFs they use. In this study, we explore how Twitter users express their gender and racial identities through the characters in GIFs. We conducted a behavioral study (n = 398) that simulated a series of tweeting and GIF-picking scenarios. We annotated the gender and race identities of GIF characters and found that these identities have significant impacts on users' GIF choices: men chose more gender-matching GIFs than women, and White participants chose more race-matching GIFs than Black participants. We also found that users' prior familiarity with the source of a GIF and their perceptions of the audience's composition (viz., whether it shares a matching identity) have significant effects on whether a user will choose race- and gender-matching GIFs. (A toy sketch of one way such group differences could be tested appears after this listing.) This work has implications for practitioners supporting personalized social identity construction and impression management mechanisms online.
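The Conversate entry above describes an interaction loop: the user names a job title, an LLM agent asks an opening question and adaptive follow-ups, and a feedback stage lets the user converse with the agent about its critique. The Python sketch below shows one minimal way such a loop could be wired up with the OpenAI chat API; the model name, prompts, and helper names (`chat`, `run_interview`, `dialogic_feedback`, `get_user_reply`) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of an LLM-driven interview simulation with dialogic feedback,
# loosely following the loop described in the Conversate abstract. Prompts, the
# model name, and turn limits are illustrative assumptions, not the authors' code.
from openai import OpenAI

client = OpenAI()          # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"      # placeholder model choice


def chat(messages):
    """Send the running conversation to the LLM and return its reply text."""
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content


def run_interview(job_title, get_user_reply, num_turns=3):
    """Ask an opening question, then follow-ups adapted to the user's answers."""
    messages = [{"role": "system", "content": (
        f"You are interviewing a candidate for a {job_title} position. "
        "Ask exactly one question per turn, adapting each follow-up to the answers so far."
    )}]
    transcript = []
    for _ in range(num_turns):
        question = chat(messages)            # agent asks the next question
        answer = get_user_reply(question)    # e.g., collected from the UI or stdin
        transcript.append((question, answer))
        messages += [{"role": "assistant", "content": question},
                     {"role": "user", "content": answer}]
    return transcript


def dialogic_feedback(transcript, get_user_reply, num_turns=3):
    """Two-way feedback: the user can question, contest, and refine the critique."""
    summary = "\n".join(f"Q: {q}\nA: {a}" for q, a in transcript)
    messages = [
        {"role": "system", "content": (
            "You are an interview coach. Highlight areas for improvement, answer the "
            "user's questions about your feedback, and help them refine their answers.")},
        {"role": "user", "content": f"Here is my interview transcript:\n{summary}"},
    ]
    for _ in range(num_turns):
        feedback = chat(messages)            # coach gives or elaborates feedback
        reply = get_user_reply(feedback)     # user responds, agrees, or pushes back
        messages += [{"role": "assistant", "content": feedback},
                     {"role": "user", "content": reply}]
    return messages


if __name__ == "__main__":
    # Console demo: type answers at the prompts.
    log = run_interview("entry-level software engineer", lambda q: input(q + "\n> "))
    dialogic_feedback(log, lambda f: input(f + "\n> "))
```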
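The RBIC framework named in the pandemic-health entry is described only at a high level, as a prompt-based method for extracting causal “gists” at scale. As a reading aid, here is a rough Python sketch of a role-based, incremental prompting pipeline in that spirit; the role text, step prompts, and output format are guesses for illustration and should not be read as the authors' actual prompts.

```python
# Rough sketch of a role-based, incremental prompting pipeline, in the spirit of the
# RBIC framework named above. The role description, step prompts, and output format
# are illustrative assumptions, not the prompts used in the paper.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder

ROLE = ("You analyze social media posts about COVID-19 health measures and "
        "identify the causal 'gist' (bottom-line causal claim) each post expresses.")

STEPS = [  # each step's output is fed into the next
    "Step 1: Does this post oppose a COVID-19 health measure? Answer yes or no, "
    "and quote the relevant sentence.\n\nPOST:\n{post}",
    "Step 2: Based on your previous answer, state the author's claim about cause "
    "and effect in plain language.\n\nPREVIOUS:\n{previous}",
    "Step 3: Compress that claim into a one-sentence gist of the form 'X leads to Y'."
    "\n\nPREVIOUS:\n{previous}",
]


def extract_gist(post):
    """Run the incremental steps, carrying each step's output forward."""
    previous = post
    for template in STEPS:
        prompt = template.format(post=post, previous=previous)
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "system", "content": ROLE},
                      {"role": "user", "content": prompt}],
        )
        previous = resp.choices[0].message.content
    return previous  # the extracted gist (or a note that step 1 answered "no")


if __name__ == "__main__":
    print(extract_gist("If they mandate these vaccines, healthy people will get hurt."))
```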
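The acts-vs-recalls classification in RQ1 of the microaggressions entry relies on fine-tuned neural language models. The snippet below sketches the general recipe with Hugging Face Transformers; the checkpoint, hyperparameters, and two-example toy dataset are placeholders, since the paper's annotated corpus and exact models are not reproduced here.

```python
# Minimal sketch of a binary classifier for acts vs. recalls of racial
# microaggressions, using Hugging Face Transformers. The checkpoint, hyperparameters,
# and toy data are assumptions, not the paper's corpus or models.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

CHECKPOINT = "bert-base-uncased"  # placeholder; not necessarily the authors' LM

# Toy stand-in for the annotated corpus: text plus label (0 = act, 1 = recall).
data = Dataset.from_dict({
    "text": ["<post text labeled as an act>", "<post text labeled as a recall>"],
    "label": [0, 1],
})

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)


def tokenize(batch):
    """Convert raw text into fixed-length token IDs for the classifier."""
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)


tokenized = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="acts-vs-recalls",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
)
trainer.train()
```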
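The GIF study reports significant group differences in how often participants chose identity-matching GIFs. Purely as an illustration of how such a comparison could be tested, here is a toy chi-square example in Python; the counts are invented for demonstration, and the paper may well have used a different statistical model.

```python
# Toy illustration of testing whether identity-matching GIF choices differ by group.
# The counts below are invented for demonstration only; they are NOT the study's data,
# and a chi-square test is just one plausible analysis, not necessarily the authors'.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: participant group; columns: [chose matching GIF, chose non-matching GIF]
toy_counts = np.array([
    [120, 80],    # hypothetical group A (e.g., men)
    [90, 108],    # hypothetical group B (e.g., women)
])

chi2, p, dof, _ = chi2_contingency(toy_counts)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```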