Browsing by Author "Lee, Sang Won"
Now showing 1 - 20 of 36
- Applying Natural Language Processing and Deep Learning Techniques for Raga Recognition in Indian Classical Music
Peri, Deepthi (Virginia Tech, 2020-08-27)
In Indian Classical Music (ICM), the Raga is a musical piece's melodic framework. It encompasses the characteristics of a scale, a mode, and a tune, with none of them fully describing it, rendering the Raga a unique concept in ICM. The Raga provides musicians with a melodic fabric within which all compositions and improvisations must take place. Identifying and categorizing the Raga is challenging due to its dynamism and complex structure, as well as the polyphonic nature of ICM. Hence, Raga recognition, i.e., identifying the constituent Raga in an audio file, has become an important problem in music informatics with several known prior approaches. Advancing the state of the art in Raga recognition paves the way to improving other Music Information Retrieval tasks in ICM, including transcribing notes automatically, recommending music, and organizing large databases. This thesis presents a novel melodic pattern-based approach to recognizing Ragas by representing this task as a document classification problem, solved by applying a deep learning technique. A digital audio excerpt is hierarchically processed and split into subsequences and gamaka sequences to mimic a textual document structure, so our model can learn the resulting tonal and temporal sequence patterns using a Recurrent Neural Network. Although we train and test on these smaller sequences, we predict the Raga for the entire audio excerpt, achieving an accuracy of 90.3% on the Carnatic Music Dataset and 95.6% on the Hindustani Music Dataset, thus outperforming prior approaches in Raga recognition.
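The "train on subsequences, predict the whole excerpt" strategy above can be sketched as follows. This is a minimal illustrative toy, not the thesis's actual architecture: the hand-rolled vanilla RNN, the 12-token note vocabulary, and the placeholder Raga labels are all assumptions for demonstration, and the weights are random rather than trained.

```python
import numpy as np

N_NOTES = 12                   # chromatic pitch classes as a stand-in token vocabulary
HIDDEN = 8
RAGAS = ["Bhairavi", "Yaman"]  # hypothetical class labels

rng = np.random.default_rng(0)
Wxh = rng.normal(scale=0.1, size=(N_NOTES, HIDDEN))
Whh = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
Why = rng.normal(scale=0.1, size=(HIDDEN, len(RAGAS)))

def rnn_logits(token_seq):
    """Run a vanilla RNN over one note subsequence and return class logits."""
    h = np.zeros(HIDDEN)
    for t in token_seq:
        x = np.eye(N_NOTES)[t]            # one-hot encoding of the note token
        h = np.tanh(x @ Wxh + h @ Whh)    # recurrent state update
    return h @ Why

def predict_excerpt(subsequences):
    """Aggregate per-subsequence logits to label the entire excerpt,
    mirroring the 'train on pieces, predict the whole' idea."""
    logits = np.mean([rnn_logits(s) for s in subsequences], axis=0)
    return RAGAS[int(np.argmax(logits))]

label = predict_excerpt([[0, 2, 4, 5], [7, 9, 11, 0]])
```

In practice the thesis's model learns the weights from labeled audio and operates on a richer hierarchy (subsequences and gamaka sequences), but the aggregation step shown here is the key trick that lets a sequence model trained on fragments label a full recording.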
- ARCritique: Supporting Remote Design Critique of Physical Artifacts through Collaborative Augmented Reality
Li, Yuan; Lee, Sang Won; Bowman, Douglas A.; Hicks, David; Lages, Wallace S.; Sharma, Akshay (ACM, 2022-12-01)
Critique sessions are an essential educational activity at the center of many design disciplines, especially those involving the creation of physical mockups. Conventional approaches often require the students and the instructor to be in the same space to jointly view and discuss physical artifacts. However, in remote learning contexts, available tools (such as videoconferencing) are insufficient due to ineffective, inefficient spatial referencing. This paper presents ARCritique, a mobile Augmented Reality application that allows users to 1) scan physical artifacts, generate corresponding 3D models, and share them with distant instructors; 2) view the model simultaneously in a synchronized virtual environment with remote collaborators; and 3) point to and draw on the model synchronously to aid communication. We evaluated ARCritique with seven Industrial Design students and three faculty members, who used the app in a remote critique setting. The results suggest that direct support for spatial communication improves collaborative experiences.
- Backdrop Explorer: A Human-AI Collaborative Approach for Exploring Studio Backdrops in Civil War Portraits
Lim, Ken Yoong (Virginia Tech, 2023-06-14)
In historical photo research, the presence of painted backdrops has the potential to help identify subjects, photographers, locations, and events surrounding certain photographs. Yet, research processes around these backdrops are poorly documented, with no known tools to aid in the task. We propose a four-step human-AI collaboration workflow to support the discovery and clustering of these backdrops. Focusing on the painted backdrops of the American Civil War (1861 -- 1865), we present Backdrop Explorer, a content-based image retrieval (CBIR) system incorporating computer vision and novel user interactions. We evaluated Backdrop Explorer with nine users of diverse experience levels and found that all were able to use it effectively to find photos with similar backdrops. We also document current practices and pain points in Civil War backdrop research through user interviews. Finally, we discuss how our findings and workflow can be applied to other topics and domains.
- Behind the Counter: Exploring the Motivations and Perceived Effectiveness of Online Counterspeech Writing and the Potential for AI-Mediated Assistance
Kumar, Anisha (Virginia Tech, 2024-01-11)
In today's digital age, social media platforms have become powerful tools for communication, enabling users to express their opinions while also exposing them to various forms of hateful speech and content. While prior research has often focused on the efficacy of online counterspeech, little is known about people's motivations for engaging in it. Based on a survey of 458 U.S. participants, we develop and validate a multi-item scale for understanding counterspeech motivations, revealing that differing motivations shape counterspeech engagement between those who do and those who do not find counterspeech to be an effective mechanism for counteracting online hate. Additionally, our analysis explores people's perceived effectiveness of their self-written counterspeech to hateful posts, as influenced by individual motivations to engage in counterspeech and by demographic factors. Finally, we examine people's willingness to employ AI assistance, such as ChatGPT, in their counterspeech writing efforts. Our research provides insight into the factors that influence people's online counterspeech activity and perceptions, including the potential role of AI assistance in countering online hate.
- Combating Problematic Information Online with Dual Process Cognitive Affordances
Bhuiyan, MD Momen (Virginia Tech, 2023-08-04)
Dual process theories of mind, developed over the last decades, posit that humans use both heuristic or mental-shortcut (automatic) and analytical (reflective) reasoning while consuming information. Can such theories be used to support users' information consumption in the presence of problematic content in online spaces? To answer this, I merge these theories with the idea of affordances from HCI into the concept of dual process cognitive affordances, consisting of automatic affordances and reflective affordances. Using this concept, I built and tested a set of systems to address two categories of online problematic content: misinformation and filter bubbles. In the first system, NudgeCred, I use cognitive heuristics from the MAIN model to design automatic affordances for better credibility assessment of news tweets from mainstream and misinformative sources. In TransparencyCue, I show the promise of value-centered automatic affordance design inside news articles for differentiating content quality. To encourage information consumption outside users' ideological filter bubbles, in NewsComp, I use comparative annotation to design reflective affordances that enable active engagement with stories from opposing-leaning sources. In OtherTube, I use parasocial interaction, that is, experiencing an information feed through the eyes of someone else, to design a reflective affordance that enables users to recognize filter bubbles in their YouTube recommendation feeds. Each system shows varying degrees of success and outlines considerations for cognitive affordance design. Overall, this thesis showcases the utility of design strategies centered on a dual process model of information cognition to combat problematic information online.
- Context-Aware Sit-Stand Desk for Promoting Healthy and Productive Behaviors
Hu, Donghan; Bae, Joseph; Lim, Sol; Lee, Sang Won (ACM, 2023-10-29)
To mitigate the risk of chronic diseases caused by prolonged sitting, sit-stand desks are promoted as an effective intervention to foster healthy behaviors among knowledge workers by allowing periodic posture switching between sitting and standing. However, conventional systems either let users switch modes manually or, as some research has explored, send automated notifications at pre-set time intervals. While regular notifications can promote healthy behaviors, they can also act as external interruptions that hinder working productivity; notably, knowledge workers are known to be reluctant to change their physical postures when concentrating. To address these issues, we propose considering work context, inferred from on-screen activities, to encourage computer users to alternate their postures at moments that minimize disruption, promoting healthy and productive behaviors. To that end, we are building a context-aware sit-stand desk and have completed two modules: an application that monitors users' ongoing computer activities and a sensor module that measures the height of the sit-stand desk for data collection. The collected data include computer activities, measured desk height, and users' willingness to switch to standing mode, and will be used to build an LSTM prediction model that suggests optimal time points for posture changes, accompanied by appropriate desk heights. In this work, we review relevant prior research, outline ongoing deployment efforts, and present our plan to validate the effectiveness of our approach via user studies.
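The data pipeline described above, pairing screen-activity context with desk state and a willingness label, might be organized along these lines. This is a hypothetical sketch: the `DeskSample` fields, the activity categories, and the `low_disruption` heuristic (a placeholder standing in for the planned LSTM model) are all assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class DeskSample:
    app_category: str        # e.g. "writing", "video_call", "browsing" (illustrative)
    desk_height_cm: float    # measured by the sensor module
    willing_to_stand: bool   # self-reported label used to train the predictor

def low_disruption(sample: DeskSample) -> bool:
    """Placeholder for the LSTM: only suggest a posture switch during
    activities assumed to require less focus."""
    return sample.app_category in {"browsing", "email"}

def suggest(samples):
    """Return indices of samples where a stand prompt would be sent."""
    return [i for i, s in enumerate(samples) if low_disruption(s)]

prompts = suggest([
    DeskSample("video_call", 72.0, False),  # focused: no prompt
    DeskSample("browsing", 72.0, True),     # low focus: prompt here
])
```

A trained sequence model would replace `low_disruption`, consuming a window of recent `DeskSample`s rather than a single snapshot, which is why an LSTM is a natural fit for the authors' stated plan.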
- A Cyber-Physical System (CPS) Approach to Support Worker Productivity based on Voice-Based Intelligent Virtual Agents
Linares Garcia, Daniel Antonio (Virginia Tech, 2022-08-16)
The Architecture, Engineering, and Construction (AEC) industry is currently challenged by low productivity trends and labor shortages. Academia and industry alike have invested in developing solutions to this pressing issue, most of which aim to modernize the industry through digitalization approaches such as cyber-physical systems (CPS). In this direction, various research works have developed methods to capture information from construction environments and elements and to provide monitoring capabilities that measure construction productivity at multiple levels. At the root of construction productivity, productivity at the worker level is deemed critical. As a result, previous works explored monitoring the productivity of construction workers and resources to address the industry's productivity problems. However, productivity trends are not promising and show a need to address productivity issues more rigorously, and labor shortages have exacerbated the need to increase the productivity of the current workforce. Active means to address productivity have been explored in recent years: previous research took advantage of CPS and developed systems that sense construction workers' actions and environment and enable interaction with workers to render productivity improvements. One viable solution is providing on-demand, activity-related information to workers while at work, decreasing the need to manually seek information from different sources, including supervisors, thereby improving productivity. Construction workers whose activities involve visual and manual limitations especially need attention, as seeking information can jeopardize their safety.
Multiple labor trades, such as plumbers, steel workers, and carpenters, fall within this worker classification. These workers rely on knowledge gathered from construction project documentation and databases, but have difficulty accessing this information while doing their work. Research works have explored knowledge retrieval systems that give construction workers access to project data sources through multiple methods, including information booths, mobile devices, and augmented reality (AR). However, these solutions do not address this category of workers' need to receive on-demand, activity-related information during their work without negatively impacting their safety. This research focuses on voice as an effective modality, most appropriate for construction workers whose activities impose visual and manual limitations. To this end, first, a voice-based solution is developed that supports workers' productivity by providing access to project knowledge available in Building Information Modeling (BIM) data sources. The effect of the selected modality on these workers' productivity is then evaluated using multiple user studies. The work presented in this dissertation is structured as follows. First, in Chapter 2, a literature review identifies means to support construction workers and examines how integration with BIM has been done in previous research; this chapter identifies challenges in incorporating human factors in previous systems and opportunities for seamless integration of workers into BIM practices. Chapter 3 explores voice-based assistance as the most appropriate means to provide knowledge to workers while they perform their activities. As such, Chapter 3 presents the first prototype of a voice-based intelligent virtual agent (VIVA) and focuses on evaluating human factors and testing the performance of voice as a modality for worker support.
VIVA was tested in a user study involving a simulated construction scenario, and the performance achieved with VIVA was compared with the baseline currently used in construction projects for receiving activity-related information, i.e., blueprints. Results from this assessment evidenced productivity improvements for users of VIVA over the baseline. Finally, Chapter 4 presents an updated version of VIVA that provides an automatic, real-time link to BIM project data and delivers knowledge to workers through voice. This system was developed on web platforms, allowing easier development and deployment and access to more devices in the future. This study contributes to productivity improvement in the AEC industry by empowering construction workers with on-demand access to project information, delivered through voice as a method that does not jeopardize workers' safety or interrupt their activities. This research contributes to the body of knowledge by developing an in-depth study of the effect of voice-based support systems on worker productivity, enabling real-time BIM-worker integration, and developing a working worker-level productivity support solution for construction workers whose activities limit them in manually accessing project knowledge.
- Designing Human-AI Collaborative Systems for Historical Photo Identification
Mohanty, Vikram (Virginia Tech, 2023-08-30)
Identifying individuals in historical photographs is important for preserving material culture, correcting historical records, and adding economic value. Historians, antiques dealers, and collectors often rely on manual, time-consuming approaches. While Artificial Intelligence (AI) offers potential solutions, it is not widely adopted due to a lack of specialized tools and inherent inaccuracies and biases. In my dissertation, I address this gap by combining the complementary strengths of human intelligence and AI. I introduce Photo Sleuth, a novel person identification pipeline that combines crowdsourced expertise with facial recognition, supporting users in identifying unknown portraits from the American Civil War era (1861--65). Despite successfully identifying numerous unknown photos, users often face the 'last-mile problem': selecting the correct match(es) from a shortlist of high-confidence facial recognition candidates while avoiding false positives. To assist experts, I developed Second Opinion, an online tool that employs a novel crowdsourcing workflow, inspired by cognitive psychology, to effectively filter out up to 75% of facial recognition's false positives. Yet, as AI models continually evolve, changes in the underlying model can impact user experience in such crowd-expert-AI workflows. I conducted an online study to understand user perceptions of changes in facial recognition models, especially in the context of historical person identification. Our findings showed that while human-AI collaborations were effective in identifying photos, they also introduced false positives. To reduce these misidentifications, I built Photo Steward, an information stewardship architecture that employs a deliberative workflow for validating historical photo identifications.
Building on this foundation, I introduced DoubleCheck, a quality assessment framework that combines community stewardship and comprehensive provenance information, for helping users accurately assess photo identification quality. Through my dissertation, I explore the design and deployment of human-AI collaborative tools, emphasizing the creation of sustainable online communities and workflows that foster accurate decision-making in the context of historical photo identification.
- The Effects of Incorrect Occlusion Cues on the Understanding of Barehanded Referencing in Collaborative Augmented Reality
Li, Yuan; Hu, Donghan; Wang, Boyuan; Bowman, Douglas A.; Lee, Sang Won (Frontiers, 2021-07-01)
In many collaborative tasks, the need for joint attention arises when one of the users wants to guide others to a specific location or target in space. If the collaborators are co-located and the target position is in close range, it is almost instinctual for users to refer to the target location by pointing with their bare hands. While such pointing gestures can be efficient and effective in real life, performance will be impacted if the target is in augmented reality (AR), where depth cues like occlusion may be missing if the pointer's hand is not tracked and modeled in 3D. In this paper, we present a study utilizing head-worn AR displays to examine the effects of incorrect occlusion cues on spatial target identification in a collaborative barehanded referencing task. We found that participants' performance in AR was reduced compared to a real-world condition, but also that they developed new strategies to cope with the limitations of AR. Our work also identified mixed results regarding the effect of spatial relationships between users.
- Effects of vibrotactile feedback on yoga practice
Islam, Md Shafiqul; Lee, Sang Won; Harden, Samantha M.; Lim, Sol (Frontiers, 2022-10-31)
Participating in physical exercise through remote platforms is challenging for people with vision impairment, creating a need for nonvisual feedback that improves the performance and safety of remote exercise. In this study, the effects of different nonvisual types of feedback (verbal, vibrotactile, and combined verbal and vibrotactile) for movement correction were tested with 22 participants with normal vision to investigate the feasibility of the feedback system, and pilot tested with four participants with impaired vision. The study with normal-vision participants found that nonvisual feedback successfully corrected an additional 11.2% of movements compared to the no-feedback condition, and vibrotactile feedback was the most time-efficient of the feedback types in correcting poses. Participants with normal vision most strongly preferred multimodal feedback, and in the pilot test, participants with impaired vision showed a similar trend. Overall, the study found that providing vibrotactile (or multimodal) feedback during physical exercise is an effective way of improving exercise performance. Implications for future training platform development with vibrotactile or multimodal feedback for people with impaired vision are discussed.
- An Exploratory Study of Involving Parents in E-book Joint Reading with Voice Agents
Vargas Diaz, Daniel Alfredo (Virginia Tech, 2024-06-06)
Parent-child interactions during joint reading play an important role in young children's cognitive and language development. However, contemporary digital book formats, such as e-books or audiobooks, often overlook the parent's role in reading the text, by either dubbing voice narration over it or reading it aloud automatically. With the advancement and prevalence of voice-based conversational artificial intelligence (AI) agents, having an AI read an e-book emerges as a novel reading experience, yet it similarly reduces the role of parents in the reading process. When reading becomes less of a joint activity between children and parents, the potential benefits children gain from it may diminish. In this study involving 11 parent-child pairs, we explored how voice agents (VAs) could be used to create an interactive digital space to 1) promote parental engagement in joint e-book reading with children and 2) enhance parents' and children's joint reading experiences. We developed and evaluated TaleMate, an interactive joint reading app that allows parents and children ages 3-6 years to assign different AI voices to the characters in a book while enabling parents to embody one of the characters and read the book alongside the voice agents. We found that the system supported children's engagement and story comprehension. Parents reported that they found value in the interactivity of the system and enjoyed a participatory, joint reading experience in which both they and their children could choose which characters to embody. These findings offer insights into design considerations for researchers interested in developing applications that facilitate collaborative reading experiences involving parents, children, and voice agents.
- Exploring the Effectiveness of Time-lapse Screen Recording for Self-Reflection in Work Context
Hu, Donghan; Lee, Sang Won (ACM, 2024-05-11)
Effective self-tracking in work contexts empowers individuals to explore and reflect on past activities. Recordings of computer activities contain rich metadata that can offer valuable insight into users' previous tasks and endeavors. However, presenting a simple summary of time usage may not effectively engage users with their data, because it is not contextualized and users may not know what to do with it. This work explores time-lapse videos as a visual-temporal medium to facilitate self-reflection among workers in productivity contexts. To explore this space, we conducted a four-week study (n = 15) to investigate how the history of a computer screen's states can help workers recall previous undertakings and gain comprehensive insights via self-reflection. Our results support that watching time-lapse videos can enhance self-reflection more effectively than traditional self-tracking tools by providing contextual clues about users' past activities. The experience with both traditional tools and time-lapse videos resulted in increased productivity. Additionally, time-lapse videos assist users in cultivating a positive understanding of their work. We discuss how multimodal cues, such as time-lapse videos, can complement personal informatics tools.
- Glanceable AR: Towards a Pervasive and Always-On Augmented Reality Future
Lu, Feiyu (Virginia Tech, 2023-07-06)
Augmented reality head-worn displays (AR HWDs) have the potential to assist personal computing and the acquisition of everyday information. With advancements in hardware and tracking, these devices are becoming increasingly lightweight and powerful. They could eventually have the same form factor as normal pairs of eyeglasses, be worn all day, and overlay information pervasively on top of the real world anywhere and anytime to continuously assist people's tasks. However, unlike traditional mobile devices, AR HWDs are worn on the head and always visible. If designed without care, the displayed virtual information could be distracting, overwhelming, and take the user's attention away from important real-world tasks. In this dissertation, we research methods for appropriate information displays and interactions with future all-day AR HWDs by seeking answers to four questions: (1) how to mitigate distractions of AR content to the users; (2) how to prevent AR content from occluding the real-world environment; (3) how to support scalable on-the-go access to AR content; and (4) how everyday users perceive using AR systems for daily information acquisition tasks. Our work builds upon a theory we developed called Glanceable AR, in which digital information is displayed outside the central field of view of the AR display to minimize distractions, but can be accessed through a quick glance. Through five projects covering seven studies, this work provides theoretical and empirical knowledge to prepare us for a pervasive yet unobtrusive everyday AR future, in which overlaid AR information is easily accessible, non-invasive, responsive, and supportive.
- Helping job seekers prepare for technical interviews by enabling context-rich interview feedback
Lu, Yi (Virginia Tech, 2024-06-11)
Technical interviews have become a popular method for recruiters in the tech industry to assess job candidates' proficiency in both soft skills and technical skills as programmers. However, these interviews can be stressful and frustrating for interviewees. One significant cause of this negative experience is the lack of feedback, which makes it difficult for job seekers to improve progressively by participating in technical interviews. Although open platforms like Leetcode allow job seekers to practice their technical proficiency, resources for conducting mock interviews to practice soft skills like communication are limited and costly for interviewees. To address this, we ran mock interviews between software engineers and job seekers to investigate how professional interviewers would provide feedback in a mock interview and what difficulties they face when interviewing job seekers. With insights from these formative studies, we developed a new system for technical interviews that aims to help interviewers conduct interviews with less cognitive load and provide context-rich feedback. An evaluation study of the system's usability further revealed interviewers' unresolved cognitive load, underscoring the need for further improvement to facilitate easier interview processes and enable peer-to-peer interview practice.
- Increase Driving Situation Awareness and In-vehicle Gesture-based Menu Navigation Accuracy with Heads-Up Display
Cao, Yusheng (Virginia Tech, 2023-04)
More and more novel functions are being integrated into vehicle infotainment systems to allow individuals to perform secondary tasks with high accuracy and low accident risk; mid-air gesture interaction is one of them. This thesis designed and tested a novel interface to address a specific issue caused by this method of interaction: visual distraction within the car. In this study, a Heads-Up Display (HUD) was integrated with a gesture-based menu navigation system to allow drivers to see menu selections without looking away from the road. An experiment was conducted to investigate the potential of this system to improve drivers' driving performance, situation awareness, and gesture interactions. The thesis recruited 24 participants to test the system; participants provided subjective feedback about using the system as well as objective performance data. This thesis found that the HUD significantly outperformed the Heads-Down Display (HDD) in participants' preference, perceived workload, level 1 situation awareness, and secondary-task performance. However, these gains came at the cost of poorer driving performance and relatively longer visual distraction. This thesis provides directions for future research on improving the overall user experience while the driver interacts with an in-vehicle gesture interaction system.
- Integrating Traditional Input Devices to Support Rapid Ideation in an Augmented-reality-based Brainstorming
Phan, Tam; Bowman, Douglas A.; Lee, Sang Won (ACM, 2022-12-01)
Augmented reality (AR) has the potential to address the limitations of in-person brainstorming by enabling digitization and remote collaboration while preserving the spatial relationship between participants and their environments. However, current AR input methods are not sufficient to support rapid ideation compared to the nondigital tools used in brainstorming: pen, paper, sticky notes, and whiteboards. To help users create comprehensible notes rapidly in AR-based collaborative brainstorming, we developed IdeaSpace, a system that allows users to use traditional tools like pens and sticky notes. We evaluated this input method through a user study (N=22) assessing the efficiency, usability, and comprehensibility of the approach. Our evaluation indicates that the IdeaSpace input method outperforms the baseline method on all metrics.
- Investigating Asymmetric Collaboration and Interaction in Immersive Environments
Enriquez, Daniel (Virginia Tech, 2024-01-23)
With the commercialization of virtual/augmented reality (VR/AR) devices, there is increasing interest in combining immersive and non-immersive devices (e.g., desktop computers, mobile devices) for asymmetric collaboration. While such asymmetric settings have been examined in social platforms, questions surrounding collaborative view dimensionalities in data-driven decision-making and interaction from non-immersive devices remain under-explored. A crucial inquiry arises: although presenting a consistent 3D virtual world on both immersive and non-immersive platforms has been common practice in social applications, does the same guideline apply to laying out data, or should data placement be optimized locally according to each device's display capacity? To this end, a user study was conducted to provide empirical insights into the user experience of asymmetric collaboration in data-driven decision-making. The study tested practical dimensionality combinations between PC and VR, resulting in three conditions: PC2D+VR2D, PC2D+VR3D, and PC3D+VR3D. The results revealed a preference for PC2D+VR3D, while PC2D+VR2D led to the quickest task completion. Similarly, mobile devices have become an inclusive alternative to head-worn displays in virtual reality (VR) environments, enhancing accessibility and allowing cross-device collaboration. Object manipulation techniques in mobile Augmented Reality (AR) have typically been evaluated at table-top scale, and we lack an understanding of how these techniques perform in room-scale environments. Two studies, each with 30 participants, analyzed object translation tasks to investigate how different techniques impact usability and performance for room-scale mobile VR object translation.
Results indicated that the Joystick technique, which allowed translation relative to the user's perspective, was the fastest and most preferred, with no difference in precision. These findings provide insight for designing collaborative, asymmetric VR environments.
- Investigating the Effects of Nudges for Facilitating the Use of Trigger Warnings and Content Warnings
Altland, Emily Caroline (Virginia Tech, 2024-06-27)
Sensitive content posted on social media can trigger past traumatic memories in viewers. Strict content moderation and blocking/reporting features do not work when triggers are nuanced and posts may not violate site guidelines. Viewer-side interventions exist to help filter and hide certain content, but these put all the responsibility on the viewer and typically act as 'aftermath interventions'. Trigger and content warnings offer a unique solution, giving viewers the agency to scroll past content they may want to avoid. However, posters lack education and awareness about how to add a warning and which topics may require one. We conducted this study to determine whether poster-side interventions, such as a nudge algorithm that prompts adding warnings to sensitive posts, would increase social media users' knowledge and understanding of how and when to add trigger and content warnings. To investigate the effectiveness of a nudge algorithm, we designed the TWIST (Trigger Warning Includer for Sensitive Topics) app. The TWIST app scans tweet content to determine whether a TW/CW is needed and, if so, nudges the poster to add one, with an example of what it may look like. We then conducted a four-part mixed-methods study with 88 participants. Our key findings include: (1) nudging social media users to add TW/CWs educates them about triggering topics and raises their awareness when posting in the future; (2) social media users can learn how to add a trigger/content warning through a nudge app; (3) researchers grew in understanding of how a nudge algorithm like TWIST can change people's behavior and perceptions; and (4) we provide empirical evidence of the effectiveness of such interventions, even in short-term use.
- iThem: Programming Internet of Things Beyond Trigger-Action Pattern
Wang, Marx; Manesh, Daniel; Hu, Ruipu; Lee, Sang Won (ACM, 2022-10-29)
With emerging technologies bringing Internet of Things (IoT) devices into domestic environments, trigger-action programming such as IFTTT, with its simple if-this-then-that pattern, provides an effective way for end-users to connect fragmented intelligent services and program their own smart home/work space automation. While the simplicity of trigger-action programming can be effective for non-programmers, with its straightforward concepts and graphical user interface, it does not allow the algorithmic expressivity of a programming language. For instance, the simple if-this-then-that structure cannot cover complex algorithms that arise from real-world scenarios involving multiple conditions or keeping track of a sequence of conditions (e.g., incrementing counters, triggering one action only if two conditions are both true). In this exploratory work, we take an alternative approach by creating a programmable channel between application programming interfaces (APIs), which allows programmers to preserve state and use it to write complex algorithms. We propose iThem, which stands for "intelligence of them" (internet of things), allowing programmers to author complex algorithms that connect different IoT services and fully unleash the freedom of a general programming language. In this poster, we share the design, development, and ongoing validation progress of iThem, which piggybacks on the existing programmable IoT system IFTTT and allows for a programmable channel that connects triggers and actions in IFTTT with versatility.
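The two motivating examples in the abstract, counting triggers and requiring two conditions to both hold, can be sketched as a stateful channel. This is a hypothetical illustration of the idea, not iThem's actual code or API: the `Channel` class and the trigger/action names are invented for demonstration.

```python
class Channel:
    """A programmable channel that, unlike plain if-this-then-that,
    keeps state between trigger invocations."""

    def __init__(self):
        self.state = {}    # persists across triggers
        self.actions = []  # actions emitted toward the action side

    def on_trigger(self, name):
        s = self.state
        if name == "motion_detected":
            # counting: fire only on every third motion event
            s["motions"] = s.get("motions", 0) + 1
            if s["motions"] >= 3:
                self.actions.append("alert_owner")
                s["motions"] = 0
        elif name == "door_opened":
            s["door_open"] = True
        elif name == "lights_off":
            # conjunction of two conditions arriving as separate triggers
            if s.get("door_open"):
                self.actions.append("turn_on_porch_light")

ch = Channel()
for _ in range(3):
    ch.on_trigger("motion_detected")  # counter reaches 3 -> alert
ch.on_trigger("door_opened")
ch.on_trigger("lights_off")           # both conditions met -> porch light
```

Because each trigger handler can read and write `self.state`, arbitrary logic in a general-purpose language becomes expressible, which is exactly the expressivity gap between plain trigger-action rules and the programmable-channel approach the poster describes.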
- Mobile Devices for Facilitating Group Fitness and Visualization of Fitness Data
Liu, Shuai (Virginia Tech, 2020-05-29)
Lack of physical activity is a major problem contributing to disease and poor health. Nowadays, mobile fitness apps play important roles in encouraging and facilitating people to do more physical exercise. Many apps focus primarily on individual behavioral strategies, such as displaying individual step counts, to encourage physical activity. Such strategies help evoke internal motivations such as peer recognition and competitive achievement. However, these apps usually de-emphasize or ignore interpersonal behavioral strategies, such as team rank, even though group-based strategies are very important for aspects like peer recognition and can facilitate more physical activity. This research explores design strategies for group-based dynamic approaches to encouraging physical activity in small groups. The development effort takes into account the different roles of mobile devices and laptops, and the evaluation explored the effectiveness of the design.