Browsing by Author "Knapp, R. Benjamin"
Now showing 1-20 of 41
- AffecTech: An Affect-Aware Interactive AV Artwork. Coghlan, Niall; Jaimovich, Javier; Knapp, R. Benjamin; O’Brien, Donal; Ortiz, Miguel A. (ISEA International, 2009). New developments in real-time computing and body-worn sensor technology allow us to explore not just visible gestures using inertial sensors, but also invisible changes in an individual’s physiological state using bio-sensors (Kim & André 2008). This creates an opportunity for a more intimate interaction between the observer and technology-based art (Gonsalves 2008). We present a technical overview of the AffecTech system, a bio-signal based interactive audiovisual installation commissioned as part of the pre-ISEA symposium in November 2008. Observers were invited to sit on one of two sensor-enhanced chairs (Coghlan & Knapp 2008), which transmitted physiological data about the occupant to a central control system. This data was used to control and modulate interactive visuals, live video feeds and a surround-sound score, with events and interactions dependent on the observers’ affective/emotional state and the disparity or similarity between the bio-signals of the chairs’ occupants. This technical overview is followed by an examination of the outcomes of the project from both the artistic and technical viewpoints, with recommendations for modifications in future implementations.
- Affective Feedback in a Virtual Reality Based Intelligent Supermarket. Saha, Deba Pratim; Martin, Thomas L.; Knapp, R. Benjamin (ACM, 2017). The probabilistic nature of the inferences in a context-aware intelligent environment (CAIE) renders them vulnerable to erroneous decisions resulting in wrong services. Learning to recognize a user’s negative reactions to such wrong services will enable a CAIE to anticipate a service’s appropriateness. We propose a framework for continuous measurement of physiology to infer a user’s negative emotions arising from receiving wrong services, thereby implementing an implicit-feedback loop in the CAIE system. To induce such negative emotions, in this paper we present a virtual reality (VR) based experimental platform while collecting real-time physiological data from ambulatory wearable sensors. Results from the electrodermal activity (EDA) data analysis reveal patterns that correlate with known features of negative emotions, indicating the possibility of inferring service appropriateness from a user’s reactions to a service, thereby closing an implicit-feedback loop for the CAIE.
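The abstract above reports EDA patterns that correlate with negative emotions but includes no analysis code. A minimal sketch of one common way such features are extracted, assuming numpy/scipy; the tonic-filter cutoff and SCR thresholds here are illustrative choices, not the authors' values:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def eda_features(eda, fs=4.0):
    """Split an EDA trace (microsiemens, sampled at fs Hz) into tonic
    and phasic components and count skin-conductance responses (SCRs),
    a common arousal proxy. Cutoff and thresholds are illustrative."""
    # Tonic level: a very-low-pass filter approximates the slow baseline.
    b, a = butter(2, 0.05 / (fs / 2.0), btype="low")
    tonic = filtfilt(b, a, eda)
    phasic = eda - tonic
    # SCRs: phasic peaks above 0.01 uS, at least 1 s apart.
    peaks, props = find_peaks(phasic, height=0.01, distance=int(fs))
    return {
        "scr_rate_per_min": len(peaks) / (len(eda) / fs / 60.0),
        "mean_tonic_uS": float(np.mean(tonic)),
        "mean_scr_amp_uS": float(np.mean(props["peak_heights"])) if len(peaks) else 0.0,
    }
```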
- Biosignal-Driven Art: Beyond Biofeedback. Ortiz, Miguel A.; Coghlan, Niall; Jaimovich, Javier; Knapp, R. Benjamin (CMMAS, 2011). Biosignal monitoring in interactive arts, although present for over forty years, remains a relatively little-known field of research within the artistic community as compared to other sensing technologies. Since the early 1960s, an ever-increasing number of artists have collaborated with neuroscientists, physicians and electrical engineers in order to devise means that allow for the acquisition of the minuscule electrical potentials generated by the human body. This has enabled direct manifestations of human physiology to be incorporated into interactive artworks. This paper presents an introduction to this field of artistic practice and scientific research that uses human physiology as its main element. A brief introduction to the main concepts and history of biosignal-driven art is followed by a review of various artworks and scientific enquiry developed by the authors. This aims to give a complete overview of the various strategies developed for biosignal-driven interactive art.
- Contagion of Physiological Correlates of Emotion between Performer and Audience: An Exploratory Study. Jaimovich, Javier; Coghlan, Niall; Knapp, R. Benjamin (2010). Musical and performance experiences are often described as evoking powerful emotions, both in the listener/observer and the player/performer. There is a significant body of literature describing these experiences, along with related work examining physiological changes in the body during music listening and the physiological correlates of emotional state. However, there are still open questions as to how and why emotional responses may be triggered by a performance, how audiences may be influenced by a performer’s mental or emotional state, and what effect the presence of an audience has on performers. We present a pilot study and some initial findings of our investigations into these questions, utilising a custom software and hardware system we have developed. Although this research is still at a pilot stage, our initial experiments point towards significant correlation between the physiological states of performers and audiences, and here we present the system, the experiments and our preliminary data.
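The study's correlation analysis is not given at this level of detail; as an illustrative sketch of the kind of analysis implied, the following computes the Pearson correlation between two equal-length physiological time series over a range of lags (the lag window is an assumption, since emotional contagion need not be instantaneous):

```python
import numpy as np

def max_lagged_correlation(performer, audience, fs=4.0, max_lag_s=5.0):
    """Pearson correlation between two equal-length physiological time
    series, searched over +/- max_lag_s seconds of lag. Returns the
    strongest correlation and the lag (in samples) at which it occurs."""
    max_lag = int(max_lag_s * fs)
    best_r, best_lag = 0.0, 0
    for lag in range(-max_lag, max_lag + 1):
        # Shift one series against the other, keeping equal lengths.
        a = performer[max(0, lag):len(performer) + min(0, lag)]
        b = audience[max(0, -lag):len(audience) + min(0, -lag)]
        r = np.corrcoef(a, b)[0, 1]
        if abs(r) > abs(best_r):
            best_r, best_lag = float(r), lag
    return best_r, best_lag
```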
- Creating a Network of Integral Music Controllers. Knapp, R. Benjamin; Cook, Perry R. (NIME, 2006). In this paper, we describe the networking of multiple Integral Music Controllers (IMCs) to enable an entirely new method for creating music by tapping into the composite gestures and emotions of not just one, but many performers. The concept and operation of an IMC is reviewed, as well as its use in a network of IMC controllers. We then introduce a new technique of Integral Music Control by assessing the composite gesture(s) and emotion(s) of a group of performers through the use of a wireless mesh network. The Telemuse, an IMC designed precisely for this kind of performance, is described, and its use in a new musical performance project under development by the authors is discussed.
- Creating Biosignal Algorithms for Musical Applications from an Extensive Physiological Database. Jaimovich, Javier; Knapp, R. Benjamin (NIME, 2015). Previously, the design of algorithms and parameter calibration for biosignal music performances has been based on testing with a small number of individuals - in fact, usually the performers themselves. This paper uses the data collected from over 4000 people to begin to create a truly robust set of algorithms for heart rate and electrodermal activity measures, as well as an understanding of how their calibration varies by individual.
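The paper's actual algorithms are not reproduced in the abstract. A minimal sketch of the kind of self-calibrating measure such a database lets one validate: mapping raw EDA onto a 0-1 value via a running min/max window. Both the window length and the per-user windowed normalization itself are illustrative assumptions:

```python
import numpy as np

def normalized_arousal(eda, fs=4.0, window_s=120.0):
    """Map raw EDA to a self-calibrated 0-1 value using a running
    min/max window, so the same mapping works across individuals with
    very different baselines. Window length is illustrative."""
    w = int(window_s * fs)
    out = np.zeros(len(eda))
    for i in range(len(eda)):
        seg = eda[max(0, i - w):i + 1]          # trailing window
        lo, hi = seg.min(), seg.max()
        out[i] = 0.0 if hi == lo else (eda[i] - lo) / (hi - lo)
    return out
```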
- Design of a Mobile Brain Training Tool for Seniors: Motivational Requirements. O’Brien, Donal; Knapp, R. Benjamin; Thompson, Oonagh; Craig, David; Barrett, Suzanne (ICST, 2012). The overall goal of this project is to design, develop, and validate a mobile phone-based ‘brain training’ software suite targeted at senior users, using iterative person-centered design methodologies, that will permit a subsequent clinical trial of cognitive stimulation efficacy known as the SONIC²S Study (Stirling-Oregon-Northern Ireland-Chicago Cognitive Stimulation Study). The SONIC²S Study will be a long-term (c. 15-year), very large-scale (n=12,000), embedded clinical trial that aims to definitively establish whether or not brain training acts to prevent dementia or cognitive decline. It is anticipated that participant compliance in such a study will be a significant concern. This study reports on a series of focus groups, usability studies and field trials which sought to identify the key motivational factors influencing seniors’ engagement with mobile brain-training technology, in order to inform the design of a bespoke tool which is acceptable and enjoyable to target users.
- Design of a Wearable Two-Dimensional Joystick as a Muscle-Machine Interface Using Mechanomyographic Signals. Saha, Deba Pratim (Virginia Tech, 2013-11-12). Finger gesture recognition using glove-like interfaces is very accurate for sensing individual finger positions by employing a gamut of sensors. However, for the same reason, such interfaces are also costly, cumbersome and unaesthetic for use in artistic scenarios such as gesture-based music composition platforms like Virginia Tech's Linux Laptop Orchestra. Wearable computing has shown promising results in increasing portability as well as enhancing proprioceptive perception of the wearer's body. In this thesis, we present a proof of concept for a novel muscle-machine interface that interprets human thumb motion as a two-dimensional joystick using mechanomyographic signals. Infrared camera based systems such as Microsoft Digits and ultrasound sensor based systems such as Chirp Microsystems' Chirp gesture recognizers are elegant solutions, but have line-of-sight sensing limitations. Here, we present a low-cost, wearable joystick designed as a wristband which captures muscle sounds, also called mechanomyographic signals. The interface learns from the user's thumb gestures and interprets these motions as one of four kinds of thumb movements. We obtained an overall classification accuracy of 81.5% for all motions and 90.5% on a modified metric. Results obtained from the user study indicate that a mechanomyography-based wearable thumb joystick is a feasible design idea worthy of further study.
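The thesis's classifier and feature set are not given in the abstract; below is a hedged sketch of a typical pipeline for classifying windowed mechanomyographic frames into the four thumb motions, assuming numpy and scikit-learn (the feature choices and the random-forest model are illustrative, not the thesis's design):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LABELS = ["up", "down", "left", "right"]  # the four thumb motions

def mmg_features(frame):
    """Time-domain features per channel for one windowed MMG frame of
    shape (samples, channels): RMS energy, mean absolute value, and
    zero-crossing rate. An illustrative feature set."""
    rms = np.sqrt(np.mean(frame ** 2, axis=0))
    mav = np.mean(np.abs(frame), axis=0)
    zcr = np.mean(np.abs(np.diff(np.sign(frame), axis=0)) > 0, axis=0)
    return np.concatenate([rms, mav, zcr])

def train_classifier(frames, labels):
    """Fit a classifier on labelled frames; labels index into LABELS."""
    X = np.stack([mmg_features(f) for f in frames])
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
```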
- Designing Display Ecologies for Visual Analysis. Chung, HaeYong (Virginia Tech, 2015-05-07). The current proliferation of connected displays and mobile devices, from smartphones and tablets to wall-sized displays, presents a number of exciting opportunities for information visualization and visual analytics. When a user employs heterogeneous displays collaboratively to achieve a goal, they form what is known as a display ecology. A display ecology enables multiple displays to function in concert within a broader technological environment to accomplish tasks and goals. However, since information and tasks are scattered and disconnected among separate displays, one of the inherent challenges of visual analysis in display ecologies is enabling users to seamlessly coordinate, connect and integrate information across displays. This research addresses these challenges through the creation of interaction and visualization techniques and systems for display ecologies in order to support sensemaking with visual analysis. This dissertation explores essential visual analysis activities and design considerations in order to inform the design of display ecologies for visual analysis. Based on the identified design considerations, we designed and developed two visual analysis systems. First, VisPorter supports intuitive gesture interactions for sharing and integrating information in a display ecology. Second, Spatially Aware Visual Links (SAViL) is a cross-display visual link technique capable of guiding the user's attention to relevant information across displays. It also enables the user to visually connect related information across displays, facilitating the synthesis of information scattered over separate displays and devices. The techniques described herein help users transform the multiple displays of a display ecology into a more powerful environment for visual analysis and sensemaking.
- Determining Correlation Between Video Stimulus and Electrodermal Activity. Tasooji, Reza (Virginia Tech, 2018-08-06). With the growth of wearable devices capable of measuring physiological signals, affective computing is becoming more popular and may gradually displace purely cognitive approaches. One such physiological signal is the electrodermal activity (EDA) signal. We explore how a video stimulus intended to arouse fear affects the EDA signal. To better understand the EDA signal, two different media, a scene from a movie and a scene from a video game, were selected to arouse fear. We conducted a user study with 20 participants, analyzed the differences between the two media, and propose a method capable of detecting the highlights of the stimulus using only EDA signals. The study results show no significant differences between the two media, except that users are more engaged with the content of the video game. From the gathered data, we propose a similarity measurement method for clustering different users based on how similarly they reacted to different highlights. The results show that for a 300-second stimulus, using a window size of 10 seconds, our approach for detecting highlights of the stimulus achieves a precision of 1.0 for both media, and F1 scores of 0.85 and 0.84 for the movie and the video game respectively.
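The abstract gives the window size (10 s) but not the detector itself; what follows is a speculative reconstruction of a windowed EDA highlight detector, where the first-difference scoring and the k-sigma threshold are assumptions rather than the thesis's method:

```python
import numpy as np

def detect_highlights(eda, fs, window_s=10.0, k=2.0):
    """Flag stimulus highlights as non-overlapping windows whose mean
    EDA first-difference exceeds the recording mean by k standard
    deviations. Returns (start_s, end_s) pairs."""
    d = np.diff(eda)                  # first difference ~ phasic change
    w = int(window_s * fs)
    scores = np.array([d[i:i + w].mean() for i in range(0, len(d) - w + 1, w)])
    thresh = scores.mean() + k * scores.std()
    return [(i * window_s, (i + 1) * window_s)
            for i, s in enumerate(scores) if s > thresh]
```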
- The Emotion in Motion Experiment: Using an Interactive Installation as a Means for Understanding Emotional Response to Music. Jaimovich, Javier; Ortiz, Miguel A.; Coghlan, Niall; Knapp, R. Benjamin (NIME, 2012). In order to further understand our emotional reaction to music, a museum-based installation was designed to collect physiological and self-report data from people listening to music. This demo describes the technical implementation of this installation as a tool for collecting large samples of data in public spaces. The Emotion in Motion terminal is built upon a standard desktop computer running Max/MSP, using sensors that measure physiological indicators of emotion connected to an Arduino. The terminal has been installed in museums and galleries in Europe and the USA, helping create the largest database of physiology and self-report data recorded while listening to music.
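The terminal itself runs Max/MSP; purely to illustrate the Arduino-to-host sensor link, here is a hypothetical Python reader (assuming pyserial and a newline-delimited "EDA,HR" frame format, which is not documented in the abstract):

```python
import serial  # pyserial

def sensor_frames(port="/dev/ttyACM0", baud=115200):
    """Yield (eda, heart_rate) pairs from an Arduino that streams one
    comma-separated sample pair per line (hypothetical frame format)."""
    with serial.Serial(port, baud, timeout=1.0) as ser:
        while True:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue              # timeout: no data this interval
            try:
                eda, hr = (float(v) for v in line.split(","))
            except ValueError:
                continue              # skip malformed frames
            yield eda, hr
```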
- Emotion in Motion: A Reimagined Framework for Biomusical/Emotional Interaction. Bortz, Brennon; Jaimovich, Javier; Knapp, R. Benjamin (NIME, 2015). Over the past four years, Emotion in Motion, a long-running experiment, has amassed the world's largest database of human physiology associated with emotion in response to the presentation of various selections of musical works. What began as a doctoral research study of participants in Dublin, Ireland, and New York City has grown to include over ten thousand emotional responses to musical experiences from participants across the world, with new installations in Norway, Singapore, the Philippines, and Taiwan. The most recent iteration of Emotion in Motion is currently underway in Taipei City, Taiwan. Preparation for this installation gave the authors an opportunity to reimagine the architecture of Emotion in Motion, allowing for a wider range of potential applications than were originally possible with the initial tools that drive the experiment. Now more than an experiment, Emotion in Motion is a framework for developing myriad emotional/musical/biomusical interactions with co-located or remote participants. This paper describes the development of this flexible, open-source framework and discusses its various components: hardware-agnostic sensor inputs, refined physiological signal processing tools, a public database of data collected during various instantiations of applications built on the framework, and the web application frontend and backend. We also discuss our ongoing work with this tool, and suggest other potential applications that readers might realize using Emotion in Motion.
- An Evaluation Method for Thinking in Technology Ecologies. Chu Yew Yee, Sharon L. (Virginia Tech, 2013-12-09). As technology progresses, we become surrounded by an ever-increasing number of devices. Information can now be persistently represented beyond a single screen and a single session. In the educational context, we see rapid adoption of this panoply of devices, but often without careful thought. Devices in isolation are unlikely to enable effective learning. This research explores how devices function in technological display and device ecologies, or ecosystems, to support human thinking, learning and sensemaking. Based on the theory of Vygotsky's sign mediation triangle, we contribute a method for evaluating how technology configurations support (or hinder) students' thinking. Our method proposes the concept of objectification as a way to identify the potential or opportunity for learning in technology ecologies. The significance of such an evaluation methodology is considerable, given the nascent field of sensemaking and the lack of consensus on evaluation in such contexts: our research advances a principled approach by which device ecologies can be examined for their potential to provide 'learning experiences', and enables one to articulate affordances for the design of technological spatial environments that can help to support higher thought. Our contribution is thus in terms of methodology, theory, evaluation and the design of technology ecologies.
- Exploring Electronic Storyboards as Interdisciplinary Design Tools for Pervasive Computing. Forsyth, Jason Brinkley (Virginia Tech, 2015-06-09). Pervasive computing proposes a new paradigm for human-computer interaction. By embedding computation, sensing, and networking into our daily environments, new computing systems can be developed that become helpful, supportive, and invisible elements of our lives. This tight proximity between the human and computational worlds poses challenges for the design of these systems: what disciplines should be involved in their design, and what tools and processes should they follow? We address these issues by advocating for interdisciplinary design of pervasive computing systems. Based upon our experiences teaching courses in interactive architecture, product design, and physical computing, and through surveys of existing literature, we examine the challenges faced by interdisciplinary teams when developing pervasive computing systems. We find that teams lack accessible prototyping tools to express their design ideas across domains. To address this issue we propose a new software-based design tool called electronic storyboards. We implement electronic storyboards by developing a domain-specific modeling language in the Eclipse Graphical Editor Framework. The key insight of electronic storyboards is to balance the tension between the ambiguity of drawn storyboards and the requirements of implementing computing systems. We implement a set of user-applied tags, perform layout analysis on the storyboard, and utilize natural language processing to extract behavioral information from the storyboard in the form of a timed automaton. This behavioral information is then transformed into design artifacts such as state charts, textual descriptions, and source code. To evaluate the potential impact of electronic storyboards on interdisciplinary design teams we developed a user study based around "boundary objects", objects frequently used within computer-supported collaborative work to examine how artifacts mediate interactions between individuals. Teams of computing and non-computing participants were asked to storyboard pervasive computing systems, and their storyboards were evaluated using a prototype electronic storyboarding tool. The study examines how teams use traditional storyboarding, tagging, tool queries, and generated artifacts to express design ideas and iterate upon their designs. From this study we develop new recommendations for future tools in architecture and fashion design based upon electronic storyboarding principles. Overall, this study contributes to the expanding knowledge base of pervasive computing design tools. As an emerging discipline, standardized tools and platforms have yet to be developed. Electronic storyboards offer a way to describe pervasive computing systems across application domains and in a manner accessible to multiple disciplines.
- Exploring the Usability of Non-verbal Vocal Interaction (NVVI) and a Pitch-Based Implementation. Williams, Samuel (Virginia Tech, 2023-12-15). Natural user interfaces, including verbal vocal interactions like speech processing, are ubiquitous and commonly used in both industry and academic settings. However, this field is limited by its speech and language components. Non-verbal vocal interaction (NVVI) provides further opportunities for people to use their voices as an input modality. Despite the many possible NVVI input modalities, such as whistling, humming, and tongue clicking, the field is niche and the literature is sparse. This work attempts to address these gaps, as well as the small sample sizes of prior studies. The problem is defined as performing a large-scale study exploring a pitch-based NVVI modality that uses a relative-pitch interaction technique to offer a continuous mode of one-dimensional interaction. A user study is outlined and performed via an ecosystem comprising Amazon Mechanical Turk for recruitment and study access, a modularized study website, and a secure server that stores the study results. The study tasks users with controlling a slider with the NVVI technique by humming and whistling, and with the computer mouse as a baseline. In total, 72 participants' results are considered for analysis. Results show that the pitch-based NVVI technique used in this study does not follow Fitts' law, is not as performant as the computer mouse, that humming is a more performant modality with the NVVI technique than whistling, and that participants experienced a significantly higher task workload using the NVVI technique than the computer mouse. Using the results of this study and the reviewed literature, an NVVI framework is developed and implemented as a contribution of this work.
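The thesis's exact relative-pitch mapping is not given in the abstract; a minimal sketch of one plausible form, in which the hummed or whistled pitch's semitone offset from an anchor pitch (captured when voicing began) sets the slider's velocity. The gain, time step, and the velocity-control design itself are assumptions:

```python
import math

def pitch_to_slider(f0_hz, anchor_hz, pos, gain=0.05, dt=0.02):
    """Advance a 0-1 slider by a velocity proportional to the semitone
    offset of the current pitch from the anchor pitch. Unvoiced frames
    (f0_hz is None) hold the slider in place."""
    if f0_hz is None:
        return pos
    semitones = 12.0 * math.log2(f0_hz / anchor_hz)
    return min(max(pos + gain * semitones * dt, 0.0), 1.0)
```

Semitone offset, rather than raw frequency difference, keeps the control uniform across vocal registers.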
- Feasibility Study in Development of a Wearable Device to Enable Emotion Regulation in Children with Autism Spectrum Disorder. Hora, Manpreet Kaur (Virginia Tech, 2014-09-17). Autism spectrum disorder (ASD) is a group of developmental disabilities characterized by impairments in social interaction and communication and by difficulties in emotion recognition and regulation. There is currently no cure for autism, but psychosocial interventions and medical treatments exist. However, very few of them have been trialed on young children, and others pose limitations. Strengthening young children's capacity to manage their emotions is important for academic success. It is therefore important to design and test the feasibility of an appropriate methodology that can teach emotion regulation to young children (ages 3-6 years) with ASD. This thesis addresses the problem by proposing a novel framework that integrates physiology with Cognitive Behavior Theory to enable emotion regulation in the target population by exposing them to real-time stressful situations. The framework uses a feedback loop that measures the participant's physiology, estimates the level of stress being experienced, and provides audio feedback. The feasibility of the individual building blocks of the framework was tested in pilot studies with nine typically developing children (ages 3-6 years). The attention-capturing capacity of different audio representations was tested, and a stress-profile generating system was designed and developed to map the measured physiology of the participant onto a relative stress level. 33 out of 43 instances of audio representations proved successful in capturing the participants' attention, and the stress profiles were capable of distinguishing between stressed and relaxed states of the participants with an average accuracy of 83%.
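The stress-profile system is described only at a high level; a toy sketch of mapping measured physiology onto a relative stress state, where the choice of signals, the equal weighting, the z-scoring against a relaxed baseline, and the threshold are all illustrative assumptions:

```python
def stress_state(hr_bpm, eda_us, baseline, threshold=1.0):
    """Combine heart rate and EDA as z-scores against a per-child
    relaxed baseline, then threshold the averaged score into a
    relaxed/stressed label."""
    z_hr = (hr_bpm - baseline["hr_mean"]) / baseline["hr_std"]
    z_eda = (eda_us - baseline["eda_mean"]) / baseline["eda_std"]
    score = 0.5 * z_hr + 0.5 * z_eda
    return "stressed" if score > threshold else "relaxed"
```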
- GPU Based Large Scale Multi-Agent Crowd Simulation and Path Planning. Gusukuma, Luke (Virginia Tech, 2015-04-26). Crowd simulation is used in many applications, including (but not limited to) video games, building planning, training simulators, and various virtual environment applications. Crowd simulation is most useful when real-life practice wouldn't be practical, such as repeatedly evacuating a building, testing crowd flow for various building blueprints, or placing law enforcers in crowd suppression scenarios. In our work, we approach the fidelity-versus-scalability problem of crowd simulation from two angles, programmability and scalability, by creating a new methodology that builds on a struct-of-arrays approach and transforms it into an object-oriented struct-of-arrays approach. While the design pattern itself is applied to crowd simulation in our work, this application exemplifies the variety of uses for which the pattern is suited.
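The thesis's object-oriented struct-of-arrays pattern targets GPUs; purely to illustrate the underlying layout idea, here is a minimal struct-of-arrays agent store in numpy, where each field is one contiguous array so the per-step update vectorizes (the steering rule is a placeholder, not the thesis's path planner):

```python
import numpy as np

class Crowd:
    """Struct-of-arrays agent storage: one contiguous array per field
    rather than one object per agent, so updates vectorize (and map
    naturally onto GPU kernels)."""
    def __init__(self, n):
        self.pos = np.zeros((n, 2))    # agent positions
        self.vel = np.zeros((n, 2))    # agent velocities
        self.goal = np.zeros((n, 2))   # per-agent navigation targets

    def step(self, dt, speed=1.4):
        """Steer every agent toward its goal in one vectorized update."""
        to_goal = self.goal - self.pos
        dist = np.linalg.norm(to_goal, axis=1, keepdims=True)
        self.vel = speed * to_goal / np.maximum(dist, 1e-9)
        self.pos += self.vel * dt
```

The contrast is with an array-of-structs layout (a Python list of Agent objects), where the same update would loop agent by agent and scatter memory accesses.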
- Indoor Human Sensing for Human Building Interaction. Ma, Nuo (Virginia Tech, 2020-06-15). We inhabit space. This means our deepest mental and emotional understanding of the world is tied intimately to our experiences as we perceive them in a physical context. Just as a book or film may induce a sense of presence, so too may our modern sensor-drenched infrastructures and mobile information spaces. With the recent development of personal and ubiquitous computing devices that we always carry with us, and the increased connectivity and robustness of wireless connections, there are ever-tighter ties between people and the things around them, including the spaces people inhabit. However, such enhanced experiences are usually limited to a personal environment with a personal smartphone as the central device. We would like to bring such technology-enhanced experiences to large public spaces with many occupants, where their movement patterns and interactions can be shared, recorded, and studied in order to improve the occupants' efficiency and satisfaction. Specifically, we use sensor networks and ubiquitous computing to create smart built environments that are seamlessly aware of and responsive to their occupants. Human sensing systems are among the key enabling technologies for smart built environments. We present our research findings related to the design and deployment of an indoor human sensing system in large public built spaces, and use a case study to illustrate the challenges, opportunities, and lessons for the emerging field of human building interaction. We present several fundamental design trade-offs, applications, and performance measures for the case study.
- Inner-Active Art: An Examination of Aesthetic and Mapping Issues in Physiologically Based Artworks. Coghlan, Niall; Knapp, R. Benjamin (ISEA, 2009). Much art seeks to describe or stimulate the feelings and emotions of the viewer, through both abstract and literal representation. With the exponential increase in computing power over recent years, we also seek new ways of interacting with technology and exploring the virtual world. Physiological signals from the human body provide us with a view into the autonomic nervous system, that part of the nervous system largely unmediated by the direct intentions of the viewer. With the appropriate choice of signals and processing, we can even develop systems with the ability to interact with us on an emotional level: machines that know how we feel and can react accordingly (Haag et al., 2004). This gives us the ability to see into and map the interior worlds of artists and viewers through a direct and visceral connection, the human body itself. A key issue in the development of physiologically based artwork is making the observer-artwork dialogue meaningful to the observer, a question of translating the input bio-signals into visual, auditory or experiential events. We have yet to develop a suitable language for this dialogue, and so this paper explores some potential mappings for bio-signal art, illustrated using several case studies from past and current works (Knapp et al., 2008; Gonsalves, 2009). We also examine some of the other philosophical and artistic issues involved in 'affective' and bio-art, such as monitoring emotion vs. engendering emotion, the involvement of the observer in creating and contributing to bio-signal art, and strategies for effectively developing such works.
- The Integral Music Controller: Introducing a Direct Emotional Interface to Gestural Control of Sound Synthesis. Knapp, R. Benjamin; Cook, Perry R. (Michigan Publishing, 2005-09). This paper describes the concept of the integral music controller (IMC), a controller that combines a gestural interface with direct emotional control of a digital musical instrument. This new controller enables the performer to move smoothly between direct physical interaction with an acoustical musical instrument and gestural/emotional control of the instrument's physical model. The use of physiological signals to determine gesture and emotion is an important component of the IMC. The design of a wireless IMC using physiological signals is described, and possible mappings to sound synthesis parameters are explored. Controlling higher-level musical systems such as conducting and style modelling is also proposed.
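The paper explores mappings from physiological signals to synthesis parameters without the abstract fixing any single one; here is a hypothetical sketch of an IMC-style mapping in which a gesture feature drives a primary parameter while an arousal estimate scales an expressive one (both parameter names are invented for illustration):

```python
def imc_mapping(gesture_accel_ms2, arousal):
    """Map a physical gesture feature (acceleration, m/s^2) and an
    emotion estimate (e.g. EDA-derived arousal in 0-1) onto two
    synthesis parameters. Parameter names are hypothetical."""
    clamp = lambda x: min(max(x, 0.0), 1.0)
    return {
        "bow_pressure": clamp(gesture_accel_ms2 / 9.8),   # gesture channel
        "vibrato_depth": clamp(0.2 + 0.8 * arousal),      # emotion channel
    }
```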