Supporting User Interactions with Smart Built Environments
Before the recent advances in sensing, actuation, computing, and communication technologies, integration between the digital and physical environments was limited.
Humans linked the two worlds by collecting data about the physical environment and feeding it into the digital environment, and by changing the state of the physical environment based on the state of the digital environment.
The incorporation of computing, communication, sensing, and actuation technologies into everyday physical objects has empowered the vision of the Internet of Things (IoT). Things can autonomously collect data about the physical environment, exchange information with other things, and take actions on behalf of humans. Application domains that can benefit from IoT include smart buildings, smart cities, smart water, smart agriculture, smart animal farming, smart metering, security and emergencies, retail, logistics, industrial control, and health care.
For decades, building automation, intelligent buildings, and, more recently, smart buildings have received considerable attention in both academia and industry. We use the term smart built environments (SBE) to describe smart, intelligent, physical, built, architectural spaces ranging from a single room to a whole city. Legacy SBEs were often closed systems operating with their own standards and custom protocols. SBEs have since evolved into Internet-connected systems that leverage Internet technologies and services (e.g., cloud services) to unleash new capabilities. IoT-enabled SBEs, as one of the various applications of the IoT, can significantly change the way we experience our homes and workplaces and make interacting with technology almost inevitable. This can provide several benefits to modern society and help make our lives easier. Meanwhile, security, privacy, and safety concerns should be addressed appropriately.
Unlike traditional computing devices, things usually have no or limited input/output (I/O) capabilities. Leveraging the ubiquity of general-purpose computing devices (e.g., smartphones), thing vendors usually provide interfaces for their products in the form of mobile apps or web-based portals. Interacting with different things using different mobile apps or web-based portals does not scale well. Requiring the user to switch between tens or hundreds of mobile apps and web-based portals to interact with different things in different smart spaces may not be feasible. Moreover, it can be difficult for non-domestic users (e.g., visitors) of a given SBE to figure out, without guidance, which mobile apps or web-based portals they need to use to interact with their surroundings.
While there has been considerable research effort to address a variety of challenges associated with thing-to-thing interaction, research on human-to-thing interaction is limited. Many of the proposed approaches and industry-adopted techniques rely on traditional, well-understood, and widely used Human-Computer Interaction (HCI) methods and techniques to support interaction between humans and things. Such techniques mostly originated in a world of desktop computers equipped with a screen, mouse, and keyboard. However, SBEs introduce a radically different interaction context in which there are no centralized, easily identifiable input and output devices. The desktop computer of the past is being replaced with the whole SBE. Depending on the task at hand and personal preferences, a user may prefer one interaction modality over another. For instance, turning lights on or off using an app may be more cumbersome or time-consuming than using a simple physical switch.
This research focuses on leveraging the recent advances in IoT and related technologies to support user interactions with SBEs. We explore how to support flexible and adaptive multimodal interfaces and interactions while providing a consistent user experience in an SBE based on the current context and the available user interface and interaction capabilities.