Browsing by Author "Gracanin, Denis"
Now showing 1 - 20 of 90
- Access to Autism Spectrum Disorder Services for Rural Appalachian Citizens
  Scarpa, Angela; Jensen, Laura S.; Gracanin, Denis; Ramey, Sharon L.; Dahiya, Angela V.; Ingram, L. Maria; Albright, Jordan; Gatto, Alyssa J.; Scott, Jen Pollard; Ruble, Lisa (2020-01)
  Background: Low-resource rural communities face significant challenges regarding availability and adequacy of evidence-based services. Purposes: With respect to accessing evidence-based services for Autism Spectrum Disorder (ASD), this brief report summarizes needs of rural citizens in the South-Central Appalachian region, an area notable for persistent health disparities. Methods: A mixed-methods approach was used to collect quantitative and qualitative data during focus groups with 33 service providers and 15 caregivers of children with ASD in rural southwest Virginia. Results: Results supported the barriers of availability and affordability of ASD services in this region, especially relating to the need for more ASD-trained providers, better coordination and navigation of services, and addition of programs to assist with family financial and emotional stressors. Results also suggested cultural attitudes related to autonomy and trust towards outside professionals that may prevent families from engaging in treatment. Implications: Relevant policy recommendations are discussed related to provider incentives, insurance coverage, and telehealth. Integration of autism services into already existing systems and multicultural sensitivity of providers are also implicated.
- Analysis of the Relationships between Changes in Distributed System Behavior and Group Dynamics
  Lazem, Shaimaa (Virginia Tech, 2012-04-06)
  The rapid evolution of portable devices and social media has enabled pervasive forms of distributed cooperation. A group could perform a task using a heterogeneous set of devices (desktop, mobile), connections (wireless, wired, 3G) and software clients. We call this class of systems Distributed Dynamic Cooperative Environments (DDCEs). Content in DDCEs is created and shared by the users. The content could be static (e.g., video or audio), dynamic (e.g., wikis), and/or Objects with behavior. Objects with behavior are programmed objects that take advantage of the available computational services (e.g., cloud-based services). Providing a desired Quality of Experience (QoE) in DDCEs is a challenge for cooperative systems designers. DDCEs are expected to provide groups with the utmost flexibility in conducting their cooperative activities. More flexibility at the user side means less control and predictability of the groups' behavior at the system side. Due to the lack of Quality of Service (QoS) guarantees in DDCEs, groups may experience changes in the system behavior that are usually manifested as delays and inconsistencies in the shared state. We question the extent to which cooperation among group members is sensitive to system changes in DDCEs. We argue that a QoE definition for groups should account for cooperation emergence and sustainability. An experiment was conducted where fifteen groups performed a loosely-coupled task that simulates social traps in a 3D virtual world. The groups were exposed to two forms of system delays. Exo-content delays are exogenous to the provided content (e.g., network delay). Endo-content delays are endogenous to the provided content (e.g., delay in processing time for Objects with behavior). Groups' performance in the experiment and their verbal communication were recorded and analyzed. The results demonstrate the nonlinearity of groups' behavior when dealing with endo-content delays. System interventions are needed to maintain QoE even though that may increase the cost or the required resources. Systems are designed to be used rather than understood by users. When the system behavior changes, designers have two choices. The first is to expect the users to understand the system behavior and adjust their interaction accordingly. That did not happen in our experiment. Understanding the system behavior informed groups' behavior; it partially influenced how the groups succeeded or failed in accomplishing their goal. The second choice is to understand the semantics of the application and provide guarantees based on these semantics. Based on our results, we introduce the following design guidelines for QoE provision in DDCEs.
  • If possible, the system should keep track of information about group goals and add guarding constraints to protect these goals.
  • QoE guarantees should be provided based on the semantics of the user-generated content that constitutes the group activity.
  • Users should be given the option to define the content that is sensitive to system changes (e.g., Objects with behavior that are sensitive to delays or require intensive computations) to avoid the negative impacts of endo-content delays.
  • Users should define the Objects with behavior that contribute to the shared state in order for the system to maintain the consistency of the shared state.
  • Endo-content delays were proven to have significantly more negative impacts on the groups in our experiment than exo-content delays. We argue that system designers, if they have the choice, should trade processing time needed for Objects with behavior for exo-content delay.
- Applying Dynamic Software Updates to Computationally-Intensive Applications
  Kim, Dong Kwan (Virginia Tech, 2009-06-22)
  Dynamic software updates change the code of a computer program while it runs, thus saving the programmer's time and using computing resources more productively. This dissertation establishes the value of and recommends practices for applying dynamic software updates to computationally-intensive applications—a computing domain characterized by long-running computations, expensive computing resources, and a tedious deployment process. This dissertation argues that updating computationally-intensive applications dynamically can reduce their time-to-discovery metrics—the total time it takes from posing a problem to arriving at a solution—and, as such, should become an intrinsic part of their software lifecycle. To support this claim, this dissertation presents the following technical contributions: (1) a distributed consistency algorithm for synchronizing dynamic software updates in a parallel HPC application, (2) an implementation of the Proxy design pattern that is more efficient than the existing implementations, and (3) a dynamic update approach for Java Virtual Machine (JVM)-based applications using the Proxy pattern to offer flexibility and efficiency advantages, making it suitable for computationally-intensive applications. The contributions of this dissertation are validated through performance benchmarks and case studies involving computationally-intensive applications from the bioinformatics and molecular dynamics simulation domains.
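The dissertation's update mechanism targets JVM-based applications; as a rough, language-agnostic illustration of how the Proxy pattern can enable swapping an implementation while a long-running computation proceeds, here is a minimal Python sketch. The class and method names are hypothetical, and this is not the dissertation's consistency algorithm or JVM machinery.

```python
# Hypothetical sketch: a proxy forwards calls to a delegate that can be
# swapped mid-run, which is the essence of using the Proxy pattern for
# dynamic software updates in a long-running computation.

class EnergyKernelV1:
    def step(self, state):
        return state + 1          # original numerical kernel

class EnergyKernelV2:
    def step(self, state):
        return state + 2          # "updated" kernel deployed while the run continues

class KernelProxy:
    """Clients hold the proxy; the real implementation behind it can change."""
    def __init__(self, impl):
        self._impl = impl

    def update(self, new_impl):
        # A real system would synchronize this with a consistency algorithm;
        # here we simply swap the reference at a safe point.
        self._impl = new_impl

    def step(self, state):
        return self._impl.step(state)

if __name__ == "__main__":
    kernel = KernelProxy(EnergyKernelV1())
    state = 0
    for iteration in range(10):
        if iteration == 5:                  # an update arrives mid-computation
            kernel.update(EnergyKernelV2())
        state = kernel.step(state)
    print(state)   # 5 * 1 + 5 * 2 = 15, computed without restarting the loop
```

The point of the indirection is that client code never references the concrete kernel class, so the computation does not have to be stopped and redeployed to pick up the new version.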
- An Approach to Real Time Adaptive Decision Making in Dynamic Distributed Systems
  Adams, Kevin Page (Virginia Tech, 2005-12-12)
  Efficient operation of a dynamic system requires (near) optimal real-time control decisions. Those decisions depend on a set of control parameters that change over time. Very often, the optimal decision can be made only with the knowledge of future values of control parameters. As a consequence, the decision process is heuristic in nature. The optimal decision can be determined only after the fact, once the uncertainty is removed. For some types of dynamic systems, the heuristic approach can be very effective. The basic premise is that the future values of control parameters can be predicted with sufficient accuracy. We can either predict those values based on a good model of the system or based on historical data. In many cases, a good model is not available. In that case, prediction using historical data is the only option. It is necessary to detect similarities with the current situation and extrapolate future values. In other words, we need to (quickly) identify patterns in historical data that match the current data pattern. Low sensitivity of the optimal solution is critical: small variations in data patterns should have minimal effect on the optimal solution. Resource allocation problems and other "discrete decision systems" are good examples of such systems. The main contribution of this work is a novel heuristic methodology that uses neural networks for classifying, learning and detecting changing patterns, as well as making (near) real-time decisions. We improve on existing approaches by providing a real-time adaptive approach that takes into account changes in system behavior with minimal operational delay, without the need for an accurate model. The methodology is validated by extensive simulation and practical measurements. Two metrics are proposed to quantify the quality of control decisions as well as to compare against the optimal solution.
- Automatic Visualization of the Version History of a Software System in Three Dimensions
  Asokan, Ramya (Virginia Tech, 2003-09-21)
  Software changes constantly and continuously. It is often beneficial to record the progressive changes made to software, so that when any problems arise, it is possible to identify the change that might have caused the problem. Also, recording these changes enables recovery of the software as it was at any point in time. A version control system is used to track modifications to software. Version control systems (VCS) display when and where a change was made. In the case of multiple developers working on the same software system, version control systems also record which developer was responsible for the change. RCS, SCCS and CVS are examples of such version control systems, and they usually have a command-line interface. The widespread use of CVS has, however, given rise to a host of "CVS clients", which provide a two-dimensional graphical interface to CVS. While working with a version control system in two dimensions is a definite improvement over traditional command-line interfaces, it is still not sufficient to display all the necessary information in a single view. Using three dimensions to display the information from a version control system like CVS is an effective and efficient way to represent multiple attributes in a single view. There are many advantages to using a third dimension for visualizing the version history and evolution of software. A three-dimensional visualization tool has been developed to provide insights into the structure and characteristics of the history of a software system. It demonstrates the benefits of three-dimensional visualization and illustrates a framework that can be used to automatically derive information from a version control system.
- A Bidirectional Pipeline for Semantic Interaction in Visual Analytics
  Binford, Adam Quarles (Virginia Tech, 2016-09-21)
  Semantic interaction in visual data analytics allows users to indirectly adjust model parameters by directly manipulating the output of the models. This is accomplished using an underlying bidirectional pipeline that first uses statistical models to visualize the raw data. When a user interacts with the visualization, the interaction is automatically interpreted into updates to the model parameters, giving the user immediate feedback on each interaction. These interpreted interactions eliminate the need for a deep understanding of the underlying statistical models. However, the development of such tools is necessarily complex due to their interactive nature. Furthermore, each tool defines its own unique pipeline to suit its needs, which makes it difficult to experiment with different types of data, models, interaction techniques, and visual encodings. To address this issue, we present a flexible multi-model bidirectional pipeline for prototyping visual analytics tools that rely on semantic interaction. The pipeline has plug-and-play functionality, enabling quick alterations to the type of data being visualized, how models transform the data, and interaction methods. In so doing, the pipeline enforces a separation between the data pipeline and the visualization, preventing the two from becoming codependent. To show the flexibility of the pipeline, we demonstrate a new visual analytics tool and several distinct variations, each of which was quickly and easily implemented with slight changes to the pipeline or client.
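As a rough illustration of what a "forward" (model to visualization) and "inverse" (interaction to model) pass can look like, the following Python sketch projects data with dimension-weighted distances and adjusts the weights when a user drags two points together. The specific model (weighted distances plus MDS) and the update rule are assumptions for illustration, not the pipeline described in the thesis.

```python
# Hypothetical sketch of a minimal bidirectional semantic-interaction loop.
import numpy as np
from sklearn.manifold import MDS

def forward(data, weights):
    """Model -> visualization: dimension-weighted distances projected to 2-D."""
    diffs = data[:, None, :] - data[None, :, :]
    dists = np.sqrt(((diffs ** 2) * weights).sum(axis=-1))
    return MDS(n_components=2, dissimilarity="precomputed",
               random_state=0).fit_transform(dists)

def inverse(data, weights, i, j):
    """Interaction -> model: the user dragged points i and j together, so
    upweight the dimensions on which those points already agree."""
    agreement = 1.0 / (1e-6 + np.abs(data[i] - data[j]))
    weights = weights + 0.5 * agreement / agreement.sum()
    return weights / weights.sum()

data = np.random.rand(10, 5)            # toy high-dimensional data
weights = np.full(5, 1.0 / 5)
layout = forward(data, weights)         # initial view
weights = inverse(data, weights, 0, 3)  # user drags point 0 toward point 3
layout = forward(data, weights)         # updated view reflecting the interaction
```

The plug-and-play idea in the thesis corresponds to being able to swap out either `forward` or `inverse` without touching the other side of the loop.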
- Brain Computer Interfaces and ASD Treatment
  Gracanin, Denis (2012-10-12)
  Denis Gracanin, from the computer science department, proposed research on the use of brain computer interface devices to help with social and health aspects of autism spectrum disorders, with the goal of developing a testbed framework and guidelines for testing and usability of intervention tools.
- Calibrating Video Capture Systems To Aid Automated Analysis And Expert Rating Of Human Movement Performance
  Yeshala, Sai Krishna (Virginia Tech, 2022-06-27)
  We propose a methodology for calibrating the activity space and the cameras involved in video capture systems for upper extremity stroke rehabilitation. We discuss an in-home stroke rehabilitation system called the Semi-Automated Rehabilitation At Home System (SARAH) and a clinic-based system called the Action Research Arm Test (ARAT) developed by the Interactive Neuro-Rehabilitation Lab (INR) at Virginia Tech. We propose a calibration workflow for achieving invariant video capture across multiple therapy sessions. This ensures that the captured data is less noisy and that the Computer Vision algorithms analyzing the captured data have prior knowledge of the captured activity space and the patient's location in the video frames. Such a standardized calibration approach improved machine learning analysis of patient movements and produced a higher rate of agreement across multiple therapists regarding the captured patient performance. We further propose a multi-camera calibration approach to perform stereo camera calibration in the SARAH and ARAT capture systems, to help perform a 3D reconstruction of the activity space from 2D videos. The importance of the proposed activity space and camera calibration workflows, including new research paths opened as a result of our approach, is discussed in this thesis.
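As a rough sketch of the kind of stereo calibration step described above, the following Python/OpenCV snippet calibrates two cameras from checkerboard images and then estimates their relative pose, which is the prerequisite for 3D reconstruction from 2D videos. The checkerboard size and file names are assumptions; this is standard OpenCV usage, not the SARAH/ARAT workflow itself.

```python
# Hypothetical sketch: per-camera checkerboard calibration followed by
# stereo calibration to recover the rotation/translation between cameras.
import cv2
import numpy as np
import glob

PATTERN = (9, 6)  # inner checkerboard corners (assumed)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_pts, img_pts_l, img_pts_r = [], [], []
# Assumed file naming for synchronized left/right frames of the checkerboard.
for fl, fr in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    gl = cv2.cvtColor(cv2.imread(fl), cv2.COLOR_BGR2GRAY)
    gr = cv2.cvtColor(cv2.imread(fr), cv2.COLOR_BGR2GRAY)
    ok_l, corners_l = cv2.findChessboardCorners(gl, PATTERN)
    ok_r, corners_r = cv2.findChessboardCorners(gr, PATTERN)
    if ok_l and ok_r:
        obj_pts.append(objp)
        img_pts_l.append(corners_l)
        img_pts_r.append(corners_r)

size = gl.shape[::-1]
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r, size, None, None)

# Fix each camera's intrinsics and solve only for the relative pose (R, T).
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, img_pts_l, img_pts_r, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print("relative translation between cameras:", T.ravel())
```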
- CandyFactory: Cloud-Based Educational Game for Teaching Fractions
  Ying, Tiancheng (Virginia Tech, 2019-06-17)
  Nowadays cross-platform software development is more expensive than ever before in terms of time and effort. Meanwhile, with an increasing number of personal devices, it is harder for local applications to synchronize and connect to the Internet. Educational games can be divided into "local educational games" and "web educational games." "Local games" are applications on tablets, mobile devices or PCs built for the corresponding platform. Such games mostly have no backend support and no cross-platform features, as with the iPad version of CandyFactory. If a developer wants one specific game to run on both iPad and Android tablets, they need to develop two applications on the corresponding development frameworks, which is time and effort consuming. "Web games" are games on websites, which are cross-platform but have no backend support; usually they are pure JavaScript or Flash games with no backend recording performances and achievements. Software development for each individual platform is time and effort consuming. To enable cross-platform development, many programming languages and platforms like Java, Python, and the JVM have appeared. Among all the cross-platform approaches, cloud-based software development is the most universal solution to this problem. With web browsers built into every operating system, cloud software can be compatible with almost any device. Moreover, "Software-as-a-Service" (SaaS) is becoming a new software engineering paradigm, and cloud-based software development is increasingly popular because of its flexible scalability and cross-platform features. In this thesis, we create a cloud-based educational game, CandyFactory, based on the iPad version of CandyFactory, and add a backend to it to record user performance as well as achievements. Firstly, we re-develop the whole game from the iOS platform to the cloud-based Java EE platform. Secondly, we add new features to improve the game play, such as ruler functionality and achievements animation. Thirdly, we add backend support to CandyFactory, including user account creation, course creation and performance report generation. With this functionality, teachers can monitor their students' performance and generate course reports; they can also view a specific student's report in order to provide more specific and effective help. Lastly, with the advantages of cloud-based software development, we can update the whole application at any time without forcing the user to reinstall the update or re-download the game. With such hot updates, the cloud-based CandyFactory is highly maintainable. The cloud-based CandyFactory runs on any computer that supports a minimum 1024x768 screen resolution, including iPads, Android or Microsoft tablets, Windows or Mac laptops and desktops, and any other computer with a web browser. The advantages of cloud-based educational games over local and web educational games are: firstly, they are cross-platform; secondly, they have backend data collection support; thirdly, they are consistent, so even if users log in from different computers, their game record and history are always the same; lastly, the teacher can always keep track of his/her students' performance and provide more specific help and feedback.
- Change Management of Long Term Composed Services
  Liu, Xumin (Virginia Tech, 2009-07-28)
  We propose a framework for managing changes in Long Term Composed Services (LCSs). The key components of the proposed framework include a Web Service Change Management Language (SCML), change enactment, and change optimization. The SCML is a formal language to specify top-down changes. It is built upon a formal model which consists of a Web service ontology and an LCS schema. The Web service ontology gives a semantic description of the important features of a service, including functionality, quality, and context. The LCS schema gives a high-level overview of an LCS's key features. A top-down change is first specified as the modification of an LCS schema. Change enactment is the process of reacting to a top-down change. It consists of two subcomponents: change reaction and change verification. The change reaction component implements the proposed change operators by modifying an LCS schema and the membership of Web services. The change verification component ensures that the correctness of an LCS is maintained during the process of change reaction. We propose a set of algorithms for the processes of change reaction and verification. The change optimization component selects the Web services that participate in an LCS to ensure that the change has been reacted to in the best way. We propose a two-phase optimization process to select services using both service reputation and service quality. We present a change management system that implements the proposed approaches. We also conduct a set of simulations to assess the performance.
- A Cloud-Based Visual Simulation Environment for Traffic Networks
  Onder, Sait Tuna (Virginia Tech, 2018-06-19)
  Cloud-based Integrated Development Environments (IDEs) are highly complex systems compared to stand-alone IDEs that are installed on client devices. Today, visual simulation environments developed as services on the cloud can offer features similar to client-based IDEs thanks to advances in cloud technologies. However, most existing visual simulation tools are developed for client-based systems. Moving visual simulation environments to the cloud can provide better collaboration for simulation developers, easy access to the software, and less client hardware dependency. Proper guidance for the development of visual simulation tools can help researchers develop their tools as services on the cloud. This thesis presents a Cloud-based visuAl simulatioN enVironment for trAffic networkS (CANVAS), providing a framework that tackles the challenges of cloud-based visual simulation tools. CANVAS offers a set of tools for the composition and visualization of simulation models for the traffic network problem domain. CANVAS uses an asynchronous visualization protocol with efficient resource utilization on the server, enabling concurrent usage of the IDE. The simulation is executed on the server while the visualization is processed on the client device within web browsers, making execution-heavy simulations available to thin clients. The component-based architecture of CANVAS offers a fully decoupled system that provides easier development and maintenance, and it can be used for the development of other cloud-based visual simulation IDEs. The CANVAS design and asynchronous visualization protocol show that advanced visualization capabilities can be provided to the client without depending on the client hardware.
- Clustering Appliance Energy Consumption Data for Occupant Energy-Behavior Modeling
  Dongre, Poorvesh; Aldrees, Asma; Gracanin, Denis (ACM, 2021-11-17)
  Energy consumption varies significantly across buildings with similar functions and locations. Occupant behavior is one of the most significant sources of uncertainty related to energy consumption in buildings. A deeper understanding of occupant energy behavior can help in designing personalized behavior intervention strategies to save energy and predict energy consumption. This paper uses the Pecan Street dataset to cluster building occupants based on the energy they consume for each appliance in the household, and then develops load profiles for each of the clusters.
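A minimal sketch of the clustering step described above might look like the following, assuming a CSV of per-appliance energy use per household; the file name, column names, number of clusters, and preprocessing are assumptions rather than the paper's actual configuration.

```python
# Hypothetical sketch: group households by per-appliance consumption with
# k-means, then summarize each cluster as a mean load profile.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Assumed layout: one row per household, one column per appliance (kWh).
usage = pd.read_csv("pecan_street_appliance_kwh.csv", index_col="household_id")
features = StandardScaler().fit_transform(usage)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
usage["cluster"] = kmeans.labels_

# A simple load profile per cluster: mean appliance consumption of its members.
profiles = usage.groupby("cluster").mean()
print(profiles)
```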
- Comparative Study of Body Doubling in Extended Reality
  Annavarapu, Swetha (Virginia Tech, 2024-02-29)
  Body doubling is a mechanism that lets individuals work alongside someone on a monotonous task that they might not be able to focus on when working alone. The person they work alongside is called a body double. It could be considered similar to co-working, but it gives individuals the freedom to work on anything they want without feeling obligated to interact with the other person. This research aims to understand whether body doubling is helpful to users and how mixed reality body doubling can improve on the existing in-person and video-call based modes of body doubling. In this work, we recruited 40 participants for a between-groups comparative user study of four conditions: no body double, an in-person body double, a video-call based body double, and a mixed reality body double. Through these studies, we analyze whether body doubling is helpful and, if so, which mode participants are more inclined towards. The work also presents a few suggestions for future improvements.
- Context Sensitive Interaction Interoperability for Distributed Virtual Environments
  Ahmed, Hussein Mohammed (Virginia Tech, 2010-05-28)
  The number and types of input devices and related interaction technique types are growing rapidly. Innovative input devices such as game controllers are no longer used just for games, proprietary consoles and specific applications; they are also used in many distributed virtual environments, especially the so-called serious virtual environments. In this dissertation a distributed, service-based framework is presented to offer context-sensitive interaction interoperability that can support mapping between input devices and suitable application tasks given the attributes (device, applications, users, and interaction techniques) and the current user context, without negatively impacting the performance of large-scale distributed environments. The mapping is dynamic and context sensitive, taking into account the context dimensions of both the virtual and real planes. What device or device component to use, and how and when to use them, depends on the application, the task performed, the user and the overall context, including location and presence of other users. Another use of interaction interoperability is as a testbed for input devices and interaction techniques, making it possible to test reality-based interfaces and interaction techniques with legacy applications. The dissertation provides a description of how the framework provides these affordances and a discussion of motivations, goals and the addressed challenges. Several proof-of-concept implementations were developed, and an evaluation of the framework performance (in terms of system characteristics) demonstrates viability, scalability and negligible delays.
- Controlling Scalability in Distributed Virtual Environments
  Singh, Hermanpreet (Virginia Tech, 2013-05-01)
  A Distributed Virtual Environment (DVE) system provides a shared virtual environment where physically separated users can interact and collaborate over a computer network. More simultaneous DVE users could result in intolerable system performance degradation. We address the three major challenges to improving DVE scalability: effective DVE system performance measurement, understanding the controlling factors of system performance/quality, and determining the consequences of DVE system changes. We propose a DVE Scalability Engineering (DSE) process that addresses these three major challenges for DVE design. DSE allows us to identify, evaluate, and leverage trade-offs among DVE resources, the DVE software, and the virtual environment. DSE has three stages. First, we show how to simulate different numbers and types of users on DVE resources. Collected user study data is used to identify representative user types. Second, we describe a modeling method to discover the major trade-offs between quality of service and DVE resource usage. The method makes use of a new instrumentation tool called ppt, which collects atomic blocks of developer-selected instrumentation at high rates and saves them for offline analysis. Finally, we integrate our load simulation and modeling method into a single process to explore the effects of changes in DVE resources. We use the simple Asteroids DVE as a minimal case study to describe the DSE process. The larger, commercial Torque and Quake III DVE systems provide realistic case studies and demonstrate DSE usage. The Torque case study shows the impact of many users on a DVE system; we apply the DSE process to significantly enhance the Quality of Experience given the available DVE resources. The Quake III case study shows how to identify the DVE network needs and evaluate network characteristics when using a mobile phone platform, and we analyze the trade-offs between power consumption and quality of service. The case studies demonstrate the applicability of DSE for discovering and leveraging trade-offs between Quality of Experience and DVE resource usage. Each of the three stages can be used individually to improve DVE performance, and the full DSE process enables fast and effective DVE performance improvement.
- Cross-layer Control for Adaptive Video Streaming over Wireless Access Networks
  Abdallah AbouSheaisha, Abdallah Sabry (Virginia Tech, 2016-03-17)
  Over the last decade, the wide deployment of wireless access technologies (e.g., WiFi, 3G, and LTE) and the remarkable growth in the volume of streaming video content have significantly altered the telecommunications field. These developments introduce new challenges to the research community, including the need to develop new solutions (e.g., traffic models and transport protocols) that address changing traffic patterns and the characteristics of wireless links, and the need for new evaluation methods that generate higher-fidelity results under more realistic scenarios. Unfortunately, for the last two decades, simulation studies have been the main tool for researchers in wireless networks. In spite of the advantages of simulation studies, overall they have had a negative influence on the credibility of published results. In partial response to this simulation crisis, the research community has adopted testing and evaluation using implementation-based experiments, which include field experiments, prototypes, emulations, and testbeds. An example of an implementation-based experiment is the MANIAC Challenge, a wireless networking competition that we designed and hosted, which included the creation and operation of ad hoc networks using commodity hardware. However, the lack of software tools to facilitate these sorts of experiments has created new challenges. Currently, researchers must practice kernel programming in order to implement networking experiments, and there is an urgent need to lower the barriers to entry for wireless network experimentation. With respect to the growth in video traffic over wireless networks, the main challenge is a mismatch between the design concepts of current Internet protocols (e.g., the Transmission Control Protocol (TCP)) and the reality of modern wireless networks and streaming video techniques. Internet protocols were designed to be deployed over wired networks and often perform poorly over wireless links; video encoding is highly loss-tolerant and delay-constrained and yet, for reasons of expedience, is carried using protocols that emphasize reliable delivery at the cost of potentially high delay. This dissertation addresses the lack of software tools to support implementation-based networking experiments and the need to improve the performance of video streaming over wireless access networks. We propose a new software tool that allows researchers to implement experiments without needing to become kernel programmers. The new tool, called the Flexible Internetwork Stack (FINS) Framework, is available under an open source license. With our tool, researchers can implement new network layers, protocols, and algorithms, and redesign the interconnections between the protocols. It offers logging and monitoring capabilities as well as dynamic reconfigurability of the modules' attributes and interconnections during runtime. We present details regarding the architecture, design, and implementation of the FINS Framework and provide an assessment of the framework, including both qualitative and quantitative comparison with significant previous tools. We also address the problem of HTTP-based adaptive video streaming (HAVS) over WiFi access networks, focusing on the negative influence of wireless last-hop connections on network utilization and the end-user quality of experience (QoE). We use a cross-layer approach to design three controllers. The first and second controllers adopt a heuristic cross-layer design, while the third controller formulates the HAVS problem as a Markov decision process (MDP). By solving the model using reinforcement learning, we achieved a 20% performance improvement (after enough training) over the best heuristic controller under unstable channel conditions. Our simulation results are backed by a system prototype using the FINS Framework. Although it may seem predictable that cross-layer design yields gains in performance and QoE, this dissertation not only presents a new technique that improves performance but also suggests that it is time to move cross-layer and machine-learning-based approaches from the research field to actual deployment, and to move cognitive network techniques from the simulation environment to real-world implementations.
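To make the MDP/reinforcement-learning formulation concrete, here is a toy tabular Q-learning sketch in Python that picks a video bitrate from a coarse (buffer level, channel quality) state. The states, reward, and dynamics are invented for illustration and are not the dissertation's controller or simulation setup.

```python
# Hypothetical sketch: tabular Q-learning for bitrate selection in a toy
# HTTP adaptive streaming environment.
import random

BITRATES = [1, 2, 4, 8]                       # Mbps levels (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = {}                                        # (buffer_bin, channel_bin) -> per-action values

def q_values(state):
    return Q.setdefault(state, [0.0] * len(BITRATES))

def choose(state):
    if random.random() < EPSILON:             # epsilon-greedy exploration
        return random.randrange(len(BITRATES))
    qs = q_values(state)
    return qs.index(max(qs))

def step(state, action):
    """Toy environment: reward video quality, heavily penalize rebuffering."""
    buffer_bin, channel_bin = state
    throughput = channel_bin * 2 + 1          # Mbps, crude stand-in for channel quality
    buffer_bin = max(0, min(4, buffer_bin + (1 if BITRATES[action] <= throughput else -2)))
    channel_bin = max(0, min(3, channel_bin + random.choice([-1, 0, 1])))
    reward = BITRATES[action] - (10 if buffer_bin == 0 else 0)
    return (buffer_bin, channel_bin), reward

state = (2, 2)
for _ in range(50000):
    action = choose(state)
    next_state, reward = step(state, action)
    target = reward + GAMMA * max(q_values(next_state))
    q_values(state)[action] += ALPHA * (target - q_values(state)[action])
    state = next_state
```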
- Deciphering Emotional Responses to Music: A Fusion of Psychophysiological Data Analysis and Bi-LSTM Predictive Modeling
  Mahat, Maheep (Virginia Tech, 2024-06-10)
  This research explores the temporal patterns of psychophysiological responses to musical excerpts by analyzing the expansive Emotion in Motion dataset, the most comprehensive of its kind. Utilizing Dynamic Time Warping and t-test analysis techniques, we examined data from participants across seven countries who listened to three distinct musical pieces. During these listening sessions, Electrodermal Activity (EDA) and Pulse Oximetry (POX) readings were collected, complemented by qualitative feedback from the participants. Our analysis focused on detecting recurring patterns and extracting meaningful insights from the data. In addition, we compare several deep neural networks to find the one best suited for predicting emotional attributes with EDA and POX signals as input. To further facilitate comprehensive visualization and analysis of the EDA, POX, and audio signals, we developed a dedicated platform featuring a coordinated multiple-view interface as an integral part of this work.
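As an illustration of the Dynamic Time Warping step mentioned above, the following Python sketch computes the classic O(nm) DTW distance between two signals of different lengths; the synthetic traces stand in for EDA readings and the code is not the thesis's implementation.

```python
# Hypothetical sketch: the textbook DTW dynamic program for comparing two
# time series that are not aligned sample-for-sample.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

eda_a = np.sin(np.linspace(0, 6, 120))          # toy stand-ins for two EDA traces
eda_b = np.sin(np.linspace(0, 6, 150) + 0.3)
print(dtw_distance(eda_a, eda_b))
```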
- A Deep Learning-based Dynamic Demand Response Framework
  Haque, Ashraful (Virginia Tech, 2021-09-02)
  The electric power grid is evolving in terms of generation, transmission and distribution network architecture. On the generation side, distributed energy resources (DER) are participating at a much larger scale. Transmission and distribution networks are transforming from a centralized to a decentralized architecture. Residential and commercial buildings are now considered active elements of the electric grid which can participate in grid operation through applications such as Demand Response (DR). DR is an application through which electric power consumption during peak demand periods can be curtailed. DR applications ensure an economic and stable operation of the electric grid by eliminating grid stress conditions. In addition, DR can be utilized as a mechanism to increase the participation of green electricity in an electric grid. DR applications, in general, are passive in nature. During peak demand periods, common practice is to shut down the operation of pre-selected electrical equipment, i.e., heating, ventilation and air conditioning (HVAC) and lights, to reduce power consumption. This approach, however, is not optimal and does not take into consideration any user preference. Furthermore, it does not provide any information related to demand flexibility beforehand. Under the broad concept of grid modernization, the focus is now on applications of data analytics in grid operation to ensure an economic, stable and resilient operation of the electric grid. The work presented here utilizes data analytics in the DR application to transform DR from a static, look-up-based reactive function into a dynamic, context-aware proactive solution. The dynamic demand response framework presented in this dissertation performs three major functionalities: electrical load forecasting, electrical load disaggregation and peak load reduction during DR periods. The building-level electrical load forecasting quantifies the required peak load reduction during DR periods. The electrical load disaggregation provides equipment-level power consumption, which quantifies the available building-level demand flexibility. The peak load reduction methodology provides optimal HVAC setpoints and brightness during DR periods to reduce the peak demand of a building. The control scheme takes user preference and context into consideration. A detailed methodology with relevant case studies regarding the design process of the network architecture of a deep learning algorithm for electrical load forecasting and load disaggregation is presented. A case study regarding peak load reduction through HVAC setpoint and brightness adjustment is also presented. To ensure the scalability and interoperability of the proposed framework, a layer-based software architecture to replicate the framework within a cloud environment is demonstrated.
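As a rough sketch of the load-forecasting component described above, the following Python/Keras snippet trains a small LSTM on a sliding window of past hourly consumption; the window size, architecture, and synthetic data are assumptions for illustration, not the dissertation's model.

```python
# Hypothetical sketch: next-hour building load forecast from a sliding window.
import numpy as np
import tensorflow as tf

WINDOW = 24                                   # hours of history per sample (assumed)
load = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)  # toy kW series

# Build supervised pairs: 24 past hours -> next hour.
X = np.stack([load[i:i + WINDOW] for i in range(len(load) - WINDOW)])[..., None]
y = load[WINDOW:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),                 # predicted next-hour load
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

next_hour = model.predict(X[-1:])             # forecast from the latest window
print(float(next_hour[0, 0]))
```

The forecast feeds the rest of the framework: the predicted building-level peak determines how much load reduction the DR controller must find from the disaggregated, equipment-level flexibility.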
- Design and Analysis of Algorithms for Efficient Location and Service Management in Mobile Wireless Systems
  Gu, Baoshan (Virginia Tech, 2005-09-30)
  Mobile wireless environments present new challenges to the design and validation of system supports for facilitating development of mobile applications. This dissertation concerns two major system-support mechanisms in mobile wireless networks, namely, location management and service management. We address this research issue by considering three topics: location management, service management, and integrated location and service management. A location management scheme must effectively and efficiently handle both user location-update and location-search operations. We first quantitatively analyze a class of location management algorithms and identify conditions under which one algorithm may perform better than others. From insight gained from the quantitative analysis, we design and analyze a hybrid replication with forwarding algorithm that outperforms individual algorithms and show that such a hybrid algorithm can be uniformly applied to mobile users with distinct call and mobility characteristics to simplify the system design without sacrificing performance. For service management, we explore the notion of location-aware personal proxies that cooperate with the underlying location management system with the goal to minimize the network communication cost caused by service management operations. We show that for cellular wireless networks that provide packet services, when given a set of model parameters characterizing the network and workload conditions, there exists an optimal proxy service area size such that the overall network communication cost for service operations is minimized. These proxy-based mobile service management schemes are shown to outperform non-proxy-based schemes over a wide range of identified conditions. We investigate a class of integrated location and service management schemes by which service proxies are tightly integrated with location databases to further reduce the overall network signaling and communication cost. We show analytically and by simulation that when given a user's mobility and service characteristics, there exists an optimal integrated location and service management scheme that would minimize the overall network communication cost for servicing location and service operations. We demonstrate that the best integrated location and service scheme identified always performs better than the best decoupled scheme that considers location and service managements separately.
- Design, Implementation and Analysis of Wireless Ad Hoc Messenger
  Cho, Jin-Hee (Virginia Tech, 2004-07-26)
  Popularity of mobile devices along with the presence of ad hoc networks requiring no infrastructure has contributed to recent advances in the field of mobile computing in ad hoc networks. Mobile ad hoc networks have mostly been utilized in military environments. Recent advances in ad hoc network technology now introduce a new class of applications. In this thesis, we design, implement and analyze a multi-hop ad hoc messenger application using Pocket PCs and the Microsoft .NET Compact Framework. Pocket PCs communicate wirelessly with each other using the IEEE 802.11b technology without the use of an infrastructure. The main protocol implemented in this application is based on Dynamic Source Routing (DSR), which consists of two important mechanisms, Route Discovery and Route Maintenance. We adopt DSR since DSR operates solely based on source routing and an "on-demand" process, so nodes do not have to transmit periodic advertisement packets or routing information. These characteristics are desirable for the ad hoc messenger application, in which a conversation is source-initiated on demand. To test our application easily, we have developed a testing strategy by which a mobility configuration file is pre-generated, describing the mobility pattern of each node based on the random waypoint mobility model. A mobility configuration file thus defines topology changes at runtime and is used by all nodes to know whether they can communicate with others in a single hop or multiple hops during an experimental run. We use five standard metrics to test the performance of the wireless ad hoc messenger application implemented based on DSR, namely: (1) average latency to find a new route, (2) average latency to deliver a data packet, (3) delivery ratio of data packets, (4) normalized control overhead, and (5) throughput. These metrics test the correctness and efficiency of the wireless ad hoc messenger application using the DSR protocol in an 802.11 ad hoc network that imposes limitations on bandwidth and resources of each mobile device. We test the effectiveness of certain design alternatives for implementing the ad hoc messenger application with these five metrics under various topology change conditions by manipulating the speed and pause-time parameters in the random waypoint model. The design alternatives evaluated include (1) the Sliding Window Size (SWS) for end-to-end reliable communication control; (2) the use of per-hop acknowledgement packets (called receipt packets) designed for rapid detection of route errors by intermediate nodes; and (3) the use of a cache for path look-up during route discovery and maintenance. Our analysis results indicate that as the node speed increases, the system performance deteriorates because a higher node speed causes the network topology to change more frequently under the random waypoint mobility model, causing routes to be broken. On the other hand, as the pause time increases, the system performance improves due to a more stable network topology. For the design alternatives evaluated in our wireless ad hoc messenger, we discover that as SWS increases, the system performance also increases until it reaches an optimal SWS value that maximizes performance, reflecting a balance between the higher level of data parallelism introduced and the higher level of medium contention in 802.11 caused by more packets being transmitted simultaneously. Beyond the optimal SWS, the system performance deteriorates as SWS increases because the heavy medium contention effect outweighs the benefit of data parallelism. We also discover that the use of receipt packets is helpful in a rapidly changing network but is not beneficial in a stable network; there is a break-even point in the frequency of topology changes beyond which the use of receipt packets helps quickly detect route errors in a dynamic network and improves the system performance. Lastly, the use of a cache is rather harmful in a frequently changing network because stale information stored in the cache of a source node may cause more route errors and a higher delay for the route discovery process; there exists a break-even point beyond which the use of a cache is not beneficial. Our wireless ad hoc messenger application can be used in a real chatting setting, allowing Pocket PC users to chat instantly in 802.11 environments. The design and development of the dynamic topology simulation tool to model movements of nodes and the automatic testing and data collection tool to facilitate input data selection and output data analysis using XML are also a major contribution. The experimental results obtained indicate that there exists an optimal operational setting in the use of SWS, receipt packets and cache, suggesting that the wireless ad hoc messenger should be implemented in an adaptive manner to fine-tune these design parameters based on the current network condition and performance data monitored to maximize the system performance.
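As a small illustration of the on-demand route discovery that DSR performs, the following Python sketch floods a route request over an assumed connectivity graph and returns the accumulated source route; the topology and node names are toy assumptions, not the thesis's testbed.

```python
# Hypothetical sketch: DSR-style route discovery as a breadth-first flood
# that records the path a ROUTE REQUEST traveled.
from collections import deque

# Which nodes are currently within radio range of each other (assumed topology).
links = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def route_discovery(source, target):
    """Flood route requests; return the first source route that reaches the target."""
    queue = deque([[source]])
    seen = {source}                       # nodes that already forwarded the request
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == target:
            return route                  # a ROUTE REPLY would carry this path back
        for neighbor in links[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(route + [neighbor])
    return None                           # no route found: a route error is reported

print(route_discovery("A", "E"))          # e.g. ['A', 'B', 'D', 'E']
```

Because discovery happens only when a conversation is initiated, no periodic routing traffic is needed, which is the property the abstract cites as the reason for adopting DSR.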