Masters Theses
Browsing Masters Theses by Author "Abbott, A. Lynn"
Now showing 1 - 20 of 172
- 3-D Point Cloud Generation from Rigid and Flexible Stereo Vision Systems
  Short, Nathaniel Jackson (Virginia Tech, 2009-12-04)
  When considering the operation of an Unmanned Aerial Vehicle (UAV) or an Unmanned Ground Vehicle (UGV), such problems as landing site estimation or robot path planning become a concern. Deciding whether an area of terrain has a level enough slope and a wide enough area to land a Vertical Take Off and Landing (VTOL) UAV, or whether an area of terrain is traversable by a ground robot, relies on data gathered from sensors such as cameras. 3-D models, which can be built from data extracted from digital cameras, can help facilitate decision making for such tasks by providing a virtual model of the environment surrounding the system. A stereo vision system utilizes two or more cameras, which capture images of a scene from two or more viewpoints, to create 3-D point clouds. A point cloud is a set of un-gridded 3-D points corresponding to a 2-D image, and is used to build gridded surface models. Designing a stereo system for distant terrain modeling requires an extended baseline, or distance between the two cameras, in order to obtain a reasonable depth resolution. As the width of the baseline increases, so does the flexibility of the system, causing the orientation of the cameras to deviate from their original state. A set of tools has been developed to generate 3-D point clouds from rigid and flexible stereo systems, along with a method for applying corrections to a flexible system to regain distance accuracy.
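  The baseline's effect on depth resolution follows directly from stereo triangulation. As an illustration only (not code from the thesis), the sketch below computes depth from disparity and the approximate depth error caused by a one-pixel disparity error; the focal length and baseline values are hypothetical.

  ```python
  # Illustrative sketch of stereo depth vs. baseline (not from the thesis).
  # For a rectified pair, depth Z = f * B / d, where f is the focal length
  # in pixels, B the baseline, and d the disparity in pixels.

  def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
      """Triangulated depth for a rectified stereo pair."""
      return f_px * baseline_m / disparity_px

  def depth_error(f_px: float, baseline_m: float, depth_m: float) -> float:
      """Approximate depth change from a one-pixel disparity error: dZ ~ Z^2 / (f B)."""
      return depth_m ** 2 / (f_px * baseline_m)

  if __name__ == "__main__":
      f = 1200.0                     # hypothetical focal length in pixels
      for b_m in (0.12, 0.5, 2.0):   # hypothetical baselines in meters
          print(f"B = {b_m} m -> error at 50 m ~ {depth_error(f, b_m, 50.0):.2f} m")
  ```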
- Addressing Occlusion in Panoptic Segmentation
  Sarkaar, Ajit Bhikamsingh (Virginia Tech, 2021-01-20)
  Visual recognition tasks have witnessed vast improvements in performance since the advent of deep learning. Despite these gains, image understanding algorithms are still not completely robust to partial occlusion. In this work, we propose a novel object classification method based on compositional modeling and explore its effect in the context of the newly introduced panoptic segmentation task. The panoptic segmentation task combines semantic and instance segmentation to label the entire image. The novel classification method replaces the object detection pipeline in UPSNet, a Mask R-CNN based design for panoptic segmentation. We also discuss an issue with the segmentation mask prediction of Mask R-CNN that affects overlapping instances. We perform extensive experiments and showcase results on the complex COCO and Cityscapes datasets. The novel classification method shows promising results for object classification on occluded instances in complex scenes.
- Adversarial Learning based framework for Anomaly Detection in the context of Unmanned Aerial Systems
  Bhaskar, Sandhya (Virginia Tech, 2020-06-18)
  Anomaly detection aims to identify the data samples that do not conform to a known normal (regular) behavior. As the definition of an anomaly is often ambiguous, unsupervised and semi-supervised deep learning (DL) algorithms, which primarily use unlabeled datasets to model normal (regular) behaviors, are popularly studied in this context. An unmanned aerial system (UAS) can use contextual anomaly detection algorithms to identify objects of concern in applications like search and rescue, disaster management, and public security. This thesis presents a novel multi-stage framework that supports detection of frames with unknown anomalies, localization of anomalies in the detected frames, and validation of detected frames for incremental semi-supervised learning, with the help of a human operator. The proposed architecture is tested on two new datasets collected for a UAV-based system. In order to detect and localize anomalies, it is important both to model the normal data distribution accurately and to formulate powerful discriminant (anomaly scoring) techniques. We implement a generative adversarial network (GAN)-based anomaly detection architecture to study the effect of loss terms and regularization on the modeling of normal (regular) data, and arrive at the most effective anomaly scoring method for the given application. Following this, we use incremental semi-supervised learning techniques that combine a small set of labeled data (obtained through validation from a human operator) with large unlabeled datasets to improve the knowledge base of the anomaly detection system.
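  The abstract does not give the final scoring formula. As a hedged sketch, a common GAN-based anomaly score (in the style of AnoGAN-like methods, not necessarily the thesis's formulation) combines a reconstruction residual with a discriminator feature-matching residual; the weight `lam` and both residuals below are illustrative assumptions.

  ```python
  import numpy as np

  def anomaly_score(x, x_rec, feat_x, feat_rec, lam=0.9):
      """Illustrative GAN-style anomaly score (assumed, not the thesis's exact
      method): a weighted sum of the pixel-space reconstruction residual and
      the discriminator feature-matching residual. Higher = more anomalous."""
      residual = np.abs(np.asarray(x) - np.asarray(x_rec)).mean()
      feature = np.abs(np.asarray(feat_x) - np.asarray(feat_rec)).mean()
      return lam * residual + (1.0 - lam) * feature
  ```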
- An Analysis of EcoRouting Using a Variable Acceleration Rate Synthesis Model
  Warpe, Hrusheekesh Sunil (Virginia Tech, 2017-08-07)
  Automotive manufacturers are facing increasing pressure from legislative bodies and consumers to reduce the fuel consumption and greenhouse gas emissions of vehicles. This has led many automotive manufacturers to start production of Plug-in Hybrid Electric Vehicles (PHEVs) and Battery Electric Vehicles (BEVs). Another method that helps reduce the environmental effect of transportation is EcoRouting. Standard Global Positioning System (GPS) navigation offers route alternatives between a user-specified origin and destination; this technology provides multiple routes to the user and focuses on reducing the travel time to reach the destination. EcoRouting, in contrast, determines a route that minimizes vehicle energy consumption rather than travel time. An EcoRouting system has been developed as part of this thesis that takes in information such as speed limits, the number of stop lights, and the road grade to calculate the energy consumption of a vehicle along a route. A synthesis methodology is introduced that takes the distance between the origin and destination, the acceleration rate of the vehicle, cruise speed, and jerk rate as inputs to simulate driver behavior on a given route. A new approach is presented that weighs the energy consumption of different routes and chooses the route with the least energy consumption, subject to a constraint on travel time. A cost function for quantifying the effect of travel time is introduced that assists in choosing the EcoRoute with an acceptable limit on the travel time required to reach the destination. The thesis analyzes the EcoRouting system with both the minimum and the maximum number of conditional stops, and also studies the effect of the presence or absence of road-grade information on energy consumption along a route. A sensitivity study is performed to observe the change in energy consumption of the vehicle with changes in acceleration rates and road grade. Three routing scenarios are presented to demonstrate the functionality of EcoRouting. The EcoRouting model is also validated against an external EcoRouting research paper, and the energy consumption along three routes is calculated. The EcoRoute solution is found to vary with the information given to the variable acceleration rate model. The synthesis and the results obtained show that parameters such as acceleration, deceleration, and road grade affect the overall energy consumption of a vehicle and are helpful in determining the EcoRoute.
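  The route-selection step lends itself to a one-function sketch. The following is a minimal illustration of energy-minimizing route choice under a travel-time constraint; the dictionary field names and the fallback rule are assumptions, not the thesis's actual cost function.

  ```python
  def choose_eco_route(routes, max_time_s):
      """Pick the least-energy route subject to a travel-time cap.
      `routes` is a list of dicts with 'energy_kwh' and 'time_s' keys
      (hypothetical field names). Falls back to the fastest route when
      no route satisfies the time constraint."""
      feasible = [r for r in routes if r["time_s"] <= max_time_s]
      if not feasible:
          return min(routes, key=lambda r: r["time_s"])
      return min(feasible, key=lambda r: r["energy_kwh"])

  # Example: choose_eco_route([{"energy_kwh": 2.1, "time_s": 900},
  #                            {"energy_kwh": 1.7, "time_s": 1100}], 1200)
  ```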
- Analyzing and Classifying Neural Dynamics from Intracranial Electroencephalography Signals in Brain-Computer Interface Applications
  Nagabushan, Naresh (Virginia Tech, 2019-06-14)
  Brain-Computer Interfaces (BCIs) that rely on motor imagery currently allow subjects to control quad-copters, robotic arms, and computer cursors. Recent advancements have been made possible by breakthroughs in fields such as electrical engineering, computer science, and neuroscience. Currently, most real-time BCIs use hand-crafted feature extractors, feature selectors, and classification algorithms. In this work, we explore the classification algorithms currently used in electroencephalographic (EEG) signal classification and assess their performance on intracranial EEG (iEEG) data. We first discuss the motor imagery task employed using iEEG signals and find features that clearly distinguish between different classes. Second, we compare the state-of-the-art classifiers used in EEG BCIs in terms of their error rate, computational requirements, and feature interpretability. Next, we show the effectiveness of these classifiers in iEEG BCIs and, last, show that our new classification algorithm, designed to use spatial, spectral, and temporal information, reaches performance comparable to other state-of-the-art classifiers while also allowing increased feature interpretability.
- An Antenna Specific Site Modeling Tool for Interactive Computation of Coverage Regions for Indoor Wireless Communication
  Bhat, Nitin (Virginia Tech, 1998-03-02)
  A goal of indoor wireless communication is to strategically place RF base stations to obtain optimum signal coverage at the lowest cost and power. Traditionally, transceiver locations have been selected by human experts who rely on experience and heuristics to obtain a near-optimum placement. Current methods depend on involved on-site communication measurements and crude statistical modeling of the obtained data, which is time consuming and prohibitive in cost. Given the inherent variability of the indoor environment, such a method often yields poor efficiency; for example, more power or more transceivers than necessary may be used. This thesis describes an interactive software system that can be used to aid transceiver placement. The tool is easy to use and is targeted at users who are not experts in wireless communication system design. Once the transceiver locations are selected by the user within a graphical floor plan, the system uses simple path-loss models to predict coverage regions for each transceiver. The coverage regions are highlighted to indicate expected coverage. Earlier work assumed isotropic transceivers and had limited directional transmitter support. This thesis describes how the tool has been enhanced to support a wide range of 3D antenna patterns as encountered in practical situations. The tool has also been expanded to accommodate more partition types and to report the area of coverage. The resulting system is expected to be very useful in the practical deployment of indoor wireless systems.
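  The abstract does not specify which path-loss model the tool uses. Below is a minimal sketch of one widely used "simple path-loss model", the log-distance model with per-partition attenuation; the exponent, reference loss, and sensitivity values are illustrative placeholders.

  ```python
  import math

  def path_loss_db(d_m, n=3.0, pl0_db=40.0, partition_losses_db=()):
      """Log-distance path loss with per-partition attenuation (d0 = 1 m):
      PL(d) = PL(d0) + 10 n log10(d / d0) + sum of partition losses."""
      return pl0_db + 10.0 * n * math.log10(max(d_m, 1.0)) + sum(partition_losses_db)

  def in_coverage(tx_power_dbm, d_m, rx_sensitivity_dbm=-90.0, **model_kwargs):
      """A location counts as covered if received power clears the sensitivity."""
      return tx_power_dbm - path_loss_db(d_m, **model_kwargs) >= rx_sensitivity_dbm

  # e.g., in_coverage(20.0, 25.0, partition_losses_db=(3.0, 5.0)) for a
  # 25 m path crossing two partitions with assumed 3 dB and 5 dB losses.
  ```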
- Application of Computer Vision Techniques for Railroad Inspection using UAVs
  Harekoppa, Pooja Puttaswamygowda (Virginia Tech, 2016-08-16)
  The task of railroad inspection is a tedious one. It requires many skilled experts and long hours of frequent on-field inspection. Automated ground equipment systems that have been developed to address this problem have the drawback of blocking rail service during the inspection process. As an alternative, Computer Vision and Machine Learning based techniques were developed in this thesis to analyze two kinds of defects on rail tracks using aerial imagery from a UAV. The defects targeted were missing spikes on tie plates and cracks on ties. In order to perform this inspection, the rail region was identified in the image and then the tie plate and tie regions on the track were detected. These steps were performed using morphological operations, filtering, and intensity analysis. Once the tie plate was localized, the regions of interest on the plate were used to train a machine learning model to recognize missing spikes. Classification using an SVM resulted in an accuracy of around 96%, which varied greatly with tie plate illumination and ROI alignment for the Lampasas and Chickasha subdivision datasets. Many other classifiers were also used for training and testing, and an ensemble method with a majority-vote scheme was explored for classification. The second category of learning model used was a multi-layered neural network. The major drawback of this method was that it required a large number of images for training; however, it performed better than feature-based classifiers when a larger training dataset was available. As a second kind of defect, tie conditions were analyzed. From the localized tie region, tie cracks were detected using thresholding and morphological operations. A machine learning classifier was developed to predict the condition of a tie based on training examples of images with extracted features. The multi-class classification accuracy obtained was around 83%, and there were no misclassifications between the two extreme classes of tie condition on the test data.
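  As a hedged sketch of the classification stage only: an RBF-kernel SVM trained on per-ROI feature vectors. The synthetic features below stand in for the thesis's actual tie-plate descriptors, which are not given in the abstract.

  ```python
  import numpy as np
  from sklearn.pipeline import make_pipeline
  from sklearn.preprocessing import StandardScaler
  from sklearn.svm import SVC
  from sklearn.model_selection import cross_val_score

  # Synthetic stand-ins for ROI feature vectors (the real features are not
  # specified above); label 0 = spike present, 1 = spike missing.
  rng = np.random.default_rng(0)
  X = np.vstack([rng.normal(0.0, 1.0, (100, 16)), rng.normal(1.0, 1.0, (100, 16))])
  y = np.array([0] * 100 + [1] * 100)

  clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
  print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
  ```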
- Arc Path Collision Avoidance Algorithm for Autonomous Ground Vehicles
  Naik, Ankur (Virginia Tech, 2005-12-15)
  Presented in this thesis is a collision avoidance algorithm designed around an arc path model. The algorithm was designed for use on Virginia Tech robots entered in the 2003 and 2004 Intelligent Ground Vehicle Competition (IGVC) and on our 2004 entry into the DARPA Grand Challenge. The arc path model was used because of the simplicity of the calculations and because it can accurately represent the base kinematics of Ackermann- or differentially-steered vehicles. Clothoid curves have been used in the past to create smooth paths with continuously varying curvature, but clothoids are computationally intensive. The circular arc algorithm proposed here is designed with simplicity and versatility in mind. It is readily adaptable to ground vehicles of any size and shape, and it is designed to run with minimal tuning. The algorithm can be used as a stand-alone reactive collision avoidance algorithm in simple scenarios, but it can be better optimized for speed and safety when guided by a global path planner. A complete navigation architecture is presented as an example of how obstacle avoidance can be incorporated in the algorithm.
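  A minimal sketch of the geometric test at the heart of an arc-path planner, under assumptions the abstract does not state: the vehicle starts at the origin heading along +x, obstacles are 2D points, and the vehicle sweeps a corridor one vehicle-width wide along a circular arc of the chosen curvature.

  ```python
  import math

  def arc_is_clear(obstacles, curvature, vehicle_width, arc_len):
      """True if no obstacle point lies inside the corridor swept along a
      circular arc starting at the origin with heading +x. Illustrative only."""
      if abs(curvature) < 1e-9:  # straight-line limit of the arc model
          return all(not (0.0 <= x <= arc_len and abs(y) <= vehicle_width / 2)
                     for x, y in obstacles)
      r = 1.0 / abs(curvature)
      sign = 1.0 if curvature > 0 else -1.0
      for x, y in obstacles:
          y *= sign                             # mirror right turns onto left turns
          dist = math.hypot(x, y - r)           # distance to the arc center (0, r)
          theta = math.atan2(x, r - y) % (2 * math.pi)  # travel angle along the arc
          if abs(dist - r) <= vehicle_width / 2 and theta * r <= arc_len:
              return False
      return True

  # A reactive planner can evaluate one arc per candidate curvature each
  # control cycle and steer along the best clear arc.
  ```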
- The Art of Deep Connection - Towards Natural and Pragmatic Conversational Agent Interactions
  Ray, Arijit (Virginia Tech, 2017-07-12)
  As research in Artificial Intelligence (AI) advances, it is crucial to focus on seamless communication between humans and machines in order to effectively accomplish tasks. Smooth human-machine communication requires the machine to be sensible and human-like while interacting with humans, while simultaneously being capable of extracting the maximum information it needs to accomplish the desired task. Since many of the tasks machines are required to solve today involve the understanding of images, training machines to have human-like and effective image-grounded conversations with humans is one important step towards achieving this goal. Although we now have agents that can answer questions asked about images, they are prone to failure from confusing input, and they cannot ask clarification questions in turn to extract the desired information from humans. Hence, as a first step, we direct our efforts towards making Visual Question Answering agents human-like by making them resilient to confusing inputs that otherwise do not confuse humans. Not only is it crucial for a machine to answer questions reasonably, it should also know how to ask questions sequentially to extract the desired information it needs from a human. Hence, we introduce a novel game called the Visual 20 Questions Game, in which a machine tries to figure out a secret image a human has picked by having a natural language conversation with the human. Using deep learning techniques like recurrent neural networks and sequence-to-sequence learning, we demonstrate scalable and reasonable performance on both tasks.
- Automated Landing Site Evaluation for Semi-Autonomous Unmanned Aerial Vehicles
  Klomparens, Dylan (Virginia Tech, 2008-08-20)
  A system is described for identifying obstacle-free landing sites for a vertical-takeoff-and-landing (VTOL) semi-autonomous unmanned aerial vehicle (UAV) from point cloud data obtained from a stereo vision system. The relatively inexpensive, commercially available Bumblebee stereo vision camera was selected for this study. A "point cloud viewer" computer program was written to analyze point cloud data obtained from 2D images transmitted from the UAV to a remote ground station. The program divides the point cloud data into segments, identifies the best-fit plane through the data for each segment, and performs an independent analysis on each segment to assess the feasibility of landing in that area. The program also rapidly presents the stereo vision information and analysis to the remote mission supervisor, who can make quick, reliable decisions about where to safely land the UAV. The features of the program and the methods used to identify suitable landing sites are presented in this thesis. Also presented are the results of a user study that compares the abilities of humans and computer-supported point cloud analysis in certain aspects of landing site assessment. The study demonstrates that the computer-supported evaluation of potential landing sites provides an immense benefit to the UAV supervisor.
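  The segment-wise plane fit can be illustrated in a few lines. This sketch (assumed, not the thesis's code) fits a least-squares plane with SVD and reports the slope against vertical; a supervisor-set slope threshold would then gate each segment.

  ```python
  import numpy as np

  def fit_plane(points):
      """Best-fit plane through an (N, 3) point-cloud segment via SVD.
      Returns the centroid and the unit normal (smallest singular vector)."""
      centroid = points.mean(axis=0)
      _, _, vt = np.linalg.svd(points - centroid)
      return centroid, vt[-1]

  def slope_deg(normal):
      """Angle between the plane normal and the vertical (z-up) axis."""
      n = normal / np.linalg.norm(normal)
      return float(np.degrees(np.arccos(abs(n[2]))))

  # e.g., flag a segment as landable when slope_deg(normal) < 5.0 and the
  # residual point-to-plane distances stay below a chosen threshold.
  ```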
- Automatic Detection of Elongated Objects in X-Ray Images of Luggage
  Liu, Wenye III (Virginia Tech, 1997-09-05)
  This thesis presents part of the research work at Virginia Tech on developing a prototype automatic luggage scanner for explosive detection; it deals with the automatic detection of elongated objects (detonators) in x-ray images using matched filtering, the Hough transform, and information fusion techniques. A sophisticated algorithm has been developed for detonator detection in x-ray images, and computer software utilizing this algorithm was programmed to implement the detection on both UNIX and PC platforms. A variety of template matching techniques were evaluated, and the filtering parameters (template size, template model, thresholding value, etc.) were optimized. A variation of matched filtering was found to be reasonably effective, while a Gabor-filtering method was found not to be suitable for this problem. The developed software for both single and multiple orientations was tested on x-ray images generated on AS&E and Fiscan inspection systems, and was found to work well for a variety of images. The effects of object overlapping, luggage position on the conveyor, and detonator orientation variation were also investigated using the single-orientation algorithm. It was found that the effectiveness of the software depended on the extent of overlapping as well as on the objects the detonator overlapped. The software was found to work well regardless of the position of the luggage bag on the conveyor, and it was able to tolerate a moderate amount of orientation change.
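  A hedged sketch of one multi-orientation matched-filtering approach, using OpenCV normalized cross-correlation with a rotated template; the angle step and threshold are illustrative, and this is not necessarily the variant the thesis found effective.

  ```python
  import cv2
  import numpy as np

  def detect_elongated(image_gray, template_gray, angle_step=15, thresh=0.7):
      """Rotate the template through 180 degrees and keep normalized
      cross-correlation peaks above a threshold. Returns (x, y, angle) hits."""
      hits = []
      h, w = template_gray.shape
      for angle in range(0, 180, angle_step):
          rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
          t = cv2.warpAffine(template_gray, rot, (w, h))
          response = cv2.matchTemplate(image_gray, t, cv2.TM_CCOEFF_NORMED)
          ys, xs = np.where(response >= thresh)
          hits.extend((int(x), int(y), angle) for x, y in zip(xs, ys))
      return hits
  ```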
- Automatic Dynamic Tracking of Horse Head Facial Features in Video Using Image Processing Techniques
  Doyle, Jason Emory (Virginia Tech, 2019-02-11)
  The wellbeing of horses is very important to their caretakers, trainers, veterinarians, and owners. This thesis describes the development of a non-invasive image processing technique that allows for automatic detection and tracking of horse head and ear motion in videos or camera feeds, both of which may provide indications of horse pain, stress, or well-being. The algorithm developed here can automatically detect and track head motion and ear motion in videos of a standing horse. Results demonstrating the technique for nine different horses are presented, where the data from the algorithm are used to plot absolute motion, velocity, and acceleration versus time for the head and ear motion of a variety of horses and ponies. Two-dimensional plotting of x and y motion over time is also presented, as are results of pilot work on eye detection in light-colored horses. Detection of pain in horses is particularly difficult because they are prey animals and have mechanisms to disguise their pain, and these instincts may be particularly strong in the presence of an unknown human, such as a veterinarian. The current state of the art for detecting pain in horses primarily involves invasive methods, such as heart rate monitors around the body, drawing blood for cortisol levels, and pressing on painful areas to elicit a response, although some work has been done in which humans sort and score photographs subjectively in terms of a "horse grimace scale." The algorithms developed in this thesis are the first the author is aware of that exploit proven image processing approaches from other applications to develop an automatic tool for detection and tracking of horse facial indicators. The algorithms were implemented with the common open-source tools Python and OpenCV, and standard image processing approaches, including Canny edge detection; Hue, Saturation, Value (HSV) color filtering; and contour tracking, were utilized in algorithm development. The work in this thesis provides the foundational development of a non-invasive and automatic detection and tracking program for horse head and ear motion, including demonstration of the viability of this approach using videos of standing horses. This approach lays the groundwork for robust tool development for monitoring horses non-invasively and without the required presence of humans in such applications as post-operative monitoring, foaling, and evaluation of performance horses in competition and/or training, as well as for providing data for research on animal welfare, among other scenarios.
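  Since the abstract names the exact building blocks (HSV color filtering and contour tracking in Python/OpenCV), a minimal per-frame sketch is straightforward; the HSV bounds and morphology kernel below are placeholders one would tune per horse and per scene.

  ```python
  import cv2
  import numpy as np

  def largest_blob_centroid(frame_bgr, hsv_lo=(0, 40, 40), hsv_hi=(20, 255, 255)):
      """One frame of an HSV-filter-plus-contour tracker: threshold in HSV,
      clean the mask, and return the centroid of the largest contour (or None)."""
      hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
      mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
      mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      if not contours:
          return None
      m = cv2.moments(max(contours, key=cv2.contourArea))
      if m["m00"] == 0:
          return None
      return m["m10"] / m["m00"], m["m01"] / m["m00"]

  # Differencing successive centroids yields the motion, velocity, and
  # acceleration traces described above.
  ```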
- Automatic Generation of Test Cases for Agile using Natural Language Processing
  Rane, Prerana Pradeepkumar (Virginia Tech, 2017-03-24)
  Test case design and generation is a tedious manual process that consumes 40-70% of the software test life cycle. Test cases written manually by inexperienced testers may not offer complete coverage of the requirements, and frequent changes in requirements reduce the reusability of manually written test cases, costing more time and effort. Most projects in industry follow a Behavior-Driven software development approach, capturing requirements from business stakeholders through user stories written in natural language. Instead of writing test cases manually, this thesis investigates a practical solution for automatically generating test cases within an Agile software development workflow using natural language-based user stories and acceptance criteria. However, the information provided by the user story alone is insufficient to create test cases using natural language processing (NLP), so we introduce two new input parameters, Test Scenario Description and Dictionary, to improve the test case generation process. To establish feasibility, we developed a tool that uses NLP techniques to automatically generate functional test cases from the free-form test scenario description. The tool reduces the effort required to create test cases while improving the test coverage and quality of the test suite. Results from the feasibility study are presented in this thesis.
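  The mapping from natural-language acceptance criteria to test steps can be sketched for the Given/When/Then phrasing common in Behavior-Driven Development; this regex-based split is an illustration of the starting point only, not the NLP pipeline the thesis builds.

  ```python
  import re

  def split_acceptance_criterion(criterion: str):
      """Split a Given/When/Then acceptance criterion into (keyword, step)
      pairs, a starting point for generating test-case steps."""
      pattern = r"\b(Given|When|Then|And)\b\s+(.*?)(?=\b(?:Given|When|Then|And)\b|$)"
      return [(kw, text.strip()) for kw, text in re.findall(pattern, criterion, re.S)]

  print(split_acceptance_criterion(
      "Given a registered user When they log in Then the dashboard is shown"))
  ```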
- An automatic method for inspecting plywood shear samples
  Avent, R. Richard (Virginia Tech, 1990-06-07)
  Plywood is composed of several thin layers of wood bonded together by glue. The adhesive integrity of the glue formulation employed must surpass the structural integrity of the wood species within a given panel of plywood. The American Plywood Association (APA) regularly tests the plywood produced at various manufacturing plants to ensure that this performance requirement is consistently met. One of the procedures used by the APA consists of 1) milling a plywood panel into small rectangular blocks called samples, 2) conditioning these samples with various treatments to simulate natural aging, 3) shearing each sample into two halves, and 4) estimating the percent wood failure (as opposed to glue failure) produced by the shear by visually inspecting the sample halves. A region of solid wood, or of wood fibers embedded in glue, on the shear of a sample half is a region of wood failure, while a region of glue is a region of glue failure. If the wood failure of samples from a significant number of panels is too low, the right to use APA trademarks is withdrawn from the plant where the sampling occurred. Since measurements obtained by human visual inspection can contain inaccuracies due to fatigue, boredom, state of mind, etc., an automatic vision system to determine percent wood failure is proposed. The method presented is a refinement of the method developed by McMillin and is divided into three tasks. The first task is to locate the area of shear on a given sample half. The second is to distinguish the areas of wood from the areas of glue on the shear: solid wood is distinguished from glue based on the difference in gray level intensity between solid wood and glue, and wood fiber is distinguished from glue based on the difference in texture, i.e., edge patterns, between fiber and glue. The third task is to compare the areas of shear on the two sample halves comprising a sample to determine the percent wood failure of the sample.
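  A rough sketch of the two discrimination cues just described: gray-level intensity for solid wood vs. glue, and edge texture for fiber vs. glue. The thresholds, the window size, and the assumption that glue images darker than wood are all placeholders, not values from the thesis.

  ```python
  import cv2
  import numpy as np

  def percent_wood_failure(shear_gray, glue_intensity_max=90, window=15,
                           edge_density_max=0.05):
      """Illustrative classifier over a uint8 grayscale shear image: call a
      pixel 'glue' when it is both dark (intensity cue; assumes glue images
      darker than wood) and locally smooth (texture cue: low Canny edge
      density, since wood fiber is highly textured)."""
      edges = cv2.Canny(shear_gray, 50, 150)
      kernel = np.ones((window, window), np.float32) / window ** 2
      edge_density = cv2.filter2D(edges.astype(np.float32) / 255.0, -1, kernel)
      glue = (shear_gray < glue_intensity_max) & (edge_density < edge_density_max)
      return 100.0 * (1.0 - glue.mean())
  ```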
- Automatic Positioning and Design of a Variable Baseline Stereo Boom
  Fanto, Peter Louis (Virginia Tech, 2012-07-17)
  Conventional stereo vision systems rely on two spatially fixed cameras to gather depth information about a scene. The cameras typically have a fixed distance between them, known as the baseline. As the baseline increases, the estimated 3D information becomes more accurate, which makes it advantageous to have as large a baseline as possible. However, large baselines cause problems whenever objects approach the cameras: the objects begin to leave the field of view, making it impossible to determine where they are located in 3D space. This becomes especially important if an object of interest must be acted upon and is approached by a vehicle. To overcome this limitation, this thesis introduces a variable baseline stereo system that can adjust its baseline automatically based on the location of an object of interest, allowing accurate depth information to be gathered when an object is both near and far. The system was designed to operate under, and automatically move to, a large range of baselines. This thesis presents the mechanical design of the stereo boom, followed by the derivation of a control scheme that adjusts the baseline based on an estimated object location gathered from stereo vision. This algorithm ensures that a certain incident angle on an object of interest is never exceeded; the maximum angle is determined by where a stereo correspondence algorithm, Semi-Global Block Matching, fails to create full reconstructions.
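  The incident-angle constraint fixes a maximum baseline for any given object distance. Under the simplifying assumption of symmetric cameras verging on the object (an assumption, not stated in the abstract), the geometry reduces to one line; the angle cap itself would come from the Semi-Global Block Matching experiments.

  ```python
  import math

  def max_baseline_m(object_depth_m: float, max_incident_deg: float) -> float:
      """Largest baseline that keeps the per-camera incident angle on the
      object at or below the allowed maximum, for symmetric verging cameras:
      B_max = 2 * Z * tan(alpha_max)."""
      return 2.0 * object_depth_m * math.tan(math.radians(max_incident_deg))

  # e.g., an object estimated at 1.5 m with a hypothetical 20-degree cap
  # allows max_baseline_m(1.5, 20.0) ~= 1.09 m; the boom retracts below this.
  ```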
- Autonomous tactile object exploration and estimation using simple sensors
  Hollinger, James G. (Virginia Tech, 1994-05-05)
  In order for robots to become more useful, they must be able to adapt and operate in foreign or unpredictable environments. The goal of this thesis is to present an algorithm that enables a robot to autonomously explore its environment by touch and then estimate the shape of the objects it encounters. To demonstrate the feasibility and functionality of such an algorithm, it was fully implemented on a MERLIN 6540 industrial robot. A unique compliant end-effector (consisting of a trackball mounted to a force/torque sensor on a sliding mechanism) and a fuzzy logic force controller were developed to overcome the difficulties inherent in force control on a stepper motor robot. A Kalman filter based quadric shape estimator was then used to describe the objects encountered in the MERLIN's workspace. The minimization of a cost function based on the shape estimator's uncertainty guided the robot along an exploration trajectory designed to produce the fastest-converging shape estimate. Results of various exploration trials using autonomous and preprogrammed trajectories are presented. In addition to shape estimates, surface curvature measurements were also obtained. The unique end-effector that provided compliance for the force controller was also able to measure the arc length traversed on the object's surface; arc length combined with surface orientation makes it possible to determine local surface curvature.
- Branch Guided Metrics for Functional and Gate-level Testing
  Acharya, Vineeth Vadiraj (Virginia Tech, 2015-03-31)
  With the increasing complexity of modern-day processors and systems-on-a-chip (SoCs), designers invest a great deal of time and resources in testing and validating these designs. To reduce time-to-market and cost, the techniques used to validate these designs must constantly improve. Since most design activity has moved to the register transfer level (RTL), test methodologies at the RTL have been gaining momentum. We present a novel framework for functional test generation at the RTL. A popular software-based metric for measuring the effectiveness of an RTL test suite is branch coverage, but exercising hard-to-reach branches is still a challenge and requires a good understanding of the design semantics. The proposed framework uses static analysis to extract certain semantics of the circuit and uses several data structures to model these semantics. Using these data structures, we assist the branch-guided search to exercise hard-to-reach branches. Since the correlation between high branch coverage and detecting defects and bugs is not clear, we present a new metric at the RTL which augments RTL branch coverage with state values. Vectors which score higher on the new metric achieve higher branch and state coverage, and can therefore be applied at different levels of abstraction such as post-silicon validation. Experimental results show that use of the new metric in our test generation framework can achieve a high level of branch and fault coverage for several benchmark circuits while reducing the length of the vector sequence. This work was supported in part by NSF grant 1016675.
- Cell Phenotype Analyzer: Automated Techniques for Cell Phenotyping using Contactless Dielectrophoresis
  Bala, Divya Chandrakant (Virginia Tech, 2016-06-23)
  Cancer is among the leading causes of death worldwide. In 2012, there were 14 million new cases and 8.2 million cancer-related deaths worldwide, and the number of new cancer cases is expected to rise to 22 million within the next two decades. Most chronic cancers cannot be cured; however, if the precise cancer cell type is diagnosed at an earlier, less aggressive stage, then the chance of curing the disease increases with accurate drug delivery. This work is a humble contribution to the advancement of cancer research. It delves into biological cell phenotyping under a dielectrophoresis setup using computer vision. Dielectrophoresis is a well-known phenomenon in which dielectric particles are subjected to a non-homogeneous electric field. This work is the analytical part of a larger proposed system, complete with hardware, software, and microfluidics integration, to achieve cancer cell characterization, separation, and enrichment using contactless dielectrophoresis. To analyze cell morphology, various detection and tracking algorithms were implemented and tested on a diverse dataset comprising cell-separation video sequences. Other related applications, like cell counting and cell-proximity detection, were also implemented. Performance was evaluated against ground truth using metrics like precision, recall, and RMS cell-count error. A detection approach using difference of Gaussians and a superpixel algorithm gave the highest average F-measure of 0.745. A nearest-neighbor tracker and a Kalman tracking method gave the best overall tracking performance, with an average F-measure of 0.95. This combination of detection and tracking methods proved to be best suited for this dataset. A graphical user interface to automate the experimentation process of the proposed system was also designed.
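  The two best-performing components named above can be sketched directly. The sigma ratio, threshold, and gating distance below are placeholders, and the greedy assignment is a simplification of real data association.

  ```python
  import numpy as np
  from scipy.ndimage import gaussian_filter

  def dog_response(frame, sigma=2.0, k=1.6):
      """Difference-of-Gaussians blob response; cells appear as local maxima."""
      return gaussian_filter(frame, sigma) - gaussian_filter(frame, k * sigma)

  def nearest_neighbor_match(prev_pts, new_pts, max_dist=10.0):
      """Greedy nearest-neighbor association of detections between frames."""
      matches, taken = {}, set()
      for i, p in enumerate(prev_pts):
          dists = [np.linalg.norm(np.subtract(p, q)) for q in new_pts]
          for j in np.argsort(dists):
              j = int(j)
              if dists[j] <= max_dist and j not in taken:
                  matches[i] = j
                  taken.add(j)
                  break
      return matches
  ```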
- Characterization, Modeling of Piezoelectric Pressure Transducer for Facilitation of Field Calibration
  Pakdel, Zahra (Virginia Tech, 2007-05-21)
  An important goal in today's marketplace is to improve quality and reliability, and there is great interest in the engineering community in developing a field calibration technique for piezoelectric pressure sensors to reduce cost and improve reliability. This thesis summarizes the algorithm used to characterize and develop a model for a piezoelectric pressure transducer. The basic concept of the method is to excite the sensor using an electric force to capture the signature characteristic of the pressure transducer. The document presents a frequency curve-fitted model based on high-frequency excitation of the piezoelectric pressure transducer, along with a time domain model of the sensor. The time domain response of the frequency curve-fitted model is obtained in parallel with the frequency response of the time domain model, and the comparison results are discussed. Moreover, the relation between model parameters and sensitivity is investigated extensively. In order to detect damage and monitor the condition of the sensor online, a resonance frequency comparison method is presented, and the relationship between sensitivity and the resonance frequency characteristic of the sensor is investigated extensively. The method of resonance monitoring greatly reduces the cost of hardware. This work concludes with a software implementation of the signature comparison of the sensor based on a study of the experimental data; the software would be implemented in the control system.
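  The online resonance-comparison idea reduces to an FFT peak check. A minimal sketch under assumed signal handling (Hann window, largest non-DC peak, relative drift tolerance) is shown below.

  ```python
  import numpy as np

  def resonance_hz(response: np.ndarray, fs_hz: float) -> float:
      """Dominant resonance of the sensor's excitation response, taken as
      the largest non-DC peak of the windowed FFT magnitude."""
      spectrum = np.abs(np.fft.rfft(response * np.hanning(len(response))))
      freqs = np.fft.rfftfreq(len(response), d=1.0 / fs_hz)
      return float(freqs[int(np.argmax(spectrum[1:])) + 1])

  def signature_changed(f_now_hz: float, f_ref_hz: float, rel_tol=0.02) -> bool:
      """Flag possible sensor damage when resonance drifts beyond rel_tol."""
      return abs(f_now_hz - f_ref_hz) / f_ref_hz > rel_tol
  ```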
- Cinemacraft: Exploring Fidelity Cues in Collaborative Virtual World Interactions
  Narayanan, Siddharth (Virginia Tech, 2018-02-15)
  The research presented in this thesis concerns the contribution of virtual human (or avatar) fidelity to social interaction in virtual environments (VEs) and how sensory fusion can improve these interactions. VEs present new possibilities for mediated communication by placing people in a shared 3D context. However, there are technical constraints on creating photo-realistic and behaviorally realistic avatars capable of mimicking a person's actions or intentions in real time. At the same time, previous research findings indicate that virtual humans can elicit social responses even with minimal cues, suggesting that full realism may not be essential for effective social interaction. This research explores the impact of avatar behavioral realism on people's experience of interacting with virtual humans by varying the interaction fidelity. This is accomplished through the creation of Cinemacraft, a technology-mediated immersive platform for collaborative human-computer interaction in a virtual 3D world, and the incorporation of sensory fusion to improve the fidelity of interactions and real-time collaboration. It investigates interaction techniques within the context of a multiplayer sandbox voxel game engine and proposes how interaction qualities of the shared virtual 3D space can be used to further involve a user while simultaneously offering a stimulating experience. The primary hypothesis of the study is that embodied interactions result in a higher degree of presence and co-presence, and that sensory fusion can improve the quality of presence and co-presence. The argument is developed through research justification, followed by a user study to demonstrate the qualitative results and quantitative metrics. The research comprises an experiment involving 24 participants, with experiment tasks focused on distinct but interrelated questions as higher levels of interaction fidelity are introduced. The outcome of this research is an interactive and accessible sensory fusion platform capable of delivering compelling live collaborative performances and empathetic musical storytelling that uses low-fidelity avatars to successfully sidestep the "uncanny valley." This research contributes to the field of immersive collaborative interaction by making transparent the methodology, instruments, and code. Further, it is presented in non-technical terminology, making it accessible to developers aspiring to use interactive 3D media to promote further experimentation and conceptual discussion, as well as to team members with less technological expertise.