Show simple item record

dc.contributor.author: Elshahali, Mai Hassan Ahmed Ali [en_US]
dc.date.accessioned: 2015-09-18T20:05:46Z
dc.date.available: 2015-09-18T20:05:46Z
dc.date.issued: 2015-09-14 [en_US]
dc.identifier.other: vt_gsexam:6152 [en_US]
dc.identifier.uri: http://hdl.handle.net/10919/56582
dc.description.abstract: Scientific visualization is primarily concerned with the visual presentation of three-dimensional phenomena in domains such as medicine, meteorology, and astrophysics. The emphasis in scientific visualization research has been on the efficient rendering of measured or simulated data points, surfaces, and volumes, with a time component to convey the dynamic nature of the studied phenomena. With the explosive growth in the size of the data, interactive visualization of scientific data becomes a real challenge. In recent years, the graphics community has witnessed tremendous improvements in the performance capabilities of graphics processing units (GPUs), and advances in GPU-accelerated rendering have enabled data exploration at interactive rates. Nevertheless, the majority of techniques rely on the assumption that a true three-dimensional geometric model capturing the physical phenomena of interest is available and ready for visualization. Unfortunately, this assumption does not hold in many scientific domains, in which measurements are obtained from a given scanning modality at sparsely located intervals in both space and time. This calls for the fusion of data collected from multiple sources in order to fill the gaps and tell the story behind the data. For years, data fusion has relied on machine learning techniques to combine data from multiple modalities, reconstruct missing information, and track features of interest through time. However, these techniques fall short for datasets with large spatio-temporal gaps. This realization has led researchers in the data fusion domain to acknowledge the importance of human-in-the-loop methods, in which human expertise plays a major role in data reconstruction. This PhD research focuses on developing visualization and interaction techniques aimed at addressing some of the challenges experts face when analyzing the spatio-temporal behavior of physical phenomena.

Given a number of datasets obtained from different measurement modalities and from simulation, we propose a generalized framework that can guide research in the field of multi-sensor data fusion and visualization. We advocate the use of GPU parallelism in our techniques in order to emphasize interaction as a key component in the successful exploration and analysis of multi-sourced datasets. The goal is to allow users to create a mental model that captures their understanding of the spatio-temporal behavior of features of interest, one they can test against real data measurements. This model creation and verification is an iterative process: the user interacts with the visualization, explores and builds an understanding of what occurred in the data, then tests this understanding against real-world measurements and refines it. We developed a system as a reference implementation of the proposed framework. Reconstructed data is rendered in a way that completes the users' cognitive model, which encodes their understanding of the phenomena in question with a high degree of accuracy. We tested the usability of the system and evaluated its support for this cognitive-model construction process. Once an acceptable model is constructed, it is fed back to the system in the form of a reference dataset, which our framework uses to guide the real-time tracking of measurement data. Our results show that interactive exploration tasks enable the construction of this cognitive model and reference set, and that, by designing and implementing advanced GPU-based visualization techniques, real-time interaction is achievable during the exploration, reconstruction, and enhancement of multi-modal, time-variant, three-dimensional data. [en_US]
dc.format.medium: ETD [en_US]
dc.publisher: Virginia Tech [en_US]
dc.rights: This Item is protected by copyright and/or related rights. Some uses of this Item may be deemed fair and permitted by law even without permission from the rights holder(s), or the rights holder(s) may have licensed the work for use under certain conditions. For other uses you need to obtain permission from the rights holder(s). [en_US]
dc.subject: Visualization [en_US]
dc.subject: User Interaction [en_US]
dc.subject: Parallel Processing [en_US]
dc.title: Real-Time Processing and Visualization of 3D Time-Variant Datasets [en_US]
dc.type: Dissertation [en_US]
dc.contributor.department: Computer Science [en_US]
dc.description.degree: Ph. D. [en_US]
thesis.degree.name: Ph. D. [en_US]
thesis.degree.level: doctoral [en_US]
thesis.degree.grantor: Virginia Polytechnic Institute and State University [en_US]
thesis.degree.discipline: Computer Science and Applications [en_US]
dc.contributor.committeechair: Gracanin, Denis [en_US]
dc.contributor.committeechair: Cao, Yong [en_US]
dc.contributor.committeemember: Matkovic, Kresimir [en_US]
dc.contributor.committeemember: North, Christopher L. [en_US]
dc.contributor.committeemember: Elmongui, Hicham Galal [en_US]

