Real-Time Processing and Visualization of 3D Time-Variant Datasets
Scientific visualization is primarily concerned with the visual presentation of three-dimensional phenomena in domains such as medicine, meteorology, and astrophysics. The emphasis in scientific visualization research has been on the efficient rendering of measured or simulated points, surfaces, and volumes, often with a time component to convey the dynamic nature of the studied phenomena. With the explosive growth in the size of the data, interactive visualization of scientific data has become a real challenge. In recent years, the graphics community has witnessed tremendous improvements in the performance of graphics processing units (GPUs), and advances in GPU-accelerated rendering have enabled data exploration at interactive rates. Nevertheless, the majority of techniques rely on the assumption that a true three-dimensional geometric model capturing the physical phenomena of interest is available and ready for visualization. Unfortunately, this assumption does not hold in many scientific domains, where measurements are obtained from a given scanning modality at sparse intervals in both space and time. This calls for the fusion of data collected from multiple sources in order to fill the gaps and tell the story behind the data.
For years, data fusion has relied on machine learning techniques to combine data from multiple modalities, reconstruct missing information, and track features of interest through time. However, these techniques fall short for datasets with large spatio-temporal gaps. This realization has led researchers in the data fusion domain to acknowledge the importance of human-in-the-loop methods, in which human expertise plays a major role in data reconstruction.
This PhD research focuses on developing visualization and interaction techniques aimed at addressing some of the challenges that experts face when analyzing the spatio-temporal behavior of physical phenomena. Given a number of datasets obtained from different measurement modalities and from simulation, we propose a generalized framework that can guide research in the field of multi-sensor data fusion and visualization. We advocate the use of GPU parallelism in our techniques in order to make interaction a key component of the successful exploration and analysis of multi-sourced datasets. The goal is to allow users to create a mental model that accurately captures their understanding of the spatio-temporal behavior of features of interest, one they can verify against real data measurements. This model creation and verification is an iterative process: the user interacts with the visualization, explores and builds an understanding of what occurred in the data, then tests this understanding against real-world measurements and refines it.
We developed a system as a reference implementation of the proposed framework. Reconstructed data is rendered in a way that completes the user's cognitive model, which encodes their understanding of the phenomena in question with a high degree of accuracy. We tested the usability of the system and evaluated its support for this cognitive model construction process. Once an acceptable model is constructed, it is fed back into the system in the form of a reference dataset, which our framework uses to guide the real-time tracking of measurement data. Our results show that interactive exploration tasks enable the construction of this cognitive model and reference set, and that, by designing and implementing advanced GPU-based visualization techniques, real-time interaction is achievable during the exploration, reconstruction, and enhancement of multi-modal, time-variant, three-dimensional data.