Improving the Perception of Depth of Image-Based Objects in a Virtual Environment
With advances in High-Performance Computing, modern scientific simulations are scaling to millions and even billions of grid points. As we enter the exascale era, new strategies are required for visualization and analysis. While Image-Based Rendering (IBR) has emerged as a viable response to the mismatch between data size and the storage and rendering power available, it is limited by its 2D image portrayal of 3D spatial objects. This work describes a novel technique to capture, represent, and render depth information in the context of 3D IBR. Using this technique, we evaluated displacement via displacement maps, shading via normal maps, and the angular interval between captured images. We ran an online user study with 60 participants to evaluate the value of adding depth information back to Image-Based Rendering and found significant benefits.
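The core idea of adding depth back to image-based rendering can be illustrated with a minimal NumPy sketch: each pixel is lifted out of the image plane by its displacement-map value, and its color is shaded with a Lambertian term from a per-pixel normal map. The function name, array layouts, and shading model below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def displace_and_shade(rgb, depth, normals, light_dir, scale=1.0):
    """Lift image pixels into 3D using a displacement map and apply
    Lambertian shading from a per-pixel normal map.

    rgb:       (H, W, 3) colors in [0, 1]
    depth:     (H, W) displacement values in [0, 1]
    normals:   (H, W, 3) unit normals
    light_dir: (3,) unit vector pointing toward the light
    Returns (points, shaded): (H*W, 3) displaced 3D points and
    (H, W, 3) shaded colors.
    """
    h, w = depth.shape
    # Pixel grid in the image plane (x right, y down); z comes from
    # the displacement map, so the flat image gains relief.
    ys, xs = np.mgrid[0:h, 0:w]
    points = np.stack([xs, ys, scale * depth], axis=-1).reshape(-1, 3)

    # Lambertian term per pixel; clamp back-facing normals to zero.
    lambert = np.clip(normals @ light_dir, 0.0, None)
    shaded = rgb * lambert[..., None]
    return points.astype(np.float64), shaded

# Flat depth and normals facing the light leave the colors unchanged.
rgb = np.full((2, 2, 3), 0.5)
depth = np.zeros((2, 2))
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0
points, shaded = displace_and_shade(rgb, depth, normals,
                                    np.array([0.0, 0.0, 1.0]))
```

In a full IBR pipeline this per-image step would be repeated for each captured view, with the angular interval between views determining how many such depth-augmented images represent the object.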
General Audience Abstract
In scientific research, data visualization is important for better understanding data. Modern experiments and simulations are expanding rapidly in scale, and there will come a day when rendering the entire 3D geometry becomes impossible resource-wise. Cinema was proposed as an image-based solution to this problem, in which the model is represented by an interpolated series of images. However, flat images cannot fully express the 3D characteristics of the data. Therefore, in this work, we try to improve the depth portrayal of the images by protruding the pixels and applying shading. We show the results of a user study conducted with 60 participants on the effects of pixel protrusion, shading, and varying the number of images representing the object. The results show that this method would be useful for 3D scientific visualizations, with the resulting object closely resembling the original 3D object.
Collection: Masters Theses