Browsing by Author "Mohammed, Ayat"
Now showing 1 - 4 of 4
- Enhancing Brain Flow Visualization with Automated 3D Data Processing: A Study on DCE-MRI Data from Mice with Tumors
  Mohammed, Ayat; Polys, Nicholas F.; Cunningham, Jessica; Munson, Jennifer M.; Chutkowski, James; Liang, Hun; Park, Daniel; Rockne, Russell; Woodall, Ryan; Esparza, Cora (ACM, 2023-10-09)
  Fully automating the generation of visualizations of complex fluid flow patterns within brain tumors is critical for gaining insight into their movement and behavior. This study focused on optimizing and automating the processing of 3D volumetric and vector field data sets obtained from DCE-MRI (Dynamic Contrast-Enhanced Magnetic Resonance Imaging) scans. It is crucial to maintain performance, preserve data quality and resolution, and provide an accessible platform for biomedical scientists. In this paper, we present an innovative approach to enhancing fluid flow visualization of brain tumors through scalable visualization techniques. New techniques were designed, benchmarked, and validated to produce X3D visualizations in Web3D environments using Python and ParaView (see the pipeline sketch after this list). The proposed approach not only enhances fluid flow visualization in the context of brain tumor research but also provides a reproducible and transparent framework for future studies with both human and mouse scans.
- Evaluating visual channels for multivariate map visualization
  Mohammed, Ayat; Polys, Nicholas F.; Sforza, Peter M. (EuroGraphics, 2018-06-04)
  Visual differencing, or visual discrimination, is the ability to differentiate between two or more objects in a scene based on the values of certain attributes. Focusing on multivariate map visualization, this work examined humans' predictable biases in interpreting visual-spatial information and making inferences. Moreover, this study sought to develop and evaluate new techniques that mitigate the trade-off between proximity and occlusion and enable analysts to explore multivariate maps. We therefore developed a multi-criteria decision-making technique for land suitability using multivariate maps, and we carried out a user study in which users were tasked with choosing the most suitable piece of land for planting grapes. The study was designed to evaluate the mapping of a map's layers (variables) to visual channels (Transparency, Hue, Saturation, and Brightness/Lightness); two color spaces were used: Hue-Saturation-Value (HSV) and Hue-Saturation-Lightness (HSL). Categorical variables were mapped to the Hue channel, and quantitative/ordinal variables were mapped to the Saturation, Brightness/Lightness, or Transparency channel (see the channel-mapping sketch after this list). Our online user study was completed by 85 participants to test their perception of the different map visualizations. Statistical analysis of the survey responses showed that mapping quantitative layers to the Transparency channel outperformed the other channels, and that the HSV color space gave a more efficient mapping than HSL, especially for the extreme values in the dataset.
- Prompt Engineering for X3D Object Creation with LLMs
  Polys, Nicholas; Mohammed, Ayat; Sandbrook, Ben (ACM, 2024-09-25)
  Large Language Models (LLMs) are a new class of knowledge model embodied in a computer and trained on massive amounts of human text, image, and video examples. In response to a user prompt, these LLMs can generate generally coherent responses in several kinds of media and languages. Can LLMs write X3D code? In this paper we explore the ability of several leading LLMs to generate valid and sensible code for interactive X3D scenes. We compare the prompt results from three different LLMs to examine the quality of the generated X3D. We set up an experimental framework that uses a within-subjects, repeated-measures design to create X3D from text prompts. We vary our prompt strategies and give the LLMs increasingly challenging and increasingly detailed scene requests. We assess the quality of the resulting X3D scenes, including geometry, appearances, animations, and interactions (a basic well-formedness check for such generated X3D is sketched after this list). Our results provide a comparison of different prompt strategies and their outcomes. Such results provide early probes into the limited epistemology and fluency of contemporary LLMs in composing multi-part, animatable 3D objects.
- Visualize This: Lessons from the Front-lines of High Performance Visualization
  Mohammed, Ayat; Polys, Nicholas F.; Farrah, Duncan (Department of Computer Science, Virginia Polytechnic Institute & State University, 2020-04-02)
  This paper presents a comprehensive workflow that addresses two major factors in multivariate multidimensional (MVMD) scientific visualization: the scalability of rendering and the scalability of representation (for perception). Our workflow integrates the metrics of scientific computing and visualization across different STEM domains to deliver perceivable visualizations that meet scientists' expectations. Our approach attempts to balance the performance of MVMD visualizations using techniques such as sub-sampling, domain decomposition, and parallel rendering (see the sub-sampling sketch after this list). When mapping data to visual form, we considered the nature of the data (dimensionality, type, and distribution), the computing power (serial or parallel), and the rendering power (rendering mechanism, format, and display spectrum). We used HPC clusters to perform remote parallel processing and visualization of large-scale data sets such as 3D point clouds, galaxy catalogs, and airflow simulations. Our workflow brings these considerations into a structured form to guide the decisions of visualization designers who deal with large heterogeneous data sets.
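Pipeline sketch (first entry). The DCE-MRI paper describes producing X3D scenes from volumetric and vector-field data with Python and ParaView. The following is a minimal, illustrative pvpython sketch under assumed file names and filter choices; it is not the paper's actual workflow, only one way such an export could be scripted.

```python
# Minimal pvpython sketch: load a volume/vector dataset, add flow glyphs,
# and export the scene to X3D for Web3D viewing.
# 'flow_field.vti' and the Arrow glyph are illustrative assumptions.
from paraview.simple import (OpenDataFile, Glyph, Show, Render,
                             GetActiveViewOrCreate, ExportView)

data = OpenDataFile('flow_field.vti')           # hypothetical DCE-MRI-derived dataset
glyphs = Glyph(Input=data, GlyphType='Arrow')   # arrows to depict the vector field

view = GetActiveViewOrCreate('RenderView')
Show(data, view)
Show(glyphs, view)
Render(view)

ExportView('brain_flow_scene.x3d', view=view)   # write the scene as X3D
```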
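Channel-mapping sketch (second entry). The visual-channels study maps categorical layers to Hue and quantitative layers to Transparency, Saturation, or Brightness in HSV/HSL space. The sketch below, using only the Python standard library and assuming layer values normalized to [0, 1], illustrates the general idea of such a mapping; the function and its defaults are illustrative, not the study's code.

```python
import colorsys

def map_layers_to_rgba(category_hue, quantity, channel='transparency'):
    """Map a categorical layer to Hue and a quantitative layer (in [0, 1])
    to one HSV-based visual channel. Illustrative sketch only."""
    h, s, v, a = category_hue, 1.0, 1.0, 1.0
    if channel == 'transparency':
        a = quantity                      # quantitative value drives opacity
    elif channel == 'saturation':
        s = quantity                      # quantitative value drives saturation
    elif channel == 'brightness':
        v = quantity                      # quantitative value drives value/brightness
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return (r, g, b, a)

# Example: a land-use category at hue 0.33 (green) with suitability score 0.7
print(map_layers_to_rgba(0.33, 0.7, channel='transparency'))
```

An analogous HSL mapping could use colorsys.hls_to_rgb; comparing the two spaces was part of the study's design.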
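Well-formedness sketch (third entry). Assessing whether LLM-generated X3D is valid starts with checking that the returned text parses as XML with the expected root and Scene elements. The sketch below assumes the model's response is already in hand as a string (the `llm_response` value is a placeholder, not real model output) and checks well-formedness only, not full X3D schema validity.

```python
import xml.etree.ElementTree as ET

# Placeholder for text returned by an LLM; not an actual model response.
llm_response = """
<X3D profile="Interactive" version="4.0">
  <Scene>
    <Shape>
      <Box size="2 2 2"/>
      <Appearance><Material diffuseColor="0.8 0.1 0.1"/></Appearance>
    </Shape>
  </Scene>
</X3D>
"""

def check_x3d_well_formed(text):
    """Return (ok, detail): parse the XML and confirm an <X3D> root with a <Scene>."""
    try:
        root = ET.fromstring(text.strip())
    except ET.ParseError as err:
        return False, f"XML parse error: {err}"
    if root.tag != 'X3D':
        return False, f"unexpected root element: {root.tag}"
    if root.find('Scene') is None:
        return False, "missing <Scene> element"
    return True, "well-formed X3D with a Scene"

print(check_x3d_well_formed(llm_response))
```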
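Sub-sampling sketch (fourth entry). The HPC workflow paper balances rendering scalability with sub-sampling and domain decomposition of large data sets such as point clouds. The NumPy sketch below shows random sub-sampling followed by a simple slab decomposition along one axis; it illustrates the general technique under assumed array shapes, not the paper's pipeline.

```python
import numpy as np

def subsample_points(points, fraction=0.1, seed=0):
    """Randomly keep a fraction of an (N, 3) point cloud to bound rendering cost."""
    rng = np.random.default_rng(seed)
    keep = rng.random(len(points)) < fraction
    return points[keep]

def decompose_along_x(points, n_blocks=4):
    """Split the point cloud into n_blocks slabs along x, e.g. one per render rank."""
    edges = np.linspace(points[:, 0].min(), points[:, 0].max(), n_blocks + 1)
    return [points[(points[:, 0] >= lo) & (points[:, 0] <= hi)]
            for lo, hi in zip(edges[:-1], edges[1:])]

# Example: one million synthetic points reduced and partitioned for parallel rendering.
cloud = np.random.default_rng(1).random((1_000_000, 3))
blocks = decompose_along_x(subsample_points(cloud, fraction=0.05), n_blocks=4)
print([len(b) for b in blocks])
```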