Authors: Abrams, Gregory; Adhinarayanan, Vignesh; Feng, Wu-chun; Rogers, David; Ahrens, James; Wilson, Luke

Date: 2017-09-29

URI: http://hdl.handle.net/10919/79454

Abstract: As high-performance computing (HPC) moves towards the exascale era, large-scale scientific simulations are generating enormous datasets. A variety of techniques (e.g., in-situ methods, data sampling, and compression) have been proposed to help visualize these large datasets under various constraints such as storage, power, and energy. However, evaluating these techniques and understanding the various trade-offs (e.g., performance, efficiency, quality) remains a challenging task. To enable investigation and optimization across such trade-offs, we propose a toolkit for the early-stage exploration of visualization and rendering approaches, job layout, and visualization pipelines. Our framework covers a broader parameter space than existing visualization applications such as ParaView and VisIt. It also promotes the study of simulation-visualization coupling strategies through a data-centric approach, rather than requiring the code itself. Furthermore, with experimentation on an extensively instrumented supercomputer, we study more metrics of interest than previously possible. Overall, our framework will help to answer important what-if scenarios and trade-off questions in the early stages of pipeline development, helping scientists make informed choices about how best to couple a simulation code with visualization at extreme scale.

Language: en

Rights: In Copyright

Subjects: High Performance Computing; Parallel and Distributed Computing; Computer Systems; Algorithms; Computational Science and Engineering; Modeling and Simulation

Title: ETH: A Framework for the Design-Space Exploration of Extreme-Scale Visualization

Type: Technical report

Report number: TR-17-05