Show simple item record

dc.contributor.author: Panwar, Lokendra Singh
dc.date.accessioned: 2014-10-22T08:00:34Z
dc.date.available: 2014-10-22T08:00:34Z
dc.date.issued: 2014-10-21
dc.identifier.other: vt_gsexam:2246
dc.identifier.uri: http://hdl.handle.net/10919/50585
dc.description.abstract: Today, heterogeneous computing has reshaped the way scientists think about and approach high-performance computing (HPC). Hardware accelerators such as general-purpose graphics processing units (GPUs) and the Intel Many Integrated Core (MIC) architecture continue to make inroads into accelerating large-scale scientific applications. These advancements, however, introduce new challenges for the scientific community, such as selecting the best processor for an application, devising effective performance-optimization strategies, and maintaining performance portability across architectures. In this thesis, we present our techniques and approach to address some of these significant issues. First, we present a fully automated approach to project the relative performance of an OpenCL program across different GPUs. Performance projections can be made within a small amount of time, and the projection overhead stays relatively constant with the input data size. As a result, the technique can help runtime tools make dynamic decisions about which GPU would run a given kernel faster. Use cases of this technique include scheduling or migrating GPU workloads over a heterogeneous cluster with different types of GPUs. We then present our approach to accelerating a seismology modeling application based on the finite difference method (FDM), using MPI and CUDA over a hybrid CPU+GPU cluster. We describe the computational complexities involved in porting such applications to GPUs and present our strategy for efficient performance optimization and characterization. We also show how performance modeling can be used to reason about and drive hardware-specific optimizations on the GPU.
The performance evaluation of our approach delivers a maximum speedup of 23-fold with a single GPU and 33-fold with dual GPUs per node over the serial version of the application, which in turn results in a many-fold speedup when coupled with the MPI distribution of the computation across the cluster. We also study the efficacy of GPU-integrated MPI, with MPI-ACC as an example implementation, in a seismology modeling application and discuss the lessons learned.
dc.format.medium: ETD
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Heterogeneous Computing
dc.subject: Graphics Processing Unit (GPU)
dc.subject: GPU Emulation
dc.subject: Performance Modeling
dc.subject: Finite Difference Method
dc.subject: Seismology Modeling
dc.title: Performance Modeling, Optimization, and Characterization on Heterogeneous Architectures
dc.type: Thesis
dc.contributor.department: Computer Science
dc.description.degree: Master of Science
thesis.degree.name: Master of Science
thesis.degree.level: masters
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.discipline: Computer Science and Applications
dc.contributor.committeechair: Feng, Wu-Chun
dc.contributor.committeemember: Athanas, Peter M.
dc.contributor.committeemember: Cao, Yong
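The abstract's seismology application is built on the finite difference method, which advances a wave field by applying a fixed stencil at every grid point each time step. As a minimal illustration of the kind of computation involved (this is not code from the thesis; the function and variable names are hypothetical), a second-order 1D update for the scalar wave equation u_tt = c² u_xx can be sketched as:

```python
def fdm_step(u_prev, u_curr, c, dt, dx):
    """One explicit finite-difference time step for the 1D scalar wave
    equation; returns the next wave field with fixed (zero) boundaries.

    Illustrative sketch only -- the thesis targets 3D seismic models
    on GPUs, where this inner loop becomes a CUDA kernel.
    """
    r2 = (c * dt / dx) ** 2          # squared Courant number; stable for r2 <= 1
    n = len(u_curr)
    u_next = [0.0] * n               # boundary points stay at zero
    for i in range(1, n - 1):        # update interior points with a 3-point stencil
        u_next[i] = (2.0 * u_curr[i] - u_prev[i]
                     + r2 * (u_curr[i + 1] - 2.0 * u_curr[i] + u_curr[i - 1]))
    return u_next
```

On a GPU, each interior point of this stencil can be updated independently, which is what makes FDM codes attractive candidates for the CUDA porting and optimization work the thesis describes.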

