Exploring Performance Portability for Accelerators via High-level Parallel Patterns
dc.contributor.author | Hou, Kaixi | en |
dc.contributor.committeechair | Feng, Wu-chun | en |
dc.contributor.committeemember | Cao, Yong | en |
dc.contributor.committeemember | Agrawal, Gagan | en |
dc.contributor.committeemember | Ribbens, Calvin J. | en |
dc.contributor.committeemember | Wang, Hao | en |
dc.contributor.department | Computer Science | en |
dc.date.accessioned | 2018-08-28T08:00:40Z | en |
dc.date.available | 2018-08-28T08:00:40Z | en |
dc.date.issued | 2018-08-27 | en |
dc.description.abstract | Parallel accelerators have become prominent and ubiquitous, e.g., multi-core CPUs, many-core GPUs (Graphics Processing Units), and Intel Xeon Phi. The performance gains from them can be as high as many orders of magnitude, attracting extensive interest from many scientific domains. However, these gains come with two main problems: (1) a complete redesign of existing codes may be required when moving to a new parallel platform, a nightmare for developers; and (2) parallel codes that execute efficiently on one platform may be inefficient, or even fail to execute, on another, causing portability issues. To address these problems, this dissertation proposes a general approach based on parallel patterns, an effective abstraction layer that eases the generation of efficient parallel codes for given algorithms and across architectures. From algorithms to parallel patterns, we exploit domain expertise to analyze the computational and communication patterns in the core computations and represent them in a DSL (Domain-Specific Language) or as algorithmic skeletons. This preserves the essential information, such as data dependencies and types, for subsequent parallelization and optimization. From parallel patterns to actual codes, we use a series of automation frameworks and transformations to determine which levels of parallelism can be used, what the optimal instruction sequences are, how the implementation should change to match different architectures, and so on. We demonstrate the effectiveness of our approach on several important computational kernels, including sort (and segmented sort), sequence alignment, and stencils, across various parallel platforms (CPUs, GPUs, and Intel Xeon Phi). | en |
dc.description.abstractgeneral | Parallel accelerators have become prominent and ubiquitous, e.g., multi-core CPUs, many-core GPUs (Graphics Processing Units), and Intel Xeon Phi. The performance gains from them can be as high as many orders of magnitude, attracting extensive interest from many scientific domains. However, these gains come with two main problems: (1) a complete redesign of existing codes may be required when moving to a new parallel platform, a nightmare for developers; and (2) parallel codes that execute efficiently on one platform may be inefficient, or even fail to execute, on another, causing portability issues. To address these problems, this dissertation proposes a general approach based on parallel patterns, an effective abstraction layer that eases the generation of efficient parallel codes for given algorithms and across architectures. From algorithms to parallel patterns, we exploit domain expertise to analyze the computational and communication patterns in the core computations and represent them in a DSL (Domain-Specific Language) or as algorithmic skeletons. This preserves the essential information, such as data dependencies and types, for subsequent parallelization and optimization. From parallel patterns to actual codes, we use a series of automation frameworks and transformations to determine which levels of parallelism can be used, what the optimal instruction sequences are, how the implementation should change to match different architectures, and so on. We demonstrate the effectiveness of our approach on several important computational kernels, including sort (and segmented sort), sequence alignment, and stencils, across various parallel platforms (CPUs, GPUs, and Intel Xeon Phi). | en |
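To illustrate the pattern-layer idea the abstract describes, here is a minimal sketch, not taken from the dissertation: an algorithmic-skeleton-style "map" pattern in C++ in which the algorithm is written once against the pattern, while a backend decides how to parallelize it for a given platform. The names `map_pattern` and the sequential backend are illustrative assumptions, not the dissertation's actual framework.

```cpp
// Minimal sketch (illustrative only) of an algorithmic-skeleton "map"
// pattern: the computation is expressed once, and the data dependency
// it encodes (none across elements) lets any backend parallelize it.
#include <cstddef>
#include <iostream>
#include <vector>

// Apply `f` elementwise. This sequential backend is a placeholder; an
// OpenMP backend could add `#pragma omp parallel for`, a SIMD backend
// could vectorize the loop, and a GPU backend could emit a CUDA kernel,
// all without changing the caller's code.
template <typename T, typename F>
void map_pattern(const std::vector<T>& in, std::vector<T>& out, F f) {
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = f(in[i]);
}

int main() {
    std::vector<int> in = {1, 2, 3, 4}, out(4);
    map_pattern(in, out, [](int x) { return x * x; });
    for (int v : out) std::cout << v << ' ';  // prints: 1 4 9 16
    std::cout << '\n';
    return 0;
}
```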
dc.description.degree | Ph. D. | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:15187 | en |
dc.identifier.uri | http://hdl.handle.net/10919/84923 | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | GPU | en |
dc.subject | AVX | en |
dc.subject | sort | en |
dc.subject | stencil | en |
dc.subject | wavefront | en |
dc.subject | pattern | en |
dc.subject | parallelism | en |
dc.title | Exploring Performance Portability for Accelerators via High-level Parallel Patterns | en |
dc.type | Dissertation | en |
thesis.degree.discipline | Computer Science and Applications | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | doctoral | en |
thesis.degree.name | Ph. D. | en |