Authors: Helal, A.; Sathre, Paul; Feng, Wu-chun
Date accessioned: 2017-04-03
Date available: 2017-04-03
Date issued: 2016-11-15
ISBN: 978-1-4673-8815-3
ISSN: 2167-4337
URI: http://hdl.handle.net/10919/76745
Abstract: To attain scalable performance efficiently, the HPC community expects future exascale systems to consist of multiple nodes, each with different types of hardware accelerators. Beyond GPUs and Intel MICs, candidate accelerators include embedded multiprocessors and FPGAs. End users need appropriate tools to use the available compute resources in such systems efficiently, both within a compute node and across compute nodes. To that end, we present MetaMorph, a library framework designed to (automatically) extract as much computational capability as possible from HPC systems. Its design centers on three core principles: abstraction, interoperability, and adaptivity. To demonstrate its efficacy, we present a case study that uses the structured grids design pattern, which is heavily used in computational fluid dynamics. We show how MetaMorph significantly reduces development time while delivering performance and interoperability across an array of heterogeneous devices, including multicore CPUs, Intel MICs, AMD GPUs, and NVIDIA GPUs.
Pages: 119 - 129 (11 pages)
Format: application/pdf
Language: en
Rights: In Copyright
Keywords: Libraries; Hardware; Interoperability; Kernel; Performance Evaluation; Exascale; Parallel Libraries; Performance Portability; Programmability; Accelerators; GPU; MIC; CUDA; OpenCL; OpenMP; MPI; Structured Grids
Title: MetaMorph: A Library Framework for Interoperable Kernels on Multi- and Many-Core Clusters
Type: Conference proceeding
Conference: ACM/IEEE SC16: The International Conference for High Performance Computing, Networking, Storage and Analysis (also known as Supercomputing)
DOI: https://doi.org/10.1109/SC.2016.10
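Note: The abstract's case study is built on the structured grids design pattern. As a rough illustration of that pattern only (not MetaMorph's API), the following is a minimal sketch of a 5-point Jacobi sweep on a 2D structured grid parallelized with OpenMP; the names jacobi_step, NX, NY and the convergence threshold are assumptions made for this sketch.

/* Illustrative only: a 5-point Jacobi sweep on a 2D structured grid.
 * Names and parameters here are hypothetical and do not reflect MetaMorph. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NX 512
#define NY 512
#define IDX(i, j) ((i) * NY + (j))

/* One Jacobi iteration: each interior point becomes the average of its
 * four neighbors; returns the maximum point-wise change (residual). */
static double jacobi_step(const double *in, double *out)
{
    double max_diff = 0.0;
#pragma omp parallel for reduction(max : max_diff)
    for (int i = 1; i < NX - 1; i++) {
        for (int j = 1; j < NY - 1; j++) {
            out[IDX(i, j)] = 0.25 * (in[IDX(i - 1, j)] + in[IDX(i + 1, j)] +
                                     in[IDX(i, j - 1)] + in[IDX(i, j + 1)]);
            double diff = fabs(out[IDX(i, j)] - in[IDX(i, j)]);
            if (diff > max_diff)
                max_diff = diff;
        }
    }
    return max_diff;
}

int main(void)
{
    double *a = calloc(NX * NY, sizeof *a);
    double *b = calloc(NX * NY, sizeof *b);
    if (!a || !b)
        return 1;

    /* Fixed boundary condition: hot top edge, cold elsewhere. */
    for (int j = 0; j < NY; j++)
        a[IDX(0, j)] = b[IDX(0, j)] = 100.0;

    double residual = 1.0;
    int iter = 0;
    while (residual > 1e-4 && iter < 10000) {
        residual = jacobi_step(a, b);
        double *tmp = a; a = b; b = tmp;   /* swap input/output buffers */
        iter++;
    }
    printf("residual %.2e after %d iterations\n", residual, iter);

    free(a);
    free(b);
    return 0;
}

The point of the pattern is that the same regular-neighborhood computation maps naturally onto CUDA, OpenCL, and OpenMP back ends, which is the kind of back-end portability the abstract attributes to MetaMorph.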