Title: An Application-Oriented Approach for Accelerating Data-Parallel Computation with Graphics Processing Unit

Authors: Ponce, Sean; Jing, Huang; Park, Seung In; Khoury, Chase; Quek, Francis; Cao, Yong

Date issued: 2009-03-01
Date deposited: 2013-06-19

Type: Technical report
Report number: TR-09-05
Handle: http://hdl.handle.net/10919/20162
Full text: http://eprints.cs.vt.edu/archive/00001064/01/paper.pdf

Format: application/pdf
Language: en
Rights: In Copyright

Subjects: Parallel computation; Algorithms; Data structures

Abstract: This paper presents a novel parallelization and quantitative characterization of various optimization strategies for data-parallel computation on a graphics processing unit (GPU) using NVIDIA's new GPU programming framework, Compute Unified Device Architecture (CUDA). CUDA is an easy-to-use development framework that has drawn the attention of many different application areas looking for dramatic speed-ups in their code. However, the performance tradeoffs in CUDA are not yet fully understood, especially for data-parallel applications. Consequently, we study two fundamental mathematical operations that are common in many data-parallel applications: convolution and accumulation. Specifically, we profile and optimize the performance of these operations on a 128-core NVIDIA GPU. We then characterize the impact of these operations on a video-based motion-tracking algorithm called vector coherence mapping, which consists of a series of convolutions and dynamically weighted accumulations, and present a comparison of different implementations and their respective performance profiles.