Applying Source Level Auto-Vectorization to Aparapi Java

dc.contributor.author: Albert, Frank Curtis
dc.contributor.committeechair: Ravindran, Binoy
dc.contributor.committeemember: Broadwater, Robert P.
dc.contributor.committeemember: Wang, Chao
dc.contributor.department: Electrical and Computer Engineering
dc.date.accessioned: 2014-06-20T08:00:15Z
dc.date.available: 2014-06-20T08:00:15Z
dc.date.issued: 2014-06-19
dc.description.abstract: Ever since chip manufacturers hit the power wall that prevented them from further increasing processor clock speeds, there has been an increased push toward parallelism for performance improvements. This parallelism comes in the form of both data-parallel single instruction, multiple data (SIMD) instructions and parallel compute cores in both central processing units (CPUs) and graphics processing units (GPUs). While these hardware features offer potential performance gains, programs must be rewritten to take advantage of them before any improvement is seen. Some lower-level languages that compile directly to machine code already exploit data-parallel SIMD instructions, but higher-level interpreted languages often do not. Java, one of the most popular programming languages in the world, still does not include support for these SIMD instructions. In this thesis, we present a vector library that implements all of the major SIMD instructions in functions accessible to Java through JNI function calls, bringing the benefits of general-purpose SIMD functionality to Java. This thesis also works with the data-parallel Aparapi Java extension to bring these SIMD performance improvements to programmers who use the extension, without any additional effort on their part. Aparapi provides an API that allows programmers to declare certain sections of their code parallel. These parallel sections are then run on OpenCL-capable hardware, with a fallback path in the Java thread pool to ensure code reliability. This work takes advantage of the known independence of the parallel sections to automatically modify the Java thread pool fallback path to use the vectorization library, by means of an auto-vectorization tool created for this work. When the code is not vectorizable, the auto-vectorizer tool still offers performance improvements over the default fallback path through an improved loop implementation that executes the same code with less overhead. Experiments conducted in this work show that, for all 10 benchmarks tested, the auto-vectorization tool produced an implementation that beat the default Aparapi fallback path. In addition, this improved fallback path even outperformed the GPU implementation for several of the benchmarks tested.
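To illustrate the programming model the abstract describes, the sketch below shows a small data-parallel Aparapi kernel. It is a minimal example, not code from the thesis: the package name (com.amd.aparapi, under which Aparapi was distributed around 2014), the class name, and the array sizes are assumptions made for illustration. The body of run() is the parallel section; Aparapi attempts to execute it on OpenCL-capable hardware and otherwise falls back to the Java thread pool path that the thesis' auto-vectorization tool rewrites.

import com.amd.aparapi.Kernel;
import com.amd.aparapi.Range;

public class VectorAddExample {
    public static void main(String[] args) {
        final int size = 1 << 20;            // illustrative problem size
        final float[] a = new float[size];
        final float[] b = new float[size];
        final float[] sum = new float[size];

        // The run() body is the data-parallel section: each work-item
        // (identified by getGlobalId()) handles one array index independently.
        Kernel kernel = new Kernel() {
            @Override
            public void run() {
                int i = getGlobalId();
                sum[i] = a[i] + b[i];
            }
        };

        // Aparapi tries the OpenCL path first; if it is unavailable, execution
        // falls back to the Java thread pool, the path this thesis targets
        // with its SIMD vector library and auto-vectorizer.
        kernel.execute(Range.create(size));
        kernel.dispose();
    }
}

The JNI-backed vector library mentioned in the abstract could be exposed to Java roughly as shown below. The class, method, and native library names here are purely hypothetical and are not taken from the thesis; the sketch only shows the general shape of a JNI binding whose native side would issue SIMD instructions.

public final class VectorOps {
    static {
        System.loadLibrary("vectorops");   // hypothetical native library name
    }

    // Hypothetical native method: adds a and b element-wise on the native
    // side using SIMD instructions and writes the result into out.
    public static native void addFloat(float[] a, float[] b, float[] out, int length);
}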
dc.description.degree: Master of Science
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:3019
dc.identifier.uri: http://hdl.handle.net/10919/49022
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Auto-Vectorization
dc.subject: Aparapi
dc.subject: Java
dc.subject: GPGPU Computing
dc.subject: SIMD
dc.subject: Parallelism
dc.subject: Threaded
dc.title: Applying Source Level Auto-Vectorization to Aparapi Java
dc.type: Thesis
thesis.degree.discipline: Computer Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science

Files

Original bundle
Name: Albert_FC_T_2014.pdf
Size: 1.54 MB
Format: Adobe Portable Document Format

Name: Albert_FC_T_2014_support_1.pdf
Size: 1.29 MB
Format: Adobe Portable Document Format
Description: Supporting documents
