Machine Learning Techniques for Gesture Recognition

dc.contributor.author: Caceres, Carlos Antonio
dc.contributor.committeechair: Wicks, Alfred L.
dc.contributor.committeemember: Bird, John P.
dc.contributor.committeemember: Woolsey, Craig A.
dc.contributor.department: Mechanical Engineering
dc.date.accessioned: 2015-05-23T08:02:40Z
dc.date.available: 2015-05-23T08:02:40Z
dc.date.issued: 2014-10-13
dc.description.abstract: Classification of human movement is a field of great interest to human-machine interface researchers, owing to the large emphasis humans place on gestures while communicating with each other and while interacting with machines. Such gestures can be digitized in a number of ways, including passive methods, such as cameras, and active methods, such as wearable sensors. While passive methods might be ideal, they are not always feasible, especially in unstructured environments. Wearable sensors have therefore gained interest as a method of gesture classification, especially for the upper limbs. Lower-arm movements are produced by a combination of multiple electrical signals known as Motor Unit Action Potentials (MUAPs). These signals can be recorded by surface electrodes placed on the skin and used for prosthetic control, sign language recognition, human-machine interfaces, and a myriad of other applications. To move a step closer to these goal applications, this thesis compares three machine learning tools, Hidden Markov Models (HMMs), Support Vector Machines (SVMs), and Dynamic Time Warping (DTW), for recognizing a number of gesture classes. It further contrasts the applicability of these tools to noisy data in the form of the Ninapro dataset, a benchmark put forth by a consortium of universities. Using this dataset as a basis, this work lays out the analysis required to optimize each of the three classifiers. Ultimately, the three classifiers are compared for their robustness to noisy data, and the results are compared against classification results reported by other researchers in the field. The outcome of this work is recognition of over 90% of individual gestures from the Ninapro dataset with two of the three classifiers. Comparison against previous works shows these results to outperform all others reported thus far. With further development, these tools could allow an end user to control a robotic or prosthetic arm, translate sign language, or simply interact with a computer.
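Of the three classifiers named in the abstract, Dynamic Time Warping is the simplest to illustrate: a query signal is matched to the stored gesture template whose warped distance is smallest. The sketch below is illustrative only; the signal shapes, gesture labels, and single-channel setup are assumptions, not the thesis's actual Ninapro features or classifier configurations.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D signals
    via the standard dynamic-programming recurrence."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(query, templates):
    """1-nearest-neighbor gesture classification: return the label of
    the template with the smallest DTW distance to the query."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Toy synthetic "EMG-envelope-like" signals (hypothetical, not Ninapro data).
t = np.linspace(0, 1, 50)
templates = {
    "flexion": np.sin(np.pi * t),           # single burst
    "extension": np.sin(2 * np.pi * t)**2,  # double burst
}
query = np.sin(np.pi * np.linspace(0, 1, 60))  # time-stretched flexion
print(classify(query, templates))  # -> flexion
```

Because DTW aligns signals of different lengths, the stretched 60-sample query still matches the 50-sample flexion template, which is the property that makes DTW attractive for gestures performed at varying speeds.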
dc.description.degree: Master of Science
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:3858
dc.identifier.uri: http://hdl.handle.net/10919/52556
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Gesture Recognition
dc.subject: Smart Prosthetic
dc.subject: Hidden Markov Model
dc.subject: Support Vector Machines
dc.subject: Dynamic Time Warping
dc.title: Machine Learning Techniques for Gesture Recognition
dc.type: Thesis
thesis.degree.discipline: Mechanical Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science

Files

Original bundle
Caceres_CA_T_2014.pdf (2.78 MB, Adobe Portable Document Format)
Caceres_CA_T_2014_support_3.pdf (854.2 KB, Adobe Portable Document Format; Supporting documents)
