Automated Vision-Based Tracking and Action Recognition of Earthmoving Construction Operations

dc.contributor.author: Heydarian, Arsalan
dc.contributor.committeechair: Golparvar-Fard, Mani
dc.contributor.committeemember: de la Garza, Jesus M.
dc.contributor.committeemember: Marr, Linsey C.
dc.contributor.committeemember: Niebles, Juan Carlos
dc.contributor.department: Civil Engineering
dc.date.accessioned: 2017-04-04T19:49:03Z
dc.date.adate: 2012-06-06
dc.date.available: 2017-04-04T19:49:03Z
dc.date.issued: 2012-04-30
dc.date.rdate: 2016-09-30
dc.date.sdate: 2012-05-15
dc.description.abstract: The current practice of construction productivity and emission monitoring relies either on manual stopwatch studies, which are labor-intensive and subject to human error, or on RFID and GPS tracking devices, which can be costly and impractical. To address these limitations, a novel computer-vision-based method for automated 2D tracking, 3D localization, and action recognition of construction equipment from different camera viewpoints is presented. In the proposed method, a new algorithm based on Histograms of Oriented Gradients and hue-saturation Colors (HOG+C) is used for 2D tracking of the earthmoving equipment. Once the equipment is detected, its position is localized in 3D using a Direct Linear Transformation followed by a non-linear optimization. To automatically analyze the performance of these operations, a new algorithm for recognizing equipment actions is developed. First, a video is represented as a collection of spatio-temporal features by extracting space-time interest points and describing each with a Histogram of Oriented Gradients (HOG). The algorithm automatically learns the distributions of these features by clustering their HOG descriptors. Equipment action categories are then learned using a multi-class binary Support Vector Machine (SVM) classifier. Given a novel video sequence, the proposed method recognizes and localizes equipment actions. The proposed method has been exhaustively tested on 859 videos of earthmoving operations. Experimental results, with average accuracies of 86.33% and 98.33% for excavator and truck action recognition respectively, reflect the promise of the proposed method for automated performance monitoring.
dc.description.degree: Master of Science
dc.identifier.other: etd-05152012-125057
dc.identifier.sourceurl: http://scholar.lib.vt.edu/theses/available/etd-05152012-125057/
dc.identifier.uri: http://hdl.handle.net/10919/76761
dc.language.iso: en_US
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Support Vector Machine
dc.subject: Histogram of Gradients
dc.subject: Action Recognition
dc.subject: 2D Tracking
dc.subject: Construction Performance Monitoring
dc.title: Automated Vision-Based Tracking and Action Recognition of Earthmoving Construction Operations
dc.type: Thesis
dc.type.dcmitype: Text
thesis.degree.discipline: Civil Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science
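
The abstract describes a HOG+C detector for 2D tracking of earthmoving equipment. As a rough illustration of that idea, and not the thesis' actual implementation, the sketch below concatenates OpenCV's standard HOG descriptor with a hue-saturation color histogram for a candidate window; the window size and histogram bin counts are assumptions chosen for the example.

    # Sketch of an HOG+C window descriptor: a standard HOG descriptor concatenated
    # with a hue-saturation color histogram. Window size and bin counts are
    # illustrative assumptions, not the thesis' parameters.
    import cv2
    import numpy as np

    WIN_SIZE = (64, 128)          # assumed detection window (OpenCV HOG default)

    hog = cv2.HOGDescriptor()     # default 9-bin HOG over 8x8 cells, 16x16 blocks

    def hog_c_descriptor(window_bgr):
        """Concatenate HOG (shape cues) with a hue-saturation histogram (color cues)."""
        window = cv2.resize(window_bgr, WIN_SIZE)

        # Gradient part: plain HOG on the grayscale window.
        gray = cv2.cvtColor(window, cv2.COLOR_BGR2GRAY)
        hog_part = hog.compute(gray).ravel()

        # Color part: 2-D hue-saturation histogram, L1-normalized.
        hsv = cv2.cvtColor(window, cv2.COLOR_BGR2HSV)
        hs_hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256]).ravel()
        hs_hist /= (hs_hist.sum() + 1e-6)

        return np.concatenate([hog_part, hs_hist]).astype(np.float32)

    # A detector would slide windows over each frame, score every HOG+C descriptor
    # with a trained linear SVM, and keep high-scoring windows as equipment detections.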
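For the 3D localization step, the abstract mentions a Direct Linear Transformation followed by a non-linear optimization. The following is a minimal DLT triangulation sketch under assumed, placeholder projection matrices; the non-linear refinement of reprojection error is only noted in a comment, not implemented.

    # Minimal sketch of 3-D localization by Direct Linear Transformation (DLT):
    # given the 2-D detection centroid in two calibrated views, triangulate the
    # equipment position. A non-linear refinement of reprojection error would
    # normally follow (e.g. with scipy.optimize.least_squares). The camera
    # matrices below are placeholders, not the thesis' calibration.
    import numpy as np

    def triangulate_dlt(P1, P2, x1, x2):
        """P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points of the
        same object observed in each view."""
        A = np.array([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # The homogeneous solution is the right singular vector of A with the
        # smallest singular value.
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]       # de-homogenize to (X, Y, Z)

    # Example with two hypothetical projection matrices (second camera shifted
    # one unit along the X axis):
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    print(triangulate_dlt(P1, P2, (0.1, 0.2), (-0.1, 0.2)))   # -> approx. (0.5, 1.0, 5.0)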
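The action-recognition stage described in the abstract clusters HOG descriptors of space-time interest points and then trains a multi-class binary SVM over the resulting distributions. A hedged bag-of-features sketch of that idea, with an assumed codebook size and scikit-learn standing in for whatever toolchain the thesis used, could look like this:

    # Sketch of the action-recognition stage: space-time interest-point descriptors
    # from each training video are clustered into a codebook, every video becomes a
    # histogram of codeword counts, and a one-vs-all (multi-class) SVM separates the
    # action categories. The descriptor extraction and data loading are assumed.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    N_WORDS = 200                              # assumed codebook size

    def train_action_classifier(video_descriptors, labels):
        """video_descriptors: list of (n_i x d) arrays of HOG descriptors, one per video.
        labels: action category per video (e.g. 'dig', 'dump', 'swing')."""
        # 1. Learn the codebook by clustering all training descriptors.
        codebook = KMeans(n_clusters=N_WORDS, n_init=10, random_state=0)
        codebook.fit(np.vstack(video_descriptors))

        # 2. Represent each video as a normalized histogram of codeword assignments.
        def to_histogram(desc):
            words = codebook.predict(desc)
            hist = np.bincount(words, minlength=N_WORDS).astype(float)
            return hist / (hist.sum() + 1e-6)

        X = np.array([to_histogram(d) for d in video_descriptors])

        # 3. One-vs-all linear SVM over the histograms (LinearSVC is one-vs-rest).
        clf = LinearSVC(C=1.0)
        clf.fit(X, labels)
        return codebook, clf, to_histogram

    # At test time a new video is converted to a histogram with to_histogram and
    # classified with clf.predict.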

Files

Original bundle
Name: etd-05152012-125057_Heydarian_A_T_2012.pdf
Size: 3.31 MB
Format: Adobe Portable Document Format
