Automated Vision-Based Tracking and Action Recognition of Earthmoving Construction Operations
dc.contributor.author | Heydarian, Arsalan | en |
dc.contributor.committeechair | Golparvar-Fard, Mani | en |
dc.contributor.committeemember | de la Garza, Jesus M. | en |
dc.contributor.committeemember | Marr, Linsey C. | en |
dc.contributor.committeemember | Niebles, Juan Carlos | en |
dc.contributor.department | Civil Engineering | en |
dc.date.accessioned | 2017-04-04T19:49:03Z | en |
dc.date.adate | 2012-06-06 | en |
dc.date.available | 2017-04-04T19:49:03Z | en |
dc.date.issued | 2012-04-30 | en |
dc.date.rdate | 2016-09-30 | en |
dc.date.sdate | 2012-05-15 | en |
dc.description.abstract | The current practice of construction productivity and emission monitoring relies either on manual stopwatch studies, which are significantly labor intensive and subject to human error, or on RFID and GPS tracking devices, which can be costly and impractical. To address these limitations, a novel computer vision-based method for automated 2D tracking, 3D localization, and action recognition of construction equipment from different camera viewpoints is presented. In the proposed method, a new algorithm based on Histograms of Oriented Gradients and hue-saturation Colors (HOG+C) is used for 2D tracking of the earthmoving equipment. Once a piece of equipment is detected, its position is localized in 3D using a Direct Linear Transformation followed by a non-linear optimization. To automatically analyze the performance of these operations, a new algorithm for recognizing equipment actions is developed. First, a video is represented as a collection of spatio-temporal features by extracting space-time interest points and describing each with a Histogram of Oriented Gradients (HOG). The algorithm automatically learns the distributions of these features by clustering their HOG descriptors. Equipment action categories are then learned using a multi-class binary Support Vector Machine (SVM) classifier. Given a novel video sequence, the proposed method recognizes and localizes equipment actions. The proposed method has been exhaustively tested on 859 videos of earthmoving operations. Experimental results, with average accuracies of 86.33% and 98.33% for excavator and truck action recognition respectively, reflect the promise of the proposed method for automated performance monitoring. | en |
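The abstract's 3D localization step triangulates a detected piece of equipment from two camera viewpoints with a Direct Linear Transformation (DLT) before non-linear refinement. As an illustration only, not the thesis's implementation, the following is a minimal DLT triangulation sketch in Python; the camera matrices `P1`/`P2` and the intrinsics `K` are hypothetical values chosen for the demo:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point from two views by the Direct Linear
    Transformation: stack the linear constraints from each view's
    projection equation and take the null vector of the system via SVD."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # homogeneous solution (smallest singular value)
    return X[:3] / X[3]     # dehomogenize to Euclidean coordinates

# Hypothetical calibrated setup: two cameras sharing intrinsics K,
# the second camera translated one unit along the x-axis.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 5.0, 1.0])    # ground-truth point (homogeneous)
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]  # its projection in view 1
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]  # its projection in view 2
X_est = triangulate_dlt(P1, P2, x1, x2)
```

With noise-free projections the DLT solution already matches the ground truth; in practice the recovered point would seed the non-linear (reprojection-error) optimization the abstract mentions.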
dc.description.degree | Master of Science | en |
dc.identifier.other | etd-05152012-125057 | en |
dc.identifier.sourceurl | http://scholar.lib.vt.edu/theses/available/etd-05152012-125057/ | en |
dc.identifier.uri | http://hdl.handle.net/10919/76761 | en |
dc.language.iso | en_US | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Support Vector Machine | en |
dc.subject | Histogram of Gradients | en |
dc.subject | Action Recognition | en |
dc.subject | 2D Tracking | en |
dc.subject | Construction Performance Monitoring | en |
dc.title | Automated Vision-Based Tracking and Action Recognition of Earthmoving Construction Operations | en |
dc.type | Thesis | en |
dc.type.dcmitype | Text | en |
thesis.degree.discipline | Civil Engineering | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | Master of Science | en |
Files
Original bundle
- Name: etd-05152012-125057_Heydarian_A_T_2012.pdf
- Size: 3.31 MB
- Format: Adobe Portable Document Format