Deep Transfer Learning for Vulnerable Road Users Detection using Smartphone Sensors Data
dc.contributor.author | Elhenawy, Mohammed | en |
dc.contributor.author | Ashqar, Huthaifa I. | en |
dc.contributor.author | Masoud, Mahmoud | en |
dc.contributor.author | Almannaa, Mohammed H. | en |
dc.contributor.author | Rakotonirainy, Andry | en |
dc.contributor.author | Rakha, Hesham A. | en |
dc.contributor.department | Civil and Environmental Engineering | en |
dc.date.accessioned | 2020-10-27T13:18:15Z | en |
dc.date.available | 2020-10-27T13:18:15Z | en |
dc.date.issued | 2020-10-25 | en |
dc.date.updated | 2020-10-26T14:25:37Z | en |
dc.description.abstract | As the Autonomous Vehicle (AV) industry rapidly advances, the classification of non-motorized (vulnerable) road users (VRUs) becomes essential to ensure their safety and the smooth operation of road applications. Typical approaches to non-motorized road user classification usually require significant training time and ignore the temporal evolution and behavior of the signal. In this research effort, we attempt to detect VRUs with high accuracy by proposing a novel framework that uses Deep Transfer Learning, which saves training time and cost, to classify images constructed from Recurrence Quantification Analysis (RQA) that reflect the temporal dynamics and behavior of the signal. Recurrence Plots (RPs) were constructed from low-power smartphone sensors without using GPS data. The resulting RPs were used as inputs to different pre-trained Convolutional Neural Network (CNN) classifiers: 227 × 227 images for AlexNet and SqueezeNet, and 224 × 224 images for VGG16 and VGG19. Results show that the classification accuracy of Convolutional Neural Network Transfer Learning (CNN-TL) reaches 98.70%, 98.62%, 98.71%, and 98.71% for AlexNet, SqueezeNet, VGG16, and VGG19, respectively. Moreover, we trained ResNet101 and ShuffleNet for a very short time using one epoch of data and then used them as weak learners, which yielded 98.49% classification accuracy. To the best of our knowledge, the results of the proposed framework outperform other results in the literature and show that using CNN-TL is promising for VRU classification. Because of its relative straightforwardness, its ability to be generalized and transferred, and its potential high accuracy, we anticipate that this framework might be able to solve various problems related to signal classification. | en |
dc.description.version | Published version | en |
dc.format.mimetype | application/pdf | en |
dc.identifier.citation | Elhenawy, M.; Ashqar, H.I.; Masoud, M.; Almannaa, M.H.; Rakotonirainy, A.; Rakha, H.A. Deep Transfer Learning for Vulnerable Road Users Detection using Smartphone Sensors Data. Remote Sens. 2020, 12, 3508. | en |
dc.identifier.doi | https://doi.org/10.3390/rs12213508 | en |
dc.identifier.uri | http://hdl.handle.net/10919/100718 | en |
dc.language.iso | en | en |
dc.publisher | MDPI | en |
dc.rights | Creative Commons Attribution 4.0 International | en |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | en |
dc.subject | transportation mode classification | en |
dc.subject | vulnerable road users | en |
dc.subject | recurrence plots | en |
dc.subject | computer vision | en |
dc.subject | image classification system | en |
dc.title | Deep Transfer Learning for Vulnerable Road Users Detection using Smartphone Sensors Data | en |
dc.title.serial | Remote Sensing | en |
dc.type | Article - Refereed | en |
dc.type.dcmitype | Text | en |
dc.type.dcmitype | StillImage | en |
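
The abstract describes constructing Recurrence Plots (RPs) from time-delay-embedded smartphone sensor signals and resizing them to the input sizes the pre-trained CNNs expect. The paper's own code is not included in this record, so the following is a minimal Python sketch of that step under stated assumptions: the embedding dimension, delay, threshold, window length, and sampling rate are illustrative choices, not the authors' settings.

```python
import numpy as np
from PIL import Image

def recurrence_plot(signal, dim=3, tau=1, eps=None):
    """Recurrence plot of a 1-D signal via time-delay embedding.

    Embedded states: x_i = (s_i, s_{i+tau}, ..., s_{i+(dim-1)*tau}).
    If eps is given, R[i, j] = 1 when ||x_i - x_j|| <= eps (thresholded RP);
    otherwise the raw distance matrix is returned (unthresholded RP).
    """
    n = len(signal) - (dim - 1) * tau
    # Embedded trajectory, shape (n, dim)
    emb = np.stack([signal[i * tau : i * tau + n] for i in range(dim)], axis=1)
    # Pairwise Euclidean distances between embedded states
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        return dists
    return (dists <= eps).astype(np.uint8)

# Hypothetical 3-second accelerometer window sampled at ~100 Hz
sig = np.random.randn(300)
rp = recurrence_plot(sig, dim=3, tau=2, eps=0.5)

# Scale to 0-255 and resize to the input size of the target network
# (224 x 224 for VGG16/VGG19; 227 x 227 would be used for AlexNet/SqueezeNet)
img = Image.fromarray((rp * 255).astype(np.uint8)).resize((224, 224))
img.save("rp_example.png")
```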
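The transfer-learning step itself can be sketched as follows, assuming a PyTorch/torchvision setup. The paper reports results for AlexNet, SqueezeNet, VGG16, and VGG19; VGG16 is shown here as one example. The number of classes, learning rate, and choice of frozen layers are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load VGG16 with ImageNet weights and swap the final layer for a
# hypothetical 5-class transportation-mode problem.
NUM_CLASSES = 5
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor; only the new head trains,
# which is what makes transfer learning cheap relative to full training.
for p in model.features.parameters():
    p.requires_grad = False
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a placeholder batch of RP images.
# Grayscale RPs would be replicated to 3 channels to match the network input.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

The same pattern applies to the other backbones; only the constructor (e.g. models.alexnet, models.squeezenet1_0) and the final-layer replacement differ per architecture.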