Vision and Radar Fusion for Identification of Vehicles in Traffic
This report presents a method for estimating the presence and duration of preceding and lead vehicles in front of a motorcycle using an object detection algorithm guided by radar data. The video and radar data were collected as part of a large transportation project; they were recorded by the ego vehicle during trips in a naturalistic research study. The goal is to validate objects detected by radar using vision, in order to identify moving preceding vehicles and the lead vehicle. The proposed approach takes advantage of radar data to locate vehicles and other targets, and then validates the targets as vehicles using the Dual-Tree Branch-and-Bound object detection algorithm (Kokkinos, 2011). Localization, detection, and tracking took 0.0385 seconds per frame on average. Precision and recall of lead vehicle detection are 98.61% and 90.53%, respectively.

The algorithm presents a comprehensive approach to localizing target vehicles in video. The radar object coordinates are mapped onto the video frame using perspective projection mapping. Persistent radar objects are then determined by analyzing their trajectories across video frames: when a radar object appears for three consecutive frames, it is called a persistent object. A region of interest (ROI) around the persistent radar object is cropped from the frame and passed to the object detection algorithm to determine whether the persistent object is a car. Once a car is detected, validation of the radar object is complete. The detected car is tracked in the following frames, and the detection is refreshed every fourteen frames. The car detection algorithm runs whenever a new persistent radar object is introduced. After the radar objects are validated, the lead vehicle at each timestamp is determined from each radar object's forward and lateral distances.
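The three core steps described above (projecting a radar return onto the image, promoting an object seen in three consecutive frames to persistent status, and choosing the lead vehicle from forward and lateral distances) could be sketched roughly as follows. This is not the report's implementation: the function names, the calibration matrices `K` and `R_t`, and the 1.5 m lateral gate are illustrative assumptions; only the three-frame persistence rule comes from the text.

```python
import numpy as np

def project_radar_to_frame(forward_m, lateral_m, K, R_t):
    """Map a radar object's ground-plane position to pixel coordinates
    via perspective projection. K (intrinsics) and R_t (extrinsics) are
    assumed to come from a prior camera-radar calibration."""
    point = np.array([lateral_m, 0.0, forward_m, 1.0])  # homogeneous 3-D point
    pixel = K @ (R_t @ point)                           # world -> camera -> image
    return pixel[:2] / pixel[2]                         # perspective divide

def is_persistent(frame_indices, min_run=3):
    """True once the object has appeared in min_run consecutive frames
    (the report uses three)."""
    if not frame_indices:
        return False
    run = 1
    for prev, cur in zip(frame_indices, frame_indices[1:]):
        run = run + 1 if cur == prev + 1 else 1
        if run >= min_run:
            return True
    return run >= min_run

def pick_lead_vehicle(validated):
    """Among vision-validated radar objects, take the nearest one ahead
    within the ego lane; the 1.5 m lateral gate is an assumed value."""
    in_lane = [v for v in validated if abs(v["lateral"]) < 1.5]
    return min(in_lane, key=lambda v: v["forward"], default=None)
```

In this sketch the detector would only be invoked on the ROI around objects that pass `is_persistent`, which is what keeps the per-frame cost low.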
The interval from when a lead vehicle is first detected until it disappears or another vehicle becomes the lead is recorded, yielding the epochs of following driving mode for that lead vehicle. Finally, the detection result is integrated with a MATLAB lane detection system to form a complete system for lead vehicle detection and tracking. The video of interest has 240x720 resolution and runs at approximately 15 frames per second. The car detection algorithm takes 0.1960 seconds on average to detect one car on a Windows machine with 4 GB of RAM; because the detector is not run on every frame, the overall cost per frame stays low. Since no annotated motorcycle video dataset is publicly available, two videos of 52 and 26 seconds were manually annotated to evaluate the approach. The current approach runs at close to real time. The algorithm has been tested, and results have been reported, on one video.
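The epoch bookkeeping described above (record from the first detection of a lead vehicle until it disappears or is replaced) amounts to a run-length pass over the per-frame lead assignments. A minimal sketch, assuming per-frame lead-vehicle IDs with `None` meaning no lead, neither of which is specified in the report:

```python
def following_epochs(lead_ids, fps=15.0):
    """Collapse a per-frame sequence of lead-vehicle IDs (None = no lead)
    into (vehicle_id, start_s, duration_s) following-mode epochs.
    fps defaults to the roughly 15 frames per second of the study video."""
    epochs, current, start = [], None, 0
    for i, vid in enumerate(lead_ids):
        if vid != current:
            if current is not None:          # close the running epoch
                epochs.append((current, start / fps, (i - start) / fps))
            current, start = vid, i          # open a new epoch (or a gap)
    if current is not None:                  # close the final epoch
        epochs.append((current, start / fps, (len(lead_ids) - start) / fps))
    return epochs
```

Frames with no lead vehicle simply open a gap, so an epoch ends both when the lead disappears and when a different vehicle takes over as lead.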
- Masters Theses