Enhancing Perception Systems using V2V Sensor Fusion

dc.contributor.author: Gwash, Ansh Sundeep
dc.contributor.committeechair: Talty, Timothy Joseph
dc.contributor.committeechair: Gracanin, Denis
dc.contributor.committeemember: Meng, Na
dc.contributor.department: Computer Science & Applications
dc.date.accessioned: 2025-07-18T08:00:16Z
dc.date.available: 2025-07-18T08:00:16Z
dc.date.issued: 2025-07-17
dc.description.abstract: With the surge in popularity of autonomous vehicles, which depend on complex perception systems to make safety-critical judgments, it is necessary to test and improve those systems. One way this can be done is through vehicle-to-vehicle communication. This concept has been around for decades but was first standardized in 2010. Since then, there have been many hurdles in the path to applying this technology: security, reliability, latency, and cost are the main reasons for the slow growth in this space. Another major problem is the lack of compelling applications that make overcoming these limitations worthwhile for industry. Autonomous vehicles rely on a number of sensor types, the most common being cameras, radar, and LiDAR. The detections from these three sensors are fused into a track list that can be used to plan and control the vehicle's movements. This thesis proposes a system that introduces data from vehicle-to-vehicle messages into this fused track list. This extra information can be beneficial when the onboard sensors are occluded or have low visibility. City and highway driving scenarios and Software-in-the-Loop testing are used to evaluate the proposed fused track list.
dc.description.abstractgeneral: As cars become more automated, they rely more on sensors to perceive the environment around them. Vehicles with automated features today use a combination of cameras, radar, and LiDAR to do this. Each of these sensor types has its own strengths and weaknesses. However, just like a human driver, these sensors can't see everything. Bad weather, parked cars, and large vehicles can block their view, which makes it hard to detect objects in time. This thesis explores a solution to that problem using Vehicle-to-Vehicle (V2V) communication. V2V allows cars to wirelessly share location and speed information with each other. This project tested how adding V2V information to a car's regular sensor system could improve its awareness and safety. Using simulations of realistic city and highway driving scenarios, the study compared three approaches: using only the car's own sensors, using only shared data from other cars, and combining both. The results showed that combining the two made the car better at spotting hidden or distant hazards than sensors alone. Interestingly, in many cases V2V alone performed even better, highlighting its potential to improve safety on the road. By showing how connected cars can help each other see better, this work supports the idea of cooperative driving and safer autonomous vehicles in the future.
dc.description.degree: Master of Science
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:44374
dc.identifier.uri: https://hdl.handle.net/10919/136861
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: vehicle-to-vehicle
dc.subject: automated driving
dc.title: Enhancing Perception Systems using V2V Sensor Fusion
dc.type: Thesis
thesis.degree.discipline: Computer Science & Applications
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science
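
The abstract's core idea, merging object reports carried by V2V messages into the vehicle's onboard fused track list, can be sketched roughly as follows. This is a minimal illustration only; the class and function names (`Track`, `fuse_v2v`, the `gate` parameter) and the simple averaging update are hypothetical assumptions, not the thesis's actual implementation, which a real system would replace with proper filtering (e.g. a Kalman filter).

```python
import math
from dataclasses import dataclass, field

@dataclass
class Track:
    x: float          # position east of ego vehicle (m) -- illustrative frame
    y: float          # position north of ego vehicle (m)
    speed: float      # m/s
    sources: set = field(default_factory=set)  # e.g. {"camera", "radar", "v2v"}

def fuse_v2v(tracks, v2v_reports, gate=3.0):
    """Associate each V2V-reported object with the nearest existing track
    (within `gate` metres); otherwise add it as a new track. New tracks are
    how a remote vehicle's report can surface objects the onboard sensors
    cannot see (occlusion, low visibility)."""
    for rep in v2v_reports:
        nearest = min(tracks,
                      key=lambda t: math.hypot(t.x - rep.x, t.y - rep.y),
                      default=None)
        if nearest and math.hypot(nearest.x - rep.x, nearest.y - rep.y) <= gate:
            # Naive average update for illustration; not a real fusion filter.
            nearest.x = (nearest.x + rep.x) / 2
            nearest.y = (nearest.y + rep.y) / 2
            nearest.speed = (nearest.speed + rep.speed) / 2
            nearest.sources.add("v2v")
        else:
            tracks.append(Track(rep.x, rep.y, rep.speed, {"v2v"}))
    return tracks
```

A nearby V2V report refines an existing track and tags it as V2V-confirmed, while a distant report (an occluded hazard) becomes a new track the onboard sensors never produced.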

Files

Original bundle
Name: Gwash_AS_T_2025.pdf
Size: 2.25 MB
Format: Adobe Portable Document Format