Harnessing the Power of Self-Training for Gaze Point Estimation in Dual Camera Transportation Datasets

dc.contributor.author: Bhagat, Hirva Alpesh
dc.contributor.committeechair: Karpatne, Anuj
dc.contributor.committeechair: Abbott, Amos L.
dc.contributor.committeemember: Sarkar, Abhijit
dc.contributor.committeemember: Fox, Edward A.
dc.contributor.department: Computer Science and Applications
dc.date.accessioned: 2023-06-15T08:00:32Z
dc.date.available: 2023-06-15T08:00:32Z
dc.date.issued: 2023-06-14
dc.description.abstract: This thesis proposes a novel approach for efficiently estimating gaze points in dual camera transportation datasets. Traditional methods for gaze point estimation depend on large amounts of labeled data, which are expensive and time-consuming to collect. Additionally, alignment and calibration of the two camera views present significant challenges. To overcome these limitations, this thesis investigates self-learning techniques such as semi-supervised learning and self-training, which reduce the need for labeled data while maintaining high accuracy. The proposed method is evaluated on the DGAZE dataset and achieves a 57.2% improvement in performance over previous methods. This approach can serve as a valuable tool for studying visual attention in transportation research, enabling more cost-effective and efficient work in this field.
dc.description.abstractgeneral: This thesis presents a new method for efficiently estimating the gaze point of drivers while driving, which is crucial for understanding driver behavior and improving transportation safety. Traditional methods require large amounts of labeled data, which are time-consuming and expensive to obtain. This thesis proposes a self-learning approach that learns from both labeled and unlabeled data, reducing the need for labeled data while maintaining high accuracy. By training the model on labeled data and then using its own predictions on unlabeled data to refine itself, the approach can adapt to new scenarios and improve its accuracy over time. The method is evaluated on the DGAZE dataset and achieves a 57.2% improvement in performance over previous methods. Overall, this approach offers a more efficient and cost-effective solution that can help improve transportation safety by providing a better understanding of driver behavior.
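The general abstract describes the core self-training loop: fit a model on labeled data, pseudo-label the unlabeled pool with the model's own predictions, and retrain on the union. The following is a minimal, generic sketch of that loop for a regression target such as a 2-D gaze point; the linear model, feature dimensions, and round count are illustrative assumptions, not the thesis's actual architecture or training procedure.

```python
import numpy as np

def fit_linear(X, y):
    # Least-squares linear map from features to 2-D gaze coordinates
    # (stand-in for the thesis's neural gaze estimator).
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    W, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return W

def predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ W

def self_train(X_lab, y_lab, X_unlab, rounds=3):
    """Generic self-training loop: pseudo-label, then retrain.

    Real systems typically keep only confident pseudo-labels each
    round; this sketch retrains on all of them for brevity.
    """
    W = fit_linear(X_lab, y_lab)
    for _ in range(rounds):
        pseudo = predict(W, X_unlab)           # model labels its own data
        X_all = np.vstack([X_lab, X_unlab])
        y_all = np.vstack([y_lab, pseudo])
        W = fit_linear(X_all, y_all)           # retrain on labeled + pseudo
    return W

# Toy data: gaze point is an exact linear function of a 4-D feature vector.
rng = np.random.default_rng(0)
true_W = rng.normal(size=(5, 2))
X_lab = rng.normal(size=(20, 4))
y_lab = np.hstack([X_lab, np.ones((20, 1))]) @ true_W
X_unlab = rng.normal(size=(200, 4))

W = self_train(X_lab, y_lab, X_unlab)
max_err = np.abs(predict(W, X_lab) - y_lab).max()
```

In practice the benefit comes from the unlabeled pool covering drivers and scenes the labeled set misses, which is why the thesis reports strong gains on DGAZE with far less annotation effort.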
dc.description.degree: Master of Science
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:37781
dc.identifier.uri: http://hdl.handle.net/10919/115430
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Point of gaze
dc.subject: gaze point estimation
dc.subject: self training
dc.subject: semi-supervised learning
dc.subject: driver safety
dc.title: Harnessing the Power of Self-Training for Gaze Point Estimation in Dual Camera Transportation Datasets
dc.type: Thesis
thesis.degree.discipline: Computer Science and Applications
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science

Files

Original bundle
Name: Bhagat_HA_T_2023.pdf
Size: 7.99 MB
Format: Adobe Portable Document Format

Name: Bhagat_HA_T_2023_support_1.pdf
Size: 89.42 KB
Format: Adobe Portable Document Format
Description: Supporting documents