Harnessing the Power of Self-Training for Gaze Point Estimation in Dual Camera Transportation Datasets
dc.contributor.author | Bhagat, Hirva Alpesh | en |
dc.contributor.committeechair | Karpatne, Anuj | en |
dc.contributor.committeechair | Abbott, Amos L. | en |
dc.contributor.committeemember | Sarkar, Abhijit | en |
dc.contributor.committeemember | Fox, Edward A. | en |
dc.contributor.department | Computer Science and Applications | en |
dc.date.accessioned | 2023-06-15T08:00:32Z | en |
dc.date.available | 2023-06-15T08:00:32Z | en |
dc.date.issued | 2023-06-14 | en |
dc.description.abstract | This thesis proposes a novel approach for efficiently estimating gaze points in dual camera transportation datasets. Traditional methods for gaze point estimation depend on large amounts of labeled data, which can be both expensive and time-consuming to collect. Additionally, alignment and calibration of the two camera views present significant challenges. To overcome these limitations, this thesis investigates self-learning techniques such as semi-supervised learning and self-training, which can reduce the need for labeled data while maintaining high accuracy. The proposed method is evaluated on the DGAZE dataset and achieves a 57.2% improvement in performance compared to previous methods. This approach can prove to be a valuable tool for studying visual attention in transportation research, leading to more cost-effective and efficient research in this field. | en |
dc.description.abstractgeneral | This thesis presents a new method for efficiently estimating the gaze point of drivers while driving, which is crucial for understanding driver behavior and improving transportation safety. Traditional methods require a lot of labeled data, which can be time-consuming and expensive to obtain. This thesis proposes a self-learning approach that can learn from both labeled and unlabeled data, reducing the need for labeled data while maintaining high accuracy. By training the model on labeled data and using its own estimates on unlabeled data to improve its performance, the proposed approach can adapt to new scenarios and improve its accuracy over time. The proposed method is evaluated on the DGAZE dataset and achieves a 57.2% improvement in performance compared to previous methods. Overall, this approach offers a more efficient and cost-effective solution that can help improve transportation safety by providing a better understanding of driver behavior, and can serve as a valuable tool for studying visual attention in transportation research. | en |
dc.description.degree | Master of Science | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:37781 | en |
dc.identifier.uri | http://hdl.handle.net/10919/115430 | en |
dc.language.iso | en | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Point of gaze | en |
dc.subject | gaze point estimation | en |
dc.subject | self-training | en |
dc.subject | semi-supervised learning | en |
dc.subject | driver safety | en |
dc.title | Harnessing the Power of Self-Training for Gaze Point Estimation in Dual Camera Transportation Datasets | en |
dc.type | Thesis | en |
thesis.degree.discipline | Computer Science and Applications | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | Master of Science | en |