Context Dependent Gaze Metrics for Evaluation of Laparoscopic Surgery Manual Skills

dc.contributor.author: Kulkarni, Chaitanya Shashikant
dc.contributor.committeechair: Lau, Nathan
dc.contributor.committeemember: Jin, Ran
dc.contributor.committeemember: Parker, Sarah H.
dc.contributor.department: Industrial and Systems Engineering
dc.date.accessioned: 2022-12-03T07:00:35Z
dc.date.available: 2022-12-03T07:00:35Z
dc.date.issued: 2021-06-10
dc.description.abstract: With the growing adoption of laparoscopic surgery, high-quality training and qualification of laparoscopic skills through objective assessment has become critical. While eye-gaze and instrument motion analyses have shown promise in producing objective metrics for skill assessment in laparoscopic surgery, three areas deserve further research attention. First, most eye-gaze metrics do not account for trainee behaviors that change the visual scene or context, a limitation that can be addressed with computer vision. Second, feedforward control metrics leveraging the relationship between eye gaze and hand movements have not been investigated in laparoscopic surgery. Finally, eye-gaze metrics have not demonstrated sensitivity to trainees' skill progression, as the literature has focused on differences between experts and novices, although feedback on skill acquisition is most useful for trainees and educators. To advance eye-gaze assessment in laparoscopic surgery, this research presents a three-stage gaze-based assessment methodology that provides a standardized process for generating context-dependent gaze metrics and estimating the proficiency levels of medical trainees. The three stages are: (1) contextual scene analysis to segment surgical scenes into areas of interest, (2) computation of context-dependent gaze metrics based on eye fixations on those areas of interest, and (3) definition and estimation of skill proficiency levels with unsupervised and supervised learning, respectively. This methodology was applied to analyze 499 practice trials by nine medical trainees practicing the peg transfer task in the Fundamentals of Laparoscopic Surgery program. The application of this methodology generated five context-dependent gaze metrics and one tool-movement metric, defined three proficiency levels of the trainees, and developed a model predicting the proficiency level of a participant on a given trial with 99% accuracy. Further, two of the six metrics are entirely novel, capturing feed-forward behaviors in the surgical domain. The results also demonstrated that gaze metrics can reveal skill levels more precisely than the expert-novice distinction suggested in the literature. Thus, the metrics derived from the gaze-based assessment methodology show high sensitivity to trainee skill levels. The implications of this research include providing automated feedback to trainees on where they looked during a practice trial and what skill proficiency level they attained after each trial.
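The abstract's third stage, defining proficiency levels with unsupervised learning and then predicting them with supervised learning, can be sketched as follows. This is a minimal illustration, not the thesis implementation: the synthetic trial data, the choice of k-means and a random forest, and all parameter values are assumptions made for the example.

```python
# Hypothetical sketch of stage 3: cluster trial-level metrics into proficiency
# levels (unsupervised), then train a classifier to predict a trial's level
# (supervised). Data below is synthetic, not the 499-trial thesis dataset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for trials x 6 metrics (5 gaze + 1 tool movement),
# with three latent skill groups differing in their mean metric values.
n_per_group, n_metrics = 60, 6
X = np.vstack([
    rng.normal(loc=mu, scale=0.3, size=(n_per_group, n_metrics))
    for mu in (0.0, 1.0, 2.0)
])

# Unsupervised step: define three proficiency levels via clustering.
levels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Supervised step: predict the proficiency level of held-out trials.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, levels, test_size=0.25, random_state=0
)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Because the cluster labels come from the metrics themselves, a held-out trial's level can be predicted from its metrics alone, which is what enables automated post-trial feedback.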
dc.description.abstractgeneral: Laparoscopic surgery is a type of minimally invasive surgery that is being widely adopted. The skills required for performing laparoscopic surgeries differ from those for open surgeries; hence, it is critical to ensure that surgeons receive adequate training and assessment. Eye-gaze tracking technology has made it possible to compute metrics that can be employed for skill assessment. However, most existing metrics are based on involuntary gaze behaviors and are independent of the nature of the surgical training task being performed, so they may not be suitable for feedback during training. Metrics suitable for feedback are context-dependent metrics, which take task-based information into account. Experts tend to show look-ahead behavior while performing a task, which can be quantified using context-dependent metrics. This research presents a three-stage methodology that facilitates the computation of context-dependent and feed-forward metrics, enabling the identification of different skill levels in trainees. Applying this methodology to a dataset of nine trainees with 499 practice trials, a total of six metrics were computed and a classification model was built to predict three identified skill levels with 99% accuracy. This research is directly applicable to developing an automated system for laparoscopic training and assessment.
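To make the general abstract's notions concrete, the sketch below computes two illustrative context-dependent quantities: the fraction of gaze samples on a task-relevant area of interest (AOI), and a simple look-ahead time measuring how far gaze reaches the target AOI ahead of the tool tip. The function names, AOI bounds, sample coordinates, and 30 Hz frame rate are all hypothetical; the thesis metrics are not reproduced here.

```python
# Illustrative context-dependent gaze metrics (not the thesis implementation).
# AOI rectangles and gaze/tool trajectories below are made-up example data.

def in_aoi(point, aoi):
    """True if an (x, y) point lies inside the AOI rectangle (x0, y0, x1, y1)."""
    x, y = point
    x0, y0, x1, y1 = aoi
    return x0 <= x <= x1 and y0 <= y <= y1

def gaze_on_aoi_ratio(gaze, aoi):
    """Fraction of gaze samples falling inside the AOI."""
    return sum(in_aoi(p, aoi) for p in gaze) / len(gaze)

def look_ahead_time(gaze, tool, aoi, dt=1 / 30):
    """Seconds by which gaze enters the AOI before the tool tip does."""
    g = next((i for i, p in enumerate(gaze) if in_aoi(p, aoi)), None)
    t = next((i for i, p in enumerate(tool) if in_aoi(p, aoi)), None)
    if g is None or t is None:
        return None
    return (t - g) * dt

# Hypothetical 30 Hz samples: gaze reaches the target peg before the tool.
aoi = (100, 100, 200, 200)                       # target peg bounding box, px
gaze = [(50, 50), (120, 150), (150, 150), (160, 140)]
tool = [(40, 60), (60, 80), (90, 95), (150, 150)]

ratio = gaze_on_aoi_ratio(gaze, aoi)    # 3 of 4 samples on the AOI -> 0.75
lead = look_ahead_time(gaze, tool, aoi)  # gaze leads by 2 frames -> 2/30 s
```

A positive look-ahead time would correspond to the expert-like feed-forward behavior described above; in practice the AOI bounds would come from the contextual scene analysis stage rather than fixed coordinates.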
dc.description.degree: Master of Science
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:31815
dc.identifier.uri: http://hdl.handle.net/10919/112788
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Laparoscopic Surgery
dc.subject: Machine Learning
dc.subject: Computer Vision
dc.subject: Surgical Training
dc.subject: Human Performance
dc.subject: Eye Tracking
dc.title: Context Dependent Gaze Metrics for Evaluation of Laparoscopic Surgery Manual Skills
dc.type: Thesis
thesis.degree.discipline: Industrial and Systems Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science

Files

Original bundle (showing 1 - 1 of 1)
Name: Kulkarni_C_T_2021.pdf
Size: 1.69 MB
Format: Adobe Portable Document Format