 

Multiscale Quantitative Analytics of Human Visual Searching Tasks

dc.contributor.author: Chen, Xiaoyu
dc.contributor.committeechair: Jin, Ran
dc.contributor.committeemember: Bowman, Douglas A.
dc.contributor.committeemember: Gabbard, Joseph L.
dc.contributor.committeemember: Lau, Nathan
dc.contributor.department: Industrial and Systems Engineering
dc.date.accessioned: 2021-07-17T08:00:19Z
dc.date.available: 2021-07-17T08:00:19Z
dc.date.issued: 2021-07-16
dc.description.abstract: Benefiting from recent advancements in artificial intelligence (AI) methods, industrial automation has replaced human labor in many tasks. However, humans still play the central role when visual searching tasks are heavily involved in manufacturing decision-making. For example, highly customized products fabricated by additive manufacturing processes pose significant challenges to AI methods in terms of performance and generalizability. As a result, human visual searching tasks remain widely involved in manufacturing contexts (e.g., human resource management, quality inspection) and rely on various visualization techniques. Quantitatively modeling visual searching behaviors and performance will not only contribute to the understanding of the decision-making process in a visualization system, but also advance AI methods by incubating them with human expertise. In general, visual searching can be quantitatively understood at multiple scales: 1) the population scale, which treats individuals equally and models the general relationship between individuals' physiological signals and visual searching decisions; 2) the individual scale, which models the relationship between individual differences and visual searching decisions; and 3) the attention scale, which models the relationship between individuals' attention during visual searching and visual searching decisions. Advancements in wearable sensing techniques enable such multiscale quantitative analytics of human visual searching performance. For example, by equipping users with an electroencephalogram (EEG) device, an eye tracker, and a logging system, the multiscale quantitative relationships among human physiological signals, behaviors, and performance can be readily established. This dissertation quantifies the visual searching process at multiple scales by proposing (1) a data-fusion method to model the quantitative relationship between physiological signals and humans' perceived task complexities (population scale, Chapter 2); (2) a personalized recommender system-based sensor analytics approach to quantify and decompose individual differences into explicit and implicit differences (individual scale, Chapter 3); and (3) a visual language processing modeling framework to identify visual cues (i.e., from fixations) and correlate them with humans' quality inspection decisions in visual searching tasks (attention scale, Chapter 4). Finally, Chapter 5 summarizes the contributions and proposes future research directions. The proposed methodologies can be readily extended to other applications and research studies to support multiscale quantitative analytics. In addition, the quantitative understanding of human visual searching behaviors and performance can generate insights to further incubate AI methods with human expertise. Merits of the proposed methodologies are demonstrated in a visualization evaluation user study and a cognitive hacking user study. Detailed notes to guide implementation and deployment are provided for practitioners and researchers in each chapter.
dc.description.abstractgeneral: Existing industrial automation is limited by the performance and generalizability of artificial intelligence (AI) methods. Therefore, various human visual searching tasks are still widely involved in manufacturing contexts and rely on many visualization techniques, e.g., searching for specific information and making decisions based on sequentially gathered information. Quantitatively modeling visual searching performance will not only contribute to the understanding of human behaviors in a visualization system, but also advance AI methods by incubating them with human expertise. In this dissertation, visual searching performance is characterized at multiple scales: 1) the population scale, to understand visual searching performance regardless of individual differences; 2) the individual scale, to model performance by quantifying individual differences; and 3) the attention scale, to quantify the human visual searching-based decision-making process. Thanks to advancements in wearable sensing techniques, this dissertation quantifies the visual searching process at multiple scales by proposing (1) a data-fusion method to model the quantitative relationship between physiological signals and humans' perceived task complexities (population scale, Chapter 2); (2) a recommender system to suggest the best visualization design to the right person at the right time via sensor analytics (individual scale, Chapter 3); and (3) a visual language processing modeling framework to model humans' quality inspection decisions (attention scale, Chapter 4). Finally, Chapter 5 summarizes the contributions and proposes future research directions. Merits of the proposed methodologies are demonstrated in a visualization evaluation user study and a cognitive hacking user study. The proposed methodologies can be readily extended to other applications and research studies to support multiscale quantitative analytics.
dc.description.degree: Doctor of Philosophy
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:31941
dc.identifier.uri: http://hdl.handle.net/10919/104200
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Computational attention
dc.subject: Machine learning
dc.subject: Recommender system
dc.subject: Visual attention
dc.subject: Wearable sensing
dc.title: Multiscale Quantitative Analytics of Human Visual Searching Tasks
dc.type: Dissertation
thesis.degree.discipline: Industrial and Systems Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy
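
The population-scale analysis described in the abstract (Chapter 2) models the relationship between fused physiological signals and perceived task complexity. The following is a minimal illustrative sketch of that idea, assuming simulated EEG band-power and eye-tracking summary features and a single pooled regression model; all feature names and data are invented and this is not the dissertation's implementation.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials = 120

# Simulated per-trial EEG band powers (theta, alpha, beta) and eye-tracking
# summaries (mean fixation duration, saccade count, pupil diameter).
eeg_features = rng.normal(size=(n_trials, 3))
eye_features = rng.normal(size=(n_trials, 3))

# Simple feature-level fusion: concatenate both modalities into one vector.
X = np.hstack([eeg_features, eye_features])

# Simulated perceived task complexity ratings (e.g., a subjective 1-10 score).
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=n_trials)

# One pooled model for all participants: at the population scale, individuals
# are treated equally, so no subject-specific terms enter the model.
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")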
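The attention-scale analysis (Chapter 4) correlates visual cues identified from fixations with quality inspection decisions. A minimal sketch of one way to frame that as visual language processing, assuming each trial's fixated regions are encoded as a space-separated "sentence" and classified with a bag-of-visual-words model; the region labels, trials, and classifier choice are invented for illustration only.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each trial: the labels of regions fixated in order, plus the inspector's
# accept/reject decision for that part.
fixation_sentences = [
    "edge surface edge corner",
    "surface surface pore pore edge",
    "corner edge surface",
    "pore crack pore surface",
    "edge corner surface surface",
    "crack crack pore edge",
]
decisions = ["accept", "reject", "accept", "reject", "accept", "reject"]

# Bag-of-visual-words over fixation unigrams and bigrams, then a linear classifier
# linking the fixation "sentence" to the inspection decision.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(fixation_sentences, decisions)
print(model.predict(["surface pore crack"]))  # likely "reject" for defect-related cues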

Files

Original bundle
Name: Chen_X_D_2021.pdf
Size: 7.07 MB
Format: Adobe Portable Document Format

Name: Chen_X_D_2021_support_4.pdf
Size: 99.61 KB
Format: Adobe Portable Document Format
Description: Supporting documents

Name: Chen_X_D_2021_support_3.pdf
Size: 106.93 KB
Format: Adobe Portable Document Format
Description: Supporting documents