Browsing by Author "Bian, Yali"
- CS5604: Information and Storage Retrieval Fall 2017 - FE (Front-End Team)
  Chon, Jieun; Wang, Haitao; Bian, Yali; Niu, Shuo (Virginia Tech, 2017-12-24)
  Social media and Web data are becoming important sources of information for researchers to monitor and study global events. GETAR, led by Dr. Edward Fox, is a project aiming to collect, organize, browse, visualize, study, analyze, summarize, and explore content and sources related to biodiversity, climate change, crises, disasters, elections, energy policy, environmental policy/planning, geospatial information, green engineering, human rights, inequality, migrations, nuclear power, population growth, resiliency, shootings, sustainability, violence, etc. This report introduces the work of the Front-End (FE) team in analyzing users' requirements and building user interfaces for people to explore tweet/webpage data. The work of the FE team relies heavily on the results from the other teams. Our duties include presenting the collected tweets/webpages, visualizing the clusters and topics, showing the indexed and clustered search results, and, last but not least, allowing users to perform customized queries and exploration. Therefore, the team needs to consider how the other teams collect and manage the data, as well as how people utilize the information to gain insights from the data repository. Throughout Fall 2017, our team aimed to bridge the data archive and users' needs, focusing on providing various user interfaces for tweet/webpage exploration and analysis. Overall, two main user interfaces were designed and implemented over the semester: (1) a visualization-based analytical tool with which people can create categories by searching and interacting with filtering tools, presented in visualizations such as bar charts, tag clouds, and node-link graphs; and (2) a geo-based interface for location-based information, implemented with GeoBlacklight, enabling users to view tweets/webpages on maps.
  This report documents the background, plans, schedule, design, implementation, software installation, and other related useful information. We used Solr and a triple-store to provide data, and the "getar-cs5604f17-final_shard1_replica1" collection was used in the final testing and delivery. An overview of the team's work and a detailed design and implementation are both provided. We highlight the visualization-based interface and the location-based interface, as they provide visual tools for people to better understand the data collected by all the teams. We explain how we extracted users' requirements, how user needs are interpreted in light of the related literature, and how that led to the design of the visualization and geo-interfaces. A detailed installation manual is also provided, intended to help software engineers who continue working on GETAR to reuse our work.
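  As a hedged illustration of how a front end like the one described above might retrieve documents, the sketch below builds a Solr /select query URL against the collection named in the report. The host, port, and field names (`text`) are assumptions for illustration only; the actual GETAR schema and deployment may differ.

  ```python
  from urllib.parse import urlencode

  # Assumed local Solr endpoint; the real deployment details are not in the report.
  SOLR_BASE = "http://localhost:8983/solr"
  COLLECTION = "getar-cs5604f17-final_shard1_replica1"  # collection named in the report

  def build_query_url(keyword, rows=10):
      """Return a Solr /select URL searching an assumed full-text field for a keyword."""
      params = urlencode({
          "q": f"text:{keyword}",  # "text" is an assumed field name
          "rows": rows,
          "wt": "json",
      })
      return f"{SOLR_BASE}/{COLLECTION}/select?{params}"

  print(build_query_url("hurricane"))
  ```

  A front end would issue such a request and render the returned documents in the visualizations described above.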
- Human-AI Sensemaking with Semantic Interaction and Deep Learning
  Bian, Yali (Virginia Tech, 2022-03-07)
  Human-AI interaction can improve overall performance, exceeding the performance that either humans or AI could achieve separately, thus producing a whole greater than the sum of the parts. Visual analytics enables collaboration between humans and AI through interactive visual interfaces. Semantic interaction is a design methodology to enhance visual analytics systems for sensemaking tasks. It is widely applied for sensemaking in high-stakes domains such as intelligence analysis and academic research. However, existing semantic interaction systems support collaboration between humans and traditional machine learning models only; they do not apply state-of-the-art deep learning techniques. The contribution of this work is the effective integration of deep neural networks into visual analytics systems with semantic interaction. More specifically, I explore how to redesign the semantic interaction pipeline to enable collaboration between humans and deep learning models for sensemaking tasks. First, I validate that semantic interaction systems with pre-trained deep learning models better support sensemaking than existing semantic interaction systems with traditional machine learning. Second, I integrate interactive deep learning into the semantic interaction pipeline to enhance its inference ability in capturing analysts' precise intents, thereby promoting sensemaking. Third, I add semantic explanation to the pipeline to interpret the interactively steered deep learning model. With a clear understanding of the deep learning model, analysts can make better decisions. Finally, I present a neural design of the semantic interaction pipeline to further boost collaboration between humans and deep learning for sensemaking.
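  A minimal sketch of the semantic-interaction idea underlying the abstract above: when an analyst drags two documents closer together, the system infers which feature dimensions matter to the analyst and re-weights its distance metric accordingly. The feature vectors and the specific update rule here are illustrative assumptions, not the dissertation's actual pipeline.

  ```python
  def weighted_distance(a, b, weights):
      """Weighted squared Euclidean distance between two feature vectors."""
      return sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b))

  def update_weights(a, b, weights, rate=0.5):
      """After the analyst drags documents a and b together, shift weight
      toward dimensions where they agree, then renormalize to sum to 1."""
      similarity = [1.0 / (1.0 + abs(x - y)) for x, y in zip(a, b)]
      raw = [w + rate * s for w, s in zip(weights, similarity)]
      total = sum(raw)
      return [r / total for r in raw]

  # Hypothetical documents: they agree on dimension 0 but differ on dimension 1.
  doc_a = [0.9, 0.1]
  doc_b = [0.9, 0.8]
  w0 = [0.5, 0.5]
  w1 = update_weights(doc_a, doc_b, w0)

  # The agreeing dimension gains weight, pulling the pair closer in the new metric.
  assert w1[0] > w1[1]
  assert weighted_distance(doc_a, doc_b, w1) < weighted_distance(doc_a, doc_b, w0)
  ```

  In the dissertation's setting, a deep learning model would replace the hand-built feature vectors, and the interaction would steer the model itself rather than a simple weight vector.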