Browsing by Author "Zhu, Kecheng"
- Active Learning for Microarray based Leukemia Classification
  Zhu, Kecheng (ACM, 2021-11-12)
  In machine learning, data labeling is often assumed to be easy and cheap. However, in real-world cases, especially in the clinical field, labeled data sets are rare and expensive to obtain. Active learning is an approach that queries the most informative data for training, which offers an alternative way to deal with this concern. The sampling method is one of the key parts of active learning because it minimizes the training cost of the classifier. Different query methods can produce considerably different models, and these differences can lead to significant differences in training cost and final accuracy. The approaches used in this experiment are uncertainty sampling, diversity sampling, and query by committee. In the experiment, active learning is applied to the microarray data with improved results. The classification of two types of leukemia (acute myeloid leukemia and acute lymphoblastic leukemia) shows a boost in accuracy with the same number of samples compared to passive machine learning. The experiments lead to the conclusion that, with a small number of randomly drawn samples in the field of leukemia classification, active learning produces a more accurate model. Additionally, active learning with query by committee finds the most informative samples with the fewest trials.
  (See the uncertainty-sampling sketch after this listing.)
- Object Detection
  Zhu, Kecheng; Gager, Zachary; Neal, Shelby; Li, Jiangyue; Peng, You (Virginia Tech, 2022-05-09)
  Electronic theses and dissertations (ETDs) contain valuable knowledge that can be useful in a wide range of research areas. To effectively utilize the knowledge contained in ETDs, the data first needs to be parsed and stored in an XML document. However, since most of the ETDs available on the web are presented as PDF, parsing them is a challenge, yet necessary to make their data useful for downstream tasks such as question answering, figure search, table search, and summarization. Information search and extraction require contextual information, but such semantic information is hidden in PDF documents. In contrast, XML can explicitly share semantic information: the structure within XML documents enforces semantic continuity within the tag elements. Accordingly, knowledge graphs can be more easily built from XML, rather than PDF, representations. The goal of this project was to extract different elements of scholarly documents, such as metadata (title, authors, year), chapter headings and subheadings, equations, figures (and captions), tables (and captions), and paragraphs, and then package them into an XML document. Subsequently, a pipeline responsible for the conversion and a dataset to support the object detection step were developed. Over the semester, 200 ETDs, both born-digital and scanned, were annotated using an online tool called RoboFlow. A model based on Detectron2, Facebook's open-source object detection framework, was trained with the created dataset. In addition, a pipeline that utilizes the model was built; it converts an ETD in PDF format into an XML document, which can then be used for future downstream tasks, and into HTML for visualization. A dataset consisting of 200 annotated ETDs and a working pipeline were delivered to the client. From the project, the Object Detection Team learned a number of libraries related to the task, built a sense of the importance of version control, and understood how to split a large task into smaller, more approachable pieces.
  (See the PDF-to-XML pipeline sketch after this listing.)
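The first entry above names uncertainty sampling, diversity sampling, and query by committee as its query strategies. The block below is a minimal sketch of one of them, least-confidence uncertainty sampling, assuming scikit-learn and a synthetic stand-in for the two-class microarray matrix; the classifier, the seed set, and the query budget are illustrative assumptions rather than details taken from the paper.

```python
# Minimal uncertainty-sampling loop (illustrative only; the dataset, model,
# and query budget below are assumptions, not the paper's actual setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a two-class microarray matrix (samples x genes).
X, y = make_classification(n_samples=72, n_features=500, n_informative=50,
                           n_classes=2, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

# Seed the labeled set with one example of each class; the rest is the unlabeled pool.
labeled = [int(np.where(y_pool == c)[0][0]) for c in (0, 1)]
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(10):                              # query 10 samples, one per round
    model.fit(X_pool[labeled], y_pool[labeled])
    proba = model.predict_proba(X_pool[unlabeled])
    # Least-confidence criterion: query the sample whose top-class
    # probability is lowest, i.e. the one the model is least sure about.
    query = unlabeled[int(np.argmin(proba.max(axis=1)))]
    labeled.append(query)
    unlabeled.remove(query)

model.fit(X_pool[labeled], y_pool[labeled])      # refit on the final labeled set
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Query by committee would replace the least-confidence line with a disagreement measure (for example, vote entropy) computed over several classifiers trained on the same labeled pool.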
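The second entry describes running a Detectron2 model over ETD pages and packaging the detections into XML. The block below is a sketch of what that inference-and-serialization step could look like, assuming pdf2image for rasterizing pages; the class names, config file, weight path, and output layout are placeholders, not the team's actual pipeline.

```python
# Sketch of a PDF -> object detection -> XML step with Detectron2.
# Class names, config, weights, and file paths are placeholders (assumptions).
import numpy as np
import xml.etree.ElementTree as ET
from pdf2image import convert_from_path
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

CLASS_NAMES = ["title", "author", "chapter_heading", "paragraph", "equation",
               "figure", "figure_caption", "table", "table_caption"]  # assumed label set

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = len(CLASS_NAMES)
cfg.MODEL.WEIGHTS = "output/model_final.pth"       # fine-tuned weights (placeholder path)
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.DEVICE = "cpu"                           # switch to "cuda" if a GPU is available
predictor = DefaultPredictor(cfg)

root = ET.Element("etd")
for page_no, page in enumerate(convert_from_path("thesis.pdf", dpi=200), start=1):
    image = np.asarray(page)[:, :, ::-1].copy()    # PIL RGB -> BGR array for Detectron2
    instances = predictor(image)["instances"].to("cpu")
    page_el = ET.SubElement(root, "page", number=str(page_no))
    for box, cls, score in zip(instances.pred_boxes.tensor.numpy(),
                               instances.pred_classes.numpy(),
                               instances.scores.numpy()):
        x0, y0, x1, y1 = [str(round(float(v), 1)) for v in box]
        ET.SubElement(page_el, CLASS_NAMES[int(cls)],
                      x0=x0, y0=y0, x1=x1, y1=y1, score=f"{float(score):.2f}")

ET.ElementTree(root).write("thesis.xml", encoding="utf-8", xml_declaration=True)
```

A downstream step would then crop each detected region (or run OCR on scanned pages) to fill these elements with text before rendering HTML.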