Human Learning-Augmented Machine Learning Frameworks for Text Analytics

Date

2020-05-18

Publisher

Virginia Tech

Abstract

Artificial intelligence (AI) has made astonishing breakthroughs in recent years and achieved performance comparable to, or even better than, that of humans on many real-world tasks and applications. However, it is still far from reaching human-level intelligence in many ways. Specifically, although AI may take inspiration from neuroscience and cognitive psychology, it is dramatically different from humans in both what it learns and how it learns. Given that current AI cannot learn as effectively and efficiently as humans do, a natural solution is to analyze human learning processes and project them into AI design. This dissertation presents three studies that examined cognitive theories and established frameworks to integrate crucial human cognitive learning elements into AI algorithms to build human learning–augmented AI in the context of text analytics.

The first study examined compositionality—how information is decomposed into small pieces, which are then recomposed to generate larger pieces of information. Compositionality is considered a fundamental cognitive process and one of the best explanations for humans' quick learning abilities. Thus, integrating compositionality, which AI has not yet mastered, could potentially improve its learning performance. Focusing on text analytics, we first examined three levels of compositionality that can be captured in language. We then adopted the design science paradigm to integrate these three types of compositionality into a deep learning model, building a unified learning framework. Lastly, we extensively evaluated the design on a series of text analytics tasks and confirmed its superiority in improving AI's learning effectiveness and efficiency.
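
As a rough, generic illustration of compositional learning in text (not the architecture developed in this study), the Python sketch below composes character representations into word representations and word representations into a sentence representation. All class names, layer choices, and dimensions are assumptions made for the example.

```python
# A minimal sketch of hierarchical composition in text encoding
# (characters -> words -> sentence). Illustrative only; not the
# dissertation's actual model.
import torch
import torch.nn as nn

class HierarchicalTextEncoder(nn.Module):
    def __init__(self, n_chars=128, char_dim=32, word_dim=64, sent_dim=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # Compose characters into word representations.
        self.char_rnn = nn.GRU(char_dim, word_dim, batch_first=True)
        # Compose word representations into a sentence representation.
        self.word_rnn = nn.GRU(word_dim, sent_dim, batch_first=True)

    def forward(self, char_ids):
        # char_ids: (batch, n_words, n_chars_per_word)
        b, w, c = char_ids.shape
        chars = self.char_emb(char_ids.view(b * w, c))    # (b*w, c, char_dim)
        _, word_vecs = self.char_rnn(chars)               # (1, b*w, word_dim)
        word_vecs = word_vecs.squeeze(0).view(b, w, -1)   # (b, w, word_dim)
        _, sent_vec = self.word_rnn(word_vecs)            # (1, b, sent_dim)
        return sent_vec.squeeze(0)                        # (b, sent_dim)

# Usage: a batch of 2 sentences, each padded to 5 words of 8 characters.
encoder = HierarchicalTextEncoder()
dummy = torch.randint(0, 128, (2, 5, 8))
print(encoder(dummy).shape)  # torch.Size([2, 128])
```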

The second study focused on transfer learning, a core process in human learning. People can efficiently and effectively use knowledge learned previously to solve new problems. Although transfer learning has been extensively studied in AI research and is often a standard procedure in building machine learning models, existing techniques are not able to transfer knowledge as effectively and efficiently as humans do. To address this problem, we first drew on the theory of transfer learning to analyze the human transfer learning process and identify the key elements that elude AI. Then, following the design science paradigm, we proposed a novel transfer learning framework that explicitly captures these cognitive elements. Finally, we assessed the design artifact's capability to improve transfer learning performance and validated that our proposed framework outperforms state-of-the-art approaches on a broad set of text analytics tasks.
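
As a generic illustration of the standard transfer learning recipe mentioned above (not the framework proposed in this study), the sketch below reuses a pretrained text encoder and trains only a new task-specific head on the target task. The encoder and all dimensions are hypothetical placeholders.

```python
# A minimal sketch of transfer learning for text classification:
# keep the source-task knowledge in a pretrained encoder and learn
# only a lightweight head for the new target task. Illustrative only.
import torch
import torch.nn as nn

def build_transfer_model(pretrained_encoder: nn.Module, hidden_dim: int,
                         n_classes: int, freeze_encoder: bool = True) -> nn.Module:
    if freeze_encoder:
        # Freeze the previously learned (source-task) parameters ...
        for p in pretrained_encoder.parameters():
            p.requires_grad = False
    # ... and attach a new classification head for the target task.
    return nn.Sequential(pretrained_encoder, nn.Linear(hidden_dim, n_classes))

# Usage with a toy "pretrained" encoder standing in for a real one.
encoder = nn.Sequential(nn.Embedding(1000, 128), nn.Flatten(), nn.Linear(128 * 16, 128))
model = build_transfer_model(encoder, hidden_dim=128, n_classes=3)
logits = model(torch.randint(0, 1000, (4, 16)))  # batch of 4 token sequences of length 16
print(logits.shape)  # torch.Size([4, 3])
```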

The two studies above examined knowledge composition and knowledge transfer, while the third study addressed knowledge itself, focusing on knowledge structure, retrieval, and utilization. We found that, despite the great progress achieved by current knowledge-aware AI algorithms, they do not handle complex knowledge in a way consistent with how humans manage it. Grounded in schema theory, we proposed a new design framework that enables AI-based text analytics algorithms to retrieve and utilize knowledge in a more human-like way. We confirmed that our framework outperformed current knowledge-based algorithms by large margins with strong robustness. In addition, we conducted a fine-grained evaluation of the efficacy of each key design element.
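
As a generic illustration of knowledge retrieval and utilization (not the schema-theory-based design proposed in this study), the sketch below retrieves the most relevant entries from an external knowledge memory and fuses them with an input representation via attention. The memory, shapes, and fusion rule are assumptions for the example.

```python
# A minimal sketch of knowledge retrieval and utilization: retrieve the
# top-k most similar knowledge entries and fuse them with the query.
# Illustrative only; not the dissertation's actual framework.
import torch
import torch.nn.functional as F

def retrieve_and_fuse(query: torch.Tensor, knowledge: torch.Tensor, top_k: int = 3):
    # query: (dim,), knowledge: (n_entries, dim)
    sims = F.cosine_similarity(query.unsqueeze(0), knowledge, dim=-1)  # (n_entries,)
    scores, idx = sims.topk(top_k)               # retrieve the top-k entries
    weights = F.softmax(scores, dim=-1)          # attention over retrieved entries
    retrieved = (weights.unsqueeze(-1) * knowledge[idx]).sum(dim=0)
    return query + retrieved                     # fuse retrieved knowledge into the query

# Usage with random stand-ins for an encoded sentence and a knowledge base.
query = torch.randn(64)
knowledge_base = torch.randn(100, 64)
print(retrieve_and_fuse(query, knowledge_base).shape)  # torch.Size([64])
```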

Keywords

Artificial intelligence, text analytics, design science, human learning, cognitive theories
