Moving Toward Intelligence: A Hybrid Neural Computing Architecture for Machine Intelligence Applications
dc.contributor.author | Bai, Kang Jun | en |
dc.contributor.committeechair | Yi, Yang | en |
dc.contributor.committeemember | Zhou, Wei | en |
dc.contributor.committeemember | Ha, Dong S. | en |
dc.contributor.committeemember | Davalos, Rafael V. | en |
dc.contributor.committeemember | Abbott, A. Lynn | en |
dc.contributor.department | Electrical Engineering | en |
dc.date.accessioned | 2021-06-09T08:01:12Z | en |
dc.date.available | 2021-06-09T08:01:12Z | en |
dc.date.issued | 2021-06-08 | en |
dc.description.abstract | Rapid advances in machine learning have made information analysis more efficient than ever before. However, extracting valuable information from trillions of bytes of data for learning and decision-making typically requires general-purpose computing systems or cloud infrastructures to train large-scale neural networks, consuming a colossal amount of resources while exposing significant security issues. Among potential approaches, the neuromorphic architecture, which is not only amenable to low-cost implementation but can also be deployed with an in-memory computing strategy, has been recognized as an important method for accelerating machine intelligence applications. In this dissertation, theoretical and practical properties of a hybrid neural computing architecture are introduced; it utilizes a dynamic reservoir with short-term memory to enable historical learning and the potential to classify non-separable functions. The hybrid neural computing architecture integrates both spatial and temporal processing structures, sidestepping the limitations introduced by the vanishing gradient. Specifically, this is made possible through four critical features: (i) a feature extractor built upon the in-memory computing strategy, (ii) a high-dimensional mapping with the Mackey-Glass neural activation, (iii) a delay-dynamic system with historical learning capability, and (iv) a unique learning mechanism that updates only the readout weights. To support the integration of the neuromorphic architecture and deep learning strategies, the first generation of the delay-feedback reservoir network was successfully fabricated in 2017, and the spatial-temporal hybrid neural network with an improved delay-feedback reservoir network was successfully fabricated in 2020. To demonstrate effectiveness and performance across diverse machine intelligence applications, the introduced network structures are evaluated through (i) time series prediction, (ii) image classification, (iii) speech recognition, (iv) modulation symbol detection, (v) radio fingerprint identification, and (vi) clinical disease identification. | en |
dc.description.abstractgeneral | Deep learning strategies are the cutting edge of artificial intelligence, in which artificial neural networks are trained to extract key features or find similarities in raw sensory information. This is made possible through multiple processing layers with a colossal number of neurons, in a manner similar to the human brain. Deep learning strategies running on von Neumann computers are deployed worldwide. However, in today's data-driven society, general-purpose computing systems and cloud infrastructures can no longer offer a timely response, while also exposing significant security issues. With the introduction of the neuromorphic architecture, application-specific integrated circuit chips have paved the way for machine intelligence applications in recent years. The major contributions of this dissertation include designing and fabricating a new class of hybrid neural computing architecture and applying various deep learning strategies to diverse machine intelligence applications. The resulting hybrid neural computing architecture offers an alternative solution to accelerate the neural computations required for sophisticated machine intelligence applications with a simple system-level design, thereby opening the door to low-power system-on-chip design for future intelligent computing and providing prominent design solutions and performance improvements for Internet of Things applications. | en |
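To make the abstract's description concrete, below is a minimal sketch of a discrete-time, single-node delay-feedback reservoir with a Mackey-Glass-type activation and a readout trained by ridge regression, reflecting the architecture's key property that only the readout weights are updated. The activation form and all names and parameters (N_VIRTUAL, ETA, GAMMA, P, RIDGE, the input mask) are illustrative assumptions, not values or code from the dissertation or its fabricated chips.

# Sketch of a delay-feedback reservoir with Mackey-Glass-type activation.
# All hyperparameters below are illustrative assumptions.
import numpy as np

N_VIRTUAL = 50                   # virtual nodes along the delay line
ETA, GAMMA, P = 0.4, 0.05, 1.0   # feedback gain, input scaling, exponent
RIDGE = 1e-6                     # Tikhonov regularization for the readout

rng = np.random.default_rng(0)
mask = rng.choice([-0.1, 0.1], size=N_VIRTUAL)  # fixed random input mask

def mg(x):
    # Mackey-Glass-type activation: eta * x / (1 + |x|^p)
    return ETA * x / (1.0 + np.abs(x) ** P)

def run_reservoir(u):
    # Drive the delay loop with a scalar input series u; collect states.
    state = np.zeros(N_VIRTUAL)
    states = np.empty((len(u), N_VIRTUAL))
    for t, u_t in enumerate(u):
        # each virtual node combines its delayed value with the masked input
        state = mg(state + GAMMA * mask * u_t)
        states[t] = state
    return states

def train_readout(states, target):
    # Only the linear readout is trained (ridge regression);
    # the reservoir itself stays fixed, as in feature (iv) of the abstract.
    X = np.hstack([states, np.ones((len(states), 1))])  # bias column
    A = X.T @ X + RIDGE * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ target)

# Toy usage: one-step-ahead prediction of a sine wave.
u = np.sin(0.1 * np.arange(1000))
S = run_reservoir(u[:-1])
w = train_readout(S, u[1:])
pred = np.hstack([S, np.ones((len(S), 1))]) @ w
print("train MSE:", np.mean((pred - u[1:]) ** 2))

Because the reservoir weights are never trained, there is no backpropagation through time, which is how this class of network sidesteps the vanishing-gradient limitation mentioned in the abstract.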
dc.description.degree | Doctor of Philosophy | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:31005 | en |
dc.identifier.uri | http://hdl.handle.net/10919/103711 | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Neuromorphic Architecture | en |
dc.subject | Machine Intelligence | en |
dc.subject | Deep learning (Machine learning) | en |
dc.subject | Reservoir Computing Network | en |
dc.subject | Delay-Feedback Reservoir Network | en |
dc.subject | Hybrid Neural Network | en |
dc.subject | Internet of Things | en |
dc.title | Moving Toward Intelligence: A Hybrid Neural Computing Architecture for Machine Intelligence Applications | en |
dc.type | Dissertation | en |
thesis.degree.discipline | Electrical Engineering | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | doctoral | en |
thesis.degree.name | Doctor of Philosophy | en |